Retrieve string values with list - python

I am having some problems trying to manipulate some strings here. I am scraping some data from a website and I am facing 2 challenges:
I am scraping unnecessary data as the website I target has redundant class naming. My goal is to isolate this data and delete it so I can keep only the data I am interested in.
With the data kept, I need to split the string in order to store some information into specific variables.
So initially I was planning to use a simple split() function, store each new string in a list, and then play with it to keep the parts that I want. Unfortunately, every time I do this, I end up with 3 separate lists that I cannot manipulate/split.
Here is the code:
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome('\\Users\\rapha\\Desktop\\10Milz\\4. Python\\Python final\\Scrape\\chromedriver.exe')
driver.get("https://www.atptour.com/en/scores/2020/7851/MS011/match-stats")
content = driver.page_source
soup = BeautifulSoup(content, "html.parser")

for infos in soup.find_all('h3', class_='section-title'):
    title = infos.get_text()
    title = ' '.join(title.split())
    title_list = title.split(" | ")
    print(title_list)
Here is the "raw data" retrieved:
Player Results
Tournament Results
Salvatore Caruso VS. Brandon Nakashima | Indian Wells 2020
And here is what I would like to achieve:
Variable_1 = Salvatore Caruso
Variable_2 = Brandon Nakashima
Variable_3 = Indian Wells
Variable_4 = 2020
Could you please let me know how to proceed here?

How about this?
It's not so pretty, but it will work as long as there is always a VS. and a | separating the names, and the year is always 4 digits.
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome('/home/lewis/Desktop/chromedriver')
driver.get("https://www.atptour.com/en/scores/2020/7851/MS011/match-stats")
content = driver.page_source
soup = BeautifulSoup(content, "html.parser")

text = soup.find_all('h3', class_='section-title')[2].get_text().replace("\n", "")

# collapse runs of spaces down to single spaces
while text.find("  ") > -1:
    text = text.replace("  ", " ")
text = text.strip()

# split by the two separators
split = [st.split("|") for st in text.split("VS.")]
# flatten the nested lists
flat_list = [item for sublist in split for item in sublist]
# extract the year from the end of the last item
flat_list.append(flat_list[-1][-4:])
# remove the year from the 3rd item
flat_list[2] = flat_list[2][:-4]
# strip any leading or trailing white space
final_list = [x.strip() for x in flat_list]
print(final_list)
Output
['Salvatore Caruso', 'Brandon Nakashima', 'Indian Wells', '2020']
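To get the four variables from the question, you can then unpack the list directly (a small addition on my part, assuming final_list always ends up with exactly four items):

# unpack the cleaned list into the four variables from the question
variable_1, variable_2, variable_3, variable_4 = final_list
print(variable_1)  # Salvatore Caruso
print(variable_2)  # Brandon Nakashima
print(variable_3)  # Indian Wells
print(variable_4)  # 2020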

Related

How can I edit web scraped text data using python?

Trying to build my first web scraper to print out how the stock market is doing on Yahoo Finance. I have found out how to isolate the information I want, but it comes back in a sloppy format. How can I manipulate this data to present it in a cleaner way?
import requests
from bs4 import BeautifulSoup
#Import your website here
html_text = requests.get('https://finance.yahoo.com/').text
soup = BeautifulSoup(html_text, 'lxml')
#Find the part of the webpage where your information is in
sp_market = soup.find('h3', class_ = 'Maw(160px)').text
print(sp_market)
The return here is: S&P 5004,587.18+65.64(+1.45%)
I want to grab these elements such as the labels and percentages and isolate them so I can print them in a way I want. Anyone know how? Thanks so much!
edit:
((S&P 5004,587.18+65.64(+1.45%)))
For simple splitting you could use the built-in .split(separator) method (e.g. first split by 'x', then by 'y', then by 'z', with x, y, z being separators). Since this is not efficient, and if you have somewhat more complex patterns that look the same way for different elements (here: stocks), take a look at Python's re module.
string = "Stock +45%"
pattern = '[a-z]+[0-9][0-9]'
Then consider using a function like findall or search.
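A minimal sketch of that regex route on the string from this question (the exact pattern is my assumption about the format, not part of the answer above):

import re

raw = 'S&P 500\n4,587.18+65.64(+1.45%)'
# label up to the newline, then price, signed change, signed percent
m = re.match(r'(.+)\n([\d,.]+)([+-][\d.]+)\(([+-][\d.]+)%\)', raw)
if m:
    label, total, change, percent = m.groups()
    print(label, total, change, percent)  # S&P 500 4,587.18 +65.64 +1.45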
I assume that the format is always S&P 500\n[number][+/-][number]([+/-][number]%).
If that is the case, we could do the following.
import re

# [your existing code]
# e.g.
# sp_market = 'S&P 500\n4,587.18+65.64(+1.45%)'

label, line2 = sp_market.split('\n')
pm = re.findall(r"[+-]", line2)
total, change, percent, _ = re.split(r"[\+\-\(\)%]+", line2)
total = float(''.join(total.split(',')))
change = float(change)
if pm[0] == '-':
    change = -change
percent = float(percent)
if pm[1] == '-':
    percent = -percent
print(label, total, change, percent)
# S&P 500 4587.18 65.64 1.45
Not sure, since the question does not provide an expected result, but you can "isolate" the information with stripped_strings.
This will give you a list of "isolated" values you can process:
list(soup.find('h3', class_ = 'Maw(160px)').stripped_strings)
#Output
['S&P 500', '4,587.18', '+65.64', '(+1.45%)']
For example, stripping the characters "()%":
[x.strip('()%') for x in soup.find('h3', class_='Maw(160px)').stripped_strings]
#Output
['S&P 500', '4,587.18', '+65.64', '+1.45']
The simplest way to print the data in a less sloppy way is to join() the values with whitespace:
' '.join([x.strip('()%') for x in soup.find('h3', class_='Maw(160px)').stripped_strings])
#Output
S&P 500 4,587.18 +65.64 +1.45
You can also create a dict() and print the key/value pairs:
for k, v in dict(zip(['Symbol','Last Price','Change','% Change'], [x.strip('()%') for x in soup.find('h3', class_='Maw(160px)').stripped_strings])).items():
    print(f'{k}: {v}')
#Output
Symbol: S&P 500
Last Price: 4,587.18
Change: +65.64
% Change: +1.45

Find Location of All Numbers with a Comma

I have been scraping some HTML pages with Beautiful Soup, trying to extract some updated financial data. I only care about numbers that have a comma, e.g. 100,000 or 12,000,000, but not 450 for example. The goal is to find the location of the comma-separated numbers within a string; then I need to extract the entire sentence they are in.
I moved the entire scrape to a string list and within that list I want to extract all numbers that have a comma.
import requests
from bs4 import BeautifulSoup

url = 'https://www.sec.gov/Archives/edgar/data/354950/000035495020000024/hd-2020proxystatement.htm'
r = requests.get(url)
soup = BeautifulSoup(r.content)
text = soup.find_all(text=True)

strings = []
for i in range(len(text)):
    text_s = str(text[i])
    strings.append(text_s)
I thought about the following re code, but I am not sure if it will extract all instances, i.e. within the list there may be multiple instances of numbers separated by commas.
number = re.sub('[^>0-9,]', "", text)
Any thoughts would be a huge help! Thank you
You can use:
from bs4 import BeautifulSoup
import requests, re

url = 'https://www.sec.gov/Archives/edgar/data/354950/000035495020000024/hd-2020proxystatement.htm'
soup = BeautifulSoup(requests.get(url).text, "html5lib")

for el in soup.find_all(True):  # loop over every element in the page
    if re.search(r"(?=\d+,\d+).*", el.text):
        print(el.text)
        # print("END OF ELEMENT\n")  # debug only
If you simply want to check if a number has a comma or not, and you want to extract it if it does, then you could try the following.
new = []
for i in text:
    if ',' in i:
        new.append(i)
This will append all the elements in the 'text' collection that contain a comma, even if the exact same element is repeated multiple times.
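If you want the comma-separated numbers themselves, and every instance of them, here is a short sketch with re.findall; the pattern assumes well-formed thousands grouping, which is my assumption rather than something stated in the question:

import re

# matches numbers with at least one comma-grouped block, e.g. 100,000 or 12,000,000, but not 450
pattern = re.compile(r"\d{1,3}(?:,\d{3})+")

for s in strings:  # the list built in the question's loop
    numbers = pattern.findall(s)
    if numbers:
        print(numbers, '->', s.strip())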

Scraping lists of items from Wikipedia

I need to get all the items from this page:
https://it.wikipedia.org/wiki/Categoria:Periodici_italiani_in_lingua_italiana
from the symbol " to the letter Z.
Then:
"
"900", Cahiers d'Italie et d'Europe
A
Abitare
Aerei
Aeronautica & Difesa
Airone (periodico)
Alp (periodico)
Alto Adige (quotidiano)
Altreconomia
....
In order to do this, I have tried using the following code:
import requests
from bs4 import BeautifulSoup as bs

res = requests.get("https://it.wikipedia.org/wiki/Categoria:Periodici_italiani_in_lingua_italiana")
soup = bs(res.text, "html.parser")

url_list = []
links = soup.find_all('a')
for link in links:
    url = link.get("href", "")
    url_list.append(url)

lists_A = []
for url in url_list:
    lists_A.append(url)
print(lists_A)
However, this code collects more information than I need.
In particular, the last item that I should collect would be La Zanzara (and ideally none of the items should have a word in brackets, i.e. they should not contain (rivista), (periodico), (settimanale), and so on, but just the title; e.g. Jack (periodico) should be just Jack).
Could you give me any advice on how to get this information? Thanks
This will help you to filter out some of the unwanted urls (not all though). Basically everything before "Corriere della Sera", which I'm assuming should be the first expected URL.
links = [a.get('href') for a in soup.find_all('a', {'title': True, 'href': re.compile('/wiki/(.*)'), 'accesskey': False})]
You can safely assume that all the magazine URLs are ordered at this point, and since you know that "La Zanzara" should be the last expected URL, you can get the position of that particular string in your new list and slice up to that index + 1:
links.index('/wiki/La_zanzara_(periodico)')
Out[20]: 144
links = links[:145]
As for removing ('periodico') and other data cleaning, you need to inspect your data and figure out what it is that you want to remove.
Write a simple function like this maybe:
def clean(string):
    to_remove = ['_(periodico)', '_(quotidiano)']
    for s in to_remove:
        if s in string:
            return string.replace(s, '')
    return string
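Since the question lists several different bracketed qualifiers ((rivista), (periodico), (settimanale), ...), a more general sketch strips any trailing parenthesised suffix with a regex; this assumes the qualifier always sits at the end of the URL slug:

import re

def clean(slug):
    # drop a trailing _(anything) qualifier, e.g. '/wiki/Jack_(periodico)' -> '/wiki/Jack'
    return re.sub(r"_\([^)]*\)$", "", slug)

print(clean('/wiki/Jack_(periodico)'))  # /wiki/Jack
print(clean('/wiki/Altreconomia'))      # unchanged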

Check whether words from a list are inside a string of another list - Python

So I tried getting all the headlines of the New York Times homepage and wanted to see how many times a certain word has been mentioned. In this particular case, I wanted to see how many headlines mentioned either the Coronavirus or Trump. This is my code but it won't work as 'number' remains the integer I give it before the while loop.
import requests
from bs4 import BeautifulSoup

url = 'https://www.nytimes.com'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")

a = soup.findAll("h2", class_="esl82me0")
for story_heading in a:
    print(story_heading.contents[0])

lijst = ["trump", "Trump", "Corona", "COVID", "virus", "Virus", "Coronavirus", "COVID-19"]
number = 0
run = 0
while run < len(a)+1:
    run += 1
    if any(lijst in s for s in a):
        number += 1
print("\nTrump or the Corona virus have been mentioned", number, "times.")
So I basically want the variable 'number' to increase by 1 if a headline (which is an entry in the list a) has the word Trump or Coronavirus or both in them.
Does anyone know how to do this?
In general, I recommend putting more thought into naming variables. I like how you tried to print the story headings. The line if any(lijst in s for s in a) does not do what you think it should: you need to instead iterate over each word in a single h2. The any function is just shorthand for the following:
def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
In other words, you're trying to see if an entire list is in an h2 element, which will never be true. Here is an example fix.
import requests
from bs4 import BeautifulSoup

url = 'https://www.nytimes.com'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")

h2s = soup.findAll("h2", class_="esl82me0")
for story_heading in h2s:
    print(story_heading.contents[0])

keywords = ["trump", "Trump", "Corona", "COVID", "virus", "Virus", "Coronavirus", "COVID-19"]
number = 0
for h2 in h2s:
    headline = h2.text
    words_in_headline = headline.split(" ")
    for word in words_in_headline:
        if word in keywords:
            number += 1
print("\nTrump or the Corona virus have been mentioned", number, "times.")
Output
Trump or the Corona virus have been mentioned 7 times.
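Tying this back to any(): a compact variant counts each headline once if it mentions any keyword, using substring checks so trailing punctuation (e.g. "Coronavirus,") still matches. This is a sketch on top of the answer's h2s and keywords, not the answer's own code:

# one count per headline that mentions at least one keyword
number = sum(1 for h2 in h2s if any(keyword in h2.text for keyword in keywords))
print("\nTrump or the Corona virus have been mentioned", number, "times.")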

How do I remove extra text to the right of a string?

I am trying to get the name of a car model as it appears on the website, but for some reason (after trying all of the following), it doesn't seem to work.
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://www.carsales.com.au/cars/results?offset=12"
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
model_name = soup.find_all('a', attrs={'data-webm-clickvalue':'sv-view-title'})
final_model_name = model_name[1]
clean_model_name = final_model_name.text
clean_model_name = clean_model_name.replace("\r", "")
clean_model_name = clean_model_name.replace("\n", "")
clean_model_name = clean_model_name.strip()
clean_model_name = clean_model_name.rstrip()
print(clean_model_name)
I have also created a variable that contains the whole sentence I want to remove (which works) and passed it to the strip function, but the MY14 element of it changes based on the year of the car. Creating a variable for each year doesn't seem very efficient.
Some indexes return clean results, however, others return the following (scroll across):
2014 Holden Cruze SRi Z Series JH Series II Auto MY14 Manufacturer Marketing Year (MY) The manufacturer's marketing year of this model.
I don't need any of the details after the car model - after researching, strip() should remove white space either side (but in this case it doesn't) and rstrip() should remove everything to the right (but in this case it doesn't)
I have successfully created a for loop which loops through each of the cars on this page, but some rows in the DataFrame are extended due to the additional unwanted text.
strip() only removes the whitespace characters at the front and rear of the string you are working with. You can try this:
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://www.carsales.com.au/cars/results?offset=12"
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
model_name = soup.find_all('a', attrs={'data-webm-clickvalue':'sv-view-title'})
final_model_name = model_name[1]
clean_model_name = final_model_name.text
clean_model_name = clean_model_name.strip().split()[:5]
clean_model_name = ' '.join(clean_model_name)
print(clean_model_name)
I noticed that most of the model names have 5 key parts (the year, brand, and model), so I used [:5] to take the first five words of the model name; if you want to leave out the series elements, just change the value to 3. split() breaks the model name on the spaces, and strip() removes the surrounding whitespace first. Hope this helps
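If the unwanted tail always starts with the tooltip text seen in the question ("Manufacturer Marketing Year"), another sketch is to cut at that phrase instead of counting words. Using that phrase as the cut point is an assumption based on the single example shown:

# cut everything from the tooltip text onwards; the phrase is assumed
# from the one example in the question, not guaranteed for every listing
raw = ("2014 Holden Cruze SRi Z Series JH Series II Auto MY14 "
       "Manufacturer Marketing Year (MY) The manufacturer's marketing year of this model.")
clean_model_name = raw.split("Manufacturer Marketing Year")[0].strip()
print(clean_model_name)  # 2014 Holden Cruze SRi Z Series JH Series II Auto MY14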
