This question has been asked before, but I've searched and tried and still can't get it to work. I'm a beginner when it comes to Selenium.
Have a look at: https://finance.yahoo.com/quote/FB
I'm trying to web scrape the "Recommended Rating", which in this case at the time of writing is 2. I've tried:
driver.get('https://finance.yahoo.com/quote/FB')
time.sleep(10)
rating = driver.find_element_by_css_selector('#Col2-4-QuoteModule-Proxy > div > section > div > div > div')
print(rating.text)
...which doesn't give me an error, but doesn't print any text either. I've also tried with xpath, class_name, etc. Instead I tried:
source = driver.page_source
print(source)
This doesn't work either; I'm just getting the actual source without the dynamically generated content. When I click "View Source" in Chrome, it's not there. I tried saving the webpage in Chrome. That didn't work.
Then I discovered that if I save the entire webpage, including images and css-files and everything, the source code is different from the one where I just save the HTML.
The HTML-file I get when I save the entire webpage using Chrome DOES contain the information that I need, and at first I was thinking about using pyautogui to just Ctrl + S every webpage, but there must be another way.
The information that I need is obviously there in the HTML, but how do I get it without downloading the entire web page?
Try this to get the HTML after the dynamically generated content (JavaScript) has been executed:
driver.execute_script("return document.body.innerHTML")
See similar question:
Running javascript in Selenium using Python
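For instance, here is a minimal sketch of handing that rendered HTML to BeautifulSoup; the div.rating-text selector is borrowed from the answer below and may have changed since:
from bs4 import BeautifulSoup

# grab the DOM after the JavaScript has run, then parse it offline
rendered = driver.execute_script("return document.body.innerHTML")
soup = BeautifulSoup(rendered, "html.parser")
rating = soup.select_one("div.rating-text")  # selector borrowed from another answer
if rating is not None:
    print(rating.text)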
The CSS selector, div.rating-text, is working just fine and is unique on the page. Returning .text will give you the value you are looking for.
First, you need to wait for the element to be clickable, then scroll down to the element before getting the rating. Try:
element.location_once_scrolled_into_view
element.text
EDIT:
Use the following XPath selector:
'//a[@data-test="recommendation-rating-header"]//following-sibling::div//div[@class="rating-text Arrow South Fw(b) Bgc($buy) Bdtc($buy)"]'
Then you will have:
rating = driver.find_element_by_xpath('//a[@data-test="recommendation-rating-header"]//following-sibling::div//div[@class="rating-text Arrow South Fw(b) Bgc($buy) Bdtc($buy)"]')
To extract the value of the slider, use
val = rating.get_attribute("aria-label")
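Putting those pieces together, here is a minimal sketch, assuming the div.rating-text selector from the earlier answer still matches the page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://finance.yahoo.com/quote/FB')
wait = WebDriverWait(driver, 15)
# wait for the rating element instead of using a fixed time.sleep()
rating = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.rating-text')))
# scroll the element into view before reading it
rating.location_once_scrolled_into_view
print(rating.text)                         # e.g. "2"
print(rating.get_attribute('aria-label'))  # the slider value, if present
driver.quit()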
The script below answers a different question but somehow I think this is what you are after.
import requests
from bs4 import BeautifulSoup
base_url = 'http://finviz.com/screener.ashx?v=152&s=ta_topgainers&o=price&c=0,1,2,3,4,5,6,7,25,63,64,65,66,67'
html = requests.get(base_url)
soup = BeautifulSoup(html.content, "html.parser")
main_div = soup.find('div', attrs = {'id':'screener-content'})
light_rows = main_div.find_all('tr', class_="table-light-row-cp")
dark_rows = main_div.find_all('tr', class_="table-dark-row-cp")
data = []
for rows_set in (light_rows, dark_rows):
    for row in rows_set:
        row_data = []
        for cell in row.find_all('td'):
            val = cell.a.get_text()
            row_data.append(val)
        data.append(row_data)
# sort rows to maintain original order
data.sort(key=lambda x: int(x[0]))
import pandas
pandas.DataFrame(data).to_csv("AAA.csv", header=False)
Related
I successfully put together a script to retrieve search results from Sales Navigator in LinkedIn. The following is the script, using Python, Selenium, and bs4.
from selenium import webdriver
from bs4 import BeautifulSoup
import time

browser = webdriver.Firefox(executable_path=r'D:\geckodriver\geckodriver.exe')
url1 = "https://www.linkedin.com/sales/search/company?companySize=E&geoIncluded=emea%3A0%2Ceurope%3A0&industryIncluded=6&keywords=AI&page=1&searchSessionId=zreYu57eQo%2BSZiFskdWJqg%3D%3D"
browser.get(url1)
time.sleep(15)
parsed = browser.find_element_by_tag_name('html').get_attribute('innerHTML')
soup = BeautifulSoup(parsed, 'html.parser')
search_results = soup.select('dt.result-lockup__name a')
print(len(search_results))
time.sleep(5)
browser.quit()
Irrespective of the number of results, the answer was always 10; that is, only 10 results were returned. Upon further investigation into the source, I noticed the following:
The first 10 results are represented at a different level, and the rest sit under a div tag with a class named "deferred area". Though the dt class name is the same for all the search results (result-lockup__name), due to the change in levels I am not able to access/retrieve them.
What would be the right way to retrieve all results in such a case?
EDIT 1
An example of how the tag levels are within li
And an example of the html script of the result that is not being retrieved
EDIT 2
The page source as requested
https://pastebin.com/D11YpHGQ
A lot of sites don't display all search results on page load; they only render them when needed, e.g. when the visitor keeps scrolling to indicate they want to view more.
We can use JavaScript to scroll to the bottom of the page for us with window.scrollTo(0, document.body.scrollHeight) (you may want to loop this if you expect hundreds of results), forcing all results onto the page, after which we can grab the HTML.
Below should do the trick.
browser = webdriver.Firefox(executable_path=r'D:\geckodriver\geckodriver.exe')
url1 = "https://www.linkedin.com/sales/search/company?companySize=E&geoIncluded=emea%3A0%2Ceurope%3A0&industryIncluded=6&keywords=AI&page=1&searchSessionId=zreYu57eQo%2BSZiFskdWJqg%3D%3D"
browser.get(url1)
time.sleep(15)
browser.execute_script('window.scrollTo(0,document.body.scrollHeight)')
time.sleep(15)
parsed = browser.find_element_by_tag_name('html').get_attribute('innerHTML')
soup = BeautifulSoup(parsed, 'html.parser')
search_results = soup.select('dt.result-lockup__name a')
print(len(search_results))
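If you expect more than a couple of screens of results, here is a hedged sketch of the looped version mentioned above; it reuses the browser object from the snippet and assumes time is imported:
last_height = browser.execute_script('return document.body.scrollHeight')
while True:
    browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
    time.sleep(3)  # give the deferred content time to load
    new_height = browser.execute_script('return document.body.scrollHeight')
    if new_height == last_height:
        break  # the height stopped growing, so all results are likely loaded
    last_height = new_height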
I'm new to programming, so it's very likely my whole approach to this is wrong.
I'm trying to scrape the standings table from this site - http://www.flashscore.com/hockey/finland/liiga/ - and for now it would be fine if I could even scrape one column with the team names. I try to find the td tags with the class "participant_name col_participant_name col_name", but the code returns empty brackets:
import requests
from bs4 import BeautifulSoup
import lxml
def table(url):
    teams = []
    source = requests.get(url).content
    soup = BeautifulSoup(source, "lxml")
    for td in soup.find_all("td"):
        team = td.find_all("participant_name col_participant_name col_name")
        teams.append(team)
    print(teams)

table("http://www.flashscore.com/hockey/finland/liiga/")
I tried using the tr tag to retrieve whole rows, but with no success either.
I think the main problem here is that you are trying to scrape dynamically generated content using requests. Note that there is no participant_name col_participant_name col_name text at all in the HTML source of the page, which means it is generated with JavaScript by the website. For that job you should use something like selenium together with ChromeDriver, or whichever driver you prefer. Below is an example using both of the mentioned tools:
from bs4 import BeautifulSoup
from selenium import webdriver
url = "http://www.flashscore.com/hockey/finland/liiga/"
driver = webdriver.Chrome()
driver.get(url)
source = driver.page_source
soup = BeautifulSoup(source, "lxml")
elements = soup.findAll('td', {'class':"participant_name col_participant_name col_name"})
I think another issue with your code is the way you were trying to access the tags. If you want to match a specific class or any other specific attribute, you can do so by passing a Python dictionary as an argument to the .findAll function.
Now we can use elements to find all the teams' names. Try print(elements[0]) and notice that the team's name is inside an a tag; we can access it using .a.text. So, something like this:
teams = []
for item in elements:
    team = item.a.text
    print(team)
    teams.append(team)
print(teams)
teams should now be the desired output:
>>> teams
['Assat', 'Hameenlinna', 'IFK Helsinki', 'Ilves', 'Jyvaskyla', 'KalPa', 'Lukko', 'Pelicans', 'SaiPa', 'Tappara', 'TPS Turku', 'Karpat', 'KooKoo', 'Vaasan Sport', 'Jukurit']
teams could also be created using a list comprehension:
teams = [item.a.text for item in elements]
Mr Aguiar beat me to it! I will just point out that you can do it all with selenium alone. Of course he is correct in pointing out that this is one of the many sites that loads most of its content dynamically.
You might be interested in observing that I have used an xpath expression. These often make for compact ways of saying what you want. Not too hard to read once you get used to them.
>>> from selenium import webdriver
>>> driver = webdriver.Chrome()
>>> driver.get('http://www.flashscore.com/hockey/finland/liiga/')
>>> items = driver.find_elements_by_xpath('.//span[@class="team_name_span"]/a[text()]')
>>> for item in items:
... item.text
...
'Assat'
'Hameenlinna'
'IFK Helsinki'
'Ilves'
'Jyvaskyla'
'KalPa'
'Lukko'
'Pelicans'
'SaiPa'
'Tappara'
'TPS Turku'
'Karpat'
'KooKoo'
'Vaasan Sport'
'Jukurit'
You're very close.
Start out being a little less ambitious, and just focus on "participant_name". Take a look at https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all . I think you want something like:
for td in soup.find_all("td", "participant_name"):
Also, you must be seeing different web content than I am. After a wget of your URL, grep doesn't find "participant_name" in the text at all. You'll want to verify that your code is looking for an ID or a class that is actually present in the HTML text.
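As a quick way to run that verification from Python (a rough equivalent of the wget-plus-grep check above):
import requests

# is the class name present in the raw, non-rendered HTML at all?
raw = requests.get("http://www.flashscore.com/hockey/finland/liiga/").text
print("participant_name" in raw)  # False confirms the content is added by JavaScript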
Achieving the same using a CSS selector, which will let you make the code more readable and concise:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://www.flashscore.com/hockey/finland/liiga/')
for player_name in driver.find_elements_by_css_selector('.participant_name'):
    print(player_name.text)
driver.quit()
I am trying to scrape a website, but I encountered a problem. When I try to scrape the data, it looks like the HTML differs between what I see in Chrome's inspector and what I get in Python. This happens with http://edition.cnn.com/election/results/states/arizona/house/01, where I tried to scrape the election results. I used the script below to check the HTML of the webpage, and I noticed that they differ: none of the classes that I need, like section-wrapper, are there.
page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "lxml")
print(soup)
Does anyone know what the problem is?
http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json
This site uses JavaScript to fetch its data; you can check the URL above.
You can find this URL in Chrome's dev tools; there are many links there, so check them out:
Chrome >> F12 >> Network tab >> F5 (refresh page) >> double-click the .json URL >> open in a new tab
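Here is a minimal sketch of fetching that endpoint directly with requests; the structure of the returned JSON is an assumption here, so peek at it before writing any parsing code:
import requests

url = 'http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json'
data = requests.get(url).json()
# inspect the top-level structure before digging for the results
print(type(data))
if isinstance(data, dict):
    print(list(data)[:10])  # a sample of the top-level keys
else:
    print(data[:2])         # a sample of the top-level items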
import requests
from bs4 import BeautifulSoup

page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "html.parser")
# you can try all sorts of tags here; I used class "ad" and class "ec-placeholder"
g_data = soup.find_all("div", {"class": "ec-placeholder"})
h_data = soup.find_all("div", {"class": "ad"})
for item in g_data:
    print(item)
# for item in h_data:
#     print(item)
I'm currently trying to scrape data off a website, but the code below returns an empty array "[]" for some reason, and I can't figure out why. When I check the generated HTML, there seems to be a lot of \t, \r, and \n. I am unsure what the issue with my code is.
url = "http://www.hkex.com.hk/eng/csm/price_movement_result.htm?location=priceMoveSearch&PageNo=1&SearchMethod=2&mkt=hk&LangCode=en&StockType=ALL&Ranking=ByMC&x=51&y=6"
html = requests.get(url)
soup = BeautifulSoup(html.text,'html.parser')
rows = soup.find_all('tr')
print(rows)
I have attempted to parse without ".text", and also with "lxml" instead of "html.parser", but ended up with the same result.
EDIT: Found a workaround; I used selenium to open the page and grab the source that way instead.
from selenium import webdriver
from bs4 import BeautifulSoup

url = "http://www.hkex.com.hk/eng/csm/price_movement_result.htm?location=priceMoveSearch&PageNo=1&SearchMethod=2&mkt=hk&LangCode=en&StockType=ALL&Ranking=ByMC&x=51&y=6"
driver = webdriver.Firefox()
driver.get(url)
f = driver.page_source
soup = BeautifulSoup(f,'html.parser')
rows = soup.find_all('tr')
This page uses JavaScript to fetch the data from the server. In Chrome's dev tools you can find the link the JavaScript uses to request the data, so you can request that link directly to get the info you need:
http://www.hkex.com.hk/eng/csm/ws/Result.asmx/GetData?location=priceMoveSearch&SearchMethod=2&LangCode=en&StockCode=&StockName=&Ranking=ByMC&StockType=ALL&mkt=hk&PageNo=1&ATypeSHEx=&AType=&FDD=&FMM=&FYYYY=&TDD=&TMM=&TYYYY=
There is no need to use selenium.
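For example, here is a hedged sketch of hitting that endpoint with requests; the exact response format (plain JSON vs. something wrapped) is an assumption, so inspect it first:
import requests

data_url = ('http://www.hkex.com.hk/eng/csm/ws/Result.asmx/GetData'
            '?location=priceMoveSearch&SearchMethod=2&LangCode=en'
            '&StockCode=&StockName=&Ranking=ByMC&StockType=ALL&mkt=hk&PageNo=1'
            '&ATypeSHEx=&AType=&FDD=&FMM=&FYYYY=&TDD=&TMM=&TYYYY=')
response = requests.get(data_url)
print(response.headers.get('Content-Type'))  # check whether it is JSON or XML
print(response.text[:500])                   # peek at the payload before parsing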
There are no true HTML rows in the document. The rows are dynamically generated by JavaScript. BeautifulSoup cannot execute JavaScript.
If you view the contents of the html.text variable, you will notice that the rows are not there; that content is generated dynamically.
This is my first time with Python and web scraping. I have been looking around and am still unable to do what I need.
Below is a screenshot of the elements that I inspected via Chrome.
As you can see, it is from the dropdown 'Apartments'.
My 1st step is to get the list of cities from the dropdown
My 2nd step is then, from the given city list, go to each of them (...url.../Brantford/ for example)
My 3rd step is then, given the available apartments, to click each one to get the price range for each bedroom type
Currently, I am JUST trying to 'loop' through the cities in the first step, and it's not working.
Could you please also point me to a good forum, article, or tutorial that's suitable for a beginner like me to read and learn from. I'd really like to get good at this so that I may give back to society one day.
Thank you!
import requests
from bs4 import BeautifulSoup
url = 'http://www.homestead.ca/apartments-for-rent/'
response = requests.get(url)
html = response.content
soup = BeautifulSoup(html,'lxml')
dropdown_list = soup.find(".child-pages dropdown-menu a href")
print (dropdown_list.prettify())
Screenshot
You can access the elements by the class and a child "a" node, then read the "href" attribute and prepend the domain name.
import requests
from bs4 import BeautifulSoup
url = 'http://www.homestead.ca/apartments-for-rent/'
response = requests.get(url)
html = response.content
soup = BeautifulSoup(html,'lxml')
dropdown_list = soup.select(".primary .child-pages a")
links=['http://www.homestead.ca'+x['href'] for x in dropdown_list]
print (links)
city_names=[x.text for x in dropdown_list]
print (city_names)
result = []
for link in links:
    response = requests.get(link)
    html = response.content
    soup = BeautifulSoup(html, 'lxml')
    ...
    result.append(...)
Explanation:
soup.select(".primary .child-pages a")
Using a CSS selector, I select the "a" nodes that are children of a node with the class "child-pages", which is itself a child of the node with the class "primary". There were two nodes with the class "child-pages", and I filtered the one that was under the node with the "primary" class.
[x.text for x in dropdown_list]
This is a list comprehension in Python. It means that I take each element of dropdown_list, extract only its text attribute, and return the results as a list.
You can then iterate over the links and append the data to a list (here "result").
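As a purely hypothetical filling-in of that loop (the .listing-title selector below is invented for illustration; inspect a real city page to find the actual one):
result = []
for link in links:
    response = requests.get(link)
    soup = BeautifulSoup(response.content, 'lxml')
    # '.listing-title' is a made-up selector standing in for the real one
    names = [x.text.strip() for x in soup.select('.listing-title')]
    result.append((link, names))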
I found this introduction to BeautifulSoup pretty good, even without going through the links: http://programminghistorian.org/lessons/intro-to-beautiful-soup
I would also recommend reading a book. For example, this one: Web Scraping with Python: Collecting Data from the Modern Web