I am attempting to get the source code of a webpage, including HTML that is generated by JavaScript. My current code is as follows:
from selenium import webdriver
from bs4 import BeautifulSoup

case_url = "http://na.leagueoflegends.com/tribunal/en/case/5555631/#nogo"
try:
    browser = webdriver.Firefox()
    browser.get(case_url)
    url = browser.page_source
    print url
    browser.close()
except:
    ...

soup = BeautifulSoup(url)
...extraction code that finds the right tags, but they are empty...
When I print the source stored in url, it prints the usual HTML but is missing the JavaScript-generated information. How do I get the same HTML I see when I press F12 (but programmatically)?
Further to alexce's answer, your underlying issue was that you were extracting the HTML before the JavaScript had generated it. Selenium returns control as soon as the browser has loaded the page and does not wait for any post-load, JavaScript-generated HTML.
By using find_elements, you will automatically wait for the elements to appear (depending on the timeout set when instantiating your driver).
If you then read page_source after the find_elements call, you will see the full HTML.
I have automated many dynamically generated, client-side web pages and have had no issues, provided you wait for the HTML to be rendered.
Alexce is correct that there is no need to use BeautifulSoup, but I wanted to make it clear that Selenium is perfectly able to handle JavaScript-generated HTML.
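For instance, a minimal sketch of that order of operations using an implicit wait (it reuses case_url from the question; the XPath matches the tab links in the answer below, and the 10-second timeout is an arbitrary choice):

from selenium import webdriver

browser = webdriver.Firefox()
browser.implicitly_wait(10)  # every find_element(s) call now polls for up to 10 seconds
browser.get(case_url)

# this blocks until at least one matching element has been generated by the page's JavaScript
game_tabs = browser.find_elements_by_xpath('//a[contains(@id, "tab-")]')

# page_source now includes the JavaScript-generated markup as well
html = browser.page_source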
You don't really need to use BeautifulSoup for parsing the HTML in this case; Selenium itself is pretty powerful in terms of locating elements.
Here's how you can parse the contents of each tab/game one by one:
from selenium import webdriver

case_url = "http://na.leagueoflegends.com/tribunal/en/case/5555631/#nogo"
browser = webdriver.Firefox()
browser.get(case_url)

game_tabs = browser.find_elements_by_xpath('//a[contains(@id, "tab-")]')
for index, tab in enumerate(game_tabs, start=1):
    tab.click()

    game = browser.find_element_by_id('game%d' % index)
    game_type = game.find_element_by_id('stat-type-fill').text
    game_length = game.find_element_by_id('stat-length-fill').text
    game_outcome = game.find_element_by_id('stat-outcome-fill').text

    game_chat = game.find_element_by_class_name('chat-log')
    enemy_chat = [msg.text for msg in game_chat.find_elements_by_class_name('enemy') if msg.text]
    ally_chat = [msg.text for msg in game_chat.find_elements_by_class_name('ally') if msg.text]

    print game_type, game_length, game_outcome
    print "Enemy chat: ", enemy_chat
    print "Ally chat: ", ally_chat
    print "------"
prints:
Classic 34:48 Loss
Enemy chat: [u'Akali [All] [00:01:38] lol', ... ]
Ally chat: [u'Gangplank [All] [00:00:12] anyone remember the april fools lee sin spotlight? lol', ... ]
------
Dominion 19:22 Loss
Enemy chat: [u'Evelynn [All] [00:00:10] Our GP has a Ti-83', ... ]
Ally chat: [u'Miss Fortune [All] [00:00:18] arr ye wodden computer needs to walk the plank!', ... ]
Related
I am attempting to web-scrape info off of the following website: https://www.axial.net/forum/companies/united-states-family-offices/
I am trying to scrape the description for each family office, so "https://www.axial.net/forum/companies/united-states-family-offices/" + insert_company_name are the pages I need to scrape.
So I wrote the following code to test the program for just one page:
from bs4 import BeautifulSoup as soup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome('insert_path_here/chromedriver')
driver.get("https://network.axial.net/company/ansaco-llp")
page_source = driver.page_source
soup2 = soup(page_source,"html.parser")
soup2.findAll('axl-teaser-description')[0].text
This works for the single page, as long as the description doesn't have a "show full description" drop down button. I will save that for another question.
I wrote the following loop:
# Note: lst2 has all the names for the companies. I made sure they match the webpage.
lst3 = []
for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    page_source = driver.page_source
    for handle in driver.window_handles:
        driver.switch_to.window(handle)
    word_soup = soup(page_source, "html.parser")
    if word_soup.findAll('axl-teaser-description') == []:
        lst3.append('null')
    else:
        c = word_soup.findAll('axl-teaser-description')[0].text
        lst3.append(c)
print(lst3)
When I run the loop, all of the values come out as "null", even the ones without "click for full description" buttons.
I edited the loop to print out word_soup instead, and the page source is different than if I had run it without a loop; it does not have the description text.
I don't understand why a loop would cause that but apparently it does. Does anyone know how to fix this problem?
Found the solution: pause the program for 3 seconds after driver.get:
import time

lst3 = []
for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    time.sleep(3)
    page_source = driver.page_source
    word_soup = soup(page_source, "html.parser")
    if word_soup.findAll('axl-teaser-description') == []:
        lst3.append('null')
    else:
        c = word_soup.findAll('axl-teaser-description')[0].text
        lst3.append(c)
print(lst3)
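If you want to avoid a fixed 3-second pause, an explicit wait on the element itself is usually more reliable. Here is a sketch of the same loop using WebDriverWait; it assumes the same driver and lst2 as above, the tag name comes from the question, and the 10-second timeout is an arbitrary choice:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

lst3 = []
for key in lst2[1:]:
    driver.get("https://network.axial.net/company/" + key.lower())
    try:
        # wait until the client-side framework has rendered the description
        desc = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.TAG_NAME, "axl-teaser-description"))
        )
        lst3.append(desc.text)
    except TimeoutException:
        lst3.append('null')
print(lst3)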
I see that the page uses JavaScript to generate the text, meaning it doesn't show up in the page source, which is weird but OK. I don't quite understand why you're iterating through and switching to every window handle Selenium has open, but you definitely won't find the description in the raw page source / BeautifulSoup.
Honestly, I'd personally look for a better website if you can; otherwise, you'll have to try it with Selenium, which is inefficient and horrible.
I'm practicing web scraping with Python at the moment and ran into a problem: I wanted to scrape a website that lists anime I've watched before, but when I try to scrape it (via requests or selenium) it only gets around 30 of the 110 anime names from the page.
Here is my code with selenium:
from selenium import webdriver
from bs4 import BeautifulSoup

browser = webdriver.Firefox()
browser.get("https://anilist.co/user/Agusmaris/animelist/Completed")

data = BeautifulSoup(browser.page_source, 'lxml')
for title in data.find_all(class_="title"):
    print(title.getText())
And when I run it, the page source only goes up to an anime called 'Golden Time', even though there are 70 or more titles left on the page.
Thanks
Edit: Code that works now thanks to 'supputuri':
from selenium import webdriver
from bs4 import BeautifulSoup
import time

driver = webdriver.Firefox()
driver.get("https://anilist.co/user/Agusmaris/animelist/Completed")
time.sleep(3)

footer = driver.find_element_by_css_selector("div.footer")
preY = 0
print(str(footer))
while footer.rect['y'] != preY:
    preY = footer.rect['y']
    footer.location_once_scrolled_into_view
    print('loading')

html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
for title in soup.find_all(class_="title"):
    print(title.getText())

driver.close()
driver.quit()
ret = input()
Here is the solution.
Make sure to add import time
driver.get("https://anilist.co/user/Agusmaris/animelist/Completed")
time.sleep(3)
footer =driver.find_element_by_css_selector("div.footer")
preY =0
while footer.rect['y']!=preY:
preY = footer.rect['y']
footer.location_once_scrolled_into_view
time.sleep(1)
print(str(driver.page_source))
This will keep scrolling the footer into view until its y-position stops changing, i.e. until no more titles are being lazy-loaded, and then get the page source.
Let us know if this was helpful.
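If the page had no convenient footer element to track, an alternative (and roughly equivalent) pattern is to scroll the window itself and compare the document height before and after; this is just a sketch, and the 1-second pause is an assumption about how long the lazy-loader needs:

import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    # scroll to the bottom so the page's lazy-loader fetches the next batch of titles
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(1)  # give the new entries time to render
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # the height stopped growing, so everything is loaded
    last_height = new_height

print(str(driver.page_source))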
So, this is the gist of what I get when I load the page source:
AniListwindow.al_token = 'E1lPa1kzYco5hbdwT3GAMg3OG0rj47Gy5kF0PUmH';Sorry, AniList requires Javascript.Please enable Javascript or http://outdatedbrowser.com>upgrade to a modern web browser.Sorry, AniList requires a modern browser.Please http://outdatedbrowser.com>upgrade to a newer web browser.
Since I know damn well that Javascript is enabled and my Chrome version is fully up to date, and the URL listed takes one to a nonsecure website to "download" a new version of your browser, I think this is a spam site. Not sure if you were aware of that when posting so I won't flag as such, but I wanted you and others who come across this to be aware.
I'm currently working on a learning project for web scraping.
I've picked my site:
https://www.game.co.uk/en/m/games/best-selling-games/best-selling-xbox-one-games/?merchname=MobileTopNav-_-XboxOne_Games-_-BestSellers#Page0
On this page, there is a button at the bottom that loads the next 10 products; without clicking it, the next batch of products is not displayed, and the URL does not change when the button is clicked.
I wanted to ask how I can solve this dilemma using the requests module.
My code is below:
import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.game.co.uk/en/m/games/best-selling-games/best-selling-xbox-one-games/?merchname=MobileTopNav-_-XboxOne_Games-_-BestSellers")
c = r.content
soup = BeautifulSoup(c, "html.parser")

all = soup.find_all("div", {"class": "product"})
for item in all:
    print(item.find({"h2": "productInfo"}).text.replace('\h2', '').replace(" ", ""))
    print(item.find("span", {"class": "condition"}).text + " " + item.find("span", {"class": "value"}).text)
    try:
        print(item.find_all("span", {"class": "condition"})[1].text + " " + item.find_all("span", {"class": "value"})[1].text)
    except:
        print("No Preowned")
    print(" ")
Try this code to get all the items available on that page. You can use Chrome dev tools to find this URL, which takes a page-number parameter you can increment.
from bs4 import BeautifulSoup
import requests

page_link = "https://www.game.co.uk/en/m/games/best-selling-games/best-selling-xbox-one-games/?merchname=MobileTopNav-_-XboxOne_Games-_-BestSellers&pageNumber={}&pageMode=true"

page_no = 0
while True:
    page_no += 1
    res = requests.get(page_link.format(page_no))
    soup = BeautifulSoup(res.text, 'lxml')
    container = soup.select(".productInfo h2")
    if len(container) <= 1:
        break
    for content in container:
        print(content.text)
Output of the last few titles:
ARK Survival Evolved
Kingdom Come Deliverance Special Edition
Halo 5 Guardians
Sonic Forces
The Elder Scrolls Online: Summerset - Digital
You need to use a tool that supports JavaScript/jQuery execution, i.e. Selenium, which drives a real browser rather than just parsing the downloaded HTML.
The problem you're facing is that the content you try to access gets created dynamically via JavaScript when the mentioned button is clicked.
When you request the page, the additional HTML elements you want to read from have not been created yet, so BeautifulSoup can't find them.
Using Selenium you can click buttons, fill out forms, and much more. You can also wait for the server to create the content you want to access.
The Selenium documentation should be self-explanatory; a rough sketch for this page follows.
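Note that the CSS selector for the "show more" button (button.showMore) is an assumption and needs to be checked against the actual page; .productInfo h2 comes from the other answer:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Firefox()
driver.get("https://www.game.co.uk/en/m/games/best-selling-games/best-selling-xbox-one-games/?merchname=MobileTopNav-_-XboxOne_Games-_-BestSellers")

while True:
    try:
        # wait for the "show more" button and click it to load the next 10 products
        more_btn = WebDriverWait(driver, 5).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, "button.showMore"))  # assumed selector
        )
        more_btn.click()
    except TimeoutException:
        break  # the button no longer appears, so all products are loaded

for item in driver.find_elements_by_css_selector(".productInfo h2"):
    print(item.text)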
I'm fairly new to coding and Python, so I apologize if this is a silly question. I'd like a script that goes through all 19,000 search result pages and scrapes each page for all of the URLs. I've got all of the scraping working, but can't figure out how to deal with the fact that the page uses AJAX to paginate. Usually I'd just make a loop over the URL to capture each search result, but that's not possible here. Here's the page: http://www.heritage.org/research/all-research.aspx?nomobile&categories=report
This is the script I have so far:
import io
import urllib2
from bs4 import BeautifulSoup

with io.open('heritageURLs.txt', 'a', encoding='utf8') as logfile:
    page = urllib2.urlopen("http://www.heritage.org/research/all-research.aspx?nomobile&categories=report")
    soup = BeautifulSoup(page)
    snippet = soup.find_all('a', attrs={'item-title'})
    for a in snippet:
        logfile.write("http://www.heritage.org" + a.get('href') + "\n")

print "Done collecting urls"
Obviously, it scrapes the first page of results and nothing more.
And I have looked at a few related questions but none seem to use Python or at least not in a way that I can understand. Thank you in advance for your help.
For the sake of completeness: while you could try to reproduce the POST request and find a way to access the next page, as I suggested in my comment, using Selenium makes it quite easy to achieve what you want.
Here is a simple solution using Selenium for your question:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep

# using the Firefox web browser
driver = webdriver.Firefox()
# or uncomment this line to use PhantomJS instead
#driver = webdriver.PhantomJS()

url = 'http://www.heritage.org/research/all-research.aspx?nomobile&categories=report'
driver.get(url)

# set initial page count
pages = 1

with open('heritageURLs.txt', 'w') as f:
    while True:
        try:
            # sleep here to allow time for page load
            sleep(5)
            # grab the Next button if it exists
            btn_next = driver.find_element_by_class_name('next')
            # find all item-title links and write their hrefs to file
            links = driver.find_elements_by_class_name('item-title')
            print "Page: {} -- {} urls to write...".format(pages, len(links))
            for link in links:
                f.write(link.get_attribute('href') + '\n')
            # exit if no more Next button is found, i.e. last page
            if btn_next is None:
                print "crawling completed."
                exit(-1)
            # otherwise click the Next button and repeat crawling the urls
            pages += 1
            btn_next.send_keys(Keys.RETURN)
        # you should specify the exception here
        except:
            print "Error found, crawling stopped"
            exit(-1)
Hope this helps.
I am learning to use Python Selenium and BeautifulSoup for web scraping. Currently, I am trying to scrape the hot searches on Google search trends http://www.google.com/trends/hottrends#pn=p5
This is my current code. However, I realized the full html is not downloaded and I only have content from the most recent few dates. What can I do to rectify this problem?
from selenium import webdriver
from bs4 import BeautifulSoup
googleURL = "http://www.google.com/trends/hottrends#pn=p5"
browser = webdriver.Firefox()
browser.get(googleURL)
content = browser.page_source
soup = BeautifulSoup(content)
print soup
Users add more content to the page (from previous dates) by clicking the <div onclick="control.moreData()" id="moreLink">More...</div> element at the bottom of the page.
So to get your desired content, you could use Selenium to click the id="moreLink" element or execute some JavaScript to call control.moreData(); in a loop.
For example, if you want to get all content as far back as Friday, February 15, 2013 (it looks like a string of this format exists for every date, for loaded content) your python might look something like this:
content = browser.page_source
desired_content_is_loaded = False

while not desired_content_is_loaded:
    if "Friday, February 15, 2013" not in content:
        # call the page's own handler to load older entries
        browser.execute_script("control.moreData();")
        content = browser.page_source
    else:
        desired_content_is_loaded = True
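Alternatively, instead of calling control.moreData() yourself, you could click the id="moreLink" element directly; a sketch, assuming import time and the same browser instance as above:

from selenium.common.exceptions import NoSuchElementException
import time

while "Friday, February 15, 2013" not in browser.page_source:
    try:
        browser.find_element_by_id("moreLink").click()
    except NoSuchElementException:
        break  # the "More..." link is gone, so nothing is left to load
    time.sleep(1)  # give the newly loaded entries a moment to render

content = browser.page_source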
EDIT:
If you disable JavaScript in your browser and reload the page, you will see that there is no "trends" content at all. What that tells me is that those items are loaded dynamically; they are not part of the HTML document downloaded when you open the page. Selenium's .get() waits for the HTML document to load, but not for all JS to complete. There's no telling if async JS will complete before or after any other event; it completes when it's ready, and could be different every time. That explains why you might sometimes get all, some, or none of that content when you call browser.page_source: it depends how fast the async JS happens to be working at that moment.
So, after opening the page, you might try waiting a few seconds before getting the source - giving the JS which loads the content time to complete.
import time

browser.get(googleURL)
time.sleep(3)
content = browser.page_source
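If you'd rather not guess at a fixed number of seconds, you can also poll until the dynamically inserted markup shows up. The sketch below waits for the id="moreLink" element quoted in the other answer, on the assumption that it is part of the JavaScript-generated content:

from selenium.webdriver.support.ui import WebDriverWait

browser.get(googleURL)
# poll for up to 10 seconds until the page's JavaScript has injected the "More..." link
WebDriverWait(browser, 10).until(
    lambda b: 'id="moreLink"' in b.page_source
)
content = browser.page_source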