I need to scrape the contents of an iframe using Python.
As the page loads, it submits a request and receives the iframe's content in the response. When I use BeautifulSoup to get the data, it only gives me the initial blank iframe contents.
Is there any way I can get the contents? If so, how do I do it in my case?
Here is my code:
import requests
from bs4 import BeautifulSoup
page = requests.get(
    "https://etherscan.io/token/0x168296bb09e24a88805cb9c33356536b980d3fc5#balances"
    "/token/generic-tokenholders2?a=0x168296bb09e24a88805cb9c33356536b980d3fc5&s=100000000000000000",
    headers={"content-type": "type"},
    timeout=5,
)
soup = BeautifulSoup(page.content, "html.parser")
balances = soup.find(id="balances")
print(balances.prettify())
table_items = balances.find_all('tr') #I'm trying to collect all the <tr> tags inside an <iframe>
print(table_items) #It shows an empty list because the <iframe> didn't load
You must switch to the iframe to get content from it: find the iframe in the page, then make a new request to its src, e.g.:
iframe_content = requests.get(soup.find("iframe")["src"])
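A fuller sketch of that approach for the Etherscan page (the browser-like User-Agent is an assumption to reduce the chance of being blocked, and this simply grabs the first iframe on the page):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = "https://etherscan.io/token/0x168296bb09e24a88805cb9c33356536b980d3fc5#balances"
headers = {"User-Agent": "Mozilla/5.0"}  # assumption: a browser-like UA; Etherscan may still block bots
page = requests.get(base, headers=headers, timeout=5)
soup = BeautifulSoup(page.content, "html.parser")

# Find the iframe and resolve its (possibly relative) src against the page URL
iframe = soup.find("iframe")
iframe_url = urljoin(base, iframe["src"])

# The iframe document is a separate page; request it directly
iframe_page = requests.get(iframe_url, headers=headers, timeout=5)
iframe_soup = BeautifulSoup(iframe_page.content, "html.parser")
print(iframe_soup.find_all("tr"))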
See this question.
This is the website I'm trying to scrape with Python:
https://www.ebay.de/sch/i.html?_from=R40&_nkw=iphone+8&_sacat=0&LH_Sold=1&LH_Complete=1&rt=nc&LH_ItemCondition=3000
I want to access the 'ul' element with the class of 'srp-results srp-list clearfix'. This is what I tried with requests and BeautifulSoup:
from bs4 import BeautifulSoup
import requests
url = 'https://www.ebay.de/sch/i.html?_from=R40&_nkw=iphone+8&_sacat=0&LH_Sold=1&LH_Complete=1&rt=nc&LH_ItemCondition=3000'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
uls = soup.find_all('ul', attrs={'class': 'srp-results srp-list clearfix'})
And the output is always an empty list.
I also tried scraping the website with Selenium Webdriver and I got the same result.
At first I was a little confused by your error, but after a bit of debugging I figured out that eBay generates that ul dynamically with JavaScript.
Since you can't execute JavaScript with BeautifulSoup, you have to use Selenium and wait until the JavaScript has loaded that ul.
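For example, a minimal sketch with an explicit wait (this assumes Chrome with chromedriver on your PATH; the shorter ul.srp-results selector is an assumption that this part of the class list is stable):

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = ('https://www.ebay.de/sch/i.html?_from=R40&_nkw=iphone+8&_sacat=0'
       '&LH_Sold=1&LH_Complete=1&rt=nc&LH_ItemCondition=3000')

driver = webdriver.Chrome()  # assumes chromedriver is installed and on PATH
try:
    driver.get(url)
    # Block for up to 10 seconds until the JavaScript-generated <ul> exists
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'ul.srp-results'))
    )
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    uls = soup.find_all('ul', attrs={'class': 'srp-results srp-list clearfix'})
    print(len(uls))
finally:
    driver.quit()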
It is probably because the content you are looking for is rendered by JavaScript after the page loads in a web browser; the browser fetches that content by running JavaScript, which you cannot reproduce with a plain requests.get call from Python.
I would suggest learning Selenium to scrape the data you want.
I am trying to scrape some information from a web page. On one page it works fine, but on another it does not, because I only get None as the return value.
This code / webpage is working fine:
# https://realpython.com/beautiful-soup-web-scraper-python/
import requests
from bs4 import BeautifulSoup
URL = "https://www.monster.at/jobs/suche/?q=Software-Devel&where=Graz"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
name_box = soup.find_all("div", attrs={"class": "company"})
print(name_box)
But with this code / webpage I only get None as the return value:
# https://www.freecodecamp.org/news/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe/
import requests
from bs4 import BeautifulSoup
URL = "https://www.bloomberg.com/quote/SPX:IND"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
name_box = soup.find("h1", attrs={"class": "companyName__99a4824b"})
print(name_box)
Why is that?
(At first I thought that, because of the number in the class on the second page, "companyName__99a4824b", the class name changes dynamically, but this is not the case: when I refresh the page it is still the same class name...)
The reason you get None is that the Bloomberg page uses JavaScript to load its content while the user is on the page.
BeautifulSoup simply returns the HTML of the page as it is first served, which does not contain the companyName__99a4824b class.
Only after the browser has run the page's scripts and fully loaded it does the HTML include the desired tag.
If you want to scrape that data, you'll need to use something like Selenium, which you can instruct to wait until the desired element of the page is ready.
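A minimal sketch of that (assumes Chrome with chromedriver available; matching on the companyName prefix is an assumption made so the hashed suffix can change without breaking the selector, and Bloomberg may additionally serve a bot-check page):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://www.bloomberg.com/quote/SPX:IND")
    # Wait until an <h1> whose class starts with "companyName" is in the DOM
    name = WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "h1[class^='companyName']"))
    )
    print(name.text)
finally:
    driver.quit()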
The website blocks scrapers; check the title:
print(soup.find("title"))
To bypass this you must use a real browser which can run JavaScript.
A tool called Selenium can do that for you.
I am trying to scrape the data from this link
I have tried it this way:
from bs4 import BeautifulSoup
import urllib.request
import csv
# specify the url
urlpage = 'https://www.ikh.se/sysNet/getProductsJSON/getProductsJSONDB.aspx?' \
'sua=1&lang=2&navid=19277994'
# query the website and return the html to the variable 'page'
page = urllib.request.urlopen(urlpage)
# parse the html using beautiful soup and store in variable 'soup'
soup = BeautifulSoup(page, 'html.parser')
tag = soup.find('div', attrs={'class':'dnsCell'})
text = ''.join(tag.stripped_strings)
print(soup)
I get the HTML DOM, but the product list part of the DOM is missing. I guess the product list is populated from a JSON array requested from this link, but I am not sure how it is loaded. Am I right or wrong?
I want to scrape all the product details from this site and export them to Excel.
The requests library does not execute JavaScript. If you want the completely rendered page, use the Selenium library: https://selenium-python.readthedocs.io/
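For example, a minimal sketch along those lines (assumes Firefox with geckodriver on your PATH, and guards against the div being absent):

from bs4 import BeautifulSoup
from selenium import webdriver

urlpage = ('https://www.ikh.se/sysNet/getProductsJSON/getProductsJSONDB.aspx?'
           'sua=1&lang=2&navid=19277994')

driver = webdriver.Firefox()  # assumes geckodriver is installed and on PATH
try:
    driver.get(urlpage)
    # page_source now contains the JavaScript-rendered DOM
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    tag = soup.find('div', attrs={'class': 'dnsCell'})
    if tag is not None:
        print(''.join(tag.stripped_strings))
finally:
    driver.quit()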
I am trying to scrape a website, but I encountered a problem: when I try to scrape the data, the HTML differs between what I see in Google Chrome's inspector and what I get in Python. The page is http://edition.cnn.com/election/results/states/arizona/house/01, and I am trying to scrape election results. I used the script below to check the HTML of the page, and I noticed they differ: the classes I need, like section-wrapper, are not there.
page =requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "lxml")
print(soup)
Does anyone know what the problem is?
http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json
This site uses JavaScript to fetch its data; you can check the URL above.
You can find this URL in Chrome's dev tools; there are many requests, so look through them:
Chrome >> F12 >> Network tab >> F5 (refresh the page) >> double-click the .json URL >> it opens in a new tab
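Once you have that URL you can skip the HTML entirely and request the JSON directly. A minimal sketch (the shape of the payload is not documented here, so inspect the parsed object before picking fields):

import requests

url = 'http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json'
data = requests.get(url, timeout=10).json()

# Explore the structure first; these prints are just a starting point
print(type(data))
print(list(data)[:10] if isinstance(data, dict) else data[:2])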
import requests
from bs4 import BeautifulSoup

page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "html.parser")

# You can try all sorts of tags here; I used class "ec-placeholder" and class "ad"
g_data = soup.find_all("div", {"class": "ec-placeholder"})
h_data = soup.find_all("div", {"class": "ad"})

for item in g_data:
    print(item)
# for item in h_data:
#     print(item)
I want to load the list of images on this page in Python. When I open the page in my browser (Chrome or Safari) and open the dev tools, the inspector lists the images as <img class="grid-item--image">....
However, when I parse the page in Python the result is different. Specifically, I get the images as <img class="carousel--image"...>, and soup.findAll("img", "grid-item--image") returns an empty list. I also tried saving the images via their srcset attribute, but most of the saved images are NOT the ones listed on the web page.
I think the page uses some kind of rendering technique. How can I parse such pages successfully?
I used BeautifulSoup 4 on Python 3.5. I loaded the page as follows:
import requests
from bs4 import BeautifulSoup

# Wrapped in a function so the return statement is valid
def load_page(url):
    # from_encoding only applies to bytes input, so pass .content rather than .text
    html = requests.get(url).content
    soup = BeautifulSoup(html, "html.parser", from_encoding="utf-8")
    return soup
You would do better to use something like Selenium for this, as follows:
from bs4 import BeautifulSoup
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("http://www.vogue.com/fashion-shows/fall-2016-menswear/fendi#collection")
html_source = browser.page_source
soup = BeautifulSoup(html_source, "html.parser")
for item in soup.find_all("img", {"class": "grid-item--image"}):
    print(item.get('srcset'))
This would display the following kind of output:
http://assets.vogue.com/photos/569d37e434324c316bd70f04/master/w_195/_FEN0016.jpg
http://assets.vogue.com/photos/569d37e5d928983d20a78e4f/master/w_195/_FEN0027.jpg
http://assets.vogue.com/photos/569d37e834324c316bd70f0a/master/w_195/_FEN0041.jpg
http://assets.vogue.com/photos/569d37e334324c316bd70efe/master/w_195/_FEN0049.jpg
http://assets.vogue.com/photos/569d37e702e08d8957a11e32/master/w_195/_FEN0059.jpg
...
...
...
http://assets.vogue.com/photos/569d3836486d6d3e20ae9625/master/w_195/_FEN0616.jpg
http://assets.vogue.com/photos/569d381834324c316bd70f3b/master/w_195/_FEN0634.jpg
http://assets.vogue.com/photos/569d3829fa6d6c9057f91d2a/master/w_195/_FEN0649.jpg
http://assets.vogue.com/photos/569d382234324c316bd70f41/master/w_195/_FEN0663.jpg
http://assets.vogue.com/photos/569d382b7dcd2a8a57748d05/master/w_195/_FEN0678.jpg
http://assets.vogue.com/photos/569d381334324c316bd70f2f/master/w_195/_FEN0690.jpg
http://assets.vogue.com/photos/569d382dd928983d20a78eb1/master/w_195/_FEN0846.jpg
This allows the full rendering of the page to take place inside the browser, and the resulting HTML can then be obtained.