I am trying to learn how to use Beautiful Soup and I have a problem when scraping a table from Wikipedia.
from bs4 import BeautifulSoup
import urllib2
wiki = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
page = urllib2.urlopen(wiki)
soup = BeautifulSoup(page, 'lxml')
print soup
It seems like I can't get the full Wikipedia table; the last entry I get with this code is Omnicom Group, and it stops before the closing </tr> in the source code. If you check the original link, the last entry of the table is Zoetis, so it stops about halfway through.
Everything seems ok in the Wikipedia source code... Any idea of what I might be doing wrong?
Try this. For more detail, read the docs: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
from bs4 import BeautifulSoup
from urllib.request import urlopen
wiki = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
page = urlopen(wiki)
soup = BeautifulSoup(page, 'lxml')
result = soup.find("table", class_="wikitable")
print(result)
This should be the last <tr> in your result:
<tr>
<td><a class="external text" href="https://www.nyse.com/quote/XNYS:ZTS" rel="nofollow">ZTS</a></td>
<td>Zoetis</td>
<td><a class="external text" href="http://www.sec.gov/cgi-bin/browse-edgar?CIK=ZTS&action=getcompany" rel="nofollow">reports</a></td>
<td>Health Care</td>
<td>Pharmaceuticals</td>
<td>Florham Park, New Jersey</td>
<td>2013-06-21</td>
<td>0001555280</td>
</tr>
You will also need to install lxml with pip install lxml, since the code passes 'lxml' as the parser, and I used
python==3.4.3
beautifulsoup4==4.4.1
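Once you have the table, you will usually want the cell text rather than raw HTML. Here is a minimal sketch of that step, run against an inline snippet rather than the live page (the markup below is a simplified stand-in; the real wikitable has more columns):

```python
from bs4 import BeautifulSoup

# Inline sample standing in for the downloaded page (assumption:
# the real markup differs from this simplified snippet).
html = """
<table class="wikitable">
  <tr><th>Symbol</th><th>Security</th></tr>
  <tr><td>MMM</td><td>3M</td></tr>
  <tr><td>ZTS</td><td>Zoetis</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", class_="wikitable")

# One list per row; the header row comes out via <th>, data rows via <td>.
rows = [
    [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    for tr in table.find_all("tr")
]
print(rows)  # [['Symbol', 'Security'], ['MMM', '3M'], ['ZTS', 'Zoetis']]
```

The same list comprehension works unchanged on the table returned by the answer above.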
This is my working answer. It should work for you without even installing lxml.
I used Python 2.7
from bs4 import BeautifulSoup
import urllib2
wiki = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
page = urllib2.urlopen(wiki)
soup = BeautifulSoup(page, "html.parser")
print soup.table
Related
I am trying to parse this page "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1", but I can't find the href that I need (href="/title/tt0068112/episodes?ref_=tt_eps_sm").
I tried with this code:
url="https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
page = requests.get(url)
soup=BeautifulSoup(page.content,"html.parser")
for a in soup.find_all('a'):
    print(a['href'])
What's wrong with this? I also tried to check "manually" with print(soup.prettify()), but it seems that the link is hidden or something like that.
You can get the page HTML with requests; the href item is in there, no need for special APIs. I tried this and it worked:
import requests
from bs4 import BeautifulSoup
page = requests.get("https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1")
soup = BeautifulSoup(page.content, "html.parser")
scooby_link = ""
for item in soup.findAll("a", href="/title/tt0068112/episodes?ref_=tt_eps_sm"):
    print(item["href"])
    scooby_link = "https://www.imdb.com" + "/title/tt0068112/episodes?ref_=tt_eps_sm"
print(scooby_link)
I'm assuming you also wanted to save the link to a variable for further scraping so I did that as well. 🙂
To get the link containing Episodes, you can use the following example:
import requests
from bs4 import BeautifulSoup
url = "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
print(soup.select_one("a:-soup-contains(Episodes)")["href"])
Prints:
/title/tt0068112/episodes?ref_=tt_eps_sm
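Note that the :-soup-contains pseudo-class needs a reasonably recent soupsieve. If that's unavailable, a similar result is possible by matching the link text with the string argument to find. A sketch on an inline snippet (a simplified stand-in for the IMDb page):

```python
from bs4 import BeautifulSoup

# Inline stand-in for the page (assumption: simplified markup).
html = '<a href="/title/tt0068112/episodes?ref_=tt_eps_sm">Episodes</a>'
soup = BeautifulSoup(html, "html.parser")

# string= matches the tag's exact text content.
link = soup.find("a", string="Episodes")
print(link["href"])  # /title/tt0068112/episodes?ref_=tt_eps_sm
```

One caveat: string= requires an exact text match, whereas :-soup-contains does substring matching, so the two are not fully equivalent.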
I'm having some serious issues trying to extract the titles from a webpage. I've done this before on some other sites but this one seems to be an issue because of the Javascript.
The test link is "https://www.thomasnet.com/products/adhesives-393009-1.html"
The first title I want extracted is "Toagosei America, Inc."
Here is my code:
import requests
from bs4 import BeautifulSoup
url = ("https://www.thomasnet.com/products/adhesives-393009-1.html")
r = requests.get(url).content
soup = BeautifulSoup(r, "html.parser")
print(soup.get_text())
Now, if I run it like this, with get_text, I can find the titles in the result; however, as soon as I change it to find_all or find, the titles are lost. I can't find them using the web browser's inspect tool, because it's all JS-generated.
Any advice would be greatly appreciated.
You have to specify what to find, in this case <h2>, to get the first title:
import requests
from bs4 import BeautifulSoup
url = 'https://www.thomasnet.com/products/adhesives-393009-1.html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
first_title = soup.find('h2')
print(first_title.text)
Prints:
Toagosei America, Inc.
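If you want every title on the page rather than just the first, the same idea extends to find_all. A sketch against an inline snippet (assumption: each title sits in its own <h2>, as on the page above; the second company name is made up for illustration):

```python
from bs4 import BeautifulSoup

# Inline stand-in for the page; "Another Supplier Co." is hypothetical.
html = """
<h2>Toagosei America, Inc.</h2>
<h2>Another Supplier Co.</h2>
"""
soup = BeautifulSoup(html, "html.parser")

# Collect the stripped text of every <h2> on the page.
titles = [h2.get_text(strip=True) for h2 in soup.find_all("h2")]
print(titles)  # ['Toagosei America, Inc.', 'Another Supplier Co.']
```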
I am trying to scrape 2 tables from a webpage simultaneously.
BeautifulSoup finds the first table no problem, but no matter what I try, it cannot find the second table. Here is the webpage: Hockey Reference: Justin Abdelkader.
It is the table underneath the Playoffs header.
Here is my code.
sauce = urllib.request.urlopen('https://www.hockey-reference.com/players/a/abdelju01/gamelog/2014', timeout=None).read()
soup = bs.BeautifulSoup(sauce, 'html5lib')
table = soup.find_all('table')
print(len(table))
Which always prints 1.
If I print(soup) and use the search function in my terminal, I can locate 2 separate table tags. I don't see any JavaScript that would be hindering BS4 from finding the tag. I have also tried finding the table by id and class; even the parent div of the table seems to be unfindable. Does anyone have any idea what I could be doing wrong?
That's because JavaScript loads the additional information.
These days requests_html can render the JavaScript content along with the HTML page.
pip install requests-html
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://www.hockey-reference.com/players/a/abdelju01/gamelog/2014')
r.html.render()
res = r.html.find('table')
print(len(res))
4
The second table seems to be inside an HTML comment tag (<!-- ... <table class=...). I guess that's why BeautifulSoup doesn't find it.
Looks like that table is a widget — click "Share & more" -> "Embed this Table", you'll get a script with link:
https://widgets.sports-reference.com/wg.fcgi?css=1&site=hr&url=%2Fplayers%2Fa%2Fabdelju01%2Fgamelog%2F2014&div=div_gamelog_playoffs
How can we parse it?
import requests
import bs4
url = 'https://widgets.sports-reference.com/wg.fcgi?css=1&site=hr&url=%2Fplayers%2Fa%2Fabdelju01%2Fgamelog%2F2014&div=div_gamelog_playoffs'
widget = requests.get(url).text
# lstrip/rstrip treat their argument as a set of characters, not a prefix/suffix;
# removeprefix/removesuffix (Python 3.9+) do what was intended here.
fixed = '\n'.join(s.removeprefix("document.write('").removesuffix("');") for s in widget.splitlines())
soup = bs4.BeautifulSoup(fixed, 'html.parser')
soup.find('td', {'data-stat': "date_game"}).text  # => '2014-04-18'
Voila!
You can reach the comment lines with bs4's Comment, like:
from bs4 import BeautifulSoup , Comment
from urllib import urlopen
search_url = 'https://www.hockey-reference.com/players/a/abdelju01/gamelog/2014'
page = urlopen(search_url)
soup = BeautifulSoup(page, "html.parser")
table = soup.findAll('table') ## html part with no comment
table_with_comment = soup.findAll(text=lambda text:isinstance(text, Comment))
[comment.extract() for comment in table_with_comment]
## print table_with_comment print all comment line
start = '<table class'
for comment in table_with_comment:
    if start in comment:
        print comment  ## print the comment lines that contain <table class
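The same comment-extraction idea can be sketched in Python 3, shown here on an inline snippet so the moving parts are visible (assumption: the real page wraps the playoffs table in an HTML comment, as described above, and the date below is taken from the earlier answer):

```python
from bs4 import BeautifulSoup, Comment

# Inline stand-in: a table hidden inside an HTML comment, like the
# playoffs table on the real page.
html = """
<div>
  <!-- <table class="playoffs"><tr><td>2014-04-18</td></tr></table> -->
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# Find every comment node, then re-parse any that contain a table.
cells = []
for comment in soup.find_all(string=lambda t: isinstance(t, Comment)):
    if "<table" in comment:
        inner = BeautifulSoup(comment, "html.parser")
        cells.extend(td.text for td in inner.find_all("td"))
print(cells)  # ['2014-04-18']
```

Re-parsing the comment's text gives you a normal soup, so find/find_all work on the hidden table exactly as they do on the visible one.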
I want to scrape information from this page.
Specifically, I want to scrape the table which appears when you click "View all" under the "TOP 10 HOLDINGS" (you have to scroll down on the page a bit).
I am new to web scraping and have tried using BeautifulSoup to do this. However, there seems to be an issue because of the "onclick" function I need to take into account. In other words: the HTML code I scrape directly from the page doesn't include the table I want to obtain.
I am a bit confused about my next step: should I use something like Selenium, or can I deal with the issue in an easier/more efficient way?
Thanks.
My current code:
from bs4 import BeautifulSoup
import requests
Soup = BeautifulSoup
my_url = 'http://www.etf.com/SHE'
page = requests.get(my_url)
htmltxt = page.text
soup = Soup(htmltxt, "html.parser")
print(soup)
You can get a JSON response from the API: http://www.etf.com/view_all/holdings/SHE. The table you're looking for is located in 'view_all'.
import requests
from bs4 import BeautifulSoup as Soup
url = 'http://www.etf.com/SHE'
api = "http://www.etf.com/view_all/holdings/SHE"
headers = {'X-Requested-With':'XMLHttpRequest', 'Referer':url}
page = requests.get(api, headers=headers)
htmltxt = page.json()['view_all']
soup = Soup(htmltxt, "html.parser")
data = [[td.text for td in tr.find_all('td')] for tr in soup.find_all('tr')]
print('\n'.join(': '.join(row) for row in data))
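The row-extraction step can be exercised offline by mimicking the response shape. This sketch assumes the API returns JSON with an HTML fragment under 'view_all', as the answer above describes; the holding name and weight are made-up sample values:

```python
from bs4 import BeautifulSoup

# Inline stand-in for page.json() (assumption: the real payload has the
# same shape; these cell values are hypothetical).
payload = {"view_all": "<table><tr><td>Apple Inc.</td><td>2.1%</td></tr></table>"}

soup = BeautifulSoup(payload["view_all"], "html.parser")

# One list of cell texts per table row, as in the answer above.
data = [[td.text for td in tr.find_all("td")] for tr in soup.find_all("tr")]
print(data)  # [['Apple Inc.', '2.1%']]
```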
HTML noob here; I could be misunderstanding something about the HTML document, so bear with me.
I'm using Beautiful Soup to parse web data in Python. Here is my code:
import urllib
import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup.BeautifulSoup(page)
indicateGameDone = str(soup.find("div", {"class": "nbaModTopStatus"}))
print indicateGameDone
Now, if you look at the website, the HTML code has the line <p class="nbaLiveStatTxSm"> FINAL </p> (inspect the 'Final' text on the left side of the container of the first ATL-WAS game on the page to see it for yourself). But when I run the code above, it doesn't return the 'FINAL' seen on the webpage; instead, the nbaLiveStatTxSm class is empty.
On my machine, this is the output when I print indicateGameDone:
<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div>
Does anyone know why this is happening?
EDIT: clarification: the problem isn't retrieving the text within the tag; the problem is that when I take the HTML code from the website and print it out in Python, something that I saw when I inspected the element on the web is not there in the print statement.
You can use this logic to extract any text.
This code allows you to extract any data between any tags.
Output - FINAL
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url)
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaFnlStatTx"})
for p in indicateGameDone:
    p_text = soup.find("p", {"class": "nbaFnlStatTxSm"})
    print(p_text.getText())
    break
It looks like your problem is not with BeautifulSoup but instead with urllib.
Try running the following commands
>>> import urllib
>>> url = "http://www.nba.com/gameline/20160323/"
>>> page = urllib.urlopen(url).read()
>>> page.find('<div class="nbaModTopStatus">')
44230
Which is no surprise, considering that Beautiful Soup was able to find the div itself. However, when we look a little deeper into what urllib is actually collecting, we can see that the <p class="nbaFnlStatTxSm"> is empty by running
>>> page[44230:45000]
'<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div><div id="nbaGLBroadcast"><img src="/.element/img/3.0/sect/gameline/broadcasters/lp.png"></div><div class="nbaTeamsRow"><div class="nbaModTopTeamScr nbaModTopTeamAw"><h5 class="nbaModTopTeamName awayteam">ATL</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/ATL.gif" width="34" height="22" title="Atlanta Hawks"><h4 class="nbaModTopTeamNum win"></h4></div><div class="nbaModTopTeamScr nbaModTopTeamHm"><h5 class="nbaModTopTeamName hometeam">WAS</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/WAS.gif" width="34" '
You can see that the tag is empty, so your problem is the data that's being passed to Beautiful Soup, not the package itself.
- changed the import of BeautifulSoup to the proper syntax for the current version of BeautifulSoup
- corrected the way you were constructing the BeautifulSoup object
- fixed your find statement, then used the .text attribute to get the string representation of the text in the HTML you're after
With some minor modifications to your code as listed above, your code runs for me.
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaModTopStatus"})
print indicateGameDone.text ## "LiveFinal "
To address the comments:
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("p", {"class": "nbaFnlStatTx"})
print indicateGameDone.text