Can't get item with python beautifulsoup - python

I'm trying to learn how to web scrape with BeautifulSoup + Python, and I want to grab the name of the cinematographer from https://letterboxd.com/film/donnie-darko/ but I can't figure out how to isolate the text. The HTML for what I want is below; what I want to output is "Steven Poster":
<h3><span>Cinematography</span></h3>
<div class="text-sluglist">
<p>
Steven Poster
</p>
</div>
Within my code I've done soup.find(text="Cinematography"), and a mixture of different things like trying to find item or get_text from within the a and p tags, but ...

I would use a regex to parse the soup object for a link that contains "cinematography".
import re
import requests
from bs4 import BeautifulSoup
r = requests.get('https://letterboxd.com/film/donnie-darko/')
soup = BeautifulSoup(r.text, 'lxml')
cinematographer = soup(href=re.compile(r'/cinematography/'))[0].text
print(cinematographer)
# outputs "Steven Poster"

You can do the same without using regex as well:
import requests
from bs4 import BeautifulSoup
res = requests.get('https://letterboxd.com/film/donnie-darko/')
soup = BeautifulSoup(res.text,'lxml')
item = soup.select("[href*='cinematography']")[0].text
print(item)
Output:
Steven Poster

Use a CSS attribute-substring selector (note that find() does not accept CSS selectors, so use select_one()):
soup.select_one('a[href*="cinematography"]').text
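For completeness, a self-contained sketch of that selector approach (a sketch only, assuming the film page still links the cinematographer via an href containing "cinematography"):
import requests
from bs4 import BeautifulSoup

r = requests.get('https://letterboxd.com/film/donnie-darko/')
soup = BeautifulSoup(r.text, 'html.parser')

# [href*="cinematography"] matches any <a> whose href contains that substring;
# select_one returns the first match, or None if nothing matches
link = soup.select_one('a[href*="cinematography"]')
if link is not None:
    print(link.get_text(strip=True))  # expected: Steven Poster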

Related

Python beautifulsoup search issue

I'm having issues getting bs to find this text. I think it's because the text on the page has extra quotes around it. I was told it's because the class is actually blank. If that's the case, then any suggestions on how I can build my search?
Actual text on website: <span class="" data-product-price="">
My code (I've tried several variations): soup.find_all('span',{'class' : '" data-product-price="'})
I've also tried just doing a regular search, but I'm not doing that correctly. Any suggestions or should I use something other than bs?
Edited to include full code:
import bs4
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.gouletpens.com/products/twsbi-diamond-580-fountain-pen-clear?variant=11884892028971')
soup = bs4.BeautifulSoup(r.text, features="html.parser")
print(soup)
#soup.find_all('span',{'class' : '" data-product-price="'})
#soup.find_all('span',{'class' : 'data-product-price'})[0].text
After looking at the URL, you can select the price with a CSS selector:
import requests
from bs4 import BeautifulSoup
url = 'https://www.gouletpens.com/products/twsbi-diamond-580-fountain-pen-clear?variant=11884892028971'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
print(soup.select_one('span[data-product-price]').get_text(strip=True))
Prints:
$50.00
Or, with the bs4 API, set {'data-product-price': True} to search for tags that have this attribute regardless of its value:
print(soup.find('span', {'data-product-price':True}).get_text(strip=True))
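To see why the original class-based searches returned nothing: class="" is empty, so there is no class name to match, and data-product-price is an attribute in its own right, not a class. A minimal sketch with toy markup (the price text here is made up for illustration):
from bs4 import BeautifulSoup

# toy markup mirroring the question: empty class, separate data-product-price attribute
html = '<span class="" data-product-price="">$50.00</span>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.find_all('span', {'class': 'data-product-price'}))      # [] - it is not a class name
print(soup.find('span', {'data-product-price': True}).get_text())  # $50.00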

Webscraping merriam-webster using beautifulsoup

I am using BeautifulSoup and trying to scrape only the first definition ("very cold") of a word from Merriam-Webster, but it scrapes the second line (a sentence) as well. This is my code.
P.S.: I want only the "very cold" part; "put on your jacket...." should not be included in the output. Please, someone help.
import requests
from bs4 import BeautifulSoup
url = "https://www.merriam-webster.com/dictionary/freezing"
r = requests.get(url)
soup = BeautifulSoup(r.content,"lxml")
definition = soup.find("span", {"class" : "dt"})
tag = definition.findChild()
print(tag.text)
Selecting by class is the second-fastest method of CSS selector matching. Using select_one returns only the first match, and next_sibling takes you to the node you want:
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://www.merriam-webster.com/dictionary/freezing')
soup = bs(r.content, 'lxml')
print(soup.select_one('.mw_t_bc').next_sibling.strip())
The way that Merriam-Webster structures their page is a little strange, but you can find the <strong> tag that precedes the definition, grab the next sibling and strip out all whitespace like this:
>>> tag.find('strong').next_sibling.strip()
u'very cold'
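Pieced together, a self-contained version of that idea might look like the sketch below (assuming the page still wraps the definition in a span with class dt and marks the sense with a <strong>, as the question and answers above describe):
import requests
from bs4 import BeautifulSoup

url = "https://www.merriam-webster.com/dictionary/freezing"
soup = BeautifulSoup(requests.get(url).content, "lxml")

# the definition block is the span with class "dt"; the bold ":" marker is a
# <strong> inside it, and the definition text is the plain node right after it
definition = soup.find("span", {"class": "dt"})
if definition is not None:
    marker = definition.find("strong")
    if marker is not None and marker.next_sibling is not None:
        print(marker.next_sibling.strip())  # expected: very cold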

BeautifulSoup and urllib to find data from website

Background
I am trying to understand the process by which specific data can be extracted from a website using the beautifulsoup4 and urllib libraries.
How would I get the specific price of a DVD from a website, if:
The div class is <div class="productPrice" data-component="productPrice">
The p class is <p class="productPrice_price" data-product-price="price">£9.99 </p>
Code so far:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("https://www.zavvi.com/dvd/rampage-includes-digital-download/11729469.html")
bsObj = BeautifulSoup(html.read(), features='html.parser')
all_divs = bsObj.find_all('div', {'class':'productPrice'}) # 1. get all divs
What is the remaining process of finding the price?
Website (https://www.zavvi.com/dvd/rampage-includes-digital-download/11729469.html)
You're almost there, just one more step. You just need to loop through the elements, find the <p> tag with class="productPrice_price", and grab the text:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html = urlopen("https://www.zavvi.com/dvd/rampage-includes-digital-download/11729469.html")
bsObj = BeautifulSoup(html.read(), features='html.parser')
all_divs = bsObj.find_all('div', {'class':'productPrice'}) # 1. get all divs
for ele in all_divs:
    price = ele.find('p', {'class':'productPrice_price'}).text
    print(price)
Output:
£9.99
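If you only need the first price, a CSS selector gets you there in one step (a sketch, assuming the same class names as above):
from bs4 import BeautifulSoup
from urllib.request import urlopen

url = "https://www.zavvi.com/dvd/rampage-includes-digital-download/11729469.html"
bsObj = BeautifulSoup(urlopen(url).read(), features='html.parser')

# select_one returns the first element matching the CSS selector, or None
price_tag = bsObj.select_one('div.productPrice p.productPrice_price')
if price_tag is not None:
    print(price_tag.get_text(strip=True))  # e.g. £9.99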

Using Beautiful Soup to find specific class

I am trying to use Beautiful Soup to scrape housing price data from Zillow.
I get the web page by property id, e.g. http://www.zillow.com/homes/for_sale/18429834_zpid/
When I try the find_all() function, I do not get any results:
results = soup.find_all('div', attrs={"class":"home-summary-row"})
However, if I take the HTML and cut it down to just the bits I want, e.g.:
<html>
<body>
<div class=" status-icon-row for-sale-row home-summary-row">
</div>
<div class=" home-summary-row">
<span class=""> $1,342,144 </span>
</div>
</body>
</html>
I get 2 results, both <div>s with the class home-summary-row. So, my question is, why do I not get any results when searching the full page?
Working example:
from bs4 import BeautifulSoup
import requests
zpid = "18429834"
url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
response = requests.get(url)
html = response.content
#html = '<html><body><div class=" status-icon-row for-sale-row home-summary-row"></div><div class=" home-summary-row"><span class=""> $1,342,144 </span></div></body></html>'
soup = BeautifulSoup(html, "html5lib")
results = soup.find_all('div', attrs={"class":"home-summary-row"})
print(results)
Your HTML is not well-formed, and in cases like this, choosing the right parser is crucial. In BeautifulSoup, there are currently three available HTML parsers, which work and handle broken HTML differently:
html.parser (built-in, no additional modules needed)
lxml (the fastest, requires lxml to be installed)
html5lib (the most lenient, requires html5lib to be installed)
The Differences between parsers documentation page describes the differences in more detail. In your case, to demonstrate the difference:
>>> from bs4 import BeautifulSoup
>>> import requests
>>>
>>> zpid = "18429834"
>>> url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
>>> response = requests.get(url)
>>> html = response.content
>>>
>>> len(BeautifulSoup(html, "html5lib").find_all('div', attrs={"class":"home-summary-row"}))
0
>>> len(BeautifulSoup(html, "html.parser").find_all('div', attrs={"class":"home-summary-row"}))
3
>>> len(BeautifulSoup(html, "lxml").find_all('div', attrs={"class":"home-summary-row"}))
3
As you can see, in your case, both html.parser and lxml do the job, but html5lib does not.
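As an aside: if you want code that keeps working on machines where a third-party parser is not installed, one option is to fall back to the built-in parser; a small sketch:
from bs4 import BeautifulSoup

def make_soup(html):
    # prefer lxml for speed; fall back to the built-in parser if lxml is missing
    try:
        return BeautifulSoup(html, "lxml")
    except Exception:
        return BeautifulSoup(html, "html.parser")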
import requests
from bs4 import BeautifulSoup
zpid = "18429834"
url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
g_data = soup.find_all("div", {"class": "home-summary-row"})
print(g_data[1].text)
#for item in g_data:
#    print(item("span")[0].text)
#    print('\n')
I got this working too, but it looks like someone beat me to it. Going to post anyway.
According to the W3.org Validator, there are a number of issues with the HTML such as stray closing tags and tags split across multiple lines. For example:
<a
href="http://www.zillow.com/danville-ca-94526/sold/" title="Recent home sales" class="" data-za-action="Recent Home Sales" >
This kind of markup can make it much more difficult for BeautifulSoup to parse the HTML.
You may want to try running something to clean up the HTML, such as removing the line breaks and trailing spaces from the end of each line. BeautifulSoup can also clean up the HTML tree for you:
from bs4 import BeautifulSoup
tree = BeautifulSoup(bad_html, 'html.parser')
good_html = tree.prettify()

Beautiful Soup not returning everything in HTML file?

HTML noob here, so I could be misunderstanding something about the HTML document; bear with me.
I'm using Beautiful Soup to parse web data in Python. Here is my code:
import urllib
import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup.BeautifulSoup(page)
indicateGameDone = str(soup.find("div", {"class": "nbaModTopStatus"}))
print indicateGameDone
Now, if you look at the website, the HTML code has the line <p class="nbaLiveStatTxSm"> FINAL </p> (inspect the 'Final' text on the left side of the container of the first ATL-WAS game on the page to see it for yourself). But when I run the code above, my code doesn't return the 'FINAL' that is seen on the webpage, and instead the nbaLiveStatTxSm class is empty.
On my machine, this is the output when I print indicateGameDone:
<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div>
Does anyone know why this is happening?
EDIT: clarification: the problem isn't retrieving the text within the tag, the problem is that when I take the html code from the website and print it out in python, something that I saw when I inspected the element on the web is not there in the print statement in Python.
You can use this logic to extract the text; the same approach lets you extract the data between any pair of tags.
Output: FINAL
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url)
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaFnlStatTx"})
for p in indicateGameDone:
    p_text = soup.find("p", {"class": "nbaFnlStatTxSm"})
    print(p_text.getText())
    break
It looks like your problem is not with BeautifulSoup but instead with urllib.
Try running the following commands
>>> import urllib
>>> url = "http://www.nba.com/gameline/20160323/"
>>> page = urllib.urlopen(url).read()
>>> page.find('<div class="nbaModTopStatus">')
44230
This is no surprise, considering that Beautiful Soup was able to find the div itself. However, when we look a little deeper into what urllib is actually collecting, we can see that the <p class="nbaFnlStatTxSm"> is empty by running
>>> page[44230:45000]
'<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div><div id="nbaGLBroadcast"><img src="/.element/img/3.0/sect/gameline/broadcasters/lp.png"></div><div class="nbaTeamsRow"><div class="nbaModTopTeamScr nbaModTopTeamAw"><h5 class="nbaModTopTeamName awayteam">ATL</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/ATL.gif" width="34" height="22" title="Atlanta Hawks"><h4 class="nbaModTopTeamNum win"></h4></div><div class="nbaModTopTeamScr nbaModTopTeamHm"><h5 class="nbaModTopTeamName hometeam">WAS</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/WAS.gif" width="34" '
You can see that the tag is empty, so your problem is the data that's being passed to Beautiful Soup, not the package itself.
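A quick way to check whether text you see in the browser actually exists in the served HTML is a plain substring test on the raw response (a sketch in the same Python 2 style as above; 'FINAL' is the string from the question):
import urllib

url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()

# if the string never appears in the raw response, it is being added client-side
# (e.g. by JavaScript), so BeautifulSoup will never see it in this page source
print('FINAL' in page)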
With some minor modifications to your code, as listed below, it runs for me:
Changed the import of BeautifulSoup to the proper syntax for the current version of BeautifulSoup (bs4).
Corrected the way you were constructing the BeautifulSoup object.
Fixed your find statement, then used the .text attribute to get the string representation of the text in the HTML you're after.
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaModTopStatus"})
print indicateGameDone.text ## "LiveFinal "
to address comments:
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("p", {"class": "nbaFnlStatTx"})
print indicateGameDone.text
