Parsing HTML contained in an API call response - Python

I'm having some trouble figuring out how to parse HTML that's contained within the response of an API call in Python 3.7 (requests + BS4).
Say I want to parse out the article URLs from a response like this one.
I'm able to get the "rendering" entry of the response, which seemingly contains the HTML I'd like to parse. However, when I pass that text to Beautiful Soup's HTML parser, it doesn't work as expected (it can't locate HTML tags of any kind):
import requests
from bs4 import BeautifulSoup
url = """https://www.washingtonpost.com/pb/api/v2/render/feature/?service=prism-query&contentConfig={%22url%22:%22prism://prism.query/ap-articles-by-site-id,/world%22,%22offset%22:0,%22limit%22:5}&customFields={%22isLoadMore%22:false,%22offset%22:0,%22maxToShow%22:50,%22dedup%22:true}&id=f00boImX29Vv3s&rid=&uri=/world/"""
r = requests.get(url).json()
soup = BeautifulSoup(r['rendering'], 'html.parser')
links_html = soup.find_all("div", attrs={"class":"headline x-small normal-style text-align-inherit "})
links = []
for div in links_html:
    links.append(div.find('a', href=True)['href'])
Am I wrong in my assumption that the "rendering" entry in the response is raw HTML?

You want to use the json library (or, in hindsight, Response.json()), because the link you're visiting isn't actually a website but what seems to be an API on top of one, which gives you the HTML along with the encoding, content type, and some other fields that won't be necessary.
Here's how I did it.
>>> import requests
>>> from bs4 import BeautifulSoup
>>> r = requests.get("https://www.washingtonpost.com/pb/api/v2/render/feature/?service=prism-query&contentConfig=%7B%22url%22:%22prism://prism.query/ap-articles-by-site-id,/world%22,%22offset%22:0,%22limit%22:5%7D&customFields=%7B%22isLoadMore%22:false,%22offset%22:0,%22maxToShow%22:50,%22dedup%22:true%7D&id=f00boImX29Vv3s&rid=&uri=/world/")
>>> bs = BeautifulSoup(r.content, 'html.parser')
>>> first_div = bs.find("div", class_="moat-trackable")
>>> first_div
>>> import json
>>> html_dict = json.loads(r.content)
>>> html_dict
{'rendering': '<div class="moat-trackable ...'}
>>> html_dict.keys()
dict_keys(['rendering', 'encoding', 'contentType', 'pageResources', 'externalResources', 'httpHeaders'])
>>> bs = BeautifulSoup(html_dict["rendering"], 'html.parser')
>>> first_div = bs.find("div", class_="moat-trackable")
>>> first_div
<div class="moat-trackable
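For completeness, the same decode-then-parse flow can be sketched offline; everything below except the flow itself is a stand-in (the one-entry payload dict and the example.com link are invented, since the live response obviously differs):

```python
import json
from bs4 import BeautifulSoup

# Hypothetical stand-in for the API response body; in practice this would be
# requests.get(url).json() against the endpoint from the question.
payload = json.dumps({
    "rendering": '<div class="headline x-small normal-style text-align-inherit ">'
                 '<a href="https://example.com/story">Story</a></div>',
    "encoding": "utf-8",
})

data = json.loads(payload)
soup = BeautifulSoup(data["rendering"], "html.parser")

# Matching on a single class token avoids depending on the exact
# (trailing-space-containing) class attribute string.
links = [a["href"]
         for div in soup.find_all("div", class_="headline")
         for a in div.find_all("a", href=True)]
print(links)  # ['https://example.com/story']
```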

Related

Parsing a script tag with dicts in BeautifulSoup

Working on a partial answer to this question, I came across a bs4.element.Tag that is a mess of nested dicts and lists (s, below).
Is there a way to return a list of urls contained in s without using re.find_all? Other comments regarding the structure of this tag are helpful too.
from bs4 import BeautifulSoup
import requests
link = 'https://stackoverflow.com/jobs?med=site-ui&ref=jobs-tab&sort=p'
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
s = soup.find('script', type='application/ld+json')
## the first bit of s:
# s
# Out[116]:
# <script type="application/ld+json">
# {"@context":"http://schema.org","@type":"ItemList","numberOfItems":50,
What I've tried:
randomly perusing through methods with tab completion on s.
picking through the docs.
My problem is that s only has 1 attribute (type) and doesn't seem to have any child tags.
You can use s.text to get the content of the script. It's JSON, so you can then just parse it with json.loads. From there, it's simple dictionary access:
import json
from bs4 import BeautifulSoup
import requests
link = 'https://stackoverflow.com/jobs?med=site-ui&ref=jobs-tab&sort=p'
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
s = soup.find('script', type='application/ld+json')
urls = [el['url'] for el in json.loads(s.text)['itemListElement']]
print(urls)
More simply:
from bs4 import BeautifulSoup
import requests
link = 'https://stackoverflow.com/jobs?med=site-ui&ref=jobs-tab&sort=p'
r = requests.get(link)
soup = BeautifulSoup(r.text, 'html.parser')
s = soup.find('script', type='application/ld+json')
# JUST THIS
import json
data = json.loads(s.string)  # don't name this 'json', or it shadows the module
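Since both answers rely on the page's ld+json structure, here is an offline sketch with a hypothetical script tag standing in for the live jobs page (the example.com URLs are made up; the keys mirror the schema.org ItemList excerpt shown in the question):

```python
import json
from bs4 import BeautifulSoup

# Made-up ld+json payload in the same shape as the jobs page excerpt above.
html = '''
<script type="application/ld+json">
{"@context": "http://schema.org", "@type": "ItemList", "numberOfItems": 2,
 "itemListElement": [
   {"@type": "ListItem", "position": 1, "url": "https://example.com/jobs/1"},
   {"@type": "ListItem", "position": 2, "url": "https://example.com/jobs/2"}]}
</script>
'''

soup = BeautifulSoup(html, "html.parser")
s = soup.find("script", type="application/ld+json")
urls = [el["url"] for el in json.loads(s.string)["itemListElement"]]
print(urls)  # ['https://example.com/jobs/1', 'https://example.com/jobs/2']
```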

How to reach deeper divs inside a <span> tag using a Python crawler?

The body tag contains a <span> tag, and there are many other divs nested inside that span. I want to go deeper, but when I try this code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
url = 'https://www.instagram.com/artfido/'
data = urlopen(url)
soup = BeautifulSoup(data, 'html.parser')
result = soup.body.span
print (result)
the result was just this:
<span id="react-root"></span>
How can I reach the divs inside the span tag?
Can we parse the <span> tag at all? If so, why am I not able to parse it?
By using this:
result = soup.body.span.contents
The output was:
[]
As discussed in the comments, urlopen(url) returns a file-like object, which means you need to read from it to get at its contents.
from bs4 import BeautifulSoup
from urllib.request import urlopen
url = 'https://www.instagram.com/artfido/'
data = urlopen(url)
soup = BeautifulSoup(data.read(), 'html.parser')
result = soup.body.span
print (result)
The code I used on my Python 2.7 setup:
from bs4 import BeautifulSoup
import urllib2
url = 'https://www.instagram.com/artfido/'
data = urllib2.urlopen(url)
soup = BeautifulSoup(data.read(), 'lxml')
result = soup.body.span
print result
EDIT
For future reference, if you want something simpler for handling the URL, there is a package called requests. In this case it is similar, but I find it easier to understand.
from bs4 import BeautifulSoup
import requests
url = 'https://www.instagram.com/artfido/'
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, 'lxml')
result = soup.body.span
print result
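The file-like behaviour is easy to demonstrate without hitting Instagram at all; io.BytesIO stands in for the handle urlopen() returns, and the markup below is invented:

```python
import io
from bs4 import BeautifulSoup

# io.BytesIO mimics the file-like object returned by urlopen().
fake_response = io.BytesIO(b'<body><span id="react-root"><div>hi</div></span></body>')

soup = BeautifulSoup(fake_response.read(), 'html.parser')
print(soup.body.span.contents)  # [<div>hi</div>]

# The handle is exhausted after one read; a second read() yields b''.
print(fake_response.read())  # b''
```

(For the Instagram page specifically, the span may still come back empty even after read(), because its contents are filled in by JavaScript after page load.)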

Beautiful Soup not returning everything in HTML file?

HTML noob here, so bear with me; I could be misunderstanding something about the HTML document.
I'm using Beautiful Soup to parse web data in Python. Here is my code:
import urllib
import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup.BeautifulSoup(page)
indicateGameDone = str(soup.find("div", {"class": "nbaModTopStatus"}))
print indicateGameDone
Now, if you look at the website, the HTML code has the line <p class="nbaLiveStatTxSm"> FINAL </p> (inspect the 'Final' text on the left side of the container of the first ATL-WAS game on the page to see it for yourself). But when I run the code above, it doesn't return the 'FINAL' seen on the webpage; instead, the nbaLiveStatTxSm class is empty.
On my machine, this is the output when I print indicateGameDone:
<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div>
Does anyone know why this is happening?
EDIT: clarification: the problem isn't retrieving the text within the tag, the problem is that when I take the html code from the website and print it out in python, something that I saw when I inspected the element on the web is not there in the print statement in Python.
You can use this logic to extract the text; the same approach works for data between any pair of tags.
Output: FINAL
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url)
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaFnlStatTx"})
for p in indicateGameDone:
    p_text = soup.find("p", {"class": "nbaFnlStatTxSm"})
    print(p_text.getText())
    break
It looks like your problem is not with BeautifulSoup but instead with urllib.
Try running the following commands
>>> import urllib
>>> url = "http://www.nba.com/gameline/20160323/"
>>> page = urllib.urlopen(url).read()
>>> page.find('<div class="nbaModTopStatus">')
44230
Which is no surprise, considering that Beautiful Soup was able to find the div itself. However, when we look a little deeper into what urllib is actually collecting, we can see that <p class="nbaFnlStatTxSm"> is empty by running
>>> page[44230:45000]
'<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p><p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p><p class="nbaFnlStatTxSm"></p></div><div id="nbaGLBroadcast"><img src="/.element/img/3.0/sect/gameline/broadcasters/lp.png"></div><div class="nbaTeamsRow"><div class="nbaModTopTeamScr nbaModTopTeamAw"><h5 class="nbaModTopTeamName awayteam">ATL</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/ATL.gif" width="34" height="22" title="Atlanta Hawks"><h4 class="nbaModTopTeamNum win"></h4></div><div class="nbaModTopTeamScr nbaModTopTeamHm"><h5 class="nbaModTopTeamName hometeam">WAS</h5><img src="http://i.cdn.turner.com/nba/nba/.element/img/2.0/sect/gameline/teams/WAS.gif" width="34" '
You can see that the tag is empty, so your problem is the data that's being passed to Beautiful Soup, not the package itself.
I changed the BeautifulSoup import to the proper syntax for the current version of the package, corrected the way you were constructing the BeautifulSoup object, and fixed your find statement, then used .text to get the string representation of the text in the HTML you're after.
With those minor modifications, your code runs for me.
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("div", {"class": "nbaModTopStatus"})
print indicateGameDone.text ## "LiveFinal "
to address comments:
import urllib
from bs4 import BeautifulSoup
url = "http://www.nba.com/gameline/20160323/"
page = urllib.urlopen(url).read()
soup = BeautifulSoup(page)
indicateGameDone = soup.find("p", {"class": "nbaFnlStatTx"})
print indicateGameDone.text
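Both fixes boil down to finding the right tag and reading .text; the status markup quoted earlier in the question makes an easy offline check (Python 3 syntax here, unlike the Python 2 snippets above):

```python
from bs4 import BeautifulSoup

# The status div quoted earlier in the question, as a literal.
html = ('<div class="nbaModTopStatus"><p class="nbaLiveStatTx">Live</p>'
        '<p class="nbaLiveStatTxSm"></p><p class="nbaFnlStatTx">Final</p>'
        '<p class="nbaFnlStatTxSm"></p></div>')

soup = BeautifulSoup(html, 'html.parser')
status = soup.find("div", {"class": "nbaModTopStatus"}).text
final = soup.find("p", {"class": "nbaFnlStatTx"}).text
print(status)  # LiveFinal
print(final)   # Final
```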

BS4 get info from class with weird name

Got this weird html from the Steam Community market search:
<span class=\"normal_price\">$2.69 USD<\/span>
How to extract data with bs4? This is not working:
soup.find("span", attrs={"class": "\"normal_price\""})
You have HTML embedded in a JSON string, which must escape the quotes. Rather than manually extract that data, parse the JSON first:
import json
data = json.loads(json_data)
html = data['results_html']
If you are using the requests library, the response can be decoded for you:
response = requests.get('http://steamcommunity.com/market/search/render/?query=appid:730&start=0&count=3&currency=3&l=english&cc=pt')
html = response.json()['results_html']
after which you can parse this with BeautifulSoup just fine:
>>> import requests
>>> from bs4 import BeautifulSoup
>>> html = requests.get('http://steamcommunity.com/market/search/render/?query=appid:730&start=0&count=3&currency=3&l=english&cc=pt').json()['results_html']
>>> BeautifulSoup(html, 'lxml').find('span', class_='normal_price').span
<span class="normal_price">$2.69 USD</span>
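The escaping itself is easy to reproduce offline; the one-entry JSON below is a stand-in for the render endpoint's real body, which carries more keys:

```python
import json
from bs4 import BeautifulSoup

# Stand-in for the endpoint's JSON body; the quotes inside the HTML are
# escaped because the HTML travels inside a JSON string.
json_data = '{"results_html": "<span class=\\"normal_price\\">$2.69 USD</span>"}'

html = json.loads(json_data)["results_html"]
span = BeautifulSoup(html, 'html.parser').find('span', class_='normal_price')
print(span.text)  # $2.69 USD
```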

Beautifulsoup lost nodes

I am using Python and Beautifulsoup to parse HTML-Data and get p-tags out of RSS-Feeds. However, some urls cause problems because the parsed soup-object does not include all nodes of the document.
For example I tried to parse http://feeds.chicagotribune.com/~r/ChicagoBreakingNews/~3/T2Zg3dk4L88/story01.htm
But after comparing the parsed object with the pages source code, I noticed that all nodes after ul class="nextgen-left" are missing.
Here is how I parse the Documents:
import cookielib
import urllib2
from bs4 import BeautifulSoup as bs

url = 'http://feeds.chicagotribune.com/~r/ChicagoBreakingNews/~3/T2Zg3dk4L88/story01.htm'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
request = urllib2.Request(url)
response = opener.open(request)
soup = bs(response,'lxml')
print soup
The input HTML is not quite conformant, so you'll have to use a different parser here. The html5lib parser handles this page correctly:
>>> import requests
>>> from bs4 import BeautifulSoup
>>> r = requests.get('http://feeds.chicagotribune.com/~r/ChicagoBreakingNews/~3/T2Zg3dk4L88/story01.htm')
>>> soup = BeautifulSoup(r.text, 'lxml')
>>> soup.find('div', id='story-body') is not None
False
>>> soup = BeautifulSoup(r.text, 'html5lib')
>>> soup.find('div', id='story-body') is not None
True
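Since html5lib and lxml are optional installs, a defensive version of this advice is to try parsers from most to least lenient and fall back to the stdlib one; the broken snippet below is invented for illustration:

```python
from bs4 import BeautifulSoup, FeatureNotFound

broken = '<div id="story-body"><p>unterminated paragraph</div>'  # invented malformed HTML

# Try the most lenient parsers first; FeatureNotFound is raised when the
# requested parser library is not installed.
soup = None
for parser in ('html5lib', 'lxml', 'html.parser'):
    try:
        soup = BeautifulSoup(broken, parser)
        break
    except FeatureNotFound:
        continue

print(soup.find('div', id='story-body') is not None)  # True
```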
