Python BS4 with SDMX

I would like to retrieve data given in an SDMX file (like https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&its_fileFormat=sdmx&mode=its). I tried to use BeautifulSoup, but it seems it does not see the tags. Here is the code:
import urllib2
from bs4 import BeautifulSoup
url = "https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&its_fileFormat=sdmx"
html_source = urllib2.urlopen(url).read()
soup = BeautifulSoup(html_source, 'lxml')
ts_series = soup.findAll("bbk:Series")
which gives me an empty object.
Is BS4 the wrong tool, or (more likely) what am I doing wrong?
Thanks in advance

soup.findAll("bbk:series") would return the result.
In fact, in this case, even if you use lxml as the parser, BeautifulSoup still parses the document as HTML. Since HTML tags are case-insensitive, BeautifulSoup downcases all the tags, which is why soup.findAll("bbk:series") works. See "Other parser problems" in the official documentation.
If you want to parse it as XML, use soup = BeautifulSoup(html_source, 'xml') instead. This also uses lxml under the hood, since lxml is the only XML parser BeautifulSoup supports. Now you can use ts_series = soup.findAll("Series") to get the result, as BeautifulSoup strips the namespace prefix bbk.
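Putting it all together, a minimal sketch using Python 3's urllib.request in place of the question's urllib2 (the Obs elements and their attributes are an assumption about the usual SDMX layout, not verified against this file):
import urllib.request
from bs4 import BeautifulSoup
url = "https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&its_fileFormat=sdmx"
xml_source = urllib.request.urlopen(url).read()
# 'xml' selects lxml's XML parser: tag names stay case-sensitive
# and the bbk: namespace prefix is stripped
soup = BeautifulSoup(xml_source, 'xml')
for series in soup.findAll("Series"):
    for obs in series.findAll("Obs"):
        print(obs.attrs)  # assumed to hold e.g. TIME_PERIOD and OBS_VALUE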

Related

Unexpectedly renaming field in file

I'm using Python BS4/lxml to parse an XML-formatted RSS feed (specifically https://itch.io/games/on-sale.xml). I'm finding that somewhere between Requests receiving the page data and BS4 reading it from text, the name of the link field is being changed. Specifically, res.text contains ...</saleends><link>https://foo.itch.io/bar</link><description>... but reading it into BS4/lxml and printing that results in ...</saleends><link/>https://foo.itch.io/bar<description>..., which BS4 is unable to parse correctly. My code is available here, line 237.
I can provide a stripped-down version of the project without the login and logging pieces for easy testing.
Edit with simplified code:
import requests
from bs4 import BeautifulSoup
res = requests.get("https://itch.io/feed/sales.xml")
soup = BeautifulSoup(res.text, 'lxml')
print(soup.item.link)
Expected behavior: Prints "https://itch.io/s/12345/foobar" (whatever the most recent link in the RSS is)
Actual behavior: Prints "<link/>"
In BeautifulSoup, lxml is lxml's HTML parser, while lxml-xml and xml are lxml's XML parser (see the parser table in the official BeautifulSoup documentation).
So, instead of using lxml parser, you should use lxml-xml or xml parser.
import requests
from bs4 import BeautifulSoup
res = requests.get("https://itch.io/feed/sales.xml")
soup = BeautifulSoup(res.text, 'lxml-xml')
print(soup.item.link.text)
Output:
https://itch.io/s/38593/halloween-event-sale
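The underlying cause: in HTML, <link> is a void element that cannot contain anything, so lxml's HTML parser closes it immediately and the URL ends up outside the tag. A quick sketch to see the two parsers side by side:
from bs4 import BeautifulSoup
markup = "<item><link>https://example.com</link></item>"
# HTML parser: <link> is a void element, so its text is pushed outside
print(BeautifulSoup(markup, 'lxml').find('link'))             # <link/>
# XML parser: <link> is an ordinary element and keeps its text
print(BeautifulSoup(markup, 'lxml-xml').find('link').text)    # https://example.com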

Get URL from BeautifulSoup object

Somebody is handing my function a BeautifulSoup object (BS4) that he has gotten using the typical call:
soup = BeautifulSoup(url)
my code:
def doSomethingUseful(soup):
    url = soup.???
How do I get the original URL from the soup object? I tried reading the docs AND the BeautifulSoup source code... I'm still not sure.
If the url variable is a string containing an actual URL, then you should just forget BeautifulSoup here and use that variable directly. BeautifulSoup is for parsing HTML code, not a simple URL. In fact, if you try to use it like this, you get a warning:
>>> from bs4 import BeautifulSoup
>>> url = "https://foo"
>>> soup = BeautifulSoup(url)
C:\Python27\lib\site-packages\bs4\__init__.py:336: UserWarning: "https://foo" looks like a URL. Beautiful Soup is not an HTTP client. You should probably use an HTTP client like requests to get the document behind the URL, and feed that document to Beautiful Soup.
' that document to Beautiful Soup.' % decoded_markup
Since the URL is just a string, BeautifulSoup doesn't really know what to do with it when you "soupify" it, except for wrapping it up in basic HTML:
>>> soup
<html><body><p>https://foo</p></body></html>
If you still wanted to extract the URL from this, you could just use .text on the object, since it's the only thing in there:
>>> print(soup.text)
https://foo
If, on the other hand, url is not really a URL at all but rather a bunch of HTML code (in which case the variable name would be very misleading), then how you'd extract a specific link depends on where it sits in the markup. Doing a find to get the first a tag and then extracting the href value would be one way:
>>> actual_html = '<html><body><a href="http://moo">My link text</a></body></html>'
>>> newsoup = BeautifulSoup(actual_html)
>>> newsoup.find('a')['href']
'http://moo'
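If the real problem is that the function needs to know where the markup came from, note that BeautifulSoup keeps no record of the source URL, so the caller has to pass it along explicitly. A minimal sketch (the function name is hypothetical):
import requests
from bs4 import BeautifulSoup
def do_something_useful(soup, url):
    # the soup object itself has no memory of the URL it was built from
    print(url, "->", soup.title)
url = "http://example.com"
res = requests.get(url)
do_something_useful(BeautifulSoup(res.text, 'html.parser'), url)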

"soup.prettify()" gives just URL

I'm using Python 3 and BeautifulSoup 4.
When I run the code below, it gives just the URL "www.google.com", not XML.
I couldn't find what is wrong.
from bs4 import BeautifulSoup
import urllib
html = "www.google.com"
soup = BeautifulSoup(html)
print (soup.prettify())
You need to use urllib.request (or a similar library such as requests) to fetch the HTML first; BeautifulSoup only parses markup that is handed to it:
import urllib.request
from bs4 import BeautifulSoup
# urlopen() needs a full URL including the scheme, not just "www.google.com"
html = urllib.request.urlopen("http://www.google.com")
soup = BeautifulSoup(html)
print(soup.prettify())
EDIT: Just as a side note on module names: in Python 2, urllib.urlopen() was deprecated in favor of urllib2.urlopen(), but urllib2 no longer exists in Python 3; both modules were merged into urllib.request. Given that you have tagged Python 3, urllib.request.urlopen() is the call you want.
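Alternatively, a sketch using the requests library, which handles the fetching with less ceremony:
import requests
from bs4 import BeautifulSoup
res = requests.get("http://www.google.com")
soup = BeautifulSoup(res.text, 'html.parser')
print(soup.prettify())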

Parsing HTML with requests and BeautifulSoup

I'm not sure if I'm approaching this correctly. I'm using requests to make a GET:
con = s.get(url)
when I call con.content, the whole page is there. But when I pass con into BS:
soup = BeautifulSoup(con.content)
print(soup.a)
I get None. There are lots of tags in there, not behind any JS, that are present when I call con.content, but when I try to parse with BS most of the page is not there.
Change the parser to html5lib
pip install html5lib
And then,
soup = BeautifulSoup(con.content,'html5lib')
The a tags are probably not on the top level.
soup.find_all('a')
is probably what you wanted.
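For instance, to list every link target on the page (a minimal sketch, reusing the soup object from the question):
for a in soup.find_all('a'):
    print(a.get('href'))  # .get() returns None instead of raising if href is missing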
In general, I have found lxml to be more reliable, more consistent in its API, and faster. Yes, even more reliable: I have repeatedly had documents where BeautifulSoup failed to parse them, but lxml in its robust mode, lxml.html.soupparser, still worked well. And there is the lxml.etree API, which is really easy to use.
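For comparison, a minimal sketch of the lxml route, reusing the response from the question's requests call (the XPath query is just an illustration):
from lxml import html
tree = html.fromstring(con.content)   # copes with fairly broken HTML
for href in tree.xpath('//a/@href'):  # every link target in the document
    print(href)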
Without being able to see the HTML you're getting, I just did this on the Hacker News site and it returns all the a tags as expected.
import requests
from bs4 import BeautifulSoup
s = requests.session()
con = s.get('https://news.ycombinator.com/')
soup = BeautifulSoup(con.text, 'html.parser')
links = soup.findAll('a')
for link in links:
    print(link)

Passing lxml output to BeautifulSoup

My offline code works fine, but I'm having trouble passing a web page from urllib via lxml to BeautifulSoup. I'm using urllib for basic authentication, then lxml to parse (it gives a good result with the specific pages we need to scrape), and finally BeautifulSoup.
#! /usr/bin/python
import urllib.request
import urllib.error
from io import StringIO
from bs4 import BeautifulSoup
from lxml import etree
from lxml import html
file = open("sample.html")
doc = file.read()
parser = etree.HTMLParser()
html = etree.parse(StringIO(doc), parser)
result = etree.tostring(html.getroot(), pretty_print=True, method="html")
soup = BeautifulSoup(result)
# working perfectly
With that working, I tried to feed it a page via urllib:
# attempt 1
page = urllib.request.urlopen(req)
doc = page.read()
# print (doc)
parser = etree.HTMLParser()
html = etree.parse(StringIO(doc), parser)
# TypeError: initial_value must be str or None, not bytes
Trying to deal with the error message, I tried:
# attempt 2
html = etree.parse(bytes.decode(doc), parser)
#OSError: Error reading file
I didn't know what to do about the OSError, so I sought another method. I found suggestions to use lxml.html instead of lxml.etree, so the next attempt is:
# attempt 3
page = urllib.request.urlopen(req)
doc = page.read()
# print (doc)
html = html.document_fromstring(doc)
print (html)
# <Element html at 0x140c7e0>
soup = BeautifulSoup(html) # also tried (html, "lxml")
# TypeError: expected string or buffer
This clearly gives a structure of some sort, but how do I pass it to BeautifulSoup? My question is twofold: How can I pass a page from urllib to lxml.etree (as in attempt 1, closest to my working code)? Or, how can I pass an lxml.html structure to BeautifulSoup (as above)? I understand that both revolve around datatypes but don't know what to do about them.
Python 3.3, lxml 3.0.1, BeautifulSoup 4. I'm new to Python. Thanks to the internet for code fragments and examples.
BeautifulSoup can use the lxml parser directly; there is no need to go to these lengths:
BeautifulSoup(doc, 'lxml')
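In full, a minimal sketch (the Request object here is a stand-in for the authenticated req from the question): BeautifulSoup accepts the raw bytes from urlopen(), and the 'lxml' argument makes lxml do the parsing internally:
import urllib.request
from bs4 import BeautifulSoup
req = urllib.request.Request("http://example.com")  # stand-in for the real, authenticated request
doc = urllib.request.urlopen(req).read()            # bytes are fine here
soup = BeautifulSoup(doc, 'lxml')                   # lxml parses, BS4 wraps the tree
print(soup.prettify())
As for the TypeError in attempt 1: StringIO wants str, but page.read() returns bytes, so wrapping doc in io.BytesIO (or decoding it first) would also have satisfied etree.parse(), which accepts binary file objects.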
