How can I grab CData out of BeautifulSoup - python

I have a website that I'm scraping that has a structure similar to the following. I'd like to be able to grab the info out of the CData block.
I'm using BeautifulSoup to pull other info off the page, so if the solution can work with that, it would help keep my learning curve down, as I'm a Python novice.
Specifically, I want to get at the two different types of data hidden in the CData statement. The first is just text, and I'm pretty sure I can throw a regex at it to get what I need. For the second type, if I could drop the data that has HTML elements into its own BeautifulSoup object, I could parse that.
I'm just learning Python and BeautifulSoup, so I'm struggling to find the magical incantation that will give me just the CData by itself.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>
Cows and Sheep
</title>
</head>
<body>
<div id="main">
<div id="main-precontents">
<div id="main-contents" class="main-contents">
<script type="text/javascript">
//<![CDATA[var _ = g_cow;_[7654]={cowname_enus:'cows rule!',leather_quality:99,icon:'cow_level_23'};_[37357]={sheepname_enus:'baa breath',wool_quality:75,icon:'sheep_level_23'};_[39654].cowmeat_enus = '<table><tr><td><b class="q4">cows rule!</b><br></br>
<!--ts-->
get it now<table width="100%"><tr><td>NOW</td><th>NOW</th></tr></table><span>244 Cows</span><br></br>67 leather<br></br>68 Brains
<!--yy-->
<span class="q0">Cow Bonus: +9 Cow Power</span><br></br>Sheep Power 60 / 60<br></br>Sheep 88<br></br>Cow Level 555</td></tr></table>
<!--?5695:5:40:45-->
';
//]]>
</script>
</div>
</div>
</div>
</body>
</html>

One thing to be careful of when grabbing CData with BeautifulSoup is not to use the lxml parser.
By default, the lxml parser will strip CDATA sections from the tree and replace them with their plain-text content (learn more here).
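A quick way to see that behaviour is with lxml.etree directly. A minimal sketch: by default lxml resolves CDATA sections into plain text, and you have to opt out explicitly with strip_cdata=False to keep them.
from lxml import etree
xml = b'<foo><bar><![CDATA[aaaaaaaaaaaaa]]></bar></foo>'
# Default parser: the CDATA wrapper is resolved into plain text.
root = etree.fromstring(xml)
print(etree.tostring(root.find('bar')))
# b'<bar>aaaaaaaaaaaaa</bar>'
# Opting out keeps the CDATA section in the tree.
parser = etree.XMLParser(strip_cdata=False)
root = etree.fromstring(xml, parser)
print(etree.tostring(root.find('bar')))
# b'<bar><![CDATA[aaaaaaaaaaaaa]]></bar>'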
# Trying it with html.parser
>>> from bs4 import BeautifulSoup
>>> import bs4
>>> s='''<?xml version="1.0" ?>
<foo>
<bar><![CDATA[
aaaaaaaaaaaaa
]]></bar>
</foo>'''
>>> soup = BeautifulSoup(s, "html.parser")
>>> soup.find(text=lambda tag: isinstance(tag, bs4.CData)).string.strip()
'aaaaaaaaaaaaa'
>>>

BeautifulSoup sees CData as a special case (subclass) of "navigable strings". So for example:
import BeautifulSoup
txt = '''<foobar>We have
<![CDATA[some data here]]>
and more.
</foobar>'''
soup = BeautifulSoup.BeautifulSoup(txt)
for cd in soup.findAll(text=True):
    if isinstance(cd, BeautifulSoup.CData):
        print 'CData contents: %r' % cd
In your case of course you could look in the subtree starting at the div with the 'main-contents' ID, rather than all over the document tree.
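That narrowing might look like this (a sketch in the same BeautifulSoup 3 style, using the div id from the question's markup):
main = soup.find('div', id='main-contents')
for cd in main.findAll(text=True):
    if isinstance(cd, BeautifulSoup.CData):
        print 'CData contents: %r' % cd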

You could try this:
from BeautifulSoup import BeautifulSoup
# source.html contains your HTML above
f = open('source.html')
soup = BeautifulSoup(''.join(f.readlines()))
s = soup.findAll('script')
cdata = s[0].contents[0]
That should give you the contents of cdata.
Update
This may be a little cleaner:
from BeautifulSoup import BeautifulSoup
import re
# source.html contains your HTML above
f = open('source.html')
soup = BeautifulSoup(''.join(f.readlines()))
cdata = soup.find(text=re.compile("CDATA"))
Just personal preference, but I like the bottom one a little better.
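Building on that, here is a sketch of getting at both kinds of data the question mentions. The regexes are tailored to the sample markup above, so treat the field names (cowname_enus, cowmeat_enus) as assumptions about the real page.
import re
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(open('source.html').read())
cdata = soup.find(text=re.compile("CDATA"))
# First type: a plain-text field inside the CDATA block.
m = re.search(r"cowname_enus:'([^']*)'", cdata)
if m:
    print 'cowname:', m.group(1)        # cows rule!
# Second type: the quoted chunk holding HTML; drop it into its own soup.
m = re.search(r"cowmeat_enus = '(.*?)';", cdata, re.DOTALL)
if m:
    inner = BeautifulSoup(m.group(1))
    print inner.find('b').string        # cows rule!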

import re
from bs4 import BeautifulSoup
# content holds the markup to search
soup = BeautifulSoup(content, 'html.parser')
for x in soup.find_all('item'):
    print(re.sub(r'<!\[CDATA\[|\]\]>', '', x.string))

For anyone using BeautifulSoup 4, Alex Martelli's solution works, but do this:
from bs4 import BeautifulSoup, CData
soup = BeautifulSoup(txt, 'html.parser')
for cd in soup.findAll(text=True):
    if isinstance(cd, CData):
        print('CData contents: %r' % cd)
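In current bs4 releases the same loop is usually written with find_all and string=, which replaced the older text= argument:
for cd in soup.find_all(string=True):
    if isinstance(cd, CData):
        print('CData contents: %r' % cd)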

Related

BS4 breaks HTML trying to repair it

BS4 corrects faulty HTML. Usually this is not a problem. I tried parsing, altering, and saving the HTML of this page: ulisses-regelwiki.de/index.php/sonderfertigkeiten.html
In this case the repairing changes the rendering: after the repair, many lines of the page are no longer centered but left-aligned instead.
Since I have to work with the broken HTML of said page, I cannot simply repair the HTML code.
How can I prevent bs4 from repairing the HTML, or fix the "correction" somehow?
(This minimal example just shows bs4 repairing broken HTML code; I couldn't create a minimal example where bs4 does this in a wrong way, as with the page mentioned above.)
#!/usr/bin/env python3
from bs4 import BeautifulSoup, NavigableString

html = '''
<!DOCTYPE html>
<center>
Some Test content
<!-- A comment -->
<center>
'''

def is_string_only(t):
    return type(t) is NavigableString

soup = BeautifulSoup(html, 'lxml')  # or html.parser
print(str(soup))
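One lever within bs4 itself is the choice of tree builder: each one repairs broken markup differently, so comparing their output can turn up a repair that keeps the layout-relevant structure. A sketch (lxml and html5lib are optional installs):
from bs4 import BeautifulSoup

html = '<!DOCTYPE html><center>Some Test content<!-- A comment --><center>'

# Each builder applies its own error-recovery rules to the unclosed <center>.
for builder in ('html.parser', 'lxml', 'html5lib'):
    print('---', builder)
    print(BeautifulSoup(html, builder).prettify())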
Try this library.
from simplified_scrapy import SimplifiedDoc
html = '''
<!DOCTYPE html>
<center>
Some Test content
<!-- A comment -->
<center>
'''
doc = SimplifiedDoc(html)
print (doc.html)
Here are more examples: https://github.com/yiyedata/simplified-scrapy-demo/tree/master/doc_examples

Select a tag having a dot with BeautifulSoup

How can I select and modify the tag <Tagwith.dot> with some other text using BeautifulSoup? If it's not possible with BeautifulSoup, then what is the next best library for XML document editing and creation? Would it be lxml?
from bs4 import BeautifulSoup as bs
stra = """
<body>
<Tagwith.dot>Text inside tag with dot</Tagwith.dot>
</body>"""
soup = bs(stra)
Desired XML:
<body>
<Tagwith.dot>Edited text</Tagwith.dot>
</body>
BS4's HTML parsers convert all tag names to lower case. The code below works fine; just provide the tag name in lower case.
from bs4 import BeautifulSoup as bs
stra = """
<body>
<Tagwith.dot>Text inside tag with dot</Tagwith.dot>
</body>"""
soup = bs(stra, 'html.parser')
print(soup.find_all('tagwith.dot'))
Output:
[<tagwith.dot>Text inside tag with dot</tagwith.dot>]
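To change the text as well, assign to .string on the found tag; note that html.parser keeps the tag name lower-cased, so the original casing is not preserved:
tag = soup.find('tagwith.dot')
tag.string = 'Edited text'
print(soup)
# ...<tagwith.dot>Edited text</tagwith.dot>...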
You can use xml.etree.ElementTree to achieve what you want, as follows:
import xml.etree.ElementTree as ET
stra = """
<body>
<Tagwith.dot>Text inside tag with dot</Tagwith.dot>
</body>"""
# Read the XML string and convert it to an XML object
xml_obj = ET.fromstring(stra)
# Iterate through the elements
for elem in xml_obj:
    # If the tag is found, modify the text
    if elem.tag == 'Tagwith.dot':
        elem.text = 'Edited text'
# Print the updated XML object as a string
print(ET.tostring(xml_obj).decode())
The output will be
<body>
<Tagwith.dot>Edited text</Tagwith.dot>
</body>

Parsing MS-specific HTML tags in BeautifulSoup

When trying to parse an email sent using MS Outlook, I want to be able to strip the annoying Microsoft XML tags that it has added. One such example is the o:p tag. When I use Python's BeautifulSoup to parse the email as HTML, it can't seem to find these specialty tags.
For example:
from bs4 import BeautifulSoup
textToParse = """
<html>
<head>
<title>Something to parse</title>
</head>
<body>
<p><o:p>This should go</o:p>Paragraph</p>
</body>
</html>
"""
soup = BeautifulSoup(textToParse, "html5lib")
body = soup.find('body')
for otag in body.find_all('o'):
    print(otag)
for otag in body.find_all('o:p'):
    print(otag)
This outputs no text to the console, but if I switch the find_all call to search for p, it outputs the p node as expected.
How come these custom tags do not seem to work?
It's a namespace issue. Apparently, BeautifulSoup does not consider custom namespaces valid when parsed with "html5lib".
You can work around this with a regular expression, which – strangely – does work correctly!
import re
print(soup.find_all(re.compile('o:p')))
>>> [<o:p>This should go</o:p>]
but the "proper" solution is to change the parser to "lxml-xml" and introducing o: as a valid namespace.
from bs4 import BeautifulSoup
textToParse = """
<html xmlns:o='dummy_url'>
<head>
<title>Something to parse</title>
</head>
<body>
<p><o:p>This should go</o:p>Paragraph</p>
</body>
</html>
"""
soup = BeautifulSoup(textToParse, "lxml-xml")
body = soup.find('body')
print('this should find nothing')
for otag in body.find_all('o'):
    print(otag)
print('this should find o:p')
for otag in body.find_all('o:p'):
    print(otag)
>>>
this should find nothing
this should find o:p
<o:p>This should go</o:p>
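Since the original goal was stripping the Microsoft tags, unwrap() can then remove each matched tag while keeping its contents (a sketch continuing the lxml-xml example above):
for otag in body.find_all('o:p'):
    otag.unwrap()
print(body)
# the o:p wrapper is gone, its text is kept:
# <body><p>This should goParagraph</p></body>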

Retrieve contents from broken <a> tags using Beautiful Soup

I am trying to parse a website and retrieve the text of elements that contain a hyperlink.
For example:
This is an Example
I need to retrieve "This is an Example", which I am able to do for pages that don't have broken tags. I am unable to retrieve it in the following case:
<html>
<body>
<a href = "http:\\www.google.com">Google<br>
Example
</body>
</html>
In such cases the code is unable to retrieve "Google" because of the broken tag that wraps it, and only gives me "Example". Is there a way to also retrieve "Google"?
My code is here:
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
f = open("sol.html", "r")
soup = BeautifulSoup(f, parse_only=SoupStrainer('a'))
for link in soup.findAll('a', text=True):
    print link.renderContents()
Please note sol.html contains the above given html code itself.
Thanks
- AJ
Remove text=True from your code and it should work just fine (text=True only matches tags whose entire contents is a single string, which the repaired "Google" link here is not):
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup('''
... <html>
... <body>
... <a href = "http:\\www.google.com">Google<br>
... Example
... </body>
... </html>
... ''')
>>> [a.get_text().strip() for a in soup.find_all('a')]
[u'Google', u'Example']
>>> [a.get_text().strip() for a in soup.find_all('a', text=True)]
[u'Example']
Try this code:
from BeautifulSoup import BeautifulSoup
text = '''
<html>
<body>
<a href = "http:\\www.google.com">Google<br>
Example
</body>
</html>
'''
soup = BeautifulSoup(text)
for link in soup.findAll('a'):
    if link.string is not None:
        print link.string
Here's the output when I ran the code:
Example
Just replace text with text = open('sol.html').read(), or whatever it is you need to go there.

Issues with BeautifulSoup parsing

I am trying to parse an HTML page with BeautifulSoup, but it appears that BeautifulSoup doesn't like the HTML of that page at all. When I run the code below, the method prettify() returns only the script block of the page (see below). Does anybody have an idea why this happens?
import urllib2
from BeautifulSoup import BeautifulSoup
url = "http://www.futureshop.ca/catalog/subclass.asp?catid=10607&mfr=&logon=&langid=FR&sort=0&page=1"
html = "".join(urllib2.urlopen(url).readlines())
print "-- HTML ------------------------------------------"
print html
print "-- BeautifulSoup ---------------------------------"
print BeautifulSoup(html).prettify()
This is the output produced by BeautifulSoup.
-- BeautifulSoup ---------------------------------
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script language="JavaScript">
<!--
function highlight(img) {
document[img].src = "/marketing/sony/images/en/" + img + "_on.gif";
}
function unhighlight(img) {
document[img].src = "/marketing/sony/images/en/" + img + "_off.gif";
}
//-->
</script>
Thanks!
UPDATE: I am using the following version, which appears to be the latest.
__author__ = "Leonard Richardson (leonardr@segfault.org)"
__version__ = "3.1.0.1"
__copyright__ = "Copyright (c) 2004-2009 Leonard Richardson"
__license__ = "New-style BSD"
Try version 3.0.7a, as Łukasz suggested. BeautifulSoup 3.1 was designed to be compatible with Python 3.0, so they had to change the parser from SGMLParser to HTMLParser, which seems more vulnerable to bad HTML.
From the changelog for BeautifulSoup 3.1:
"Beautiful Soup is now based on HTMLParser rather than SGMLParser, which is gone in Python 3. There's some bad HTML that SGMLParser handled but HTMLParser doesn't"
Try lxml. Despite its name, it is also for parsing and scraping HTML. It's much, much faster than BeautifulSoup, and it even handles "broken" HTML better than BeautifulSoup, so it might work better for you. It has a compatibility API for BeautifulSoup too if you don't want to learn the lxml API.
Ian Bicking agrees.
There's no reason to use BeautifulSoup anymore, unless you're on Google App Engine or something where anything not purely Python isn't allowed.
BeautifulSoup isn't magic: if the incoming HTML is too horrible then it isn't going to work.
In this case, the incoming HTML is exactly that: too broken for BeautifulSoup to figure out what to do. For instance, it contains markup like:
SCRIPT type=""javascript""
(Notice the double quoting.)
The BeautifulSoup docs contain a section on what you can do if BeautifulSoup can't parse your markup. You'll need to investigate those alternatives.
Samj: If I get things like
HTMLParser.HTMLParseError: bad end tag: u"</scr' + 'ipt>"
I just remove the culprit from the markup before I serve it to BeautifulSoup, and all is dandy:
html = urllib2.urlopen(url).read()
html = html.replace("</scr' + 'ipt>","")
soup = BeautifulSoup(html)
I had problems parsing the following code too:
<script>
function show_ads() {
document.write("<div><sc"+"ript type='text/javascript'src='http://pagead2.googlesyndication.com/pagead/show_ads.js'></scr"+"ipt></div>");
}
</script>
HTMLParseError: bad end tag: u'', at line 26, column 127
Sam
I tested this script with BeautifulSoup version '3.0.7a' and it returns what appears to be correct output. I don't know what changed between '3.0.7a' and '3.1.0.1', but give it a try.
>>> import urllib
>>> from BeautifulSoup import BeautifulSoup
>>> page = urllib.urlopen('http://www.futureshop.ca/catalog/subclass.asp?catid=10607&mfr=&logon=&langid=FR&sort=0&page=1')
>>> soup = BeautifulSoup(page)
>>> soup.prettify()
In my case, executing the above statements returns the entire HTML page.
