I am trying to check if a certain word is on a page for many sites. The script runs fine for, say, 15 sites and then it stops.
UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 15344: invalid start byte
I did a search on stackoverflow and found many issues on it but I can't seem to understand what went wrong in my case.
I would like to either solve the error or skip that site when it occurs. Please advise how I can do this, as I am new and the code below took me a day to write. By the way, the site the script halted on was http://www.homestead.com
import urllib
import re

filetocheck = open("bloglistforcommenting", "r")
resultfile = open("finalfile", "w")

for countofsites in filetocheck.readlines():
    sitename = countofsites.strip()
    htmlfile = urllib.urlopen(sitename)
    page = htmlfile.read().decode('utf8')  # fails here when the page is not UTF-8
    match = re.search("Enter your name", page)
    if match:
        print "match found : " + sitename
        resultfile.write(sitename + "\n")
    else:
        print "sorry did not find the pattern " + sitename

print "Finished Operations"
As per Mark's comments, I changed the code to use BeautifulSoup:
htmlfile = urllib.urlopen("http://www.homestead.com")
page = BeautifulSoup((''.join(htmlfile)))
print page.prettify()
Now I am getting this error:
page = BeautifulSoup((''.join(htmlfile)))
TypeError: 'module' object is not callable
I am trying the Quick Start example from http://www.crummy.com/software/BeautifulSoup/documentation.html#Quick%20Start. If I copy-paste it, the code works fine.
I FINALLY got it to work. Thank you all for your help. Here is the final code.
import urllib
import re
from BeautifulSoup import BeautifulSoup

filetocheck = open("listfile", "r")
resultfile = open("finalfile", "w")

for countofsites in filetocheck.readlines():
    sitename = countofsites.strip()
    htmlfile = urllib.urlopen(sitename)
    page = BeautifulSoup(''.join(htmlfile))  # BeautifulSoup decodes the page for us
    pagetwo = str(page)
    match = re.search("Enter YourName", pagetwo)
    if match:
        print "match found : " + sitename
        resultfile.write(sitename + "\n")
    else:
        print "sorry did not find the pattern " + sitename

print "Finished Operations"
The byte at 15344 is 0x96. Presumably at position 15343 there is either a single-byte encoding of a character, or the last byte of a multiple-byte encoding, making 15344 the start of a character. 0x96 is in binary 10010110, and any byte matching the pattern 10XXXXXX (0x80 to 0xBF) can only be a second or subsequent byte in a UTF-8 encoding.
Hence the stream is either not UTF-8 or else is corrupted.
Examining the URI you link to, we find the header:
Content-Type: text/html
Since there is no encoding stated, we should use the default for HTTP, which is ISO-8859-1 (aka "Latin 1").
Examining the content we find the line:
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
This is a fall-back mechanism for people who are, for some reason, unable to set their HTTP headers correctly. This time we are explicitly told that the character encoding is ISO-8859-1.
As such, there's no reason to expect reading it as UTF-8 to work.
For extra fun, though, consider that in ISO-8859-1, 0x96 encodes U+0096, the control character "START OF GUARDED AREA", so ISO-8859-1 isn't correct either. It seems the people creating the page made an error similar to yours.
From context, it would seem that they actually used Windows-1252, as in that encoding 0x96 encodes U+2013 (EN-DASH, looks like –).
So, to parse this particular page you want to decode in Windows-1252.
More generally, you want to examine the headers when picking a character encoding, and while that would perhaps be incorrect in this case (or perhaps not; more than a few "ISO-8859-1" codecs are actually Windows-1252), you'll be correct more often. You still need something to catch failures like this by reading with a fallback. The decode method takes a second parameter, errors. The default is 'strict', but you can also use 'ignore', 'replace', 'xmlcharrefreplace' (not appropriate here), 'backslashreplace' (not appropriate here), and you can register your own fallback handler with codecs.register_error().
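A minimal sketch of that fallback idea (Python 2; the windows-1252 guess is drawn from the analysis above, and 'replace' keeps the script running rather than crashing):

import urllib

htmlfile = urllib.urlopen("http://www.homestead.com")
raw = htmlfile.read()

try:
    page = raw.decode('utf-8')
except UnicodeDecodeError:
    # Fallback guess; 'replace' substitutes U+FFFD for any byte that
    # even windows-1252 cannot decode.
    page = raw.decode('windows-1252', 'replace')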
Many web pages are encoded incorrectly. For parsing HTML, try BeautifulSoup; it can handle many kinds of incorrect HTML found in the wild.
Beautiful Soup is a Python HTML/XML parser designed for quick
turnaround projects like screen-scraping. Three features make it
powerful:
Beautiful Soup won't choke if you give it bad markup. It yields a
parse tree that makes approximately as much sense as your original
document. This is usually good enough to collect the data you need and
run away.
Beautiful Soup provides a few simple methods and Pythonic
idioms for navigating, searching, and modifying a parse tree: a
toolkit for dissecting a document and extracting what you need. You
don't have to create a custom parser for each application.
Beautiful
Soup automatically converts incoming documents to Unicode and outgoing
documents to UTF-8. You don't have to think about encodings, unless
the document doesn't specify an encoding and Beautiful Soup can't
autodetect one. Then you just have to specify the original encoding.
Emphasis mine.
The site 'http://www.homestead.com' doesn't claim to be sending you utf-8; the response actually claims to be ISO-8859-1:
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
You must use the correct encoding for the page you actually received, not just guess randomly.
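As a rough sketch (Python 2), you could read the declared charset from the response headers, falling back to the HTTP default when none is given; note that, as the earlier answer shows, even a declared encoding can be wrong:

import urllib

htmlfile = urllib.urlopen("http://www.homestead.com")
# mimetools.Message.getparam reads the charset= parameter of the Content-Type header.
charset = htmlfile.info().getparam('charset') or 'iso-8859-1'  # HTTP default
page = htmlfile.read().decode(charset)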
Related
I'm currently learning Python 3. I am scraping a site for some data, which works fine, but I just can't get the p tags to print out as I expect.
import urllib.request
from bs4 import BeautifulSoup  # the 'lxml' parser also requires lxml to be installed

childlist = []

data = urllib.request.urlopen('http://www.site.com').read()
soup = BeautifulSoup(data, 'lxml')
stat = soup.find('div', {'style': 'padding-left: 10px'})
dialog = stat.findChildren('p')
for child in dialog:
    childtext = child.get_text()
    # have tried child.string as well (exactly the same result)
    childlist.append(childtext.encode('utf-8', 'ignore'))
    # Have tried with str(childtext.encode('utf-8', 'ignore'))

print(childlist)
That all works, but what gets printed is bytes:
b'This is a ptag.string'
b'\xc2\xa0' (probably &nbsp;)
b'this is anotherone'
Real sample text that is ascii encoded:
b"Announcementb'Firefox users may encounter browser warnings encountering SSL SHA-1 certificates"
Note that "Announcement" is in a p tag and the rest is in a 'strong' under a p tag.
The same sample with utf-8 encode:
b"Announcement\xc2\xa0\xe2\x80\x93\xc2\xa0b'Firefox users may encounter browser warnings encountering SSL SHA-1 "
I WISH to get:
"Announcement"
(newline / new item in list)
"Firefox users may encounter browser warnings encountering SSL SHA-1 certificates"
As you can see, the incorrect chars are stripped in "ascii", but since some of them are line breaks, stripping destroys the line structure, and I have yet to figure out how to print that correctly. Also, the b's are still there!
I really can't figure out how to remove the b's and encode or decode properly. I have tried every "solution" I can google up.
HTML Content = utf-8
I would most rather not change the full data before processing because it will mess up my other work and I don't think it is needed.
Prettify does not work.
Any suggestions?
First, you're getting output of the form b'stuff' because you are calling .encode(), which returns a bytes object. If you want to print strings for reading, keep them as strings!
As a guess, I assume you're looking to print strings from the HTML nicely, pretty much as they would appear in a browser. For that, you need to unescape the HTML entities, as described in this SO answer, which for Python 3.5 means:
import html
html.unescape(childtext)
Among other things, this will convert any &nbsp; sequences in the HTML string into '\xa0' characters, which are printed as spaces. However, if you want to break lines on these characters, despite them literally meaning "non-breaking space", you'll have to replace them with actual spaces before printing, e.g. using x.replace('\xa0', ' ').
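Putting those two steps together, a minimal standalone sketch (Python 3; the sample string is made up):

import html

raw = 'Firefox&nbsp;users may encounter browser&nbsp;warnings'
text = html.unescape(raw)         # entities such as &nbsp; become real characters
text = text.replace('\xa0', ' ')  # optionally swap non-breaking spaces for plain ones
print(text)                       # keep it a str; no .encode() needed for printing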
I used requests to retrieve a URL which contains some Unicode characters, and I want to do some processing with it, then write it out.
r = requests.get(url)

f = open('unicode_test_1.html', 'w')
f.write(r.content)
f.close()

html = lxml.html.fromstring(r.content)
htmlOut = lxml.html.tostring(html)

f = open('unicode_test_2.html', 'w')
f.write(htmlOut)
f.close()
In unicode_test_1.html all the chars look fine, but in unicode_test_2.html some chars have changed to gibberish. Why is that?
I then tried:
html = lxml.html.fromstring(r.text)
htmlOut = lxml.html.tostring(html, encoding='latin1')

f = open('unicode_test_2.html', 'w')
f.write(htmlOut)
f.close()
It seems to be working now, but I don't know why this is happening. Should I always use latin1?
What's the difference between r.text and r.content, and why can't I write the HTML out using encoding='utf-8'?
You've not specified whether you're using Python 2 or 3. Encoding is handled quite differently depending on which version you're using. The following advice is more or less universal anyway.
The difference between r.text and r.content is explained in the Requests docs. Simply put, Requests will attempt to figure out the character encoding for you and return Unicode after decoding it, which is accessible via r.text. To get just the bytes, use r.content.
You really need to get to grips with encodings. Read http://www.joelonsoftware.com/articles/Unicode.html and watch https://www.youtube.com/watch?v=sgHbC6udIqc to get started. Also, do a search for "Overcoming frustration: Correctly using unicode in python2" for additional help.
Just to clarify: it's not as simple as always using one encoding over another. Make a "Unicode sandwich": do any I/O in bytes and work with Unicode inside your application. If you start with bytes (isinstance(mytext, str)), you need to know the encoding to decode to Unicode; if you start with Unicode (isinstance(mytext, unicode)), you should encode to UTF-8, as it handles all the world's characters.
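A minimal sketch of that sandwich with Requests (Python 2; the URL is a placeholder, and the replace call just stands in for whatever processing you do):

import requests

r = requests.get('http://example.com')

text = r.text                        # bytes -> unicode: Requests decodes for you
text = text.replace(u'foo', u'bar')  # work in unicode inside the application

with open('unicode_test_2.html', 'wb') as f:
    f.write(text.encode('utf-8'))    # unicode -> bytes at the output boundary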
Make sure your editor, files, server, and database are configured for UTF-8 too, otherwise you'll get more 'gibberish'.
If you want further help post the source files and output of your script.
I am trying to scrape text from the web using BeautifulSoup 4 to parse it out. I am running into an issue when printing bs4-processed text to the console. Whenever I hit a character that was originally an HTML entity, like ’ (&rsquo;), I get garbage characters on the console. I believe bs4 is converting these entities to Unicode correctly, because if I try using another encoding to print out the text, it will complain about the appropriate lack of a Unicode mapping for a character (like u'\u2019'). I'm not sure why the print function gets confused over these characters. I've tried changing fonts, which changes the garbage characters, and I am on a Windows 7 machine with the US-English locale. Here is my code for reference; any help is appreciated. Thanks in advance!
#!/usr/bin/python
import json
import urllib2
import cookielib
import bs4

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

url = ("http://api.nytimes.com/svc/search/v2/articlesearch.json?"
       "q=Tiguan&page=0&api-key=blah")
response = opener.open(url)
articles = response.read()
decoded = json.loads(articles)
totalpages = decoded['response']['meta']['hits'] / 10

for page in range(totalpages + 1):
    if page > 0:
        url = ("http://api.nytimes.com/svc/search/v2/articlesearch.json?"
               "q=Tiguan&page=" + str(page) + "&api-key=blah")
        response = opener.open(url)
        articles = response.read()
        decoded = json.loads(articles)
    for url in decoded['response']['docs']:
        print url['web_url']
        urlstring = url['web_url']
        art = opener.open(urlstring)
        soup = bs4.BeautifulSoup(art.read())
        goodstuff = soup.findAll('nyt_text')
        for tag in goodstuff:
            print tag.prettify().encode("UTF")
The problem has nothing to do with bs4, or HTML entities, or anything else. You could reproduce the exact same behavior, on most Windows systems, with a one-liner program to print out the same characters that are appearing as garbage when you try to print them, like this:
print u'\u2019'.encode('UTF-8')
The problem here is that, like the vast majority of Windows systems (and nothing else anyone uses in 2013), your default character set is not UTF-8, but something like CP1252.
So, when you encode your Unicode strings to UTF-8 and print those bytes to the console, the console interprets them as CP1252. Which, in this case, means you get â€™ instead of ’.
Changing fonts won't help. The UTF-8 encoding of \u2019 is the three bytes \xe2, \x80, and \x99, and the CP1252 meaning of those three bytes is â, €, and ™.
If you want to encode manually for the console, you need to encode to the right character set, the one your console actually uses. You may be able to get that as sys.stdout.encoding.
Of course you may get an exception trying to encode things for the right character set, because 8-bit character sets like CP1252 can only handle about 240 of the 110K characters in Unicode. The only way to handle that is to use the errors argument to encode to either ignore them or replace them with replacement characters.
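As a rough sketch (Python 2), encoding for whatever the console actually reports, with a fallback when a character can't be represented:

import sys

s = u'\u2019'  # RIGHT SINGLE QUOTATION MARK
# sys.stdout.encoding can be None when output is redirected; cp1252 here is
# just an assumption for a typical US/Western-European Windows console.
encoding = sys.stdout.encoding or 'cp1252'
print s.encode(encoding, 'replace')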
Meanwhile, if you haven't read the Unicode HOWTO, you really need to. Especially if you plan to stick with Python 2.x and Windows.
If you're wondering why a few command-line programs seem to be able to get around these problems: Microsoft's solution to the character set problem is to create a whole parallel set of APIs that use 16-bit characters instead of 8-bit, and those APIs always use UTF-16. Unfortunately, many things, like the portable stdio wrappers that Microsoft provides for talking to the console and that Python 2.x relies on, only have the 8-bit API. Which means the problem isn't solved at all. Python 3.x no longer uses those wrappers, and there have been recurring discussions on making some future version talk UTF-16 to the console. But even if that happens in 3.4 (which seems very unlikely), that won't help you as long as you're using 2.x.
@abarnert's answer contains a good explanation of the issue.
In your particular case, you could just pass the encoding parameter to prettify() instead of the default utf-8.
If you are printing to console, you could try to print Unicode directly:
print soup.prettify(encoding=None, formatter='html') # print Unicode
It may fail. If you pass 'ascii', then BeautifulSoup may use numerical character references instead of non-ascii characters:
print soup.prettify('ascii', formatter='html')
This assumes that the current Windows codepage is an ascii-based encoding (most of them are). It should also work if the output is redirected to a file or to another program via a pipe.
For portability, you could always print Unicode (encoding=None above) and use PYTHONIOENCODING to specify the appropriate character encoding, e.g., utf-8 for files and pipes, and ascii:xmlcharrefreplace to avoid garbage in a console.
I am using Python 3.3 on Windows 7.
if "iso-8859-1" in str(source):
source = source.decode('iso-8859-1')
if "utf-8" in str(source):
source = source.decode('utf-8')
So, currently my application works only for the above two charsets, but I want to cover every possible charset.
Actually, I'm finding these charsets manually from the source of each website, and I have found that not all the websites in the world use just these two. Sometimes websites do not declare their charset in their HTML source at all! So my application fails to move ahead there!
What should I do to detect a charset automatically and decode according to it?
Please try to make me aware in-depth and with examples if possible. You can suggest important links too.
BeautifulSoup provides a class, UnicodeDammit, that goes through a number of steps1 to determine the encoding of any string you give it, and converts it to Unicode. It's pretty straightforward to use:
from bs4 import UnicodeDammit
unicode_string = UnicodeDammit(encoded_string).unicode_markup
If you use BeautifulSoup to process your HTML, it will automatically use UnicodeDammit to convert it to unicode for you.
1According to the documentation for BeautifulSoup 3, these are the actions UnicodeDammit takes:
Beautiful Soup tries the following encodings, in order of priority, to
turn your document into Unicode:
1. An encoding you pass in as the fromEncoding argument to the soup constructor.
2. An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
3. An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
4. An encoding sniffed by the chardet library, if you have it installed.
5. UTF-8
6. Windows-1252
That explanation doesn't seem to be present in the BeautifulSoup 4 documentation, but presumably BS4's UnicodeDammit works in much the same way (though I haven't checked the source to be sure).
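As a small illustration (the byte string is just an example, and the comments reflect what UnicodeDammit reports for it):

from bs4 import UnicodeDammit

dammit = UnicodeDammit(b'Sacr\xc3\xa9 bleu!')
print(dammit.unicode_markup)     # Sacré bleu!
print(dammit.original_encoding)  # utf-8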
I am trying to download page_source to a file. However, every time I get:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 (or something else) in
position 8304: ordinal not in range(128)
I've tried using value.encode('utf-8'), but it seems to throw the same exception every time (in addition to my manually trying to replace every non-ascii character). Is there a way to 'pre-process' the HTML to put it into a 'write-able' format?
There are third-party libraries such as BeautifulSoup and lxml that can deal with encoding issues automatically. But here's a crude example using just urllib2:
First download some webpage containing non-ascii characters:
>>> import urllib2
>>> response = urllib2.urlopen('http://www.ltg.ed.ac.uk/~richard/unicode-sample.html')
>>> data = response.read()
Now have a look for the "charset" at the top of the page:
>>> data[:200]
'<html>\n<head>\n<title>Unicode 2.0 test page</title>\n<meta
content="text/html; charset=UTF-8" http-equiv="Content-type"/>\n
</head>\n<body>\n<p>This page contains characters from each of the
Unicode\ncharact'
If there is no obvious charset, "UTF-8" is usually a good guess anyway.
Finally, convert the webpage to unicode text:
>>> text = data.decode('utf-8')
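If the declared charset might be wrong, a hedged fallback chain keeps things moving (latin-1 as a last resort is an assumption; it never raises, though it may mis-decode):

try:
    text = data.decode('utf-8')
except UnicodeDecodeError:
    # latin-1 maps every byte to a character, so this cannot fail --
    # but the result may be wrong if the page used another encoding.
    text = data.decode('latin-1')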
I am not sure, but BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/) has a method, .prettify(), that returns well-formatted HTML. You could try using that for "preprocessing".
The problem is probably that you're trying to go str -> utf-8, when you need to go str -> unicode -> utf-8. In other words, try unicode(s, 'utf-8').encode('utf-8').
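A tiny sketch of that round trip (Python 2; the URL and filename are placeholders):

import urllib2

data = urllib2.urlopen('http://example.com').read()  # str (bytes)
text = unicode(data, 'utf-8')                        # str -> unicode
with open('page.html', 'wb') as f:
    f.write(text.encode('utf-8'))                    # unicode -> utf-8 bytes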
See http://farmdev.com/talks/unicode/ for more info.