I'm using BeautifulSoup to read a site that is written in Hebrew and encoded in windows-1255, according to this line:
<meta http-EQUIV="Content-Type" Content="text/html; charset=windows-1255">
When I try to encode it, I get the following error:
> UnicodeEncodeError: 'charmap' codec can't encode characters in position 6949-6950: character maps to <undefined>
The code:
from bs4 import BeautifulSoup
import requests
r = requests.get('http://www.plonter.co.il')
soup = BeautifulSoup(r.text)
print soup.prettify().encode('windows-1255')
If the site is already encoded in windows-1255, you should decode it to get Unicode, or just use it as-is in its current encoding.
Edit:
I didn't know r.text was already decoded.
>>> import requests
>>> r = requests.get('http://www.plonter.co.il')
>>> isinstance(r.text, unicode)
True
>>> isinstance(r.content, unicode)
False
>>> isinstance(r.content, str)
True
>>> r.encoding
'ISO-8859-1'
>>> r.content.decode(r.encoding).encode('utf-8') # works
>>> r.content.decode(r.encoding).encode('windows-1255') # fails
>>> r.content.decode(r.encoding).encode('windows-1255', 'ignore') # works
>>> r.content.decode(r.encoding).encode('windows-1252') # works
So, I think you got the encoding wrong: 'windows-1255' can't encode the content without errors. On the other hand, 'utf-8', 'iso-8859-1' and 'windows-1252' all seem able to handle it.
>>> r.content.decode(r.encoding) == r.text
True
Example code:
import requests
import bs4 as bs
import urllib
import urllib.request
from urllib.request import Request
req = Request("https://www.openbugbounty.org/researchers/Spam404/vip/page/1/",
headers ={'User-Agent':'Mozilla/5.0'})
sauce = urllib.request.urlopen(req).read()
soup = bs.BeautifulSoup(sauce,'lxml')
print(soup)
Running it gives me the following error:
File "/Users/student/Desktop/AutoBots/kbfza2.py", line 15, in
print(soup)
UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 5194: ordinal not in range(128)
After searching for a solution online for a while, it seemed that changing my soup line to:
soup = bs.BeautifulSoup(sauce.decode('utf-8','ignore'),'lxml')
would solve this for me but this hasn't fixed anything for me.
Am I mistaken in thinking that the decode function with the ignore argument should allow me to print(soup) without error even if it isn't successfully decoded completely?
Just re-read your question and I believe you are trying to print Unicode text to a console that doesn't support that character set (I couldn't reproduce the error with the code you posted).
You may need to force your console output to UTF-8 encoding, or if you are using an editor like Sublime Text, change it to render UTF-8.
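The console-codec failure can be simulated without a real terminal (a sketch; the BytesIO objects stand in for the console's byte stream):

```python
import io

# A console whose codec is ASCII can't print U+00A0, the character
# from the traceback:
ascii_console = io.TextIOWrapper(io.BytesIO(), encoding="ascii")
try:
    print("[\xa0]", file=ascii_console)
except UnicodeEncodeError as e:
    print(e)

# The same byte stream wrapped in a UTF-8 text layer handles it fine:
buf = io.BytesIO()
utf8_console = io.TextIOWrapper(buf, encoding="utf-8")
print("[\xa0]", file=utf8_console)
utf8_console.flush()
```

For the real console, Python 3.7+ offers sys.stdout.reconfigure(encoding="utf-8"), and setting PYTHONIOENCODING=utf-8 in the environment before launching has the same effect.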
I am using bs4 to extract text from a web document, but its output is very strange, like:
Ú©Ø¨Ú¾Û ÛÛ Ø¨Ø§Øª سÙجھ ÙÛÚº ÙÛ Ø§ÙØªÛ ØªÚ¾ÛÛ Ù¾Ú¾Ø± اÙÛØ³ØªÛ Ø§ÙÛØ³ØªÛ Ø¬Ø¨ Ú©ÚÚ¾ عÙ٠اÙÙÛ Ø´Ø±Ùع ÛÙØ¦Û ØªÙ Ø¨Ø§Øª Ú©ÚÚ¾ Ù¾ÙÛ Ù¾ÚÛÛÙÛک٠اب ÛÛ Ø¨Ø§Øª اÛØ³Û Ø³Ùجھ ÙÛÚº اÙØ¦Û Ú©Û Ø³ÙÚا اÙÙ¾ Ú©ÛÙÙÚ¯ÙÚº Ú©Û Ø¨Ú¾Û Ø´Ûئر کرÙÚºÛ ÚÙد رÙز Ùب٠ÙÛرا 8 Ù
I think it is some encoding issue. I am a new user of bs4; please guide me on how to decode it so it displays as Urdu text.
Here is the document source whose title I want to extract. The following code is what I am using to do it:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Burhan Khalid's answer works, but because the original web page is encoded in utf-8:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
You should update the requests' response field to match the original page's encoding:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
# Update encoding to match source
r.encoding = "utf-8"
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Now any field you access will have the correct encoding, rather than you having to set it on a per-field basis.
If you simply try to print the string, you'll get garbage characters out:
>>> import requests
>>> from bs4 import BeautifulSoup as bs4
>>> r = requests.get('http://blog.jang.com.pk/blog_details.asp?id=11058')
>>> s = bs4(r.text, 'lxml')
>>> print s.title.text
Ú©ÚÚ¾ تÙØ¬Û Ø§Ø³ طر٠بھÛ!
You need to encode it properly, since the result is a Unicode string.
>>> print s.title.text.encode('iso-8859-1')
کچھ توجہ اس طرف بھی!
If it displays the glyphs correctly, but in the wrong order (ie, they are not right-to-left), then this is a problem with your operating system/terminal/shell/program you are using to run the application.
The above is from gnome-terminal, which doesn't support Arabic RTL properly.
If I run the same code in mlterm:
The white box is there because I am using an Arabic font, which doesn't have all the characters in the Urdu language.
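Why does encoding back to iso-8859-1 fix the glyphs? When a UTF-8 page ships no charset in its headers, requests falls back to ISO-8859-1, which maps every byte to some character, so the wrong decode is lossless. Encoding back to ISO-8859-1 therefore recovers the original bytes. A standalone sketch (the sample text is ours, not taken from the page):

```python
raw = "کچھ توجہ".encode("utf-8")      # bytes the server actually sends
mojibake = raw.decode("iso-8859-1")   # requests' wrong fallback decode
print(mojibake)                       # Ú©Ú†Ú¾-style garbage

# The round trip through iso-8859-1 recovers the real bytes,
# which can then be decoded as UTF-8:
repaired = mojibake.encode("iso-8859-1").decode("utf-8")
print(repaired)                       # کچھ توجہ
```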
I think what is happening is that there is some badly formed UTF-8 in the website response:
----> 1 r.content.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd8 in position 1106: invalid continuation byte
Hence the text is being decoded using the fallback ANSI codec, which is obviously wrong. You can work around this issue by calling decode with the option errors='ignore' (we use content rather than text because content is the raw binary response from the website):
data = r.content.decode(errors='ignore')
soup = BeautifulSoup(data,'lxml')
print (str(soup.title))
<title>کچھ توجہ اس طرف بھی!</title>
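The same decode failure is easy to reproduce in isolation (the sample bytes below are fabricated; 0xd8 happens to be a UTF-8 lead byte with no continuation byte following it):

```python
# Valid UTF-8 followed by one stray lead byte:
body = "کچھ توجہ".encode("utf-8") + b"\xd8"

try:
    body.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # the strict decode fails on the stray byte

# errors='ignore' silently drops malformed bytes;
# errors='replace' keeps a visible U+FFFD marker instead:
print(body.decode("utf-8", errors="ignore"))
print(body.decode("utf-8", errors="replace"))
```

While debugging, errors='replace' is often preferable to errors='ignore', because the replacement characters show you where data was lost.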
I'm trying to code a Python Google API client and I'm getting some Unicode issues. My really basic PoC so far is:
#!/usr/bin/env python
import urllib2
from bs4 import BeautifulSoup
query = "filetype%3Apdf"
url = "http://www.google.com/search?sclient=psy-ab&hl=en&site=&source=hp&q="+query+"&btnG=Search"
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(url)
data = response.read()
data = data.decode('UTF-8', 'ignore')
data = data.encode('UTF-8', 'ignore')
soup = BeautifulSoup(data)
print u""+soup.prettify('UTF-8')
My traceback is:
Traceback (most recent call last):
File "./google.py", line 22, in <module>
print u""+soup.prettify('UTF-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 48786: ordinal not in range(128)
Any ideas?
You are converting your soup tree to UTF-8 (an encoded byte string), then try to concatenate this to an empty u'' unicode string.
Python will automatically try to decode your encoded byte string using the default encoding, which is ASCII, and fails to decode the UTF-8 data.
You need to explicitly decode the prettify() output:
print u"" + soup.prettify('UTF-8').decode('UTF-8')
The Python Unicode HOWTO explains this better, including about default encodings. I really, really recommend you read Joel Spolsky's article on Unicode as well.
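The pitfall is easy to reproduce in isolation. In Python 2 the implicit ASCII decode raises UnicodeDecodeError, as above; in Python 3 the same mistake fails outright with a TypeError, because str and bytes can no longer be mixed. A Python 3 sketch:

```python
encoded = "café".encode("utf-8")  # bytes, like prettify('UTF-8') returns

try:
    "" + encoded                  # mixing str and bytes is an error
except TypeError as e:
    print(e)

# Decode explicitly first, then concatenate:
print("" + encoded.decode("utf-8"))  # café
```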
I'm not familiar with BeautifulSoup's encoding handling.
Some of the pages I work with have Chinese attribute values, and I want to use such a Chinese attribute to extract tags.
For example, given HTML like the below:
<P class=img_s>
<A href="/pic/93/b67793.jpg" target="_blank" title="查看大图">
<IMG src="/pic/93/s67793.jpg">
</A>
</P>
I want to extract '/pic/93/b67793.jpg',
so what I did is:
img_urls = form_soup.findAll('a',title='查看大图')
and encounter:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xb2 in position 0: ordinal not in range(128)
To tackle this, I have tried two methods, both of which failed.
One way is:
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
Another way is:
response = unicode(response, 'gb2312','ignore').encode('utf-8','ignore')
You need to pass in unicode to the findAll method:
# -*- coding: utf-8
...
img_urls = form_soup.findAll('a', title=u'查看大图')
Note the u unicode literal marker in front of the title value. You do need to specify an encoding on your source file for this to work (the coding comment at the top of the file), or switch to unicode escape codes instead:
img_urls = form_soup.findAll('a', title=u'\u67e5\u770b\u5927\u56fe')
Internally, BeautifulSoup uses Unicode, but you are passing it a byte string with non-ASCII characters in it. BeautifulSoup tries to decode that to Unicode for you and fails, as it doesn't know what encoding you used. By providing it with ready-made Unicode instead, you side-step the issue.
Working example:
>>> from BeautifulSoup import BeautifulSoup
>>> example = u'<P class=img_s>\n<IMG src="/pic/93/s67793.jpg"></P>'
>>> soup = BeautifulSoup(example)
>>> soup.findAll('a', title=u'\u67e5\u770b\u5927\u56fe')
[<img src="/pic/93/s67793.jpg" />]
Beautiful Soup 4.1.0 will automatically convert attribute values from UTF-8, which solves this problem.
I need to get all the text from a page using BeautifulSoup. BeautifulSoup's documentation shows that you can call soup.get_text() to do this. When I tried doing this on reddit.com, I got this error:
UnicodeEncodeError in soup.py:16
'cp932' codec can't encode character u'\xa0' in position 2262: illegal multibyte sequence
I get errors like that on most of the sites I checked.
I got similar errors when I did soup.prettify() too, but I fixed it by changing it to soup.prettify('UTF-8'). Is there any way to fix this? Thanks in advance!
Update June 24
I've found a bit of code that seems to work for other people, but I still need to use UTF-8 instead of the default. Code:
import re

texts = soup.findAll(text=True)

def visible(element):
    if element.parent.name in ['style', 'script', '[document]', 'head', 'title']:
        return False
    elif re.match('', str(element)):
        return False
    elif re.match('\n', str(element)):
        return False
    return True

visible_texts = filter(visible, texts)
print visible_texts
Error is different, though. Progress?
UnicodeEncodeError in soup.py:29
'ascii' codec can't encode character u'\xbb' in position 1: ordinal not in range
(128)
soup.get_text() returns a Unicode string; that's why you're getting the error.
You can solve this in a number of ways including setting the encoding at the shell level.
export PYTHONIOENCODING=UTF-8
You can reload sys and set the default encoding by including this in your script (Python 2 only):
import sys

if __name__ == "__main__":
    reload(sys)
    sys.setdefaultencoding("utf-8")
Or you can encode the string as utf-8 in code. For your reddit problem something like the following would work:
import urllib
from bs4 import BeautifulSoup
url = "https://www.reddit.com/r/python"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
# get text
text = soup.get_text()
print(text.encode('utf-8'))
You can't do str(text) if you may be dealing with unicode on the page. Instead of str(), use unicode().