I am trying to extract text from a Vietnamese website whose charset is UTF-8. However, the text I get is always mojibake rather than the Vietnamese text on the site, and I can't find a way to convert it to proper Unicode. As a result, I can't save it to a file as expected.
I know this is a very common problem with Unicode in Python, but I still hope someone can help me figure it out. Thanks.
My code:
import requests, re, io
import simplejson as json
from lxml import html, etree
base = "http://www.amthuc365.vn/cong-thuc/"
page = requests.get(base + "trang-" + str(1) + ".html")
pageTree = html.fromstring(page.text)
links = pageTree.xpath('//ul[contains(@class, "mt30")]/li/a/@href')
names = pageTree.xpath('//h3[@class="title"]/a/text()')
for name in names[:1]:
    print name
    # Là m bánh oreo nhân bÆ¡ Äáºu phá»ng thÆ¡m bùi
but what I need is "Làm bánh oreo nhân bơ đậu phộng thơm bùi"
Thanks.
Just switching from page.text to page.content should make it work: page.content hands lxml the raw bytes, so the parser can pick up the charset declared by the page itself instead of relying on requests' guess.
Also see:
What is the difference between 'content' and 'text'
HTML encoding and lxml parsing
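Applied to the code in the question, a minimal sketch of the fix (same URL and XPath as above):

import requests
from lxml import html

base = "http://www.amthuc365.vn/cong-thuc/"
page = requests.get(base + "trang-" + str(1) + ".html")
# Pass the raw bytes; lxml will honour the charset declared in the page itself
pageTree = html.fromstring(page.content)
names = pageTree.xpath('//h3[@class="title"]/a/text()')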
I'm trying to parse a page and I'm having some issues with special characters such as é, è, à, etc.
According to the Firefox page information tool, the page is encoded in UTF-8.
My code is the following:
import bs4
import requests
url = 'https://www.registreentreprises.gouv.qc.ca/RQEntrepriseGRExt/GR/GR99/GR99A2_05A_PIU_AfficherMessages_PC/ActiEcon.html'
page = requests.get(url)
cae_obj_soup = bs4.BeautifulSoup(page.text, 'lxml', from_encoding='utf-8')
list_all_domain = cae_obj_soup.find_all('th')
for element in list_all_domain:
    print(element.get_text())
The output is :
Pêche et piégeage
Exploitation forestière
I tried changing the encoding to iso-8859-1 (Latin-1, which covers French) and some other encodings without success. I read several posts on parsing special characters, and they basically state that it's a matter of selecting the right encoding. Is it possible that the special characters on some specific web pages can't be decoded correctly, or am I doing something wrong?
The requests library takes a strict approach to the decoding of web pages: if the HTTP headers don't declare a charset, it falls back to ISO-8859-1 for text responses. BeautifulSoup, on the other hand, has powerful tools for determining the encoding of text. So it's better to pass the raw bytes of the response to BeautifulSoup and let it work out the encoding.
>>> import requests
>>> from bs4 import BeautifulSoup
>>> r = requests.get('https://www.registreentreprises.gouv.qc.ca/RQEntrepriseGRExt/GR/GR99/GR99A2_05A_PIU_AfficherMessages_PC/ActiEcon.html')
>>> soup = BeautifulSoup(r.content, 'lxml')
>>> list_all_domain = soup.find_all('th')
>>> [e.get_text() for e in list_all_domain]
['Agriculture', "Services relatifs à l'agriculture", 'Pêche et piégeage', ...]
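If you want to see what BeautifulSoup settled on, it records the encoding it detected; the value shown here is illustrative, not verified against this page:

>>> soup.original_encoding
'utf-8'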
When I use the lxml library in Python to get data from an HTML page (a YouTube video title), it does not return the text correctly; it returns text like this: "à·à·à¶½à¶±à·à¶§à¶ºà¶±à".
Here is my code:
page = requests.get("https://www.youtube.com/watch?v=MZMapfEg5g8")
source = html.fromstring(page.content)
links = source.xpath('//link[#type="text/xml+oembed"]')
for href in links:
return href.attrib['title']
The language I need is Sinhala, and it's Unicode.
Use:
source = html.fromstring(page.text)
With the requests library, page.text will be the decoded text. page.content is the raw byte content, and page.encoding is the encoding of the page.
In summary:
import requests
from lxml import html
page = requests.get("https://www.youtube.com/watch?v=MZMapfEg5g8")
source = html.fromstring(page.text)
links = source.xpath('//link[@type="text/xml+oembed"]')
for href in links:
    print href.attrib['title']
Output (requires terminal/IDE that supports the characters/font):
පේස් බුක් මැසේජ් එක (Facebook Messege)
Apparently the title you get is a byte string; you need to .decode() it to turn it into Unicode. The title in question, "පේස් බුක් මැසේජ් එක", when encoded as UTF-8 and then decoded/displayed as Latin-1, gives the output you complain about.
A simple .decode('utf-8') should fix the problem.
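A quick sketch of that round trip (the u'' literal makes this run on both Python 2 and 3):

title = u'පේස් බුක් මැසේජ් එක'
# Encoding to UTF-8 and then misreading the bytes as Latin-1 reproduces the garbage
garbled = title.encode('utf-8').decode('latin-1')
# Reversing the mistake recovers the original text
assert garbled.encode('latin-1').decode('utf-8') == title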
I have installed Python 2.7.13, pip, and BeautifulSoup on Windows 10. I want to convert a big file with HTML entities into Unicode characters, and I am not sure how to go about it (I don't know much about Python). The file contents look like this:
<b>γέρων</b>, <i>οντος, ὁ</i>, Wurzel <i>ΓΕΡ</i>, verwandt mit <i>γέρας, γεραρός, γεραιός</i>
I can do small parts with EmEditor (using Edit > Encode/Decode Selection -> HTML/XML character reference to Unicode), but it is too slow and cannot cope with a big file.
I would be happy for any (offline) solution for this.
This is HTML-encoded; try this:

from HTMLParser import HTMLParser
import io

h = HTMLParser()
# Read and write as UTF-8 so non-ASCII characters survive the round trip
with io.open("myfile.txt", encoding="utf-8") as f:
    new_file_content = h.unescape(f.read())
with io.open("newfile.txt", "w", encoding="utf-8") as new_file:
    new_file.write(new_file_content)
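If you are on Python 3 instead, the same conversion lives in the standard html module; a sketch with the same hypothetical file names:

import html
import io

with io.open("myfile.txt", encoding="utf-8") as f:
    content = html.unescape(f.read())
with io.open("newfile.txt", "w", encoding="utf-8") as out:
    out.write(content)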
Python byte strings have a built-in method called .decode() that turns them into Unicode. Simply add it to the end of the line when you read in the file!
Example:
site_read = site_download.read().decode('utf-8')
import bs4

html = '''<b>γέρων</b>, <i>οντος, ὁ</i>, Wurzel <i>ΓΕΡ</i>, verwandt mit <i>γέρας, γεραρός, γεραιός</i>'''
soup = bs4.BeautifulSoup(html, 'lxml')
print(soup)

Output:
<html><body><b>γέρων</b>, <i>οντος, ὁ</i>, Wurzel <i>ΓΕΡ</i>, verwandt mit <i>γέρας, γεραρός, γεραιός</i></body></html>
From the BeautifulSoup documentation:

To parse a document, pass it into the BeautifulSoup constructor. You can pass in a string or an open filehandle:

from bs4 import BeautifulSoup

soup = BeautifulSoup(open("index.html"))  # you can open your file here

soup = BeautifulSoup("<html>data</html>")

First, the document is converted to Unicode, and HTML entities are converted to Unicode characters.
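As a small illustration of that entity conversion, here is the first word of the file in a hypothetical entity-encoded form (assuming the original file uses named/numeric references):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<b>&gamma;&#941;&rho;&omega;&nu;</b>", 'lxml')
print(soup.b.string)  # γέρων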
Thank you for your help. I did manage to do it quite easily with the latest version of EmEditor, which proved to be quite fast:
Select text > Edit > Encode/Decode Selection -> HTML/XML character reference to Unicode
I am using bs4 to extract text from a web document, but its output is very strange, like:
Ú©Ø¨Ú¾Û ÛÛ Ø¨Ø§Øª سÙجھ ÙÛÚº ÙÛ Ø§ÙØªÛ ØªÚ¾ÛÛ Ù¾Ú¾Ø± اÙÛØ³ØªÛ Ø§ÙÛØ³ØªÛ Ø¬Ø¨ Ú©ÚÚ¾ عÙ٠اÙÙÛ Ø´Ø±Ùع ÛÙØ¦Û ØªÙ Ø¨Ø§Øª Ú©ÚÚ¾ Ù¾ÙÛ Ù¾ÚÛÛÙÛک٠اب ÛÛ Ø¨Ø§Øª اÛØ³Û Ø³Ùجھ ÙÛÚº اÙØ¦Û Ú©Û Ø³ÙÚا اÙÙ¾ Ú©ÛÙÙÚ¯ÙÚº Ú©Û Ø¨Ú¾Û Ø´Ûئر کرÙÚºÛ ÚÙد رÙز Ùب٠ÙÛرا 8 Ù
I think it is some encoding issue. I am a new user of bs4; please guide me on how to decode it so that it displays as Urdu text.
Here is the document source whose title I want to extract.
The following is the code I am using to do it:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Burhan Khalid's answer works, but because the original web page is encoded in utf-8:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
You should update the requests response's encoding field to match the original page's encoding:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
# Update encoding to match source
r.encoding = "utf-8"
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Now any field you access will have the correct encoding rather than having to set to Urdu on a per field basis.
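If you'd rather not hard-code "utf-8", requests can also guess the encoding from the response body itself (via a bundled character-detection library), which makes the same idea more generic; a sketch:

r = requests.get(url)
# Let requests inspect the body instead of trusting the HTTP headers
r.encoding = r.apparent_encoding
data = r.text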
If you simply try to print the string, you'll get garbage characters out:
>>> import requests
>>> from bs4 import BeautifulSoup as bs4
>>> r = requests.get('http://blog.jang.com.pk/blog_details.asp?id=11058')
>>> s = bs4(r.text, 'lxml')
>>> print s.title.text
Ú©ÚÚ¾ تÙØ¬Û Ø§Ø³ طر٠بھÛ!
You need to encode it properly: the result is a Unicode string that requests decoded with the wrong codec, so encoding it back to ISO-8859-1 recovers the original UTF-8 bytes, which a UTF-8 terminal then displays correctly.
>>> print s.title.text.encode('iso-8859-1')
کچھ توجہ اس طرف بھی!
If it displays the glyphs correctly but in the wrong order (i.e., they are not right-to-left), then this is a problem with the operating system/terminal/shell/program you are using to run the application.
The output above is from gnome-terminal, which doesn't support Arabic RTL properly.
When I run the same code in mlterm, the text lays out right to left; the white box in that rendering appears because I am using an Arabic font, which doesn't have all the characters in the Urdu language.
I think what is happening is that there is some badly formed Unicode in the website response:
----> 1 r.content.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd8 in position 1106: invalid continuation byte
Hence the text is being decoded with requests' fallback codec (ISO-8859-1), which is obviously wrong here. You can work around this issue by calling decode with the option errors='ignore' (we use content rather than text because content is the raw binary response from the website):
data = r.content.decode(errors='ignore')
soup = BeautifulSoup(data,'lxml')
print (str(soup.title))
<title>کچھ توجہ اس طرف بھی!</title>
I'm writing a crawler with Python using BeautifulSoup, and everything was going swimmingly till I ran into this site:
http://www.elnorte.ec/
I'm getting the contents with the requests library:
r = requests.get('http://www.elnorte.ec/')
content = r.content
If I print the content variable at that point, all the Spanish special characters seem to be working fine. However, once I try to feed the content variable to BeautifulSoup, it all gets messed up:
soup = BeautifulSoup(content)
print(soup)
...
<a class="blogCalendarToday" href="/component/blog_calendar/?year=2011&month=08&day=27&modid=203" title="1009 artÃculos en este dÃa">
...
It's apparently garbling up all the Spanish special characters (accents and whatnot). I've tried doing content.decode('utf-8') and content.decode('latin-1'), and also tried messing around with the fromEncoding parameter to BeautifulSoup, setting it to fromEncoding='utf-8' and fromEncoding='latin-1', but still no dice.
Any pointers would be much appreciated.
In your case, this page has malformed UTF-8 data, which confuses BeautifulSoup and makes it think the page uses windows-1252. You can use this trick:
soup = BeautifulSoup.BeautifulSoup(content.decode('utf-8','ignore'))
By doing this you will discard any wrong symbols from the page source, and BeautifulSoup will guess the encoding correctly.
You can replace 'ignore' with 'replace' and check the text for replacement characters to see what has been discarded.
Actually, it is a very hard task to write a crawler that guesses the page encoding correctly every time (browsers are very good at this nowadays). You can use modules like 'chardet', but in your case, for example, it will guess the encoding as ISO-8859-2, which is not correct either.
If you really need to be able to get the encoding of any page a user could possibly supply, you should either build a multi-level detection function (try utf-8, try latin1, try etc.), as we did in our project, or use the detection code from Firefox or Chromium as a C module.
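A minimal sketch of such a multi-level function (the candidate list and its order are assumptions; order matters, because latin-1 accepts any byte sequence and so acts as the final fallback):

def decode_best_effort(raw):
    # Strict decodes in priority order; the first that succeeds wins
    for enc in ('utf-8', 'windows-1252', 'latin-1'):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            pass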
Could you try:

import urllib
import BeautifulSoup

r = urllib.urlopen('http://www.elnorte.ec/')
x = BeautifulSoup.BeautifulSoup(r.read())
r.close()
print x.prettify('latin-1')
I get the correct output.
Oh, and in this special case you could also use x.__str__(encoding='latin1').
I guess this is because the content is in ISO-8859-1(5) and the meta http-equiv content-type incorrectly says "UTF-8".
Could you confirm?
You can try this approach, which checks both the HTTP headers and the HTML itself for a declared encoding:
import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

headers = {"User-Agent": USERAGENT}  # USERAGENT is your user-agent string
resp = requests.get(url, headers=headers)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
I'd suggest taking a more methodical, foolproof approach.
import urllib
import chardet
from BeautifulSoup import BeautifulSoup

# my caricature of a detect-and-decode helper
def toUnicode(s):
    if type(s) is unicode:
        return s
    elif type(s) is str:
        d = chardet.detect(s)
        (cs, conf) = (d['encoding'], d['confidence'])
        if conf > 0.80:
            try:
                return s.decode(cs, 'replace')
            except Exception as ex:
                pass
    # force and return only ascii subset
    return unicode(''.join([i if ord(i) < 128 else ' ' for i in s]))

# 1. get the raw data
raw = urllib.urlopen('http://www.elnorte.ec/').read()

# 2. detect the encoding and convert to unicode
content = toUnicode(raw)

# 3. pass unicode to beautiful soup.
soup = BeautifulSoup(content)
No matter what you throw at this, it will always send valid Unicode to BeautifulSoup.
As a result, your parsed tree will behave much better and not fail in new and interesting ways every time you get new data.
Trial and error doesn't work in code - there are just too many combinations :-)
The first answer is right; these functions are sometimes effective too.
def __if_number_get_string(number):
    converted_str = number
    if isinstance(number, int) or \
            isinstance(number, float):
        converted_str = str(number)
    return converted_str

def get_unicode(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode
    return unicode(strOrUnicode, encoding, errors='ignore')

def get_string(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode.encode(encoding)
    return strOrUnicode
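Usage under Python 2 (these helpers rely on py2's str/unicode split; the café example is mine):

>>> get_unicode('caf\xc3\xa9')   # UTF-8 bytes arriving as a py2 str
u'caf\xe9'
>>> get_string(u'caf\xe9')
'caf\xc3\xa9'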