There are weird characters even though it's encoded utf-8 [duplicate] - python

This question already has answers here:
python requests.get() returns improperly decoded text instead of UTF-8?
(4 answers)
Closed last year.
I spent the last 3 hours trying to solve this problem even though there are plenty of solutions. They just don't work for me. I suspected the website I'm scraping might be corrupted, but Firefox shows the content perfectly. As I said, this has been asked before, but I think my code differs and I want to learn how.
from bs4 import BeautifulSoup
import requests
html_text = requests.get('link_for_scrapping').text
soup = BeautifulSoup(html_text, 'lxml')
print(soup.encoding)
soup.encoding = 'utf-8'
print(soup.encoding)
Output:
None
utf-8
Why is the encoding "None" at first? The content I'm looking for is written with Turkish characters, but in other people's code the encoding wasn't "None"; it was something like "ISO-xxxx-x" or something else.
Also, when I set it to "utf-8", nothing changes. The same weird characters are still there.
If we add this code, we can see it better:
menu = soup.find(class_="panel-grid-cell col-md-6").text
print(menu)
Output:
None
utf-8
1) 31.01.2022 Pazartesi Yemekler :
Mercimek Ãorba Fırın Patates Mor Dünya Salatası Sıhhiye Kırmızı Lahana Havuç Salata Elma *Etsiz PatatesKalori : 1099
Whether I change the encoding to utf-8 or not, the problem persists.
Expected Output:
None
utf-8
1) 31.01.2022 Pazartesi Yemekler :
Mercimek Çorba Fırın Patates Mor Dünya Salatası Sıhhiye Kırmızı Lahana Havuç Salata Elma *Etsiz PatatesKalori : 1099
Thanks in advance!

The Problem:
import requests
r = requests.get('link')
print(r.encoding)
Output: ISO-8859-1
The server is not sending an appropriate charset header, and requests doesn't parse <meta charset="utf-8" />, so it falls back to ISO-8859-1.
Solution 1: Tell requests what encoding to use
r.encoding = 'utf-8'
html_text = r.text
Solution 2: Do the decoding yourself
html_text = r.content.decode('utf-8')
Solution 3: Have requests take a guess
r.encoding = r.apparent_encoding
html_text = r.text
In any case, html_text will now contain the (correctly decoded) html source and can be fed to BeautifulSoup.
The encoding setting of BeautifulSoup didn't help, because at that point you already had a wrongly decoded string!
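The mechanism behind all three solutions can be reproduced offline. A minimal sketch (the byte string below stands in for r.content; no request is made):

```python
# UTF-8 bytes for Turkish text, as the server would send them
raw = "Mercimek Çorba".encode("utf-8")

# What requests does when it falls back to ISO-8859-1:
wrong = raw.decode("iso-8859-1")   # mojibake: "Ç" becomes "Ã" plus a stray byte

# Solution 2 applied by hand: decode the raw bytes as UTF-8
right = raw.decode("utf-8")
print(right)  # Mercimek Çorba
```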

Related

BeautifulSoup returning messed-up arabic characters

I am scraping an Arabic website using BeautifulSoup, but the Arabic characters returned are inverted and come back as separate chars (pasting it here correctly encodes it, so you have to trust me on that :).
The website's charset is UTF-8:
<meta charset=UTF-8>
This is how I am parsing it:
url = 'https://new.kooora4live.net/matches-today-1/'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'lxml', from_encoding='utf-8')
Writing the requested HTML to a file with utf-8 encoding correctly formats it in the file, so it seems to be an issue with BeautifulSoup.
Any idea what I am doing wrong or how to fix it?
....
Update:
Encoding with utf-8-sig also doesn't work.
You need to set the page encoding to match its apparent encoding.
Try this:
import requests
from bs4 import BeautifulSoup
page = requests.get('https://new.kooora4live.net/matches-today-1/')
page.encoding = page.apparent_encoding
soup = BeautifulSoup(page.content, 'lxml').select("a")
print("\n".join(a.getText(strip=True) for a in soup))
This will print out:
الأخبار
أهم المباريات
جداول
ترتيب الفرق
ترتيب الهدافين
مباريات الأمس
مباريات اليوم
مباريات الغد
جمهورية التشيك7:00 PM0-0لم تبدأ بعدالدنماركبي ان ماكس 1احمد البلوشييورو 2020
اوكرانيا10:00 PM0-0لم تبدأ بعدإنجلترابي ان ماكس 1حسن العيدروسيورو 2020
and more ...

Issue with parsing special characters in a utf-8 encoded page with bs4

I'm trying to parse a page and I'm having some issues with special characters such as é, è, à, etc.
According to the Firefox page information tool, the page is encoded in UTF-8.
My code is the following:
import bs4
import requests
url = 'https://www.registreentreprises.gouv.qc.ca/RQEntrepriseGRExt/GR/GR99/GR99A2_05A_PIU_AfficherMessages_PC/ActiEcon.html'
page = requests.get(url)
cae_obj_soup = bs4.BeautifulSoup(page.text, 'lxml', from_encoding='utf-8')
list_all_domain = cae_obj_soup.find_all('th')
for element in list_all_domain:
    print(element.get_text())
The output is :
Pêche et piégeage
Exploitation forestière
I tried changing the encoding to iso-8859-1 (a French-friendly encoding) and some other encodings without success. I read several posts on parsing special characters, and they basically state that it's a matter of selecting the right encoding. Is it possible that I can't correctly decode the special characters on some specific webpages, or am I doing something wrong?
The requests library takes a strict approach to the decoding of web pages. On the other hand, BeautifulSoup has powerful tools for determining the encoding of text. So it's better to pass the raw response from the request to BeautifulSoup, and let BeautifulSoup try to determine the encoding.
>>> r = requests.get('https://www.registreentreprises.gouv.qc.ca/RQEntrepriseGRExt/GR/GR99/GR99A2_05A_PIU_AfficherMessages_PC/ActiEcon.html')
>>> soup = BeautifulSoup(r.content, 'lxml')
>>> list_all_domain = soup.find_all('th')
>>> [e.get_text() for e in list_all_domain]
['Agriculture', "Services relatifs à l'agriculture", 'Pêche et piégeage', ...]

How to extract Urdu text from a webpage using beautifulsoup

I am using bs4 to extract text from a web document, but its output is very strange, like:
Ú©Ø¨Ú¾Û ÛÛ Ø¨Ø§Øª سÙجھ ÙÛÚº ÙÛ Ø§ÙØªÛ ØªÚ¾ÛÛ Ù¾Ú¾Ø± اÙÛØ³ØªÛ Ø§ÙÛØ³ØªÛ Ø¬Ø¨ Ú©ÚÚ¾ عÙ٠اÙÙÛ Ø´Ø±Ùع ÛÙØ¦Û ØªÙ Ø¨Ø§Øª Ú©ÚÚ¾ Ù¾ÙÛ Ù¾ÚÛÛÙÛک٠اب ÛÛ Ø¨Ø§Øª اÛØ³Û Ø³Ùجھ ÙÛÚº اÙØ¦Û Ú©Û Ø³ÙÚا اÙÙ¾ Ú©ÛÙÙÚ¯ÙÚº Ú©Û Ø¨Ú¾Û Ø´Ûئر کرÙÚºÛ ÚÙد رÙز Ùب٠ÙÛرا 8 Ù
I think it is some encoding issue. I am a new user of bs4. Please guide me on how to decode it so it displays as Urdu text.
Here is the document source whose title I want to extract.
The following code is what I am using to do it:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Burhan Khalid's answer works, but because the original web page is encoded in utf-8:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
You should update the requests response's encoding to match the original page's encoding:
from bs4 import BeautifulSoup
import urllib2
import requests
url="http://blog.jang.com.pk/blog_details.asp?id=11058"
r = requests.get(url)
# Update encoding to match source
r.encoding = "utf-8"
data = r.text
soup = BeautifulSoup(data,'lxml')
print str(soup.title)
Now any field you access will have the correct encoding, rather than you having to set the encoding on a per-field basis.
If you simply try to print the string, you'll get garbage characters out:
>>> import requests
>>> from bs4 import BeautifulSoup as bs4
>>> r = requests.get('http://blog.jang.com.pk/blog_details.asp?id=11058')
>>> s = bs4(r.text, 'lxml')
>>> print s.title.text
Ú©ÚÚ¾ تÙØ¬Û Ø§Ø³ طر٠بھÛ!
You need to encode it properly, since the result is a unicode string containing wrongly decoded text.
>>> print s.title.text.encode('iso-8859-1')
کچھ توجہ اس طرف بھی!
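In Python 3, where print no longer emits raw bytes, the same trick becomes an encode/decode round-trip: re-encoding the mojibake with the wrong codec recovers the original UTF-8 bytes, which can then be decoded correctly. A sketch with a hypothetical sample ("DÃ¼nya" should read "Dünya"):

```python
# Mojibake produced by decoding UTF-8 bytes as ISO-8859-1
mojibake = "DÃ¼nya"

# Undo the damage: the wrong-codec encode recovers the original bytes,
# then decoding those bytes as UTF-8 yields the intended text
fixed = mojibake.encode("iso-8859-1").decode("utf-8")
print(fixed)  # Dünya
```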
If it displays the glyphs correctly but in the wrong order (i.e., they are not right-to-left), then this is a problem with the operating system/terminal/shell/program you are using to run the application.
The above is from gnome-terminal, which doesn't support Arabic RTL properly.
If I run the same code in mlterm:
The white box is there because I am using an Arabic font, which doesn't have all the characters in the Urdu language.
I think what is happening is that there is some badly formed Unicode in the website response:
----> 1 r.content.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd8 in position 1106: invalid continuation byte
Hence the text is being decoded using the ANSI codec, which is obviously wrong. You can work around this issue by calling decode with the option errors='ignore' (we use content rather than text because content is the raw binary response from the website):
data = r.content.decode(errors='ignore')
soup = BeautifulSoup(data,'lxml')
print (str(soup.title))
<title>کچھ توجہ اس طرف بھی!</title>

Python: UnicodeDecodeError: 'utf-8' codec can't decode byte...invalid continuation byte

I'm building a web scraper using BeautifulSoup on Python 3.3
However, I get a problem which prevents me from getting a valid string that I can use with BeautifulSoup. That is:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe0 in position 7047: invalid continuation byte
I know there are dozens of similar questions but I haven't so far found a method that can help me to diagnose what's wrong with the following code:
import urllib.request
URL = "<url>" # sorry, I cannot show the url for privacy reasons, but it's a normal html document
page = urllib.request.urlopen(URL)
page = page.read().decode("utf-8") # from bytes to <source encodings>
As I guessed, I noticed this error occurs only with some URLs and not with others. Stranger still, I wasn't getting this error until yesterday; then today I ran the program again and the error popped up.
Any clue on how to diagnose the error?
You should not decode the response. First of all, you are incorrectly assuming the response is UTF-8 encoded (it is not, as the error shows), but more importantly, BeautifulSoup will detect the encoding for you. See the Encodings section of the BeautifulSoup documentation.
Pass a byte string to BeautifulSoup and it'll use any <meta> header proclaiming the correct encoding, or do a great job of autodetecting the encoding for you.
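This behavior can be seen without any network access. A small sketch (hypothetical document; html.parser is used so it runs without lxml installed):

```python
from bs4 import BeautifulSoup

# Hypothetical document: the <meta> tag declares the encoding
raw = '<html><head><meta charset="utf-8"></head><body>Çorba</body></html>'.encode("utf-8")

# Feed the raw bytes to BeautifulSoup and let it detect the codec
soup = BeautifulSoup(raw, "html.parser")
print(soup.original_encoding)   # the codec BeautifulSoup settled on
print(soup.body.get_text())     # Çorba
```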
In the event that auto-detection fails, you can always fall back to the server-provided encoding:
encoding = page.info().get_charset()
page = page.read()
soup = BeautifulSoup(page)
if encoding is not None and soup.original_encoding != encoding:
    print('Server and BeautifulSoup disagree')
    print('Content-Type states it is {}, BS4 thinks it is {}'.format(encoding, soup.original_encoding))
    print('Forcing encoding to server-supplied codec')
    soup = BeautifulSoup(page, from_encoding=encoding)
This still leaves the actual decoding to BeautifulSoup, but if the server included a charset parameter in the Content-Type header then the above assumes that the server is correctly configured and forces BeautifulSoup to use that encoding.

Python and BeautifulSoup encoding issues [duplicate]

This question already has answers here:
Python correct encoding of Website (Beautiful Soup)
(3 answers)
Closed 1 year ago.
I'm writing a crawler with Python using BeautifulSoup, and everything was going swimmingly till I ran into this site:
http://www.elnorte.ec/
I'm getting the contents with the requests library:
r = requests.get('http://www.elnorte.ec/')
content = r.content
If I print the content variable at that point, all the Spanish special characters seem to be working fine. However, once I try to feed the content variable to BeautifulSoup, it all gets messed up:
soup = BeautifulSoup(content)
print(soup)
...
<a class="blogCalendarToday" href="/component/blog_calendar/?year=2011&month=08&day=27&modid=203" title="1009 artículos en este día">
...
It's apparently garbling all the Spanish special characters (accents and whatnot). I've tried content.decode('utf-8') and content.decode('latin-1'), and also tried messing around with the fromEncoding parameter to BeautifulSoup, setting it to fromEncoding='utf-8' and fromEncoding='latin-1', but still no dice.
Any pointers would be much appreciated.
In your case, this page has wrong utf-8 data, which confuses BeautifulSoup and makes it think your page uses windows-1252. You can do this trick:
soup = BeautifulSoup.BeautifulSoup(content.decode('utf-8','ignore'))
by doing this you will discard any wrong symbols from the page source and BeautifulSoup will guess the encoding correctly.
You can replace 'ignore' by 'replace' and check text for '?' symbols to see what has been discarded.
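A quick sketch of that check (note that decoding with 'replace' actually inserts U+FFFD, the Unicode replacement character, rather than a literal '?'):

```python
# Hypothetical response bytes with one invalid UTF-8 sequence in the middle
content = b"caf\xc3\xa9 \xe9 menu"   # the lone \xe9 is not valid UTF-8

# 'replace' substitutes U+FFFD for each broken sequence instead of failing
cleaned = content.decode("utf-8", "replace")
print(cleaned.count("\ufffd"))  # 1 -- one broken sequence was replaced
```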
Actually, it's a very hard task to write a crawler that can guess the page encoding correctly every time (browsers are very good at this nowadays). You can use modules like 'chardet', but in your case, for example, it will guess the encoding as ISO-8859-2, which is not correct either.
If you really need to be able to get the encoding for any page a user can possibly supply, you should either build a multi-level (try utf-8, try latin1, try etc...) detection function (like we did in our project) or use some detection code from Firefox or Chromium as a C module.
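A minimal sketch of such a multi-level fallback (the codec order is an assumption; ISO-8859-1 accepts every byte value, so it works as a final catch-all):

```python
def decode_best_effort(raw: bytes) -> str:
    # Try strict UTF-8 first, then Windows-1252; ISO-8859-1 never
    # raises on any byte, so it serves as the last resort.
    for codec in ("utf-8", "cp1252"):
        try:
            return raw.decode(codec)
        except UnicodeDecodeError:
            continue
    return raw.decode("iso-8859-1")

print(decode_best_effort("Çorba".encode("cp1252")))  # Çorba
```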
could you try:
r = urllib.urlopen('http://www.elnorte.ec/')
x = BeautifulSoup.BeautifulSoup(r.read())
r.close()
print x.prettify('latin-1')
I get the correct output.
Oh, in this special case you could also use x.__str__(encoding='latin1').
I guess this is because the content is in ISO-8859-1(5) and the meta http-equiv content-type incorrectly says "UTF-8".
Could you confirm?
You can try this, which works for every encoding:
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector
import requests
headers = {"User-Agent": USERAGENT}
resp = requests.get(url, headers=headers)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
I'd suggest taking a more methodical, foolproof approach.
import urllib
import chardet

# 1. get the raw data
raw = urllib.urlopen('http://www.elnorte.ec/').read()
# 2. detect the encoding and convert to unicode
content = toUnicode(raw)  # see my caricature for toUnicode below
# 3. pass unicode to beautiful soup.
soup = BeautifulSoup(content)

def toUnicode(s):
    if type(s) is unicode:
        return s
    elif type(s) is str:
        d = chardet.detect(s)
        (cs, conf) = (d['encoding'], d['confidence'])
        if conf > 0.80:
            try:
                return s.decode(cs, 'replace')
            except Exception as ex:
                pass
    # force and return only ascii subset
    return unicode(''.join([i if ord(i) < 128 else ' ' for i in s]))
You can reason that no matter what you throw at this, it will always send valid unicode to bs.
As a result, your parsed tree will behave much better and not fail in newer, more interesting ways every time you have new data.
Trial and error doesn't work in code - there are just too many combinations :-)
The first answer is right; these functions are sometimes effective.
def __if_number_get_string(number):
    converted_str = number
    if isinstance(number, int) or \
            isinstance(number, float):
        converted_str = str(number)
    return converted_str

def get_unicode(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode
    return unicode(strOrUnicode, encoding, errors='ignore')

def get_string(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode.encode(encoding)
    return strOrUnicode
