Decoding UTF-8 in parsed data from a website via Python

I am trying to parse data from a website and I am getting an error.
Here's my Python code:
import urllib.request
import re

url = "http://ihned.cz"
req = urllib.request.Request(url)
resp = urllib.request.urlopen(req)
respData = resp.read().decode('utf-8')
#print(respData)  # HTML code
authors = re.findall(r'data-author="(.*?)"', respData)
for author in authors:
    print(author)
And here's the error.
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe1 in position 368: invalid continuation byte
Can you please help me?
Thank you.

The source of that website says charset="windows-1250". Try decode('windows-1250').
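For reference, a minimal sketch of that fix, preferring whatever charset the server declares in the Content-Type header and falling back to windows-1250 (the fallback value is an assumption based on the page source):

import urllib.request
import re

url = "http://ihned.cz"
resp = urllib.request.urlopen(url)

# Use the server-declared charset if present; otherwise assume windows-1250
charset = resp.headers.get_content_charset() or 'windows-1250'
respData = resp.read().decode(charset)

authors = re.findall(r'data-author="(.*?)"', respData)
for author in authors:
    print(author)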

Related

'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

I am trying to read the page source of a web page with this code:
from urllib.request import urlopen

url = "http://www.tsetmc.com/Loader.aspx?ParTree=15"
page = urlopen(url)
htmlSource = page.read().decode("utf-8")
f = open("output.txt", 'w')
f.write(htmlSource)
but I get this error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
I don't know if this page is encoded in UTF-8 or not.
Thanks for your help.
I don't know if this page is encoded in UTF-8 or not.
If you don't know how the page is encoded, you can just write the bytes to the file without trying to decode them:
page = urlopen(url)
htmlSource = page.read()
f=open("output.txt",'wb')
f.write(htmlSource)
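Incidentally, byte 0x8b at position 1 hints that the body is gzip-compressed: the gzip magic number is the byte pair 0x1f 0x8b. If that is the case here (an assumption; check the Content-Encoding response header), the bytes can be decompressed before decoding:

import gzip
from urllib.request import urlopen

url = "http://www.tsetmc.com/Loader.aspx?ParTree=15"
page = urlopen(url)
raw = page.read()

# 0x1f 0x8b at the start of the body is the gzip magic number
if raw[:2] == b'\x1f\x8b':
    raw = gzip.decompress(raw)

# The page's real charset may still differ from UTF-8
htmlSource = raw.decode("utf-8", errors="replace")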

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 261060: character maps to <undefined>

I'm currently trying to extract the href values (emails) from HTML files provided by a client of my company. They sent me six months' worth of data, but I'm unable to extract the emails from two particular files. I keep getting the same UnicodeDecodeError every time, no matter what I try. According to my analysis, these files are encoded in UTF-8. I'll leave the code below:
from bs4 import BeautifulSoup as bsoup

url = r"C:\Users\Maximiliano\Documents\enero.html"
soup = bsoup(open(url).read())
data = []
for p in soup.find_all("a"):
    datos = p.get("href")
    if datos[0] != "m":
        pass
    else:
        data.append(datos)
print(data)
I've already tried adding a .decode("utf-8") after the read, but it doesn't change anything.
Please help me!
file: https://gofile.io/?c=SFM1T3
As suggested in the comments, you simply have to add the encoding parameter:
soup = bsoup(open(url, encoding="utf-8").read())
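The reason this matters: without an encoding argument, open() falls back to the locale's preferred encoding, which on Windows is typically cp1252 (the 'charmap' codec in the traceback), and byte 0x81 has no mapping in that code page. You can check what your platform defaults to:

import locale
print(locale.getpreferredencoding())  # e.g. 'cp1252' on many Windows setups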

Ascii codec can't encode character error, content.decode('utf-8','ignore') won't work. Any other options?

Example code:
import bs4 as bs
import urllib.request
from urllib.request import Request

req = Request("https://www.openbugbounty.org/researchers/Spam404/vip/page/1/",
              headers={'User-Agent': 'Mozilla/5.0'})
sauce = urllib.request.urlopen(req).read()
soup = bs.BeautifulSoup(sauce, 'lxml')
print(soup)
My output gives me the following error:
File "/Users/student/Desktop/AutoBots/kbfza2.py", line 15, in
print(soup)
UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 5194: ordinal not in range(128)
After searching for a solution online for a while, it seemed that changing my soup line to:
soup = bs.BeautifulSoup(sauce.decode('utf-8', 'ignore'), 'lxml')
would solve this for me but this hasn't fixed anything for me.
Am I mistaken in thinking that the decode function with the ignore argument should allow me to print(soup) without error even if it isn't successfully decoded completely?
I just re-read your question, and I believe you are trying to print Unicode text to a console that doesn't support that character set (I couldn't reproduce the error with the code you posted).
You may need to force your console output to UTF-8 encoding, or if you are using an IDE like Sublime Text, change it to render UTF-8.
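A minimal sketch of forcing UTF-8 output (sys.stdout.reconfigure() is available on Python 3.7+; on older versions, set the PYTHONIOENCODING environment variable before launching Python instead):

import sys

# Re-wrap stdout so print() encodes output as UTF-8 regardless of the console locale
sys.stdout.reconfigure(encoding='utf-8')

print('\xa0')  # the non-breaking space from the traceback now prints instead of raising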

error: UnicodeEncodeError: 'gbk' codec can't encode character

I'm a Python beginner. I wrote the following code:
from bs4 import BeautifulSoup
import requests

url = "http://www.google.com"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
links = soup.find_all("a")
for link in links:
    print(link.text)
When I run this .py file in Windows PowerShell, the print(link.text) call causes the following error:
UnicodeEncodeError: 'gbk' codec can't encode character '\xbb' in position 5: illegal multibyte sequence
I know the error is caused by some Chinese characters, and it seems like I should use 'decode' or 'ignore', but I don't know how to fix my code. Help please! Thanks!
If you don't wish to display those special characters, you can ignore them with:
print(link.text.encode(errors="ignore"))
You can also encode the string as UTF-8:
for link in links:
    print(link.text.encode('utf8'))
But a better approach is:
response = requests.get(url)
soup = BeautifulSoup(response.text.encode("utf8"), "html.parser")
To understand more about the problem you are facing, you should look at this Stack Overflow answer.
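Note that the encode() calls above make print() show a bytes literal like b'...'. If you want readable text while still avoiding the crash, one option (a sketch, assuming the console codec is gbk as in the traceback) is a round trip with errors='replace':

for link in links:
    # Characters gbk cannot represent become '?' instead of raising an error
    print(link.text.encode('gbk', errors='replace').decode('gbk'))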

Python: UnicodeDecodeError: 'utf-8' codec can't decode byte...invalid continuation byte

I'm building a web scraper using BeautifulSoup on Python 3.3
However, I get a problem which prevents me from getting a valid string that I can use with BeautifulSoup. That is:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe0 in position 7047: invalid continuation byte
I know there are dozens of similar questions but I haven't so far found a method that can help me to diagnose what's wrong with the following code:
import urllib.request
URL = "<url>" # sorry, I cannot show the url for privacy reasons, but it's a normal html document
page = urllib.request.urlopen(URL)
page = page.read().decode("utf-8") # from bytes to <source encodings>
As I guessed, this error occurs only with some URLs and not with others. I wasn't getting this error until yesterday; then today I ran the program again and it popped up.
Any clue on how to diagnose the error?
You should not decode the response. First of all, you are incorrectly assuming the response is UTF-8 encoded (it is not, as the error shows), but more importantly, BeautifulSoup will detect the encoding for you. See the Encodings section of the BeautifulSoup documentation.
Pass a byte string to BeautifulSoup and it'll use any <meta> header proclaiming the correct encoding, or do a great job of autodetecting the encoding for you.
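A minimal sketch of that approach (the parser name is spelled out here; "<url>" stands in for the address from the question):

from urllib.request import urlopen
from bs4 import BeautifulSoup

URL = "<url>"  # the address from the question
page = urlopen(URL)
soup = BeautifulSoup(page.read(), 'html.parser')  # pass raw bytes; BS4 detects the encoding
print(soup.original_encoding)  # the encoding BS4 settled on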
In the event that auto-detection fails, you can always fall back to the server-provided encoding:
encoding = page.info().get_charset()
page = page.read()
soup = BeautifulSoup(page)
if encoding is not None and soup.original_encoding != encoding:
    print('Server and BeautifulSoup disagree')
    print('Content-Type states it is {}, BS4 thinks it is {}'.format(encoding, soup.original_encoding))
    print('Forcing encoding to server-supplied codec')
    soup = BeautifulSoup(page, from_encoding=encoding)
This still leaves the actual decoding to BeautifulSoup, but if the server included a charset parameter in the Content-Type header then the above assumes that the server is correctly configured and forces BeautifulSoup to use that encoding.
