UTF-8 error when opening csv file in pandas on Mac - Python

I am trying to open a csv file with Japanese characters using utf8 on my mac.
The code that I am using is as follows:
foo = pd.read_csv("filename.csv", encoding = 'utf8')
However, I have been getting the following error message.
'utf-8' codec can't decode byte 0x96 in position 0
I've tried looking around, but a lot of the solutions seem to be for Windows, and I haven't had any success with the other solutions yet.
Appreciate the help!

It seems that your file really does contain bytes that aren't valid UTF-8. The correct encoding for this file strongly depends on its content, but in the most common case, 0x96 can be decoded with CP-1252. So, just try decoding it like the following:
foo = pd.read_csv("filename.csv", encoding='cp1252')
If you don't know the original encoding of the file, you can try to detect it with a third-party lib such as chardet.
I may be able to help a little more if you upload a chunk of the file that reproduces the problem.
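For example, a minimal detection sketch with chardet (pip install chardet; pd is the pandas import from the question):
import chardet

# Read the raw bytes and let chardet guess the encoding.
with open("filename.csv", "rb") as f:
    result = chardet.detect(f.read())
print(result)  # e.g. {'encoding': 'SHIFT_JIS', 'confidence': 0.99, ...}
foo = pd.read_csv("filename.csv", encoding=result['encoding'])
Since the question mentions Japanese characters, Shift_JIS (cp932) is another plausible candidate worth trying by hand.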

Related

I keep getting the "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3131: invalid start byte" error when I write simple code [duplicate]

https://github.com/affinelayer/pix2pix-tensorflow/tree/master/tools
An error occurred when compiling "process.py" on the above site.
python tools/process.py --input_dir data --operation resize --output_dir data2/resize
data/0.jpg -> data2/resize/0.png
Traceback (most recent call last):
File "tools/process.py", line 235, in <module>
main()
File "tools/process.py", line 167, in main
src = load(src_path)
File "tools/process.py", line 113, in load
contents = open(path).read()
File"/home/user/anaconda3/envs/tensorflow_2/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
What is the cause of the error?
Python's version is 3.5.2.
Python tries to convert a byte array (a bytes object which it assumes to be a utf-8-encoded string) to a unicode string (str). This process is, of course, a decoding according to UTF-8 rules. When it tries this, it encounters a byte sequence which is not allowed in utf-8-encoded strings (namely this 0xff at position 0).
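You can reproduce the failure in isolation; 0xff 0xd8 0xff happens to be the magic number at the start of a JPEG file, which matches the data/0.jpg in the trace:
>>> b'\xff\xd8\xff'.decode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte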
Since you did not provide any code we could look at, we only could guess on the rest.
From the stack trace we can assume that the triggering action was reading from a file (contents = open(path).read()). I propose recoding this in a fashion like this:
with open(path, 'rb') as f:
    contents = f.read()
The b in the mode specifier in the open() states that the file shall be treated as binary, so contents will remain a bytes object. No decoding attempt will happen this way.
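If you do need text rather than raw bytes, a follow-up sketch (the candidate list here is my assumption, not part of the answer) is to try a few encodings in order:
with open(path, 'rb') as f:
    contents = f.read()
for enc in ('utf-8', 'cp1252', 'latin1'):  # latin1 maps all 256 bytes, so it is the catch-all
    try:
        text = contents.decode(enc)
        break
    except UnicodeDecodeError:
        continue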
Use this solution if you want to strip out (ignore) the offending characters and return the string without them. Only use this if your need is to strip them, not convert them.
with open(path, encoding="utf8", errors='ignore') as f:
    contents = f.read()
Using errors='ignore' you'll just lose some characters, but that's fine if you don't care about them, as they seem to be extra characters originating from the bad formatting and programming of the clients connecting to my socket server. Then it's an easy, direct solution.
Use the encoding format ISO-8859-1 to solve the issue.
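Applied to the pandas call from the question, that would be:
foo = pd.read_csv("filename.csv", encoding='ISO-8859-1')
Note that ISO-8859-1 maps every possible byte to a character, so it never raises a decode error, though it may produce garbled text if the file is actually in another encoding.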
Had an issue similar to this; I ended up using UTF-16 to decode. My code is below.
with open(path_to_file, 'rb') as f:
    contents = f.read()
contents = contents.rstrip(b"\n").decode("utf-16")
contents = contents.split("\r\n")
This takes the file contents in as raw bytes, decodes them from UTF-16 into a string, and then splits that string into lines. (contents is still a bytes object when rstrip is called, so it takes a bytes argument, b"\n".)
I've come across this thread when suffering the same error. After doing some research, I can confirm this is an error that happens when you try to decode a UTF-16 file with UTF-8.
With UTF-16, the first character (2 bytes in UTF-16) is a Byte Order Mark (BOM), which is used as a decoding hint and doesn't appear as a character in the decoded string. This means the first byte will be either FE or FF, and the second, the other.
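A small sketch of sniffing for that BOM before deciding how to decode (the UTF-8 fallback is my assumption):
with open(path, 'rb') as f:
    head = f.read(2)
if head in (b'\xff\xfe', b'\xfe\xff'):
    encoding = 'utf-16'  # BOM present; the utf-16 codec consumes it
else:
    encoding = 'utf-8'   # assumption: no BOM, fall back to UTF-8
with open(path, encoding=encoding) as f:
    text = f.read()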
Heavily edited after I found out the real answer
It simply means that one chose the wrong encoding to read the file.
On Mac, use file -I file.txt to find the correct encoding. On Linux, use file -i file.txt.
I had a similar issue with PNG files, and I tried the solutions above without success. This one worked for me in Python 3.8:
with open(path, "rb") as f:
    a = f.read()
Then use only
base64.b64decode(a)
instead of
base64.b64decode(a).decode('utf-8')
since the decoded result is binary image data, not UTF-8 text.
This is due to using a different encoding method when reading the file. In Python, file contents are decoded as UTF-8 (Unicode) by default. However, that may not work on various platforms.
I propose an encoding method which can help you solve this if 'utf-8' does not work:
import csv

with open(path, newline='', encoding='cp1252') as csvfile:
    reader = csv.reader(csvfile)
It should work if you change the encoding method here. You can also find other encoding methods in standard-encodings, if the above doesn't work for you.
Those getting similar errors while handling data frames with pandas can use the following solution.
Example solution:
df = pd.read_csv("File path", encoding='cp1252')
I had this UnicodeDecodeError while trying to read a '.csv' file using pandas.read_csv(). In my case, I could not manage to overcome this issue using other encoding types. But instead of using
pd.read_csv(filename, delimiter=';')
I used:
pd.read_csv(open(filename, 'r'), delimiter=';')
which just seems to work fine for me.
Note that in the open() function you should use 'r' instead of 'rb': 'rb' returns a bytes object, which causes this decoder error to happen in the first place, and that is the same problem as in read_csv(). 'r' returns str, which is needed since our data is in a .csv, and using the default encoding='utf-8' parameter we can easily parse the data with the read_csv() function.
If you are receiving data from a serial port, make sure you are using the right baud rate (and the other configs): decoding using utf-8 with the wrong serial config will generate the same error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
To check your serial port config on Linux, use: stty -F /dev/ttyUSBX -a
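A minimal sketch of the receiving side, assuming the pyserial package and a hypothetical device at /dev/ttyUSB0:
import serial  # pip install pyserial

# With the wrong baud rate, readline() returns garbage bytes that fail to decode.
ser = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=1)
raw = ser.readline()
line = raw.decode('utf-8', errors='replace')  # replace undecodable bytes instead of crashing
ser.close()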
I had a similar issue and searched all over the internet for this problem.
If you have this problem, just copy your HTML code into a new HTML file and use the normal <meta charset="UTF-8">
and it will work.
Just create a new HTML file in the same location and use a different name.
Check the path of the file to be read. My code kept on giving me errors until I changed the path name to the present working directory. The error was:
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
If you are on a Mac, check for a hidden file, .DS_Store. After removing that file my program worked.
I had a similar problem.
Solved it by:
import io
with io.open(filename, 'r', encoding='utf-8') as fn:
    lines = fn.readlines()
However, I had another problem. Some HTML files (in my case) were not UTF-8, so I received a similar error. When I excluded those HTML files, everything worked smoothly.
So, except from fixing the code, check also the files you are reading from, maybe there is an incompatibility there indeed.
You have to use the encoding latin1 to read this file, as there are some special characters in it. Use the code snippet below to read the file.
The problem here is the encoding type. When Python can't convert the data to be read, it gives an error.
You can use latin1 or other encoding values.
I say try and test to find the right one for your dataset.
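A minimal sketch of such a snippet (filename is a placeholder):
df = pd.read_csv(filename, encoding='latin1')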
I had the same issue when processing a file generated from Linux. It turned out to be related to files containing question marks.
The following code worked in my case:
df = pd.read_csv(filename, sep='\t', encoding='cp1252')
If possible, open the file in a text editor and try to change the encoding to UTF-8. Otherwise do it programmatically at the OS level.

Pandas dataframe imports and renders incorrectly and causes UnicodeDecodeError

I am trying to import a csv that contains Chinese characters.
This command is to download the csv file:
!wget -O wm.csv https://raw.githubusercontent.com/hierarchyJK/compare-LIBSVM-with-Linear-and-Gassian-Kernel/master/%E8%A5%BF%E7%93%9C3.0.csv
The repository is not mine, so I am not sure if it is encoded the right way.
What I can be sure of is that it renders correctly.
This code
pd.read_csv('wm.csv',encoding = 'utf-8')
causes this error:
'utf-8' codec can't decode byte 0xb1 in position 0: invalid start byte
I've searched this error, but didn't find an appropriate root cause or solution.
This code executed properly:
pd.read_csv('wm.csv',encoding = 'cp1252')
but renders garbled text.
The system renders Chinese characters correctly.
With the Python open command
with open('wm.csv', 'r', encoding='cp1252') as f:
    for line in f.readlines():
        print(line)
        break
this code renders something garbled without any warning or error:
±àºÅ,É«Ôó,¸ùµÙ,ÇÃÉù,ÎÆÀí,Æ겿,´¥¸Ð,ÃܶÈ,º¬ÌÇÂÊ,ºÃ¹Ï,Ðò¹Øϵ
The encoding is 'GB18030'. I found this by opening the file in a text editor and checking the suggested encoding. GitHub also shows you the encoding when you go to the GitHub link and click on "edit file".
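Applied to the call from the question:
df = pd.read_csv('wm.csv', encoding='GB18030')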
You should use the encoding="GBK". Hope this will help.
df = pd.read_csv('wm.csv', encoding="GBK")
Here is a link with all of the standard encodings. Latin_1 has worked well for me when I have had issues, but in your case you can try utf_16_be. Good luck!
Standard Encodings

Reading erroneous data from csv file using read_csv from pandas

I am trying to read data from a huge csv file I have. It is showing me this error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xae in position 13: invalid start byte. Is there any way to just skip the lines that cause this exception to be thrown? Out of millions of lines these are just a handful, and I can't manually delete them. I tried adding error_bad_lines=False, but that did not solve the problem. I am using Python 3.6.1 that I got through Anaconda 4.4.0. I am also using a Mac, if that helps. Please help, I am new to this.
Seems to me that there are some non-ASCII characters in your file that cannot be decoded. Pandas accepts an encoding as an argument for read_csv (if that helps):
my_file = pd.read_csv('Path/to/file.csv', encoding='encoding')
The default encoding is None, which is why you might be getting those errors. Here is a link to the standard Python encodings - try "ISO-8859-1" (aka 'latin1') or maybe 'utf8' to start.
Pandas does allow you to specify rows to skip when reading a csv, but you would need to know the indices of those rows, which in your case would be very difficult; see the sketch below for one way around that.
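A hedged sketch (not from the answer): read the file as raw bytes, drop the handful of lines that fail UTF-8 decoding, and parse the rest:
import io
import pandas as pd

good_lines = []
with open('Path/to/file.csv', 'rb') as f:
    for raw in f:
        try:
            good_lines.append(raw.decode('utf-8'))
        except UnicodeDecodeError:
            pass  # skip the few undecodable lines

my_file = pd.read_csv(io.StringIO(''.join(good_lines)))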

pandas read_csv encoding weird character

I tried to read my dataset in text file format using pandas. However, some characters are not encoded correctly. I got ??? for apostrophe.
What should I do to encode my file correctly? I've tried
encoding = "utf8" but I got UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 2044: unexpected end of data.
encoding = "latin1" but this gave me a lot of ???
encoding = "ISO-8859-1" or "ISO-8859-2" but this also gave me just like no encoding...
When I open my data in sublime, I got this character ’.
UPDATED: But when I access the entry using loc I got something like \u0102\u02d8\xe2\x82\u0179\xc2\u015, \u0102\u02d8\xe2\x82\u0179\xe2\x84\u02d8
You may be able to determine the encoding with chardet:
$ pip install chardet
>>> import urllib.request
>>> rawdata = urllib.request.urlopen('http://yahoo.co.jp/').read()
>>> import chardet
>>> chardet.detect(rawdata)
{'encoding': 'EUC-JP', 'confidence': 0.99}
The basic usage also suggests how you can use this to infer the encoding of large files, e.g. files too large to read into memory - it'll read the file until it's confident enough about the encoding.
According to this answer you should try encoding="ISO-8859-2":
My guess is that your input is encoded as ISO-8859-2 which contains Ă as 0xC3.
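A minimal sketch of that suggestion (the filename and separator are assumptions, since the question doesn't show them):
df = pd.read_csv('dataset.txt', encoding='ISO-8859-2', sep='\t')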
Note: Sublime may not infer the encoding correctly either, so you have to take its output with a pinch of salt; it's best to check with your vendor (wherever you're getting the file from) what the actual encoding is...

Python 3 unicode to utf-8 on file

I am trying to parse through a log file, but the file format is always in unicode. My usual process that I would like to automate:
I pull file up in notepad
Save as...
change encoding from unicode to UTF-8
Then run python program on it
So this is the process I would like to automate in Python 3.4: pretty much just convert the file to UTF-8, or open it with something like open(filename, 'r', encoding='utf-8'), although this exact line was throwing me this error when I tried to call read() on it:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
It would be EXTREMELY helpful if I could convert the entire file (like in my first scenario) or just open the whole thing in UTF-8 that way I don't have to str.encode (or something like that) every time I analyze a string.
Anybody been through this and know which method I should use and how to do it?
EDIT:
In the Python 3 REPL, I did
>>> f = open('file.txt','r')
>>> f
<_io.TextIOWrapper name='file.txt' mode='r' encoding='cp1252'>
So now my Python code in my program opens the file with open('file.txt', 'r', encoding='cp1252'). I am running a lot of regexes through this file though, and they aren't matching (I think because it isn't UTF-8). So I just have to figure out how to switch from cp1252 to UTF-8. Thank you @Mark Ransom
What notepad considers Unicode is utf16 to Python. Windows "Unicode" files start with a byte order mark (BOM) of FF FE, which indicates little-endian UTF-16. This is why you get the following when using utf8 to decode the file:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
To convert to UTF-8, you could use:
with open('log.txt', encoding='utf16') as f:
    data = f.read()
with open('utf8.txt', 'w', encoding='utf8') as f:
    f.write(data)
Note that many Windows editors like a UTF-8 signature at the beginning of the file, or may assume ANSI instead. ANSI is really the local language locale. On US Windows it is cp1252, but it varies for other localized builds. If you open utf8.txt and it still looks garbled, use encoding='utf-8-sig' when writing instead.
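That variant would look like:
with open('utf8.txt', 'w', encoding='utf-8-sig') as f:
    f.write(data)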
