#source file is encoded in utf8
import urllib2
import re
req = urllib2.urlopen('http://people.w3.org/rishida/scripts/samples/hungarian.html')
c = req.read()#.decode('utf-8')
p = r'title="This is Latin script \(Hungarian language\)">(.+)'
text = re.search(p, c).group(1)
name = text[:10]+'.txt' #file name will have special chars in it
f = open(name, 'wb')
f.write(text) #content of file will have special chars in it
f.close()
x = raw_input('done')
As you can see, the script does a couple of things:
- Reads content that is known to contain Unicode characters from a webpage into a variable.
(The source file is saved in UTF-8, but this should not make a difference unless Unicode string literals are actually defined in the source code. As you can see, the Unicode string is built dynamically into a variable, so what encoding the source uses shouldn't matter in this scenario.)
- Writes a file with a name containing Unicode characters.
- Writes Unicode content into this file as well.
Here's the weird behavior I get (Windows 7, Python 2.7):
When I don't use the decode function:
c = req.read()
The NAME of the file comes out as gibberish, but the CONTENT of the file comes out readable (that is, you can see the correct Hungarian characters).
Yet, when I USE the decode function:
c = req.read().decode('utf-8')
It will NOT error on opening the file (really creating it, since it's opened in 'w' mode), and the resulting file's NAME will be readable: yep, now it shows the correct Unicode characters.
So far so good right?
Well, then it WILL ERROR on trying to write the unicode content to the file:
f.write(text) #content of file will have special chars in it
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position 8: ordinal not in range(128)
You see, I can't seem to have my cake and eat it too:
either I can correctly write the NAME of the file, or I can correctly write the CONTENT of the file.
How can I do both?
I've also tried writing the file with
f = codecs.open(name, encoding='utf-8', mode='wb')
But it also errors..
The only problem for you seems to be the "unreadable" file name coming out of your original script. This can solve your problem:
f = open(name.decode('utf-8').encode(sys.getfilesystemencoding()), 'wb')
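Note that sys has to be imported for that line to work. In the context of the original script (without the .decode on c), a rough sketch would be:
import sys
# name is the UTF-8 encoded byte string built from the page content
fs_name = name.decode('utf-8').encode(sys.getfilesystemencoding())  # fs_name is just an illustrative name
f = open(fs_name, 'wb')
f.write(text)
f.close()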
While winterTTR's answer does work, I've realized that the approach is convoluted.
Rather, all you really need to do is encode the data you write to the file. You don't need to encode the name, and both the name and the content will come out "readable".
content = raw_utf8_bytes.decode('utf-8')  # raw_utf8_bytes: the UTF-8 encoded byte string (e.g. what req.read() returned)
f = open(content[:5]+'.txt', 'wb')
f.write(content.encode('utf-8'))
f.close()
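Applied to the original script, that boils down to decoding the page content once and encoding only what gets written. A sketch reusing the question's own variables:
c = req.read().decode('utf-8')        # c is now a unicode object
text = re.search(p, c).group(1)
name = text[:10] + '.txt'             # unicode file name: no manual encoding needed
f = open(name, 'wb')
f.write(text.encode('utf-8'))         # only the content gets encoded
f.close()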
I already have the code to iterate through all the files in a deep file structure where all files are UTF-8 and need to be converted to cp1252, a.k.a. ANSI.
I need to achieve the same simple result as converting the file in any serious text editor... why would there be any losses? Yes, some characters are routinely replaced by different ones: Šš=Šš Čč=Èè Ťť=?? Žž=Žž Ěě=Ìì Řř=Øø Ďď=Ïï Ňň=Òò Ůů=Ùù
But since a simple string conversion like
>>> print("Šš Čč Ťť Žž Ěě Řř Ďď Ňň Ůů".encode("utf-8").decode("cp1252"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python310\lib\encodings\cp1252.py", line 15, in decode
return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 8: character maps to <undefined>
... doesn't work, what are my chances? I've literally been through dozens of articles here and there throughout the whole day and could not find a working solution or make sense of this codepage conversion PITA. I even found complete functions and converters, obviously written for Python 2, none of them working.
Also not working:
chcp 65001
Active code page: 65001
with open(fpath, mode="r", encoding="utf-8") as fd:
    content = fd.read()
with open(fpath, mode="w", encoding="cp1252") as fd:
    fd.write(content)
or
with open(fpath, mode="r", encoding="utf-8") as fd:
    decoded = fd.decode("utf-8")
    content = decoded.encode("cp1252")
Your first example will never work. Encoding a Unicode string using one scheme and decoding to another is incorrect, but you can decode a file or byte string using the encoding it was generated with, then re-encode it in another encoding. The encodings need to support the same Unicode code points, however.
UTF-8 supports encoding all Unicode code points while CP1252 supports fewer than 256 of them, so don't expect your files to contain the same information if you go this route.
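For example, decoding with the encoding the data was actually produced with, then re-encoding characters that CP1252 does cover, works fine:
>>> "Šš Žž".encode("utf-8").decode("utf-8").encode("cp1252")
b'\x8a\x9a \x8e\x9e'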
There is an errors parameter that can be used when decoding (reading) a file and encoding (writing) a file. Here's an example of the loss from the example string provided:
>>> s = "Šš Čč Ťť Žž Ěě Řř Ďď Ňň Ůů"
>>> s.encode('cp1252',errors='ignore').decode('cp1252')
'Šš Žž '
>>> s.encode('cp1252',errors='replace').decode('cp1252')
'Šš ?? ?? Žž ?? ?? ?? ?? ??'
There are non-lossy error handlers as well, but they substitute replacement schemes for the original characters. See Error Handlers in the Python codecs documentation.
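For instance, backslashreplace and xmlcharrefreplace keep the information, just in a different representation:
>>> s = "Šš Čč"
>>> s.encode('cp1252', errors='xmlcharrefreplace').decode('cp1252')
'Šš &#268;&#269;'
>>> s.encode('cp1252', errors='backslashreplace').decode('cp1252')
'Šš \\u010c\\u010d'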
So the second example can work with loss by providing the errors parameter:
with open(fpath, mode="r", encoding="utf-8") as fd:
    content = fd.read()
with open(fpath, mode="w", encoding="cp1252", errors='replace') as fd:
    fd.write(content)
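You mentioned you already have the directory walk; for completeness, the whole loop might look roughly like this (os.walk, the root path and the .txt filter are assumptions on my part):
import os

root = "path/to/tree"                                  # assumption: top of your file structure
for dirpath, dirnames, filenames in os.walk(root):
    for fname in filenames:
        if not fname.lower().endswith(".txt"):         # assumption: only .txt files need converting
            continue
        fpath = os.path.join(dirpath, fname)
        with open(fpath, mode="r", encoding="utf-8") as fd:
            content = fd.read()
        with open(fpath, mode="w", encoding="cp1252", errors='replace') as fd:
            fd.write(content)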
My Django application works with both .txt and .doc file types. It opens a file, compares it with other files in the db, and prints out a report.
Now the problem is that, when the file type is .txt, I get a 'utf-8' codec can't decode byte error (here I'm using encoding='utf-8'). When I switch encoding='utf-8' to encoding='ISO-8859-1', the error changes to 'latin-1' codec can't decode byte.
I want to find an encoding that works with every type of file. This is a small part of my function:
views.py:
@login_required(login_url='sign_in')
def result(request):
    last_uploaded = OriginalDocument.objects.latest('id')
    original = open(str(last_uploaded.document), 'r', encoding='utf-8')
    original_words = original.read().lower().split()
    words_count = len(original_words)
    open_original = open(str(last_uploaded.document), "r")
    read_original = open_original.read()
    report_fives = open("static/report_documents/" + str(last_uploaded.student_name) +
                        "-" + str(last_uploaded.document_title) + "-5.txt", 'w')
    # Path to the documents with which the original doc is compared
    path = 'static/other_documents/doc*.txt'
    files = glob.glob(path)
    rows, found_count, fives_count, rounded_percentage_five, percentage_for_chart_five, fives_for_report, founded_docs_for_report = search_by_five(last_uploaded, 5, original_words, report_fives, files)
    context = {
        ...
    }
    return render(request, 'result.html', context)
There is no general encoding which automatically knows how to decode an already encoded file in a specific encoding.
UTF-8 is a good option that is compatible with many other encodings. You can e.g. simply ignore or replace characters which aren't decodable, like this:
from codecs import open
original = open(str(last_uploaded.document), encoding="utf-8", errors="ignore")
original_words = original.read().lower().split()
...
original.close()
Or even use a context manager (with statement), which closes the file for you:
with open(str(last_uploaded.document), encoding="utf-8", errors="ignore") as fr:
    original_words = fr.read().lower().split()
    ...
(Note: You do not need to use the codecs library if you're using Python 3, but you have tagged your question with python-2.7.)
You can see the advantages and disadvantages of using different error handlers here and here. You have to know that not using an error handler will default to errors="strict", which you probably do not want. The other options are nearly self-explanatory, e.g.:
- using errors="replace" will replace an undecodable character with a suitable replacement marker
- using errors="ignore" will simply ignore the character and continue reading the file data
What you should use depends on your needs and use case(s).
You're saying that you have encoding problems not only with plain text files, but also with proprietary .doc files:
A .doc file is not a plain text file that you can simply read with open() or codecs.open(), since a lot of information is stored in binary format; see this site for more information. So you need a special reader for .doc files to get the text out of them. Which library you use depends on your Python version and maybe also on your operating system. Maybe here is a good starting point for you.
Unfortunately, using a library does not completely protect you from encoding errors. (Maybe it does, but I'm not sure whether the encoding is stored in the file itself, as it is in a .docx file.) You may also have a chance to figure out the encoding of the file. How you can handle encoding errors likely depends on the library itself.
So I guess that you are trying to open .doc files as simple text files. Then you will get decoding errors, because the content is not saved as human-readable text. And even if you get rid of the error, you will only see non-human-readable text (I've created a simple text file with LibreOffice in .doc format, i.e. Microsoft Word 1997-2003):
In [1]: open("./test.doc", "r").read()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 0: invalid continuation byte
In [2]: open("./test.doc", "r", errors="replace").read() # or open("./test.doc", "rb").read()
'��\x11\u0871\x1a�\x00\x00\x00' ...
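If your files are actually .docx rather than legacy .doc, the third-party python-docx package can pull the text out for you (a sketch; the package has to be installed separately and does not handle the old binary .doc format):
from docx import Document          # pip install python-docx

doc = Document("./test.docx")      # assumption: a .docx file, not the old binary .doc
text = "\n".join(p.text for p in doc.paragraphs)   # paragraph text comes back as str (unicode)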
I downloaded the file 'pi_million_digits.txt' from here:
https://github.com/ehmatthes/pcc/blob/master/chapter_10/pi_million_digits.txt
I then used this code to open and read it:
filename = 'pi_million_digits.txt'
with open(filename) as file_object:
    lines = file_object.readlines()
pi_string = ''
for line in lines:
    pi_string += line.strip()
print(pi_string[:52] + "...")
print(len(pi_string))
However, the output produced is correct apart from the fact that it is preceded by some strange symbols: "ï»¿3.141..."
What causes these strange symbols? I am stripping the lines so I'd expect such symbols to be removed.
It looks like you're opening a file with a UTF-8 encoded Byte Order Mark using the ISO-8859-1 encoding (presumably because this is the default encoding on your OS).
If you open it as bytes and read the first line, you should see something like this:
>>> next(open('pi_million_digits.txt', 'rb'))
b'\xef\xbb\xbf3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679\n'
… where \xef\xbb\xbf is the UTF-8 encoding of the BOM. Opened as ISO-8859-1, it looks like what you're getting:
>>> next(open('pi_million_digits.txt', encoding='iso-8859-1'))
'ï»¿3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679\n'
… and opening it as UTF-8 shows the actual BOM character U+FEFF:
>>> next(open('pi_million_digits.txt', encoding='utf-8'))
'\ufeff3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679\n'
To strip the mark out, use the special encoding utf-8-sig:
>>> next(open('pi_million_digits.txt', encoding='utf-8-sig'))
'3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679\n'
The use of next() in the examples above is just for demonstration purposes. In your code, you just need to add the encoding argument to your open() line, e.g.
with open(filename, encoding='utf-8-sig') as file_object:
    # ... etc.
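If you'd rather detect the BOM yourself instead of always assuming utf-8-sig, checking the first three bytes is enough; codecs.BOM_UTF8 is just the constant b'\xef\xbb\xbf' (a small sketch; note that utf-8-sig also works fine when no BOM is present):
import codecs

with open(filename, 'rb') as f:
    has_bom = f.read(3) == codecs.BOM_UTF8      # b'\xef\xbb\xbf'
encoding = 'utf-8-sig' if has_bom else 'utf-8'
with open(filename, encoding=encoding) as file_object:
    lines = file_object.readlines()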
I have a one-line txt file with a bunch of Unicode characters and no spaces.
Example:
🅾🆖🆕Ⓜ🆙🆚🈁🈂
And I want to output a txt file with one character on each line
When I try to do this, I think I end up splitting the Unicode characters. How can I go about this?
There is no such thing as a text file with "a bunch of unicode characters"; it only makes sense to speak about a "unicode object" once the file has been read and decoded into Python objects. The data in the text file is encoded, one way or another.
So, the problem is about reading the file in the correct way in order to decode the characters to unicode objects correctly.
import io
enc_source = enc_target = 'utf-8'
with io.open('my_file.txt', encoding=enc_source) as f:
    the_line = f.read().strip()
with io.open('output.txt', mode='w', encoding=enc_target) as f:
    f.writelines([c + '\n' for c in the_line])
Above I am assuming the target and source file encodings are both utf-8. This is not necessarily the case, and you should know what the source file is encoded with. You get to choose enc_target, but somebody has to tell you enc_source (the file itself can't tell you).
This works in Python 3.5
line = "😀👍"
with open("file.txt", "w", encoding="utf8") as f:
f.write("\n".join(line))
I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error:
Traceback (most recent call last):
File "SCRIPT LOCATION", line NUMBER, in <module>
text = file.read()
File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to <undefined>
The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely.
You specify the encoding when you open the file:
file = open(filename, encoding="utf8")
If file = open(filename, encoding="utf-8") doesn't work, try
file = open(filename, errors="ignore"), if you want to remove unneeded characters. (docs)
Alternatively, if you don't need to decode the file, such as uploading the file to a website, use:
open(filename, 'rb')
where r = reading, b = binary
As an extension to @LennartRegebro's answer:
If you can't tell what encoding your file uses, the solution above does not work (it's not utf8), and you find yourself merely guessing, there are online tools you can use to identify the encoding. They aren't perfect, but they usually work just fine. After you figure out the encoding, you should be able to use the solution above.
EDIT: (Copied from comment)
The quite popular text editor Sublime Text has a command to display the encoding if it has been set...
Go to View -> Show Console (or Ctrl+`)
Type view.encoding() into the field at the bottom and hope for the best (I was unable to get anything but Undefined, but maybe you will have better luck...)
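If you'd rather guess programmatically, the third-party chardet package (an assumption on my part; it has to be installed separately, e.g. with pip) can estimate the encoding from the raw bytes:
import chardet

with open(filename, 'rb') as f:
    raw = f.read()
guess = chardet.detect(raw)              # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
text = raw.decode(guess['encoding'])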
TLDR: Try: file = open(filename, encoding='cp437')
Why? When one uses:
file = open(filename)
text = file.read()
Python assumes the file uses the same codepage as the current environment (cp1252 in the case of the opening post) and tries to decode it into its own internal Unicode representation. If the file contains characters with values not defined in this codepage (like 0x90), we get a UnicodeDecodeError. Sometimes we don't know the encoding of the file, sometimes the file's encoding may be unhandled by Python (like, e.g., cp790), and sometimes the file can contain mixed encodings.
If such characters are unneeded, one may decide to replace them with replacement markers, using:
file = open(filename, errors='replace')
Another workaround is to use:
file = open(filename, errors='ignore')
The undecodable characters are then simply dropped, but other errors will be masked too.
A very good solution is to specify the encoding, yet not just any encoding (like cp1252), but one which has ALL byte values defined (like cp437):
file = open(filename, encoding='cp437')
Codepage 437 is the original DOS encoding. All byte values are defined, so there are no errors while reading the file, no errors are masked, and the characters are preserved (not quite left intact, but still distinguishable).
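A quick way to convince yourself why cp437 never raises while cp1252 does:
>>> len(bytes(range(256)).decode('cp437'))    # every byte value is defined in cp437
256
>>> bytes([0x90]).decode('cp1252')            # 0x90 has no meaning in cp1252
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 0: character maps to <undefined>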
Stop wasting your time, just add encoding="cp437" and errors='ignore' to your code for both read and write:
open('filename.csv', encoding="cp437", errors='ignore')
open(file_name, 'w', newline='', encoding="cp437", errors='ignore')
Godspeed
For me, encoding with utf16 worked:
file = open('filename.csv', encoding="utf16")
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
    return text
OR (AND)
def write_file(text, file_path):
    # write the text back out as UTF-8, dropping anything that can't be encoded
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))
For those working in Anaconda on Windows, I had the same problem. Notepad++ helped me solve it.
Open the file in Notepad++. In the bottom right it will tell you the current file encoding.
In the top menu, next to "View", locate "Encoding". In "Encoding", go to "Character sets" and patiently look for the encoding that you need. In my case, the encoding "Windows-1252" was found under "Western European".
Before you apply the suggested solution, you can check what Unicode character appeared in your file (and in the error log), in this case 0x90: https://unicodelookup.com/#0x90/1 (or directly at the Unicode Consortium site http://www.unicode.org/charts/ by searching for 0x0090),
and then consider removing it from the file.
In newer versions of Python (starting with 3.7), you can add the interpreter option -Xutf8, which should fix your problem. If you use PyCharm, just go to Run > Edit Configurations (in the Configuration tab, change the value of the Interpreter options field to -Xutf8).
Or, equivalently, you can just set the environment variable PYTHONUTF8 to 1.
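From a plain command prompt that would be, e.g., python -Xutf8 your_script.py (or set PYTHONUTF8=1 beforehand; the script name is just a placeholder). You can verify from inside Python that UTF-8 mode is active (sys.flags.utf8_mode exists from Python 3.7 on):
import sys
print(sys.flags.utf8_mode)    # prints 1 when -Xutf8 or PYTHONUTF8=1 is in effect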
For me, changing the MySQL character encoding to match my code helped sort out the solution:
photo = open('pic3.png', encoding='latin1')