file.exe on Windows cannot identify the file encoding; the file seems to be corrupted. What can be done? - python

For some files, Python's chardet library returns None from chardet.detect(f.read())['encoding'].
import codecs
import chardet

path = r"C:\A chinese novel.TXT"
with codecs.open(path, 'rb') as f:
    encoding = chardet.detect(f.read())
print(encoding)
# RETURNS {'encoding': None, 'confidence': 0.0, 'language': None}
I also ran os.popen("file -bi \"%s\" | gawk -F'[ =]' '{print $3}'" % f).read() to check the file encoding; it reports the encoding as unknown-8bit.
Running file xxx.txt outputs: xxx.txt: Non-ISO extended-ASCII text, with very long lines (560), with CRLF line terminators
Here's a GIF link showing the situation: https://i.imgur.com/5kvmnRL.gif
However, Notepad++ opens the file normally; it reports the encoding as GB2312, and the characters display mostly correctly.
Could the file have become corrupted into a mixed-encoding file that the chardet library cannot recognize?
ChatGPT suggested using iconv to re-encode the bad file, but the text editor (Notepad++) cannot confirm which encoding the file uses before opening it. Is there a more reliable way to identify file encodings with Python on Windows 10?

chardet: a very popular Python package for detecting encodings.
cchardet: a Python module written in C++, similar to the chardet package.
file-magic: a Python wrapper around the libmagic library that recognizes file types and encodings.
import chardet
import cchardet
import magic

# chardet
with open('your_file_path', 'rb') as f:
    rawdata = f.read()
result = chardet.detect(rawdata)
encoding = result['encoding']
print(encoding)

# cchardet
with open('your_file_path', 'rb') as f:
    rawdata = f.read()
result = cchardet.detect(rawdata)
encoding = result['encoding']
print(encoding)

# file-magic
with magic.Magic() as m:
    file_type = m.id_filename('your_file_path')
print(file_type)
After testing, cchardet worked well: it successfully reported the correct encoding for this file.
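Since Notepad++ reports the file as GB2312, another pragmatic fallback when detection returns None is to try the GB family of codecs directly; this is only a sketch (the path is the one from the question), with gb18030 and errors='replace' as the last resort to salvage a possibly mixed or corrupted file:
path = r"C:\A chinese novel.TXT"
with open(path, 'rb') as f:
    raw = f.read()
# gb18030 is a superset of GBK/GB2312, so trying it first covers the most characters
for codec in ('gb18030', 'gbk', 'gb2312'):
    try:
        text = raw.decode(codec)
        print('decoded cleanly as', codec)
        break
    except UnicodeDecodeError:
        continue
else:
    # mixed or corrupted bytes: keep what is readable and mark the rest
    text = raw.decode('gb18030', errors='replace')
    print('decoded with replacement characters')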

Related

How to read a binary file to text in Python 3.9

I have a .sql file that I want to read into my Python session (Python 3.9). I'm opening it using the file context manager.
with open('file.sql', 'r') as f:
    text = f.read()
When I print the text, I still get the binary characters, i.e., \xff\xfe\r\x00\n\x00-\x00-..., etc.
I've tried all the arguments such as 'rb', encoding='utf-8', etc., but the result is still binary text. It should be noted that I've used this very same procedure many times over in my code before and it has not been a problem.
Did something change in python 3.9?
The first two bytes \xff\xfe look like a BOM (Byte Order Mark),
and the table on the Wikipedia page Byte order mark shows that \xff\xfe can mean the encoding is UTF-16-LE.
So you could try
with open('file.sql', 'r', encoding='utf-16-le') as f:
    text = f.read()
EDIT:
There is the chardet module, which you can also try to use to detect the encoding.
import chardet

with open('file.sql', 'rb') as f:  # read bytes
    data = f.read()

info = chardet.detect(data)
print(info['encoding'])

text = data.decode(info['encoding'])
Usually files don't have a BOM, but if they do, you can try to detect it using the example from unicodebook.readthedocs.io/guess_encoding/check-for-bom-markers
from codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE

BOMS = (
    (BOM_UTF8, "UTF-8"),
    (BOM_UTF32_BE, "UTF-32-BE"),
    (BOM_UTF32_LE, "UTF-32-LE"),
    (BOM_UTF16_BE, "UTF-16-BE"),
    (BOM_UTF16_LE, "UTF-16-LE"),
)

def check_bom(data):
    return [encoding for bom, encoding in BOMS if data.startswith(bom)]

# ---------

with open('file.sql', 'rb') as f:  # read bytes
    data = f.read()

encoding = check_bom(data)
print(encoding)

if encoding:
    text = data.decode(encoding[0])
else:
    print('unknown encoding')

Python reading a PE file and changing resource section

I am trying to open a Windows PE file and alter some strings in the resource section.
f = open('c:\test\file.exe', 'rb')
file = f.read()
if b'A'*10 in file:
    s = file.replace(b'A'*10, newstring)
In the resource section I have a string that is just:
AAAAAAAAAA
And I want to replace that with something else. When I read the file I get:
\x00A\x00A\x00A\x00A\x00A\x00A\x00A\x00A\x00A\x00A
I have tried opening with UTF-16 and decoding as UTF-16 but then I run into a error:
UnicodeDecodeError: 'utf-16-le' codec can't decode bytes in position 1604-1605: illegal encoding
Everyone I've seen with the same issue fixed it by decoding to UTF-16. I am not sure why this doesn't work for me.
If the resource inside the binary file is encoded as UTF-16, you shouldn't change the file's encoding; encode your search string to match the bytes in the file instead.
Try this:
f = open('c:\\test\\file.exe', 'rb')
file = f.read()
f.close()
unicode_str = u'AAAAAAAAAA'
# Encode without a BOM: .encode('UTF-16') would prepend b'\xff\xfe', which is not
# present in the middle of the file. PE resource strings are normally UTF-16-LE.
encoded_str = unicode_str.encode('utf-16-le')
if encoded_str in file:
    s = file.replace(encoded_str, new_utf_string.encode('utf-16-le'))
Keep in mind that inside a binary file everything is encoded.
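For completeness, here is a minimal end-to-end sketch of the same idea; the path, the placeholder replacement string, and the output filename are assumptions, not from the original post. Keeping the replacement the same byte length avoids disturbing the PE layout:
old = u'AAAAAAAAAA'.encode('utf-16-le')
new = u'BBBBBBBBBB'.encode('utf-16-le')   # hypothetical replacement of the same length
assert len(old) == len(new)               # equal byte length keeps resource sizes and offsets valid
with open(r'c:\test\file.exe', 'rb') as f:
    data = f.read()
with open(r'c:\test\file_patched.exe', 'wb') as f:
    f.write(data.replace(old, new))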

Change file encoding scheme in Python

I'm trying to open a file using latin-1 encoding in order to produce a file with a different encoding. I get a NameError stating that unicode is not defined. Here is the piece of code I use to do this:
sourceEncoding = "latin-1"
targetEncoding = "utf-8"
source = open(r'C:\Users\chsafouane\Desktop\saf.txt')
target = open(r'C:\Users\chsafouane\Desktop\saf2.txt', "w")
target.write(unicode(source.read(), sourceEncoding).encode(targetEncoding))
I'm not used to handling files at all, so I don't know if there is a module I should import to use unicode.
The fact that you see unicode not defined suggests that you're on Python 3, where there is no unicode built-in: str is already Unicode. Here's a code snippet that'll generate a latin1-encoded file, then does what you want to do: slurp the latin1-encoded file and spit out a UTF8-encoded file:
# Generate a latin1-encoded file
txt = u'U+00AxNBSP¡¢£¤¥¦§¨©ª«¬SHY­®¯U+00Bx°±²³´µ¶·¸¹º»¼½¾¿U+00CxÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏU+00DxÐÑÒÓÔÕÖ×ØÙÚÛÜÝÞßU+00ExàáâãäåæçèéêëìíîïU+00Fxðñòóôõö÷øùúûüýþÿ'
latin1 = txt.encode('latin1')
with open('example-latin1.txt', 'wb') as fid:
    fid.write(latin1)

# Read in the latin1 file
with open('example-latin1.txt', 'r', encoding='latin1') as fid:
    contents = fid.read()
assert contents == latin1.decode('latin1')  # sanity check

# Spit out a UTF8-encoded file (pass the encoding explicitly so the
# platform default doesn't leak in)
with open('converted-utf8.txt', 'w', encoding='utf8') as fid:
    fid.write(contents)
If you want the output to be something other than UTF8, change the encoding argument in open, e.g.,
with open('converted-utf_32.txt', 'w', encoding='utf_32') as fid:
    fid.write(contents)
The docs have a list of all supported codecs.
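For the asker's original two files, a minimal Python 3 sketch (reusing the paths from the question) would be:
with open(r'C:\Users\chsafouane\Desktop\saf.txt', 'r', encoding='latin-1') as source, \
        open(r'C:\Users\chsafouane\Desktop\saf2.txt', 'w', encoding='utf-8') as target:
    target.write(source.read())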

Can someone help me with this unicode error? [duplicate]

I received some text that is encoded, but I don't know what charset was used. Is there a way to determine the encoding of a text file using Python? (The similarly titled question "How can I detect the encoding/codepage of a text file" deals with C#.)
EDIT: chardet seems to be unmaintained, but most of the answer still applies. Check https://pypi.org/project/charset-normalizer/ for an alternative.
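As a quick sketch of the charset-normalizer alternative (the filename is a placeholder, and this assumes the package's from_path/best() interface):
from charset_normalizer import from_path

best = from_path('unknown.txt').best()  # most plausible match, or None
if best is not None:
    print(best.encoding)                # e.g. 'utf_8', 'cp1252', ...
    text = str(best)                    # the decoded text of the best match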
Correctly detecting the encoding all the time is impossible.
(From the chardet FAQ:)
However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will instantly recognize that that isn't English (even though it is composed entirely of English letters). By studying lots of “typical” text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text's language.
There is the chardet library that uses that study to try to detect encoding. chardet is a port of the auto-detection code in Mozilla.
You can also use UnicodeDammit. It will try the following methods:
An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
An encoding sniffed by the chardet library, if you have it installed.
UTF-8
Windows-1252
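As a quick illustration of UnicodeDammit in use (the filename below is a placeholder; UnicodeDammit ships with Beautiful Soup 4):
from bs4 import UnicodeDammit

with open('unknown.txt', 'rb') as f:
    dammit = UnicodeDammit(f.read())
print(dammit.original_encoding)   # the encoding UnicodeDammit settled on
text = dammit.unicode_markup      # the decoded text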
Another option for working out the encoding is to use libmagic (which is the code behind the file command). There are a profusion of Python bindings available.
The Python bindings that live in the file source tree are available as the python-magic (or python3-magic) Debian package. It can determine the encoding of a file by doing:
import magic
blob = open('unknown-file', 'rb').read()
m = magic.open(magic.MAGIC_MIME_ENCODING)
m.load()
encoding = m.buffer(blob) # "utf-8" "us-ascii" etc
There is an identically named, but incompatible, python-magic pip package on pypi that also uses libmagic. It can also get the encoding, by doing:
import magic
blob = open('unknown-file', 'rb').read()
m = magic.Magic(mime_encoding=True)
encoding = m.from_buffer(blob)
Some encoding strategies; uncomment to taste:
#!/bin/bash
#
tmpfile=$1
echo '-- info about file file ........'
file -i $tmpfile
enca -g $tmpfile
echo 'recoding ........'
#iconv -f iso-8859-2 -t utf-8 back_test.xml > $tmpfile
#enca -x utf-8 $tmpfile
#enca -g $tmpfile
recode CP1250..UTF-8 $tmpfile
You might like to check the encoding by opening and reading the file in a loop... but you might need to check the file size first:
# PYTHON
import codecs

encodings = ['utf-8', 'windows-1250', 'windows-1252']  # add more
for e in encodings:
    try:
        fh = codecs.open('file.txt', 'r', encoding=e)
        fh.readlines()
        fh.seek(0)
    except UnicodeDecodeError:
        print('got unicode error with %s , trying different encoding' % e)
    else:
        print('opening the file with encoding: %s ' % e)
        break
Here is an example of reading and taking at face value a chardet encoding prediction, reading n_lines from the file in the event it is large.
chardet also gives you a probability (i.e. confidence) in its encoding prediction (I haven't looked at how they come up with that), which is returned along with its prediction from chardet.detect(), so you could work that in somehow if you like.
import chardet
from pathlib import Path

def predict_encoding(file_path: Path, n_lines: int = 20) -> str:
    '''Predict a file's encoding using chardet'''
    # Open the file as binary data
    with Path(file_path).open('rb') as f:
        # Join binary lines for specified number of lines
        rawdata = b''.join([f.readline() for _ in range(n_lines)])
    return chardet.detect(rawdata)['encoding']
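A hypothetical call, assuming a file named example.csv in the working directory:
print(predict_encoding(Path('example.csv')))
# sample more of the file if the first 20 lines are not representative
print(predict_encoding(Path('example.csv'), n_lines=200))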
This might be helpful
from bs4 import UnicodeDammit

with open('automate_data/billboard.csv', 'rb') as file:
    content = file.read()

suggestion = UnicodeDammit(content)
print(suggestion.original_encoding)
# 'iso-8859-1'
If you are not satisfied with the automatic tools, you can try all codecs and see manually which codec is right.
all_codecs = ['ascii', 'big5', 'big5hkscs', 'cp037', 'cp273', 'cp424', 'cp437',
'cp500', 'cp720', 'cp737', 'cp775', 'cp850', 'cp852', 'cp855', 'cp856', 'cp857',
'cp858', 'cp860', 'cp861', 'cp862', 'cp863', 'cp864', 'cp865', 'cp866', 'cp869',
'cp874', 'cp875', 'cp932', 'cp949', 'cp950', 'cp1006', 'cp1026', 'cp1125',
'cp1140', 'cp1250', 'cp1251', 'cp1252', 'cp1253', 'cp1254', 'cp1255', 'cp1256',
'cp1257', 'cp1258', 'euc_jp', 'euc_jis_2004', 'euc_jisx0213', 'euc_kr',
'gb2312', 'gbk', 'gb18030', 'hz', 'iso2022_jp', 'iso2022_jp_1', 'iso2022_jp_2',
'iso2022_jp_2004', 'iso2022_jp_3', 'iso2022_jp_ext', 'iso2022_kr', 'latin_1',
'iso8859_2', 'iso8859_3', 'iso8859_4', 'iso8859_5', 'iso8859_6', 'iso8859_7',
'iso8859_8', 'iso8859_9', 'iso8859_10', 'iso8859_11', 'iso8859_13',
'iso8859_14', 'iso8859_15', 'iso8859_16', 'johab', 'koi8_r', 'koi8_t', 'koi8_u',
'kz1048', 'mac_cyrillic', 'mac_greek', 'mac_iceland', 'mac_latin2', 'mac_roman',
'mac_turkish', 'ptcp154', 'shift_jis', 'shift_jis_2004', 'shift_jisx0213',
'utf_32', 'utf_32_be', 'utf_32_le', 'utf_16', 'utf_16_be', 'utf_16_le', 'utf_7',
'utf_8', 'utf_8_sig']
def find_codec(text):
    for i in all_codecs:
        for j in all_codecs:
            try:
                print(i, "to", j, text.encode(i).decode(j))
            except:
                pass

find_codec("The example string which includes ö, ü, or ğ, ö")
This script creates at least 9409 lines of output. So, if the output cannot fit on the terminal screen, try writing the output to a text file.
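One way to capture that output in a file (the output filename here is just an example) is to redirect stdout around the call:
import contextlib

with open('codec_matrix.txt', 'w', encoding='utf-8') as out, contextlib.redirect_stdout(out):
    find_codec("The example string which includes ö, ü, or ğ, ö")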
# Function: OpenRead(file)
# A text file can be encoded using:
#   (1) The default operating system code page, Or
#   (2) utf8 with a BOM header
#
# If a text file is encoded with utf8, and does not have a BOM header,
# the user can manually add a BOM header to the text file
# using a text editor such as notepad++, and rerun the python script,
# otherwise the file is read as a codepage file with the
# invalid codepage characters removed
import sys

if int(sys.version[0]) != 3:
    print('Aborted: Python 3.x required')
    sys.exit(1)
def bomType(file):
    """
    returns file encoding string for open() function
    EXAMPLE:
        bom = bomType(file)
        open(file, encoding=bom, errors='ignore')
    """
    f = open(file, 'rb')
    b = f.read(4)
    f.close()

    if (b[0:3] == b'\xef\xbb\xbf'):
        return "utf8"

    # Check the utf-32 BOMs before utf-16, because the utf-32-le BOM
    # starts with the same two bytes as the utf-16-le BOM
    if ((b[0:4] == b'\x00\x00\xfe\xff')
            or (b[0:4] == b'\xff\xfe\x00\x00')):
        return "utf32"

    # Python automatically detects endianess if utf-16 bom is present
    # write endianess generally determined by endianess of CPU
    if ((b[0:2] == b'\xfe\xff') or (b[0:2] == b'\xff\xfe')):
        return "utf16"

    # If BOM is not provided, then assume its the codepage
    # used by your operating system
    return "cp1252"
    # For the United States its: cp1252

def OpenRead(file):
    bom = bomType(file)
    return open(file, 'r', encoding=bom, errors='ignore')
#######################
# Testing it
#######################
fout = open("myfile1.txt", "w", encoding="cp1252")
fout.write("* hi there (cp1252)")
fout.close()
fout = open("myfile2.txt", "w", encoding="utf8")
fout.write("\u2022 hi there (utf8)")
fout.close()
# this case is still treated like codepage cp1252
# (User responsible for making sure that all utf8 files
# have a BOM header)
fout = open("badboy.txt", "wb")
fout.write(b"hi there. barf(\x81\x8D\x90\x9D)")
fout.close()
# Read Example file with Bom Detection
fin = OpenRead("myfile1.txt")
L = fin.readline()
print(L)
fin.close()
# Read Example file with Bom Detection
fin = OpenRead("myfile2.txt")
L =fin.readline()
print(L) #requires QtConsole to view, Cmd.exe is cp1252
fin.close()
# Read CP1252 with a few undefined chars without barfing
fin = OpenRead("badboy.txt")
L =fin.readline()
print(L)
fin.close()
# Check that bad characters are still in badboy codepage file
fin = open("badboy.txt", "rb")
fin.read(20)
fin.close()
It is, in principle, impossible to determine the encoding of a text file, in the general case. So no, there is no standard Python library to do that for you.
If you have more specific knowledge about the text file (e.g. that it is XML), there might be library functions.
Depending on your platform, I just opt to use the linux shell file command. This works for me since I am using it in a script that exclusively runs on one of our linux machines.
Obviously this isn't an ideal solution or answer, but it could be modified to fit your needs. In my case I just need to determine whether a file is UTF-8 or not.
import subprocess

def file_is_utf8(path='test.txt'):  # wrapped in a function so the return line works
    p = subprocess.Popen(['file', path], stdout=subprocess.PIPE)
    cmd_output = p.stdout.readlines()
    # x will begin with the file type output as observed using the 'file' command
    x = cmd_output[0].decode().split(": ")[1]
    return x.startswith('UTF-8')
If you know some of the content of the file, you can try to decode it with several encodings and see which one is wrong. In general there is no way, since a text file is just a text file and those are stupid ;)
This site has Python code for recognizing ASCII, encodings with BOMs, and UTF-8 without a BOM: https://unicodebook.readthedocs.io/guess_encoding.html. Read the file into a byte array (data): http://www.codecodex.com/wiki/Read_a_file_into_a_byte_array. Here's an example. I'm on OS X.
#!/usr/bin/python
import sys

def isUTF8(data):
    try:
        decoded = data.decode('UTF-8')
    except UnicodeDecodeError:
        return False
    else:
        for ch in decoded:
            if 0xD800 <= ord(ch) <= 0xDFFF:
                return False
        return True

def get_bytes_from_file(filename):
    return open(filename, "rb").read()

filename = sys.argv[1]
data = get_bytes_from_file(filename)
result = isUTF8(data)
print(result)
PS /Users/js> ./isutf8.py hi.txt
True
Using the Linux file -i command
import re
import subprocess

file = "path/to/file/file.txt"
encoding = subprocess.Popen("file -bi " + file, shell=True, stdout=subprocess.PIPE).stdout
encoding = re.sub(r"(\\n)[^a-z0-9\-]", "", str(encoding.read()).split("=")[1], flags=re.IGNORECASE)
print(encoding)
You can use the python-magic package, which does not load the whole file into memory:
import magic

def detect(
    file_path,
):
    return magic.Magic(
        mime_encoding=True,
    ).from_file(file_path)
The output is the encoding name for example:
iso-8859-1
us-ascii
utf-8
You can use the chardet module
import chardet

with open(filepath, "rb") as f:
    data = f.read()

detector = chardet.UniversalDetector()
detector.feed(data)   # feed the raw bytes so the detector has something to analyse
detector.close()
print(detector.result)
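The advantage of UniversalDetector over chardet.detect() is incremental feeding, so you can stop early on large files; a sketch reusing the filepath variable from above:
from chardet.universaldetector import UniversalDetector

detector = UniversalDetector()
with open(filepath, 'rb') as f:
    for line in f:
        detector.feed(line)
        if detector.done:   # chardet is confident enough; stop reading
            break
detector.close()
print(detector.result)      # e.g. {'encoding': 'utf-8', 'confidence': 0.99, 'language': ''}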
Or you can use the chardet3 command on Linux, but it takes a little time:
chardet3 fileName
Example :
chardet3 donnee/dir/donnee.csv
donnee/dir/donnee.csv: ISO-8859-1 with confidence 0.73
Some text files are aware of their encoding, most are not. Aware:
a text file having a BOM
an XML file is encoded in UTF-8 or its encoding is given in the preamble
a JSON file is always encoded in UTF-8
Not aware:
a CSV file
any random text file
Some encodings are versatile, i.e. they can decode any sequence of bytes, and some are not. US-ASCII is not versatile, since any byte greater than 127 is not mapped to any character. UTF-8 is not versatile, since not every sequence of bytes is valid.
On the contrary, Latin-1, Windows-1252, etc. are versatile (even if some bytes are not officially mapped to a character):
>>> [b.to_bytes(1, 'big').decode("latin-1") for b in range(256)]
['\x00', ..., 'ÿ']
Given a random text file encoded in a sequence of bytes, you can't determine the encoding unless the file is aware of its encoding, because some encodings are versatile. You can sometimes exclude non-versatile encodings, but all versatile encodings remain possible. The chardet module uses the frequency of bytes to guess which encoding fits the encoded text best.
If you don't want to use this module or a similar one, here's a simple method:
check if the file is aware of its encoding (BOM)
check non versatile encodings and accept the first that can decode the bytes (ASCII before UTF-8 because it is stricter)
choose a fallback encoding.
The second step is a bit risky if you check only a sample, because some bytes in the rest of the file may be invalid.
The code:
import codecs

def guess_encoding(data: bytes, fallback: str = "iso8859_15") -> str:
    """
    A basic encoding detector.
    """
    for bom, encoding in [
        (codecs.BOM_UTF32_BE, "utf_32_be"),
        (codecs.BOM_UTF32_LE, "utf_32_le"),
        (codecs.BOM_UTF16_BE, "utf_16_be"),
        (codecs.BOM_UTF16_LE, "utf_16_le"),
        (codecs.BOM_UTF8, "utf_8_sig"),
    ]:
        if data.startswith(bom):
            return encoding
    if all(b < 128 for b in data):
        return "ascii"  # you may want to use the fallback here if data is only a sample
    decoder = codecs.getincrementaldecoder("utf_8")()
    try:
        decoder.decode(data, final=False)
    except UnicodeDecodeError:
        return fallback
    else:
        return "utf_8"  # not certain if data is only a sample
Remember that decoding with a non-versatile encoding may fail. The errors parameter of the decode method can be set to 'ignore', 'replace' or 'backslashreplace' to avoid exceptions.
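A hypothetical use of guess_encoding on a sample of a file (the filename and sample size are assumptions):
with open('unknown.txt', 'rb') as f:
    sample = f.read(4096)   # only a sample; later bytes may still be invalid
enc = guess_encoding(sample)
print(enc)
text = sample.decode(enc, errors='replace')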
A long time ago, I had this need.
Reading old code of mine, I found this:
import urllib.request
import chardet
import os
import settings
[...]
file = 'sources/dl/file.csv'
media_folder = settings.MEDIA_ROOT
file = os.path.join(media_folder, str(file))
if os.path.isfile(file):
    file_2_test = urllib.request.urlopen('file://' + file).read()
    encoding = (chardet.detect(file_2_test))['encoding']
    return encoding
This worked for me and returned ascii

Convert a bunch of files from guessed encoding to UTF-8

I have this Python script that attempts to detect the character encoding of a text file (in this case, C# .cs source files, but they could be any text file) and then convert them from that character encoding and into UTF-8 (without BOM).
While chardet detects the encoding well enough and the script runs without errors, characters like © are encoded into $. So I assume there's something wrong with the script and my understanding of encoding in Python 2. Since converting files from UTF-8-SIG to UTF-8 works, I have a feeling that the problem is the decoding (reading) part and not the encoding (writing) part.
Can anyone tell me what I'm doing wrong? If switching to Python 3 is a solution, I'm all for it, I then just need help figuring out how to convert the script from running on version 2.7 to 3.4. Here's the script:
import os
import glob
import fnmatch
import codecs
from chardet.universaldetector import UniversalDetector

# from http://farmdev.com/talks/unicode/
def to_unicode_or_bust(obj, encoding='utf-8'):
    if isinstance(obj, basestring):
        if not isinstance(obj, unicode):
            obj = unicode(obj, encoding)
    return obj

def enforce_unicode():
    detector = UniversalDetector()
    for root, dirnames, filenames in os.walk('.'):
        for filename in fnmatch.filter(filenames, '*.cs'):
            detector.reset()
            filepath = os.path.join(root, filename)
            with open(filepath, 'r') as f:
                for line in f:
                    detector.feed(line)
                    if detector.done: break
            detector.close()
            encoding = detector.result['encoding']
            if encoding and not encoding == 'UTF-8':
                print '%s -> UTF-8 %s' % (encoding.ljust(12), filepath)
                with codecs.open(filepath, 'r', encoding=encoding) as f:
                    content = ''.join(f.readlines())
                content = to_unicode_or_bust(content)
                with codecs.open(filepath, 'w', encoding='utf-8') as f:
                    f.write(content)

enforce_unicode()
I have tried to do content = content.decode(encoding).encode('utf-8') before writing the file, but that fails with the following error:
/usr/local/.../lib/python2.7/encodings/utf_8_sig.py:19: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if input[:3] == codecs.BOM_UTF8:
Traceback (most recent call last):
File "./enforce-unicode.py", line 48, in <module>
enforce_unicode()
File "./enforce-unicode.py", line 43, in enforce_unicode
content = content.decode(encoding).encode('utf-8')
File "/usr/local/.../lib/python2.7/encodings/utf_8_sig.py", line 22, in decode
(output, consumed) = codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa9' in position 87: ordinal not in range(128)
Ideas?
chardet simply got the detected codec wrong; your code is otherwise correct. Character detection is based on statistics, heuristics and plain guesses; it is not a foolproof method.
For example, the Windows 1252 codepage is very close to the Latin-1 codec; files encoded with the one encoding can be decoded without error in the other encoding. Detecting the difference between a control code in the one or a Euro symbol in the other usually takes a human being looking at the result.
I'd record the chardet guesses for each file; if a file turns out to be wrongly re-coded, you need to look at what other codecs could be close. All of the 1250-series codepages look a lot alike.
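As a sketch of that record-keeping idea (Python 3 shown for brevity; the CSV filename and the walk root are assumptions, not from the original post):
import csv
import os
import chardet

with open('encoding-guesses.csv', 'w', newline='') as log:
    writer = csv.writer(log)
    writer.writerow(['path', 'encoding', 'confidence'])
    for root, _, filenames in os.walk('.'):
        for name in filenames:
            if name.endswith('.cs'):
                path = os.path.join(root, name)
                with open(path, 'rb') as f:
                    result = chardet.detect(f.read())
                writer.writerow([path, result['encoding'], result['confidence']])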
