Python's os.path choking on Hebrew filenames - python

I'm writing a script that has to move some files around, but unfortunately os.path doesn't seem to play well with internationalized filenames. When files are named in Hebrew, there are problems. Here's a screenshot of the contents of a directory:
[Screenshot of the directory listing, including a Hebrew-named .txt file] (source: thegreenplace.net)
Now consider this code that goes over the files in this directory:
files = os.listdir('test_source')
for f in files:
    pf = os.path.join('test_source', f)
    print pf, os.path.exists(pf)
The output is:
test_source\ex True
test_source\joe True
test_source\mie.txt True
test_source\__()'''.txt True
test_source\????.txt False
Notice how os.path.exists thinks that the Hebrew-named file doesn't even exist?
How can I fix this?
ActivePython 2.5.2 on Windows XP Home SP2

Hmm, after some digging it appears that when supplying os.listdir a unicode string, this kinda works:
files = os.listdir(u'test_source')
for f in files:
    pf = os.path.join(u'test_source', f)
    print pf.encode('ascii', 'replace'), os.path.exists(pf)
===>
test_source\ex True
test_source\joe True
test_source\mie.txt True
test_source\__()'''.txt True
test_source\????.txt True
Some important observations here:
Windows XP (like all NT derivatives) stores all filenames in unicode
os.listdir (and similar functions, like os.walk) should be passed a unicode string in order to work correctly with unicode paths. Here's a quote from the Python Unicode HOWTO:
os.listdir(), which returns filenames, raises an issue: should it return the Unicode version of filenames, or should it return 8-bit strings containing the encoded versions? os.listdir() will do both, depending on whether you provided the directory path as an 8-bit string or a Unicode string. If you pass a Unicode string as the path, filenames will be decoded using the filesystem's encoding and a list of Unicode strings will be returned, while passing an 8-bit path will return the 8-bit versions of the filenames.
And lastly, print can choke on a Unicode string the console can't encode, so the path has to be encoded before printing (here to ASCII, with 'replace' substituting anything that can't be represented).
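As an illustrative sketch (my own, not from the original post; it assumes Python 2.x on Windows and reuses the test_source directory from above), you can ask the console which encoding it actually uses via sys.stdout.encoding and encode with 'replace', so anything the console can't display degrades to '?' instead of raising an error:
import os
import sys

console_encoding = sys.stdout.encoding or 'ascii'  # may be None when output is redirected
for name in os.listdir(u'test_source'):
    path = os.path.join(u'test_source', name)
    # encode only for display; keep the Unicode object for the filesystem calls
    print path.encode(console_encoding, 'replace'), os.path.exists(path)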

It looks like a Unicode vs byte-string issue - os.listdir is returning a list of byte strings here.
Edit: I tried it on Python 3.0, also on XP SP2, and os.listdir simply omitted the Hebrew filenames rather than listing them.
According to the docs, this means it was unable to decode it:
Note that when os.listdir() returns a list of strings, filenames that cannot be decoded properly are omitted rather than raising UnicodeError.

It works like a charm using Python 2.5.1 on OS X:
subdir/bar.txt True
subdir/foo.txt True
subdir/עִבְרִית.txt True
Maybe that means that this has to do with Windows XP somehow?
EDIT: I also tried with unicode strings to try to mimic the Windows behaviour better:
for f in os.listdir(u'subdir'):
    pf = os.path.join(u'subdir', f)
    print pf, os.path.exists(pf)
subdir/bar.txt True
subdir/foo.txt True
subdir/עִבְרִית.txt True
That's in Terminal (the stock OS X command prompt app). Using IDLE it still worked, but didn't print the filename correctly. To make sure it really is unicode there, I checked:
>>> os.listdir(u'subdir')[2]
u'\u05e2\u05b4\u05d1\u05b0\u05e8\u05b4\u05d9\u05ea.txt'

A question mark is the more or less universal symbol displayed when a unicode character can't be represented in a specific encoding. Your terminal or interactive session under Windows is probably using ASCII or ISO-8859-1 or something. So the actual string is unicode, but it gets translated to ???? when printed to the terminal. That's why it works for PEZ, using OSX.
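A quick interactive sketch (hypothetical, with a made-up Hebrew filename) shows that the Unicode object itself is intact; only the encoding step for display turns the Hebrew letters into question marks:
>>> name = u'\u05e2\u05d1\u05e8\u05d9\u05ea.txt'  # five Hebrew letters plus '.txt'
>>> name.encode('ascii', 'replace')
'?????.txt'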

Related

File name encoding in Python 2.7

I want to read files with special file names in Python (2.7). But whatever I try, it always fails to open them.
The filenames are
F\xA8\xB9hrerschein
and
Gro\xDFhandel
I know, the encoding was done with one of several codepages. I could try to find out which one and try to convert it and all the mumbo jumbo, but I don't want that.
Can't I somehow tell python to open that file without having to go through all that encoding stuff? I mean opening the file by its raw name in bytes?
In the end, I fixed it with
reload(sys)
sys.setdefaultencoding('utf-8')
and setting the environment variable
LANG="C.UTF-8"
Thanks for the hints.
One way is to use os.listdir(). See the following example.
Add some data to a file with non-ascii character 0xdf in the name:
$ echo abcd > `printf "A\xdfA"`
Check that the file contains a non-ascii character:
$ ls A*
A?A
Start Python, read the directory and open the first file (which is the one with the non-ascii character):
$ python
>>> import os
>>> d = os.listdir('.')
>>> d
['A\xdfA']
>>> f = open(d[0])
>>> f.readline()
'abcd\n'
>>>
If you have source code like
with open('Großhandel') as input:
    # stuff
You should look at Source Code Encodings and write
#!python2
# -*- coding: utf-8 -*-
with open('Großhandel') as input:
    …
It is worth mentioning that the authors of PEP 263 are Marc-André Lemburg and Martin von Löwis, which I suppose makes the push for defined source encodings back in 2002 slightly more understandable.
Under Linux, filenames can be encoded in any character encoding. When opening a file, you must supply the name encoded exactly as it is on disk.
I.e. if the filename is Großhandel.txt encoded using UTF-8, it must be passed as Gro\xc3\x9fhandel.txt.
If you pass a Unicode string to open(), the user's locale is used to encode the filename, which may or may not match the actual name on disk.
Under OS X, UTF-8 encoding is enforced. Under Windows, the character encoding is abstracted away by the I/O drivers. For these operating systems you should always pass a Unicode object to open(), and it will be converted appropriately.
If you're reading filenames from the filesystem, it would be useful to get decoded Unicode filenames to pass straight to open() - and you can, by passing a Unicode string to os.listdir().
E.g.
Locale: LANG=en_GB.UTF-8
A directory with the following files, with their filenames encoded to UTF-8:
test.txt
€.txt
When running Python 2.7 and passing a byte-string path:
>>> os.listdir(".")
['\xe2\x82\xac.txt', 'test.txt']
Using a Unicode path:
>>> os.listdir(u".")
[u'\u20ac.txt', u'test.txt']
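A small follow-on sketch (my own, assuming the same directory and UTF-8 locale as above): the Unicode names that come back can be handed straight to open(), which re-encodes them with the filesystem encoding.
import os

for name in os.listdir(u"."):          # Unicode path in -> Unicode names out
    if os.path.isfile(name):
        with open(name) as f:          # the Unicode name is re-encoded using the locale (UTF-8 here)
            print repr(name), len(f.read())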

UnicodeDecodeError when performing os.walk

I am getting the error:
'ascii' codec can't decode byte 0x8b in position 14: ordinal not in range(128)
when trying to do os.walk. The error occurs because some of the files in a directory have the 0x8b (non-utf8) character in them. The files come from a Windows system (hence the utf-16 filenames), but I have copied the files over to a Linux system and am using python 2.7 (running in Linux) to traverse the directories.
I have tried passing a unicode start path to os.walk, and all the files & dirs it generates are unicode names until it comes to a non-utf8 name; then, for some reason, it doesn't convert those names to unicode, and the code chokes on them. Is there any way to solve the problem short of manually finding and changing all the offending names?
If there is not a solution in python2.7, can a script be written in python3 to traverse the file tree and fix the bad filenames by converting them to utf-8 (by removing the non-utf8 chars)? N.B. there are many non-utf8 chars in the names besides 0x8b, so it would need to work in a general fashion.
UPDATE: The fact that 0x8b is still only a byte char (just not valid ascii) makes it even more puzzling. I have verified that there is a problem converting such a string to unicode, but that a unicode version can be created directly. To wit:
>>> test = 'a string \x8b with non-ascii'
>>> test
'a string \x8b with non-ascii'
>>> unicode(test)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8b in position 9: ordinal not in range(128)
>>>
>>> test2 = u'a string \x8b with non-ascii'
>>> test2
u'a string \x8b with non-ascii'
Here's a traceback of the error I am getting:
80. for root, dirs, files in os.walk(unicode(startpath)):
File "/usr/lib/python2.7/os.py" in walk
294. for x in walk(new_path, topdown, onerror, followlinks):
File "/usr/lib/python2.7/os.py" in walk
294. for x in walk(new_path, topdown, onerror, followlinks):
File "/usr/lib/python2.7/os.py" in walk
284. if isdir(join(top, name)):
File "/usr/lib/python2.7/posixpath.py" in join
71. path += '/' + b
Exception Type: UnicodeDecodeError at /admin/casebuilder/company/883/
Exception Value: 'ascii' codec can't decode byte 0x8b in position 14: ordinal not in range(128)
The root of the problem occurs in the list of files returned from listdir (on line 276 of os.walk):
names = listdir(top)
The names with chars > 128 are returned as non-unicode strings.
Right, I just spent some time sorting through this error, and the wordier answers here aren't getting at the underlying issue:
The problem is, if you pass a unicode string into os.walk(), then os.walk gets unicode back from os.listdir() for everything it can decode, but undecodable names come back as byte strings; the implicit ASCII conversion that happens when those byte strings are joined with the unicode path is what throws the exception.
The solution is to force the starting path you pass to os.walk to be a regular string - i.e. os.walk(str(somepath)). This means os.listdir returns regular byte strings and everything works the way it should.
You can reproduce this problem (and show it's solution works) trivially like:
Go into bash in some directory and run touch $(echo -e "\x8b\x8bThis is a bad filename") which will make some test files.
Now run the following Python code (iPython Qt is handy for this) in the same directory:
l = []
for root, dir, filenames in os.walk(unicode('.')):
    l.extend([os.path.join(root, f) for f in filenames])
print l
And you'll get a UnicodeDecodeError.
Now try running:
l = []
for root, dir, filenames in os.walk('.'):
    l.extend([os.path.join(root, f) for f in filenames])
print l
No error and you get a print out!
Thus the safe way in Python 2.x is to make sure you only pass byte strings to os.walk(). You absolutely should not pass unicode, or things which might be unicode, to it, because os.walk will then choke when an internal ascii conversion fails.
This problem stems from two fundamental issues. The first is the fact that Python 2.x's default encoding is 'ascii', while the default Linux encoding is 'utf8'. You can verify these encodings via:
sys.getdefaultencoding() #python
sys.getfilesystemencoding() #OS
When the os module functions that return directory contents, namely os.walk and os.listdir, return a list containing a mix of ascii-only and non-ascii filenames, the ascii-encoded filenames are converted automatically to unicode; the others are not. The result is therefore a list containing a mix of unicode and str objects, and it is the str objects that cause problems down the line. Since they are not ascii, python has no way of knowing what encoding to use, so they can't be decoded automatically into unicode.
Therefore, when performing common operations such as os.path.join(dir, file), where dir is unicode and file is an encoded str, the call will fail if the filename is not ascii-encoded (the default). The solution is to check each filename as soon as it is retrieved and decode the str (encoded) objects to unicode using the appropriate encoding.
That's the first problem and its solution. The second is a bit trickier. Since the files originally came from a Windows system, their filenames probably use an encoding called windows-1252. An easy means of checking is to call:
filename.decode('windows-1252')
If a valid unicode version results you probably have the correct encoding. You can further verify by calling print on the unicode version as well and see the correct filename rendered.
One last wrinkle. In a Linux system with files of Windows origin, it is possible or even probable to have a mix of windows-1252 and utf8 encodings. There are two means of dealing with this mixture. The first and preferable one is to run:
$ convmv -f windows-1252 -t utf8 -r DIRECTORY --notest
where DIRECTORY is the one containing the files needing conversion. This command will convert any windows-1252 encoded filenames to utf8. It does a smart conversion, in that if a filename is already utf8 (or ascii), it will do nothing.
The alternative (if one cannot do this conversion for some reason) is to do something similar on the fly in python. To wit:
def decodeName(name):
    if type(name) == str:  # leave unicode ones alone
        try:
            name = name.decode('utf8')
        except UnicodeDecodeError:
            name = name.decode('windows-1252')
    return name
The function tries a utf8 decoding first. If that fails, it falls back to the windows-1252 version. Use this function after an os call that returns a list of files:
for root, dirs, files in os.walk(path):
    files = [decodeName(f) for f in files]
    # do something with the unicode filenames now
I personally found the entire subject of unicode and encoding very confusing, until I read this wonderful and simple tutorial:
http://farmdev.com/talks/unicode/
I highly recommend it for anyone struggling with unicode issues.
I can reproduce the os.listdir() behavior: os.listdir(unicode_name) returns undecodable entries as bytes on Python 2.7:
>>> import os
>>> os.listdir(u'.')
[u'abc', '<--\x8b-->']
Notice: the second name is a bytestring despite listdir()'s argument being a Unicode string.
Python 3 solves the problem of bytes in filenames that can't be decoded using the filesystem's character encoding via the surrogateescape error handler (os.fsencode/os.fsdecode). See PEP 383: Non-decodable Bytes in System Character Interfaces:
>>> os.listdir(u'.')
['abc', '<--\udc8b-->']
Notice: both strings are Unicode (Python 3), and the surrogateescape error handler was used for the second name. To get the original bytes back:
>>> os.fsencode('<--\udc8b-->')
b'<--\x8b-->'
In Python 2, use Unicode strings for filenames on Windows (Unicode API), OS X (utf-8 is enforced) and use bytestrings on Linux and other systems.
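A minimal helper along those lines (my own sketch, not from the answer; fs_path is a hypothetical name, and it assumes your byte paths are UTF-8 encoded):
import sys

def fs_path(path_bytes, encoding='utf-8'):
    # Per the rule above: Unicode paths on Windows and OS X, byte strings
    # elsewhere (e.g. Linux). The 'encoding' of path_bytes is an assumption.
    if sys.platform.startswith(('win', 'darwin')):
        return path_bytes.decode(encoding)
    return path_bytes

# usage: os.listdir(fs_path('incoming'))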
\x8b is not valid as a standalone utf-8 byte sequence. os.path expects the filenames to be in utf-8. If you want to access invalid filenames, you have to pass os.walk a non-unicode (byte string) startpath; this way the os module will not do the utf8 decoding. You would have to do it yourself and decide what to do with filenames that contain incorrect characters.
I.e.:
for root, dirs, files in os.walk(startpath.encode('utf8')):
After examining the source of the error, it turns out that the underlying C listdir routine returns non-unicode filenames when they are not standard ascii. The only fix, therefore, is to do a forced decode of the directory listing within os.walk, which requires a replacement for os.walk. This replacement function works:
def asciisafewalk(top, topdown=True, onerror=None, followlinks=False):
    """
    duplicate of os.walk, except we do a forced decode after listdir
    """
    islink, join, isdir = os.path.islink, os.path.join, os.path.isdir
    try:
        # Note that listdir and error are globals in this module due
        # to earlier import-*.
        names = os.listdir(top)
        # force non-ascii text out
        names = [name.decode('utf8', 'ignore') for name in names]
    except os.error, err:
        if onerror is not None:
            onerror(err)
        return
    dirs, nondirs = [], []
    for name in names:
        if isdir(join(top, name)):
            dirs.append(name)
        else:
            nondirs.append(name)
    if topdown:
        yield top, dirs, nondirs
    for name in dirs:
        new_path = join(top, name)
        if followlinks or not islink(new_path):
            for x in asciisafewalk(new_path, topdown, onerror, followlinks):
                yield x
    if not topdown:
        yield top, dirs, nondirs
By adding the line:
names = [name.decode('utf8','ignore') for name in names]
all the names are proper ascii & unicode, and everything works correctly.
A big question remains however - how can this be solved without resorting to this hack?
I got this problem when using os.walk on some directories with Chinese (unicode) names. I implemented the walk function myself as follows, and it worked fine with unicode dir/file names.
import os

ft = []  # will hold (path, filename, size) tuples

def walk(dir, cur):
    for f in os.listdir(dir):
        full_path = os.path.join(dir, f)
        if os.path.isdir(full_path):
            walk(full_path, cur)
        else:
            path, filename = os.path.split(full_path)
            ft.append((path, filename, os.path.getsize(full_path)))
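A hypothetical invocation of the helper above (the directory name is made up); afterwards ft holds the collected (path, filename, size) tuples:
walk(u'/data/\u4e2d\u6587\u76ee\u5f55', None)   # a Unicode (Chinese) directory name
for path, filename, size in ft:
    print path, filename, size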

Python 3 unicode encode error

I'm using glob.glob to get a list of files from a directory input. When trying to open said files, Python fights me back with this error:
UnicodeEncodeError: 'charmap' codec can't encode character '\xf8' in position 18: character maps to < undefined >
By defining a string variable first, I can do this:
filePath = r"C:\Users\Jørgen\Tables\\"
Is there some way to get the 'r' encoding for a variable?
EDIT:
import glob
di = r"C:\Users\Jørgen\Tables\\"
def main():
    fileList = getAllFileURLsInDirectory(di)
    print(fileList)
def getAllFileURLsInDirectory(directory):
    return glob.glob(directory + '*.xls*')
There is a lot more code, but this problem stops the process.
Regardless of whether you use a raw string literal or a normal string literal, the Python interpreter must know the source code encoding. It seems you use some 8-bit encoding, not UTF-8. Therefore you have to add a line like
# -*- coding: cp1252 -*-
at the beginning of the file (or use whatever encoding your source files actually use). It need not be the first line, but it usually is the first or second (the first should contain #!python3 for a script used on Windows).
Anyway, it is usually better not to use non-ASCII characters in file/directory names.
You can also use normal forward slashes in the path (the same way as on Unix-based systems). Also, have a look at os.path.join when you need to compose paths.
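To illustrate that last point, here is a sketch of my own (not the asker's code; it reuses the directory from the question and assumes the source file is saved as UTF-8): build the pattern with os.path.join and forward slashes, which sidesteps the trailing-backslash problem with raw string literals.
# -*- coding: utf-8 -*-   # must match how the source file is actually saved
import glob
import os

di = "C:/Users/Jørgen/Tables"
fileList = glob.glob(os.path.join(di, "*.xls*"))
print(ascii(fileList))   # ascii() keeps a non-Unicode console from choking (see below)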
Updated
The problem is probably not where you are looking for it. My guess is that the error manifests itself only when you want to display the resulting list via print. This is usually because the console by default uses a non-Unicode encoding that is not capable of displaying the character. Try the chcp command without arguments in your cmd window.
You can modify the print command in your main() function to convert the string representation to the ASCII one that can always be displayed:
print(ascii(fileList))
Please also see:
Convert python filenames to unicode
and
Listing chinese filenames in directory with python
You can tell Python to explicitly handle strings as unicode -- but you have to maintain that from the first string onward.
In this case, that means passing a u'somepath' to os.walk.
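For example, under Python 2.x (where the unicode/str distinction matters), this might look like the following sketch; 'somepath' is just a placeholder:
import os

for dirpath, dirnames, filenames in os.walk(u'somepath'):
    print dirpath, filenames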

open file with a unicode filename?

I don't seem to be able to open a file which has a unicode filename. Lets say I do:
for i in os.listdir():
    open(i, 'r')
When I search for a solution, I always get pages about how to read and write a unicode string to a file, not how to open a file that has a unicode name with file() or open().
Simply pass open() a unicode string for the file name:
In Python 2.x:
>>> open(u'someUnicodeFilenameλ')
<open file u'someUnicodeFilename\u03bb', mode 'r' at 0x7f1b97e70780>
In Python 3.x, all strings are Unicode, so there is literally nothing to it.
As always, note that the best way to open a file is to use the with statement in conjunction with open().
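For instance (a sketch reusing the filename from the example above):
with open(u'someUnicodeFilename\u03bb') as f:
    data = f.read()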
Edit: With regard to os.listdir() the advice again varies; under Python 2.x you have to be careful:
os.listdir(), which returns filenames, raises an issue: should it return the Unicode version of filenames, or should it return 8-bit strings containing the encoded versions? os.listdir() will do both, depending on whether you provided the directory path as an 8-bit string or a Unicode string. If you pass a Unicode string as the path, filenames will be decoded using the filesystem’s encoding and a list of Unicode strings will be returned, while passing an 8-bit path will return the 8-bit versions of the filenames.
(Source: the Python Unicode HOWTO)
So in short, if you want Unicode out, put Unicode in:
>>> os.listdir(".")
['someUnicodeFilename\xce\xbb', 'old', 'Dropbox', 'gdrb']
>>> os.listdir(u".")
[u'someUnicodeFilename\u03bb', u'old', u'Dropbox', u'gdrb']
Note that the file will still open either way - it won't be represented well within Python as it'll be an 8-bit string, but it'll still work.
open('someUnicodeFilename\xce\xbb')
<open file 'someUnicodeFilenameλ', mode 'r' at 0x7f1b97e70660>
Under 3.x, as always, it's always Unicode.
You can try this:
import os
import sys
for filename in os.listdir(u"/your-directory-path/"):
    open(filename.encode(sys.getfilesystemencoding()), "r")

Convert python filenames to unicode

I am on python 2.6 for Windows.
I use os.walk to read a file tree. The files may have non-7-bit characters (the German "ae", for example) in their filenames. These are encoded in Python's internal string representation.
I am processing these filenames with Python library functions and that fails due to wrong encoding.
How can I convert these filenames to proper (unicode?) python strings?
I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:
>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
If you pass a Unicode string to os.walk(), you'll get Unicode results:
>>> list(os.walk(r'C:\example')) # Passing an ASCII string
[('C:\\example', [], ['file.txt'])]
>>>
>>> list(os.walk(ur'C:\example')) # Passing a Unicode string
[(u'C:\\example', [], [u'file.txt'])]
I was looking for a solution for Python 3.0+. Will put it up here in case someone else needs it.
import os
import sys

rootdir = 'D:\\COUNTRY\\ROADS\\'  # note: a raw string literal cannot end in a single backslash
fs_enc = sys.getfilesystemencoding()
for (root, dirname, filename) in os.walk(rootdir.encode(fs_enc)):
    # do your stuff here, but remember that now
    # root, dirname, filename are represented as bytes
    pass
A more direct way might be to try the following: find your file system's encoding, and then convert the filename to unicode. For example,
unicode_name = unicode(filename, "utf-8", errors="ignore")
to go the other way,
unicode_name.encode("utf-8")
os.walk(unicode(root_dir, 'utf-8'))
os.walk isn't specified to always use os.listdir, and neither does it document how Unicode is handled. However, the os.listdir documentation does say:
Changed in version 2.3: On Windows NT/2k/XP and Unix, if path is a Unicode object, the result will be a list of Unicode objects. Undecodable filenames will still be returned as string objects.
Does simply using a Unicode argument work for you?
for dirpath, dirnames, filenames in os.walk(u"."):
    print dirpath
    for fn in filenames:
        print " ", fn
No, they are not encoded in Python's internal string representation; there is no such thing. They are encoded in the encoding of the operating system/file system. Passing in unicode works for os.walk, though.
I don't know how os.walk behaves when filenames can't be decoded, but I assume that you'll get a string back, like with os.listdir(). In that case you'll again have problems later. Also, not all of the Python 2.x standard library will accept unicode parameters properly, so you may need to encode them as strings anyway. So, the problem may in fact be somewhere else, but you'll notice if that is the case. ;-)
If you need more control over the decoding, you can instead pass in a byte string and then decode each filename yourself with
filename = filename.decode()
as usual.
