python3 UnicodeDecodeError while logging to console [duplicate]

This question already has an answer here: Python3 UnicodeDecodeError (1 answer). Closed 4 years ago.
I just ported my webapp to Python 3. I develop on my Mac and deploy to a CentOS server. I ran into many UnicodeDecodeErrors that don't happen in my local test environment but do appear on the deployment server (of course :D )
Most of them I fixed by specifying the encoding when opening files. However, there is one place where I don't know how to specify the encoding, and that is logging. I still get errors such as:
UnicodeEncodeError: 'ascii' codec can't encode character '\xab' in position 85: ordinal not in range(128)
The same problem existed (on both platforms) in Python 2, and it was solved with this:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
which changed the value of sys.getdefaultencoding() from ascii to utf-8.
But in Python 3, sys.getdefaultencoding() is already utf-8 by default (see Why should we NOT use sys.setdefaultencoding("utf-8") in a py script?), so I'm clueless as to what's causing this difference in behavior.
So:
what should I look for to see why the two platforms have different default encodings?
how can I solve this for logging?

I found the answer here: Python3 UnicodeDecodeError. Let me expand:
This is solved by setting the environment value LC_CTYPE to en_US.UTF-8 instead of UTF-8. This can be set in .bashrc:
export LC_CTYPE=en_US.UTF-8
Strangely enough, both my Mac and the deployment server have LC_CTYPE=UTF-8, and on my Mac it just works, but on the deployment server I need to set it to en_US.UTF-8, otherwise it won't work.
But this seems like a weird config from my deployment server because if I set it to UTF-8 it complains like this:
$ export LC_CTYPE=UTF-8
bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
(My Mac doesn't complain.)
So obviously Python is not reading LC_CTYPE directly but rather something else (a locale?) that gets set by setting LC_CTYPE.
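To see what Python actually derives from those environment variables, you can compare the two machines from inside the interpreter. A minimal sketch; locale.getpreferredencoding() is what open() falls back to when no encoding is given:

import locale
import sys

print(locale.getpreferredencoding())  # 'UTF-8' on a healthy locale, 'US-ASCII' or 'ANSI_X3.4-1968' on a broken one
print(sys.stdout.encoding)            # what print() uses; a logging StreamHandler inherits the encoding of the stream it wraps

And for the logging part of the question: file-based handlers accept an explicit encoding, which sidesteps the locale entirely. A sketch, with 'app.log' as a hypothetical filename:

import logging

# FileHandler opens the file itself, so passing encoding here makes the
# handler independent of LC_CTYPE on the deployment server.
handler = logging.FileHandler('app.log', encoding='utf-8')
logging.getLogger().addHandler(handler)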

Related

Redirecting python output to a file causes UnicodeEncodeError on Windows

I'm trying to redirect the output of a Python script to a file. When the output contains non-ASCII characters, it works on macOS and Linux, but not on Windows.
I've reduced the problem to a simple test. The following is what is shown in a Windows command prompt window. The test is a single print call.
Microsoft Windows [Version 10.0.17134.472]
(c) 2018 Microsoft Corporation. All rights reserved.
D:\>set PY
PYTHONIOENCODING=utf-8
D:\>type pipetest.py
print('\u0422\u0435\u0441\u0442')
D:\>python pipetest.py
Тест
D:\>python pipetest.py > test.txt
D:\>type test.txt
Тест
D:\>type test.txt | iconv -f utf-8 -t utf-8
Тест
D:\>set PYTHONIOENCODING=
D:\>python pipetest.py
Тест
D:\>python pipetest.py > test.txt
Traceback (most recent call last):
File "pipetest.py", line 1, in <module>
print('\u0422\u0435\u0441\u0442')
File "C:\Python\Python37\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-3: character maps to <undefined>
D:\>python -V
Python 3.7.2
As one can see, setting the PYTHONIOENCODING environment variable helps, but I don't understand why it needs to be set. When the output is a terminal it works, but if the output is a file it fails. Why is cp1252 used when stdout is not a console?
Maybe it is a bug and can be fixed in the Windows version of Python?
Based on the Python documentation, the Windows version uses different character encodings for console devices (UTF-8) and for non-character devices such as disk files and pipes (the system locale). PYTHONIOENCODING can be used to override this.
https://docs.python.org/3/library/sys.html#sys.stdout
Another method is to change the encoding directly in the program; I tried it and it works fine.
sys.stdout.reconfigure(encoding='utf-8')
https://docs.python.org/3/library/io.html#io.TextIOWrapper.reconfigure
Python needs to write binary data (not strings) to stdout, hence the requirement for an encoding.
The encoding (used to convert strings into bytes) is determined differently on each platform:
on Linux and macOS it comes from the current locale;
on Windows it is the "Current language for non-Unicode programs" (the codepage set in the command-line window is irrelevant).
(Thanks to #Eric Leung for precise link)
The follow-up question would be why Python on Windows uses the system's current language for non-Unicode programs, and not what is set by the chcp command, but I will leave that for someone else.
It also needs to be mentioned that there is a checkbox titled "Beta: Use Unicode UTF-8..." in the Region Settings on Windows 10 (to open them: Win+R, type intl.cpl). Checking it makes the above example work without error, but the checkbox is off by default and buried deep in the system settings.
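Putting those pieces together, a defensive sketch for scripts whose output may be redirected on Windows (assumes Python 3.7+ for reconfigure(); choosing 'utf-8' is our assumption, not something Python mandates):

import sys

# When stdout is not a console (redirected to a file or pipe), Windows falls
# back to the ANSI codepage; reconfigure to UTF-8 so redirection behaves the
# same as printing to the console.
if not sys.stdout.isatty():
    sys.stdout.reconfigure(encoding='utf-8')

print('\u0422\u0435\u0441\u0442')  # safe both on the console and when redirected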

opening another python 3 process from subprocess.Popen is setting locale default encoding to ANSI_X3.4-1968 but only in certain instances

This is driving me nuts. I have a main Python 3 (3.5.2) driver program that uses subprocess.Popen to spawn additional Python 3 processes, which I communicate with using rpyc. This has been working well, especially in Python 2.
I've successfully converted to Python 3 and have verified that all of these processes spawn successfully if run from a terminal.
To launch them from my driver, it looks like this.
cmd_one = "/path/to/.virtualenv/venv_one/bin/python file_a.py"
cmd_two = "/path/to/.virtualenv/venv_two/bin/python file_b.py"
s_one = subprocess.Popen(cmd_one.split(), stdout=logfile, stderr=logfile)
s_two = subprocess.Popen(cmd_two.split(), stdout=logfile, stderr=logfile)
This worked great in Python 2.7.
As I upgrade to Python 3, however, I'm seeing something weird with the default encoding that I can't figure out. For cmd_one it works great: if I do a
import locale
print(locale.getpreferredencoding())
it returns UTF-8 like I'd expect. However, for cmd_two I am getting ANSI_X3.4-1968 for seemingly no reason, and it's throwing a boatload of UnicodeDecodeErrors as a result. Like I said, when spawned in the terminal both cmd_one and cmd_two work great and use the proper default encoding.
I've searched extensively but this seems to be a special case. I don't want to force the default encoding because I feel like that would mask some other issue. Is there something in file_b.py and its constituents that is somehow setting the encoding to ASCII when it doesn't see that it's run in a terminal? file_b.py is part of a large Tensorflow project, and there are about 8 files it draws upon; I've looked in all of them but can't find anything.
This is on Ubuntu 16.04 where the default Python 3 is 3.5.2, and as far as I know there's no way to pass encoding='utf-8' to Popen.
Any suggestions on what the heck is going on?
Thanks.
OP here. I think I found a solution, but I still don't know why I need to do this only for this specific instance; hopefully somebody can weigh in so that I can understand this better.
From:
https://webkul.com/blog/setup-locale-python3/
When I run:
locale
in my terminal and as a subprocess, I get:
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
What fixed the default encoding for me was to set the locale environment variables LANGUAGE=en_US.en and LC_ALL=en_US.UTF-8 and pass them directly to the subprocess with Popen.
s = subprocess.Popen(cmd_two.split(), env={'LANGUAGE':'en_US.en', 'LC_ALL':'en_US.UTF-8'})
Now it properly identifies the default encoding as UTF-8 in my subprocess and everything works.
Can anybody explain this to me? I don't need to do this with my other subprocess and it works just fine.
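One caveat about that call (and possibly part of the answer): passing env= replaces the child's entire environment, so anything else file_b.py needs (PATH, CUDA variables for the Tensorflow stack, and so on) disappears. A sketch of a safer variant that inherits the parent environment and overrides only the locale:

import os
import subprocess

# Copy the parent environment and override just the locale variables.
env = dict(os.environ, LANGUAGE='en_US.en', LC_ALL='en_US.UTF-8')
s = subprocess.Popen(cmd_two.split(), stdout=logfile, stderr=logfile, env=env)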

Accented characters in Python 2.7 [duplicate]

I'm running a recent Linux system where all my locales are UTF-8:
LANG=de_DE.UTF-8
LANGUAGE=
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
...
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=
Now I want to write UTF-8 encoded content to the console.
Right now Python uses UTF-8 for the FS encoding but sticks to ASCII for the default encoding :-(
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.getfilesystemencoding()
'UTF-8'
I thought the best (clean) way to do this was to set the PYTHONIOENCODING environment variable. But it seems that Python ignores it. At least on my system, I keep getting ascii as the default encoding, even after setting the envvar.
# tried this in ~/.bashrc and ~/.profile (also sourced them)
# and on the commandline before running python
export PYTHONIOENCODING=UTF-8
If I do the following at the start of a script, it works though:
>>> import sys
>>> reload(sys) # to enable `setdefaultencoding` again
<module 'sys' (built-in)>
>>> sys.setdefaultencoding("UTF-8")
>>> sys.getdefaultencoding()
'UTF-8'
But that approach seems unclean. So, what's a good way to accomplish this?
Workaround
Instead of changing the default encoding - which is not a good idea (see mesilliac's answer) - I just wrap sys.stdout with a StreamWriter like this:
import codecs, locale, sys
sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
See this gist for a small utility function that handles it.
It seems accomplishing this is not recommended.
Fedora suggested using the system locale as the default, but apparently this breaks other things.
Here's a quote from the mailing-list discussion:
The only supported default encodings in Python are:
Python 2.x: ASCII
Python 3.x: UTF-8
If you change these, you are on your own and strange things will
start to happen. The default encoding does not only affect
the translation between Python and the outside world, but also
all internal conversions between 8-bit strings and Unicode.
Hacks like what's happening in the pango module (setting the
default encoding to 'utf-8' by reloading the site module in
order to get the sys.setdefaultencoding() API back) are just
downright wrong and will cause serious problems since Unicode
objects cache their default encoded representation.
Please don't enable the use of a locale based default encoding.
If all you want to achieve is getting the encodings of
stdout and stdin correctly setup for pipes, you should
instead change the .encoding attribute of those (only).
--
Marc-Andre Lemburg
eGenix.com
This is how I do it:
#!/usr/bin/python2.7 -S
import sys
sys.setdefaultencoding("utf-8")
import site
Note the -S in the shebang line. That tells Python not to automatically import the site module. The site module is what sets the default encoding and then removes the sys.setdefaultencoding method so it can't be set again, but it will honor what is already set.
How to print UTF-8 encoded text to the console in Python < 3?
print u"some unicode text \N{EURO SIGN}"
print b"some utf-8 encoded bytestring \xe2\x82\xac".decode('utf-8')
i.e., if you have a Unicode string then print it directly. If you have
a bytestring then convert it to Unicode first.
Your locale settings (LANG, LC_CTYPE) indicate a utf-8 locale and
therefore (in theory) you could print a utf-8 bytestring directly and it
should be displayed correctly in your terminal (if terminal settings
are consistent with the locale settings and they should be) but you
should avoid it: do not hardcode the character encoding of your
environment inside your script; print Unicode directly instead.
There are many wrong assumptions in your question.
You do not need to set PYTHONIOENCODING to print Unicode to the terminal if your locale settings are correct: a utf-8 locale supports all Unicode characters, i.e., it works as is.
You do not need the workaround sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout). It may break if some code (that you do not control) needs to print bytes, and it may break while printing Unicode to the Windows console (wrong codepage, can't print undecodable characters). Correct locale settings and/or the PYTHONIOENCODING envvar are enough. Also, if you need to replace sys.stdout, then use io.TextIOWrapper() instead of the codecs module, like the win-unicode-console package does (see the sketch after this list).
sys.getdefaultencoding() is unrelated to your locale settings and to
PYTHONIOENCODING. Your assumption that setting PYTHONIOENCODING
should change sys.getdefaultencoding() is incorrect. You should
check sys.stdout.encoding instead.
sys.getdefaultencoding() is not used when you print to the console. It may be used as a fallback on Python 2 if stdout is redirected to a file/pipe, unless PYTHONIOENCODING is set:
$ python2 -c'import sys; print(sys.stdout.encoding)'
UTF-8
$ python2 -c'import sys; print(sys.stdout.encoding)' | cat
None
$ PYTHONIOENCODING=utf8 python2 -c'import sys; print(sys.stdout.encoding)' | cat
utf8
Do not call sys.setdefaultencoding("UTF-8"); it may corrupt your data silently and/or break 3rd-party modules that do not expect it. Remember that sys.getdefaultencoding() is used to convert bytestrings (str) to/from unicode in Python 2 implicitly, e.g., "a" + u"b". See also the quote in #mesilliac's answer.
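For completeness, here is what the io.TextIOWrapper replacement mentioned above might look like on Python 3. A sketch only; 'utf-8' is an assumption, use whatever encoding the consumer of your output expects:

import io
import sys

# Rewrap the underlying byte stream with an explicit encoding; line_buffering
# keeps print() flushing per line, as it does for a terminal.
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', line_buffering=True)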
If the program does not display the appropriate characters on the screen (i.e., you see invalid symbols), run the program with the following command line:
PYTHONIOENCODING=utf8 python3 yourprogram.py
Or the following, if your program is a globally installed module:
PYTHONIOENCODING=utf8 yourprogram
On some platforms, such as Cygwin (mintty.exe terminal) with Anaconda Python (or Python 3), simply running export PYTHONIOENCODING=utf8 and then running the program does not work; you are required to prefix the command every time, PYTHONIOENCODING=utf8 yourprogram, to run the program correctly.
On Linux, in the case of sudo, you can try passing the -E argument to export the user's environment variables to the sudo process:
export PYTHONIOENCODING=utf8
sudo -E python yourprogram.py
If you try this and it does not work, you will need to enter a sudo shell:
sudo /bin/bash
PYTHONIOENCODING=utf8 yourprogram
Related:
How to print UTF-8 encoded text to the console in Python < 3?
Changing default encoding of Python?
Forcing UTF-8 over cp1252 (Python3)
Permanently set Python path for Anaconda within Cygwin
https://superuser.com/questions/1374339/what-does-the-e-in-sudo-e-do
Why bash -c 'var=5 printf "$var"' does not print 5?
https://unix.stackexchange.com/questions/296838/whats-the-difference-between-eval-and-exec
While the OP's question is about Linux: if you ended up here through a search engine, on Windows 10 the following fixes the issue:
set PYTHONIOENCODING=utf8
python myscript.py

UnicodeDecodeError when ssh from OS X

My Django app loads some files on startup (or when I execute a management command). When I ssh from one of my Arch or Ubuntu machines, everything works fine; I am able to successfully run any commands and migrations.
But when I ssh from OS X (I have El Capitan) and try to do the same things, I get this error:
UnicodeDecodeError: 'ASCII' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
To open my files I use with open(path_to_file) as f: ...
The error happens when sshing from both iTerm and Terminal. I found out that the reason was the LC_CTYPE environment variable. It wasn't set on my other Linux machines, but on the Mac it was UTF-8, so after I ssh to the server it gets set the same way. The error was fixed after I unset LC_CTYPE.
So the actual question is: what happened, and how do I avoid this in the future? I can unset this variable on my local machine, but will that have negative effects? And what is the best way of doing this?
Your terminal at your local machine uses a character encoding. The encoding it uses appears to be UTF-8. When you log on to your server (BTW, what OS does it run?) the programs that run there need to know what encoding your terminal supports so that they display stuff as needed. They get this information from LC_CTYPE. ssh correctly sets it to UTF-8, because that's what your terminal supports.
When you unset LC_CTYPE, then your programs use the default, ASCII. The programs now display in ASCII instead of UTF-8, which works because UTF-8 is backward compatible with ASCII. However, if a program needs to display a special character that does not exist in ASCII, it won't work.
Although from the information you give it's not entirely clear to me why the system behaves in this way, I can tell you that unsetting LC_CTYPE is a bad workaround. To avoid problems in the future, it would be better to make sure that all your terminals in all your machines use UTF-8, and get rid of ASCII.
When you try to open a file, Python uses the terminal's (i.e. LC_CTYPE's) character set. I've never quite understood why it's made this way; why should the character set of your terminal indicate the encoding a file has? However, that's the way it's made and the way to fix the problem correctly is to use the encoding parameter of open if you are using Python 3, or the codecs standard library module if you are using Python 2.
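Both variants in one sketch (path_to_file as in the question; 'utf-8' assumes that is what the files actually contain):

# Python 3: name the file's encoding explicitly instead of inheriting it
# from the terminal's locale
with open(path_to_file, encoding='utf-8') as f:
    content = f.read()

# Python 2: the codecs module gives open() the same explicit-encoding behavior
import codecs
with codecs.open(path_to_file, encoding='utf-8') as f:
    content = f.read()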
I had a similar issue after updating my OS X: when ssh-ing to a UNIX server, the copyright character was not encoded because the UTF-8 locale was not properly set up. I solved the issue by unchecking the setting "Set locale environment variables on startup" in the preferences of my terminal(s).

Output ascii characters to stdout in Python 3

I have a file named 'xxx.py' like this:
print("a simple string")
and when I run that like this (Python 3):
python xxx.py >atextfile.txt
I get a unicode file.
I would like an ascii file.
I don't mind if an exception is thrown when an attempt is made to print a non-ASCII character.
What is a simple change I can make to my code that will output ascii characters?
My searches turn up solutions that all seem too verbose for such a simple problem.
[Edit] to report what I learned from setting LC_CTYPE:
I am running on Windows 7.
When running on the PowerShell command line I get a Unicode file (two bytes per character).
When running in a .bat file without LC_CTYPE set I get an ASCII file (which could be UTF-8, as #jwodder pointed out).
When running in a .bat file with LC_CTYPE=ascii set I get presumably an ASCII file (1 byte per character).
The stdout encoding is defined by the environment that executes the Python script, e.g.:
$ python -c "import sys; print(sys.stdout.encoding)"
UTF-8
$ LC_CTYPE=ascii python -c "import sys; print(sys.stdout.encoding)"
US-ASCII
Try adjusting your environment before running the script. You can force the encoding value for Python by setting the PYTHONIOENCODING environment variable.
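Since an exception on non-ASCII output is acceptable here, forcing ASCII via the environment would look like this (a sketch; on Windows cmd run set PYTHONIOENCODING=ascii on its own line first):

PYTHONIOENCODING=ascii python xxx.py > atextfile.txt

With that setting, print() raises UnicodeEncodeError as soon as it hits a character outside ASCII, which matches the behavior you asked for.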
