I'm trying to run my Kivy app using autorun on my Raspberry Pi.
After restarting the OS it runs, but during execution I ran into an encoding problem on the following lines of code:
CommonData.deviceSettings.Measurement.Alchogol = {}
for s in alchogolSettings:
    key = s["Name"].encode('utf-8').strip()
    value = s["Value"].encode('utf-8').strip()
    CommonData.deviceSettings.Measurement.Alchogol.update({key: value})
The following error occurred during execution:
'ascii' codec can't encode characters in position 0-14: ordinal not in range(128)
At the top of my .py file I put these lines:
#!/usr/bin/python
# -*- coding: utf8 -*-
The most interesting thing about this situation is that if I run the program from a normal terminal, it launches successfully; but when I try to launch it using autorun, this problem occurs.
Does anybody know why this problem occurs and how to avoid it?
I found the reason for this problem. In my case I run a Python script which launches a terminal and starts another Python script. That second script prints some Cyrillic text to the terminal, and this is where the problem lies. After deleting that print statement I avoided this error. The print statement was located one line after the code I showed in this post.
I just ported my webapp to Python 3. I develop on a Mac and deploy on a CentOS server. I found many UnicodeDecodeErrors that don't happen in my local test environment but appear on the deployment server (of course :D).
Most of them I fixed by specifying the encoding when opening files. However, there is one place where I don't know how to specify an encoding, and that is logging. I still get errors such as:
UnicodeEncodeError: 'ascii' codec can't encode character '\xab' in position 85: ordinal not in range(128)
The same problem existed (on both platforms) in Python 2, and it was solved with this:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
which changed the value of sys.getdefaultencoding() from ascii to utf-8.
But now, in Python 3, sys.getdefaultencoding() is already utf-8 by default (Why should we NOT use sys.setdefaultencoding("utf-8") in a py script?), so I'm clueless about what's causing this difference in behavior.
So:
what should I look for to see why both platforms are having different defaults for encoding?
how can I solve this for logging?
I found the answer here: Python3 UnicodeDecodeError. Let me expand:
This is solved by setting the environment value LC_CTYPE to en_US.UTF-8 instead of UTF-8. This can be set in .bashrc:
export LC_CTYPE=en_US.UTF-8
Strangely enough, both my mac and deployment server have LC_CTYPE=UTF-8 and in my mac it just works, but in the deployment server I need to set it to en_US.UTF-8 otherwise it won't work.
But this seems like a weird config from my deployment server because if I set it to UTF-8 it complains like this:
$ export LC_CTYPE=UTF-8
bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
(My Mac doesn't complain.)
So obviously Python is not reading LC_CTYPE directly but rather something else (a locale?) that gets set as a result of setting LC_CTYPE.
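To see what Python actually derives from the locale, the standard locale module can be queried (the module and function are standard; the printed value depends on the environment):

```python
import locale

# Python derives its preferred I/O encoding from the parsed locale, not from
# the raw LC_CTYPE string. A value like plain "UTF-8" (with no language part)
# is not a valid locale name on some systems, which can make the lookup
# silently fall back to ASCII.
print(locale.getpreferredencoding())  # e.g. 'UTF-8' under LC_CTYPE=en_US.UTF-8
```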
So I have a program that I want to autostart in Raspberry Pi. My program is supposed to grab some api-info online and then display it on a little screen. I've added these lines to rc.local:
sudo python ./home/pi/Documents/Skanetrafiken_projekt/testStart.py &
sudo python ./home/pi/Documents/Skanetrafiken_projekt/main.py &
The testStart.py just tests the display, and it works fine: the screen lights up when the Raspberry Pi boots. The main.py won't work at all, however. At the beginning of main.py I even put the same code as in testStart.py, just to see if the display lights up, but it doesn't, which is super weird to me.
Could it be something to do with main.py connecting to the internet? I tried enabling "Wait for network at boot" in the raspi-config settings, but that didn't help.
main.py works fine when I run it manually. I also tried starting it with cron, but that didn't work either. I don't have that much experience.
Any ideas?
Try using sudo crontab -e and then adding @reboot python /path/to/your/script.py; this should then run your script every time you boot up. Note that the keyword is @reboot, not #reboot (a leading # turns the line into a comment), and sudo is unnecessary inside the root crontab.
I tested running the script from the prompt rather than launching it directly, and I guess the prompt uses another interpreter or something, because now I got many errors that I didn't get before, including that I have to add the line
# -*- coding: utf-8 -*-
for it to understand my comments. So now it works anyway.
My Django app loads some files on startup (or when I execute management command). When I ssh from one of my Arch or Ubuntu machines all works fine, I am able to successfully run any commands and migrations.
But when I ssh from OS X (I have El Capitan) and try to do the same things, I get this error:
UnicodeDecodeError: 'ASCII' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
To open my files I use with open(path_to_file) as f: ...
The error happens when sshing from both iTerm and Terminal. I found out that the reason was the LC_CTYPE environment variable. It wasn't set on my other Linux machines, but on the Mac it was UTF-8, so after I ssh to the server it is set the same there. The error was fixed after I unset LC_CTYPE.
So the actual question is: what happened, and how do I avoid it from now on? I can unset this variable on my local machine, but will that have negative side effects? And what is the best way of doing this?
Your terminal at your local machine uses a character encoding. The encoding it uses appears to be UTF-8. When you log on to your server (BTW, what OS does it run?) the programs that run there need to know what encoding your terminal supports so that they display stuff as needed. They get this information from LC_CTYPE. ssh correctly sets it to UTF-8, because that's what your terminal supports.
When you unset LC_CTYPE, then your programs use the default, ASCII. The programs now display in ASCII instead of UTF-8, which works because UTF-8 is backward compatible with ASCII. However, if a program needs to display a special character that does not exist in ASCII, it won't work.
Although from the information you give it's not entirely clear to me why the system behaves in this way, I can tell you that unsetting LC_CTYPE is a bad workaround. To avoid problems in the future, it would be better to make sure that all your terminals in all your machines use UTF-8, and get rid of ASCII.
When you try to open a file, Python uses the terminal's (i.e. LC_CTYPE's) character set. I've never quite understood why it's made this way; why should the character set of your terminal indicate the encoding a file has? However, that's the way it's made and the way to fix the problem correctly is to use the encoding parameter of open if you are using Python 3, or the codecs standard library module if you are using Python 2.
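A minimal sketch of that fix in Python 3 (the filename and text here are placeholders; the demo writes the file first so the read is self-contained):

```python
# Demo setup: create a UTF-8 file containing non-ASCII text.
with open("data.txt", "w", encoding="utf-8") as f:
    f.write("Привет")

# The fix: state the file's encoding explicitly instead of relying on the
# locale-derived default, so reading works regardless of LC_CTYPE.
with open("data.txt", encoding="utf-8") as f:
    content = f.read()
print(content)
```

On Python 2, codecs.open("data.txt", encoding="utf-8") from the standard codecs module plays the same role.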
I had a similar issue after updating my OS X: ssh-ing to a UNIX server, the copyright character was not encoded because the UTF-8 locale was not properly set up. I solved the issue by unchecking the setting "Set locale environment variables on startup" in the preferences of my terminal(s).
I'm trying to execute a script on a schedule. It is declared as UTF-8 at the beginning of the file:
# -*- coding: utf-8 -*-
I've set Task Scheduler in windows to run a batch file that calls the py file for execution.
python C:/Users/admin_4190248/Desktop/Howard/redshift_howard.py
However, when the script runs in either CMD or PowerShell, it throws a "can't encode" error like this:
UnicodeDecodeError: 'ascii' codec can't encode character u'\u2013'...
Which I don't understand, since the file executes fine in an ipython console.
My question is therefore, is there another (more ipython like) tool that I can use to execute my script on a schedule? Or, can you help to solve this UTF-8 issue within CMD?
I am downloading data from a MySQL database. Some of the data is in Korean. When I try to print the string before putting it in a table (Qt), the windows command prompt returns:
File "C:\Python27\lib\encodings\cp437.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-2: character maps to (undefined)
However, when I use IDLE to run the code, it prints the Korean text fine. This caused me a lot of headache when trying to debug why my program was not working, since I normally run it by clicking the Python file in its folder; only when using IDLE did it turn out that everything works.
Is there something wrong with my Python installation, my Windows installation, or the Python code that tries to print the characters? I assume it isn't the Python code, since it works in IDLE. Also, using a special function to print on Windows seems bad, as it limits the code's portability to other OSes (or will every OS have this problem?)
IDLE is based on tkinter, which is based on tcl/tk, which supports the entire Basic Multilingual Plane (BMP). (But tcl/tk does not support the supplementary planes with other characters.) On Windows, the Python interactive interpreter runs in the same console window used by Command Prompt. This only supports code-page subsets of the BMP, sometimes only 256 of its 65,536 (2^16) characters.
The codepage that supports ASCII and Korean is 949. (Easy Google search.) In Command Prompt, chcp 949 should change to that codepage. If you then start Python, you should be able to display Korean characters.
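A quick sketch of the difference: the same Hangul string encodes under the Korean codepage but not under the default US console codepage.

```python
s = u"한국어"  # "Korean language" in Hangul

# cp949, the Korean codepage, covers these Hangul syllables:
encoded = s.encode("cp949")

# cp437, the default US console codepage, does not, so this raises
# the 'charmap' UnicodeEncodeError seen in the traceback above:
try:
    s.encode("cp437")
except UnicodeEncodeError as e:
    print(e)
```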