The following line of code raises SyntaxError: invalid syntax:
#coding=utf-8
result_3_logspace_mean_proportion_сorrect_answers = Exception('3_logspace_mean_proportion_сorrect_answers').get_result(result_1_main)
while this second version does not:
#coding=utf-8
result_3 = Exception('3_logspace_mean_proportion_сorrect_answers').get_result(result_1_main)
How to deal with the problem? I strongly want the first variable name.
The letter at the beginning of "correct" is not an ASCII c; it's a Cyrillic с, and Python 2 only accepts ASCII in source code by default.
The first variable name contains a non-ASCII character: the first "c" in "correct" is a small Cyrillic es. You can see this if you encode the string:
#Python3
>>> 'result_3_logspace_mean_proportion_сorrect_answers'.encode()
b'result_3_logspace_mean_proportion_\xd1\x81orrect_answers'
#Python2
>>> u'result_3_logspace_mean_proportion_сorrect_answers'.encode('utf8')
'result_3_logspace_mean_proportion_\xd1\x81orrect_answers'
Substituting it for a normal "c" fixes the issue.
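If you want to hunt down look-alike characters like this, one quick check (a minimal Python 3 sketch) is to print the code point and Unicode name of anything non-ASCII in the identifier:

import unicodedata

name = 'result_3_logspace_mean_proportion_сorrect_answers'
for ch in name:
    if ord(ch) > 127:
        # prints the offending character, its code point and its Unicode name,
        # e.g. 'с' U+0441 CYRILLIC SMALL LETTER ES
        print(repr(ch), 'U+%04X' % ord(ch), unicodedata.name(ch))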
In Python, I am trying to replace a symbol in a string.
I have this string:
a = "• HELLO • HOW • ARE • AYOU"
I want to replace the "•" by ";".
I tried that, but no modification to my string:
b = a.replace("", ";")
I tried that as well, that works in Python:
b = a.replace("•", ";")
but when I launched in my spark-submit, I have this error:
SyntaxError: Non-UTF-8 code starting with '\x95' in file file_test.py on line 392, but no encoding declared;
thank you for your help
The Unicode code point of • is 8226, which can be found using ord("•").
Try changing the line to b = a.replace(chr(8226), ";")
The error message tells you that you need to declare an encoding in your source file. You do this by including the following command at the beginning:
# coding=utf-8
(Either as the very first line, or as the second line behind the shebang declaration.)
Your first code doesn’t work because &#8226; is an HTML entity. It has nothing to do with Python. In Python, instead of using the Unicode character in code, you could also use an escape sequence for the bullet character:
a.replace('\u2022', ';')
(U+2022 is the Unicode code point “BULLET”.)
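Putting both fixes together, a minimal Python 3 sketch might look like this (the coding line covers the literal bullets in the source; the escape works either way):

# coding=utf-8
a = "• HELLO • HOW • ARE • AYOU"
b = a.replace('\u2022', ';')  # '\u2022' is the escape for '•'
print(b)                      # ; HELLO ; HOW ; ARE ; AYOU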
I am using python 2.7 and used Chinese characters in my code, so...
# coding = utf-8
and the problem is part of my code, as follows:
def fileoutput():
    global percent_shown
    date = str(datetime.datetime.now()).decode('utf-8')
    with open("result.txt","a") as datafile:
        datafile.write(date+" "+str(percent_shown.get()))
percent_shown is a string that includes Chinese characters
When I run it, I get:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
How to fix it? Thanks
As per PEP 263, the coding declaration must match the regular expression r"^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)", so you need to get rid of the space between "coding" and the equals sign:
# coding=utf-8
This declaration tells Python that the .py file itself is utf-8 encoded, but it doesn't change the rest of the program. This is useful if you are writing unicode literals, but you still need to convert them to unicode properly to make sure things work.
Since you haven't shown us what you are trying to print, I found some Chinese characters to demonstrate. I have no idea what they mean... so apologies to anyone I insult!
foo = u"学而设"  # Good! you've got a unicode string
bar = "学而设"   # Bad! you've got a utf-8 encoded string that python thinks is ascii
I think you can fix your program with a few tweaks. First, don't try to decode datetime.now(). It's just ASCII. It didn't change its return type just because you declared the source file encoding. Second, use the codecs module to open the file with the encoding you want (I'm assuming it's utf-8). Now, since you are working with unicode strings, you can write them directly to the file.
import codecs
import datetime

def fileoutput():
    date = unicode(datetime.datetime.now())
    with codecs.open("result.txt", "a", encoding="utf-8") as datafile:
        datafile.write(date + " " + percent_shown.get())
You can't have whitespace before the = in your coding comment. Try:
# coding=utf-8
See the regular expression in: https://www.python.org/dev/peps/pep-0263/
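A quick way to see why that space matters is to run the PEP 263 pattern yourself (a small sketch):

import re

# the first-or-second-line pattern from PEP 263
CODING_RE = re.compile(r"^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)")

print(CODING_RE.match("# coding=utf-8").group(1))  # utf-8
print(CODING_RE.match("# coding = utf-8"))         # None: the space breaks it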
My aim is to convert the percent-encoded (URL-encoded) string "%EB" to "\xEB". However, as soon as I tried, I found that this is hard and can't be achieved with string.replace or re.sub.
My code failed as below:
target = '%EB%AF%B8%EB%9F%AC%EC%8A%A4%20%EC%97%A3%EC%A7%80'
target.replace('%','\x')
-> ValueError: invalid \x escape
re.sub('%','\x',target)
-> ValueError: invalid \x escape
UPDATED:
Thanks for the comments. I tried r'\x' and '\\x'; however, it seems that neither is a solution.
for example,
target = '%EB%AF%B8%EB%9F%AC%EC%8A%A4%20%EC%97%A3%EC%A7%80'
converted1 = target.replace('%',r'\x')
converted2 = target.replace('%','\\x')
converted1
-> '\\xEB\\xAF\\xB8\\xEB\\x9F\\xAC\\xEC\\x8A\\xA4\\x20\\xEC\\x97\\xA3\\xEC\\xA7\\x80'
converted2
-> '\\xEB\\xAF\\xB8\\xEB\\x9F\\xAC\\xEC\\x8A\\xA4\\x20\\xEC\\x97\\xA3\\xEC\\xA7\\x80'
Results:
print converted1
\xEB\xAF\xB8\xEB\x9F\xAC\xEC\x8A\xA4\x20\xEC\x97\xA3\xEC\xA7\x80
print converted2
\xEB\xAF\xB8\xEB\x9F\xAC\xEC\x8A\xA4\x20\xEC\x97\xA3\xEC\xA7\x80
What I want to have is:
print "\xEB\xAF\xB8\xEB\x9F\xAC\xEC\x8A\xA4\x20\xEC\x97\xA3\xEC\xA7\x80"
미러스 엣지
The replace method cannot decode a percent-encoded (URL-safe) string. It just replaces the character % with the literal two characters \ and x.
If you want to decode a percent-encoded string, you should use urllib.unquote.
import urllib
target = '%EB%AF%B8%EB%9F%AC%EC%8A%A4%20%EC%97%A3%EC%A7%80'
print urllib.unquote(target)
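For reference, if you are on Python 3, the same function lives in urllib.parse, which also decodes the resulting bytes as UTF-8 by default:

from urllib.parse import unquote

target = '%EB%AF%B8%EB%9F%AC%EC%8A%A4%20%EC%97%A3%EC%A7%80'
print(unquote(target))  # 미러스 엣지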
>>> target = '%EB%AF%B8%EB%9F%AC%EC%8A%A4%20%EC%97%A3%EC%A7%80'
>>> target.replace('%',r'\x')
'\\xEB\\xAF\\xB8\\xEB\\x9F\\xAC\\xEC\\x8A\\xA4\\x20\\xEC\\x97\\xA3\\xEC\\xA7\\x80'
Why is '\x' invalid in Python? Because \x must be followed by exactly two hexadecimal digits, so a bare '\x' escape is malformed.
For the second part of your code, use:
print target.replace('%',r'\x').decode('string-escape')
Though this fixes your error, the best solution is the one by @kamae.
I think you missed the difference between the interactive Python prompt and Python source code. What your code actually does is replace the character "%" in the string with the two characters "\" and "x".
What you did at the interactive prompt was enter a string whose escape codes were interpreted at the moment the string was created (when you pressed Enter). The resulting string contains the actual bytes of the binary representation of your Korean characters.
Converting unicode codepoints to UTF8 hex in Python may help you.
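For example, a small Python 2 sketch going from the unicode string to the \xNN form you were after:

# -*- coding: utf-8 -*-
s = u'미러스 엣지'
utf8_bytes = s.encode('utf-8')
print ''.join('\\x%02X' % ord(b) for b in utf8_bytes)
# \xEB\xAF\xB8\xEB\x9F\xAC\xEC\x8A\xA4\x20\xEC\x97\xA3\xEC\xA7\x80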
I have a feature of my program where the user can upload a csv file, which my program goes through and uses as input. I have one user complaining about a problem where his input is throwing up an error. The error is caused by there being an illegal character that is encoded wrong. The character is below:
�
Sometimes it appears as a diamond with a "?" in the middle, sometimes it appears as a double diamond with "?" in the middle, sometimes it appears as "\xa0", and sometimes it appears as "\xa0\xa0".
In my program if I do:
print str_with_weird_char
The string will show up in my terminal with the diamond "?" in place of the weird character. If I copy+paste that string into ipython, it will exit with this message:
In [1]: g="blah��blah"
WARNING:
********
You or a %run:ed script called sys.stdin.close() or sys.stdout.close()!
Exiting IPython!
notice how the diamond "?" is double now. For some reason copy+paste makes it double...
In the django traceback page, it looks like this:
UnicodeDecodeError at /chris/import.html
('ascii', 'blah \xa0 BLAH', 14, 15, 'ordinal not in range(128)')
The thing that messes me up is that I can't do anything with this string without it throwing an exception. I tried unicode(), I tried str(), I tried .encode(), I tried .encode("utf-8"), no matter what it throws up an error.
What can I do it get this thing to be a working string?
You can pass "ignore" to .encode/.decode to skip invalid characters, like "ILLEGAL".decode("utf8", "ignore"):
>>> "ILLEGA\xa0L".decode("utf8")
...
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 6: unexpected code byte
>>> "ILLEGA\xa0L".decode("utf8","ignore")
u'ILLEGAL'
>>>
Declare the coding at the top of your script; PEP 263 requires it to be the first or second line, so with a shebang it goes second. Like
#!/usr/bin/python
# coding=utf-8
This might be enough to solve your problem all by itself. If not, see str.encode('utf-8') and str.decode('utf-8').
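For instance, a small Python 2 sketch, assuming the stray byte in your data is latin-1 (where \xa0 is a no-break space):

raw = 'blah \xa0 BLAH'         # the byte string from the traceback
text = raw.decode('latin-1')   # now a unicode object
out = text.encode('utf-8')     # safe to write to a utf-8 file
print repr(out)                # 'blah \xc2\xa0 BLAH'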
you can also use:
python3 -c "import urllib.parse, sys; print(urllib.parse.quote_plus(sys.stdin.read()))"
taken from https://wiki.python.org/moin/Powerful%20Python%20One-Liners
** ps: the website shows the Python 2 form; the line above is adapted so that it works in python3
The only way to do it (at least in python2) is to use unicodedata.normalize:
unicodedata.normalize('NFKD', text).encode('utf-8', 'ignore')
decode('utf-8', 'ignore') would just raise an exception (in Python 2, calling .decode on a unicode string first tries to encode it as ascii, which fails).
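A minimal Python 2 sketch of how that plays out with the \xa0 from the question (NFKD maps U+00A0, the no-break space, to a plain space):

import unicodedata

text = u'blah\xa0blah'
cleaned = unicodedata.normalize('NFKD', text).encode('utf-8', 'ignore')
print repr(cleaned)  # 'blah blah'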
I want to I check whether a string is in ASCII or not.
I am aware of ord(), but when I try ord('é') I get TypeError: ord() expected a character, but string of length 2 found. I understood this is caused by the way I built Python (as explained in ord()'s documentation).
Is there another way to check?
I think you are not asking the right question--
A string in python has no property corresponding to 'ascii', utf-8, or any other encoding. The source of your string (whether you read it from a file, input from a keyboard, etc.) may have encoded a unicode string in ascii to produce your string, but that's where you need to go for an answer.
Perhaps the question you can ask is: "Is this string the result of encoding a unicode string in ascii?" -- This you can answer
by trying:
try:
    mystring.decode('ascii')
except UnicodeDecodeError:
    print "it was not an ascii-encoded unicode string"
else:
    print "It may have been an ascii-encoded unicode string"
def is_ascii(s):
    return all(ord(c) < 128 for c in s)
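For example:

>>> is_ascii('Python')
True
>>> is_ascii(u'\xe9')  # é
False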
In Python 3, we can encode the string as UTF-8, then check whether the length stays the same. If so, then the original string is ASCII.
def isascii(s):
    """Check if the characters in string s are in ASCII, U+0-U+7F."""
    return len(s) == len(s.encode())
To check, pass the test string:
>>> isascii("♥O◘♦♥O◘♦")
False
>>> isascii("Python")
True
New in Python 3.7 (bpo32677)
No more tiresome/inefficient ASCII checks on strings: the new built-in str/bytes/bytearray method .isascii() checks whether the string is ASCII.
print("is this ascii?".isascii())
# True
Vincent Marchetti has the right idea, but str.decode is gone in Python 3. In Python 3 you can make the same test with str.encode:
try:
    mystring.encode('ascii')
except UnicodeEncodeError:
    pass  # string is not ascii
else:
    pass  # string is ascii
Note the exception you want to catch has also changed from UnicodeDecodeError to UnicodeEncodeError.
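Folded into a helper, that might look like this (a small Python 3 sketch):

def is_ascii(mystring: str) -> bool:
    try:
        mystring.encode('ascii')
    except UnicodeEncodeError:
        return False
    return True

print(is_ascii('Python'))  # True
print(is_ascii('é'))       # False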
Your question is incorrect; the error you see is not a result of how you built python, but of a confusion between byte strings and unicode strings.
Byte strings (e.g. "foo", or 'bar', in python syntax) are sequences of octets: numbers from 0-255. Unicode strings (e.g. u"foo" or u'bar') are sequences of unicode code points: numbers from 0-1114111. But you appear to be interested in the character é, which (in your terminal) is a multi-byte sequence that represents a single character.
Instead of ord(u'é'), try this:
>>> [ord(x) for x in u'é']
That tells you which sequence of code points "é" represents. It may give you [233], or it may give you [101, 769].
Instead of chr() to reverse this, there is unichr():
>>> unichr(233)
u'\xe9'
This character may actually be represented as either a single code point or multiple unicode "code points", which themselves represent either graphemes or characters. It's either "e with an acute accent" (i.e., code point 233), or "e" (code point 101) followed by "an acute accent on the previous character" (code point 769). So this exact same character may be presented as the Python data structure u'e\u0301' or u'\u00e9'.
Most of the time you shouldn't have to care about this, but it can become an issue if you are iterating over a unicode string, as iteration works by code point, not by decomposable character. In other words, len(u'e\u0301') == 2 and len(u'\u00e9') == 1. If this matters to you, you can convert between composed and decomposed forms by using unicodedata.normalize.
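A short Python 2 sketch of the two forms side by side:

import unicodedata

composed = u'\u00e9'     # é as a single code point
decomposed = u'e\u0301'  # 'e' plus a combining acute accent
print len(composed), len(decomposed)                         # 1 2
print unicodedata.normalize('NFC', decomposed) == composed   # True
print unicodedata.normalize('NFD', composed) == decomposed   # True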
The Unicode Glossary can be a helpful guide to understanding some of these issues, by pointing out how each specific term refers to a different part of the representation of text, which is far more complicated than many programmers realize.
Ran into something like this recently - for future reference
import chardet

encoding = chardet.detect(string)
if encoding['encoding'] == 'ascii':
    print 'string is in ascii'
which you could use with:
string_ascii = string.decode(encoding['encoding']).encode('ascii')
How about doing this?
import string

def isAscii(s):
    # note: string.ascii_letters contains only a-z and A-Z, so this also
    # rejects digits, punctuation and whitespace, not just non-ASCII text
    for c in s:
        if c not in string.ascii_letters:
            return False
    return True
I found this question while trying determine how to use/encode/decode a string whose encoding I wasn't sure of (and how to escape/convert special characters in that string).
My first step should have been to check the type of the string: I didn't realize I could get good data about its formatting from type(s). This answer was very helpful and got to the real root of my issues.
If you're getting a rude and persistent
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 263: ordinal not in range(128)
particularly when you're ENCODING, make sure you're not trying to unicode() a string that already IS unicode- for some terrible reason, you get ascii codec errors. (See also the Python Kitchen recipe, and the Python docs tutorials for better understanding of how terrible this can be.)
Eventually I determined that what I wanted to do was this:
escaped_string = unicode(original_string.encode('ascii','xmlcharrefreplace'))
Also helpful in debugging was setting the default coding in my file to utf-8 (put this at the beginning of your python file):
# -*- coding: utf-8 -*-
That allows you to test special characters ('àéç') without having to use their unicode escapes (u'\xe0\xe9\xe7').
>>> specials='àéç'
>>> specials.decode('latin-1').encode('ascii','xmlcharrefreplace')
'&#224;&#233;&#231;'
To improve on Alexander's solution: from Python 2.6 (and in Python 3.x) you can use the helper module curses.ascii and its curses.ascii.isascii() function, among various others: https://docs.python.org/2.6/library/curses.ascii.html
from curses import ascii

def isascii(s):
    return all(ascii.isascii(c) for c in s)
You could use the third-party regex library, which accepts the POSIX-standard [[:ascii:]] character-class definition.
A string (str-type) in Python is a series of bytes. There is no way of telling, just from looking at the string, whether this series of bytes represents an ascii string, a string in an 8-bit charset like ISO-8859-1, or a string encoded with UTF-8 or UTF-16 or whatever.
However if you know the encoding used, then you can decode the str into a unicode string and then use a regular expression (or a loop) to check if it contains characters outside of the range you are concerned about.
Like @RogerDahl's answer, but it's more efficient to short-circuit by negating the character class and using search instead of findall or match.
>>> import re
>>> re.search('[^\x00-\x7F]', 'Did you catch that \x00?') is not None
False
>>> re.search('[^\x00-\x7F]', 'Did you catch that \xFF?') is not None
True
I imagine a regular expression is well-optimized for this.
import re

def is_ascii(s):
    return bool(re.match(r'[\x00-\x7F]+$', s))
To include an empty string as ASCII, change the + to *.
To prevent your code from crashing, you may want to use a try-except to catch TypeError:
>>> ord("¶")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ord() expected a character, but string of length 2 found
For example
def is_ascii(s):
    try:
        return all(ord(c) < 128 for c in s)
    except TypeError:
        return False
I use the following to determine if the string is ascii or unicode:
>>> print 'test string'.__class__.__name__
str
>>> print u'test string'.__class__.__name__
unicode
>>>
Then just use a conditional block to define the function:
def is_ascii(input):
    if input.__class__.__name__ == "str":
        return True
    return False