I am trying to print a clean chess board in python 2.7 that uses unique characters such as ♟.
I have tried simply replacing a value in a string ("g".replace("g", "♟")), but the result shows up as '\xe2\x99\x9f'. If I put the character into an online converter, it returns "226 153 159" (the same three bytes in decimal).
♟ is a Unicode character. In Python 2, str holds ASCII strings or binary data, while unicode holds Unicode strings. When you do "♟" you get a byte-encoded version of the Unicode string. Which encoding that is depends on the editor/console you used to type it in. It's common (and I think preferred) to use UTF-8 to encode source files, but you may find that Windows editors favor little-endian UTF-16.
Either way, you want to work with unicode strings as much as possible. You can do some mix-and-matching between str and unicode, but make sure anything outside the ASCII range is unicode from the beginning.
Python can take an encoding hint at the front of the file. So, assuming you use a UTF-8 editor, you can do
#!/usr/bin/env python
# -*- coding: utf-8 -*-
chess_piece = u"♟"
print u"g".replace(u"g", chess_piece)
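To see what is actually going on, here is a minimal sketch (assuming the file is saved as UTF-8 and the terminal also uses UTF-8; pawn_bytes and pawn are illustrative names):
# -*- coding: utf-8 -*-
pawn_bytes = "♟"                    # str: the three UTF-8 bytes '\xe2\x99\x9f'
pawn = pawn_bytes.decode("utf-8")   # unicode: u'\u265f' (BLACK CHESS PAWN)
assert pawn == u"\u265f"
print pawn.encode("utf-8")          # encode back to bytes for a UTF-8 terminal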
Related
I want to run a Python source file that contains unicode (utf-8) characters in the source. I am aware of the fact that this can be done by adding the comment # -*- coding: utf-8 -*- in the beginning. However, I wish to do it without using this method.
One way I could think of was writing the unicode strings in escaped form. For example,
Edit: Updated Source. Added Unicode comments.
# Printing naïve and 男孩
def fxn():
    print 'naïve'
    print '男孩'

fxn()
becomes
# Printing na\xc3\xafve and \xe7\x94\xb7\xe5\xad\xa9
def fxn():
    print 'na\xc3\xafve'
    print '\xe7\x94\xb7\xe5\xad\xa9'

fxn()
I have two questions regarding the above method.
How do I convert the first code snippet, using Python, into its equivalent that follows it? That is, only unicode sequences should be written in escaped form.
Is the method foolproof considering only unicode (utf-8) characters are used? Is there something that can go wrong?
Your idea is generally sound, but it will break in Python 3 and will cause headaches when you manipulate and write your strings in Python 2.
It's a good idea to use Unicode strings, not regular strings when dealing with non-ASCII.
Instead, you can write your characters as Unicode escape sequences (not UTF-8 byte escapes) inside Unicode strings:
u'na\xefve'
u'\u7537\u5b69'
note the u prefix
Your code is now encoding agnostic.
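For example, every byte in this file is plain ASCII, so it prints the right characters regardless of which (ASCII-compatible) encoding the editor saved it with, assuming your console can display them:
#!/usr/bin/env python2
# every byte in this file is ASCII, so no coding declaration is needed
print u'na\xefve'        # \xef is U+00EF LATIN SMALL LETTER I WITH DIAERESIS
print u'\u7537\u5b69'    # U+7537 and U+5B69, the two CJK characters above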
If you only use byte strings and save your source file encoded as UTF-8, your byte strings will contain UTF-8-encoded data. Be aware, though, that modern Python 2 refuses to parse a source file containing non-ASCII bytes without a coding declaration, so you can't really drop it (and it's REALLY strange that you don't want to use it... it's just a comment). The coding statement lets Python know the encoding of the source file so it can decode Unicode string literals (u'xxxxx') correctly. If you have no Unicode literals, it doesn't matter which encoding you declare.
For your questions, no need to convert to escape codes. If you encode the file as UTF-8, you can use the more readable characters in your byte strings.
FYI, that won't work in Python 3, where byte strings cannot contain non-ASCII characters.
That said, here's some code that will convert your example as requested. It reads the source assuming it is encoded in UTF-8, then uses a regular expression to locate all non-ASCII characters and passes each one through a conversion function to generate the replacement. This is safe because, in Python 2, non-ASCII characters can only appear in comments and string literals. Python 3, however, allows non-ASCII in identifiers, so this wouldn't work there.
import io
import re

def escape(m):
    # encode the matched character as UTF-8, then spell each byte as \xhh
    char = m.group(0).encode('utf8')
    return ''.join(r'\x{:02x}'.format(ord(b)) for b in char)

# read the source assuming it is encoded in UTF-8
with io.open('sample.py', encoding='utf8') as f:
    content = f.read()

# replace every non-ASCII character with its escaped byte sequence
new_content = re.sub(r'[^\x00-\x7f]', escape, content)

with io.open('sample_new.py', 'w', encoding='utf8') as f:
    f.write(new_content)
Result:
# Printing na\xc3\xafve and \xe7\x94\xb7\xe5\xad\xa9
def fxn():
    print 'na\xc3\xafve'
    print '\xe7\x94\xb7\xe5\xad\xa9'

fxn()
question 1:
try to use:
print u'naïve'
print u'长者'
question 2:
If you type the sentences with your keyboard and a Chinese input method, everything should be OK. But if you copy and paste sentences from web pages, you should consider other encodings such as GBK, GB2312, and GB18030.
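A round-trip sketch of the GBK case (assuming this snippet itself is saved as UTF-8; gbk_bytes is an illustrative name):
# -*- coding: utf-8 -*-
u = u'长者'
gbk_bytes = u.encode('gbk')             # the bytes a GBK-encoded page would deliver
assert gbk_bytes != u.encode('utf-8')   # same text, different byte sequences
assert gbk_bytes.decode('gbk') == u     # decoding with the right codec restores it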
This snippet of Python 3 should convert your program correctly to work in Python 2.
def convertchar(char):  # converts an individual character
    if 32 <= ord(char) <= 126 or char in "\n\t":
        return char                        # printable ASCII, newline or tab: keep as-is
    h = hex(ord(char))[2:]
    if ord(char) < 256:                    # unprintable ASCII: \xhh
        return "\\x" + "0" * (2 - len(h)) + h
    elif ord(char) < 65536:                # BMP codepoint: \uhhhh
        return "\\u" + "0" * (4 - len(h)) + h
    else:                                  # astral codepoint: \Uhhhhhhhh
        return "\\U" + "0" * (8 - len(h)) + h

def converttext(text):  # converts a chunk of text
    return "".join(convertchar(char) for char in text)

def convertfile(oldfilename, newfilename):  # converts a file
    with open(oldfilename, "r", encoding="utf-8") as oldfile:   # source assumed UTF-8
        oldtext = oldfile.read()
    with open(newfilename, "w", encoding="utf-8") as newfile:
        newfile.write(converttext(oldtext))

convertfile("FILE_TO_BE_CONVERTED", "FILE_TO_STORE_OUTPUT")
First, a simple remark: as you are using byte strings in a Python 2 script, the # -*- coding: utf-8 -*- declaration has simply no effect. It only serves to convert source byte sequences into unicode strings, as when you write:
# -*- coding: utf-8 -*-
...
utxt = u'naïve'  # the source bytes are 'na\xc3\xafve'
                 # but utxt must become the unicode string u'na\xefve'
At most, it might be interpreted by clever editors as a hint to automatically save the file with a UTF-8 charset.
Now for the actual question. Unfortunately, what you are asking for is not really trivial: identifying what is in a comment and what is in a string in a source file requires a Python parser... and AFAIK, if you use the parser from the ast module, you will lose your comments, except for docstrings.
But in Python 2, non-ASCII characters are only allowed in comments and string literals! So if the source file is a correct Python 2 script containing no unicode string literals(*), you can safely transform any non-ASCII character into its escaped Python representation.
A possible Python function that reads a raw source file from one file object and writes the escaped result to another file object could be:
def src_encode(infile, outfile):
    while True:
        c = infile.read(1)
        if len(c) < 1:
            break                           # stop on end of file
        if ord(c) > 127:                    # transform high characters
            c = "\\x{:02x}".format(ord(c))
        outfile.write(c)
A nice property is that it works whatever encoding you use, provided the source file is acceptable to a Python interpreter and does not contain high characters in unicode literals(*); the converted file will behave exactly the same as the original one...
(*) A problem will arise if you use unicode literals in an encoding other than Latin-1, because the above function behaves as if the file carried the declaration # -*- coding: Latin-1 -*-: u'é' will be translated correctly as u'\xe9' if the original encoding is Latin-1, but as u'\xc3\xa9' (not what is expected...) if the original encoding is UTF-8, and I cannot imagine a way to process both byte-string literals and unicode literals correctly without fully parsing the source file...
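To make the caveat concrete, here is what the function produces for a UTF-8 source containing the single character é (the bytes C3 A9) in each kind of literal:
b = '\xc3\xa9'     # byte literal: still the same UTF-8 bytes -- behavior unchanged
u = u'\xc3\xa9'    # unicode literal: now the TWO characters u'Ã©', not u'\xe9'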
This topic is already on StackOverflow but I didn't find any satisfying solution:
I have some strings in Unicode coming from a server, and I have some hardcoded strings in the code which I'd like to match against them. I do understand why I can't just use ==, but I don't succeed in converting them properly (I don't care whether I have to go str -> unicode or unicode -> str).
I tried encode and decode, but neither gave the result I wanted.
Here is what I receive...
fromServer = {unicode} u'Führerschein nötig'
fromCode = {str} 'Führerschein nötig'
(as you can see, it is german!)
How can I make them equal in Python 2?
First, make sure you declare the encoding of your Python source file at the top of the file, e.g. if your file is encoded as latin-1:
# -*- coding: latin-1 -*-
And second, always store text as Unicode strings:
fromCode = u'Führerschein nötig'
If you get bytes from somewhere, convert them to Unicode with str.decode before working with the text. For text files, specify the encoding when opening the file, e.g.:
import codecs

# use codecs.open to open a text file with a known encoding
f = codecs.open('unicode.rst', encoding='utf-8')
Code which compares byte strings with Unicode strings will often fail at random, depending on system settings, or whatever encoding happens to be used for a text file. Don't rely on it, always make sure you compare either two unicode strings or two byte strings.
Python 3 changed this behaviour: it will not try to convert strings at all. 'a' and b'a' are considered objects of different types, and comparing them always returns False.
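A minimal sketch of the failure and the fix (the variable names follow the question; the byte string is assumed to be UTF-8-encoded):
fromServer = u'F\xfchrerschein n\xf6tig'       # unicode string
fromCode = fromServer.encode('utf-8')          # simulates the hardcoded byte string
print fromServer == fromCode                   # False, plus a UnicodeWarning
print fromServer == fromCode.decode('utf-8')   # True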
Tested on 2.7. For German umlauts, latin-1 is used here.
if 'Führerschein nötig'.decode('latin-1') == u'Führerschein nötig':
    print('yes....')

Output:
yes....
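Note that this only works if the source file really is saved as latin-1, so that the byte literal actually contains latin-1 bytes. The same comparison with explicit escapes:
# -*- coding: latin-1 -*-
b = 'F\xfchrerschein n\xf6tig'          # the latin-1 bytes for ü and ö
assert b.decode('latin-1') == u'F\xfchrerschein n\xf6tig'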
So here I am, I read about encoding all day, now I need some clarification.
First off I'm using eclipse mars with pydev.
Unicode is a character set plus code points: basically, a table of symbols associated with numerical values.
The way those values get stored at the binary level is defined by the encoding, say UTF-8.
1 : shebang
What is the shebang for? When I put # -*- coding: utf-8 -*-, does it do something? Or does it just indicate that my file is encoded in UTF-8 (but since it's just an indication, it could be a lie :o)?
2 : Eclipse file encoding
After I wrote my shebang and saved, I went into the properties of the file, and it said encoding: ISO-8859-1, so my guess is that the shebang does nothing besides indicate which encoding my file is in.
Do I need to manually set every file to UTF-8, or is there a way to teach Eclipse to read the shebang and act accordingly?
3 : Why does the shebang only specify the encoding?
My shebang says utf-8, OK, right, so what? It does not tell me which character set is used.
Since UTF-8 is just an encoding, I could use UTF-8 with any character set, no?
I could encode ASCII in UTF-8 if I wanted, since an encoding is just a way to convert and store/read code points.
What if my character set, encoded in utf-8, does not have the same code points as Unicode? (Is this possible?)
4 : maybe a solution?
I often read that UTF-8 is an implementation of Unicode. Does that mean that each time you read encoding = UTF-8 you can be 100%, and I mean 100%, sure that the character set + code points are Unicode?
I'm lost
There are multiple misconceptions in your question.
Unicode is a standard that is commonly used for working with text. It is not just "character set + code points": e.g., the Unicode standard also defines how to find word boundaries and how to compare Unicode strings.
# -*- coding: utf-8 -*- is an encoding declaration. It is not a shebang. A shebang (as its name suggests) starts with #!, e.g., #!/usr/bin/env python.
You might need the encoding declaration if there are non-ASCII literal characters in your Python source code; e.g., you don't need an encoding declaration if you write:
#!/usr/bin/env python2
print u"\N{SNOWMAN}"
But you need it if you use literal non-ascii characters:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
print u"☃"
Both scripts produce the same output if the second script is saved using utf-8 encoding. The encoding declaration says how to interpret bytes that constitute the Python source code to get the program text.
"is there a way to teach eclipse to read the shebang encoding declaration and act accordingly." is a good separate question. If IDE has explicit Python support then it should do it automatically.
My encoding declaration (not shebang) says utf-8, OK, so what? It does not tell me which character set is used.
"character encoding", codepage, and charset may be used interchangeably in many contexts. See What's the difference between encoding and charset? The distinctions are irrelevant for the task of converting from bytes to text and back in Python:
unicode_text = bytestring.decode(character_encoding)
bytestring = unicode_text.encode(character_encoding)
A bytestring is an immutable sequence of bytes in Python (roughly speaking numbers in 0..255 range) that is used to represent arbitrary binary data e.g., images, zip-archives, encrypted data, and text encoded using some character encoding. A Unicode string is an immutable sequence of Unicode codepoints (roughly speaking, numbers in 0..sys.maxunicode range) that is used to represent text in Python.
Some character encodings such as cp437 support only a few Unicode characters. Others such as utf-8 support the full range of Unicode codepoints.
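A quick round trip showing both points, reusing the snowman from the earlier example:
text = u'\N{SNOWMAN}'                  # u'\u2603'
data = text.encode('utf-8')            # the three bytes '\xe2\x98\x83'
assert data.decode('utf-8') == text
try:
    text.encode('cp437')               # cp437 cannot represent the snowman
except UnicodeEncodeError:
    print 'cp437 covers only a small part of Unicode'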
The right way to add the encoding declaration is:
# -*- coding: utf-8 -*-
It tells Python that the current script is encoded in UTF-8; it has nothing to do with the user.
OK, I think I found an answer to all those questions.
1/
Thanks to J.Dev: the encoding declaration (which I wrongly called a shebang) only tells the Python interpreter what the file is encoded in, but YOU have to actually encode the file in what you declare.
2/
Apparently I have to do it manually
3/
Because an encoding is associated with a charset: if you say encoding = utf-8, then the charset is always Unicode.
Some old 1-byte charsets don't have a separate encoding; since every character is stored in 1 byte, the natural binary value is the encoding.
So when you say ASCII, for instance, you mean both the charset and the encoding.
But this leaves me wondering: are there other charsets out there with multiple encoding implementations (the way Unicode can be encoded as UTF-8/16/32)?
The code below will cause a UnicodeDecodeError:
#-*- coding:utf-8 -*-
s="中文"
u=u"123"
u=s+u
I know it's because the Python interpreter uses ASCII to decode s.
Why doesn't the Python interpreter use the file's encoding (utf-8) for decoding?
Implicit decoding cannot know what source encoding was used. That information is not stored with strings.
All that Python has after importing is a byte string with characters representing bytes in the range 0-255. You could have imported that string from another module, or read it from a file object, etc. The fact that the parser knew what encoding was used for those bytes doesn't even matter for plain byte strings.
As such, it is always better to decode bytes explicitly rather than rely on implicit decoding. Either use a Unicode literal for s as well, or decode explicitly using str.decode():
u = s.decode('utf8') + u
The types of the two strings are different: the first is a normal (byte) string, the second is a unicode string, hence the error.
So, instead of doing s="中文", do the following to get unicode strings for both:
s=u"中文"
u=u"123"
u=s+u
The code works perfectly fine on Python 3.
However, in Python 2, if you do not add a u before a string literal, you are constructing a string of bytes. When one wants to combine a string of bytes and a string of characters, one either has to decode the string of bytes, or encode the string of characters. Python 2.x opted for the former. In order to prevent accidents (for example, someone appending binary data to a user input and thus generating garbage), the Python developers chose ascii as the encoding for that conversion.
You can add a line
from __future__ import unicode_literals
after the # coding declaration so that literals without a u or b prefix are always character literals, not byte literals.
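Putting both fixes together, a version of the snippet that runs cleanly (assuming the file is saved as UTF-8):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

s = "中文"      # a unicode literal now, thanks to the __future__ import
u = u"123"
print s + u     # prints 中文123 -- no UnicodeDecodeError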
Python file
# -*- coding: UTF-8 -*-
a = 'Köppler'
print a
print a.__class__.__name__
mydict = {}
mydict['name'] = a
print mydict
print mydict['name']
Output:
Köppler
str
{'name': 'K\xc3\xb6ppler'}
Köppler
It seems that the name remains the same, but only when printing the dictionary do I get this strange escaped character string. What am I looking at, then? Is that the UTF-8 representation?
The reason for that behavior is that the __repr__ function in Python 2 escapes non-ASCII characters; Python 3 no longer does this.
Yes, that's the UTF-8 representation of ö (U+00F6 LATIN SMALL LETTER O WITH DIAERESIS). It consists of a 0xC3 octet followed by a 0xB6 octet. UTF-8 is a very elegant encoding, I think, and worth reading up on; the history of its design (sketched on a placemat in a diner) has been described by Rob Pike.
There are two methods in Python for displaying objects: str() and repr(). str() is what print uses internally; however, a dict's str() apparently uses repr() for its keys and values.
And as has been mentioned, repr() escapes non-ASCII characters.
It seems you are using Python 2.x, where you have to specify that an object is a unicode string rather than a plain byte string. You declared the source as utf-8, so you actually typed two bytes for your ö, and since a is a regular byte string, you see the two escaped bytes in the dict's repr().
Try specifying the string as unicode: a = u'Köppler'. You may need to encode it before printing, depending on your console encoding: print a.encode('utf-8')
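With everything as Unicode from the start, the dictionary still shows the escaped repr(), but the text itself is intact:
# -*- coding: UTF-8 -*-
a = u'Köppler'
mydict = {'name': a}
print mydict             # {'name': u'K\xf6ppler'} -- repr() escapes non-ASCII
print mydict['name']     # Köppler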