I created a file containing a dictionary with data written in Spanish (e.g. Damián):
fileNameX.write(json.dumps(dictionaryX, indent=4))
The data come from some FQL fetching operations, e.g.:
select name from user where uid in XXX
When I open the file, I find that, for instance, "Damián" looks like "Dami\u00e1n".
I've tried some options:
ensure_ascii=False:
fileNameX.write(json.dumps(dictionaryX, indent=4, ensure_ascii=False))
But I get an error (UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position XXX: ordinal not in range(128)).
encode(encoding='latin-1'):
dictionaryX.append({
    'name': unicodeVar.encode(encoding='latin-1'),
    ...
})
But I get another error (UnicodeDecodeError: 'utf8' codec can't decode byte 0xe1 in position XXX: invalid continuation byte)
To sum up, I've tried several possibilities, but have less than a clue. I'm lost. Please, I need help. Thanks!
You have many options, and have stumbled upon something rather complicated that depends on your Python version and which you absolutely must understand fully in order to write correct code. Generally the approach taken in 3.x is stricter and a bit harder to work with, but it is much less likely that you will make a mistake or get yourself into a complicated situation. (Based on the exact symptoms you report, you seem to be using 2.x.)
json.dumps has different behaviour in 2.x and 3.x. In 2.x, it produces a str, which is a byte-string (unknown encoding). In 3.x, it still produces a str, but now str in 3.x is a proper Unicode string.
JSON is inherently a Unicode-supporting format, but it expects files to be in UTF-8 encoding. However, please understand that JSON supports \u style escapes in strings. When you read that data back in, you will get the original text back. The reading code produces unicode objects (no matter whether you use 2.x or 3.x) when it reads strings out of the JSON.
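For example, here is a minimal Python 2 sketch of that round trip (the JSON text is hand-written just for illustration):
>>> import json
>>> json.loads('{"name": "Dami\\u00e1n"}')
{u'name': u'Dami\xe1n'}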
When I open the file, I find that, for instance, "Damián" looks like "Dami\u00e1n"
á cannot be represented in ASCII. It gets encoded as \u00e1 by default, to avoid the other problems you had. This happens even in 3.x.
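For instance, a quick sketch of the default behaviour (a Python 2 session; 3.x escapes the same way):
>>> import json
>>> json.dumps({'name': u'Dami\xe1n'})
'{"name": "Dami\\u00e1n"}'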
ensure_ascii=False
This disables the previous encoding. In 2.x, it means you get a unicode object instead - a real Unicode object, preserving the original á character. In 3.x, it means that the character is not explicitly translated. But either way, ensure_ascii=False means that json.dumps will give you a Unicode string.
Unicode strings must be encoded to be written to a file. There is no such thing as "unicode data"; Unicode is an abstraction. In 2.x, this encoding is implicitly 'ascii' when you feed a Unicode object to file.write; it was expecting a str. To get around this, you can use the codecs module, or explicitly encode as 'utf-8' before writing. In 3.x, the encoding is set with the encoding keyword argument when you open the file (the default is again probably not what you want).
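A minimal sketch of both routes, reusing the question's names (the Python 3 file name 'data.json' is just a placeholder):
# Python 2: encode the Unicode result explicitly before writing
fileNameX.write(json.dumps(dictionaryX, indent=4, ensure_ascii=False).encode('utf-8'))
# Python 3: pick the encoding when opening the file
with open('data.json', 'w', encoding='utf-8') as f:
    f.write(json.dumps(dictionaryX, indent=4, ensure_ascii=False))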
encode(encoding='latin-1')
Here, you are encoding before producing the dictionary, so that you have a str object in your data. Now a problem occurs because when there are str objects in your data, the JSON encoder assumes, by default, that they represent Unicode strings in UTF-8 encoding. This can be changed, in 2.x, using the encoding keyword argument to json.dumps. (In 3.x, the encoder will simply refuse to serialize bytes objects, i.e. non-Unicode strings!)
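For completeness, a sketch of that Python 2 encoding keyword, assuming the dictionary really does hold latin-1-encoded byte strings:
>>> import json
>>> json.dumps({'name': 'Dami\xe1n'}, encoding='latin-1')
'{"name": "Dami\\u00e1n"}'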
However, if your goal is simply to get the data into the file directly, then json.dumps is the wrong tool for you. Have you wondered what that s in the name is for? It stands for "string"; this is the special case. The ordinary case, in fact, is writing directly to a file! (Instead of giving you a string and expecting you to write it yourself.) Which is what json.dump (no 's') does. Again, the JSON standard expects UTF-8 encoding, and again 2.x has an encoding keyword parameter that defaults to UTF-8 (you should leave this alone).
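Putting that together, a sketch for Python 2 (the 'output.json' name is just a placeholder):
import codecs
import json
with codecs.open('output.json', 'w', encoding='utf-8') as f:
    json.dump(dictionaryX, f, indent=4, ensure_ascii=False)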
Use codecs.open() to open fileNameX with an explicit encoding, e.g. encoding='utf-8', instead of the built-in open().
Also, use json.dump() instead of json.dumps().
Since the string has a \u escape inside it, it's a Unicode string, and the string is actually correct! Your problem lies only in displaying it. If you print the string, Python will encode it to the proper encoding for your environment.
For example, this is what I get inside IDLE on Windows:
>>> print u'Dami\u00e1n'
Damián
Related
In my program I compare a string with a unicode string, and in most cases it works fine (I get True, as expected). For example,
'Home' == u'Home'
returns True.
However, the following two objects are considered to be different:
te®
te┬«
The first one is a unicode string and the second one is a byte string (str).
These objects are supposed to be the same. Is there a way to transform one of the object such that the comparison returns true?
By the way, the comparison produces the following warning:
UnicodeWarning: Unicode equal comparison failed to convert both
arguments to Unicode - interpreting them as being unequal
Yes, comparisons will use ASCII as the default codec. Use the actual codec to explicitly decode the bytestring when comparing.
You'll have to figure out the correct codec first though, which is context dependent. If your data came from a URL for example, the server may have given you the correct codec in the content-type header. If it comes from an XML document, use an XML parser that'll heed the encoding information that is part of the XML document, etc.
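For example, if the byte string turns out to be UTF-8-encoded, decoding it first makes the comparison succeed (a Python 2 sketch):
>>> 'te\xc2\xae'.decode('utf8') == u'te®'
True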
In your case, you appear to have UTF-8 data, if you interpret the second string as a mojibake of UTF-8 bytes decoded as code page 437 or code page 850:
>>> print u'te®'.encode('utf8').decode('cp437')
te┬«
It may be a different codepage still, but you shouldn't really print bytestrings to your Windows console to determine their contents anyway. Always use the repr() function when debugging:
>>> print repr(u'te®'.encode('utf8'))
'te\xc2\xae'
This shows you Python literal syntax to re-create the value, with any non-printable non-ASCII bytes represented by escape sequences for easy copying.
When reading about codecs, encoding and decoding, I found that I should use the encode function on the string directly, and that worked fine. After that I read about what Unicode and ASCII are, in addition to the different UTF encodings.
But reading further, I found that most people seem to import the codecs module and use encode from that module. I don't see much of a difference between str.encode and codecs.encode. Does it matter which one I use? I'm just specifying the encoding I need in the encode call.
Also, when reading this thread python string encode / decode I looked at the link in the accepted answer, which points to a slide show that is supposed to "completely demystify unicode and utf", but on one of the slides he says that UTF is used to translate numbers to characters, which I can't see being correct.
From my understanding, based on http://www.rrn.dk/the-difference-between-utf-8-and-unicode (which was also quoted in another SO thread), UTF does not translate numbers to characters. It translates binary numbers to the numbers found in Unicode (or whichever other character set is being used). So UTF would be the translation of a binary number to a number, and Unicode would then translate that number to a character. Did he get it wrong while trying to completely demystify this?
The Python doc pages for these two functions are here:
https://docs.python.org/2/library/stdtypes.html#str.encode
https://docs.python.org/2/library/codecs.html#codecs.encode
str.encode() is called on a string object like this:
"this is a string".encode()
codecs.encode() is called with a string as an argument like this:
codecs.encode("this is a string")
They each take an optional encoding argument.
str.encode()'s default encoding is the current default, according to the doc page, but according to the Unicode HOWTO, that's "ascii"
codecs.encode()'s default encoding is "ascii"
Both functions take an errors argument that defaults to "strict".
It looks like they're pretty much the same except for the way they're called.
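As a quick sanity check (Python 3), both calls give the same bytes for an ordinary text encoding; the sample string is arbitrary:
>>> import codecs
>>> "café".encode("utf-8")
b'caf\xc3\xa9'
>>> codecs.encode("café", "utf-8")
b'caf\xc3\xa9'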
1. codecs.encode(obj, encoding='utf-8', errors='strict')
   encodes text to bytes, text to text, and bytes to bytes
2. str.encode(encoding="utf-8", errors="strict")
   encodes text to bytes
So, I think 2. ⊆ 1.
One difference is which codecs you can use. str.encode is fine for converting between text encodings, but try converting a string to base64 (Python 3):
>>> s = "hello"
>>> s.encode("base64")
LookupError: 'base64' is not a text encoding; use codecs.encode() to handle arbitrary codecs
but this will work:
>>> codecs.encode(s.encode(), "base64")
b'aGVsbG8=\n'
or this (base64.encodestring is the older, deprecated name for base64.encodebytes):
>>> base64.encodestring(s.encode())
b'aGVsbG8=\n'
Why in Python 3 would the following code
print(str(b"Hello"))
output b'Hello' instead of just Hello, as happens with regular text strings? It seems counter-intuitive that the explicit, should-be-easy way of creating a str object from the most closely related binary string type behaves like this.
In Python 3, bytes.__str__ is not defined, so bytes.__repr__ is used instead, when you use str() on the object. Note that print() also calls str() on objects passed in, so the call is entirely redundant here.
If you are expecting text, decode explicitly instead:
print(b'Hello'.decode('ascii'))
The str() type can handle bytes objects explicitly, but only if (again) you provide an explicit codec to decode the bytes with first:
print(str(b'Hello', 'ascii'))
The documentation is very explicit about this behaviour:
If neither encoding nor errors is given, str(object) returns object.__str__(), which is the “informal” or nicely printable string representation of object. For string objects, this is the string itself. If object does not have a __str__() method, then str() falls back to returning repr(object).
If at least one of encoding or errors is given, object should be a bytes-like object (e.g. bytes or bytearray). In this case, if object is a bytes (or bytearray) object, then str(bytes, encoding, errors) is equivalent to bytes.decode(encoding, errors).
and
Passing a bytes object to str() without the encoding or errors arguments falls under the first case of returning the informal string representation.
Emphasis mine.
Why do you want this to "work"? A bytes object is a bytes object, and its string representation in Python 3 is in that form. To convert its contents to a proper text string (a str in Python 3; in Python 2 that would be a unicode object), you have to decode it.
And for that you need to know the encoding.
Try the following instead:
print(b"Hello".decode("latin-1"))
Note the assumed "latin-1" text codec, which transparently maps the byte codes outside the ASCII range (128-255) to Unicode code points. It is closely related to cp1252, the codec Windows uses by default for Western European languages.
The "utf-8" codec can represent a much larger range of characters and is the preferred encoding for international text, but if your byte string is not properly composed of UTF-8 sequences you may get a UnicodeDecodeError in the process.
Please read http://www.joelonsoftware.com/articles/Unicode.html to properly understand what text is about.
First of all, sorry for my English...
Hey, I had this problem some weeks ago. It works as the people above said.
Here is a tip in case the exceptions from the decoding process do not matter to you. In that case you can use:
bytesText.decode(textEncoding, 'ignore')
Ex:
>>> b'text \xab text'.decode('utf-8', 'ignore')  # using UTF-8 is nice, as you might know!
'text  text'  # as you can see, the « (\xab) byte was ignored :D
I have a string S="Test" in Python. I want to encode the string into CP1256, ISO-8859-1, ISO-8859-2, ISO-8859-6, ISO-8859-15 and Windows-1252 formats. How can I encode the string into the mentioned formats?
I don't know why Slava Bacherikov deleted his answer, but it was the right answer, so I'll repeat it with more detail.
str.encode is exactly what you want:
Return an encoded version of the string as a bytes object. Default encoding is 'utf-8'. errors may be given to set a different error handling scheme. The default for errors is 'strict', meaning that encoding errors raise a UnicodeError. Other possible values are 'ignore', 'replace', 'xmlcharrefreplace', 'backslashreplace' and any other name registered via codecs.register_error(), see section Codec Base Classes. For a list of possible encodings, see section Standard Encodings.
If you follow that link to Standard Encodings, you'll see a nice table that shows you the names to use for each of these (you can use either the main codec name, or any of the aliases).
So:
encoded_bytes = [S.encode(codec) for codec in
('cp1256', 'iso-8859-1', 'iso-8859-2', 'iso-8859-6',
'iso-8859-15', 'windows-1252')]
While you could use codecs.encode as the other answers suggest, there's really no good reason to do so, and one good reason not to: str.encode enforces the fact that you're calling it on a str object, and using a codec that translates str to bytes; you'll get an exception if you accidentally use it on an already-encoded bytes or a list or something.
All of the above is assuming you're using Python 3. If you're using Python 2, a str is already encoded. So, if you can start with a unicode object, like u"Test" instead of "Test", do that; if not, you will want to decode it first. Unfortunately, Python 2 won't enforce that; if you call str.encode it will actually decode it with sys.getdefaultencoding, which will usually be ASCII, which will lead to silly errors.
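A short Python 2 sketch of that pitfall; the second literal is deliberately a UTF-8-encoded byte string:
>>> u'Dami\xe1n'.encode('iso-8859-1')       # unicode object: encodes cleanly
'Dami\xe1n'
>>> 'Dami\xc3\xa1n'.encode('iso-8859-1')    # byte str: implicit ASCII decode happens first
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 4: ordinal not in range(128)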
It's what the codecs module is for:
codecs.encode(S,'CP1256')
Just use the codecs module
import codecs
codecs.encode("hello", "iso-8859-6")
If you first want to check whether Python knows about a certain encoding, just use:
format_name = "iso-8859-6"
try:
    codecs.lookup(format_name)
except LookupError:
    print("Encoding {} can't be found".format(format_name))
The code below will cause a UnicodeDecodeError:
#-*- coding:utf-8 -*-
s="中文"
u=u"123"
u=s+u
I know it's because the Python interpreter is using ASCII to decode s.
Why doesn't the Python interpreter use the file's encoding (utf-8) for decoding?
Implicit decoding cannot know what source encoding was used. That information is not stored with strings.
All that Python has after importing is a byte string with characters representing bytes in the range 0-255. You could have imported that string from another module, or read it from a file object, etc. The fact that the parser knew what encoding was used for those bytes doesn't even matter for plain byte strings.
As such, it is always better to decode bytes explicitly, rather than rely on the implicit decoding. Either use a Unicode literal for s as well, or explicitly decode using str.decode():
u = s.decode('utf8') + u
The types of the two strings are different: the first is a normal (byte) string, the second is a unicode string, hence the error.
So, instead of doing s="中文", do as following to get unicode strings for both:
s=u"中文"
u=u"123"
u=s+u
The code works perfectly fine on Python 3.
However, in Python 2, if you do not add a u before a string literal, you are constructing a string of bytes. When one wants to combine a string of bytes and a string of characters, one either has to decode the string of bytes, or encode the string of characters. Python 2.x opted for the former. In order to prevent accidents (for example, someone appending binary data to a user input and thus generating garbage), the Python developers chose ascii as the encoding for that conversion.
You can add a line
from __future__ import unicode_literals
after the #coding declaration so that literals without u or b prefixes are always character and not byte literals.
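Putting it together, a sketch of the fixed module under that approach (Python 2):
#-*- coding:utf-8 -*-
from __future__ import unicode_literals

s = "中文"   # a unicode literal now, thanks to the __future__ import
u = u"123"
u = s + u    # no implicit ASCII decode, so no UnicodeDecodeError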