I'm using Python 3.5.
I have a couple of byte strings representing text that is encoded in various codecs, e.g. b'mybytesstring'. Some are UTF-8 encoded, others are Latin-1, and so on. What I want to do, in the following order, is:
transform the byte string into an ASCII-like string.
transform the ASCII-like string back into a byte string.
decode the byte string with the correct codec.
The problem is that I have to move the byte string into something that does not accept bytes objects, so I'm looking for a solution that lets me go bytes -> ascii -> bytes safely.
x = x.decode().encode('ascii', errors='ignore')
You use the encode and decode methods for this, and supply the desired encoding to them. It's not clear to me whether you know the encoding beforehand. If you don't know it, you're in trouble: you may have to guess the encoding somehow, risking garbage output.
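A minimal sketch of the decode-with-a-known-codec approach (the sample strings and codec names below are my own illustration, not from the question):
raw_utf8 = 'ëýđþé'.encode('utf-8')
raw_latin1 = 'ëýé'.encode('latin-1')
text1 = raw_utf8.decode('utf-8')      # bytes -> str with the matching codec
text2 = raw_latin1.decode('latin-1')  # using the wrong codec here gives mojibake or an error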
OK, I found a solution which is much easier than I thought:
mybytes = 'ëýđþé'.encode()
str_mybytes = str(mybytes)
again_mybytes = eval(str_mybytes)
decoded = again_mybytes.decode('utf8')
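The same round trip can be done without eval (a safer variant I am suggesting here, not part of the original answer): ast.literal_eval parses the printable repr back into a bytes object without executing arbitrary code.
import ast
mybytes = 'ëýđþé'.encode()
str_mybytes = str(mybytes)                     # e.g. "b'\\xc3\\xab...'" -- plain ASCII text
again_mybytes = ast.literal_eval(str_mybytes)  # parse the literal back into bytes
decoded = again_mybytes.decode('utf8')         # 'ëýđþé'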
I need to convert a bytearray which contains non-encoded raw Unicode data to a Unicode string, e.g. the code point \u2167 represents the Roman numeral eight:
print(u'\u2167')
Ⅷ
Having this information stored in a bytearray, I need to find a way to convert it back to Unicode. Decoding from e.g. 'utf8' obviously does not work:
b = bytearray([0x21,0x67])
print(b.decode('utf8'))
!g
Any ideas?
EDIT
@Luke's comment got me on the right track. Apparently the original data (not the simplified example I am showing here) is encoded as UTF-16LE. The data is obtained from a wxPython TextDataObject. wxPython internally usually uses Unicode; that is what made me think that I am dealing with Unicode data.
... a bytearray which contains non-encoded raw unicode data
If it is in a bytearray, it is by definition encoded. The Python bytes or bytearray types can contain encoded Unicode data. The str type contains Unicode code points. You .decode a byte string to a Unicode string, and .encode a Unicode string into a byte string. The encoding used for your example is UTF-16BE:
>>> b = bytearray([0x21,0x67])
>>> b.decode('utf-16be')
'Ⅷ'
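Incidentally, for the UTF-16LE data the asker eventually identified in the edit, the same code point simply has its two bytes in the opposite order (a small sketch of mine, not from the answer):
>>> b_le = bytearray([0x67, 0x21])
>>> b_le.decode('utf-16le')
'Ⅷ'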
The line print(b.decode('utf8')) is not correct; the correct usage is:
print(b.decode("utf-8"))
I have difficulty converting these bytes to a string:
x = b'<strong>\xc5\xb7\xc3\xc0\xd0\xd4\xb8\xd0\xd0\xb1\xc1\xec\xb5\xa5\xbc\xe7\xb3\xa4\xd0\xe4\xb2\xbb\xb9\xe6\xd4\xf2\xc1\xac\xd2\xc2\xc8\xb9\xa3\xac\xb4\xf2\xd4\xec\xd1\xe7\xbb\xe1\xa1\xa2\xca\xb1\xc9\xd0\xb8\xd0\xca\xae\xd7\xe3\xa3\xac\xd5\xc3\xcf\xd4\xc5\xae\xd0\xd4\xf7\xc8\xc1\xa6\xa3\xac\xb4\xf3\xc1\xbf\xcf\xd6\xbb\xf5\xa3\xac\xbb\xb6\xd3\xad\xd0\xc2\xc0\xcf\xbf\xcd\xbb\xa7\xc4\xc3\xd1\xf9\xb2\xc9\xb9\xba\xa3\xa1</strong>'
If I decode via unicode-escape I get weird characters like:
'<strong>Å·ÃÀÐÔ¸ÐбÁìµ¥¼ç³¤Ðä²»¹æÔòÁ¬ÒÂȹ£¬´òÔìÑç»á¡¢Ê±ÉиÐÊ®×㣬ÕÃÏÔÅ®ÐÔ÷ÈÁ¦£¬´óÁ¿ÏÖ»õ£¬»¶Ó\xadÐÂÀÏ¿Í»§ÄÃÑù²É¹º£¡</strong>'
instead of Chinese characters like 欧美性感斜领单肩长袖不规则连衣裙.
You seem to be using the wrong encoding. The right encoding seems to be 'GB2312'.
>>> x.decode('GB2312')
'<strong>欧美性感斜领单肩长袖不规则连衣裙... more symbols</strong>'
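To illustrate where the mojibake comes from (my own note, not part of the answer): a single-byte codec such as Latin-1 turns each GB2312 byte into one accented Latin character, so every two-byte Chinese character shows up as a pair of Latin letters. Taking the first four payload bytes from the question:
>>> b'\xc5\xb7\xc3\xc0'.decode('latin-1')
'Å·ÃÀ'
>>> b'\xc5\xb7\xc3\xc0'.decode('GB2312')
'欧美'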
I would like to include picture bytes in a JSON document, but I am struggling with an encoding issue:
import urllib
import json
data = urllib.urlopen('https://www.python.org/static/community_logos/python-logo-master-v3-TM-flattened.png').read()
json.dumps({'picture' : data})
UnicodeDecodeError: 'utf8' codec can't decode byte 0x89 in position 0: invalid start byte
I don't know how to deal with this issue since I am handling an image, so I am a bit confused about the encoding. I am using Python 2.7. Can anyone help me? :)
JSON expects to handle Unicode text. Binary image data is not text, so when the json.dumps() function tries to decode the byte string to Unicode using UTF-8 (the default), that decoding fails.
You'll have to wrap your binary data in a text-safe encoding first, such as Base-64:
json.dumps({'picture' : data.encode('base64')})
Of course, this then assumes that the receiver expects your data to be wrapped so.
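A quick sketch of the receiving side under that assumption (hypothetical receiver code; json_text stands for the output of the dumps() call above):
import json
payload = json.loads(json_text)
b64_text = payload['picture'].encode('ascii')  # JSON hands back unicode text
picture_bytes = b64_text.decode('base64')      # the raw PNG bytes again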
If your API endpoint has been designed so badly as to expect your image bytes to be passed in as text, then the alternative is to pretend that your bytes are really text; if you first decode them as Latin-1 you can map those bytes straight to Unicode codepoints:
json.dumps({'picture' : data.decode('latin-1')})
With the data already a unicode object the json library will then proceed to treat it as text. This does mean that it can replace non-ASCII codepoints with \uhhhh escapes.
The best solution that comes to my mind for this situation, space-wise, is base85 encoding, which represents four bytes as five characters. You could also map every byte to the corresponding character in the U+0000-U+00FF range and then dump it in the JSON.
Still, those methods could be overkill here, and base64, ease-wise, would be the winner.
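A minimal Python 3 sketch of both ideas (not runnable on the asker's Python 2.7; base64.b85encode only exists since Python 3.4, and the sample bytes are my own placeholder):
import base64
import json
data = b'\x89PNG\r\n\x1a\n'                         # first bytes of any PNG file
b85_text = base64.b85encode(data).decode('ascii')   # four bytes -> five characters
latin1_text = data.decode('latin-1')                # one byte -> one code point
print(json.dumps({'picture_b85': b85_text}))
print(json.dumps({'picture_latin1': latin1_text}))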
I have tried to understand encode and decode in Python by myself, but nothing is really clear to me.
str.encode([encoding,[errors]])
str.decode([encoding,[errors]])
First, I don't understand the need for the "encoding" parameter in these two functions.
What is the output of each function, and what is its encoding? What is the use of the "encoding" parameter in each function? I don't really understand the definition of "byte string".
I also have an important question: is there some way to convert from one encoding to another?
I have read some text on ASN.1 about the "octet string" type, so I wondered whether it was the same thing as a "byte string".
Thanks for your help.
It's a little more complex in Python 2 (compared to Python 3), since it conflates the concepts of 'string' and 'bytestring' quite a bit, but see The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets. Essentially, what you need to understand is that 'string' and 'character' are abstract concepts that can't be directly represented by a computer. A bytestring is a raw stream of bytes straight from disk (or that can be written straight from disk). encode goes from abstract to concrete (you give it preferably a unicode string, and it gives you back a byte string); decode goes the opposite way.
The encoding is the rule that says 'a' should be represented by the byte 0x61 and 'α' by the two-byte sequence 0xCE 0xB1 (in UTF-8).
My presentation from PyCon, Pragmatic Unicode, or, How Do I Stop The Pain covers all of these details.
Briefly, Unicode strings are sequences of integers called code points, and bytestrings are sequences of bytes. An encoding is a way to represent Unicode code points as a series of bytes. So unicode_string.encode(enc) will return the byte string of the Unicode string encoded with "enc", and byte_string.decode(enc) will return the Unicode string created by decoding the byte string with "enc".
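For example (a quick sketch of that round trip in Python 2 syntax; the code point is my own choice):
>>> u = u'\u03b1'           # Unicode string: one code point, GREEK SMALL LETTER ALPHA
>>> b = u.encode('utf-8')   # byte string
>>> b
'\xce\xb1'
>>> b.decode('utf-8') == u  # decoding with the same codec round-trips
True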
Python 2.x has two types of strings:
str = "byte strings" = a sequence of octets. These are used for both "legacy" character encodings (such as windows-1252 or IBM437) and for raw binary data (such as struct.pack output).
unicode = "Unicode strings" = a sequence of UTF-16 code units or UTF-32 code units, depending on how Python was built.
This model was changed for Python 3.x:
2.x unicode became 3.x str (and the u prefix was dropped from the literals).
A bytes type was introduced for representing binary data.
A character encoding is a mapping between Unicode strings and byte strings. To convert a Unicode string to a byte string, use the encode method:
>>> u'\u20AC'.encode('UTF-8')
'\xe2\x82\xac'
To convert the other way, use the decode method:
>>> '\xE2\x82\xAC'.decode('UTF-8')
u'\u20ac'
Yes, a byte string is an octet string. Encoding and decoding happen when inputting/outputting text (from/to the console, files, the network, ...). Your console may use UTF-8 internally, your web server serves Latin-1, and certain file formats need strange encodings like BibTeX's accents: fran\c{c}aise. You need to convert from/to them on input/output.
The {en|de}code methods do this. They are often called behind the scenes (for example, print "hello world" encodes the string to whatever your terminal uses).
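A small sketch of doing that conversion explicitly at an I/O boundary, using the io module available in Python 2 (the file name and codec are my own example):
import io
with io.open('out.txt', 'w', encoding='latin-1') as f:
    f.write(u'fran\xe7aise')   # encoded to Latin-1 bytes on write
with io.open('out.txt', 'r', encoding='latin-1') as f:
    text = f.read()            # decoded back to a Unicode string on read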
I am trying to convert an incoming byte string that contains non-ASCII characters into a valid UTF-8 string such that I can dump it as JSON.
b = '\x80'
u8 = b.encode('utf-8')
j = json.dumps(u8)
I expected j to be '\xc2\x80' but instead I get:
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
In my situation, 'b' is coming from MySQL via Google protocol buffers and is filled out with some blob data.
Any ideas?
EDIT:
I have Ethernet frames that are stored in a MySQL table as a blob (please, everyone, stay on topic and keep from discussing why there are packets in a table). The table collation is utf-8 and the db layer (SQLAlchemy, non-ORM) is grabbing the data and creating structs (Google protocol buffers) which store the blob as a Python 'str'. In some cases I use the protocol buffers directly without any issue. In other cases, I need to expose the same data via JSON. What I noticed is that when json.dumps() does its thing, '\x80' can be replaced with the Unicode replacement character (\ufffd, iirc).
You need to examine the documentation for the software API that you are using. BLOB is an acronym: Binary Large OBject.
If your data is in fact binary, the idea of decoding it to Unicode is of course a nonsense.
If it is in fact text, you need to know what encoding to use to decode it to Unicode.
Then you use json.dumps(a_Python_object) ... if you encode it to UTF-8 yourself, json will decode it back again:
>>> import json
>>> json.dumps(u"\u0100\u0404")
'"\\u0100\\u0404"'
>>> json.dumps(u"\u0100\u0404".encode('utf8'))
'"\\u0100\\u0404"'
>>>
UPDATE about latin1:
u'\x80' is a useless meaningless C1 control character -- the encoding is extremely unlikely to be Latin-1. Latin-1 is "a snare and a delusion" -- all 8-bit bytes are decoded to Unicode without raising an exception. Don't confuse "works" and "doesn't raise an exception".
Use b.decode('name of source encoding') to get a Unicode version. This was surprising to me when I learned it, e.g.:
In [123]: 'foo'.decode('latin-1')
Out[123]: u'foo'
I think what you are trying to do is decode a string object that is in some encoding (do you know what that encoding is?) to get the unicode object:
unicode_b = b.decode('some_encoding')
and then re-encode the unicode object with the utf_8 codec to get back a string object:
b = unicode_b.encode('utf_8')
This uses the unicode object as a translator. Without knowing the original encoding of the string I can't say for certain, but there is a possibility that the conversion will not go as expected; the unicode object is not meant for converting strings from one encoding to another by itself. Assuming you know what the encoding is, I would work with the unicode object and convert back to an encoded string only when you want a string object back; if you don't know the encoding, there really isn't a way to find it out without trial and error.
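A minimal sketch of that decode/re-encode path feeding into json (it hypothetically assumes the blob bytes are Latin-1 text; swap in whatever the real source encoding turns out to be):
import json
b = '\x80'                       # Python 2 byte string from the blob
unicode_b = b.decode('latin-1')  # bytes -> unicode with the known source codec
j = json.dumps(unicode_b)        # json handles the unicode object directly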