How to decode hex encoded Cyrillic string? - python

I have a hex encoded Cyrillic string "041E043F043B04300442".
How can I convert it into a text string?
I tried this way:
codecs.decode('041E043F043B04300442', 'hex').decode('utf-16')
'Ḅ㼄㬄〄䈄'
But I'm getting the wrong symbols.
As I see from the Unicode symbols list, the first symbol should be a Cyrillic symbol:
U+041E Cyrillic Capital Letter O
What am I doing wrong?

I had to use another codec:
codecs.decode('041E043F043B04300442', 'hex').decode('utf-16be')
Now it decodes fine.
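For reference, the same fix as a self-contained snippet (Python 3); bytes.fromhex is an equivalent way to get the raw bytes:
>>> import codecs
>>> codecs.decode('041E043F043B04300442', 'hex').decode('utf-16be')
'Оплат'
>>> bytes.fromhex('041E043F043B04300442').decode('utf-16be')
'Оплат'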

utf-16 defaults to the machine's endianness unless a byte order mark (BOM, U+FEFF) is present. Your machine appears to be little-endian, but the data is big-endian:
>>> bytes.fromhex("041E043F043B04300442").decode('utf-16')
'Ḅ㼄㬄〄䈄'
>>> bytes.fromhex("041E043F043B04300442").decode('utf-16le')
'Ḅ㼄㬄〄䈄'
>>> bytes.fromhex("041E043F043B04300442").decode('utf-16be')
'Оплат'
(the start of the Russian 'Оплата', English: "payment")
With a correct BOM added, utf-16 can work:
>>> bytes.fromhex("FEFF041E043F043B04300442").decode('utf-16')
'Оплат'
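As a sanity check, the platform default that a BOM-less utf-16 falls back to is exposed as sys.byteorder (a small sketch; output shown for a little-endian machine):
>>> import sys
>>> sys.byteorder
'little'
An explicit utf-16be (or the prepended BOM above) decodes the same way on any platform.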

Related

Two unicode encodings represent 1 cyrillic letter

I have this string in a Unicode-escape representation and in a (mis-decoded) UTF-8 representation:
\u00d0\u0095\u00d1\u0081\u00d0\u00bb\u00d0\u00b8\u00d0\u00bf\u00d0\u00be\u00d0\u00b2\u00d0\u00b5\u00d0\u00b7\u00d0\u00b5\u00d1\u0082 \u00d1\u0082\u00d0\u00be\u00d1\u0081\u00d0\u00b5\u00d0\u00b3\u00d0\u00be\u00d0\u00b4\u00d0\u00bd\u00d1\u008f\u00d1\u0083\u00d0\u00b6\u00d0\u00b5\u00d1\u0081\u00d0\u00ba\u00d0\u00b8\u00d0\u00bd\u00d1\u0083
and
ЕÑли повезет то ÑÐµÐ³Ð¾Ð´Ð½Ñ ÑƒÐ¶Ðµ Ñкину.
The desired output is "Если повезет то сегодня уже скину".
I have tried every encoding I could think of, but still wasn't able to get the string in fully Cyrillic form.
The best I got was
'�?�?ли повезе�? �?о �?егодн�? �?же �?кин�?'
using windows-1252.
I've also noticed that one Cyrillic letter in the desired string corresponds to two Unicode escapes.
For example: \u00d0\u0095 = 'Е'.
Does anyone know which encoding to use, and how, to get a normal result?
You have a mis-decoded string where the UTF-8 bytes were translated as ISO-8859-1 (also known as latin1). Ideally, re-download with the correct encoding, but you can also encode with the wrongly-used encoding to regain the original byte stream, then decode with the right encoding (UTF-8):
Python:
>>> s = '\u00d0\u0095\u00d1\u0081\u00d0\u00bb\u00d0\u00b8\u00d0\u00bf\u00d0\u00be\u00d0\u00b2\u00d0\u00b5\u00d0\u00b7\u00d0\u00b5\u00d1\u0082 \u00d1\u0082\u00d0\u00be\u00d1\u0081\u00d0\u00b5\u00d0\u00b3\u00d0\u00be\u00d0\u00b4\u00d0\u00bd\u00d1\u008f\u00d1\u0083\u00d0\u00b6\u00d0\u00b5\u00d1\u0081\u00d0\u00ba\u00d0\u00b8\u00d0\u00bd\u00d1\u0083'
>>> s
'Ð\x95Ñ\x81липовезеÑ\x82 Ñ\x82оÑ\x81егоднÑ\x8fÑ\x83жеÑ\x81кинÑ\x83'
>>> print(s)
ÐÑÐ»Ð¸Ð¿Ð¾Ð²ÐµÐ·ÐµÑ ÑоÑегоднÑÑжеÑкинÑ
>>> s.encode('latin1')
b'\xd0\x95\xd1\x81\xd0\xbb\xd0\xb8\xd0\xbf\xd0\xbe\xd0\xb2\xd0\xb5\xd0\xb7\xd0\xb5\xd1\x82 \xd1\x82\xd0\xbe\xd1\x81\xd0\xb5\xd0\xb3\xd0\xbe\xd0\xb4\xd0\xbd\xd1\x8f\xd1\x83\xd0\xb6\xd0\xb5\xd1\x81\xd0\xba\xd0\xb8\xd0\xbd\xd1\x83'
>>> s.encode('latin1').decode('utf8')
'Еслиповезет тосегодняужескину'
You may also have a literal string of Unicode escape codes, which is a bit trickier:
>>> s=r'\u00d0\u0095\u00d1\u0081\u00d0\u00bb\u00d0\u00b8\u00d0\u00bf\u00d0\u00be\u00d0\u00b2\u00d0\u00b5\u00d0\u00b7\u00d0\u00b5\u00d1\u0082 \u00d1\u0082\u00d0\u00be\u00d1\u0081\u00d0\u00b5\u00d0\u00b3\u00d0\u00be\u00d0\u00b4\u00d0\u00bd\u00d1\u008f\u00d1\u0083\u00d0\u00b6\u00d0\u00b5\u00d1\u0081\u00d0\u00ba\u00d0\u00b8\u00d0\u00bd\u00d1\u0083'
>>> print(s)
\u00d0\u0095\u00d1\u0081\u00d0\u00bb\u00d0\u00b8\u00d0\u00bf\u00d0\u00be\u00d0\u00b2\u00d0\u00b5\u00d0\u00b7\u00d0\u00b5\u00d1\u0082 \u00d1\u0082\u00d0\u00be\u00d1\u0081\u00d0\u00b5\u00d0\u00b3\u00d0\u00be\u00d0\u00b4\u00d0\u00bd\u00d1\u008f\u00d1\u0083\u00d0\u00b6\u00d0\u00b5\u00d1\u0081\u00d0\u00ba\u00d0\u00b8\u00d0\u00bd\u00d1\u0083
In this case, the string has to be converted back to bytes, decoded as Unicode escapes, then encoded back to bytes and correctly decoded as UTF-8. latin1 has the feature that the first 256 code points of Unicode map to bytes 0-255 in that codec, so it converts 1:1 code point to byte value.
>>> s.encode('latin1').decode('unicode-escape').encode('latin1').decode('utf8')
'Еслиповезет тосегодняужескину'
d0 95 d1 81 d0 bb d0 b8 is the correct UTF-8 octet stream for "Если".
So you need to convert each character to a byte (an 8-bit word, an octet) by dropping the most significant byte of its code point (which is always 0 in your example anyway), then decode the result as UTF-8.
Or better, go back to the source you got this from, and make sure the stream of octets is not misread as a single-byte encoding in the first place.
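A minimal sketch of that manual conversion (Python 3; it is equivalent to encoding with latin1, since every code point here is below 256):
>>> s = '\u00d0\u0095\u00d1\u0081\u00d0\u00bb\u00d0\u00b8'
>>> bytes(ord(c) for c in s).decode('utf-8')
'Если'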

String has unicode code points embedded, how to convert? Python 3 [duplicate]

I'm getting back from a library what looks to be an incorrect unicode string:
>>> title
u'Sopet\xc3\xb3n'
Now, those two hex escapes there are the UTF-8 encoding for U+00F3 LATIN SMALL LETTER O WITH ACUTE. So far as I understand, a unicode string in Python should have the actual character, not the UTF-8 encoding for the character, so I think this is incorrect and presumably a bug either in the library or in my input, right?
The question is, how do I (a) recognize that I have UTF-8 encoded text in my unicode string, and (b) convert this to a proper unicode string?
I'm stumped on (a), as there's nothing wrong, encoding-wise, with the original string: both u'\xc3' and u'\xb3' are valid characters in their own right, they're just not what's supposed to be there.
It looks like I can achieve (b) by eval()ing that repr() output minus the "u" in front to get a str and then decoding the str with UTF-8:
>>> eval(repr(title)[1:]).decode("utf-8")
u'Sopet\xf3n'
>>> print eval(repr(title)[1:]).decode("utf-8")
Sopetón
But that seems a bit kludgy. Is there an officially-sanctioned way to get the raw data out of a unicode string and treat that as a regular string?
a) Try putting it through the method in (b) below.
b)
>>> u'Sopet\xc3\xb3n'.encode('latin-1').decode('utf-8')
u'Sopet\xf3n'
You should use:
>>> title.encode('raw_unicode_escape')
Python 2:
print(u'\xd0\xbf\xd1\x80\xd0\xb8'.encode('raw_unicode_escape'))
Python 3:
print(u'\xd0\xbf\xd1\x80\xd0\xb8'.encode('raw_unicode_escape').decode('utf8'))
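raw_unicode_escape maps code points below U+0100 straight to the corresponding byte values (much like latin-1), which is why it repairs this particular mojibake. A sketch on the question's string (Python 3):
>>> title = 'Sopet\xc3\xb3n'
>>> title.encode('raw_unicode_escape').decode('utf-8')
'Sopetón'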

Encode Decode using python

I have this function in Python:
Str = "Ã¼"
print Str

def correctText(str):
    str = str.upper()
    correctedText = str.decode('UTF8').encode('Windows-1252')
    return correctedText

corText = correctText(Str)
print corText
It works and converts characters like ü and é, however it fails when I try Ã? and Â¶.
Is there a way I can fix it?
The byte sequences behind Ã? and Â¶ do not form the UTF-8 you expect, so the conversion breaks on them. What you need to do is either use some other kind of encoding or strip out the errors in your str by using the unicode() function with an error handler. I recommend the latter.
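A sketch of the error-tolerant route (Python 2): unicode() accepts an error handler, so invalid sequences are replaced instead of raising:
>>> unicode('\xc3\x3f', 'utf-8', 'replace')
u'\ufffd?'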
What you are trying to do is compose valid UTF-8 byte sequences out of several consecutive Windows-1252 characters.
For example, for ü: the Windows-1252 code of Ã is C3 and that of ¼ is BC. Together, C3BC happens to be the UTF-8 encoding of ü.
Now, for Ã?, the Windows-1252 encoding is C33F, which is not valid UTF-8 (because the second byte does not start with the bits 10).
Are you sure this sequence occurs in your text? For example, for à, the Windows-1252 decoding of its UTF-8 encoding (C3A0) is Ã followed by a non-printable character (a non-breaking space). So if that second character is not shown, the ? might just be a regular character of the text.
For Â¶ the Windows-1252 encoding is C2B6. Shouldn't it be Ã¶, whose Windows-1252 encoding is C3B6, which equals the UTF-8 code of ö?
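A small sketch of the composition being described (Python 2; the bytes are written explicitly to avoid source-encoding confusion):
>>> '\xc3\xbc'.decode('utf-8')   # Ã (C3) + ¼ (BC) is valid UTF-8: ü
u'\xfc'
>>> '\xc3\x3f'.decode('utf-8')   # Ã (C3) + ? (3F) is not
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 0: invalid continuation byte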

How to capture all letters from different languages in python?

I have a corpus of different texts from different languages.
I want to capture all characters. I use Python 2.7, and the default encoding setting is UTF-8.
I do not know why this code prints a German umlaut correctly:
'Erd\xC3\xA4pfel'.decode('unicode-escape').encode('latin1').decode('utf-8')
Result is:
Erdäpfel
but when I use this code:
'Erd\xC3\xA4pfel'.decode('unicode-escape').encode('utf-8').decode('utf-8')
the result is:
ErdÃ¤pfel
which is different.
I am not familiar with text mining. I know that, for example, the latin1 encoding does not contain every French letter, which is undesirable for my project.
How can I convert all the Unicode-escape strings in my corpus, regardless of language, to their respective characters?
UTF-8, according to the documentation, covers all languages, so why doesn't it print the German umlaut correctly while latin1 does?
PS: The case of the hex digits in the escape sequences makes no difference; I have tried both and the results were the same.
You already have UTF-8 encoded data. There are no literal escape sequences in your bytestring. You are looking at the repr() output of a string, where non-printable ASCII characters are shown as escape sequences because that makes the value easy to copy-paste in an ASCII-safe way. The \xc3 you see is one byte, not four separate characters:
>>> 'Erd\xC3\xA4pfel'
'Erd\xc3\xa4pfel'
>>> 'Erd\xC3\xA4pfel'[3]
'\xc3'
>>> 'Erd\xC3\xA4pfel'[4]
'\xa4'
>>> print 'Erd\xC3\xA4pfel'
Erdäpfel
You'd have to use a raw string literal or doubled backslashes to actually get escape sequences that unicode-escape would handle:
>>> '\\xc3\\xa4'
'\\xc3\\xa4'
>>> '\\xc3\\xa4'[0]
'\\'
>>> '\\xc3\\xa4'[1]
'x'
>>> '\\xc3\\xa4'[2]
'c'
>>> '\\xc3\\xa4'[3]
'3'
>>> print '\\xc3\\xa4'
\xc3\xa4
Note how there is a separate \ backslash character in that string (echoed as \\, escaped again).
Besides interpreting actual escape sequences, the unicode-escape codec decodes your data as Latin-1, so you end up with a Unicode string containing the character U+00C3 LATIN CAPITAL LETTER A WITH TILDE. Encoding that back to Latin-1 bytes gives you the \xC3 byte again, and you are back to UTF-8 bytes. Decoding those as UTF-8 then works correctly.
But your second attempt encoded the U+00C3 LATIN CAPITAL LETTER A WITH TILDE codepoint to UTF-8, and that encoding gives you the byte sequence \xc3\x83. Printing those bytes to your UTF-8 terminal shows the Ã character. The other byte, \xA4, became U+00A4 CURRENCY SIGN, whose UTF-8 byte sequence is \xc2\xa4, which prints as ¤.
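A quick check of that double encoding (Python 2, on a UTF-8 terminal):
>>> u'\xc3\xa4'.encode('utf-8')
'\xc3\x83\xc2\xa4'
>>> print u'\xc3\xa4'.encode('utf-8')
Ã¤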
There is absolutely no need to decode as unicode-escape here. Just leave the data as is. Or, perhaps, decode as UTF-8 to get a unicode object:
>>> 'Erd\xC3\xA4pfel'.decode('utf8')
u'Erd\xe4pfel'
>>> print 'Erd\xC3\xA4pfel'.decode('utf8')
Erdäpfel
If your actual data (and not the test you did) contains \xhh escape sequences that encode UTF-8, then don't use unicode-escape to decode those sequences either. Use string-escape, so you get a byte string containing UTF-8 data (which you can then decode to Unicode as needed):
>>> 'Erd\\xc3\\xa4pfel'
'Erd\\xc3\\xa4pfel'
>>> 'Erd\\xc3\\xa4pfel'.decode('string-escape')
'Erd\xc3\xa4pfel'
>>> 'Erd\\xc3\\xa4pfel'.decode('string-escape').decode('utf8')
u'Erd\xe4pfel'
>>> print 'Erd\\xc3\\xa4pfel'.decode('string-escape').decode('utf8')
Erdäpfel

Python: decoding a string that consists of both unicode code points and unicode text

Parsing some HTML content I got the following string:
АБВ\u003d\"res
The common advice on handling it appears to be to decode using unicode_escape. However, this results in the following:
ÐÐÐ="res
The escaped characters get correctly decoded, but the Cyrillic letters for some reason get mangled. Other than using regexes to extract everything that looks like a Unicode escape, decoding only those, and splicing everything back into a new string, what other methods exist for decoding strings with embedded Unicode escapes in Python?
unicode_escape treats the input as Latin-1 encoded; any bytes that do not form a Python string literal escape sequence are decoded by mapping byte values directly to Unicode codepoints. You gave it UTF-8 bytes, so the Cyrillic characters, represented by 2 bytes each, were decoded to two Latin-1 characters each, one of which is U+00D0 Ð and the other unprintable:
>>> print repr('АБВ\\u003d\\"res')
'\xd0\x90\xd0\x91\xd0\x92\\u003d\\"res'
>>> print repr('АБВ\\u003d\\"res'.decode('latin1'))
u'\xd0\x90\xd0\x91\xd0\x92\\u003d\\"res'
>>> print 'АБВ\\u003d\\"res'.decode('latin1')
ÐÐÐ\u003d\"res
This kind of mis-decoding is called a Mojibake, and can be repaired by re-encoding to Latin-1, then decoding from the correct codec (UTF-8 in your case):
>>> print 'АБВ\u003d\\"res'.decode('unicode_escape')
ÐÐÐ="res
>>> print 'АБВ\u003d\\"res'.decode('unicode_escape').encode('latin1').decode('utf8')
АБВ="res
Note that this will fail if the \uhhhh escape sequences encode codepoints outside of the Latin-1 range (U+0000-U+00FF).
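A sketch of that failure mode (Python 3; Ж is U+0416, outside the Latin-1 range):
>>> import codecs
>>> codecs.decode('\\u0416', 'unicode_escape')
'Ж'
>>> codecs.decode('\\u0416', 'unicode_escape').encode('latin1')
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'latin-1' codec can't encode character '\u0416' in position 0: ordinal not in range(256)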
The Python 3 equivalent of the above uses codecs.decode(), since str objects no longer have a decode() method:
>>> import codecs
>>> codecs.decode('АБВ\\u003d\\"res', 'unicode_escape').encode('latin1').decode('utf8')
'АБВ="res'
The regex really is the easiest solution (Python 3). Any doubled backslashes the pattern consumes (they are matched only to prove the \u escape itself isn't escaped) must be re-emitted in the replacement:
import re

text = 'АБВ\\u003d\\"re'
# Replace each unescaped \uXXXX escape with the code point it names,
# keeping any preceding escaped-backslash pairs intact.
re.sub(r'(?i)(?<!\\)((?:\\\\)*)\\u([0-9a-f]{4})',
       lambda m: m.group(1) + chr(int(m.group(2), 16)), text)
This works fine with any 4-nibble Unicode escape, and can be pretty easily extended to other escapes.
For Python 2, make all strings u'' strings, and use unichr.
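For reference, a quick run of the pattern above (Python 3):
>>> import re
>>> text = 'АБВ\\u003d\\"re'
>>> re.sub(r'(?i)(?<!\\)((?:\\\\)*)\\u([0-9a-f]{4})',
...        lambda m: m.group(1) + chr(int(m.group(2), 16)), text)
'АБВ=\\"re'
Note that the \\" stays as-is: unlike unicode_escape, this decodes only the \uXXXX escapes.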
