Normalize composite/decomposable/variable-length characters (unicode/python3.4)

I stumbled upon http://mortoray.com/2013/11/27/the-string-type-is-broken/
And to my horror...
print(len('noe\u0308l')) # returns 5 not 4
However I found
https://stackoverflow.com/a/14682498/1267259,
Normalizing Unicode
import unicodedata
print(len(unicodedata.normalize('NFC', 'noe\u0308l'))) # returns 4
But what do I do with the Schrödinger's cats?
print(len('😸😾')) # returns 4 not 2
(side question: in my text editor when I'm trying to save I get a "utf-8 codec can't encode character x in position y: surrogates not allowed" but in the command prompt I can paste and run code with those characters, I assume it is because the cats exist on a different quantum level (SMP) but how do I normalize them?)
Is there anything else I should do to make sure all characters are counted as "1"?

Your editor is producing surrogate pairs, not the actual code points, which is why you are also getting that warning. Use:
'\U0001f638\U0001f63e'
to define the cats without resorting to surrogates.
If you do have a string with surrogates in it, you can recode it via UTF-16, allowing the surrogates to be encoded with the 'surrogatepass' error handler:
>>> # \U0001f638 is \ud83d\ude38 when using UTF-16 surrogates
...
>>> '\ud83d\ude38'.encode('utf16', 'surrogatepass').decode('utf16')
'😸'
>>> len(_)
1
From the Error Handlers documentation:
'surrogatepass'
Allow encoding and decoding of surrogate code points (U+D800 to U+DFFF) as normal code points, where these codecs would otherwise treat their presence as an error.
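Applied to the cats from the question, the same round-trip collapses each surrogate pair into a single code point:
>>> broken = '\ud83d\ude38\ud83d\ude3e'  # the cats, as surrogate pairs
>>> broken.encode('utf-16', 'surrogatepass').decode('utf-16')
'😸😾'
>>> len(_)
2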

For consistent codepoint counting on any version of Python, encode to UTF-32 and divide the byte count by 4.
import unicodedata

print(len(unicodedata.normalize('NFC', 'noe\u0308l').encode('utf-32le')) // 4)  # 4
print(len('\U0001f638\U0001f63e'.encode('utf-32le')) // 4)  # 2
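Note that normalization alone won't collapse every user-perceived character; some combinations have no precomposed form. If you need to count grapheme clusters rather than code points, one option (a suggestion beyond what the question asks, using the third-party regex module) is its \X pattern, which matches one grapheme cluster:

import regex  # third-party module: pip install regex

print(len(regex.findall(r'\X', 'noe\u0308l')))  # 4, even without NFC
print(len(regex.findall(r'\X', '\U0001f638\U0001f63e')))  # 2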

Related

Python utf-8 encoding not following unicode rules

Background: I've got a byte file that is encoded using unicode. However, I can't figure out the right method to get Python to decode it to a string. Sometimes it uses 1-byte ASCII text. The majority of the time it uses 2-byte "plain latin" text, but it can possibly contain any unicode character. So my python program needs to be able to decode that and handle it. Unfortunately byte_string.decode('unicode') isn't a thing, so I need to specify another encoding scheme. Using Python 3.9.
I've read through the Python docs on Unicode and on UTF-8. If Python uses Unicode for its strings, and UTF-8 as default, this should be pretty straightforward, yet I keep getting incorrect decodes.
If I understand how Unicode works, the most significant byte is the character code, and the least significant byte is the lookup value in the decode table. So I would expect 0x00_41 to decode to "A",
0x00_F2 => ò, and
0x65_03_01 => é (e with combining acute accent).
I wrote a short test file to experiment with these byte combinations, and I'm running into a few situations that I don't understand (despite extensive reading).
Example code:
def main():
    print("Starting MAIN...")

    vrsn_bytes = b'\x76\x72\x73\x6E'
    serato_bytes = b'\x00\x53\x00\x65\x00\x72\x00\x61\x00\x74\x00\x6F'
    special_bytes = b'\xB2\xF2'
    combining_bytes = b'\x41\x75\x64\x65\x03\x01'

    print(f"vrsn_bytes: {vrsn_bytes}")
    print(f"serato_bytes: {serato_bytes}")
    print(f"special_bytes: {special_bytes}")
    print(f"combining_bytes: {combining_bytes}")

    encoding_method = 'utf-8'  # also tried latin-1 and cp1252

    vrsn_str = vrsn_bytes.decode(encoding_method)
    serato_str = serato_bytes.decode(encoding_method)
    special_str = special_bytes.decode(encoding_method)
    combining_str = combining_bytes.decode(encoding_method)

    print(f"vrsn_str: {vrsn_str}")
    print(f"serato_str: {serato_str}")
    print(f"special_str: {special_str}")
    print(f"combining_str: {combining_str}")

    return True

if __name__ == '__main__':
    print("Starting Command Line Experiment!")
    if not main():
        print("\n Command Line Test FAILED!!")
    else:
        print("\n Command Line Test PASSED!!")
Issue 1: utf-8 encoding. As the experiment is written, I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 0: invalid start byte
I don't understand why this fails to decode; according to the Unicode table, 0x00B2 should be "SUPERSCRIPT TWO". In fact, it seems like anything above 0x7F returns the same UnicodeDecodeError.
I know that some encoding schemes only support 7 bits, which is what seems to be happening here, but utf-8 should support not only 8 bits, but multiple bytes.
If I change encoding_method to encoding_method = 'latin-1', which extends the original 128 ASCII characters to 256 characters (up to 0xFF), then I get a better output:
vrsn_str: vrsn
serato_str: Serato
special_str: ²ò
combining_str: Aude
However, this encoding is not handling the 2-byte codes properly. \x00_53 should be S, not �S, and none of the encoding methods I'll mention in this post handle the combining acute accent after Aude properly.
So far I've tried many different encoding methods, but the ones that come closest are unicode_escape, latin-1, and cp1252. While I expect utf-8 to be what I'm supposed to use, it does not behave as described in the Python docs linked above.
Any help is appreciated. Besides trying more methods, I don't understand why this isn't decoding according to the table in link 3.
UPDATE:
After some more reading, and seeing your responses, I understand why you're so confused. I'm going to explain further so that hopefully this helps someone in the future.
The byte file that I'm decoding is not mine (hence why the encoding does not make sense). What I see now is that the bytes represent the code point, not the byte representation of the unicode character.
For example: I want 0x00_F2 to translate to ò. But the actual UTF-8 byte representation of ò is 0xC3_B2. What I have is the integer value of the code point. So while I was trying to decode, what I actually need to do is convert 0x00_F2 to the integer 242, then use chr(242) to get the character.
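A minimal sketch of that conversion, assuming the file stores big-endian two-byte code points (the variable names are mine):

data = b'\x00\xf2'                       # the code point, stored as a 16-bit integer
codepoint = int.from_bytes(data, 'big')  # 242
print(chr(codepoint))                    # ò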
I don't know why the file was written this way, and I can't change it. But I see now why the decoding wasn't working. Hopefully this helps someone avoid the frustration I've been figuring out.
Thanks!
This isn't actually a python issue, it's how you're encoding the character. To convert a unicode codepoint to utf-8, you do not simply get the bytes from the codepoint position.
For example, the code point U+2192 is →. The actual binary representation in utf-8 is: 0xE28692, or 11100010 10000110 10010010
As we can see, this is 3 bytes, not 2 as we'd expect if we only used the position. To get correct behavior, you can either do the encoding by hand, or use a converter such as this one:
https://onlineunicodetools.com/convert-unicode-to-binary
This will let you input a unicode character and get the utf-8 binary representation.
To get correct output for ò, we need to use 0xC3B2.
>>> s = b'\xC3\xB2'
>>> print(s.decode('utf-8'))
ò
The reason why you can't use the direct binary representation is because of the headers on the bytes. In utf-8, a code point can occupy anywhere from 1 to 4 bytes. For example, to signify a 1-byte code point, the first bit is encoded as a 0. This means that we can only store 2^7 1-byte code points. So the code point U+0080, which is a control character, must be encoded as a 2-byte sequence: 11000010 10000000.
For this character, the first byte begins with the header 110, while the second byte begins with the header 10. This means that the data for the codepoint is stored in the last 5 bits of the first byte and the last 6 bits of the second byte. If we combine those, we get
00010 000000, which is equivalent to 0x80.
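To make the header arithmetic concrete, here is a small sketch that hand-encodes a code point below U+0800 into a two-byte sequence (the helper name is mine):

def utf8_two_byte(cp):
    # 110xxxxx 10xxxxxx: top 5 bits go in the first byte, low 6 bits in the second
    assert 0x80 <= cp <= 0x7FF
    return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])

print(utf8_two_byte(0xF2))                  # b'\xc3\xb2'
print(utf8_two_byte(0xF2).decode('utf-8'))  # ò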

Converting widechars to system ANSI encoding in Python

I am currently trying to make my screen reader work better with Becky! Internet Mail. The problem I am facing is related to the list view there. This control is not Unicode aware, but the items are custom drawn on screen, so when someone looks at it the content of all fields looks okay regardless of encoding. When accessed via MSAA or UIA, however, basic ANSI characters and mails encoded with the code page set for non-Unicode programs have their text correct, whereas mails encoded in Unicode do not.
Samples of the text :
Zażółć gęślą jaźń
is represented by:
Zażółć gęślą jaźń
In this case it is damaged CP1250, as per the answer below.
However:
⚠️
is represented by:
⚠️
⏰
is represented by:
⏰
and
高生旺
is represented by:
é«ç”źć—ş
I had assumed that these strings are damaged beyond repair; however, when Unicode beta support in Windows 10 is enabled, they are exposed correctly.
Is it possible to simulate this behavior in Python?
The solution needs to work in both Python 2 and 3.
At the moment I am simply replacing known combinations of these characters with their proper representations, but this is not a very good solution, because the lists of replacements and characters to replace need to be updated with each newly discovered character.
Your UTF-8 is decoded as cp1250.
What I did in python3 is this:
orig = "Zażółć gęślą jaźń"
wrong = "Zażółć gęślą jaźń"
for enc in range(437, 1300):
try:
res = orig.encode().decode(f"cp{enc}")
if res == wrong:
print('FOUND', res, enc)
except:
pass
...and the result was the 1250 codepage.
So your solution would be:
import sys

def restore(garbaged):
    # Python 3
    if sys.version_info.major > 2:
        return garbaged.encode('cp1250').decode()
    # Python 2
    else:
        # is it a byte string
        try:
            return garbaged.decode('utf-8').encode('cp1250')
        # or is it unicode
        except UnicodeEncodeError:
            return garbaged.encode('cp1250')
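Usage on the sample from the question, under Python 3 (assuming the garbled string survived copy-paste intact):
>>> restore("Zażółć gęślą jaźń")
'Zażółć gęślą jaźń'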
EDIT:
The reason why "高生旺" cannot be recovered from é«ç”źć—ş:
"高生旺".encode('utf-8') is b'\xe9\xab\x98\xe7\x94\x9f\xe6\x97\xba'.
The problem is the \x98 part: cp1250 assigns no character to that byte value. If you try this:
"高生旺".encode('utf-8').decode('cp1250')
You will get this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x98 in position 2: character maps to <undefined>
The way to get "é«ç”źć—ş" is:
"高生旺".encode('utf-8').decode('cp1250', 'ignore')
But the ignore part is critical, it causes data loss:
'é«ç”źć—ş'.encode('cp1250') is b'\xe9\xab\xe7\x94\x9f\xe6\x97\xba'.
If you compare these two:
b'\xe9\xab\xe7\x94\x9f\xe6\x97\xba'
b'\xe9\xab\x98\xe7\x94\x9f\xe6\x97\xba'
you will see that the \x98 character is missing so when you try to restore the original content, you will get a UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte.
If you try this:
'é«ç”źć—ş'.encode('cp1250').decode('utf-8', 'backslashreplace')
The result will be '\\xe9\\xab生旺'. \xe9\xab\x98 could have been decoded to 高; from \xe9\xab alone it is not possible.

How can I convert surrogate pairs to normal string in Python?

This is a follow-up to Converting to Emoji. In that question, the OP had a json.dumps()-encoded file with an emoji represented as a surrogate pair - \ud83d\ude4f. S/he was having problems reading the file and translating the emoji correctly, and the correct answer was to json.loads() each line from the file, and the json module would handle the conversion from surrogate pair back to (I'm assuming UTF8-encoded) emoji.
So here is my situation: say I have just a regular Python 3 unicode string with a surrogate pair in it:
emoji = "This is \ud83d\ude4f, an emoji."
How do I process this string to get a representation of the emoji out of it? I'm looking to get something like this:
"This is 🙏, an emoji."
# or
"This is \U0001f64f, an emoji."
I've tried:
print(emoji)
print(emoji.encode("utf-8")) # also tried "ascii", "utf-16", and "utf-16-le"
json.loads(emoji) # and `.encode()` with various codecs
Generally I get an error similar to UnicodeEncodeError: XXX codec can't encode character '\ud83d' in position 8: surrogates not allowed.
I'm running Python 3.5.1 on Linux, with $LANG set to en_US.UTF-8. I've run these samples both in the Python interpreter on the command line, and within IPython running in Sublime Text - there don't appear to be any differences.
You've mixed up a literal string \ud83d in a json file on disk (six characters: \ u d 8 3 d) and a single character u'\ud83d' (specified using a string literal in Python source code) in memory. It is the difference between len(r'\ud83d') == 6 and len('\ud83d') == 1 on Python 3.
If you see the '\ud83d\ude4f' Python string (2 characters), then there is a bug upstream. Normally you shouldn't get such a string. If you get one and you can't fix the upstream code that generates it, you can repair it using the surrogatepass error handler:
>>> "\ud83d\ude4f".encode('utf-16', 'surrogatepass').decode('utf-16')
'🙏'
Python 2 was more permissive.
Note: even if your json file contains literal \ud83d\ude4f (12 characters); you shouldn't get the surrogate pair:
>>> print(ascii(json.loads(r'"\ud83d\ude4f"')))
'\U0001f64f'
Notice: the result is 1 character ('\U0001f64f'), not the surrogate pair ('\ud83d\ude4f').
Because this is a recurring question and the error message is slightly obscure, here is a more detailed explanation.
Surrogates are a way to express Unicode code points bigger than U+FFFF.
Recall that Unicode was originally specified to contain 65,536 characters, but that it was soon found that this was not enough to accommodate all the glyphs of the world.
As an extension mechanism for the (otherwise fixed-width) UTF-16 encoding, a reserved area was set up to contain a mechanism for expressing code points outside the Basic Multilingual Plane: Any code point in this special area would have to be followed by another character code from the same area, and together, they would express a code point with a number larger than the old limit.
(Strictly speaking, the surrogates area is divided into two halves; the first surrogate in a pair needs to come from the High Surrogates half, and the second, from the Low Surrogates. Confusingly, the High Surrogates U+D800-U+DBFF have lower code point numbers than the Low Surrogates U+DC00-U+DFFF.)
This is a legacy mechanism to support the UTF-16 encoding specifically, and should not be used in other encodings; they do not need it, and the applicable standards specifically say that this is disallowed.
In other words, while U+12345 can be expressed with the surrogate pair U+D808 U+DF45, you should simply express it directly instead unless you are specifically using UTF-16.
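The arithmetic behind a pair is simple enough to sketch (the helper name is mine):

def to_surrogate_pair(cp):
    # only valid for code points above the BMP (cp > 0xFFFF)
    cp -= 0x10000
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

print([hex(u) for u in to_surrogate_pair(0x12345)])  # ['0xd808', '0xdf45']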
In some more detail, here is how this would be expressed in UTF-8 as a single character:
0xF0 0x92 0x8D 0x85
And here is the corresponding surrogate sequence:
0xED 0xA0 0x88
0xED 0xBD 0x85
As already suggested in the accepted answer, you can round-trip with something like
>>> "\ud808\udf45".encode('utf-16', 'surrogatepass').decode('utf-16').encode('utf-8')
b'\xf0\x92\x8d\x85'
Perhaps see also http://www.russellcottrell.com/greek/utilities/surrogatepaircalculator.htm

Identify garbage unicode string using python

My script reads data from a csv file; the file can contain multiple strings of English or non-English words.
Sometimes the file has garbage strings. I want to identify those strings, skip them, and process the others.
doc = codecs.open(input_text_file, "rb", 'utf_8_sig')
fob = csv.DictReader(doc)
for row in fob:
    if is_valid_unicode_str(row['Name']):
        process_further(row)

def is_valid_unicode_str(value):
    try:
        # the validity check should go here
        return True
    except UnicodeEncodeError:
        return False
csv input:
"Name"
"袋è¢âdcx€¹Ã¤Â¸Å½Ã¦Å“‹å‹们çâ€ÂµÃ¥Â­Âå•â€"
"元大寶來證券"
"John Dove"
I want to define the function is_valid_unicode_str(), which will identify the garbage strings and process only the valid ones.
I tried to use decode, but it doesn't fail while decoding the garbage strings:
value.decode('utf8')
The expected outcome is that the strings with Chinese and English text are processed.
Could you please guide me on how I can implement a function to filter out the invalid strings?
(ftfy developer here)
I've figured out that the text is likely to be '袋袋与朋友们电子商'. I had to guess at the characters 友, 子, and 商, because some unprintable characters are missing from the string in your question. When guessing, I picked the most common character from the small number of possibilities. And I don't know where the "dcx" goes or why it's there.
Google Translate is not very helpful here but it seems to mean something about e-commerce.
So here's everything that happened to your text:
It was encoded as UTF-8 and decoded incorrectly as sloppy-windows-1252, twice
It had the letters "dcx" inserted into the middle of a UTF-8 sequence
Characters that don't exist in windows-1252 -- with byte values 81, 8d, 8f, 90, and 9d -- were removed
A non-breaking space (byte value a0) was removed from the end
If just the first problem had happened, ftfy.fix_text_encoding would be able to fix it. It's possible that the remaining problems just happened while you were trying to get the string onto Stack Overflow.
So here's my recommendation:
Find out who keeps decoding the data incorrectly as sloppy-windows-1252, and get them to decode it as UTF-8 instead.
If you end up with a string like this again, try ftfy.fix_text_encoding on it.
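A minimal sketch of that second step (fix_text_encoding was the ftfy API at the time; later ftfy releases renamed it to fix_encoding, so check your version):

import ftfy

text = u'袋è¢âdcx€¹Ã¤Â¸Å½Ã¦Å“‹å‹们çâ€ÂµÃ¥Â­Âå•â€'
print(ftfy.fix_text_encoding(text))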
You have Mojibake strings; text encoded to one (correct) codec, then decoded as another.
In this case, your text was decoded with the Windows 1252 codepage; the U+20AC EURO SIGN in the text is typical of CP1252 mojibake. The original encoding could be one of the GB* family of Chinese encodings, or a multiple-roundtrip UTF-8 / CP1252 mojibake. Which one, I cannot determine: I cannot read Chinese, nor do I have your full data; CP1252 mojibake includes unprintable characters like the 0x81 and 0x8D bytes that might have gotten lost when you posted your question here.
I'd install the ftfy project; it won't fix GB* encodings (I requested the project add support), but it includes a new codec called sloppy-windows-1252 that'll let you reverse an erroneous decode with that codec:
>>> import ftfy # registers extra codecs on import
>>> text = u'袋è¢âdcx€¹Ã¤Â¸Å½Ã¦Å“‹å‹们çâ€ÂµÃ¥Â­Âå•â€'
>>> print text.encode('sloppy-windows-1252').decode('gb2312', 'replace')
猫垄�姑�⑩dcx�盲赂沤忙��姑ヂ�姑ぢ宦�р�得ヂ�氓�⑩�
>>> print text.encode('sloppy-windows-1252').decode('gbk', 'replace')
猫垄鈥姑�⑩dcx�盲赂沤忙艙鈥姑ヂ鈥姑ぢ宦�р�得ヂ�氓鈥⑩�
>>> print text.encode('sloppy-windows-1252').decode('gb18030', 'replace')
猫垄鈥姑⑩dcx�盲赂沤忙艙鈥姑ヂ鈥姑ぢ宦р�得ヂ氓鈥⑩�
>>> print text.encode('sloppy-windows-1252').decode('utf8', 'ignore').encode('sloppy-windows-1252').decode('utf8', 'replace')
袋�dcx与朋�们���
The � U+FFFD REPLACEMENT CHARACTER shows the decoding wasn't entirely successful, but that could be due to the fact that your copied string here is missing anything not printable or using the 0x81 or 0x8D bytes.
You can try to fix your data this way; from the file data, try to decode to one of the GB* codecs after encoding to sloppy-windows-1252, or roundtrip from UTF-8 twice and see what fits best.
If that's not good enough (you cannot fix the data) you can use the ftfy.badness.sequence_weirdness() function to try and detect the issue:
>>> from ftfy.badness import sequence_weirdness
>>> sequence_weirdness(text)
9
>>> sequence_weirdness(u'元大寶來證券')
0
>>> sequence_weirdness(u'John Dove')
0
Mojibake text scores high on the sequence-weirdness scale. You could try to find an appropriate threshold for your data, above which you'd consider the data most likely corrupted.
However, I think we can use a non-zero return value as a starting point for another test. English text should score 0 on that scale, and so should Chinese text. Chinese mixed with English can still score over 0, but you could not then encode that Chinese text to the CP-1252 codec, while you can with the broken text:
from ftfy.badness import sequence_weirdness

def is_valid_unicode_str(text):
    if not sequence_weirdness(text):
        # nothing weird, should be okay
        return True
    try:
        text.encode('sloppy-windows-1252')
    except UnicodeEncodeError:
        # Not CP-1252 encodable, probably fine
        return True
    else:
        # Encodable as CP-1252, Mojibake alert level high
        return False

Character encoding with Python 3

If I run
print(chr(244).encode())
I get the two-byte result b'\xc3\xb4'. Why is that? I imagine the number 244 can be encoded into one byte!
Your default locale appears to use UTF-8 as the output encoding.
Any codepoint outside the range 0-127 is encoded with multiple bytes in the variable-width UTF-8 codec.
You'll have to use a different codec to encode that codepoint to one byte. The Latin-1 encoding can manage it just fine, while the EBCDIC 500 codec (codepage 500) can too, but encodes to a different byte:
>>> print(chr(244).encode('utf8'))
b'\xc3\xb4'
>>> print(chr(244).encode('latin1'))
b'\xf4'
>>> print(chr(244).encode('cp500'))
b'\xcb'
But the Latin-1 and EBCDIC 500 codecs can only encode 256 codepoints each; UTF-8 can manage all of the Unicode standard.
If you were expecting the number 244 to be interpreted as a byte value instead, then you should not use chr().encode(); chr() produces a unicode value, not a 'byte', and encoding then produces a different result depending on the exact codec. That's because unicode values are text, not bytes.
Pass your number as a list of integers to the bytes() callable instead:
>>> bytes([244])
b'\xf4'
This only happens to fit the Latin-1 codec result, because the first 256 Unicode codepoints map directly to Latin 1 bytes, by design.
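That design property is easy to verify:
>>> all(chr(i).encode('latin-1') == bytes([i]) for i in range(256))
True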
Character #244 is U+00F4 LATIN SMALL LETTER O WITH CIRCUMFLEX which is indeed encoded as 0xc3 0xb4 in UTF-8. If you want to use a single-byte encoding then you need to specify it.
I imagine the number 244 can be encoded into one byte!
Sure, if you design an encoding that can only handle 256 code points, all of them can be encoded into one byte.
But if you design an encoding that can handle all of Unicode's 111000+ code points, obviously you can't pack all of them into one byte.
If your only goal were to make things as compact as possible, you could use most of the 256 initial byte values for common code points, and only reserve a few as start bytes for less common code points.
However, if you only use the lower 128 for single-byte values, there are some big advantages, especially if you design it so that every byte is unambiguously either a 7-bit character, a start byte, or a continuation byte. That makes the algorithm a lot simpler to implement and faster: you can always scan forward or backward to the start of a character, you can search for ASCII text in a string with traditional byte-oriented (strchr) searches, a simple heuristic can detect the encoding very reliably, and you can always detect a truncated string start/end instead of misinterpreting it. So that's exactly what UTF-8 does.
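As a rough sketch of that self-synchronizing property (the function name is mine):

def utf8_byte_kind(b):
    # a byte's role is visible from its own top bits
    if b < 0x80:
        return 'ascii'         # 0xxxxxxx
    if b < 0xC0:
        return 'continuation'  # 10xxxxxx
    return 'start'             # 11xxxxxx

print([utf8_byte_kind(b) for b in 'ô'.encode('utf-8')])  # ['start', 'continuation']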
Wikipedia explains UTF-8 pretty well. Rob Pike, one of the inventors of UTF-8, explains the design history in detail.
