Unescape unicode characters from a string - python

I have a string that contains Unicode markers and looks like this:
"u'type' does not belong to [u'item1', u'item2']"
How would I unescape the Unicode parts, so that it prints out:
"'type' does not belong to ['item1', 'item2']"

You could use the replace method of the string. Note that replace returns a new string rather than modifying the original, so capture the result:
stringToModify = "u'type' does not belong to [u'item1', u'item2']"
modifiedString = stringToModify.replace("u'", "'")

What you show here is actually a byte string displaying the representation of three Unicode strings (all of them containing only ASCII characters FWIW).
If you cannot change the code producing this string, your only real option is re.sub().
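A minimal re.sub() sketch, assuming the stray u prefixes always sit directly before a quote, could look like this:
import re

stringToModify = "u'type' does not belong to [u'item1', u'item2']"
# Drop a standalone u that immediately precedes a quote; the word boundary
# keeps it from matching the u at the end of a longer word.
cleaned = re.sub(r"\bu'", "'", stringToModify)
print(cleaned)  # 'type' does not belong to ['item1', 'item2']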


convert a string to its codepoint in python

There are characters like '‌' that are not visible, so I can't copy-paste them. I want to convert any character to its code point, like '\u200D'.
another example is: 'abc' => '\u0061\u0062\u0063'
Allow me to rephrase your question. The header convert a string to its codepoint in python clearly did not get through to everyone, mostly, I think, because we can't imagine what you want it for.
What you want is a string containing a representation of Unicode escapes.
You can do that this way:
print(''.join("\\u{:04x}".format(b) for b in b'abc'))
\u0061\u0062\u0063
If you display that printed value as a string literal you will see doubled backslashes, because backslashes have to be escaped in a Python string. So it will look like this:
'\\u0061\\u0062\\u0063'
The reason is that if you simply put single (unescaped) backslashes in your string literal, like this:
a = "\u0061\u0062\u0063"
when you display a at the prompt you will get:
>>> a
'abc'
'\u0061\u0062\u0063'.encode('utf-8') will encode the text to UTF-8 bytes (here simply b'abc'), not to escape sequences.
Edit:
Since Python interprets the \u escapes in the literal automatically, you never see the escape sequences themselves, but you can write a function that generates them:
def get_string_unicode(string_to_convert):
    res = ''
    for letter in string_to_convert:
        res += '\\u' + hex(ord(letter))[2:].zfill(4)
    return res
Result:
>>> get_string_unicode('abc')
'\\u0061\\u0062\\u0063'

Python unicode strings

I'm a Python newbie and I'm trying to write a script that writes some strings to a file if there's a difference. The problem is that the original string has some characters in \uNNNN Unicode-escape format, and I cannot convert the new string to that same format.
The original string I'm trying to compare: \u00A1 ATENCI\u00D3N! \u25C4
New string is received as: ¡ ATENCIÓN! ◄
And this the code
str = u'¡ ATENCIÓN! ◄'
print(str)
str1 = str.encode('unicode_escape')
print (str1)
str2 = str1.decode()
print (str2)
And the result is:
¡ ATENCIÓN! ◄
b'\\xa1 ATENCI\\xd3N! \\u25c4'
\xa1 ATENCI\xd3N! \u25c4
So, how can I get \xa1 ATENCI\xd3N! \u25c4 converted to \u00A1 ATENCI\u00D3N! \u25C4 as this is the only Unicode format I can save?
Note: Cases of characters in strings also need to be the same for comparison.
The issue is, according to the docs (read down a little bit, between the escape sequences tables), the \u, \U, and \N Unicode escape sequences are only recognized in string literals. That means that once the literal is evaluated in memory, such as in a variable assignment:
s = "\u00A1 ATENCI\u00D3N! \u25C4"
encoding it with str.encode('unicode_escape') produces a bytes object that uses the shorter \xNN form wherever it can:
b'\\xa1 ATENCI\\xd3N! \\u25c4'
Using
b'\\xa1 ATENCI\\xd3N! \\u25c4'.decode("unicode_escape")
will convert it back to '¡ ATENCIÓN! ◄'. This uses the actual (intended) representation of the characters, and not the \uXXXX escape sequences of the original string s.
So, what you should do is not mess around with encoding and decoding things. Observe:
print("\u00A1 ATENCI\u00D3N! \u25C4" == '¡ ATENCIÓN! ◄')
True
That's all the comparison you need to do.
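That said, if you really do need the new string rendered with the same \uNNNN escapes as the original (for example, to save it back in that format), a minimal sketch could look like this; to_u_escapes is just an illustrative name, and it assumes every code point fits in four hex digits:
def to_u_escapes(text):
    # Escape only non-ASCII characters as uppercase \uNNNN sequences;
    # assumes no code point is above 0xFFFF (no surrogate handling).
    return ''.join(
        ch if ord(ch) < 128 else '\\u{:04X}'.format(ord(ch))
        for ch in text
    )

print(to_u_escapes('¡ ATENCIÓN! ◄'))  # \u00A1 ATENCI\u00D3N! \u25C4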
For further reading, you may be interested in:
How to work with surrogate pairs in Python?
Encodings and Unicode from the Python docs.

Python: convert unicode character to corresponding Unicode string

How do I convert a unicode character 'ב' to its corresponding Unicode character string '\u05d1' in Python?
I asked the opposite question a few days ago:
Python: convert unicode string to corresponding Unicode character
You can do something like,
>>> x
'ב'
>>> x.encode('ascii', 'backslashreplace').decode('utf-8')
'\\u05d1'
From the docs:
The errors parameter is the same as the parameter of the decode() method but supports a few more possible handlers. As well as 'strict', 'ignore', and 'replace' (which in this case inserts a question mark instead of the unencodable character), there is also 'xmlcharrefreplace' (inserts an XML character reference), 'backslashreplace' (inserts a \uNNNN escape sequence) and 'namereplace' (inserts a \N{...} escape sequence).
Something like this works
>>> hex(ord('ב'))
'0x5d1'
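If you want the exact \uXXXX spelling rather than the bare hex value, you can format the code point yourself; this is just a one-line sketch:
print('\\u{:04x}'.format(ord('ב')))  # \u05d1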
Python Specific Encodings:
unicode_escape - Encoding suitable as the contents of a Unicode
literal in ASCII-encoded Python source code, except that quotes are
not escaped.
'ב'.encode('unicode-escape').decode() ### '\\u05d1'
print('ב'.encode('unicode-escape').decode()) ### \u05d1
I prefer my own answer which is clean and simple:
json.dumps(unicode_character)
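Note that json.dumps returns a JSON string literal, so the escape comes back wrapped in double quotes:
import json
print(json.dumps('ב'))  # "\u05d1"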
decoded_string = "ב"
encoded_string = decoded_string.encode("utf-8")  # b'\xd7\x91' -- UTF-8 bytes, not a \u escape

Converting Python unicode code point string to its actual unicode character

I have a dataset containing some poorly parsed text that includes a lot of Unicode characters (like 'a', '{', 'Ⅷ', '♞', ...) that have been improperly escaped.
All of the backslashes are escaped, so every Unicode escape sequence was left as literal text (a \ next to a u) instead of being interpreted as a single character.
More specifically, I have strings that look like this:
>>> '\\u00e9'
'\\u00e9'
And I want them to look like this:
>>> '\u00e9'
'é'
How can I convert the first string to the second?
Here is one way to accomplish this without importing another module.
input_string = '\\u00e9'
print(input_string.encode('latin-1').decode('unicode-escape'))
# output
é
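One caveat: the encode('latin-1') step fails with a UnicodeEncodeError if the text also contains real characters above U+00FF. A standard-library sketch that only touches the literal \uXXXX sequences (unescape_u is just an illustrative name) could be:
import re

def unescape_u(text):
    # Replace every literal \uXXXX sequence with the character it names.
    # Does not handle \UXXXXXXXX escapes or surrogate pairs.
    return re.sub(r'\\u([0-9a-fA-F]{4})',
                  lambda m: chr(int(m.group(1), 16)),
                  text)

print(unescape_u('caf\\u00e9 \\u2665'))  # café ♥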
First you need to pull the hex digits out of the string and turn them into bytes:
classmethod fromhex(string)
This bytes class method returns a bytes object, decoding the given string object. The string must contain two hexadecimal digits per byte, with ASCII whitespace being ignored.
https://docs.python.org/3/library/stdtypes.html#bytes.fromhex
Next we need to convert the hex to Unicode
bytes.decode(encoding="utf-8", errors="strict")
https://docs.python.org/3/library/stdtypes.html#bytes.decode
So it would look something like this (the [2:] strips the leading \u, and the two resulting bytes are decoded as a single big-endian UTF-16 code unit):
char = '\\u00e9'
print(bytes.fromhex(char[2:]).decode('utf-16-be'))  # é

Returning the first N characters of a unicode string

I have a string in unicode and I need to return the first N characters.
I am doing this:
result = unistring[:5]
but of course the length of unicode strings != length of characters.
Any ideas? The only solution is using re?
Edit: More info
unistring = "Μεταλλικα" #Metallica written in Greek letters
result = unistring[:1]
returns-> ?
I think that unicode strings use two bytes per character, and that's why this happens. If I do:
result = unistring[:2]
I get
M
which is correct,
So, should I always multiply the slice indices by 2, or should I convert to something?
Unfortunately, for historical reasons, Python prior to 3.0 has two string types: byte strings (str) and Unicode strings (unicode).
Before the unification in Python 3.0 there are two ways to declare a string literal: unistring = "Μεταλλικα", which is a byte string, and unistring = u"Μεταλλικα", which is a Unicode string.
The reason you see ? when you do result = unistring[:1] is because some of the characters in your Unicode text cannot be correctly represented in the non-unicode string. You have probably seen this kind of problem if you ever used a really old email client and received emails from friends in countries like Greece for example.
So in Python 2.x if you need to handle Unicode you have to do it explicitly. Take a look at this introduction to dealing with Unicode in Python: Unicode HOWTO
When you say:
unistring = "Μεταλλικα" #Metallica written in Greek letters
You do not have a unicode string. You have a bytestring in (presumably) UTF-8. That is not the same thing. A unicode string is a separate datatype in Python. You get unicode by decoding bytestrings using the right encoding:
unistring = "Μεταλλικα".decode('utf-8')
or by using the unicode literal in a source file with the right encoding declaration
# coding: UTF-8
unistring = u"Μεταλλικα"
The unicode string will do what you want when you do unistring[:5].
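For a concrete picture of the difference, a short sketch (Python 2, assuming the source file itself is saved as UTF-8):
# -*- coding: utf-8 -*-
bytestring = "Μεταλλικα"                 # 9 Greek letters, 18 UTF-8 bytes
unistring = bytestring.decode('utf-8')

print len(bytestring)   # 18 -- len() counts bytes
print len(unistring)    # 9  -- len() counts code points
print unistring[:5]     # Μεταλ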
There is no correct straightforward approach with any type of "Unicode string".
Even a Python "Unicode" string (UTF-16 on narrow builds) has variable-length characters, so you can't just cut with ustring[:5], because some Unicode code points may use more than one "character", i.e. surrogate pairs.
So if you want to cut 5 code points (note these are not characters), you may have to analyze the text yourself; see http://en.wikipedia.org/wiki/UTF-8 and http://en.wikipedia.org/wiki/UTF-16 for the definitions. You may need bit masks to figure out the boundaries.
Also, you still do not get characters. For example, the word "שָלוֹם" -- peace in Hebrew, "Shalom" -- consists of 4 characters but 6 code points: the letter "shin", the vowel "a", the letter "lamed", the letter "vav", the vowel "o", and the final letter "mem".
So a character is not the same thing as a code point.
The same goes for most Western languages, where a letter with diacritics may be represented as two code points. Search, for example, for "unicode normalization".
So... if you really need the first 5 characters, you have to use tools like the ICU library. For example, there is an ICU binding for Python that provides a character-boundary iterator.
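Without pulling in ICU, a rough standard-library approximation (it only folds combining marks into the preceding base character, which is far short of the full grapheme-cluster rules; first_chars is just an illustrative name) might look like this:
import unicodedata

def first_chars(text, n):
    # Group each base character with the combining marks that follow it,
    # then return the first n groups. Ignores surrogate pairs, ZWJ
    # sequences and the other full grapheme-cluster rules.
    clusters = []
    for ch in text:
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return ''.join(clusters[:n])

print(first_chars(u"שָלוֹם", 2))  # shin with its vowel, plus lamed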
