I feel stuck here trying to change encodings with Python 2.5.
I have an XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program that uses this info doesn't like the encoding and I have to convert it to another code page. A real example: I use the Ghostscript Python module to embed pdfmark data in a PDF file, and the end result has wrong characters in Acrobat.
I've tried numerous combinations of .encode() and .decode() between 'utf-8' and 'latin-1', and it drives me crazy that I can't output the correct result.
If I output the string to a file with .encode('utf-8'), convert that file from UTF-8 to CP1252 (often loosely called latin-1) with e.g. iconv.exe, and then embed the data, everything is fine.
Basically, can someone help me convert e.g. the character á, which in UTF-8 is hex C3 A1, to latin-1, where it is hex E1?
Instead of .encode('utf-8'), use .encode('latin-1').
data="UTF-8 data"
udata=data.decode("utf-8")
data=udata.encode("latin-1","ignore")
Should do it.
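For the á character from the question, the round trip looks like this (a minimal Python 2 sketch):
utf8_bytes = '\xc3\xa1'                              # 'á' encoded as UTF-8 (hex C3 A1)
latin1_bytes = utf8_bytes.decode('utf-8').encode('latin-1')
print repr(latin1_bytes)                             # '\xe1' -- 'á' in latin-1 (hex E1)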
Can you provide more details about what you are trying to do? In general, if you have a unicode string, you can use encode to convert it into a byte string with the appropriate encoding. E.g.:
>>> a = u"\u00E1"
>>> type(a)
<type 'unicode'>
>>> a.encode('utf-8')
'\xc3\xa1'
>>> a.encode('latin-1')
'\xe1'
If the previous answers do not solve your problem, check the source of the data that won't print/convert properly.
In my case, I was calling json.load on data read incorrectly from a file because I had not passed encoding="utf-8". Trying to de-/encode the resulting string to latin-1 just does not help...
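For illustration, a minimal sketch of reading the file with an explicit encoding before parsing (the filename data.json is a placeholder):
import io
import json

# open with an explicit encoding instead of the platform default
with io.open('data.json', encoding='utf-8') as f:   # 'data.json' is a made-up name
    payload = json.load(f)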
I have difficulties converting these bytes to a string:
x = b'<strong>\xc5\xb7\xc3\xc0\xd0\xd4\xb8\xd0\xd0\xb1\xc1\xec\xb5\xa5\xbc\xe7\xb3\xa4\xd0\xe4\xb2\xbb\xb9\xe6\xd4\xf2\xc1\xac\xd2\xc2\xc8\xb9\xa3\xac\xb4\xf2\xd4\xec\xd1\xe7\xbb\xe1\xa1\xa2\xca\xb1\xc9\xd0\xb8\xd0\xca\xae\xd7\xe3\xa3\xac\xd5\xc3\xcf\xd4\xc5\xae\xd0\xd4\xf7\xc8\xc1\xa6\xa3\xac\xb4\xf3\xc1\xbf\xcf\xd6\xbb\xf5\xa3\xac\xbb\xb6\xd3\xad\xd0\xc2\xc0\xcf\xbf\xcd\xbb\xa7\xc4\xc3\xd1\xf9\xb2\xc9\xb9\xba\xa3\xa1</strong>'
If I decode via unicode-escape I get weird characters like:
'<strong>Å·ÃÀÐÔ¸ÐбÁìµ¥¼ç³¤Ðä²»¹æÔòÁ¬ÒÂȹ£¬´òÔìÑç»á¡¢Ê±ÉиÐÊ®×㣬ÕÃÏÔÅ®ÐÔ÷ÈÁ¦£¬´óÁ¿ÏÖ»õ£¬»¶Ó\xadÐÂÀÏ¿Í»§ÄÃÑù²É¹º£¡</strong>'
instead of Chinese characters like 欧美性感斜领单肩长袖不规则连衣裙.
You seem to be using the wrong encoding. The right encoding seems to be 'GB2312'.
>>> x.decode('GB2312')
'<strong>欧美性感斜领单肩长袖不规则连衣裙... more symbols</strong>'
I have a supposedly unicode string like this:
u'\xc3\xa3\xc6\u2019\xc2\xa9\xc3\xa3\xc6\u2019\xe2\u20ac\u201c\xc3\xa3\xc6\u2019\xc2\xa9\xc3\xa3\xe2\u20ac\u0161\xc2\xa4\xc3\xa3\xc6\u2019\xe2\u20ac\u201c\xc3\xaf\xc2\xbc\xc2\x81\xc3\xa3\xe2\u20ac\u0161\xc2\xb9\xc3\xa3\xe2\u20ac\u0161\xc2\xaf\xc3\xa3\xc6\u2019\xc2\xbc\xc3\xa3\xc6\u2019\xc2\xab\xc3\xa3\xe2\u20ac\u0161\xc2\xa2\xc3\xa3\xe2\u20ac\u0161\xc2\xa4\xc3\xa3\xc6\u2019\xe2\u20ac\xb0\xc3\xa3\xc6\u2019\xc2\xab\xc3\xa3\xc6\u2019\xe2\u20ac\xa2\xc3\xa3\xe2\u20ac\u0161\xc2\xa7\xc3\xa3\xe2\u20ac\u0161\xc2\xb9\xc3\xa3\xc6\u2019\xe2\u20ac\xa0\xc3\xa3\xe2\u20ac\u0161\xc2\xa3\xc3\xa3\xc6\u2019\xc2\x90\xc3\xa3\xc6\u2019\xc2\xab\xc3\xaf\xc2\xbc\xcb\u2020\xc3\xa3\xe2\u20ac\u0161\xc2\xb9\xc3\xa3\xe2\u20ac\u0161\xc2\xaf\xc3\xa3\xc6\u2019\xe2\u20ac\xa2\xc3\xa3\xe2\u20ac\u0161\xc2\xa7\xc3\xa3\xe2\u20ac\u0161\xc2\xb9\xc3\xaf\xc2\xbc\xe2\u20ac\xb0'
How do I get the correct unicode string out of this? I think the actual unicode value is ラブライブ!スクールアイドルフェスティバル(スクフェス).
You have a Mojibake, an incorrectly decoded piece of text.
You can use the ftfy library to un-do the damage:
>>> from ftfy import fix_text
>>> fix_text(s)
u'\u30e9\u30d6\u30e9\u30a4\u30d6!\u30b9\u30af\u30fc\u30eb\u30a2\u30a4\u30c9\u30eb\u30d5\u30a7\u30b9\u30c6\u30a3\u30d0\u30eb(\u30b9\u30af\u30d5\u30a7\u30b9)'
>>> print fix_text(s)
ラブライブ!スクールアイドルフェスティバル(スクフェス)
According to ftfy, your data was encoded as UTF-8, then decoded as Windows codepage 1252; the ftfy.fixes.fix_one_step_and_explain() function shows the repair steps needed:
>>> ftfy.fixes.fix_one_step_and_explain(s)[-1]
[(u'encode', u'sloppy-windows-1252', 0), (u'decode', u'utf-8', 0)]
(The 'sloppy' encoding is needed because not all bytes can be decoded as cp1252; bad decoders that hit such a byte just copy it through unchanged, and the special codec reverses that process.)
In fact, in your case this was done twice, not a feat I had seen before:
>>> print s.encode('sloppy-cp1252').decode('utf8').encode('sloppy-cp1252').decode('utf8')
ラブライブ!スクールアイドルフェスティバル(スクフェス)
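The difference is easy to see with a byte like 0x81, which is unassigned in real cp1252 (a quick Python 2 sketch; importing ftfy.bad_codecs registers the sloppy codecs):
import ftfy.bad_codecs    # importing this module registers the 'sloppy-*' codecs

print repr('\x81'.decode('sloppy-windows-1252'))   # u'\x81' -- the byte is passed through
'\x81'.decode('cp1252')                            # raises UnicodeDecodeError: unassigned in cp1252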
As is the way of things, I'm having issues decoding some unicode in Python.
Specifically, this page: xkcd.com/403/info.0.json
The relevant part is Paul Erd\u00c5\u0091s!
When I run it through the JSON decoder, the unicode gets decoded, but not using the correct codec.
I'm currently using the one-liner:
requests.get("http://xkcd.com/403/info.0.json").json()["alt"][-12:]
which gets 'Paul ErdÅ\x91s!', which is obviously not what I want.
Any ideas as to what I can do to fix it?
To fix that JSON you'll need to encode to Latin-1 (since it maps the first 256 code points one-to-one onto bytes) and then decode from UTF-8.
Twice. Because it's double-broken.
>>> json.loads('"Erd\u00c3\u0085\u00c2\u0091s!"')
u'Erd\xc3\x85\xc2\x91s!'
>>> json.loads('"Erd\u00c3\u0085\u00c2\u0091s!"').encode('latin-1').decode('utf-8')
u'Erd\xc5\x91s!'
>>> json.loads('"Erd\u00c3\u0085\u00c2\u0091s!"').encode('latin-1').decode('utf-8').encode('latin-1').decode('utf-8')
u'Erd\u0151s!'
>>> print json.loads('"Erd\u00c3\u0085\u00c2\u0091s!"').encode('latin-1').decode('utf-8').encode('latin-1').decode('utf-8')
Erdős!
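The same repair can be wrapped in a small helper that keeps applying the encode/decode pair until the text stops changing (a hypothetical sketch; plain Latin-1 round-trips work for this Erdős example, but not for every mojibake, where ftfy's sloppy codecs are safer):
def fix_double_mojibake(text):
    # Undo repeated "UTF-8 bytes decoded as Latin-1" damage (hypothetical helper).
    while True:
        try:
            repaired = text.encode('latin-1').decode('utf-8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            return text          # no longer round-trips; we're done
        if repaired == text:
            return text
        text = repaired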
I have a string that looks like this:
st = '/M\xe4rzen'
I would like to convert this to unicode. How can I do this? I've tried:
st.decode('utf-8')
unicode(st, 'utf-8')
The original file is UTF-8 encoded, but I can't seem to get the unicode representation of the string.
Your data is not UTF-8 encoded; more likely it is using the Latin-1 encoding:
>>> print st.decode('latin1')
/Märzen
Calling .decode() is enough, no need to also call unicode().
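To see why the UTF-8 attempts failed: 0xE4 starts a multi-byte sequence in UTF-8 that never completes here, while in Latin-1 every byte maps directly to a code point (a quick check):
st = '/M\xe4rzen'
try:
    st.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)                  # invalid continuation byte after \xe4
print(st.decode('latin1'))    # /Märzen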
I have a hex string and I want to convert it to UTF-8 to insert into MySQL (my database is UTF-8).
hex_string = 'kitap ara\xfet\xfdrmas\xfd'
...
result = 'kitap araştırması'
How can I do that?
Try (Python 3.x):
import codecs
# turn the hex digits into bytes, then decode those bytes as UTF-8
codecs.decode("707974686f6e2d666f72756d2e696f", "hex").decode('utf-8')  # 'python-forum.io'
Assuming Python 2.6,
>>> print('kitap ara\xfet\xfdrmas\xfd'.decode('iso-8859-9'))
kitap araştırması
>>> 'kitap ara\xfet\xfdrmas\xfd'.decode('iso-8859-9').encode('utf-8')
'kitap ara\xc5\x9ft\xc4\xb1rmas\xc4\xb1'
Try:
hex_string.decode("cp1254").encode("utf-8")
(cp1254 and iso-8859-9 are Turkish code pages; the former is the usual name on Windows platforms, but in Python both work equally well.)
First you need to decode it from the encoded bytes you have. That appears to be ISO-8859-9 (latin-5), or, if you are using Windows, probably code page 1254, which is based on latin-5.
>>> 'kitap ara\xfet\xfdrmas\xfd'.decode('cp1254')
u'kitap ara\u015ft\u0131rmas\u0131' # u'kitap araştırması'
If you are using Windows, then depending on where you are getting those bytes, it might be more appropriate to decode them as mbcs, which translates to ‘whichever code page the local system is using’. If the string is just sitting in a .py file, you would be better off just writing u'kitap araştırması' in the source and setting a -*- coding declaration to direct Python to decode it. See PEP 263.
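As a sketch, a Python 2 source file using such a declaration might look like this:
# -*- coding: utf-8 -*-
# The declaration above tells Python the source bytes below are UTF-8,
# so this literal decodes to the intended unicode string.
title = u'kitap araştırması'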
As to how to encode unicode strings to UTF-8 for the database, well, if you want to you can do it manually:
>>> u'kitap ara\u015ft\u0131rmas\u0131'.encode('utf-8')
'kitap ara\xc5\x9ft\xc4\xb1rmas\xc4\xb1'
but a good data access layer is likely to do that automatically for you, if you've got the COLLATION of the tables the data is going into right.
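For example, with the MySQLdb driver the connection can handle the encoding for you (a sketch; the host, credentials, and the books table are placeholders):
import MySQLdb

# charset='utf8' makes the driver talk UTF-8 to the server;
# use_unicode=True returns unicode objects from queries
conn = MySQLdb.connect(host='localhost', user='user', passwd='secret',
                       db='mydb', charset='utf8', use_unicode=True)
cur = conn.cursor()
cur.execute("INSERT INTO books (title) VALUES (%s)",
            (u'kitap ara\u015ft\u0131rmas\u0131',))
conn.commit()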
The String literals section of the Python documentation explains how to use UTF-8 strings in Python source.