URL encode & decode - python

Is there some plug-in or library to encode and decode URLs?
For example, I'm doing this:
name='Jazmín Fernández'
self.redirect('/page?userName='+name)
But obviously there are some characters that need to be encoded (í, á, and the space between n and F), so this gives me an error.
Thanks!!!

You should convert your unicode string to UTF-8 before passing it to urllib.urlencode.
>>> import urllib
>>> name = u'Jazmín Fernández'
>>> urllib.urlencode({'name': name.encode('utf8')})
'name=Jazm%C3%ADn+Fern%C3%A1ndez'
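If you only need to escape a single value rather than build a whole query string, urllib.quote performs the same percent-encoding; note that it escapes spaces as %20, whereas urlencode uses +. A quick sketch, assuming Python 2 as in the answer above:
>>> urllib.quote(name.encode('utf8'))
'Jazm%C3%ADn%20Fern%C3%A1ndez'
Either form can then be appended to the redirect URL from the question.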

Related

Python convert utf-8 bytes to string

I am having difficulties converting these bytes to a string:
x = b'<strong>\xc5\xb7\xc3\xc0\xd0\xd4\xb8\xd0\xd0\xb1\xc1\xec\xb5\xa5\xbc\xe7\xb3\xa4\xd0\xe4\xb2\xbb\xb9\xe6\xd4\xf2\xc1\xac\xd2\xc2\xc8\xb9\xa3\xac\xb4\xf2\xd4\xec\xd1\xe7\xbb\xe1\xa1\xa2\xca\xb1\xc9\xd0\xb8\xd0\xca\xae\xd7\xe3\xa3\xac\xd5\xc3\xcf\xd4\xc5\xae\xd0\xd4\xf7\xc8\xc1\xa6\xa3\xac\xb4\xf3\xc1\xbf\xcf\xd6\xbb\xf5\xa3\xac\xbb\xb6\xd3\xad\xd0\xc2\xc0\xcf\xbf\xcd\xbb\xa7\xc4\xc3\xd1\xf9\xb2\xc9\xb9\xba\xa3\xa1</strong>'
If I decode via unicode-escape I get weird characters like:
'<strong>Å·ÃÀÐÔ¸ÐбÁìµ¥¼ç³¤Ðä²»¹æÔòÁ¬ÒÂȹ£¬´òÔìÑç»á¡¢Ê±ÉиÐÊ®×㣬ÕÃÏÔÅ®ÐÔ÷ÈÁ¦£¬´óÁ¿ÏÖ»õ£¬»¶Ó\xadÐÂÀÏ¿Í»§ÄÃÑù²É¹º£¡</strong>'
instead of Chinese characters like 欧美性感斜领单肩长袖不规则连衣裙
You seem to be using the wrong encoding. The right encoding seems to be 'GB2312'.
>>> x.decode('GB2312')
'<strong>欧美性感斜领单肩长袖不规则连衣裙... more symbols</strong>'
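If some of the bytes failed to decode as GB2312, the GBK codec would be a reasonable fallback to try, since GBK is a superset of GB2312 and decodes valid GB2312 sequences identically (a hedged suggestion, not part of the original answer):
>>> x.decode('gbk')  # same result for these bytes; GBK also covers characters outside GB2312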

Python unicode accent a (à) hex

I have a string from bs4 that is
s = "vinili-disponibili/311-canzoniere-del-lazio-lassa-st\u00c3\u00a0-la-me-creatura.html"
\u00c3\u00a0 should be an accented a (à). I have gotten it to show up in the console partly correctly as
vinili-disponibili/311-canzoniere-del-lazio-lassa-stà-la-me-creatura.html
with
str2 = u'%s' % s
print(str2.encode('utf-8').decode('unicode-escape'))
but it's decoding c3 and a0 separately, so I get a tilde-A instead of an accented a. I know that c3 a0 is the UTF-8 hex for an accented a. I have no idea what's going on; I got here using Google and by combining the answers I found. This entire character-encoding thing seems like a big mess to me.
The way it is supposed to be is
311-canzoniere-del-lazio-lassa-stà-la-me-creatura.html
EDIT:
Andrey's method worked when printing it out, but trying to use urlopen with the string I get UnicodeEncodeError: 'ascii' codec can't encode character '\xe0' in position 60: ordinal not in range(128)
After using unquote(str,":/") it gives UnicodeEncodeError: 'ascii' codec can't encode characters in position 56-57: ordinal not in range(128).
Transform the string back into bytes using .encode('latin-1'), then decode the unicode-escapes \u, transform everything into bytes again using the "wrong" 'latin-1' encoding, and finally, decode "properly" as 'utf-8':
s = "vinili-disponibili/311-canzoniere-del-lazio-lassa-st\u00c3\u00a0-la-me-creatura.html"
s.encode('latin-1').decode('raw_unicode_escape').encode('latin-1').decode('utf-8')
gives:
'vinili-disponibili/311-canzoniere-del-lazio-lassa-stà-la-me-creatura.html'
It works for the same reason as explained in this answer.
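As for the UnicodeEncodeError from urlopen mentioned in the edit: the repaired URL still contains a non-ASCII à, and urlopen only accepts ASCII URLs, so percent-encode it first. A minimal sketch, assuming Python 3 and a made-up base URL (the real host isn't given in the question):
from urllib.parse import quote
from urllib.request import urlopen

s = "vinili-disponibili/311-canzoniere-del-lazio-lassa-st\u00c3\u00a0-la-me-creatura.html"
path = s.encode('latin-1').decode('raw_unicode_escape').encode('latin-1').decode('utf-8')
url = 'http://example.com/' + quote(path)  # quote() percent-encodes à as %C3%A0 and leaves '/' alone
response = urlopen(url)                    # no more 'ascii' codec errors from the URL itself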
Assuming Python 2:
This is a byte string with Unicode escapes. The Unicode escapes were incorrectly generated for some UTF-8-encoded data:
>>> s = "vinili-disponibili/311-canzoniere-del-lazio-lassa-st\u00c3\u00a0-la-me-creatura.html"
>>> s.decode('unicode-escape')
u'vinili-disponibili/311-canzoniere-del-lazio-lassa-st\xc3\xa0-la-me-creatura.html'
Now it is a Unicode string, but it appears mis-decoded since the code points resemble UTF-8 bytes. It turns out that the latin1 (also iso-8859-1) codec maps the first 256 code points directly to bytes 0-255, so use this trick to convert back to a byte string:
>>> s.decode('unicode-escape').encode('latin1')
'vinili-disponibili/311-canzoniere-del-lazio-lassa-st\xc3\xa0-la-me-creatura.html'
Now it can be decoded correctly as UTF-8:
>>> s.decode('unicode-escape').encode('latin1').decode('utf8')
u'vinili-disponibili/311-canzoniere-del-lazio-lassa-st\xe0-la-me-creatura.html'
It is a Unicode string, so Python displays its repr() value, which shows code points above U+007F as escape codes. print it to see the actual value, assuming your terminal is correctly configured with an encoding that supports the characters being printed:
>>> print(s.decode('unicode-escape').encode('latin1').decode('utf8'))
vinili-disponibili/311-canzoniere-del-lazio-lassa-stà-la-me-creatura.html
Ideally, fix the problem that generated this string incorrectly in the first place instead of working around the mess.

Python3 - reinterpret string as bytes [duplicate]

Is there a way to convert a \x escaped string like "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80" into readable form: "語言"?
>>> a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
>>> print(a)
\xe8\xaa\x9e\xe8\xa8\x80
I am aware that there is a similar question here, but it seems the solution there is only for Latin characters. How can I convert this form of string into readable CJK characters?
Decode it first using 'unicode-escape', then as 'utf8':
a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
decoded = a.encode('latin1').decode('unicode_escape').encode('latin1').decode('utf8')
print(decoded)
# 語言
Note that since we can only decode bytes objects, we need to transparently encode it in between, using 'latin1'.
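To see why the intermediate 'latin1' steps are needed, it can help to inspect each stage of the chain (the same code as above, broken out, with the intermediate values noted in comments):
a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
step1 = a.encode('latin1')              # b'\\xe8\\xaa\\x9e\\xe8\\xa8\\x80' - bytes, backslashes still literal
step2 = step1.decode('unicode_escape')  # six code points U+00E8 U+00AA U+009E U+00E8 U+00A8 U+0080
step3 = step2.encode('latin1')          # b'\xe8\xaa\x9e\xe8\xa8\x80' - the real UTF-8 byte values
step4 = step3.decode('utf8')            # '語言'
print(step4)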
Starting with string a, which appears to follow Python's hex-escaping rules, you can decode it to a bytes object plus the length of the string decoded.
>>> a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
>>> import codecs
>>> codecs.escape_decode(a)
(b'\xe8\xaa\x9e\xe8\xa8\x80', 24)
You don't need the length here, so just take item 0. Now it's time for some guessing. Assuming that this string actually represents UTF-8-encoded data, you now have a bytes object that you can decode:
>>> codecs.escape_decode(a)[0].decode('utf-8')
'語言'
If the underlying encoding was different (say, a Windows CJK code page), you'd have to decode with its decoder.
Text like this could make a valid Python bytes literal. Assuming we don't have to worry about invalid input, we can simply construct a string that looks like the corresponding source code, and use ast.literal_eval to interpret it that way (this is safe, unlike using eval). Finally we decode the resulting bytes as UTF-8. Thus:
>>> import ast
>>> a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
>>> ast.literal_eval(f"b'{a}'")
b'\xe8\xaa\x9e\xe8\xa8\x80'
>>> ast.literal_eval(f"b'{a}'").decode('utf-8')
'語言'
Such a codec is missing from the stdlib. My all-escapes package registers a codec which can be used:
>>> a = "\\xe8\\xaa\\x9e\\xe8\\xa8\\x80"
>>> a.encode('all-escapes').decode()
'語言'

'ascii' codec can't encode character u'\xe9'

I have already tried all the previous answers and solutions.
I am trying to use these values, which gave me an encoding-related error.
ar = [u'http://dbpedia.org/resource/Anne_Hathaway', u'http://dbpedia.org/resource/Jodie_Bain', u'http://dbpedia.org/resource/Wendy_Divine', u'http://dbpedia.org/resource/Jos\xe9_El\xedas_Moreno', u'http://dbpedia.org/resource/Baaba_Maal']
So I tried,
d = [x.decode('utf-8') for x in ar]
which gives:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 31: ordinal not in range(128)
I tried out
d = [x.encode('utf-8') for x in ar]
which removes the error but changes the original content:
The original value was u'http://dbpedia.org/resource/Jos\xe9_El\xedas_Moreno', which was converted to 'http://dbpedia.org/resource/Jos\xc3\xa9_El\xc3\xadas_Moreno' when using encode.
What is the correct way to deal with this scenario?
Edit
The error comes when I feed these links into
req = urllib2.Request()
The second version of your string is the correct UTF-8 representation of your original unicode string. If you want a meaningful comparison, you have to use the same representation for both the stored string and the user-input string. The sane thing to do here is to always use Unicode strings internally (in your code), and make sure both your user inputs and stored strings are correctly decoded to unicode from their respective encodings at your system's boundaries (storage subsystem and user-input subsystem).
Also you seem to be a bit confused about unicode and encodings, so reading this and this might help.
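A minimal sketch of that "decode at the boundaries" idea (assuming Python 2, with made-up byte values standing in for the storage and user-input subsystems):
# bytes arriving at the boundaries (hypothetical stand-ins for storage / user input)
stored_bytes = 'Jos\xc3\xa9_El\xc3\xadas_Moreno'  # UTF-8 bytes from storage
typed_bytes = 'Jos\xc3\xa9_El\xc3\xadas_Moreno'   # UTF-8 bytes from the user

# decode to unicode as soon as the data enters the program
stored = stored_bytes.decode('utf-8')
typed = typed_bytes.decode('utf-8')

# all internal work (the comparison here) is done on unicode objects
if typed == stored:
    print u'match'

# encode back to bytes only when the data leaves the program
output = stored.encode('utf-8')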
Unicode strings in Python are "raw" (unencoded) Unicode, so make sure to .encode() and .decode() them as appropriate. Using UTF-8 is considered a best practice among many dev groups around the world.
To encode use the quote function from the urllib2 library:
from urllib2 import quote
escaped_string = quote(unicode_string.encode('utf-8'))
To decode, use unquote:
from urllib2 import unquote
src = "http://dbpedia.org/resource/Jos\xc3\xa9_El\xc3\xadas_Moreno"
unicode_string = unquote(src).decode('utf-8')
Also, if you're more interested in Unicode and UTF-8 work, check out the Unicode HOWTO.
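Applied to the list from the question, that would look something like this (a sketch, assuming Python 2 and the urllib2-based code in the question):
from urllib2 import quote, Request

ar = [u'http://dbpedia.org/resource/Anne_Hathaway',
      u'http://dbpedia.org/resource/Jos\xe9_El\xedas_Moreno']

# encode each unicode URL to UTF-8 bytes, then percent-encode anything non-ASCII;
# ':' and '/' are marked safe so the scheme and path separators survive
requests = [Request(quote(u.encode('utf-8'), safe=':/')) for u in ar]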
In your Unicode list, u'http://dbpedia.org/resource/Jos\xe9_El\xedas_Moreno' is an ASCII safe way to represent a Unicode string. When encoded in a form that supports the full Western European character set, such as UTF-8, it's: http://dbpedia.org/resource/José_Elías_Moreno
Your .encode("UTF-8") is correct and would have looked ok in a UTF-8 editor or browser. What you saw after the encode was an ASCII safe representation of UTF-8.
For example, your trouble characters were é and í:
é = U+00E9 in Unicode = C3 A9 in UTF-8
í = U+00ED in Unicode = C3 AD in UTF-8
In short, your .encode() method is correct and should be used for writing to files or to a browser.

Python: convert string from UTF-8 to Latin-1

I feel stuck here trying to change encodings with Python 2.5.
I have an XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program which uses this info doesn't like this encoding and I have to convert it to another code page. A real example is that I use the ghostscript python module to embed pdfmark data in a PDF file - the end result has wrong characters in Acrobat.
I've done numerous combinations of .encode() and .decode() between 'utf-8' and 'latin-1' and it drives me crazy that I can't output the correct result.
If I output the string to a file with .encode('utf-8') and then convert this file from UTF-8 to CP1252 (aka latin-1) with e.g. iconv.exe and embed the data, everything is fine.
Basically, can someone help me convert e.g. the character á, which is UTF-8 encoded as hex C3 A1, to latin-1 as hex E1?
Instead of .encode('utf-8'), use .encode('latin-1').
data="UTF-8 data"
udata=data.decode("utf-8")
data=udata.encode("latin-1","ignore")
Should do it.
Can you provide more details about what you are trying to do? In general, if you have a unicode string, you can use encode to convert it into a byte string with the appropriate encoding. E.g.:
>>> a = u"\u00E1"
>>> type(a)
<type 'unicode'>
>>> a.encode('utf-8')
'\xc3\xa1'
>>> a.encode('latin-1')
'\xe1'
If the previous answers do not solve your problem, check the source of the data that won't print/convert properly.
In my case, I was using json.load on data incorrectly read from a file without encoding="utf-8". Trying to de-/encode the resulting string to latin-1 just does not help...
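A minimal sketch of that fix (assuming Python 3 and a hypothetical data.json path):
import json

# opening without encoding="utf-8" uses the platform default encoding,
# which can silently mangle non-ASCII text (e.g. on Windows)
with open("data.json", encoding="utf-8") as f:
    data = json.load(f)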
