I'm using Microsoft's free translation service to translate some Hindi characters to English. They don't provide an API for Python, but I borrowed code from: tinyurl.com/dxh6thr
I'm trying to use the 'Detect' method as described here: tinyurl.com/bxkt3we
The 'hindi.txt' file is saved in unicode charset.
>>> hindi_string = open('hindi.txt').read()
>>> data = { 'text' : hindi_string }
>>> token = msmt.get_access_token(MY_USERID, MY_TOKEN)
>>> request = urllib2.Request('http://api.microsofttranslator.com/v2/Http.svc/Detect?'+urllib.urlencode(data))
>>> request.add_header('Authorization', 'Bearer '+token)
>>> response = urllib2.urlopen(request)
>>> print response.read()
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">en</string>
>>>
The response shows that the Translator detected 'en' instead of 'hi' (for Hindi). When I check the type of the string, it shows as str:
>>> type(hindi_string)
<type 'str'>
For reference, here is content of 'hindi.txt':
हाय, कैसे आप आज कर रहे हैं। मैं अच्छी तरह से, आपको धन्यवाद कर रहा हूँ।
I'm not sure if using string.encode or string.decode applies here. If it does, what do I need to encode/decode from/to? What is the best method to pass a Unicode string as a urllib.urlencode argument? How can I ensure that the actual Hindi characters are passed as the argument?
Thank you.
** Additional Information **
I tried using codecs.open() as suggested, but I get the following error:
>>> hindi_new = codecs.open('hindi.txt', encoding='utf-8').read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\codecs.py", line 671, in read
return self.reader.read(size)
File "C:\Python27\lib\codecs.py", line 477, in read
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte
Here is the repr(hindi_string) output:
>>> repr(hindi_string)
"'\\xff\\xfe9\\t>\\t/\\t,\\x00 \\x00\\x15\\tH\\t8\\tG\\t \\x00\\x06\\t*\\t \\x00
\\x06\\t\\x1c\\t \\x00\\x15\\t0\\t \\x000\\t9\\tG\\t \\x009\\tH\\t\\x02\\td\\t \
\x00.\\tH\\t\\x02\\t \\x00\\x05\\t'"
Your file is utf-16, so you need to decode the content before sending it:
hindi_string = open('hindi.txt').read().decode('utf-16')
data = { 'text' : hindi_string.encode('utf-8') }
...
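A minimal sketch of why this works (Python 3 syntax shown here; in the thread's Python 2, urlencode lives in urllib and the same decode/encode calls exist). A UTF-16 file starts with a byte-order mark and its raw bytes are not valid UTF-8, so the service sees garbage unless the bytes are decoded first:

```python
from urllib.parse import urlencode  # urllib.urlencode in Python 2

# Simulate the raw bytes of a UTF-16 file such as hindi.txt ("हाय").
raw = u'\u0939\u093e\u092f'.encode('utf-16')
assert raw[:2] in (b'\xff\xfe', b'\xfe\xff')  # the BOM seen in the repr()

text = raw.decode('utf-16')       # a proper unicode string
payload = text.encode('utf-8')    # UTF-8 bytes for the query string
print(urlencode({'text': payload}))
```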
You could try opening the file with codecs.open and decoding it as UTF-8:
import codecs
with codecs.open('hindi.txt', encoding='utf-8') as f:
    hindi_text = f.read()
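Given the \xff\xfe BOM in the repr() output above, the file is actually UTF-16, so codecs.open works once it is given the matching codec. A self-contained sketch (the temp file stands in for hindi.txt):

```python
import codecs
import os
import tempfile

# Hypothetical stand-in for hindi.txt: write UTF-16 bytes to a temp file.
path = os.path.join(tempfile.mkdtemp(), 'hindi.txt')
with open(path, 'wb') as f:
    f.write(u'\u0939\u093e\u092f'.encode('utf-16'))

# Read it back with the codec that matches the file, not 'utf-8'.
with codecs.open(path, encoding='utf-16') as f:
    text = f.read()
print(repr(text))
```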
I'm using Docraptor to convert HTML to PDF. Docraptor does the conversion and sends me a response, and I'm having some trouble understanding how I could convert this response to a PDF file.
Here's what the response looks like :
b'%PDF-1.4\n%\xe2\xe3\xcf\xd3\n\n1 0 obj\n<</Type /Catalog\n/Pages 2 0 R>>\nendobj\n\n2 0 obj\n<</Type /Pages\n/Kids [3 0 R]\n/Count 1>>
\nendobj\n\n4 0 obj\n<</Length 5 0 R\n/Filter /FlateDecode>>\nstream\nx\x9cs\n\xe125\xd13\x00\x02\x05s#3=sSC#\x85\x90\x14.}7C\x05C#\x88x
H\x1a\x97\x86GjNN\xbeB\xb8\xa6BH\x16\x97\x89\x81\x9e\x81\x91\xa9\x89\x82\x0
... ... ...
... ... lots of code ... ...
... ... ...
<</Info 10 0 R\n/Size 11\n/Root 1 0 R\n/ID [<5FCD137048BC4E60BF5E3D2E3741CD4B> <5FCD137048BC4E60BF5E3D2E3741CD4B>]>>\nstartxref\n12234\n
%%EOF\n'
I was thinking of doing something like this:
# docraptor response
response = doc_api.create_doc({ "type": "pdf", "document_content": "<html><body>Hello World!</body></html>" })
with open("test.pdf", "wb") as f:
    f.write(response)
file = open(f.name, 'r').read()
Error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 195: character maps to <undefined>
How can I achieve this?
Use binary mode when opening the file for reading:
with open('test.pdf', 'rb') as f:
    doc = f.read()
Without the binary flag, Python 3 assumes the file contains text in the locale's preferred encoding, and it will attempt to decode the incoming data into a unicode string:
>>> import locale
>>> locale.getpreferredencoding(False)
'UTF-8'
On my system the preferred encoding is UTF-8, so in text mode Python will try to decode the file's bytes into a str object. That fails if the data in the file is not UTF-8 encoded, and PDF data is binary, not UTF-8 text.
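A self-contained sketch of the difference (the bytes are the start of a typical PDF header; the temp file name is made up):

```python
import os
import tempfile

# Bytes like the start of a real PDF: the second line is not valid UTF-8.
data = b'%PDF-1.4\n%\xe2\xe3\xcf\xd3\n'
path = os.path.join(tempfile.mkdtemp(), 'test.pdf')
with open(path, 'wb') as f:
    f.write(data)

# Binary mode: bytes in, bytes out, no decoding step at all.
with open(path, 'rb') as f:
    assert f.read() == data

# Text mode has to decode, and these bytes cannot be decoded as UTF-8.
try:
    with open(path, encoding='utf-8') as f:
        f.read()
except UnicodeDecodeError as e:
    print('text mode failed:', e)
```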
I am making an API call and the response contains unicode characters. Writing this response to a file throws the following error:
'ascii' codec can't encode character u'\u2019' in position 22462
I've tried all combinations of decode and encode ('utf-8').
Here is the code:
url = "https://%s?start_time=%s&include=metric_sets,users,organizations,groups" % (api_path, start_epoch)
while url != None and url != "null":
    json_filename = "%s/%s.json" % (inbound_folder, start_epoch)
    try:
        resp = requests.get(url,
                            auth=(api_user, api_pwd),
                            headers={'Content-Type': 'application/json'})
    except requests.exceptions.RequestException as e:
        print "|********************************************************|"
        print e
        return "Error: {}".format(e)
        print "|********************************************************|"
        sys.exit(1)
    try:
        total_records_extracted = total_records_extracted + rec_cnt
        jsonfh = open(json_filename, 'w')
        inter = resp.text
        string_e = inter#.decode('utf-8')
        final = string_e.replace('\\n', ' ').replace('\\t', ' ').replace('\\r', ' ')#.replace('\\ ',' ')
        encoded_data = final.encode('utf-8')
        cleaned_data = json.loads(encoded_data)
        json.dump(cleaned_data, jsonfh, indent=None)
        jsonfh.close()
    except ValueError as e:
        tb = traceback.format_exc()
        print tb
        print "|********************************************************|"
        print e
        print "|********************************************************|"
        sys.exit(1)
A lot of developers have faced this issue. Many answers suggest using .decode('utf-8') or putting # _*_ coding:utf-8 _*_ at the top of the Python file.
Neither is helping.
Can someone help me with this issue?
Here is the trace:
Traceback (most recent call last):
File "/Users/SM/PycharmProjects/zendesk/zendesk_tickets_api.py", line 102, in main
cleaned_data = json.loads(encoded_data)
File "/Users/SM/anaconda/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/Users/SM/anaconda/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/SM/anaconda/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Invalid \escape: line 1 column 2826494 (char 2826493)
|********************************************************|
Invalid \escape: line 1 column 2826494 (char 2826493)
inter = resp.text
string_e = inter#.decode('utf-8')
encoded_data = final.encode('utf-8')
The text property is a Unicode character string, decoded from the original bytes using whatever encoding the Requests module guessed might be in use from the HTTP headers.
You probably don't want that; JSON has its own ideas about what the encoding should be, so you should let the JSON decoder do that by taking the raw response bytes from resp.content and passing them straight to json.loads.
What's more, Requests has a shortcut method to do the same: resp.json().
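The bytes-in approach can be sketched without a live HTTP call (the body variable stands in for resp.content; json.loads accepts raw bytes in Python 2 and, since 3.6, in Python 3 as well, sniffing the encoding itself):

```python
import json

# Stand-in for resp.content: UTF-8 encoded JSON bytes containing the
# right single quote (U+2019) from the original error message.
body = u'{"note": "client\u2019s ticket"}'.encode('utf-8')

data = json.loads(body)  # the JSON decoder handles the decoding itself
print(data['note'])
```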
final = string_e.replace('\\n', ' ').replace('\\t', ' ').replace('\\r', ' ')#.replace('\\ ',' ')
Trying to do this on the JSON-string-literal formatted input is a bad idea: you will miss some valid escapes, and incorrectly unescape others. Your actual error has nothing to do with Unicode at all; the replacement is mangling the input. For example, consider the input JSON:
{"message": "Open the file C:\\newfolder\\text.txt"}
after replacement:
{"message": "Open the file C:\ ewfolder\ ext.txt"}
which is clearly not valid JSON.
Instead of trying to operate on the JSON-encoded string, you should let json decode the input and then filter any strings you have in the structured output. This may involve using a recursive function to walk down into each level of the data looking for strings to filter, e.g.:
def clean(data):
    if isinstance(data, basestring):
        return data.replace('\n', ' ').replace('\t', ' ').replace('\r', ' ')
    if isinstance(data, list):
        return [clean(item) for item in data]
    if isinstance(data, dict):
        return {clean(key): clean(value) for (key, value) in data.items()}
    return data

cleaned_data = clean(resp.json())
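A quick check of the recursive filter, in Python 3 spelling (str replaces Python 2's basestring) and with made-up sample data in place of the real response:

```python
def clean(data):
    # Replace literal whitespace control characters inside decoded strings.
    if isinstance(data, str):
        return data.replace('\n', ' ').replace('\t', ' ').replace('\r', ' ')
    # Recurse into containers so nested strings are filtered too.
    if isinstance(data, list):
        return [clean(item) for item in data]
    if isinstance(data, dict):
        return {clean(key): clean(value) for (key, value) in data.items()}
    return data

sample = {'msg': 'line1\nline2', 'items': ['a\tb', 3]}
print(clean(sample))  # {'msg': 'line1 line2', 'items': ['a b', 3]}
```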
My code works perfectly for some PDFs, but others raise an error:
Traceback (most recent call last):
File "con.py", line 24, in <module>
print getPDFContent("abc.pdf")
File "con.py", line 17, in getPDFContent
f.write(a)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u02dd' in position 64: ordinal not in range(128)
My code is
import pyPdf

def getPDFContent(path):
    content = ""
    pdf = pyPdf.PdfFileReader(file(path, "rb"))
    for i in range(0, pdf.getNumPages()):
        f = open("xxx.txt", 'a')
        content = pdf.getPage(i).extractText() + "\n"
        import string
        c = content.split()
        for a in c:
            f.write(" ")
            f.write(a)
        f.write('\n')
        f.close()
    return content

print getPDFContent("abc.pdf")
Your problem is that when you call f.write() with a string, it is trying to encode it using the ascii codec. Your pdf contains characters that can not be represented by the ascii codec. Try explicitly encoding your str, e.g.
a = a.encode('utf-8')
f.write(a)
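The same fix in a self-contained form: io.open works identically in Python 2 and 3 and sidesteps manual .encode calls by giving the file an explicit codec (the temp path stands in for xxx.txt):

```python
import io
import os
import tempfile

content = u'\u02dd abc'  # the double acute accent from the traceback
path = os.path.join(tempfile.mkdtemp(), 'xxx.txt')

# Open with an explicit encoding instead of relying on the ascii default.
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(content)

with io.open(path, encoding='utf-8') as f:
    assert f.read() == content
```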
Try
import sys
print getPDFContent("abc.pdf").encode(sys.getfilesystemencoding())
urlparse.parse_qs is useful for parsing url parameters, and it works fine with a simple ASCII url represented by str. So I can parse a query and then construct the same path using urllib.urlencode from the parsed data:
>>> import urlparse
>>> import urllib
>>>
>>> path = '/?key=value' #path is str
>>> query = urlparse.urlparse(path).query
>>> query
'key=value'
>>> query_dict = urlparse.parse_qs(query)
>>> query_dict
{'key': ['value']}
>>> '/?' + urllib.urlencode(query_dict, doseq=True)
'/?key=value' # <-- path is the same here
It also works fine when the url contains a percent-encoded non-ASCII param:
>>> value = urllib.quote(u'значение'.encode('utf8'))
>>> value
'%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5'
>>> path = '/?key=%s' % value
>>> path
'/?key=%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5'
>>> query = urlparse.urlparse(path).query
>>> query
'key=%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5'
>>> query_dict = urlparse.parse_qs(query)
>>> query_dict
{'key': ['\xd0\xb7\xd0\xbd\xd0\xb0\xd1\x87\xd0\xb5\xd0\xbd\xd0\xb8\xd0\xb5']}
>>> '/?' + urllib.urlencode(query_dict, doseq=True)
'/?key=%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5' # <-- path is the same here
But when using Django, I get the url using request.get_full_path(), and it returns the path as a unicode string:
>>> path = request.get_full_path()
>>> path
u'/?key=%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5' # path is unicode
Look what will happen now:
>>> query = urlparse.urlparse(path).query
>>> query
u'key=%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5'
>>> query_dict = urlparse.parse_qs(query)
>>> query_dict
{u'key': [u'\xd0\xb7\xd0\xbd\xd0\xb0\xd1\x87\xd0\xb5\xd0\xbd\xd0\xb8\xd0\xb5']}
>>>
query_dict contains a unicode string that contains bytes, not unicode code points!
And of course I get a UnicodeEncodeError when trying to urlencode that string:
>>> urllib.urlencode(query_dict, doseq=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\Lib\urllib.py", line 1337, in urlencode
l.append(k + '=' + quote_plus(str(elt)))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-15: ordinal not in range(128)
Currently I have a solution:
# just convert path, returned by request.get_full_path(), to `str` explicitly:
path = str(request.get_full_path())
So the questions are:
why does parse_qs return such a strange string (unicode that contains bytes)?
is it safe to convert the url to str?
Encode back to bytes before passing it to .parse_qs(), using ASCII:
query_dict = urlparse.parse_qs(query.encode('ASCII'))
This does the same thing as str() but with an explicit encoding. Yes, this is safe, the URL encoding uses ASCII codepoints only.
parse_qs was handed a Unicode value, so it returned Unicode values too; it is not its job to decode bytes.
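The safety claim is easy to check: after percent-encoding, every character in the query is a plain ASCII codepoint, so converting the url to str can never fail. A sketch in Python 3 spelling (quote lives in urllib in Python 2):

```python
from urllib.parse import quote  # urllib.quote in Python 2

# The same non-ASCII value used in the question, percent-encoded.
value = quote(u'значение'.encode('utf-8'))
print(value)

# Percent-encoding is pure ASCII, so an ascii round trip is always safe.
assert all(ord(c) < 128 for c in value)
value_bytes = value.encode('ascii')
```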
I'm trying to make an HTML entities encoder/decoder in Python that behaves similarly to PHP's htmlentities and html_entity_decode. It works normally as a standalone script:
My input:
Lorem &Aacute;&Eacute;&Iacute;&Oacute;&Uacute;&Ccedil;&Atilde;O&Aacute;&aacute;&eacute;&iacute;&oacute;&uacute;&ccedil;&atilde;o ##$%*()[]&lt;&gt;+ 0123456789
python decode.py
Output:
Lorem ÁÉÍÓÚÇÃOÁáéíóúção ##$%*()[]<>+ 0123456789
Now if I run it as an Autokey script I get this error:
Script name: 'html_entity_decode'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/autokey/service.py", line 454, in execute
exec script.code in scope
File "<string>", line 40, in <module>
File "/usr/local/lib/python2.7/dist-packages/autokey/scripting.py", line 42, in send_keys
self.mediator.send_string(keyString.decode("utf-8"))
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 6-12: ordinal not in range(128)
What am I doing wrong? Here's the script:
import htmlentitydefs
import re

entity_re = re.compile(r'&(%s|#(\d{1,5}|[xX]([\da-fA-F]{1,4})));' % '|'.join(
    htmlentitydefs.name2codepoint.keys()))

def html_entity_decode(s, encoding='utf-8'):
    if not isinstance(s, basestring):
        raise TypeError('argument 1: expected string, %s found'
                        % s.__class__.__name__)

    def entity_2_unichr(matchobj):
        g1, g2, g3 = matchobj.groups()
        if g3 is not None:
            codepoint = int(g3, 16)
        elif g2 is not None:
            codepoint = int(g2)
        else:
            codepoint = htmlentitydefs.name2codepoint[g1]
        return unichr(codepoint)

    if isinstance(s, unicode):
        entity_2_chr = entity_2_unichr
    else:
        entity_2_chr = lambda o: entity_2_unichr(o).encode(encoding,
                                                           'xmlcharrefreplace')

    def silent_entity_replace(matchobj):
        try:
            return entity_2_chr(matchobj)
        except ValueError:
            return matchobj.group(0)

    return entity_re.sub(silent_entity_replace, s)

text = clipboard.get_selection()
text = html_entity_decode(text)
keyboard.send_keys("%s" % text)
I found it on a Gist https://gist.github.com/607454, I'm not the author.
Looking at the backtrace, the likely problem is that you are passing a unicode string to keyboard.send_keys, which expects a UTF-8 encoded bytestring. autokey then tries to decode your string, which fails because the input is unicode instead of utf-8. This looks like a bug in autokey: it should not try to decode strings unless they really are plain byte strings.
If this guess is correct, you should be able to work around this by making sure you pass a UTF-8 encoded bytestring to send_keys. Try something like this:
text = clipboard.get_selection()
if isinstance(text, unicode):
    text = text.encode('utf-8')
text = html_entity_decode(text)
assert isinstance(text, str)
keyboard.send_keys(text)
The assert is not needed but is a handy sanity check to make sure html_entity_decode does the right thing.
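The round trip that autokey performs internally can be sketched in Python 3 terms, with bytes standing in for Python 2's str (the sample text is made up):

```python
# A unicode string, as clipboard.get_selection() returns in this thread.
text = u'Lorem \xc1\xc9'

# Encoding yields the UTF-8 bytestring that send_keys can safely decode.
data = text.encode('utf-8')
assert isinstance(data, bytes)
assert data.decode('utf-8') == text  # the decode autokey applies internally
```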
The problem is that the output of:
clipboard.get_selection()
is a unicode string.
To solve the problem, replace:
text = clipboard.get_selection()
with:
text = clipboard.get_selection().encode("utf8")