Django cannot read Arabic from uploaded .xls file - python

When uploading a .xls file containing Arabic text to my Django website, I get the following error when my code runs:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 655224: character maps to <undefined>
Here is my Python / Django code (shown in its original form without any decoding, since every variant I tried failed):
from io import TextIOWrapper
from bs4 import BeautifulSoup

# certain_file is the uploaded file
certain_file = TextIOWrapper(request.FILES['certain_file'].file, encoding=request.encoding)
with certain_file as f:
    soup = BeautifulSoup(f, "html.parser")
    # rest of code
My goal is to parse the uploaded file.
Many thanks in advance for the help!
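One possible workaround, sketched under the assumption that the upload is really an HTML-based .xls export (which the html.parser choice suggests): hand the raw bytes straight to BeautifulSoup and let it detect the character set itself, instead of wrapping the upload in a TextIOWrapper with a guessed codec.
from bs4 import BeautifulSoup

# Read the upload as raw bytes; nothing is decoded at this point,
# so no UnicodeDecodeError can be raised here.
raw = request.FILES['certain_file'].read()

# BeautifulSoup accepts bytes and sniffs the encoding on its own
# (via its "Unicode, Dammit" machinery), which copes with Arabic text.
soup = BeautifulSoup(raw, "html.parser")
# rest of code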

Related

Encoding issue when reading binary file in Django

I am uploading files in my Django application, saving them to disk, and retrieving them at a later moment. This works well for most files. However, every once in a while there is a file -- generally a PDF file -- that can't be retrieved properly. This is the error that comes up:
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 194: ordinal not in range(128)
I have seen other questions about this encoding issue, which all relate to how to deal with this when encoding plain text, but I am dealing with binary files. Here is my code:
Relevant upload code:
with open(path, "wb+") as destination:
    for chunk in attachment.chunks():
        destination.write(chunk)
Code to retrieve the file:
with open(file_path, "rb") as f:
    contents = f.read()
response = HttpResponse(contents, content_type="application")
response["Content-Disposition"] = "attachment; filename=\"" + name + "\""
I understand there is an encoding issue, but where exactly should I fix this?
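One hedged guess, purely for illustration: since the file is read and written in binary mode, the \xe9 may come from a non-ASCII filename rather than from the file body, and HTTP headers must be ASCII-safe. A sketch (assuming Python 3 and a made-up view signature) that streams the file with Django's FileResponse and percent-encodes the filename per RFC 5987:
from urllib.parse import quote
from django.http import FileResponse

def download(request, file_path, name):
    # Stream the stored bytes back without ever decoding them.
    response = FileResponse(open(file_path, "rb"),
                            content_type="application/octet-stream")
    # Percent-encode the filename so the header stays pure ASCII
    # even when `name` contains characters such as 'é'.
    response["Content-Disposition"] = "attachment; filename*=UTF-8''" + quote(name)
    return response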

UnicodeDecodeError (UTF-8) for JSON

BLUF: Why is the decode() method on a bytes object failing to decode ç?
I am receiving a UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position..... Upon tracking down the character, it turns out to be ç. So when I get to reading the response from the server:
conn = http.client.HTTPConnection(host = 'something.com')
conn.request('GET', url = '/some/json')
resp = conn.getresponse()
content = resp.read().decode() # throws error
I am unable to get the content. If I just do content = resp.read() it succeeds, and I can write the bytes to a file opened with wb, but then wherever the ç should be, the file contains the raw byte 0xE7. Even if I open the file in Notepad++ and set the encoding to UTF-8, the character only shows as its hex value.
Why am I not able to decode this UTF-8 character from an HTTPResponse? Am I not correctly writing it to file either?
When you have issues with encoding/decoding, you should take a look at the UTF-8 Encoding Debugging Chart.
If you look up the byte 0xE7 in the Windows-1252 column of that chart, the expected character is ç, which shows that the data is actually encoded as CP1252, not UTF-8.
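A minimal sketch along those lines, assuming the server really is sending CP1252: decode with that codec explicitly, or better, honour the charset declared in the Content-Type header when there is one (the cp1252 fallback below is an assumption).
import http.client

conn = http.client.HTTPConnection('something.com')
conn.request('GET', '/some/json')
resp = conn.getresponse()
raw = resp.read()

# Prefer the charset the server declares; fall back to cp1252 otherwise.
content_type = resp.getheader('Content-Type', '')
charset = 'cp1252'
if 'charset=' in content_type:
    charset = content_type.split('charset=')[-1].split(';')[0].strip()

content = raw.decode(charset)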

Wikipedia database dump - UTF8 charset

I am trying to open a Wikipedia database dump file in Python 3. I unpacked the file on Linux with the gzip command and tried to open it with this code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
with open('dump.sql', 'r') as file:
    for i in file:
        print(i)
But it gives me this error:
File "/usr/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 250-251: invalid continuation byte
The Linux command file -i dump.sql reports a utf-8 charset. Where could the problem be?
I found more info here, but this file is from 4.7.2017, so the following should not be the problem:
The dumps may contain non-Unicode (UTF8) characters in older text revisions due to lenient charset validation in the earlier MediaWiki releases (2004 or so). For instance, zhwiki-20130102-langlinks.sql.gz contained some copy and pasted iso8859-1 "ö" characters; as the langlinks table is generated on parsing, a null edit or forcelinkupdate to the page was enough to fix it.
So how can I process Wikipedia database dump files in Python?
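One hedged way forward, given that older revisions inside a dump can still carry stray non-UTF-8 bytes: open the file with an explicit error handler so a single bad byte does not abort the whole read. Whether 'replace' or 'backslashreplace' is the right choice depends on what the lines are used for.
# 'replace' substitutes U+FFFD for undecodable bytes;
# 'backslashreplace' keeps the raw byte values visible instead.
with open('dump.sql', 'r', encoding='utf-8', errors='replace') as dump:
    for line in dump:
        print(line, end='')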

Unicode Decoding error when trying to generate pdf with non-ascii characters

I am working with some software that is generating an error when trying to create a pdf from html that contains non-ascii characters. I have created a much simpler program to reproduce the problem and help me understand what is going on.
#!/usr/bin/python
#coding=utf8
from __future__ import unicode_literals
import pdfkit
from pyPdf import PdfFileWriter, PdfFileReader
f = open('test.html','r')
html = f.read()
print html
pdfkit.from_string(html, 'gen.pdf')
f.close()
Running this program results in:
<html>
<body>
<h1>ر</h1>
</body>
</html>
Traceback (most recent call last):
  File "./testerror.py", line 10, in <module>
    pdfkit.from_string(html, 'gen.pdf')
  File "/usr/local/lib/python2.7/dist-packages/pdfkit/api.py", line 72, in from_string
    return r.to_pdf(output_path)
  File "/usr/local/lib/python2.7/dist-packages/pdfkit/pdfkit.py", line 136, in to_pdf
    input = self.source.to_s().encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd8 in position 18: ordinal not in range(128)
I tried adding a replace statement to strip the problem character, but that also resulted in an error:
Traceback (most recent call last):
  File "./testerror.py", line 9, in <module>
    html = html.replace('ر','-')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd8 in position 18: ordinal not in range(128)
I am afraid I don't understand ascii / utf-8 encoding very well. If anyone could help me understand what is going on here, that would be great! I am not sure if this is a problem in the pdf library, or if this is a result of my ignorance of encodings :)
Reading the pdfkit source code, it appears that pdfkit.from_string expects its first argument to be unicode, not str, so it's up to you to properly decode html. To do so you must know what encoding your test.html file uses. Once you know that, you just have to proceed:
with open('test.html') as f:
    html = f.read().decode('<your-encoding-name-here>')
pdfkit.from_string(html, 'gen.pdf')
Note that str.decode(<encoding>) returns a unicode string and unicode.encode(<encoding>) returns a byte string; in other words, you decode from byte string to unicode and you encode from unicode to byte string.
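For illustration, a tiny Python 2 round trip showing which direction each operation goes; the two bytes below are the UTF-8 encoding of the ر from the traceback above.
# -*- coding: utf-8 -*-
raw = '\xd8\xb1'             # byte string (str): the UTF-8 bytes of U+0631 (ر)
text = raw.decode('utf-8')   # decode: byte string -> unicode, u'\u0631'
back = text.encode('utf-8')  # encode: unicode -> byte string, '\xd8\xb1' again
assert back == raw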
In your case you can also use codecs.open(path, mode, encoding) instead of open() + explicit decoding, i.e.:
import codecs
with codecs.open('test.html', encoding='<your-encoding-name-here>') as f:
    html = f.read()  # `codecs` will do the decoding behind the scenes
As a side note:
'r' (read; read binary for codecs, but that's an implementation detail) is the default mode when opening a file, so there is no need to specify it at all.
Using files as context managers (with open(path) as f: ...) makes sure the file will be properly closed. While CPython will usually close open files when the file objects get collected, this is an implementation detail and is not guaranteed by the language, so do not rely on it.
Also, the HTML itself should declare its charset:
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
The question seems to be Python 2 specific. However, I had a similar issue with Python 3 in a Flask + Apache/mod_wsgi environment on Ubuntu 22.04 when passing a non-ASCII string to the header or footer via the from_string options (e.g. document = pdfkit.from_string(html, False, options={"header-left": "é"})). I then got the error UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128). The problem was the missing locale setting for WSGIDaemonProcess in the Apache/VirtualHost configuration. I solved it by passing locale=C.UTF-8: WSGIDaemonProcess myapp user=myuser group=mygroup threads=5 locale=C.UTF-8 python-home=/path/to/myapp/venv.

Read all .txt file in one folder, then copy text (based from line) to another .txt file

I am trying to write Python code that reads all the .txt files in one directory and then copies matching lines from them to another .txt file:
import os
import glob

path = '/Users/Documents/*.txt'
f1 = open(os.path.expanduser('/Users/Documents/test.txt'), 'w')
for data in glob.glob(path):
    with open(data) as script:
        for line in script:
            script.readline()
            if 'Subject: ' in line:
                f1.write(line)
My code was working, but it only copies some of the text; for the rest of the files I get an error message like:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1658: ordinal not in range(128)
How can I fix this? Anyone?
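A hedged sketch of one way to make the loop robust, assuming Python 3: open every file with an explicit encoding and an error handler so a stray non-ASCII byte does not abort the run. The utf-8 choice is an assumption; adjust it to whatever encoding the files really use.
import glob
import os

out_path = os.path.expanduser('/Users/Documents/test.txt')
with open(out_path, 'w', encoding='utf-8') as out:
    for data in glob.glob('/Users/Documents/*.txt'):
        # errors='replace' turns undecodable bytes into U+FFFD instead of raising.
        with open(data, encoding='utf-8', errors='replace') as script:
            for line in script:
                if 'Subject: ' in line:
                    out.write(line)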
