How to remove extended ASCII using Python?

In trying to fix up a PML (Palm Markup Language) file, it appears that my test file has non-ASCII characters, which is causing MakeBook to complain. The solution would be to strip out all the non-ASCII characters in the PML.
So, in attempting to fix this in Python, I have:
import unicodedata, fileinput
for line in fileinput.input():
    print unicodedata.normalize('NFKD', line).encode('ascii', 'ignore')
However, this results in an error saying that line must be "unicode, not str". Here's a file fragment:
\B1a\B \tintense, disordered and often destructive rage†.†.†.\t
Not quite sure how to properly pass line in to be processed at this point.

Try print line.decode('iso-8859-1').encode('ascii', 'ignore') -- that should be much closer to what you want.
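In the context of the original loop, that suggestion would look something like this (a sketch, assuming the PML file is Latin-1 encoded; adjust the codec if it is not):
import fileinput
for line in fileinput.input():
    # Decode the raw bytes to unicode, then drop anything outside ASCII.
    # The trailing comma avoids double newlines, since line already ends with one.
    print line.decode('iso-8859-1').encode('ascii', 'ignore'),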

You would like to treat line as ASCII-encoded data, so the answer is to decode it to text using the ascii codec:
line.decode('ascii')
This will raise errors for data that is not in fact ASCII-encoded. This is how to ignore those errors:
line.decode('ascii', 'ignore')
This gives you text, in the form of a unicode instance. If you would rather work with (ASCII-encoded) data than text, you may re-encode it to get back a str or bytes instance (depending on your version of Python):
line.decode('ascii', 'ignore').encode('ascii')
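A quick interactive illustration (Python 2; the byte values here are made up for the example):
>>> line = 'caf\xe9\x86'
>>> line.decode('ascii', 'ignore')                  # text (a unicode instance)
u'caf'
>>> line.decode('ascii', 'ignore').encode('ascii')  # back to bytes (str)
'caf'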

To drop non-ASCII characters, use line.decode(your_file_encoding).encode('ascii', 'ignore'). But you would probably be better off using PML escape sequences for them:
import re
def escape_unicode(m):
    return '\\U%04x' % ord(m.group())
non_ascii = re.compile(u'[\x80-\uFFFF]', re.U)
line = u'\\B1a\\B \\tintense, disordered and often destructive rage\u2020.\u2020.\u2020.\\t'
print non_ascii.sub(escape_unicode, line)
This outputs \B1a\B \tintense, disordered and often destructive rage\U2020.\U2020.\U2020.\t.
Dropping non-ASCII and control characters with regular expression is easy too (this can be safely used after escaping):
regexp = re.compile('[^\x09\x0A\x0D\x20-\x7F]')
regexp.sub('', line)
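For instance, applied to the tail of the fragment from the question (a quick sketch):
>>> print regexp.sub('', u'rage\u2020.\u2020.\u2020.')
rage...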

When reading from a file in Python you're getting byte strings, aka "str" in Python 2.x and earlier. You need to convert these to the "unicode" type using the decode method. eg:
line = line.decode('latin1')
Replace 'latin1' with the correct encoding.

Related

TypeError: a bytes-like object is required, not 'str' when writing to a file in Python3 [duplicate]

I've very recently migrated to Python 3.5.
This code was working properly in Python 2.7:
with open(fname, 'rb') as f:
    lines = [x.strip() for x in f.readlines()]

for line in lines:
    tmp = line.strip().lower()
    if 'some-pattern' in tmp: continue
    # ... code
But in 3.5, on the if 'some-pattern' in tmp: continue line, I get an error which says:
TypeError: a bytes-like object is required, not 'str'
I was unable to fix the problem using .decode() on either side of the in, nor could I fix it using
if tmp.find('some-pattern') != -1: continue
What is wrong, and how do I fix it?
You opened the file in binary mode:
with open(fname, 'rb') as f:
This means that all data read from the file is returned as bytes objects, not str. You cannot then use a string in a containment test:
if 'some-pattern' in tmp: continue
You'd have to use a bytes object to test against tmp instead:
if b'some-pattern' in tmp: continue
or open the file as a text file instead by replacing the 'rb' mode with 'r'.
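The text-mode variant might look like this (a sketch; it assumes the file is UTF-8, so adjust the encoding if needed):
with open(fname, 'r', encoding='utf-8') as f:
    lines = [x.strip() for x in f.readlines()]
for line in lines:
    tmp = line.lower()
    if 'some-pattern' in tmp:
        continue
    # ... code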
You can encode your string by using .encode()
Example:
'Hello World'.encode()
As the error describes, in order to write a string to a file you need to encode it to a byte-like object first, and encode() is encoding it to a byte-string.
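For example, when writing to a file that was opened in binary mode (a sketch; 'out.bin' is a made-up name):
with open('out.bin', 'wb') as f:
    f.write('Hello World\n'.encode())  # str -> bytes, UTF-8 by default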
As has already been mentioned, you are reading the file in binary mode and then creating a list of bytes objects. In the following for loop you are comparing a string to bytes, and that is where the code fails.
Decoding the bytes while adding them to the list should work. The changed code should look as follows:
with open(fname, 'rb') as f:
    lines = [x.decode('utf8').strip() for x in f.readlines()]
The bytes type only became a distinct type in Python 3, and that is why your code worked in Python 2. In Python 2 there was no separate data type for bytes; the name was just an alias for str:
>>> s=bytes('hello')
>>> type(s)
<type 'str'>
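For comparison, in Python 3 bytes is a distinct type and its constructor requires an encoding:
>>> s = bytes('hello', 'utf-8')
>>> type(s)
<class 'bytes'>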
You have to change from wb to w:
def __init__(self):
    self.myCsv = csv.writer(open('Item.csv', 'wb'))
    self.myCsv.writerow(['title', 'link'])
to
def __init__(self):
    self.myCsv = csv.writer(open('Item.csv', 'w'))
    self.myCsv.writerow(['title', 'link'])
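As a side note, for Python 3 the csv documentation recommends opening the file in text mode with newline='', so the writer controls line endings itself; a sketch:
import csv
with open('Item.csv', 'w', newline='') as f:
    myCsv = csv.writer(f)
    myCsv.writerow(['title', 'link'])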
After making this change the error disappears, although in my case I then could not write to the file, so I am not sure this is a complete answer.
Source: How to remove ^M
(Changing the mode to 'rb' instead produces a different error: io.UnsupportedOperation: write.)
Use the encode() function on the hard-coded string value given in single quotes.
Example:
file.write(answers[i] + '\n'.encode())
Or
line.split(' +++$+++ '.encode())
For this small example, simply adding a b prefix before
'GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n' solved my problem:
import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send(b'GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')
while True:
    data = mysock.recv(512)
    if len(data) < 1:
        break
    print(data)
mysock.close()
What does the 'b' character do in front of a string literal?
You opened the file in binary mode:
The following code will throw a TypeError: a bytes-like object is required, not 'str'.
for line in lines:
    print(type(line))  # <class 'bytes'>
    if 'substring' in line:
        print('success')
The following code will work - you have to use the decode() function:
for line in lines:
    line = line.decode()
    print(type(line))  # <class 'str'>
    if 'substring' in line:
        print('success')
Try opening your file as text:
with open(fname, 'rt') as f:
    lines = [x.strip() for x in f.readlines()]
Additionally, see the official Python 3.x documentation: io — Core tools for working with streams, and the documentation for the open function.
If you really do need to handle the data as binary, then consider encoding your string instead.
I got this error when I was trying to convert a char (or string) to bytes; in Python 2.7, the code was something like this:
# -*- coding: utf-8 -*-
print(bytes('ò'))
This is how Python 2.7 deals with Unicode characters.
This won't work in Python 3.6, since bytes requires an extra encoding argument. This can be a little tricky, since different encodings produce different results:
print(bytes('ò', 'iso_8859_1')) # prints: b'\xf2'
print(bytes('ò', 'utf-8')) # prints: b'\xc3\xb2'
In my case I had to use iso_8859_1 when encoding bytes in order to solve the issue.
Summary
Python 2.x encouraged many bad habits WRT text handling. In particular, its type named str does not actually represent text per the Unicode standard (that type is unicode), and the default "string literal" in fact produces a sequence of raw bytes - with some convenience functions for treating it like a string, if you can get away with assuming a "code page" style encoding.
In 3.x, "string literals" now produce actual strings, and built-in functionality no longer does any implicit conversions between the two types. Thus, the same code now has a TypeError, because the literal and the variable are of incompatible types. To fix the problem, one of the values must be either replaced or converted, so that the types match.
The Python documentation has an extremely detailed guide to working with Unicode properly.
In the example in the question, the input file is processed as if it contains text. Therefore, the file should have been opened in a text mode in the first place. The only good reason the file would have been opened in binary mode even in 2.x is to avoid universal newline translation; in 3.x, this is done by specifying the newline keyword parameter when opening a file in text mode.
To read a file as text properly requires knowing a text encoding, which is specified in the code by (string) name. The encoding iso-8859-1 is a safe fallback; it interprets each byte separately, as representing one of the first 256 Unicode code points, in order (so it will never raise an exception due to invalid data). utf-8 is much more common as of the time of writing, but it does not accept arbitrary data. (However, in many cases, for English text, the distinction will not matter; both of those encodings, and many more, are supersets of ASCII.)
Thus:
with open(fname, 'r', newline='\n', encoding='iso-8859-1') as f:
    lines = [x.strip() for x in f.readlines()]
# proceed as before
# If the results are wrong, take additional steps to ascertain the correct encoding
How the error is created when migrating from 2.x to 3.x
In 2.x, 'some-pattern' creates a str, i.e. a sequence of bytes that the programmer is then likely to pretend is text. The str type is the same as the bytes type, and different from the unicode type that properly represents text. Many methods are offered to treat this data as if it were text, but it is not a proper representation of text. The meaning of each value as a text character (the encoding) is assumed. (In order to enable the illusion of raw data as "text", there would sometimes be implicit conversions between the str and unicode types. However, this results in confusing errors of its own - such as getting UnicodeDecodeError from an attempt to encode, or vice-versa).
In 3.x, 'some-pattern' creates what is also called a str; but now str means the Unicode-using, properly-text-representing string type. (unicode is no longer used as a type name, and only bytes refers to the sequence-of-bytes type.) Some changes were made to bytes to dissociate it from the text-with-assumed-encoding interpretation (in particular, indexing into a bytes object now results in an int, rather than a 1-element bytes), but many strange legacy methods persist (including ones rarely used even with actual strings any more, like zfill).
Why this causes a problem
The data, tmp, is a bytes instance. It came from a binary source: in this case, a file opened with a 'b' file mode. In other cases, it could come from a raw network socket, a web request made with urllib or similar, or some other API call.
This means that it cannot do anything meaningful in combination with a string. The elements of a string are Unicode code points (i.e., abstractions that represent, for the most part, text characters, in a universal form that represents all world languages and many other symbols). The elements of a bytes are, well, bytes. (Specifically in 3.x, they are interpreted as unsigned integers ranging from 0 to 255 inclusive.)
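A quick interactive comparison makes the difference concrete:
>>> b'abc'[0]   # an element of bytes is an int in 3.x
97
>>> 'abc'[0]    # an element of str is a 1-character string
'a'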
When the code was migrated, the literal 'some-pattern' went from describing a bytes, to describing text. Thus, the code went from making a legal comparison (byte-sequence to byte-sequence), to making an illegal one (string to byte-sequence).
Fixing the problem
In order to operate on a string and a byte-sequence - whether it's checking for equality with ==, lexicographic comparison with <, substring search with in, concatenation with +, or anything else - either the string must be converted to a byte-sequence, or vice-versa. In general, only one of these will be the correct, sensible answer, and it will depend on the context.
Fixing the source
Sometimes, one of the values can be seen to be "wrong" in the first place. For example, if reading the file was intended to result in text, then it should have been opened in a text mode. In 3.x, the file encoding can simply be passed as an encoding keyword argument to open, and conversion to Unicode is handled seamlessly without having to feed a binary file to an explicit translation step (thus, universal newline handling still takes place seamlessly).
In the case of the original example, that could look like:
with open(fname, 'r') as f:
    lines = [x.strip() for x in f.readlines()]
This example assumes a platform-dependent default encoding for the file. This will normally work for files that were created in straightforward ways, on the same computer. In the general case, however, the encoding of the data must be known in order to work with it properly.
If the encoding is known to be, for example, UTF-8, that is trivially specified:
with open(fname, 'r', encoding='utf-8') as f:
    lines = [x.strip() for x in f.readlines()]
Similarly, a string literal that should have been a bytes literal is simply missing a prefix: to make the bytes sequence representing integer values [101, 120, 97, 109, 112, 108, 101] (i.e., the ASCII values of the letters example), write the bytes literal b'example', rather than the string literal 'example'. Similarly the other way around.
In the case of the original example, that would look like:
if b'some-pattern' in tmp:
There is a safeguard built in to this: the bytes literal syntax only allows ASCII characters, so something like b'ëxãmþlê' will be caught as a SyntaxError, regardless of the encoding of the source file (since it is not clear which byte values are meant; in the old implied-encoding schemes, the ASCII range was well established, but everything else was up in the air). Of course, bytes literals with elements representing values 128..255 can still be written by using \x escaping for those values: for example, b'\xebx\xe3m\xfel\xea' will produce a byte-sequence corresponding to the text ëxãmþlê in Latin-1 (ISO 8859-1) encoding.
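Decoding that literal back confirms the round trip:
>>> b'\xebx\xe3m\xfel\xea'.decode('latin-1')
'ëxãmþlê'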
Converting, when appropriate
Conversion between byte-sequences and text is only possible when an encoding has been determined. It has always been so; we just used to assume an encoding locally, and then mostly ignore that we had done so. (Programmers in places like East Asia have been more aware of the problem historically, because they commonly need to work with scripts that have more than 256 distinct symbols, and thus their text requires multi-byte encodings.)
In 3.x, because there is no pressure to be able to treat byte-sequences implicitly as text with an assumed encoding, there are therefore no implicit conversion steps behind the scenes. This means that understanding the API is straightforward: Bytes are raw data; therefore, they are used to encode text, which is an abstraction. Therefore, the .encode() method is provided by str (which represents text), in order to encode text into raw data. Similarly, the .decode() method is provided by bytes (which represents a byte-sequence), in order to decode raw data into text.
Applying these to the example code, again supposing UTF-8 encoding is appropriate, gives:
if 'some-pattern'.encode('utf-8') in tmp:
and
if 'some-pattern' in tmp.decode('utf-8'):

lxml: trying to extract data with windows-1250 characters

Hello, I am experimenting with Python and lxml, and I am stuck on the problem of extracting data from a webpage that contains windows-1250 characters like ž and ć.
tree = html.fromstring(new.text,parser=hparser)
title = tree.xpath('//strong[text()="Title"]')
opis[g] = opis[g].tail.encode('utf-8')[2:]
I get text responses containing something like this:
\xc2\x9ea
instead of the characters. Then I have problems storing this into the database.
So how can I accomplish this? I tried putting 'windows-1250' instead of utf-8, without success. Can I somehow convert these codes back into the original characters?
Try:
text = "\xc2\x9ea"
print text.decode('windows-1250').encode('utf-8')
Output:
ža
And save nice chars in your DB.
If encoding to UTF-8 results in b'\xc2\x9ea', then that means the original string was '\x9ea'. Whether lxml didn't do things correctly, or something happened on your end (perhaps a parser configuration issue), the fact is that you get the equivalent of this (Python 3.x syntax):
>>> '\x9ea'.encode('utf-8')
b'\xc2\x9ea'
How do you fix it? One error-prone way would be to encode as something other than UTF-8 that can properly handle the characters. It's error-prone because while something might work in one case, it might not in another. You could instead extract the character ordinals and work with those:
>>> list(map((lambda n: hex(n)[2:]), map(ord, '\x9ea')))
['9e', '61']
That gets us somewhere because the bytes type has a fromhex method that can decode a string containing hexadecimal values to the equivalent byte values:
>>> bytes.fromhex(''.join(map((lambda n: hex(n)[2:]), map(ord, '\x9ea'))))
b'\x9ea'
You can use decode('cp1250') on the result of that to get ža, which I believe is the string you wanted. If you are using Python 2.x, the equivalent would be
from binascii import unhexlify
unhexlify(u''.join(map((lambda n: hex(n)[2:]), map(ord, u'\x9ea'))))
Note that this is highly destructive as it forces all characters in a Unicode string to be interpreted as bytes. For this reason, it should only be used on strings containing Unicode characters that fit in a single byte. If you had something like '\x9e\u724b\x61', that code would result in joining ['9e', '724b', '61'] as '9e724b61', and interpreting that using a single-byte character set such as CP1250 would result in something like 'žrKa'.
For that reason, more reliable code would replace ord with a function that throws an exception if 0 <= ord(ch) < 0x100 is false, but I'll leave that for you to code.
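Incidentally, since latin-1 maps each of the first 256 code points directly to the byte of the same value, the entire hex round trip above can be collapsed into a single encode (a Python 3 sketch, subject to the same single-byte caveat):
>>> '\x9ea'.encode('latin-1').decode('cp1250')
'ža'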

How do I split a multi-languages line in Python and get the Unicode hex value?

I am trying to split lines like this one in Python:
aiburenshi 爱不忍释 "לא מסוגל להינתק, לא יכול להיפרד מדבר מרוב חיבתו אליו"
This line contains Hebrew, simplified Chinese and English.
If I have a tuple T for example, I would like to get the tuple to be T= (Hebrew string, English string, Chinese string).
The problem is that I can't figure out how to get the Unicode values of the Chinese or the Hebrew letters. Neither of these lines works:
print ((unicode("释","utf-8")).encode("utf-8"))
print ((unicode("א","utf-8")).encode("utf-8"))
And I get this error:
SyntaxError: Non-ASCII character '\xe9' in file split_or.py on line 9, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
In Python 2, you need to open the file specifying an encoding like this:
import codecs
f = codecs.open("myfile.txt","r",encoding="utf-8")
In Python 3, you can just add the encoding option to any open() calls.
This will guarantee that the file is correctly decoded. Note that this doesn't mean your print calls will work properly; that depends on many things (see for example http://www.pycs.net/users/0000323/stories/14.html, and that's just a start). It's better to either use a proper debugger, or output to a file (which will again be opened with codecs.open()).
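The Python 3 equivalent is simply (a sketch; "myfile.txt" stands in for your file):
with open("myfile.txt", "r", encoding="utf-8") as f:
    for line in f:
        print(line)  # line is already decoded to str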
To get the actual codepoint (i.e. integer "value"), you can use the built-in ord():
>>> ord(u"£")
163
If you know the code point ranges for the different languages, that's all you need; see the Unicode block/range tables for them.
Otherwise, you might want to use unicodedata to look up stuff, like the bidirectional category:
>>> unicodedata.bidirectional(u"£")
ET # 'E'uropean 'T'erminator
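Building on that, one rough way to bucket characters by script is to look at their Unicode names (a sketch; script_of is a hypothetical helper, and word boundaries are ignored):
# -*- coding: utf-8 -*-
import unicodedata

def script_of(ch):
    # Classify a character very roughly by its Unicode character name.
    name = unicodedata.name(ch, '')
    if name.startswith('HEBREW'):
        return 'hebrew'
    if name.startswith('CJK'):
        return 'chinese'
    return 'other'

line = u'aiburenshi 爱不忍释 "לא מסוגל להינתק"'
hebrew = u''.join(ch for ch in line if script_of(ch) == 'hebrew')
chinese = u''.join(ch for ch in line if script_of(ch) == 'chinese')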
In Python 2, Unicode string constants need to be prefaced with the "u" character (and the source file needs an encoding declaration, which is what the SyntaxError is complaining about). Note that a u"..." literal is already Unicode, so it is encoded directly rather than passed through unicode():
# -*- coding: utf-8 -*-
print u"释".encode("utf-8")
print u"א".encode("utf-8")
In Python 3, string constants are Unicode by default.

UnicodeEncodeError when writing to a file

I am trying to write some strings to a file (the strings have been given to me by the HTML parser BeautifulSoup).
I can use "print" to display them, but when I use file.write() I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa3' in position 6: ordinal not in range(128)
How can I get around this?
If I type 'python unicode' into Google, I get about 14 million results; the first is the official doc which describes the whole situation in excruciating detail; and the fourth is a more practical overview that will pretty much spoon-feed you an answer, and also make sure you understand what's going on.
You really do need to read and understand these sorts of overviews, however long they seem. There really isn't any getting around it. Text is hard. There is no such thing as "plain text", there hasn't been a reasonable facsimile for years, and there never really was, although we spent decades pretending there was. But Unicode is at least a standard.
You should also read http://www.joelonsoftware.com/articles/Unicode.html.
This error occurs when you pass a Unicode string containing non-English characters (Unicode characters beyond 128) to something that expects an ASCII bytestring. The default encoding for a Python bytestring is ASCII, "which handles exactly 128 (English) characters". This is why trying to convert Unicode characters beyond 128 produces the error.
The unicode() constructor has the signature unicode(string[, encoding, errors]). All of its arguments should be 8-bit strings.
The first argument is converted to Unicode using the specified encoding; if you leave off the encoding argument, the ASCII encoding is used for the conversion, so characters greater than 127 will be treated as errors:
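A quick demonstration in the Python 2 interpreter:
>>> unicode('La Pe\xf1a')              # default ASCII codec chokes on byte 0xf1
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf1 in position 5: ordinal not in range(128)
>>> unicode('La Pe\xf1a', 'latin-1')   # naming the right encoding works
u'La Pe\xf1a'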
For example:
s = u'La Pe\xf1a'
print s.encode('latin-1')
or
write(s.encode('latin-1'))
Both will encode the string using Latin-1.
The answer to your question is "use codecs". The appended code also shows some gettext magic, FWIW: http://wiki.wxpython.org/Internationalization
import codecs
import gettext
import wx  # this example comes from a wxPython app

localedir = './locale'
langid = wx.LANGUAGE_DEFAULT  # use OS default; or use LANGUAGE_JAPANESE, etc.
domain = "MyApp"
mylocale = wx.Locale(langid)
mylocale.AddCatalogLookupPathPrefix(localedir)
mylocale.AddCatalog(domain)
translater = gettext.translation(domain, localedir,
                                 [mylocale.GetCanonicalName()], fallback=True)
translater.install(unicode=True)
# translater.install() installs the gettext _() translater function into our namespace...
msg = _("A message that gettext will translate, probably putting Unicode in here")
# use codecs.open() to convert Unicode strings to UTF-8
Logfile = codecs.open(logfile_name, 'w', encoding='utf-8')
Logfile.write(msg + '\n')
Despite Google being full of hits on this problem, I found it rather hard to find this simple solution (it is actually in the Python docs about Unicode, but rather buried).
