I am currently experimenting with how Python 3 handles bytes when reading and writing data, and I have come across a particularly troubling problem that I can't seem to find the source of. I am basically reading bytes out of a JPEG file, converting each to an integer using ord(), then returning the bytes to their original character using chr(character).encode('utf-8') and writing them back into a JPEG file. No issue, right? Well, when I try opening the JPEG file, I get a Windows 8.1 notification saying it cannot open the photo. When I check the two files against each other, one is 5.04 MB and the other is 7.63 MB, which has me awfully confused.
def __main__():
    operating_file = open('photo.jpg', 'rb')
    while True:
        data_chunk = operating_file.read(64*1024)
        if len(data_chunk) == 0:
            print('COMPLETE')
            break
        else:
            new_operation = open('newFile.txt', 'ab')
            for character in list(data_chunk):
                new_operation.write(chr(character).encode('utf-8'))

if __name__ == '__main__':
    __main__()
This is the exact code I am using, any ideas on what is happening and how I can fix it?
NOTE: I am assuming that the list of numbers that list(data_chunk) provides is the equivalent to ord().
Here is a simple example you might wish to play with:
import sys

f = open('gash.txt', 'rb')
stuff = f.read()  # stuff refers to a bytes object
f.close()
print(stuff)

f2 = open('gash2.txt', 'wb')
for i in stuff:
    f2.write(i.to_bytes(1, sys.byteorder))
f2.close()
As you can see, the bytes object is iterable, but in the for loop we get back an int in i. To convert that back to a byte I use the int.to_bytes() method.
When you have a code point and you encode it in UTF-8, it is possible for the result to contain more bytes than the original.
For a specific example, refer to the Wikipedia page on UTF-8 and consider the hexadecimal value 0xA2.
This is a single byte value, but because it is above 0x7F, its UTF-8 encoding is two bytes: 0xC2, 0xA2.
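You can check this expansion yourself in a couple of lines (a minimal demonstration, not from the original post):

```python
# 0xA2 is the code point for the cent sign; it fits in one byte,
# but because it is above 0x7F its UTF-8 encoding takes two bytes
encoded = chr(0xA2).encode('utf-8')
print(encoded)       # b'\xc2\xa2'
print(len(encoded))  # 2
```

Every byte value from 0x80 through 0xFF expands this way, which is exactly why the rewritten JPEG grew from 5.04 MB to 7.63 MB.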
Given that you are pulling bytes out of your source file, my first recommendation would be to simply pass the bytes directly to the writer of your target file.
If you are trying to understand how file I/O works, be wary of encode() when using a binary file mode. Binary files don't need to be encoded or decoded - they are raw data.
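A sketch of that recommendation (the filenames are hypothetical, and the sample "photo" is generated so the snippet is self-contained): just pass the chunks straight through.

```python
# Create a small sample "photo" so the example is self-contained (hypothetical data)
with open('photo.jpg', 'wb') as f:
    f.write(bytes(range(256)) * 4)

# Copy raw bytes chunk by chunk; no encode()/decode() needed in binary mode
with open('photo.jpg', 'rb') as src, open('photo_copy.jpg', 'wb') as dst:
    while True:
        chunk = src.read(64 * 1024)
        if not chunk:
            break
        dst.write(chunk)
```

The copy is byte-for-byte identical to the source, so the result opens like any other JPEG.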
Related
I need to transfer compiled code into raw bits, then back into compiled code, for a project. I have gotten my .uf2 file into Python, and I have gotten it to show as bytes and as text decoded as ANSI, but I haven't figured out how to turn it into bits. I can add that output here, but it is incredibly long, so for readability I left it out. By extension, I also haven't figured out how to turn it back into a functioning .uf2 file. Does anyone have any ideas? Is it even possible to take compiled code and turn it into bits without destroying it?
Edit:
Here is my code so far. I need to be able to access the bits, not the bytes. Data is encoded in ANSI.
fpath = input("File path: ")
f = open(fpath, 'rb')
hexdec = f.read()
print(hexdec)
decode = hexdec.decode('ansi')
print(decode)
You can convert a hex string to a byte array using the fromhex() method of bytearray.
Then it's a simple matter of writing the binary file back:
binary_data = bytearray.fromhex(hex_string)
new_file = open(path, 'wb')
new_file.write(binary_data)
new_file.close()
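On the original question of getting at the bits themselves: one sketch (my own, not from the answer above) is to format each byte as eight binary digits and rebuild the bytes afterwards. Nothing is destroyed as long as the round trip is exact:

```python
data = b'\x01\xff\x0a'  # stand-in for the .uf2 file contents

# bytes -> bit string: each byte becomes eight '0'/'1' characters
bits = ''.join(format(b, '08b') for b in data)
print(bits)  # 000000011111111100001010

# bit string -> bytes: take the bits back eight at a time
restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
assert restored == data
```

Applied to a real .uf2 file read in 'rb' mode, writing `restored` back out with 'wb' reproduces the original file exactly.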
I want to read a file with data, coded in hex format:
01ff0aa121221aff110120...etc
the files contain >100,000 such bytes, some more than 1,000,000 (they come from DNA sequencing)
I tried the following code (and other similar):
filele = 1234563
f = open('data.geno', 'r')
c = []
for i in range(filele):
    a = f.read(1)
    b = a.encode("hex")
    c.append(b)
f.close()
This gives each byte separately: "aa", "01", "f1", etc., which is perfect for me!
This works fine up to (in this case) byte no. 905, which happens to be "1a". I also tried the ord() function, which stopped at the same byte.
There might be a simple solution?
Simple solution is binascii:
import binascii

# Open in binary mode (so you don't read two-byte line endings on Windows as one byte)
# and use a with statement (always do this to avoid leaked file descriptors, unflushed files)
with open('data.geno', 'rb') as f:
    # Slurp the whole file and efficiently convert it to hex all at once
    hexdata = binascii.hexlify(f.read())
This just gets you a str of the hex values, but it does it much faster than what you're trying to do. If you really want a bunch of length 2 strings of the hex for each byte, you can convert the result easily:
hexlist = map(''.join, zip(hexdata[::2], hexdata[1::2]))
which will produce the list of len 2 strs corresponding to the hex encoding of each byte. To avoid temporary copies of hexdata, you can use a similar but slightly less intuitive approach that avoids slicing by using the same iterator twice with zip:
hexlist = map(''.join, zip(*[iter(hexdata)]*2))
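To see what the pairing trick produces (Python 3 shown, where map() returns an iterator that needs wrapping in list()):

```python
hexdata = '01ff0a'

# pair up consecutive characters by feeding the same iterator to zip twice
hexlist = list(map(''.join, zip(*[iter(hexdata)] * 2)))
print(hexlist)  # ['01', 'ff', '0a']
```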
Update:
For people on Python 3.5 and higher, bytes objects gained a .hex() method, so no module is required to convert from raw binary data to ASCII hex. The block of code at the top can be simplified to just:
with open('data.geno', 'rb') as f:
hexdata = f.read().hex()
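On Python 3.8 and higher, .hex() also accepts a separator argument, which gives you the per-byte grouping directly without the zip trick:

```python
data = b'\x01\xff\x0a'
print(data.hex())             # 01ff0a
print(data.hex(' '))          # 01 ff 0a
print(data.hex(' ').split())  # ['01', 'ff', '0a']
```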
Just an additional note to these, make sure to add a break into your .read of the file or it will just keep going.
def HexView():
    with open(<yourfilehere>, 'rb') as in_file:
        while True:
            hexdata = in_file.read(16).hex()  # I like to read 16 bytes in then new line it.
            if len(hexdata) == 0:  # breaks loop once no more binary data is read
                break
            print(hexdata.upper())  # I also like it all in caps.
If the file is encoded in hex format, shouldn't each byte be represented by 2 characters? So
c = []
with open('data.geno', 'rb') as f:
    b = f.read(2)
    while b:
        c.append(b.decode('hex'))
        b = f.read(2)
Thanks for all interesting answers!
The simple solution that worked immediately was to change "r" to "rb",
so:
f = open('data.geno', 'r')   # doesn't work
f = open('data.geno', 'rb')  # works fine
(In text mode on Windows, the byte 0x1A is Ctrl-Z, which is treated as end-of-file; that is why reading stopped at the "1a" byte.)
The data in this case is actually only two bits per value, so one byte contains four data points: 00, 01, 10, 11.
Yours!
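If each byte really packs four two-bit values, a small sketch to unpack them (my own example, assuming the most significant pair comes first):

```python
byte = 0b01101100  # packs the pairs 01, 10, 11, 00

# shift each pair down to the low bits and mask off everything else
pairs = [(byte >> shift) & 0b11 for shift in (6, 4, 2, 0)]
print(pairs)  # [1, 2, 3, 0]
```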
How does one read binary and text from the same file in Python? I know how to do each separately, and can imagine doing both very carefully, but not both with the built-in IO library directly.
So I have a file whose format has large chunks of UTF-8 text interspersed with binary data. The text does not have a length written before it or a special character like "\0" delimiting it from the binary data; instead, there is a large portion of text near the end that, when parsed, means "we are coming to an end".
The optimal solution would be to have the built-in file reading classes have "read(n)" and "read_char(n)" methods, but alas they don't. I can't even open the file twice, once as text and once as binary, since the return value of tell() on the text one can't be used with the binary one in any meaningful way.
So my first idea would be to open the whole file as binary and when I reach a chunk of text, read it "character by character" until I realize that the text is ending and then go back to reading it as binary. However this means that I have to read byte-by-byte and do my own decoding of UTF-8 characters (do I need to read another byte for this character before doing something with it?). If it was a fixed-width character encoding I would just read that many bytes each time. In the end I would also like the universal line endings as supported by the Python text-readers, but that would be even more difficult to implement while reading byte-by-byte.
Another easier solution would be if I could ask the text file object its real offset in the file. That alone would solve all my problems.
One way might be to use Hachoir to define a file parsing protocol.
The simple alternative is to open the file in binary mode and manually initialise a buffer and text wrapper around it. You can then switch in and out of binary pretty neatly:
import io

my_file = io.open("myfile.txt", "rb")
my_file_buffer = io.BufferedReader(my_file, buffer_size=1)  # Not as performant, but a larger buffer would "eat" into the binary data
my_file_text_reader = io.TextIOWrapper(my_file_buffer, encoding="utf-8")

string_buffer = ""
while True:
    while "near the end" not in string_buffer:
        string_buffer += my_file_text_reader.read(1)  # read one Unicode char at a time
    # binary data must be next. Where do we get the binary length from?
    print(string_buffer)
    data = my_file_buffer.read(3)
    print(data)
    string_buffer = ""
A quicker, less extensible way might be to use the approach you've suggested in your question by intelligently parsing the text portions, reading each UTF-8 sequence of bytes at a time. The following code (from http://rosettacode.org/wiki/Read_a_file_character_by_character/UTF8#Python), seems to be a neat way to conservatively read UTF-8 bytes into characters from a binary file:
def get_next_character(f):
    # note: assumes valid utf-8
    c = f.read(1)
    while c:
        while True:
            try:
                yield c.decode('utf-8')
            except UnicodeDecodeError:
                # we've encountered a multibyte character
                # read another byte and try again
                c += f.read(1)
            else:
                # c was a valid char, and was yielded, continue
                c = f.read(1)
                break

# Usage:
with open("input.txt", "rb") as f:
    my_unicode_str = ""
    for c in get_next_character(f):
        my_unicode_str += c
win8.1-32bit, python3.4
I made a web robot for www.douban.com to get the main HTML, the .jpg files and the .png files,
but when it finished, I couldn't open the pic files (Windows Photo Viewer can't open this picture, balablabala~~~~).
Questions:
1: why can't the pics be opened?
2: if line 35 is edited like this: dbr.write(data), the command line prompts: TypeError: 'str' does not support the buffer interface.
The same thing happens for lines 51 and 59.
But when line 35 is dbr.write(bytes(data, 'UTF-8')), I get the right HTML file. So I did the same for lines 51 and 59 for the pic files, but something went wrong. I suspect there is a bug in write(), but I can't figure out what exactly is wrong.
Here is the code.
import urllib.request
import os
import re

# make dirs for douban_robot, jpg, png
dirpath = 'D:/Pwork/webrobot/'
if not os.path.isdir(dirpath):
    os.makedirs(dirpath)
jpg_path = dirpath + 'jpgfiles/'
png_path = dirpath + 'pngfiles/'
if not os.path.isdir(jpg_path):
    os.makedirs(jpg_path)
if not os.path.isdir(png_path):
    os.makedirs(png_path)

douban_robot = dirpath + 'douban.html'
url = 'http://www.douban.com'

# get .html
data = urllib.request.urlopen(url).read().decode('UTF-8')
with open(douban_robot, 'wb') as dbr:
    dbr.write(bytes(data, 'UTF-8'))

# create regex
re_jpg = re.compile(r'<img src="(http.+?.jpg)"')
re_png = re.compile(r'<img src="(http:.+?.png)"')
jpg_data = re_jpg.findall(data)
png_data = re_png.findall(data)

# for testing jpg and png data
print(jpg_data, png_data)

# get jpg files
i = 1
for image in jpg_data:
    jpg_name = jpg_path + str(i) + '.jpg'
    # urllib.request.urlretrieve(image, jpg_name)
    with open(jpg_name, 'wb') as jpg_file:
        jpg_file.write(bytes(image, 'UTF-8'))
    i += 1

for image in png_data:
    png_name = png_path + str(i) + '.png'
    # urllib.request.urlretrieve(image, png_name)
    with open(png_name, 'wb') as png_file:
        png_file.write(bytes(image, 'UTF-8'))
    i += 1
The variables jpg_data and png_data are lists containing the captured URLs. Your loops iterate over each URL, placing the URL string in the variable image. Then, in both loops, you write the URL string to the file, not the actual image data. It actually looks like the commented-out urllib.request.urlretrieve lines would do the trick, instead of what you're doing now.
The .write() function expects you to give it an object that matches the mode of the file. When you call open(..., 'wb'), you're saying to open the file in write and binary mode, which means that you need to give it bytes instead of str.
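A minimal demonstration of that mode mismatch (my own example with a hypothetical filename, not from the original code):

```python
data = "hello"

with open('demo.bin', 'wb') as f:
    f.write(data.encode('utf-8'))  # encoding first gives bytes, so this works
    try:
        f.write(data)  # passing a str to a file opened in 'wb' mode raises
    except TypeError as e:
        print('TypeError:', e)
```

Only the encoded bytes end up in the file; the bare str never makes it past the type check.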
Bytes are the fundamental way everything is stored in a computer. Everything is a series of bytes -- the data on your hard drive, and the data you send and receive on the Internet. Bytes don't really have meaning on their own -- each one is just 8 bits strung together. The meaning depends on how you interpret the bytes. For instance, you could interpret a single byte as representing a number from 0 to 255. Or, you could interpret it as a number from -128 to 127 (both of these are common). You could also assign these "numbers" to characters, and interpret a sequence of bytes as text. However, this only allows you to represent 256 characters, and there are many more than that in the world's various languages. So, there are multiple ways of representing text as sequences of bytes. These are called "character encodings". The most popular modern one is "UTF-8".
In Python, a bytes object is just a series of bytes. It has no special meaning -- nobody has said what it represents yet. If you want to use that as text, you need to decode it, using one of the character encodings. Once you do that (.decode('UTF-8')), you have a str object. In order to write it to disk (or the network), your str will have to eventually be encoded back into bytes. When you open a file in text mode, Python chooses your computer's default encoding, and it will decode everything you read using that, and encode everything you write with it. However, when you open a file in b mode, Python expects that you will give it bytes, and so it throws an error when you give it a str instead. Since you know the HTML file you downloaded and put in data is text, it would have been best for you to save it to a file in text mode. However, encoding it as UTF-8 and writing it to a binary file works too, as long as your system's default encoding is UTF-8. In general, when you have a str and you want to write it to a file, open the file in text mode (just don't pass b in the mode parameter) and let Python pick the encoding, since it knows better than you do!
For more info on the character sets and encoding stuff (which I only glossed over), you really should read this article.
I'm programming in Python 3 and I'm having a small problem which I can't find any reference to it on the net.
As far as I understand, the default string is utf-16, but I must work with utf-8, and I can't find the command that will convert from the default encoding to utf-8.
I'd appreciate your help very much.
In Python 3 there are two different datatypes that are important when you are working with string manipulation. First there is the str class, an object that represents Unicode code points. The important thing to grasp is that this string is not a series of bytes, but really a sequence of characters. Secondly, there is the bytes class, which is just a sequence of bytes, often representing a string stored in some encoding (like utf-8 or iso-8859-15).
What does this mean for you? As far as I understand, you want to read and write utf-8 files. Let's make a program that replaces all 'ć' characters with 'ç':
def main():
    # Open the output file first. We pass an encoding so Python knows that whatever
    # we print to the file should be encoded as utf-8.
    with open('output_file', 'w', encoding='utf-8') as out_file:
        # Read every line. We give open() the encoding so it will return Unicode strings.
        for line in open('input_file', encoding='utf-8'):
            # Replace the characters we want. A string literal is automatically a
            # Unicode string, so no worries about encoding there. Because we opened
            # the output file with the utf-8 encoding, print() encodes the whole
            # string to utf-8 on the way out. Note the file= keyword argument;
            # end='' avoids doubling the newline that each line already carries.
            print(line.replace('ć', 'ç'), end='', file=out_file)
So when should you use bytes? Not often. An example I can think of is when you read something from a socket. If you have the data in a bytes object, you can make it a Unicode string by calling bytes.decode('encoding'), and vice versa with str.encode('encoding'). But as said, you probably won't need it.
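The round trip in miniature, for reference:

```python
raw = 'ç'.encode('utf-8')   # str -> bytes
print(raw)                  # b'\xc3\xa7'
text = raw.decode('utf-8')  # bytes -> str
assert text == 'ç'
```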
Still, because it is interesting, here the hard way, where you encode everything yourself:
def main():
    # Open the file in binary mode, so we are going to write bytes to it instead of strings
    with open('output_file', 'wb') as out_file:
        # Read every line. Again, we open it binary, so we get bytes
        for line_bytes in open('input_file', 'rb'):
            # Convert the bytes to a string
            line_string = line_bytes.decode('utf-8')
            # Replace the characters we want
            line_string = line_string.replace('ć', 'ç')
            # Encode the string back to bytes
            out_bytes = line_string.encode('utf-8')
            # Write the bytes
            out_file.write(out_bytes)
Good reading about this topic (string encodings) is http://www.joelonsoftware.com/articles/Unicode.html. Really recommended read!
Source: http://docs.python.org/release/3.0.1/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit
(P.S. As you can see, I didn't mention utf-16 in this post. I actually don't know whether Python uses it as its internal encoding or not, but it is totally irrelevant: the moment you are working with a string, you work with characters (code points), not bytes.)