Python - Efficient way to flip bytes in a file?

I've got a folder full of very large files that need their bytes flipped in groups of 4. So essentially, I need to read each file as binary, reverse the byte order within every 4-byte group, and then write a new binary file with the bytes adjusted.
In essence, what I'm trying to do is read a hex string hexString that looks like this:
"00112233AABBCCDD"
And write a file that looks like this:
"33221100DDCCBBAA"
(i.e. every two hex characters are one byte, and I need to reverse the byte order within every group of 4 bytes)
I am very new to python and coding in general, and the way I am currently accomplishing this task is extremely inefficient. My code currently looks like this:
import binascii

with open(myFile, 'rb') as f:
    content = f.read()

hexString = str(binascii.hexlify(content))
flippedBytes = ""
inc = 0
while inc < len(hexString):
    flippedBytes += hexString[inc + 6:inc + 8]
    flippedBytes += hexString[inc + 4:inc + 6]
    flippedBytes += hexString[inc + 2:inc + 4]
    flippedBytes += hexString[inc:inc + 2]
    inc += 8
..... write the flippedBytes to file, etc
The code I pasted above accurately accomplishes what I need (note: my actual code has a few extra "hexString.replace()" calls to remove unnecessary hex characters, but I've left those out to make the above easier to read). My ultimate problem is that it takes EXTREMELY long to run with larger files. Some of the files I need to flip are almost 2 GB in size, and the code was going to take almost half a day to complete a single file. I've got dozens of files to run this on, so that timeframe simply isn't practical.
Is there a more efficient way to reverse the byte order of a file in 4-byte groups?
.... for what it's worth, there is a tool called WinHEX that can do this manually and only takes a minute max to flip the whole file.... I was just hoping to automate this with Python so we didn't have to use WinHEX by hand each time

You want to convert your 4-byte integers from little-endian to big-endian, or vice-versa. You can use the struct module for that:
import struct

with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    while True:
        d = infile.read(4)
        if not d:
            break
        le = struct.unpack('<I', d)
        be = struct.pack('>I', *le)
        of.write(be)
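Reading four bytes per loop iteration will be slow on a 2 GB file, so here is a rough sketch of the same conversion done in larger chunks (the chunk size is my own choice, and it assumes the file length is a multiple of 4):

import struct

CHUNK = 1 << 20  # 1 MiB per read; any multiple of 4 works

with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    while True:
        d = infile.read(CHUNK)
        if not d:
            break
        n = len(d) // 4                             # number of 4-byte words in this chunk
        words = struct.unpack('<%dI' % n, d)        # read them all as little-endian
        of.write(struct.pack('>%dI' % n, *words))   # write them back big-endian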

Here is a little struct awesomeness to get you started:
>>> import struct
>>> s = b'\x00\x11\x22\x33\xAA\xBB\xCC\xDD'
>>> a, b = struct.unpack('<II', s)
>>> s = struct.pack('>II', a, b)
>>> ''.join([format(x, '02x') for x in s])
'33221100ddccbbaa'
To do this at full speed for a large input, use struct.iter_unpack
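For reference, a small sketch of the iter_unpack variant (reading the whole file at once is an assumption here; for multi-GB inputs you would read 4-byte-aligned chunks instead):

import struct

with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    data = infile.read()
    of.write(b''.join(struct.pack('>I', v) for (v,) in struct.iter_unpack('<I', data)))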

Related

How to properly write and read BitArray objects?

I have an issue. I tried to save my bitarray object into a file. After that I want to read it back and get the same bitarray object that I saved earlier. But the result is not the same as the input.
from bitarray import bitarray

a = bitarray()
a += bitarray('{0:014b}'.format(15))
print(a.to01(), len(a))

with open('j.j', 'wb') as file:
    a.tofile(file)

b = bitarray()
with open('j.j', 'rb') as file:
    b.fromfile(file)

print(b.to01(), len(b))
Output:
00000000001111 14
0000000000111100 16
I see my object is now padded to a 2-byte representation. But I want to get back the 14 bits I saved. Do you have any ideas how to make it right?
This isn't a great solution, but it does get rid of the 0's on the right.
from bitarray import bitarray

a = bitarray('{0:014b}'.format(15))
print(a.to01(), len(a))  # 00000000001111 14

with open('j.j', 'wb') as file:
    a.reverse()
    a.tofile(file)

b = bitarray()
with open('j.j', 'rb') as file:
    b.fromfile(file)

b.reverse()
print(b.to01(), len(b))  # 0000000000001111 16
You could skip the reversals and just right shift b, but you would have to create a dynamic system that knows exactly how many bits to shift by. Another solution is to simply use bits in multiples of 8 in the first place. What are you saving here by removing 1 to 7 bits? You aren't saving anything in the file. Those bits will be padded regardless.
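As a rough sketch of the "know exactly how many bits" idea (the variable names are just illustrative), you can record the true bit length yourself and truncate after reading:

from bitarray import bitarray

a = bitarray('{0:014b}'.format(15))
nbits = len(a)                  # remember the real length somewhere (a header, the filename, ...)

with open('j.j', 'wb') as f:
    a.tofile(f)                 # tofile() pads the last byte with zero bits

b = bitarray()
with open('j.j', 'rb') as f:
    b.fromfile(f)
del b[nbits:]                   # drop the padding bits

print(b.to01(), len(b))         # 00000000001111 14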
It's also not a great solution, but it's a solution :)
from bitarray import bitarray
from bitarray.util import ba2int

def encode():
    encoded_bits = bitarray()
    ...
    encoded_bits += bitarray('000')  # 48:51 slice reserved for the padding count
    zeroes_at_the_end = 8 - len(encoded_bits) % 8
    if zeroes_at_the_end < 8:
        encoded_bits[48:51] = bitarray('{0:03b}'.format(zeroes_at_the_end))

def decode(bites_sequence):
    zeroes_at_the_end = ba2int(bites_sequence[48:51])
    if zeroes_at_the_end != 0:
        del bites_sequence[-zeroes_at_the_end:]
I just store the number of padding zeroes that appear after the save/read to file, and then simply delete those zeroes afterwards.

Python - Merge many big numpy arrays with unknown shape, that would not fit in memory

Let's suppose I have a large number of NumPy arrays saved as files (np.save(), ".npy" files). All these have shape e.g. (n,20), where I don't know n without opening the file. n is different for every file.
I want to merge these into a single dataset and then, using a set of selection methods, split it into three different NumPy arrays written to disk.
Usually I would loop over all files and use np.concatenate(). However the final array is likely not to fit in memory.
The other option I have is to use np.memmap(), but I'm not at all sure how it works. To my understanding, I'd have to do something like this:
a = np.memmap('output.npy', dtype='float64', mode='w+', shape=(N, 20))
for i, f in enumerate(myfiles):
    a[i, :] = np.load(f)
a.flush()
# And then find a way to split "a" into three; does the following work?
part_one = a[[0, 2, 10, 42, 58], :]
The problem is that I don't know N, the final number of rows. I would therefore need to open each file, read its number of rows, close it, and sum all the row counts before declaring the memmap, which is highly inefficient. There must be a better method.
Do you have any suggestion on this problem? Am I doing something wrong?
The .npy file format specification defines the header for npy files. I couldn't find an already-baked way to read it, but the format is simple and you can pull the information out yourself. The file information is encoded as a literal Python dict that includes a shape tuple. Reading it only touches the top of the file and is much faster than reading in the data.
import struct
import ast

# Structs to decode the .npy file header, which consists of a "magic"
# string verifying the file type, major and minor version numbers,
# a header length, and a literal string representation of a Python dict
# holding the file's type and shape.
npy_magic = b"\x93NUMPY"
npy_v1_header = struct.Struct(
    "<"   # little-endian encoding
    "6s"  # 6 byte magic string
    "B"   # 1 byte major number
    "B"   # 1 byte minor number
    "H"   # 2 byte header length
    # ... header string follows
)
npy_v2_header = struct.Struct(
    "<"   # little-endian encoding
    "6s"  # 6 byte magic string
    "B"   # 1 byte major number
    "B"   # 1 byte minor number
    "L"   # 4 byte header length
    # ... header string follows
)

def read_npy_file_header(filename):
    with open(filename, 'rb') as fp:
        buf = fp.read(npy_v1_header.size)
        magic, major, minor, hdr_size = npy_v1_header.unpack(buf)
        if magic != npy_magic:
            raise IOError("Not an npy file")
        if major not in (1, 2):
            raise IOError("Unknown npy file version")
        if major == 2:
            # version 2 uses a 4-byte header length, so re-read with the v2 struct
            fp.seek(0)
            buf = fp.read(npy_v2_header.size)
            magic, major, minor, hdr_size = npy_v2_header.unpack(buf)
        return ast.literal_eval(fp.read(hdr_size).decode('ascii'))

# test
from glob import glob
for fn in glob('*.npy'):
    print(fn, read_npy_file_header(fn))
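With that header reader, you can sum the row counts cheaply and then build the memmap. A minimal sketch, assuming every file has shape (n_i, 20), dtype float64 and C order (the file names and output path are placeholders):

import numpy as np
from glob import glob

myfiles = sorted(glob('*.npy'))
rows = [read_npy_file_header(fn)['shape'][0] for fn in myfiles]
N = sum(rows)

out = np.memmap('merged.dat', dtype='float64', mode='w+', shape=(N, 20))
start = 0
for fn, n in zip(myfiles, rows):
    out[start:start + n, :] = np.load(fn)   # copy each file into its own block of rows
    start += n
out.flush()

part_one = out[[0, 2, 10, 42, 58], :]       # fancy indexing returns an in-memory copy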

python binary file operation

I am having trouble working with a binary file in Python.
Here is what I want to do:
I have a binary file in which I want to replace one sequence with another.
The sequence to replace is 'ASBF', and I want to replace it with a number.
It worked just fine when I used Python 2.7.
But now in Python 3.3 there is a difference between bytes and str, and I think that is my problem.
Here is the code I was using:
import struct

# this is the number I want to put in place of the hex sequence
number = 1703518678

# I put its four bytes in an array, least significant byte first
number_array = []
number_array.append(number & 0xFF)
number_array.append(number >> 8 & 0xFF)
number_array.append(number >> 16 & 0xFF)
number_array.append(number >> 24 & 0xFF)

f = open(fichier_bin, 'rb')
lines = f.readlines()
f.close()

f = open(fichier_bin, 'wb')
for line in lines:
    # replace ASBF by the number
    f.write(line.replace('ASBF', struct.pack('BBBB', number_array[0], number_array[1], number_array[2], number_array[3])))
f.close()
I tried other things to work around the problem, but I can't figure out how to replace one sequence with another in a binary file.
I would like 41534246, which is 'ASBF' in hex, to become 6589A1D6, which is 1703518678 in hex.
EDIT:
And here is the error I get
f.write(ligne.replace('ASBF', struct.pack('BBBB', number_array[0], number_array[1], number_array[2], number_array[3]))) #replace ASBF by the number
TypeError: expected bytes, bytearray or buffer compatible object
And I really do not understand how to get through this.
EDIT2:
The problem came from the way I opened the file.
Now that I use a with statement instead of a plain open, my program works just fine.
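For reference, in Python 3 both the pattern and the replacement passed to bytes.replace() must themselves be bytes. A minimal sketch of the replacement, assuming the file fits in memory (struct.pack('<I', number) produces the same least-significant-byte-first sequence as the manual shifts above):

import struct

number = 1703518678
replacement = struct.pack('<I', number)   # the four bytes of the number, LSB first

with open(fichier_bin, 'rb') as f:
    data = f.read()

with open(fichier_bin, 'wb') as f:
    f.write(data.replace(b'ASBF', replacement))   # bytes pattern, bytes replacement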

byte reverse AB CD to CD AB with python

I have a .bin file, and I want to simply byte-reverse the hex data. Say, for instance, at offset 0x10 it reads AD DE DE C0; I want it to read DE AD C0 DE.
I know there is a simple way to do this, but I am a beginner just learning Python, and I'm trying to make a few simple programs to help me through my daily tasks. I would like to convert the whole file this way, not just offset 0x10.
I will be converting from start offset 0x000000 with a blocksize/length of 1000000.
Here is my code; maybe you can tell me what to do. I am sure I am just not getting it, and I am new to programming and Python. If you could help me I would very much appreciate it.
def main():
    infile = open("file.bin", "rb")
    new_pos = int("0x000000", 16)
    chunk = int("1000000", 16)
    data = infile.read(chunk)
    reverse(data)

def reverse(data):
    output(data)

def output(data):
    with open("reversed", "wb") as outfile:
        outfile.write(data)

main()
You can see the function for reversing; I have tried many different suggestions, and it will either pass the file through untouched or throw errors. I know the reverse function is empty right now, but I have tried all kinds of things. I just need reverse to convert AB CD to CD AB.
Thanks for any input.
EDIT: the file is 16 MB and I want to reverse the byte order of the whole file.
In Python 3.4 you can use this:
>>> data = b'\xAD\xDE\xDE\xC0'
>>> swap_data = bytearray(data)
>>> swap_data.reverse()
the result is
bytearray(b'\xc0\xde\xde\xad')
In Python 2, the binary file gets read as a string, so string slicing should easily handle the swapping of adjacent bytes:
>>> original = '\xAD\xDE\xDE\xC0'
>>> ''.join([c for t in zip(original[1::2], original[::2]) for c in t])
'\xde\xad\xc0\xde'
In Python 3, the binary file gets read as bytes. Only a small modification is needed to build another array of bytes:
>>> original = b'\xAD\xDE\xDE\xC0'
>>> bytes([c for t in zip(original[1::2], original[::2]) for c in t])
b'\xde\xad\xc0\xde'
You could also use the < and > endianness format codes in the struct module to achieve the same result:
>>> import struct
>>> struct.pack('<2h', *struct.unpack('>2h', original))
'\xde\xad\xc0\xde'
Happy byte swapping :-)
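Applied to the whole 16 MB file rather than four bytes, a sketch using array.array with the 2-byte 'H' type code and byteswap() (the file names are assumed, and the file length must be even):

import array

with open("file.bin", "rb") as infile, open("reversed", "wb") as outfile:
    a = array.array('H')          # unsigned shorts, 2 bytes each
    a.frombytes(infile.read())    # requires an even number of bytes
    a.byteswap()                  # reverses the bytes inside every item: AB CD -> CD AB
    outfile.write(a.tobytes())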
data = b'\xAD\xDE\xDE\xC0'
reversed_data = data[::-1]
print(reversed_data)
# b'\xc0\xde\xde\xad'
Python3
bytes(reversed(b'\xAD\xDE\xDE\xC0'))
# b'\xc0\xde\xde\xad'
Python has slice notation to reverse the values of a list: nameOfList[::-1]
So I might store the hex values as strings, put them into a list, and then try something like:
def reverseList(aList):
    rev = aList[::-1]
    outString = ""
    for el in rev:
        outString += el + " "
    return outString

reorder byte order in hex string (python)

I want to build a small formatter in Python giving me back the numeric values embedded in lines of hex strings. It is a central part of my formatter and should be reasonably fast, formatting more than 100 lines/sec (each line about ~100 chars).
The code below shows where I'm currently blocked. 'data_string_in_orig' shows the given input format; it has to be byte-swapped within each word. The swap from 'data_string_in_orig' to 'data_string_in_swapped' is what I need. In the end I need the struct access as shown. The expected result is within the comment.
Thanks in advance
Wolfgang R
#!/usr/bin/python
import binascii
import struct
## 'uint32 double'
data_string_in_orig = 'b62e000052e366667a66408d'
data_string_in_swapped = '2eb60000e3526666667a8d40'
print data_string_in_orig
packed_data = binascii.unhexlify(data_string_in_swapped)
s = struct.Struct('<Id')
unpacked_data = s.unpack_from(packed_data, 0)
print 'Unpacked Values:', unpacked_data
## Unpacked Values: (46638, 943.29999999943209)
exit(0)
array.arrays have a byteswap method:
import binascii
import struct
import array
x = binascii.unhexlify('b62e000052e366667a66408d')
y = array.array('h', x)
y.byteswap()
s = struct.Struct('<Id')
print(s.unpack_from(y))
# (46638, 943.2999999994321)
The h in array.array('h', x) was chosen because it tells array.array to regard the data in x as an array of 2-byte shorts. The important thing is that each item be regarded as being 2-bytes long. H, which signifies 2-byte unsigned short, works just as well.
This should do exactly what unutbu's version does, but might be slightly easier to follow for some...
from binascii import unhexlify
from struct import pack, unpack
orig = unhexlify('b62e000052e366667a66408d')
swapped = pack('<6h', *unpack('>6h', orig))
print unpack('<Id', swapped)
# (46638, 943.2999999994321)
Basically, unpack 6 shorts big-endian, repack as 6 shorts little-endian.
Again, same thing that unutbu's code does, and you should use his.
edit Just realized I get to use my favorite Python idiom for this... Don't do this either:
orig = 'b62e000052e366667a66408d'
swap = ''.join(sum([(c, d, a, b) for a, b, c, d in zip(*[iter(orig)]*4)], ()))
# '2eb60000e3526666667a8d40'
The swap from 'data_string_in_orig' to 'data_string_in_swapped' may also be done with comprehensions without using any imports:
>>> d = 'b62e000052e366667a66408d'
>>> "".join([m[2:4]+m[0:2] for m in [d[i:i+4] for i in range(0,len(d),4)]])
'2eb60000e3526666667a8d40'
The comprehension works for swapping byte order in hex strings representing 16-bit words. Modifying it for a different word-length is trivial. We can make a general hex digit order swap function also:
def swap_order(d, wsz=4, gsz=2):
    return "".join(["".join([m[i:i+gsz] for i in range(wsz-gsz, -gsz, -gsz)]) for m in [d[i:i+wsz] for i in range(0, len(d), wsz)]])
The input params are:
d: the input hex string
wsz: the word size in nibbles (e.g. for 16-bit words wsz=4, for 32-bit words wsz=8)
gsz: the number of nibbles which stay together (e.g. for reordering bytes gsz=2, for reordering 16-bit words gsz=4)
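For example, applied to the question's string (the wsz=8 call is my own illustration of a 32-bit word size, not part of the original answer):
>>> swap_order('b62e000052e366667a66408d')            # default: bytes within 16-bit words
'2eb60000e3526666667a8d40'
>>> swap_order('b62e000052e366667a66408d', wsz=8)     # bytes within 32-bit words
'00002eb66666e3528d40667a'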
import binascii, array
from tkinter import filedialog

infile = filedialog.askopenfilename()
with open(infile, 'rb') as infile_:
    infile_read = infile_.read()

x = infile_read
y = array.array('l', x)   # 'l' is a signed long; its item size is platform-dependent (4 or 8 bytes)
y.byteswap()
swapped = binascii.hexlify(y)
This is a 32-bit swap I achieved with code very much the same as unutbu's answer, just a little bit easier to understand. Technically, binascii is not needed for the swap; only array.byteswap is needed.
