How to get multiple 32-bit values from a byte array? - python

I need to extract some numeric values out of a binary data stream.
The code below works for me, but surely there is a more suitable way to do this in Python. In particular, I struggled to find a better way to iterate over the array and take 4 bytes at a time as byte arrays from the buffer.
Any hints?
import io
import struct

outfile = io.BytesIO()
outfile.writelines(...)  # some binary data stream
buf = outfile.getvalue()
blen = len(buf) // 4
for i in range(blen):
    a = bytearray([0, 0, 0, 0])
    a[0] = buf[i*4]
    a[1] = buf[i*4+1]
    a[2] = buf[i*4+2]
    a[3] = buf[i*4+3]
    data = struct.unpack('<l', a)[0]
    # do something with data

Your question and accompanying pseudo-code are somewhat hazy in my opinion, but here's something that uses slices of buf to obtain each group of 4 bytes needed, so if nothing else it's at least a bit more succinct (assuming I've correctly interpreted what you're asking):
import io
import struct

outfile = io.BytesIO()
outfile.writelines([b'\x00\x01\x02\x03',
                    b'\x04\x05\x06\x07'])
buf = outfile.getvalue()

for i in range(0, len(buf), 4):
    data = struct.unpack('<l', buf[i:i+4])[0]
    print(hex(data))
Output:
0x3020100
0x7060504
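If the buffer length is a multiple of 4, the loop can also be collapsed into a single pass with struct.iter_unpack; a minimal sketch, assuming the same buf as above:
import struct

# Decode every 4-byte group as a little-endian signed 32-bit integer.
values = [v for (v,) in struct.iter_unpack('<l', buf)]
print([hex(v) for v in values])  # ['0x3020100', '0x7060504']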

Related

Load a hex file in IntelHex format in Python

I saved one data set (200 double values) from Keil; it turns out to be a .hex file in IntelHex format. I installed IntelHex in Python and loaded it. Now the problem is that I do not know how to interpret it. For example, this post tells you to use a dict, but that does not work for a hex file containing double data. My code:
from intelhex import IntelHex

ih = IntelHex()  # create empty object
ih.loadhex('output.hex')
ihdict = ih.todict()
datastr = ""
startAddress = 536871952
while ihdict.get(startAddress) is not None:
    datastr += str("%0.2X" % ihdict.get(startAddress))
    startAddress += 1
The file output.hex:
:020000042000DA
:0802A8003FB7809F5BC03F409F
:1002B000DFB56EEF5AB73F407F717CBF38BE3F401D
:1002C000DFD369EFE9B43F407F717CBF38BE3F4068
:1002D0003F895E9F44AF3F401F706A0F38B53F4073
:1002E0009F20584F10AC3F405F5F72AF2FB93F4027
:1002F000DFB56EEF5AB73F40DF5B7DEFADBE3F40ED
:10030000BFA364DF51B23F40DF62676FB1B33F40CC
:100310001F9E8C0F4FC63F405F0C6B2F86B53F4032
:10032000BF7542DF3AA13F403F4D689F26B43F4032
:100330009F2742CF13A13F40DF2D5BEF96AD3F409B
:100340009F915ACF48AD3F40DF874CEF43A63F40D7
:100350007FD2573FE9AB3F40FD721E7F398F3F4050
:10036000FF892FFFC4973F409D5311CFA9883F407D
:100370001F706A0F38B53F407F78663F3CB33F40FF
:100380001DFD148F7E8A3F401F954F8FCAA73F40A7
:100390005FC04D2FE0A63F401F0D3C8F069E3F40A3
:1003A0007F4A443F25A23F40DFE13DEFF09E3F40C2
:1003B0003F185C1F0CAE3F403F79379FBC9B3F40CE
:1003C000FF2F3EFF179F3F40DFBC586F5EAC3F40A2
:1003D000FD36287F1B943F403F3D419F9EA03F40FC
:1003E000FFFA317FFD983F409FF2354FF99A3F4029
:1003F0007D0511BF82883F40DF703B6FB89D3F4055
:10040000FF1143FF88A13F40DD60146F308A3F40F9
:100410001F49328F24993F407D230CBF11863F40F6
:100420009DBD29CFDE943F40BFED2EDF76973F4044
:10043000DDBA056FDD823F407D58183F2C8C3F4070
:100440007F3333BF99993F40DD9C0A6F4E853F4013
:100450007F3333BF99993F403DBC171FDE8B3F4030
:10046000BDF4185F7A8C3F403D16091F8B843F40D6
:100470003DE1FC9E707E3F40DD0D0DEF86863F40E6
:100480003F1F469F0FA33F403DFFF79EFF7B3F402E
:10049000DD42196FA18C3F40BDC6F65E637B3F40D5
:1004A0009D5AFB4EAD7D3F40FD4BE6FE25733F4020
:1004B0001D12D30E89693F403D8EF51EC77A3F401D
:1004C0003DBC171FDE8B3F409D7FE0CE3F703F401D
:1004D000BD6C055FB6823F40DD4903EFA4813F401C
:1004E000DD14F76E8A7B3F40DDF6FB6EFB7D3F40FF
:1004F000BD20E85E10743F40DD6EE86E37743F400B
:100500001DB1F78ED87B3F407DFCD33EFE693F4056
:100510009D4C274FA6933F407DD7EEBE6B773F4063
:10052000DDAADE6E556F3F401D55B38EAA593F4080
:100530009DF7CCCE7B663F40DDCFC3EEE7613F4009
:10054000BDF2C55EF9623F40BD7AD95EBD6C3F40E9
:100550005D90D82E486C3F40BD7AD95EBD6C3F405F
:100560003D67BD9EB35E3F403D0DCC9E06663F405D
:10057000BD88AD5EC4563F40BDA6A85E53543F4003
:10058000BDD4CA5E6A653F40FD95B0FE4A583F4003
:100590003DA3B39ED1593F405DF89D2EFC4E3F4098
:1005A000FD69E1FEB4703F40FD59BAFE2C5D3F404D
:1005B000BD63C8DE31643F407DB0B63E585B3F400E
:1005C0001DCD9F8EE64F3F405DEAC92EF5643F404A
:1005D000FD4993FEA4493F405D8E852EC7423F40B2
:1005E0007D4D88BE26443F401D3EA20E1F513F4018
:1005F0003D938C9E49463F40FD7E9F7EBF4F3F40CE
:10060000DDB1C8EE58643F40BD7F70DE3F383F40EB
:100610005DBCA72EDE533F409D4197CEA04B3F408F
:100620003D0B799E853C3F403D0B799E853C3F408C
:10063000BD7F70DE3F383F407D6B83BEB5413F409C
:10064000FD32827E19413F409D2A864E15433F4030
:10065000FDEFA1FEF7503F401DC4620E62313F40E6
:100660003D476F9EA3373F401D98930ECC493F40B6
:100670001D53608E29303F403DF4671EFA333F40E2
:100680003D048F1E82473F407D726D3EB9363F402C
:10069000FDF68B7EFB453F40FD5767FEAB333F4089
:1006A0005D7774AE3B3A3F40BD2C695E96343F4067
:1006B0009DF579CEFA3C3F40DD4C476EA6233F4086
:1006C0001D1E540E0F2A3F407DE36FBEF1373F40A1
:1006D000FD7C4C7E3E263F40DD8153EEC0293F40ED
:1006E0009D034ECE01273F409D1375CE893A3F4072
:1006F000DDC4336EE2193F407D444B3EA2253F40AE
:100700005DD84F2EEC273F407D0F3FBE871F3F40F7
:100710007DF143BEF8213F40BDE04B5EF0253F40F8
:100720005D8548AE42243F405DD84F2EEC273F40C8
:100730007D5B5CBE2D2E3F40FD40567E202B3F4012
:10074000BD514EDE28273F40FD2945FE94223F4003
:100750001DD9208E6C103F409DE552CE72293F403E
:100760003D91399EC81C3F407D16293E8B143F4069
:100770007D6930BE34183F409D9935CECC1A3F403C
:100780005DFD34AE7E1A3F409DB730CE5B183F40D2
:100790003DAF349E571A3F405D8C322E46193F4084
:1007A0003D81129E40093F405DC8282E64143F40A1
:1007B0007D34243E1A123F405DC13EAE601F3F4073
:1007C0003D2E0B1E97053F405D6E372EB71B3F40F9
:1007D0003D09269E04133F403DCD2F9EE6173F4026
:1007E000BD6F49DEB7243F403D5C2D1EAE163F4035
:1007F0001DE00A0E70053F403DBD089E5E043F406F
:100800009D890ECE44073F401D86190EC30C3F4004
:10081000BDD70EDE6B073F401DBB258EDD123F406E
:100820001DBB258EDD123F407D06023E03013F4089
:10083000BDD0245E68123F40FDA81B7ED40D3F4012
:100840005D14462E0A233F40FD12347E091A3F40B4
:100850009D180C4E0C063F401D681E0E340F3F4085
:100860001D510D8EA8063F403DD4191EEA0C3F4095
:100870001B94ED0DCAF63E405D40152EA00A3F4088
:100880009D64294EB2143F407DF82D3EFC163F403A
:100890001DD9208E6C103F405DE6232EF3113F40A2
:1008A0007DA526BE52133F40BDB913DEDC093F4093
:1008B0005D2904AE14023F40BBE5E2DD72F13E402B
:1008C0009D180C4E0C063F40BD84075EC2033F409E
:1008D0005D221A2E110D3F40FDC6167E630B3F4070
:0108E0009D7A
:00000001FF
Assuming the data represents a list of 64-bit floating point numbers that you want to decode, the process is to collect the appropriate number of octets and decode them as a double.
Reusing the structure you presented:
from intelhex import IntelHex
import struct

ih = IntelHex()
ih.loadhex('output.hex')
ihdict = ih.todict()

# Read all the data into a long list of int octets
data = []
startAddress = 536871952
while ihdict.get(startAddress) is not None:
    data.append(ihdict.get(startAddress))
    startAddress += 1

# Slice the list into 8-byte bytearrays
bin_arr = [bytearray(data[n:n+8]) for n in range(0, len(data), 8)]

# Unpack each bytearray as a double.
# Filter for 8-byte arrays because len(data) is not divisible by 8.
# Is the data properly aligned?
doubles_list = [struct.unpack('d', b) for b in bin_arr if len(b) == 8]
print(doubles_list)
It may be worth mentioning that 'd' with no prefix uses the machine's native byte order. You can use '<d' or '>d' as the format to force a little-endian or big-endian interpretation respectively. More information is available in the struct.unpack docs.
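As a compact variant, assuming the collected octets form complete little-endian IEEE 754 doubles, struct.iter_unpack can decode the whole sequence in one pass; a sketch that also drops any trailing partial group:
import struct

raw = bytes(data)                 # the octet list gathered above
usable = len(raw) - len(raw) % 8  # drop bytes that don't fill an 8-byte group
doubles_list = [d for (d,) in struct.iter_unpack('<d', raw[:usable])]
print(doubles_list)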

Get int value from each two bytes

I am trying to read bytes from an image and get all the int (16-bit) values from that image.
After parsing the image header, I got to the pixel values. The value I get when a pair of bytes looks like b"\xd4\x00" is incorrect. In this case it should be 54272, not 3392.
These are parts of the code:
I use a generator to get the bytes:
import itertools

def osddef_generator(in_file):
    with open(in_file, mode='rb') as f:
        dat = f.read()
        for byte in dat:
            yield byte

def take_slice(in_generator, size):
    return ''.join(str(chr(i)) for i in itertools.islice(in_generator, size))

def take_single_pixel(in_generator):
    pix = itertools.islice(in_generator, 2)
    hex_list = [hex(i) for i in pix]
    hex_str = "".join(hex_list)[2:].replace("0x", '')
    intval = int(hex_str, 16)
    print("hex_list: ", hex_list)
    print("hex_str: ", hex_str)
    print("intval: ", intval)
After I get the header correctly using the take_slice method, I get to the part with the pixel values, where I use the take_single_pixel method.
Here, I get bad results.
This is what I get:
hex_list: ['0xd4', '0x0']
hex_str: d40
intval: 3392
But the actual sequence of bytes that should be interpreted is \xd4\x00, which equals 54272, so I would expect hex_list = ['0xd4', '0x00'] and hex_str = d400.
Something goes wrong whenever the second byte of the pair is \x00.
Got any ideas? Thanks!
There are much better ways of converting bytes to integers:
int.from_bytes() takes a bytes input and a byte order argument:
>>> int.from_bytes(b"\xd4\x00", 'big')
54272
>>> int.from_bytes(b"\xd4\x00", 'little')
212
The struct.unpack() function lets you convert a whole series of bytes to integers following a pattern:
>>> import struct
>>> struct.unpack('!4H', b'\xd4\x00\xd4\x00\xd4\x00\xd4\x00')
(54272, 54272, 54272, 54272)
The array module lets you read homogeneous binary integer data into a memory structure efficiently:
>>> import array
>>> arr = array.array('H')
>>> arr.fromfile(fileobject, number_of_items)
However, array can't be told what byte order to use. You'd have to determine the current architecture byte order and call arr.byteswap() to reverse order if the machine order doesn't match the file order.
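For instance, a small sketch of that check, assuming the file stores big-endian 16-bit values and using 'pixels.bin' as a placeholder filename:
import array
import sys

arr = array.array('H')
with open('pixels.bin', 'rb') as f:  # placeholder filename
    arr.frombytes(f.read())          # file length assumed to be a multiple of 2
if sys.byteorder == 'little':        # file assumed big-endian here
    arr.byteswap()                   # reverse the byte order of every value
print(arr[:4].tolist())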
When reading image data, it is almost always preferable to use the struct module to do the parsing. You generally then use file.read() calls with specific sizes; if the header consists of 10 bytes, use:
headerinfo = struct.unpack('<expected header pattern for 10 bytes>', f.read(10))
and go from there. For examples, look at the Pillow / PIL image plugins source code; here is how the Blizzard Mipmap image format header is read:
def _read_blp_header(self):
    self._blp_compression, = struct.unpack("<i", self.fp.read(4))

    self._blp_encoding, = struct.unpack("<b", self.fp.read(1))
    self._blp_alpha_depth, = struct.unpack("<b", self.fp.read(1))
    self._blp_alpha_encoding, = struct.unpack("<b", self.fp.read(1))
    self._blp_mips, = struct.unpack("<b", self.fp.read(1))

    self._size = struct.unpack("<II", self.fp.read(8))

    if self.magic == b"BLP1":
        # Only present for BLP1
        self._blp_encoding, = struct.unpack("<i", self.fp.read(4))
        self._blp_subtype, = struct.unpack("<i", self.fp.read(4))

    self._blp_offsets = struct.unpack("<16I", self.fp.read(16 * 4))
    self._blp_lengths = struct.unpack("<16I", self.fp.read(16 * 4))
Because struct.unpack() always returns tuples, you can assign individual elements of a tuple to name1, name2, ... names on the left-hand side, including single_name, = assignments to extract a single result.
The separate set of read calls above could also be compressed into fewer calls:
comp, enc, adepth, aenc, mips, *size = struct.unpack("<i4b2I", self.fp.read(16))
if self.magic == b"BLP1":
    # Only present for BLP1
    enc, subtype = struct.unpack("<2i", self.fp.read(8))
followed by specific attribute assignments.
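Applied back to the generator in the question, a minimal sketch, assuming the pixels are big-endian 16-bit values (which the expected 54272 implies):
import itertools
import struct

def take_single_pixel(in_generator):
    # Take two bytes from the generator and decode them as one
    # big-endian unsigned 16-bit value (54272 for b"\xd4\x00").
    pair = bytes(itertools.islice(in_generator, 2))
    return int.from_bytes(pair, 'big')

# Or, decoding many pixels at once from a bytes object dat
# (its length must be a multiple of 2):
# pixels = [v for (v,) in struct.iter_unpack('>H', dat)]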

Python - Efficient way to flip bytes in a file?

I've got a folder full of very large files whose byte order needs to be reversed within every 4-byte group. So essentially, I need to read each file as binary, adjust the sequence of bytes, and then write a new binary file with the bytes adjusted.
In essence, what I'm trying to do is read a hex string hexString that looks like this:
"00112233AABBCCDD"
And write a file that looks like this:
"33221100DDCCBBAA"
(i.e. every two characters is one byte, and I need to reverse the byte order within each group of 4 bytes)
I am very new to python and coding in general, and the way I am currently accomplishing this task is extremely inefficient. My code currently looks like this:
import binascii

with open(myFile, 'rb') as f:
    content = f.read()

hexString = str(binascii.hexlify(content))
flippedBytes = ""
inc = 0
while inc < len(hexString):
    flippedBytes += hexString[inc + 6:inc + 8]
    flippedBytes += hexString[inc + 4:inc + 6]
    flippedBytes += hexString[inc + 2:inc + 4]
    flippedBytes += hexString[inc:inc + 2]
    inc += 8
# ..... write the flippedBytes to file, etc
The code above accurately accomplishes what I need (note: my actual code has a few extra "hexString.replace()" lines to remove unnecessary hex characters, but I've left those out to keep the above easier to read). My ultimate problem is that it takes EXTREMELY long to run with larger files. Some of the files I need to flip are almost 2 GB in size, and the code was going to take almost half a day to complete a single file. I've got dozens of files to run this on, so that timeframe simply isn't practical.
Is there a more efficient way to flip the hex values in a file in 4-byte groups?
.... for what it's worth, there is a tool called WinHEX that can do this manually, and only takes a minute max to flip the whole file.... I was just hoping to automate this with python so we didn't have to manually use WinHEX each time
You want to convert your 4-byte integers from little-endian to big-endian, or vice-versa. You can use the struct module for that:
import struct

with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    while True:
        d = infile.read(4)
        if not d:
            break
        le = struct.unpack('<I', d)
        be = struct.pack('>I', *le)
        of.write(be)
Here is a little struct awesomeness to get you started:
>>> import struct
>>> s = b'\x00\x11\x22\x33\xAA\xBB\xCC\xDD'
>>> a, b = struct.unpack('<II', s)
>>> s = struct.pack('>II', a, b)
>>> ''.join([format(x, '02x') for x in s])
'33221100ddccbbaa'
To do this at full speed for a large input, use struct.iter_unpack.
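A sketch of that approach, assuming each file's length is a multiple of 4, processing the file in fixed-size chunks to keep memory use bounded:
import struct

CHUNK = 4 * 1024 * 1024  # bytes per read, a multiple of 4
with open(myfile, 'rb') as infile, open(myoutput, 'wb') as of:
    while True:
        chunk = infile.read(CHUNK)
        if not chunk:
            break
        # Unpack the chunk as little-endian 32-bit words, then repack big-endian.
        words = [w for (w,) in struct.iter_unpack('<I', chunk)]
        of.write(struct.pack('>{}I'.format(len(words)), *words))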

Efficiently unpack mono12packed bitstring format with Python

I have raw data from a camera in the mono12packed format. This is an interleaved bit format that stores 2 12-bit integers in 3 bytes to eliminate overhead. Explicitly, the memory layout for each 3 bytes looks like this:
Byte 1 = Pixel0 Bits 11-4
Byte 2 = Pixel1 Bits 3-0 + Pixel0 Bits 3-0
Byte 3 = Pixel1 Bits 11-4
I have a file from which all the bytes can be read with a binary read; let's assume it is called binfile.
To get the pixel data from the file I do:
from bitstring import BitArray as Bit

f = open(binfile, 'rb')
bytestring = f.read()
f.close()

a = []
for i in range(len(bytestring) // 3):  # reading 2 pixels = 3 bytes at a time
    s = Bit(bytes=bytestring[i*3:i*3+3], length=24)
    p0 = s[0:8] + s[12:16]
    p1 = s[16:] + s[8:12]
    a.append(p0.unpack('uint:12')[0])
    a.append(p1.unpack('uint:12')[0])
which works, but is horribly slow, and I would like to do it more efficiently because I have to do this for a huge amount of data.
My idea is that by reading more than 3 bytes at a time I could save some time in the conversion step, but I can't figure out how to do that.
Another idea is that, since the bits come in packs of 4, maybe there is a way to work on nibbles rather than on bits.
Data example:
The bytes
'\x07\x85\x07\x05\x9d\x06'
lead to the data
[117, 120, 93, 105]
Have you tried bitwise operators? Maybe that's a faster way:
with open('binfile.txt', 'rb') as binfile:
    bytestring = list(bytearray(binfile.read()))

a = []
for i in range(0, len(bytestring), 3):
    px_bytes = bytestring[i:i+3]
    p0 = (px_bytes[0] << 4) | (px_bytes[1] & 0x0F)
    p1 = (px_bytes[2] << 4) | (px_bytes[1] >> 4 & 0x0F)
    a.append(p0)
    a.append(p1)
print(a)
This also outputs:
[117, 120, 93, 105]
Hope it helps!
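If numpy is available, the same bit twiddling can be vectorised over the whole buffer at once, which is usually much faster for large files; a sketch, assuming the input length is a multiple of 3:
import numpy as np

raw = np.frombuffer(b'\x07\x85\x07\x05\x9d\x06', dtype=np.uint8)
triples = raw.reshape(-1, 3).astype(np.uint16)   # one row per 3-byte group
p0 = (triples[:, 0] << 4) | (triples[:, 1] & 0x0F)
p1 = (triples[:, 2] << 4) | (triples[:, 1] >> 4)
pixels = np.empty(2 * len(triples), dtype=np.uint16)
pixels[0::2] = p0                                # interleave pixel 0 and pixel 1
pixels[1::2] = p1
print(pixels.tolist())                           # [117, 120, 93, 105]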

Can you use numpy's fromfile to store files in an ndarray?

I'm still pretty new to python. I've been able to write a program that will read in a file from binary and stores the data that's there in a few arrays. Now that I've been able to complete a few other tasks with this program, I'm trying to go back through all my code and see where I can make it more efficient, learning Python better along the way. In particular, I'm trying to update the reading and storing of data from the file. Using numpy's fromfile is MUCH, MUCH faster at unpacking data than the struct.unpack method, and works wonderfully for a 1D array structure. However, I have some of the data stored in 2D arrays. I am seemingly stuck on how to implement the same type of storing in the 2D array. Does anyone have any ideas or hints as to how I may be able to perform this?
My basic program structure is as follows:
from numpy import fromfile
import numpy as np

file = open(theFilePath, 'rb')

####### File Header #########
reservedParse = 4
fileHeaderBytes = 4 + int(131072/reservedParse)  # Parsing out the bins in the file header
fileHeaderArray = np.zeros(fileHeaderBytes)
fileHeaderArray[0] = fromfile(file, dtype='<I', count=1)  # File Index; 4 bytes
fileHeaderArray[1] = fromfile(file, dtype='<I', count=1)  # The number of packets; 4 bytes
fileHeaderArray[2] = fromfile(file, dtype='<Q', count=1)  # Timestamp; 16 bytes; 2 x 8-byte values
fileHeaderArray[3] = fromfile(file, dtype='<Q', count=1)
fileHeaderArray[4:] = fromfile(file, dtype='<I', count=int(131072/reservedParse))  # Empty header space

####### Data Packets #########
# Note: each packet begins with a header containing information about the data stream, followed by the data.
packets = int(fileHeaderArray[1])  # The number of packets in the data stream
dataLength = int(28672)
packHeader = np.zeros(14*packets).reshape((packets, 14))
data = np.zeros(dataLength*packets).reshape((packets, dataLength))

for i in range(packets):
    packHeader[i][0] = fromfile(file, dtype='>H', count=1)  # Model Num
    packHeader[i][1] = fromfile(file, dtype='>H', count=1)  # Packet ID
    packHeader[i][2] = fromfile(file, dtype='>I', count=1)  # Coarse Timestamp
    # .... continuing on
    packHeader[i][13] = fromfile(file, dtype='>i', count=1)  # 4 bytes of reserved space
    data[i] = fromfile(file, dtype='<h', count=dataLength)  # Actual data
Essentially this is what I have right now. Is there a way I can do this without doing the loop? Going through that loop does not seem particularly fast or numpy-ish.
For reference, the for-loop structure using unpack and not numpy is:
from struct import unpack
from itertools import chain

packHeader = [[0 for x in range(14)] for y in range(packets)]
data = [[0 for x in range(dataLength)] for y in range(packets)]

for i in range(packets):
    packHeader[i][0] = unpack('>H', file.read(2))  # Model Num
    packHeader[i][1] = unpack('>H', file.read(2))  # Packet ID
    packHeader[i][2] = unpack('>I', file.read(4))  # Coarse Timestamp
    # .... continuing on
    packHeader[i][13] = unpack('>i', file.read(4))  # 4 bytes of reserved space
    packHeader[i] = list(chain(*packHeader[i]))  # Deals with the tuple issue ((x,),(y,),...) -> (x,y,...)
    data[i] = [unpack('<h', file.read(2)) for j in range(dataLength)]  # Actual data
    data[i] = list(chain(*data[i]))  # Deals with the tuple issue ((x,),(y,),...) -> (x,y,...)
Let me know if any clarification is needed.
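One way to avoid the Python-level loop, sketched below, is a numpy structured dtype describing one whole packet, so a single fromfile call reads every packet at once; the field names and the lumped 'rest' field are placeholders, since the full header layout is elided above:
import numpy as np

packet_dtype = np.dtype([
    ('model',  '>u2'),            # Model Num, big-endian uint16
    ('pkt_id', '>u2'),            # Packet ID
    ('coarse', '>u4'),            # Coarse Timestamp
    ('rest',   '>i4', (10,)),     # placeholder for the remaining header fields
    ('data',   '<i2', (28672,)),  # little-endian int16 samples
])
# Reads from the current position, i.e. right after the file header.
packets_arr = np.fromfile(file, dtype=packet_dtype, count=packets)
data = packets_arr['data']        # shape (packets, 28672), no explicit loop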
