Convert buffer representing a list of little-endian ints to a Python class - python

I'm trying to get data from a buffer represented as a string.
Example:
Got:
str = "0004000001000000020000000A000000"
class MyData:
    length
    some_data
    array_data
    buf_data
data = parse(str)
Expected:
length=1024, some_data=1, array_data=[2,10], buf_data="000000020000010"
Explanation:
length=1024, since the 8 hex digits "00040000" represent a hex number in little-endian,
and the rest follow the same idea:
"00040000 01000000 02000000 0A000000"
1024, 1, 2, 10
Any idea?
I have a solution, but it's too messy and isn't easy to maintain.

This is one way to do it:
class MyData:
    mmap = [16**1, 16**0, 16**3, 16**2, 16**5, 16**4, 16**7, 16**6]

    def __init__(self, buffer):
        self.buffer = buffer
        self.integers = []

    def get_integers(self):
        if len(self.integers) == 0:
            for i in range(0, len(self.buffer), 8):
                a = 0
                for x, y in zip(self.buffer[i:i+8], self.mmap):
                    a += int(x, 16) * y
                self.integers.append(a)
        return self.integers
mydata = MyData('0004000001000000020000000A000000')
print(mydata.get_integers())
Output:
[1024, 1, 2, 10]
NOTE: This is specifically for 32-bit unsigned values
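For comparison, the standard struct module can do the same decode in one call; a minimal sketch (parse_le_u32 is just a name I picked), assuming the buffer is a hex string of 32-bit little-endian unsigned values:
import struct

def parse_le_u32(hexstr):
    # bytes.fromhex() turns the hex string into raw bytes;
    # '<' selects little-endian, 'I' an unsigned 32-bit int,
    # and the count repeats it once per 4-byte group.
    raw = bytes.fromhex(hexstr)
    return list(struct.unpack('<%dI' % (len(raw) // 4), raw))

print(parse_le_u32('0004000001000000020000000A000000'))  # [1024, 1, 2, 10]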


Output is incorrect (list not sorted)

The program takes as input a data set of orders, where id, t selection and t shipping
are of type unsigned int, n is the number of orders, and fields are separated by a comma and a space character:
id1, t selection1, t shipping1; ...; idn, t selectionn, t shippingn \n.
The expected output is a space-separated list of the ids, sorted by t selection + t shipping
and terminated by a newline \n.
Input: 1, 500, 100; 2, 700, 100; 3, 100, 100; 4, 50, 50\n
Output: 4 3 1 2\n
My output however shows this:
output: 4 1 2 3
Could somebody help me fix this? Thanks in advance. Below you can see my code. There are some annotations from my teacher in it, btw; don't mind them.
#!/usr/bin/env python3
import sys

class Order:
    def __init__(self, id: int, selection_time: int, shipping_time: int):
        self.id: int = id
        self.selection_time: int = selection_time
        self.shipping_time: int = shipping_time
        '''
        Remove me if you don't need me.
        Add a method to assign to me.
        '''
        self.next: Order = None

'''
Make your life easier and your code prettier, use `Operator Overloading`.
'''
def sort(data):
    sorted_order = selection_t + shipping_t
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if sorted_order[i] > sorted_order[j]:
                data[i], data[j] = data[j], data[i]
    return data

if __name__ == '__main__':
    '''
    Retrieves and splits the input
    '''
    data = input()
    data = data.split('; ')
    for d in data:
        id, selection_t, shipping_t = d.split(', ', 2)
        order: Order = Order(int(id), int(selection_t), int(shipping_t))
    sort(data)
    for order.id in data:
        sys.stdout.write(order.id[0])
        sys.stdout.write(" ")
As pointed out by Matt, you're not actually using the Order class. A funny thing about classes is that they have a number of so-called magic methods (the naming is apt).
The magic method that is useful to you in this case is __lt__, which is short for Less Than.
If you set up this magic method for the class, you can simply call sort on a list containing only instances of that class.
If you do not want to use this magic method, the other option is to use a lambda to tell the sort method how to sort the list. This is very well explained here.
(Also, I removed the sys.stdout and replaced it with the standard print.)
#!/usr/bin/env python3

class Order:
    def __init__(self, id: int, selection_time: int, shipping_time: int):
        self.id: int = id
        self.selection_time: int = selection_time
        self.shipping_time: int = shipping_time
        self.sort_value: int = shipping_time + selection_time

    def __lt__(self, other) -> bool:
        return self.sort_value < other.sort_value

if __name__ == "__main__":
    data = "1, 500, 100; 2, 700, 100; 3, 100, 100; 4, 50, 50"
    data = data.split("; ")
    order_list = []
    order_list_lambda = []
    for d in data:
        id, selection_t, shipping_t = [int(s) for s in d.split(", ")]
        order: Order = Order(id, selection_t, shipping_t)
        order_list.append(order)
        order_list_lambda.append(order)
    print("using __lt__ class magic method")
    order_list.sort()
    for order in order_list:
        print(order.id)
    print("-----")
    print("using lambda")
    order_list_lambda.sort(key=lambda x: x.sort_value)
    for order in order_list_lambda:
        print(order.id)
output
using __lt__ class magic method
4
3
1
2
-----
using lambda
4
3
1
2
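For completeness: the key function can also compute the sum on the fly, so the class doesn't even need a stored sort_value attribute:
order_list.sort(key=lambda o: o.selection_time + o.shipping_time)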

Python print floating point number in binary? [duplicate]

How to get the string as binary IEEE 754 representation of a 32 bit float?
Example
1.00 -> '00111111100000000000000000000000'
You can do that with the struct module:
import struct

def binary(num):
    return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num))

That packs the number as a network byte-ordered float, then converts each of the resulting bytes into an 8-bit binary representation and concatenates them:
>>> binary(1)
'00111111100000000000000000000000'
Edit:
There was a request to expand the explanation. I'll expand it using intermediate variables and comment each step.
def binary(num):
    # struct can pack the float into bytes. The '!' ensures that
    # it's in network byte order (big-endian) and the 'f' says that it should be
    # packed as a float. Alternatively, for double-precision, you could use 'd'.
    packed = struct.pack('!f', num)
    print('Packed: %s' % repr(packed))

    # In Python 3, iterating over a bytes object yields the integer value of
    # each byte directly (in Python 2 you would need ord() on each character).
    #
    # [62, 163, 215, 10] == list(b'>\xa3\xd7\n')
    integers = list(packed)
    print('Integers: %s' % integers)

    # For each integer, we'll convert it to its binary representation.
    binaries = [bin(i) for i in integers]
    print('Binaries: %s' % binaries)

    # Now strip off the '0b' from each of these
    stripped_binaries = [s.replace('0b', '') for s in binaries]
    print('Stripped: %s' % stripped_binaries)

    # Pad each byte's binary representation with 0's to make sure it has all 8 bits:
    #
    # ['00111110', '10100011', '11010111', '00001010']
    padded = [s.rjust(8, '0') for s in stripped_binaries]
    print('Padded: %s' % padded)

    # At this point, we have each of the bytes for the network byte ordered float
    # in an array as binary strings. Now we just concatenate them to get the total
    # representation of the float:
    return ''.join(padded)
And the result for a few examples:
>>> binary(1)
Packed: b'?\x80\x00\x00'
Integers: [63, 128, 0, 0]
Binaries: ['0b111111', '0b10000000', '0b0', '0b0']
Stripped: ['111111', '10000000', '0', '0']
Padded: ['00111111', '10000000', '00000000', '00000000']
'00111111100000000000000000000000'
>>> binary(0.32)
Packed: b'>\xa3\xd7\n'
Integers: [62, 163, 215, 10]
Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010']
Stripped: ['111110', '10100011', '11010111', '1010']
Padded: ['00111110', '10100011', '11010111', '00001010']
'00111110101000111101011100001010'
Here's an ugly one ...
>>> import struct
>>> bin(struct.unpack('!i',struct.pack('!f',1.0))[0])
'0b111111100000000000000000000000'
Basically, I just used the struct module to convert the float to an int ...
Here's a slightly better one using ctypes:
>>> import ctypes
>>> bin(ctypes.c_uint32.from_buffer(ctypes.c_float(1.0)).value)
'0b111111100000000000000000000000'
Basically, I construct a float and use the same memory location, but I tag it as a c_uint32. The c_uint32's value is a python integer which you can use the builtin bin function on.
Note: by switching types we can do the reverse operation as well
>>> ctypes.c_float.from_buffer(ctypes.c_uint32(int('0b111111100000000000000000000000', 2))).value
1.0
Also, for a double-precision 64-bit float we can use the same trick with ctypes.c_double & ctypes.c_uint64 instead.
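For instance, a quick sketch of the 64-bit variant (1.0 as a double is 0x3FF0000000000000, so bin() shows ten 1 bits followed by 52 zeros):
>>> bin(ctypes.c_uint64.from_buffer(ctypes.c_double(1.0)).value)
'0b11111111110000000000000000000000000000000000000000000000000000'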
Found another solution using the bitstring module.
import bitstring
f1 = bitstring.BitArray(float=1.0, length=32)
print(f1.bin)
Output:
00111111100000000000000000000000
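The reverse interpretation also works with the same module; assuming a 32- or 64-bit length, BitArray exposes the bits as a big-endian IEEE 754 float through the .float property:
f2 = bitstring.BitArray(bin='00111111100000000000000000000000')
print(f2.float)  # 1.0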
For the sake of completeness, you can achieve this with numpy using:
import numpy as np

f = 1.00
int32bits = np.asarray(f, dtype=np.float32).view(np.int32).item()  # item() optional
You can then print this, with padding, using the b format specifier
print('{:032b}'.format(int32bits))
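A sketch of the reverse direction with the same view trick (uint32 avoids an overflow error when the sign bit is set):
bits = '00111111100000000000000000000000'
f = np.array(int(bits, 2), dtype=np.uint32).view(np.float32).item()
print(f)  # 1.0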
With these two simple functions (Python >=3.6) you can easily convert a float number to binary and vice versa, for IEEE 754 binary64.
import struct
def bin2float(b):
    ''' Convert binary string to a float.

    Attributes:
        :b: Binary string to transform.
    '''
    h = int(b, 2).to_bytes(8, byteorder="big")
    return struct.unpack('>d', h)[0]


def float2bin(f):
    ''' Convert float to 64-bit binary string.

    Attributes:
        :f: Float number to transform.
    '''
    [d] = struct.unpack(">Q", struct.pack(">d", f))
    return f'{d:064b}'
For example:
print(float2bin(1.618033988749894))
print(float2bin(3.14159265359))
print(float2bin(5.125))
print(float2bin(13.80))
print(bin2float('0011111111111001111000110111011110011011100101111111010010100100'))
print(bin2float('0100000000001001001000011111101101010100010001000010111011101010'))
print(bin2float('0100000000010100100000000000000000000000000000000000000000000000'))
print(bin2float('0100000000101011100110011001100110011001100110011001100110011010'))
The output is:
0011111111111001111000110111011110011011100101111111010010100100
0100000000001001001000011111101101010100010001000010111011101010
0100000000010100100000000000000000000000000000000000000000000000
0100000000101011100110011001100110011001100110011001100110011010
1.618033988749894
3.14159265359
5.125
13.8
I hope you like it, it works perfectly for me.
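If you need the 32-bit (binary32) versions instead, the same pattern works with f/I in place of d/Q and 4 bytes instead of 8; a sketch (the function names are mine):
def bin2float32(b):
    h = int(b, 2).to_bytes(4, byteorder="big")
    return struct.unpack('>f', h)[0]

def float2bin32(f):
    [d] = struct.unpack(">I", struct.pack(">f", f))
    return f'{d:032b}'

print(float2bin32(1.0))  # 00111111100000000000000000000000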
This problem is more cleanly handled by breaking it into two parts.
The first is to convert the float into an int with the equivalent bit pattern:
import struct

def float32_bit_pattern(value):
    return sum(ord(b) << 8*i for i, b in enumerate(struct.pack('f', value)))

Python 3 doesn't require ord to convert the bytes to integers, so the above can be simplified a little:
def float32_bit_pattern(value):
    return sum(b << 8*i for i, b in enumerate(struct.pack('f', value)))
Next convert the int to a string:
def int_to_binary(value, bits):
    return bin(value).replace('0b', '').rjust(bits, '0')
Now combine them:
>>> int_to_binary(float32_bit_pattern(1.0), 32)
'00111111100000000000000000000000'
Piggybacking on Dan's answer with a colored version for Python 3:
import struct
BLUE = "\033[1;34m"
CYAN = "\033[1;36m"
GREEN = "\033[0;32m"
RESET = "\033[0;0m"
def binary(num):
    return [bin(c).replace('0b', '').rjust(8, '0') for c in struct.pack('!f', num)]

def binary_str(num):
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:10], CYAN, bits[10:], RESET])

def binary_str_fp16(num):
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:10][-5:], CYAN, bits[10:][:11], RESET])
x = 0.7
print(x, "as fp32:", binary_str(0.7), "as fp16 is sort of:", binary_str_fp16(0.7))
After browsing through lots of similar questions I've written something which hopefully does what I wanted.
import struct

f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
s = struct.pack('>f', f)
p = struct.unpack('>l', s)[0]
hex_data = hex(p)
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
binrep is the result.
Each part will be explained.
f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
Converts the number to positive if it was negative, and records that in the variable negative. The reason for this is that the difference between the positive and negative binary representations is just the first bit, and this was simpler than figuring out what goes wrong when doing the whole process with negative numbers.
s = struct.pack('>f', f) #'?\x80\x00\x00'
p = struct.unpack('>l', s)[0] #1065353216
hex_data = hex(p) #'0x3f800000'
s is the packed binary representation of f; it is however not in the pretty form I need. That's where p comes in: it is the int representation of the packed bytes. And then another conversion gets a pretty hex.
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
scale is the base 16 for the hex. num_of_bits is 32; as a float is 32 bits, it is used later to fill the additional places with 0 to get to 32. Got the code for binrep from this question. If the number was negative, just change the first bit.
I know this is ugly, but I didn't find a nicer way and I needed it fast. Comments are welcome.
This is a little more than was asked, but it was what I needed when I found this entry. This code will give the mantissa, base and sign of an IEEE 754 32-bit float.
import ctypes
def binRep(num):
    binNum = bin(ctypes.c_uint.from_buffer(ctypes.c_float(num)).value)[2:]
    print("bits: " + binNum.rjust(32, "0"))
    mantissa = "1" + binNum[-23:]
    print("sig (bin): " + mantissa.rjust(24))
    mantInt = int(mantissa, 2)/2**23
    print("sig (float): " + str(mantInt))
    base = int(binNum[-31:-23], 2)-127
    print("base:" + str(base))
    sign = 1-2*("1" == binNum[-32:-31].rjust(1, "0"))
    print("sign:" + str(sign))
    print("recreate:" + str(sign*mantInt*(2**base)))

binRep(-0.75)
binRep(-0.75)
output:
bits: 10111111010000000000000000000000
sig (bin): 110000000000000000000000
sig (float): 1.5
base:-1
sign:-1
recreate:-0.75
Convert float between 0..1
def float_bin(n, places=3):
    if (n < 0 or n > 1):
        return "ERROR, n must be in 0..1"

    answer = "0."
    while n > 0:
        if len(answer) - 2 == places:
            return answer
        b = n * 2
        if b >= 1:
            answer += '1'
            n = b - 1
        else:
            answer += '0'
            n = b
    return answer
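For example, with the default of 3 places and then with more precision:
print(float_bin(0.625))          # 0.101
print(float_bin(0.1, places=8))  # 0.00011001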
Several of these answers did not work as written with Python 3, or did not give the correct representation for negative floating point numbers. I found the following to work for me (though this gives the 64-bit representation, which is what I needed):
import struct

def float_to_binary_string(f):
    def int_to_8bit_binary_string(n):
        stg = bin(n).replace('0b', '')
        fillstg = '0' * (8 - len(stg))
        return fillstg + stg
    return ''.join(int_to_8bit_binary_string(int(b)) for b in struct.pack('>d', f))
I made a very simple one. Please check it, and if you find any mistake let me know. It works fine for me.
sds = float(input("Enter the number : "))
sf = float("0." + (str(sds).split(".")[-1]))
aa = []
while len(aa) < 15:
    dd = round(sf*2, 5)
    if dd-1 > 0:
        aa.append(1)
        sf = dd-1
    else:
        sf = round(dd, 5)
        aa.append(0)
des = aa[:-1]
print("\n")
AA = ([str(i) for i in des])
print("So the Binary Of : %s>>>" % sds, bin(int(str(sds).split(".")[0])).replace("0b", '') + "." + "".join(AA))
Or, in the case of an integer number, just use bin(integer).replace("0b", '').
Let's use numpy!
import numpy as np
def binary(num, string=True):
    bits = np.unpackbits(np.array([num]).view('u1'))
    if string:
        return np.array2string(bits, separator='')[1:-1]
    else:
        return bits
e.g.,
binary(np.pi)
# '0001100000101101010001000101010011111011001000010000100101000000'
binary(np.pi, string=False)
# array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1,
# 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0,
# 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
# dtype=uint8)
You can use .format for the easiest representation of bits, in my opinion.
My code would look something like:
import struct

def fto32b(flt):
    # is given a 32 bit float value and converts it to a binary string
    if isinstance(flt, float):
        # THE FOLLOWING IS AN EXPANDED REPRESENTATION OF THE ONE LINE RETURN
        #   packed = struct.pack('!f', flt)  <- get the bytes in (!) big-endian format of a (f) float
        #   binaries = []
        #   for i in packed:                 <- iterating over bytes yields each byte's int value in Python 3
        #       binaries.append("{0:08b}".format(i))  <- get the 8-bit binary representation of each int (00100101)
        #   binarystring = ''.join(binaries) <- join all the bytes together
        #   return binarystring
        return ''.join(["{0:08b}".format(i) for i in struct.pack('!f', flt)])
    return None
Output:
>>> fto32b(5.0)
'01000000101000000000000000000000'
>>> fto32b(1.0)
'00111111100000000000000000000000'

Binary reading with Python gives unexpected results

I'm trying to read with Python some binary files generated with Zemax OpticStudio, for my analysis. The structure of the file is supposed to be the following:
2 x 32-bit integer as header
n chunks of data
Each chunk is made of:
a 32-bit integer indicating the number of C structs that come after
m C structures
The structures' definition is the following:
typedef struct
{
    unsigned int status;
    int level;
    int hit_object;
    int hit_face;
    int unused;
    int in_object;
    int parent;
    int storage;
    int xybin, lmbin;
    double index, starting_phase;
    double x, y, z;
    double l, m, n;
    double nx, ny, nz;
    double path_to, intensity;
    double phase_of, phase_at;
    double exr, exi, eyr, eyi, ezr, ezi;
}
which has a size of 208 bytes, for your convenience.
Here is the code that I wrote with some research and a couple of brilliant answers from here.
from pathlib import Path
from functools import partial
from io import DEFAULT_BUFFER_SIZE
import struct
def little_endian_int(x):
    return int.from_bytes(x, 'little')

def file_byte_iterator(path):
    """iterator over lazily loaded file
    """
    path = Path(path)
    with path.open('rb') as file:
        reader = partial(file.read1, DEFAULT_BUFFER_SIZE)
        file_iterator = iter(reader, bytes())
        for chunk in file_iterator:
            yield from chunk

def ray_tell(rays_idcs: list, ray_idx: int, seg_idx: int):
    idx = rays_idcs[ray_idx][0]
    idx += 4 + 208*seg_idx
    return idx

def read_header(bytearr: bytearray):
    version = int.from_bytes(bytearr[0:4], 'little')
    zrd_format = version//10000
    version = version % 10000
    num_seg_max = int.from_bytes(bytearr[4:8], 'little')
    return zrd_format, version, num_seg_max

def rays_indices(bytearr: bytearray):
    index = 8
    rays = []
    while index < len(bytearr):
        num_seg = int.from_bytes(bytearr[index:index+4], 'little')
        rays.append((index, num_seg))
        index = index + 4 + 208*num_seg
    return rays

def read_ray(bytearr: bytearray, ray):
    ray_idx, num_seg = ray
    data = []
    ray_idx = ray_idx + 4
    seg_idx = 0
    for ray_idx in range(8, 8 + num_seg*208, 208):
        offsets = [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200]
        int_vars = offsets[0:11]
        doubl_vars = offsets[11:]
        data_integ = [bytearr[ray_idx+offset:ray_idx+offset+4] for offset in int_vars]
        data_doubl = [bytearr[ray_idx+offset:ray_idx+offset+8] for offset in doubl_vars]
        data.append([seg_idx, data_integ, data_doubl])
        seg_idx += 1
    return data

file = "test_uncompressed.ZRD"
raypath = {}
filebin = bytearray(file_byte_iterator(file))
header = read_header(filebin)
print(header)
rays_idcs = rays_indices(filebin)
rays = []
for ray in rays_idcs:
    rays.append(read_ray(filebin, ray))
ray = rays[1]   # Random ray
segm = ray[2]   # Random segm
ints = segm[1]
doub = segm[2]
print("integer vars:")
for x in ints:
    print(x, little_endian_int(x))
print("double vars:")
for x in doub:
    print(x, struct.unpack('<d', x))
I have verified that all of the structures have the right size and number of chunks and structures (my reading matches the number of segments and rays that I read with Zemax), and thanks to the header, I verified the endianness of the file (little-endian).
My output is the following:
(0, 2002)
bytearray(b'\x1f\xd8\x9c?') 1067243551
bytearray(b'\x06\x80\x00\x00') 32774
bytearray(b'\x02\x00\x00\x00') 2
bytearray(b'\x11\x00\x00\x00') 17
bytearray(b'\x02\x00\x00\x00') 2
bytearray(b'\x00\x00\x00\x00') 0
bytearray(b'\x11\x00\x00\x00') 17
bytearray(b'\x01\x00\x00\x00') 1
bytearray(b'\x00\x00\x00\x00') 0
bytearray(b'\x00\x00\x00\x00') 0
double vars:
bytearray(b'\x00\x00\x00\x00# \xac\xe8') (-1.6425098109028998e+196,)
bytearray(b'\xe8\xe3\xf9?\x00\x00\x00\x00') (5.3030112e-315,)
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00') (0.0,)
bytearray(b'\x00\x00\x00\x00p_\xb4\xec') (-4.389425605765071e+215,)
bytearray(b'5\xe3\x9d\xbf\xf0\xbd"\xa2') (-3.001836066957746e-144,)
bytearray(b'z"\xc0?\x00\x00\x00\x00') (5.28431047e-315,)
bytearray(b'\x00\x00\x00\x00 \xc9+\xa3') (-2.9165705864036956e-139,)
bytearray(b'g\xd4\xcd?\x9ch{ ') (3.2707669223572687e-152,)
bytearray(b'q\x1e\xef?\x00\x00\x00\x00') (5.299523535e-315,)
bytearray(b'\x00\x00\x00\x00%\x0c\xb4A') (336340224.0,)
bytearray(b'\t\xf2u\xbf\\3L\xe6') (-5.991371249309652e+184,)
bytearray(b'\xe1\xff\xef\xbf1\x8dV\x1e') (1.5664573023148095e-162,)
bytearray(b'\xa1\xe9\xe8?\x9c\x9a6\xfc') (-2.202825582975923e+290,)
bytearray(b'qV\xb9?\x00\x00\x00\x00') (5.28210966e-315,)
bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00') (0.0,)
bytearray(b'\x00\x00\x00\x00\xc6\xfd\x0c\xa1') (-1.7713316840526727e-149,)
bytearray(b'\x96\x94\x8d?\xad\xf9(\xcc') (-7.838624888507203e+58,)
bytearray(b'yN\xb2\xbff.\\\x1a') (1.0611651097687064e-181,)
bytearray(b'\xb9*\xae?\xac\xaf\xe5\xe1') (-3.90257774261585e+163,)
bytearray(b'c\xab\xd2\xbf\xccQ\x8bj') (1.7130904564012918e+205,)
bytearray(b'\xc8\xea\x8c\xbf\xdf\xdc\xe49') (8.22891935818188e-30,)
Only the int values are read correctly. I don't understand why I get those binaries for all the other variables.
EDIT
I want to highlight that the bytearrays contain non-hexadecimal digits, and I'm sure the binary files are not corrupted, since I can read them in Zemax.
Solved.
It was just an error in my pointer arithmetic in the read_ray function. Thanks to Mad Physicist for his suggestion to unpack the whole structure, which put me in the right direction.
def read_ray(bytearr: bytearray, ray):
    ray_idx, num_seg = ray
    data = []
    assert num_seg == little_endian_int(bytearr[ray_idx:ray_idx+4])
    ray_idx = ray_idx + 4
    for seg_ptr in range(ray_idx, ray_idx + num_seg*208, 208):
        ...
        data_integ = [bytearr[seg_ptr+offset:seg_ptr+offset+4] for offset in int_vars]
        data_doubl = [bytearr[seg_ptr+offset:seg_ptr+offset+8] for offset in doubl_vars]
        ...
    return data
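For reference, here is a sketch of the whole-structure unpack that the suggestion points at, assuming the field order of the C struct in the question (one unsigned plus nine signed 32-bit ints, then 21 doubles, little-endian):
import struct

# '<' = little-endian, no padding; 'I' = unsigned int, 'i' = int, 'd' = double
SEGMENT = struct.Struct('<I9i21d')
assert SEGMENT.size == 208

def unpack_segment(buf, offset):
    # Returns a 31-tuple: status, level, hit_object, ..., ezr, ezi
    return SEGMENT.unpack_from(buf, offset)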

Represent number as a bytes using 16-bit blocks

I wish to convert a number like 683550 (0xA6E1E) to b'\x1e\x6e\x0a\x00', where the number of bytes in the array is a multiple of 2 and where the len of the bytes object is only so long as it needs to be to represent the number.
This is as far as I got:
"{0:0{1}x}".format(683550,8)
giving:
'000a6e1e'
Use the int.to_bytes method:
num = 683550
data = num.to_bytes((num.bit_length() + 15) // 16 * 2, "little")  # rounds the bit length up to a multiple of 16 bits
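Quick check against the example value:
>>> num = 683550
>>> num.to_bytes((num.bit_length() + 15) // 16 * 2, "little")
b'\x1en\n\x00'

(That is the same object as b'\x1e\x6e\x0a\x00'; the repr just shows printable bytes as characters.)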
Using Python 3:
def encode_to_my_hex_format(num, bytes_group_len=2, byteorder='little'):
    """
    #param byteorder can take the values 'little' or 'big'
    """
    bytes_needed = abs(-len(bin(num)[2:]) // 8)
    if bytes_needed % bytes_group_len:
        bytes_needed += bytes_group_len - bytes_needed % bytes_group_len
    num_in_bytes = num.to_bytes(bytes_needed, byteorder)
    encoded_num_in_bytes = b''
    for index in range(0, len(num_in_bytes), bytes_group_len):
        bytes_group = num_in_bytes[index: index + bytes_group_len]
        if byteorder == 'little':
            bytes_group = bytes_group[-1: -len(bytes_group) - 1: -1]
        encoded_num_in_bytes += bytes_group
    encoded_num = ''
    for byte in encoded_num_in_bytes:
        encoded_num += r'\x' + hex(byte)[2:].zfill(2)
    return encoded_num
print(encode_to_my_hex_format(683550))

How to efficiently parse fixed width files?

I am trying to find an efficient way of parsing files that holds fixed width lines. For example, the first 20 characters represent a column, from 21:30 another one and so on.
Assuming that the line holds 100 characters, what would be an efficient way to parse a line into several components?
I could use string slicing per line, but it's a little bit ugly if the line is big. Are there any other fast methods?
Using the Python standard library's struct module would be fairly easy as well as fairly fast, since it's written in C. The code below shows how to use it. It also allows columns of characters to be skipped by specifying negative values for the number of characters in a field.
import struct
fieldwidths = (2, -10, 24)
fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's') for fw in fieldwidths)
# Convert Unicode input to bytes and the result back to Unicode string.
unpack = struct.Struct(fmtstring).unpack_from # Alias.
parse = lambda line: tuple(s.decode() for s in unpack(line.encode()))
print('fmtstring: {!r}, record size: {} chars'.format(fmtstring, struct.calcsize(fmtstring)))
line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
fields = parse(line)
print('fields: {}'.format(fields))
Output:
fmtstring: '2s 10x 24s', record size: 36 chars
fields: ('AB', 'MNOPQRSTUVWXYZ0123456789')
Here's a way to do it with string slices, as you were considering, but were concerned it might get too ugly. It is kind of complicated, and speedwise it's about the same as the version based on the struct module, although I have an idea about how it could be sped up (which might make the extra complexity worthwhile); see the update below on that topic.
from itertools import zip_longest
from itertools import accumulate

def make_parser(fieldwidths):
    cuts = tuple(cut for cut in accumulate(abs(fw) for fw in fieldwidths))
    pads = tuple(fw < 0 for fw in fieldwidths)  # bool values for padding fields
    flds = tuple(zip_longest(pads, (0,)+cuts, cuts))[:-1]  # ignore final one
    parse = lambda line: tuple(line[i:j] for pad, i, j in flds if not pad)
    # Optional informational function attributes.
    parse.size = sum(abs(fw) for fw in fieldwidths)
    parse.fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's')
                               for fw in fieldwidths)
    return parse
line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
fieldwidths = (2, -10, 24) # negative widths represent ignored padding fields
parse = make_parser(fieldwidths)
fields = parse(line)
print('format: {!r}, rec size: {} chars'.format(parse.fmtstring, parse.size))
print('fields: {}'.format(fields))
Output:
format: '2s 10x 24s', rec size: 36 chars
fields: ('AB', 'MNOPQRSTUVWXYZ0123456789')
Update
As I suspected, there is a way of making the string-slicing version of the code faster; in Python 2.7 this makes it about the same speed as the version using struct, but in Python 3.x it makes it 233% faster (as well as faster than the un-optimized version of itself, which is about the same speed as the struct version).
What the version presented above does is define a lambda function that's primarily a comprehension generating the limits of a bunch of slices at runtime:
parse = lambda line: tuple(line[i:j] for pad, i, j in flds if not pad)
Depending on the values of i and j in the for loop, this is equivalent to a statement like:
parse = lambda line: (line[0:2], line[12:36], line[36:51], ...)
However, the latter executes more than twice as fast, since the slice boundaries are all constants.
Fortunately, it's relatively easy to convert and "compile" the former into the latter using the built-in eval() function:
def make_parser(fieldwidths):
    cuts = tuple(cut for cut in accumulate(abs(fw) for fw in fieldwidths))
    pads = tuple(fw < 0 for fw in fieldwidths)  # bool flags for padding fields
    flds = tuple(zip_longest(pads, (0,)+cuts, cuts))[:-1]  # ignore final one
    slcs = ', '.join('line[{}:{}]'.format(i, j) for pad, i, j in flds if not pad)
    parse = eval('lambda line: ({})\n'.format(slcs))  # Create and compile source code.
    # Optional informational function attributes.
    parse.size = sum(abs(fw) for fw in fieldwidths)
    parse.fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's')
                               for fw in fieldwidths)
    return parse
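The compiled parser is a drop-in replacement for the original:
parse = make_parser((2, -10, 24))
print(parse('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'))  # ('AB', 'MNOPQRSTUVWXYZ0123456789')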
I'm not really sure if this is efficient, but it should be readable (as opposed to doing the slicing manually). I defined a function slices that gets a string and column lengths, and returns the substrings. I made it a generator, so for really long lines it doesn't build a temporary list of substrings.
def slices(s, *args):
    position = 0
    for length in args:
        yield s[position:position + length]
        position += length
Example
In [32]: list(slices('abcdefghijklmnopqrstuvwxyz0123456789', 2))
Out[32]: ['ab']
In [33]: list(slices('abcdefghijklmnopqrstuvwxyz0123456789', 2, 10, 50))
Out[33]: ['ab', 'cdefghijkl', 'mnopqrstuvwxyz0123456789']
In [51]: d,c,h = slices('dogcathouse', 3, 3, 5)
In [52]: d,c,h
Out[52]: ('dog', 'cat', 'house')
But I think the advantage of a generator is lost if you need all columns at once. Where one could benefit is when you want to process columns one by one, say in a loop.
Two more options that are easier and prettier than the solutions already mentioned:
The first is using pandas:
import pandas as pd
path = 'filename.txt'
#inferred - as suggested in the comments by James Paul Mason
data = pd.read_fwf(path, colspecs='infer')
# Or using Pandas with a column specification
col_specification = [(0, 20), (21, 30), (31, 50), (51, 100)]
data = pd.read_fwf(path, colspecs=col_specification)
And the second option using numpy.loadtxt:
import numpy as np
# Using NumPy and letting it figure it out automagically
data_also = np.loadtxt(path)
It really depends on how you want to use your data.
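One addition to the pandas option: if the file has no header row, you can pass explicit column names (the names below are placeholders):
col_names = ['id', 'name', 'status', 'device']  # hypothetical names
data = pd.read_fwf(path, colspecs=col_specification, header=None, names=col_names)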
The code below gives a sketch of what you might want to do if you have some serious fixed-column-width file handling to do.
"Serious" = multiple record types in each of multiple file types, records up to 1000 bytes, the layout-definer and "opposing" producer/consumer is a government department with attitude, layout changes result in unused columns, up to a million records in a file, ...
Features: Precompiles the struct formats. Ignores unwanted columns. Converts input strings to required data types (sketch omits error handling). Converts records to object instances (or dicts, or named tuples if you prefer).
Code:
import struct, datetime, cStringIO, pprint

# functions for converting input fields to usable data
cnv_text = str.rstrip
cnv_int = int
cnv_date_dmy = lambda s: datetime.datetime.strptime(s, "%d%m%Y")  # ddmmyyyy
# etc

# field specs (field name, start pos (1-relative), len, converter func)
fieldspecs = [
    ('surname', 11, 20, cnv_text),
    ('given_names', 31, 20, cnv_text),
    ('birth_date', 51, 8, cnv_date_dmy),
    ('start_date', 71, 8, cnv_date_dmy),
    ]
fieldspecs.sort(key=lambda x: x[1])  # just in case

# build the format for struct.unpack
unpack_len = 0
unpack_fmt = ""
for fieldspec in fieldspecs:
    start = fieldspec[1] - 1
    end = start + fieldspec[2]
    if start > unpack_len:
        unpack_fmt += str(start - unpack_len) + "x"
    unpack_fmt += str(end - start) + "s"
    unpack_len = end
field_indices = range(len(fieldspecs))
print unpack_len, unpack_fmt
unpacker = struct.Struct(unpack_fmt).unpack_from

class Record(object):
    pass
    # or use named tuples

raw_data = """\
....v....1....v....2....v....3....v....4....v....5....v....6....v....7....v....8
          Featherstonehaugh   Algernon Marmaduke  31121969            01012005XX
"""
f = cStringIO.StringIO(raw_data)
headings = f.next()
for line in f:
    # The guts of this loop would of course be hidden away in a function/method
    # and could be made less ugly
    raw_fields = unpacker(line)
    r = Record()
    for x in field_indices:
        setattr(r, fieldspecs[x][0], fieldspecs[x][3](raw_fields[x]))
    pprint.pprint(r.__dict__)
    print "Customer name:", r.given_names, r.surname
Output:
78 10x20s20s8s12x8s
{'birth_date': datetime.datetime(1969, 12, 31, 0, 0),
'given_names': 'Algernon Marmaduke',
'start_date': datetime.datetime(2005, 1, 1, 0, 0),
'surname': 'Featherstonehaugh'}
Customer name: Algernon Marmaduke Featherstonehaugh
> str = '1234567890'
> w = [0,2,5,7,10]
> [ str[ w[i-1] : w[i] ] for i in range(1,len(w)) ]
['12', '345', '67', '890']
This is how I solved it with a dictionary that contains where fields start and end. Giving start and end points helped me to manage changes in the length of a column as well.
# fixed length
#       '---------- ------- ----------- -----------'
line = '20.06.2019 myname  active      mydevice   '
SLICES = {'date_start': 0,
          'date_end': 10,
          'name_start': 11,
          'name_end': 18,
          'status_start': 19,
          'status_end': 30,
          'device_start': 31,
          'device_end': 42}

def get_values_as_dict(line, SLICES):
    values = {}
    key_list = {key.split("_")[0] for key in SLICES.keys()}
    for key in key_list:
        values[key] = line[SLICES[key+"_start"]:SLICES[key+"_end"]].strip()
    return values

>>> print(get_values_as_dict(line, SLICES))
{'status': 'active', 'name': 'myname', 'date': '20.06.2019', 'device': 'mydevice'}
Here's a simple module for Python 3, based on John Machin's answer - adapt as needed :)
"""
fixedwidth
Parse and iterate through a fixedwidth text file, returning record objects.
Adapted from https://stackoverflow.com/a/4916375/243392
USAGE
import fixedwidth, pprint
# define the fixed width fields we want
# fieldspecs is a list of [name, description, start, width, type] arrays.
fieldspecs = [
["FILEID", "File Identification", 1, 6, "A/N"],
["STUSAB", "State/U.S. Abbreviation (USPS)", 7, 2, "A"],
["SUMLEV", "Summary Level", 9, 3, "A/N"],
["LOGRECNO", "Logical Record Number", 19, 7, "N"],
["POP100", "Population Count (100%)", 30, 9, "N"],
]
# define the fieldtype conversion functions
fieldtype_fns = {
'A': str.rstrip,
'A/N': str.rstrip,
'N': int,
}
# iterate over record objects in the file
with open(f, 'rb'):
for record in fixedwidth.reader(f, fieldspecs, fieldtype_fns):
pprint.pprint(record.__dict__)
# output:
{'FILEID': 'SF1ST', 'LOGRECNO': 2, 'POP100': 1, 'STUSAB': 'TX', 'SUMLEV': '040'}
{'FILEID': 'SF1ST', 'LOGRECNO': 3, 'POP100': 2, 'STUSAB': 'TX', 'SUMLEV': '040'}
...
"""
import struct, io

# fieldspec columns
iName, iDescription, iStart, iWidth, iType = range(5)

def get_struct_unpacker(fieldspecs):
    """
    Build the format string for struct.unpack to use, based on the fieldspecs.
    fieldspecs is a list of [name, description, start, width, type] arrays.
    Returns a string like "6s2s3s7x7s4x9s".
    """
    unpack_len = 0
    unpack_fmt = ""
    for fieldspec in fieldspecs:
        start = fieldspec[iStart] - 1
        end = start + fieldspec[iWidth]
        if start > unpack_len:
            unpack_fmt += str(start - unpack_len) + "x"
        unpack_fmt += str(end - start) + "s"
        unpack_len = end
    struct_unpacker = struct.Struct(unpack_fmt).unpack_from
    return struct_unpacker

class Record(object):
    pass
    # or use named tuples

def reader(f, fieldspecs, fieldtype_fns):
    """
    Wrap a fixedwidth file and return records according to the given fieldspecs.
    fieldspecs is a list of [name, description, start, width, type] arrays.
    fieldtype_fns is a dictionary of functions used to transform the raw string values,
    one for each type.
    """
    # make sure fieldspecs are sorted properly
    fieldspecs.sort(key=lambda fieldspec: fieldspec[iStart])
    struct_unpacker = get_struct_unpacker(fieldspecs)
    field_indices = range(len(fieldspecs))
    for line in f:
        raw_fields = struct_unpacker(line)  # split line into field values
        record = Record()
        for i in field_indices:
            fieldspec = fieldspecs[i]
            fieldname = fieldspec[iName]
            s = raw_fields[i].decode()  # convert raw bytes to a string
            fn = fieldtype_fns[fieldspec[iType]]  # get conversion function
            value = fn(s)  # convert string to value (eg to an int)
            setattr(record, fieldname, value)
        yield record
if __name__ == '__main__':
    # test module

    import pprint, io

    # define the fields we want
    # fieldspecs are [name, description, start, width, type]
    fieldspecs = [
        ["FILEID", "File Identification", 1, 6, "A/N"],
        ["STUSAB", "State/U.S. Abbreviation (USPS)", 7, 2, "A"],
        ["SUMLEV", "Summary Level", 9, 3, "A/N"],
        ["LOGRECNO", "Logical Record Number", 19, 7, "N"],
        ["POP100", "Population Count (100%)", 30, 9, "N"],
    ]

    # define a conversion function for integers
    def to_int(s):
        """
        Convert a numeric string to an integer.
        Allows a leading ! as an indicator of missing or uncertain data.
        Returns None if no data.
        """
        try:
            return int(s)
        except:
            try:
                return int(s[1:])  # ignore a leading !
            except:
                return None  # assume has a leading ! and no value

    # define the conversion fns
    fieldtype_fns = {
        'A': str.rstrip,
        'A/N': str.rstrip,
        'N': to_int,
        # 'N': int,
        # 'D': lambda s: datetime.datetime.strptime(s, "%d%m%Y"),  # ddmmyyyy
        # etc
    }

    # define a fixedwidth sample
    sample = """\
SF1ST TX04089000  00000023748        1
SF1ST TX04090000  00000033748!       2
SF1ST TX04091000  00000043748!        
"""
    sample_data = sample.encode()  # convert string to bytes
    file_like = io.BytesIO(sample_data)  # create a file-like wrapper around bytes

    # iterate over record objects in the file
    for record in reader(file_like, fieldspecs, fieldtype_fns):
        # print(record)
        pprint.pprint(record.__dict__)
Here is what NumPy uses under the hood (much much simplified, but still - this code is found in the LineSplitter class within the _iotools module):
import numpy as np

DELIMITER = (20, 10, 10, 20, 10, 10, 20)
idx = np.cumsum([0] + list(DELIMITER))
slices = [slice(i, j) for (i, j) in zip(idx[:-1], idx[1:])]

def parse(line):
    return [line[s] for s in slices]
It does not handle negative delimiters for ignoring columns, so it is not as versatile as struct, but it is faster.
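For example, with a 100-character line matching the widths above:
line = "".join(str(i % 10) for i in range(100))
print(parse(line))
# ['01234567890123456789', '0123456789', '0123456789', '01234567890123456789',
#  '0123456789', '0123456789', '01234567890123456789']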
Because my old job often handled 1 million lines of fixed-width data, I did research on this issue when I started using Python.
There are 2 types of FixedWidth:
ASCII FixedWidth (ascii character length = 1, double-byte encoded character length = 2)
Unicode FixedWidth (ascii character & double-byte encoded character length = 1)
If the resource string is composed entirely of ascii characters, then ASCII FixedWidth = Unicode FixedWidth.
Fortunately, string and bytes are different in py3, which reduces a lot of confusion when dealing with double-byte encoded characters (e.g. gbk, big5, euc-jp, shift-jis, etc.).
For the processing of "ASCII FixedWidth", the string is usually converted to bytes and then split.
Without importing third-party modules, and with totalLineCount = 1 million, lineLength = 800 bytes, FixedWidthArgs = (10,25,4,....), I split the line in about 5 ways and reached the following conclusions:
struct is the fastest (1x)
Looping only, without pre-processing FixedWidthArgs, is the slowest (5x+)
slice(bytes) is faster than slice(string)
With a bytes source string, the test results are: struct (1x), operator.itemgetter (1.7x), precompiled sliceObject & list comprehensions (2.8x), re.pattern object (2.9x)
When dealing with large files, we often use with open(file, "rb") as f:.
Traversing one of the above files takes about 2.4 seconds.
I think an appropriate handler, which processes 1 million rows of data and splits each row into 20 fields, should take less than 2.4 seconds.
I find that only struct and itemgetter meet the requirement.
ps: For normal display, I converted the unicode str to bytes.
If you are in a double-byte environment, you don't need to do this.
from itertools import accumulate
from operator import itemgetter

def oprt_parser(sArgs):
    sum_arg = tuple(accumulate(abs(i) for i in sArgs))
    # Indices of negative-width (ignored) fields
    cuts = tuple(i for i, num in enumerate(sArgs) if num < 0)
    # Get slice args and ignore fields of negative length
    ig_Args = tuple(item for i, item in enumerate(zip((0,)+sum_arg, sum_arg)) if i not in cuts)
    # Generate `operator.itemgetter` object
    oprtObj = itemgetter(*[slice(s, e) for s, e in ig_Args])
    return oprtObj

lineb = b'abcdefghijklmnopqrstuvwxyz\xb0\xa1\xb2\xbb\xb4\xd3\xb5\xc4\xb6\xee\xb7\xa2\xb8\xf6\xba\xcd0123456789'
line = lineb.decode("GBK")

# Unicode Fixed Width
fieldwidthsU = (13, -13, 4, -4, 5, -5)  # Negative width fields are ignored
# ASCII Fixed Width
fieldwidths = (13, -13, 8, -8, 5, -5)  # Negative width fields are ignored

# Unicode FixedWidth processing
parse = oprt_parser(fieldwidthsU)
fields = parse(line)
print('Unicode FixedWidth', 'fields: {}'.format(tuple(map(lambda s: s.encode("GBK"), fields))))

# ASCII FixedWidth processing
parse = oprt_parser(fieldwidths)
fields = parse(lineb)
print('ASCII FixedWidth', 'fields: {}'.format(fields))

line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n'
fieldwidths = (2, -10, 24)
parse = oprt_parser(fieldwidths)
fields = parse(line)
print(f"fields: {fields}")
Output:
Unicode FixedWidth fields: (b'abcdefghijklm', b'\xb0\xa1\xb2\xbb\xb4\xd3\xb5\xc4', b'01234')
ASCII FixedWidth fields: (b'abcdefghijklm', b'\xb0\xa1\xb2\xbb\xb4\xd3\xb5\xc4', b'01234')
fields: ('AB', 'MNOPQRSTUVWXYZ0123456789')
oprt_parser is about 4x the speed of make_parser (list comprehensions + slice).
During the research, it was found that when the CPU speed is faster, the efficiency of the re method seems to increase faster.
Since I don't have more and better computers to test with, I provide my test code; if anyone is interested, you can test it with a faster computer.
Run Environment:
os:win10
python: 3.7.2
CPU:amd athlon x3 450
HD:seagate 1T
import timeit
import time
import re
from itertools import accumulate
from operator import itemgetter

def eff2(stmt, onlyNum=False, showResult=False):
    '''test function'''
    if onlyNum:
        rl = timeit.repeat(stmt=stmt, repeat=roundI, number=timesI, globals=globals())
        avg = sum(rl) / len(rl)
        return f"{avg * (10 ** 6)/timesI:0.4f}"
    else:
        rl = timeit.repeat(stmt=stmt, repeat=10, number=1000, globals=globals())
        avg = sum(rl) / len(rl)
        print(f"【{stmt}】")
        print(f"\tquick avg = {avg * (10 ** 6)/1000:0.4f} s/million")
        if showResult:
            print(f"\t Result = {eval(stmt)}\n\t timelist = {rl}\n")
        else:
            print("")

def upDouble(argList, argRate):
    return [c*argRate for c in argList]

tbStr = "000000001111000002222真2233333333000000004444444QAZ55555555000000006666666ABC这些事中文字abcdefghijk"
tbBytes = tbStr.encode("GBK")
a20 = (4, 4, 2, 2, 2, 3, 2, 2, 2, 2, 8, 8, 7, 3, 8, 8, 7, 3, 12, 11)
a20U = (4, 4, 2, 2, 2, 3, 2, 2, 1, 2, 8, 8, 7, 3, 8, 8, 7, 3, 6, 11)
Slng = 800
rateS = Slng // 100
tStr = "".join(upDouble(tbStr, rateS))
tBytes = tStr.encode("GBK")
spltArgs = upDouble(a20, rateS)
spltArgsU = upDouble(a20U, rateS)
testList = []
timesI = 100000
roundI = 5
print(f"test round = {roundI} timesI = {timesI} sourceLng = {len(tStr)} argFieldCount = {len(spltArgs)}")

print(f"pure str \n{''.ljust(60,'-')}")
# ==========================================
def str_parser(sArgs):
    def prsr(oStr):
        r = []
        r_ap = r.append
        stt = 0
        for lng in sArgs:
            end = stt + lng
            r_ap(oStr[stt:end])
            stt = end
        return tuple(r)
    return prsr

Str_P = str_parser(spltArgsU)
# eff2("Str_P(tStr)")
testList.append("Str_P(tStr)")

print(f"pure bytes \n{''.ljust(60,'-')}")
# ==========================================
def byte_parser(sArgs):
    def prsr(oBytes):
        r, stt = [], 0
        r_ap = r.append
        for lng in sArgs:
            end = stt + lng
            r_ap(oBytes[stt:end])
            stt = end
        return r
    return prsr

Byte_P = byte_parser(spltArgs)
# eff2("Byte_P(tBytes)")
testList.append("Byte_P(tBytes)")

# re, bytes
print(f"re compile object \n{''.ljust(60,'-')}")
# ==========================================
def rebc_parser(sArgs, otype="b"):
    re_Args = "".join([f"(.{{{n}}})" for n in sArgs])
    if otype == "b":
        rebc_Args = re.compile(re_Args.encode("GBK"))
    else:
        rebc_Args = re.compile(re_Args)
    def prsr(oBS):
        return rebc_Args.match(oBS).groups()
    return prsr

Rebc_P = rebc_parser(spltArgs)
# eff2("Rebc_P(tBytes)")
testList.append("Rebc_P(tBytes)")

Rebc_Ps = rebc_parser(spltArgsU, "s")
# eff2("Rebc_Ps(tStr)")
testList.append("Rebc_Ps(tStr)")

print(f"struct \n{''.ljust(60,'-')}")
# ==========================================
import struct

def struct_parser(sArgs):
    struct_Args = " ".join(map(lambda x: str(x) + "s", sArgs))
    def prsr(oBytes):
        return struct.unpack(struct_Args, oBytes)
    return prsr

Struct_P = struct_parser(spltArgs)
# eff2("Struct_P(tBytes)")
testList.append("Struct_P(tBytes)")

print(f"List Comprehensions + slice \n{''.ljust(60,'-')}")
# ==========================================
import itertools

def slice_parser(sArgs):
    tl = tuple(itertools.accumulate(sArgs))
    slice_Args = tuple(zip((0,)+tl, tl))
    def prsr(oBytes):
        return [oBytes[s:e] for s, e in slice_Args]
    return prsr

Slice_P = slice_parser(spltArgs)
# eff2("Slice_P(tBytes)")
testList.append("Slice_P(tBytes)")

def sliceObj_parser(sArgs):
    tl = tuple(itertools.accumulate(sArgs))
    tl2 = tuple(zip((0,)+tl, tl))
    sliceObj_Args = tuple(slice(s, e) for s, e in tl2)
    def prsr(oBytes):
        return [oBytes[so] for so in sliceObj_Args]
    return prsr

SliceObj_P = sliceObj_parser(spltArgs)
# eff2("SliceObj_P(tBytes)")
testList.append("SliceObj_P(tBytes)")

SliceObj_Ps = sliceObj_parser(spltArgsU)
# eff2("SliceObj_Ps(tStr)")
testList.append("SliceObj_Ps(tStr)")

print(f"operator.itemgetter + slice object \n{''.ljust(60,'-')}")
# ==========================================
def oprt_parser(sArgs):
    sum_arg = tuple(accumulate(abs(i) for i in sArgs))
    cuts = tuple(i for i, num in enumerate(sArgs) if num < 0)
    ig_Args = tuple(item for i, item in enumerate(zip((0,)+sum_arg, sum_arg)) if i not in cuts)
    oprtObj = itemgetter(*[slice(s, e) for s, e in ig_Args])
    return oprtObj

Oprt_P = oprt_parser(spltArgs)
# eff2("Oprt_P(tBytes)")
testList.append("Oprt_P(tBytes)")

Oprt_Ps = oprt_parser(spltArgsU)
# eff2("Oprt_Ps(tStr)")
testList.append("Oprt_Ps(tStr)")

print("|".join([s.split("(")[0].center(11, " ") for s in testList]))
print("|".join(["".center(11, "-") for s in testList]))
print("|".join([eff2(s, True).rjust(11, " ") for s in testList]))
Output:
test round = 5 timesI = 100000 sourceLng = 744 argFieldCount = 20
...
...
   Str_P   |  Byte_P   |  Rebc_P   |  Rebc_Ps  | Struct_P  |  Slice_P  |SliceObj_P |SliceObj_Ps|  Oprt_P   |  Oprt_Ps
-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------
     9.6315|     7.5952|     4.4187|     5.6867|     1.5123|     5.2915|     4.2673|     5.7121|     2.4713|     3.9051
String slicing doesn't have to be ugly as long as you keep it organized. Consider storing your field widths in a dictionary and then using the associated names to create an object:
from collections import OrderedDict

class Entry:
    def __init__(self, line):
        name2width = OrderedDict()
        name2width['foo'] = 2
        name2width['bar'] = 3
        name2width['baz'] = 2

        pos = 0
        for name, width in name2width.items():
            val = line[pos : pos + width]
            if len(val) != width:
                raise ValueError("not enough characters: \'{}\'".format(line))
            setattr(self, name, val)
            pos += width

file = "ab789yz\ncd987wx\nef555uv"
entry = []
for line in file.split('\n'):
    entry.append(Entry(line))

print(entry[1].bar)  # output: 987
I like to process text files containing fixed width fields using regular expressions. More specifically, using named capture groups. It's fast, does not require importing large libraries and is quite descriptive and convenient (in my opinion).
I also like the fact that the named capture groups are basically auto-documenting the data format, acting as a sort of data specification, since each capture group can be written to define each fields' name, data type and length.
Here's a simple example...
import re

data = [
    "1234ABCDEFGHIJ5",
    "6789KLMNOPQRST0"
]

record_regex = (
    r"^"
    r"(?P<firstnumbers>[0-9]{4})"
    r"(?P<middletext>[a-zA-Z0-9_\-\s]{10})"
    r"(?P<lastnumber>[0-9]{1})"
    r"$"
)

records = []
for line in data:
    match = re.match(record_regex, line)
    if match:
        records.append(match.groupdict())

print(records)
...that yields a convenient dictionary of each record:
[
    {'firstnumbers': '1234', 'lastnumber': '5', 'middletext': 'ABCDEFGHIJ'},
    {'firstnumbers': '6789', 'lastnumber': '0', 'middletext': 'KLMNOPQRST'}
]
Helpful tools, like the online regex tester and debugger, are available if you are not familiar (or comfortable) with Python regular expressions or named capture groups.
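One small performance note: if you process many lines, compiling the pattern once and reusing the pattern object saves the cache lookup that re.match does on every call:
record_pattern = re.compile(record_regex)
records = [m.groupdict() for m in map(record_pattern.match, data) if m]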
