I need the Python analog for this Perl string:
unpack("nNccH*", string_val)
I need the equivalent of the nNccH* data format expressed in Python format characters.
In Perl it unpacks binary data into five variables:
16 bit value in "network" (big-endian)
32 bit value in "network" (big-endian)
Signed char (8-bit integer) value
Signed char (8-bit integer) value
Hexadecimal string, high nibble first
But I can't get it to work in Python.
Here is more of my code:
bstring = b''
DataByte = client[0].recv(1)
while DataByte:
    bstring += DataByte
    DataByte = client[0].recv(1)
print(len(bstring))
if len(bstring):
    a, b, c, d, e = unpack("nNccH*", bstring)
I have never written Perl or Python before, but my current task is to rewrite, in Python, a multithreaded server that was written in Perl...
The Perl format "nNcc" is equivalent to the Python format "!HLbb".
There is no direct equivalent in Python for Perl's "H*".
There are two problems:
Python's struct.unpack does not accept the wildcard character, *
Python's struct.unpack does not "hexlify" data strings
The first problem can be worked around using a helper function like unpack, defined below.
The second problem can be solved using binascii.hexlify:
import struct
import binascii

def unpack(fmt, data):
    """
    Return struct.unpack(fmt, data) with the optional single * in fmt replaced with
    the appropriate number, given the length of data.
    """
    # http://stackoverflow.com/a/7867892/190597
    try:
        return struct.unpack(fmt, data)
    except struct.error:
        flen = struct.calcsize(fmt.replace('*', ''))
        alen = len(data)
        idx = fmt.find('*')
        before_char = fmt[idx-1]
        n = (alen-flen) // struct.calcsize(before_char) + 1
        fmt = ''.join((fmt[:idx-1], str(n), before_char, fmt[idx+1:]))
        return struct.unpack(fmt, data)
data = open('data', 'rb').read()
x = list(unpack("!HLbbs*", data))
# x[-1].encode('hex') works in Python 2, but not in Python 3
x[-1] = binascii.hexlify(x[-1])
print(x)
When tested on data produced by this Perl script:
$line = pack("nNccH*", 1, 2, 10, 4, '1fba');
print "$line";
The Python script yields
[1, 2, 10, 4, '1fba']
The equivalent Python function you're looking for is struct.unpack. Documentation of the format string is here: http://docs.python.org/library/struct.html
You will have a better chance of getting help if you actually explain what kind of unpacking you need. Not everyone knows Perl.
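For example, the fixed-size part of that format maps directly onto struct, and the trailing H* field can be produced with binascii.hexlify; a minimal sketch, assuming bstring from the question holds one complete record:

import struct
import binascii

fixed = struct.calcsize("!HLbb")                  # 2 + 4 + 1 + 1 = 8 bytes
a, b, c, d = struct.unpack("!HLbb", bstring[:fixed])
e = binascii.hexlify(bstring[fixed:])             # hex digits, high nibble first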
I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, bin_to_float() as shown by the output below:
float_to_bin(0.000000): '0'
Traceback (most recent call last):
File "binary-to-a-float-number.py", line 36, in <module>
float = bin_to_float(binary)
File "binary-to-a-float-number.py", line 9, in bin_to_float
return struct.unpack('>d', bf)[0]
TypeError: a bytes-like object is required, not 'str'
I tried to fix that by adding a bf = bytes(bf) right before the call to struct.unpack(), but doing so produced its own TypeError:
TypeError: string argument without an encoding
So my questions are is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python.
Here's the code that works in Python 2:
import struct

def bin_to_float(b):
    """ Convert binary string to a float. """
    bf = int_to_bytes(int(b, 2), 8)  # 8 bytes needed for IEEE 754 binary64
    return struct.unpack('>d', bf)[0]

def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    bytes = []
    for _ in range(nbytes):
        bytes.append(chr(n & 0xff))
        n >>= 8
    if minlen > 0 and len(bytes) < minlen:  # zero pad?
        bytes.extend((minlen-len(bytes)) * '0')
    return ''.join(reversed(bytes))  # high bytes at beginning

# tests
def float_to_bin(f):
    """ Convert a float into a binary string. """
    ba = struct.pack('>d', f)
    ba = bytearray(ba)
    s = ''.join('{:08b}'.format(b) for b in ba)
    s = s.lstrip('0')  # strip leading zeros
    return s if s else '0'  # but leave at least one

for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
    binary = float_to_bin(f)
    print('float_to_bin(%f): %r' % (f, binary))
    float = bin_to_float(binary)
    print('bin_to_float(%r): %f' % (binary, float))
    print('')
To write portable code that works with bytes in both Python 2 and 3, where the underlying libraries literally use different data types between the two, you need to declare every string explicitly with the appropriate literal prefix (or add from __future__ import unicode_literals to the top of every module doing this). This step ensures that your data types are consistent internally in your code.
Secondly, make the decision to support Python 3 going forward, with fallbacks specific to Python 2. This means aliasing str to unicode (and chr to unichr) under Python 2, and identifying methods/functions that do not return the same types in both Python versions so they can be modified to return the correct type (being the Python 3 version). Note that bytes is a built-in name, too, so don't shadow it.
Putting this together, your code will look something like this:
import struct
import sys

if sys.version_info < (3, 0):
    str = unicode
    chr = unichr

def bin_to_float(b):
    """ Convert binary string to a float. """
    bf = int_to_bytes(int(b, 2), 8)  # 8 bytes needed for IEEE 754 binary64
    return struct.unpack(b'>d', bf)[0]

def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    ba = bytearray(b'')
    for _ in range(nbytes):
        ba.append(n & 0xff)
        n >>= 8
    if minlen > 0 and len(ba) < minlen:  # zero pad?
        ba.extend((minlen-len(ba)) * b'0')
    return u''.join(str(chr(b)) for b in reversed(ba)).encode('latin1')  # high bytes at beginning

# tests
def float_to_bin(f):
    """ Convert a float into a binary string. """
    ba = struct.pack(b'>d', f)
    ba = bytearray(ba)
    s = u''.join(u'{:08b}'.format(b) for b in ba)
    s = s.lstrip(u'0')  # strip leading zeros
    return (s if s else u'0').encode('latin1')  # but leave at least one

for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
    binary = float_to_bin(f)
    print(u'float_to_bin(%f): %r' % (f, binary))
    float = bin_to_float(binary)
    print(u'bin_to_float(%r): %f' % (binary, float))
    print(u'')
I used the latin1 codec simply because that's how the byte mappings were originally defined, and it seems to work:
$ python2 foo.py
float_to_bin(0.000000): '0'
bin_to_float('0'): 0.000000
float_to_bin(1.000000): '11111111110000000000000000000000000000000000000000000000000000'
bin_to_float('11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): '1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float('1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): '100000000101001000101111000110101001111110111110011101101100100'
bin_to_float('100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): '100000000001001001000011111101110000010110000101011110101111111'
bin_to_float('100000000001001001000011111101110000010110000101011110101111111'): 3.141593
Again, but this time under Python 3.5:
$ python3 foo.py
float_to_bin(0.000000): b'0'
bin_to_float(b'0'): 0.000000
float_to_bin(1.000000): b'11111111110000000000000000000000000000000000000000000000000000'
bin_to_float(b'11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): b'1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float(b'1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): b'100000000101001000101111000110101001111110111110011101101100100'
bin_to_float(b'100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): b'100000000001001001000011111101110000010110000101011110101111111'
bin_to_float(b'100000000001001001000011111101110000010110000101011110101111111'): 3.141593
It's a lot more work, but in Python 3 you can see more clearly that the values are handled as proper bytes. I also changed your bytes = [] to a bytearray to express more clearly what you were trying to do.
I had a different approach from @metatoaster's answer. I just modified int_to_bytes to use and return a bytearray:
def int_to_bytes(n, minlen=0):  # helper function
    """ Int/long to byte string. """
    nbits = n.bit_length() + (1 if n < 0 else 0)  # plus one for any sign bit
    nbytes = (nbits+7) // 8  # number of whole bytes
    b = bytearray()
    for _ in range(nbytes):
        b.append(n & 0xff)
        n >>= 8
    if minlen > 0 and len(b) < minlen:  # zero pad?
        b.extend([0] * (minlen-len(b)))
    return bytearray(reversed(b))  # high bytes at beginning
This seems to work without any other modifications under both Python 2.7.11 and Python 3.5.1.
Note that I zero padded with 0 instead of '0'. I didn't do much testing, but surely that's what you meant?
In Python 3, integers have a to_bytes() method that can perform the conversion in a single call. However, since you asked for a solution that works on Python 2 and 3 unmodified, here's an alternative approach.
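For reference, a minimal Python 3-only sketch with the same signature as the helper above (for non-negative n):

def int_to_bytes(n, minlen=0):
    """ Int to byte string, big-endian (Python 3 only). """
    return n.to_bytes(max((n.bit_length() + 7) // 8, minlen), 'big')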
If you take a detour via hexadecimal representation, the function int_to_bytes() becomes very simple:
import codecs

def int_to_bytes(n, minlen=0):
    hex_str = format(n, "0{}x".format(2 * minlen))
    return codecs.decode(hex_str, "hex")
You might need some special case handling to deal with the case when the hex string gets an odd number of characters.
Note that I'm not sure this works with all versions of Python 3. I remember that pseudo-encodings weren't supported in some 3.x version, but I don't remember the details. I tested the code with Python 3.5.
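For the odd-length case mentioned above, one option might be to pad the hex string to an even number of digits first; a sketch based on the same function:

import codecs

def int_to_bytes(n, minlen=0):
    hex_str = format(n, "0{}x".format(2 * minlen))
    if len(hex_str) % 2:          # the hex codec needs an even number of digits
        hex_str = "0" + hex_str
    return codecs.decode(hex_str, "hex")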
I'm receiving a byte array via serial communication and converting part of the byte array to an integer. The code is as follows:
data = conn.recv(40)
print(data)
command = data[0:7]
if command == b'FORWARD' and data[7] == 3:
    value = 0
    counter = 8
    while data[counter] != 4:
        value = value * 10 + int(data[counter] - 48)
        counter = counter + 1
In short, I unpack the byte array data starting at location 8 and going until I hit a delimiter of b'\x03'. So I'm unpacking an integer of 1 to 3 digits, and putting the numeric value into value.
This brute force method works. But is there a more elegant way to do it in Python? I'm new to the language and would like to learn better ways of doing some of these things.
You can find the delimiter, convert the substring of the bytearray to str and then int. Here's a little function to do that:
def intToDelim(ba, delim):
    i = ba.find(delim)
    return int(str(ba[0:i]))
which you can invoke with
value = intToDelim( data[8:], b'\x04' )
(or with b'\x03' if that's your delimiter). This works in Python 2.7 and should work with little or no change in Python 3.
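In Python 3 specifically, int() accepts ASCII digits in a bytes or bytearray object directly, so a variant like this sketch (hypothetical name, not tested against the original protocol) avoids the str() round trip:

def int_to_delim(ba, delim):
    # slice up to the delimiter; int() converts the ASCII digits directly
    return int(ba[:ba.index(delim)])

value = int_to_delim(data[8:], b'\x04')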
I have a large string, more than 256 bits, and I need to byte-swap it by 32 bits. But the string is in hexadecimal. When I looked at the numpy and array modules I couldn't find the right syntax for how to do the conversion. Could someone please help me?
An example (though the data is much longer; I could use pack, but then I would have to convert the little-endian value to decimal and then to big-endian first, which seems like a waste):
Input:12345678abcdeafa
Output:78563412faeacdab
Convert the string to bytes, unpack big-endian 32-bit and pack little-endian 32-bit (or vice versa) and convert back to a string:
#!python3
import binascii
import struct

Input = b'12345678abcdeafa'
Output = b'78563412faeacdab'

def convert(s):
    s = binascii.unhexlify(s)
    a, b = struct.unpack('>LL', s)
    s = struct.pack('<LL', a, b)
    return binascii.hexlify(s)

print(convert(Input), Output)
print(convert(Input),Output)
Output:
b'78563412faeacdab' b'78563412faeacdab'
Generalized for any hex string whose length is a multiple of 8 (i.e. whole 32-bit words):
import binascii
import struct

Input = b'12345678abcdeafa'
Output = b'78563412faeacdab'

def convert(s):
    if len(s) % 8 != 0:
        raise ValueError('string length not a multiple of 8 hex digits')
    s = binascii.unhexlify(s)
    f = '{}L'.format(len(s)//4)
    dw = struct.unpack('>'+f, s)
    s = struct.pack('<'+f, *dw)
    return binascii.hexlify(s)

print(convert(Input), Output)
If they really are strings, just do string operations on them?
>>> input = "12345678abcdeafa"
>>> input[7::-1]+input[:7:-1]
'87654321afaedcba'
My take:
slice the string in N digit chunks
reverse each chunk
concatenate everything
Example:
>>> source = '12345678abcdeafa87654321afaedcba'
>>> # small helper to slice the input in 8 digit chunks
>>> chunks = lambda iterable, sz: [iterable[i:i+sz]
...                                for i in range(0, len(iterable), sz)]
>>> swap = lambda source, sz: ''.join([chunk[::-1]
...                                    for chunk in chunks(source, sz)])
Output asked in the original question:
>>> swap(source, 8)
'87654321afaedcba12345678abcdeafa'
It is easy to adapt in order to match the required output after icktoofay edit:
>>> swap(swap(source, 8), 2)
'78563412faeacdab21436587badcaeaf'
A proper implementation probably should check if len(source) % 8 == 0.
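For completeness, here is a sketch of that check folded into the helpers above (plain functions instead of lambdas, same behaviour otherwise):

def chunks(iterable, sz):
    # slice the input into sz-digit chunks
    return [iterable[i:i+sz] for i in range(0, len(iterable), sz)]

def swap(source, sz):
    if len(source) % sz != 0:
        raise ValueError('source length is not a multiple of %d' % sz)
    return ''.join(chunk[::-1] for chunk in chunks(source, sz))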
In Java, I can encode a BigInteger as:
java.math.BigInteger bi = new java.math.BigInteger("65537");
String encoded = Base64.encodeBytes(bi.toByteArray(), Base64.ENCODE|Base64.DONT_GUNZIP);
// result: 65537 encodes as "AQAB" in Base64
byte[] decoded = Base64.decode(encoded, Base64.DECODE|Base64.DONT_GUNZIP);
java.math.BigInteger back = new java.math.BigInteger(decoded);
In C#:
System.Numerics.BigInteger bi = new System.Numerics.BigInteger(65537);
string encoded = Convert.ToBase64String(bi.ToByteArray());
byte[] decoded = Convert.FromBase64String(encoded);
System.Numerics.BigInteger back = new System.Numerics.BigInteger(decoded);
How can I encode long integers in Python as Base64-encoded strings? What I've tried so far produces results different from implementations in other languages (so far I've tried in Java and C#), particularly it produces longer-length Base64-encoded strings.
import struct
encoded = struct.pack('I', (1<<16)+1).encode('base64')[:-1]
# produces a longer string, 'AQABAA==' instead of the expected 'AQAB'
When using this Python code to produce a Base64-encoded string, the resulting decoded integer in Java (for example) produces instead 16777472 in place of the expected 65537. Firstly, what am I missing?
Secondly, I have to figure out by hand what is the length format to use in struct.pack; and if I'm trying to encode a long number (greater than (1<<64)-1) the 'Q' format specification is too short to hold the representation. Does that mean that I have to do the representation by hand, or is there an undocumented format specifier for the struct.pack function? (I'm not compelled to use struct, but at first glance it seemed to do what I needed.)
Check out this page on converting integer to base64.
import base64
import struct
def encode(n):
    data = struct.pack('<Q', n).rstrip('\x00')
    if len(data) == 0:
        data = '\x00'
    s = base64.urlsafe_b64encode(data).rstrip('=')
    return s

def decode(s):
    data = base64.urlsafe_b64decode(s + '==')
    n = struct.unpack('<Q', data + '\x00' * (8 - len(data)))
    return n[0]
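A quick check under Python 2 (where struct.pack returns a str, as this code assumes) reproduces the value from the Java example:

print(encode(65537))   # 'AQAB', matching the Java/C# example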
The struct module:
… performs conversions between Python values and C structs represented as Python strings.
Because C doesn't have infinite-length integers, there's no functionality for packing them.
But it's very easy to write yourself. For example:
def pack_bigint(i):
    b = bytearray()
    while i:
        b.append(i & 0xFF)
        i >>= 8
    return b
Or:
def pack_bigint(i):
    bl = (i.bit_length() + 7) // 8
    fmt = '<{}B'.format(bl)
    # ...
And so on.
And of course you'll want an unpack function, like jbatista's from the comments:
def unpack_bigint(b):
    b = bytearray(b)  # in case you're passing in a bytes/str
    return sum((1 << (bi*8)) * bb for (bi, bb) in enumerate(b))
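A quick round-trip check of those helpers; the byte order is little-endian, and on Python 3 you could also reach for int.to_bytes / int.from_bytes directly:

raw = pack_bigint(65537)            # bytearray(b'\x01\x00\x01'), little-endian
assert unpack_bigint(raw) == 65537
# Python 3 equivalents: (65537).to_bytes(3, 'little') and int.from_bytes(raw, 'little')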
This is a bit late, but I figured I'd throw my hat in the ring:
import base64
import struct

def inttob64(n):
    """
    Given an integer returns the base64 encoded version of it (no trailing ==)
    """
    parts = []
    while n:
        parts.insert(0, n & 0xffffffff)  # take the low 32 bits each pass
        n >>= 32
    data = struct.pack('>' + 'L'*len(parts), *parts)
    s = base64.urlsafe_b64encode(data).rstrip('=')
    return s

def b64toint(s):
    """
    Given a string with a base64 encoded value, return the integer representation
    of it
    """
    data = base64.urlsafe_b64decode(s + '==')
    n = 0
    while data:
        n <<= 32
        (toor,) = struct.unpack('>L', data[:4])
        n |= toor & 0xffffffff
        data = data[4:]
    return n
These functions turn an arbitrary-sized long number to/from a big-endian base64 representation.
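A quick check under Python 2 (the code above mixes str and bytes in a Python 2 way):

print(inttob64(65537))      # 'AAEAAQ' -- padded out to whole 32-bit words
print(b64toint('AAEAAQ'))   # 65537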
Here is something that may help. Instead of using struct.pack() I am building a string of bytes to encode and then calling the BASE64 encode on that. I didn't write the decode, but clearly the decode can recover an identical string of bytes and a loop could recover the original value. I don't know if you need fixed-size integers (like always 128-bit) and I don't know if you need Big Endian so I left the decoder for you.
Also, encode64() and decode64() are from @msc's answer, but modified to work.
import base64
import struct

def encode64(n):
    data = struct.pack('<Q', n).rstrip('\x00')
    if len(data) == 0:
        data = '\x00'
    s = base64.urlsafe_b64encode(data).rstrip('=')
    return s

def decode64(s):
    data = base64.urlsafe_b64decode(s + '==')
    n = struct.unpack('<Q', data + '\x00' * (8 - len(data)))
    return n[0]

def encode(n, big_endian=False):
    lst = []
    while True:
        n, lsb = divmod(n, 0x100)
        lst.append(chr(lsb))
        if not n:
            break
    if big_endian:
        # I have not tested Big Endian mode, and it may need to have
        # some initial zero bytes prepended; like, if the integer is
        # supposed to be a 128-bit integer, and you encode a 1, you
        # would need this to have 15 leading zero bytes.
        initial_zero_bytes = '\x00' * 2
        data = initial_zero_bytes + ''.join(reversed(lst))
    else:
        data = ''.join(lst)
    s = base64.urlsafe_b64encode(data).rstrip('=')
    return s

print encode(1234567890098765432112345678900987654321)
I am trying to get a grip on packing and unpacking binary data in Python 3. It's actually not that hard to understand, except for one problem:
what if I have a variable-length text string and want to pack and unpack it in the most elegant manner?
As far as I can tell from the manual, I can only unpack fixed-size strings directly? In that case, is there any elegant way of getting around this limitation without padding lots and lots of unnecessary zeroes?
The struct module does only support fixed-length structures. For variable-length strings, your options are either:
Dynamically construct your format string (a str will have to be converted to a bytes before passing it to pack()):
s = bytes(s, 'utf-8') # Or other appropriate encoding
struct.pack("I%ds" % (len(s),), len(s), s)
Skip struct and just use normal string methods to add the string to your pack()-ed output: struct.pack("I", len(s)) + s
For unpacking, you just have to unpack a bit at a time:
(i,), data = struct.unpack("I", data[:4]), data[4:]
s, data = data[:i], data[i:]
If you're doing a lot of this, you can always add a helper function which uses calcsize to do the string slicing:
def unpack_helper(fmt, data):
    size = struct.calcsize(fmt)
    return struct.unpack(fmt, data[:size]), data[size:]
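For example, a hypothetical length-prefixed record could then be consumed like this:

import struct

data = struct.pack("I", 3) + b"abcXYZ"      # 4-byte length, then payload plus trailing data
(n,), data = unpack_helper("I", data)       # n == 3
s, data = data[:n], data[n:]                # s == b"abc", data == b"XYZ"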
I've googled this question and found a couple of solutions.
construct
An elaborate, flexible solution.
Instead of writing imperative code to parse a piece of data, you declaratively define a data structure that describes your data. As this data structure is not code, you can use it in one direction to parse data into Pythonic objects, and in the other direction, convert (“build”) objects into binary data.
The library provides both simple, atomic constructs (such as integers of various sizes), as well as composite ones which allow you form hierarchical structures of increasing complexity. Construct features bit and byte granularity, easy debugging and testing, an easy-to-extend subclass system, and lots of primitive constructs to make your work easier:
Updated for Python 3.x and construct 2.10.67; the library now has a native PascalString, so the example was renamed:
from construct import *
myPascalString = Struct(
    "length" / Int8ul,
    "data" / Bytes(lambda ctx: ctx.length)
)
>>> myPascalString.parse(b'\x05helloXXX')
Container(length=5, data=b'hello')
>>> myPascalString.build(Container(length=6, data=b"foobar"))
b'\x06foobar'
myPascalString2 = ExprAdapter(myPascalString,
    encoder=lambda obj, ctx: Container(length=len(obj), data=obj),
    decoder=lambda obj, ctx: obj.data
)
>>> myPascalString2.parse(b"\x05hello")
b'hello'
>>> myPascalString2.build(b"i'm a long string")
b"\x11i'm a long string"
Edit: Also pay attention to that ExprAdapter; once the native PascalString stops doing what you need, this is what you will end up writing.
netstruct
A quick solution if you only need a struct extension for variable-length byte sequences. Nesting variable-length structures can be achieved by packing the result of the first pack.
NetStruct supports a new formatting character, the dollar sign ($). The dollar sign represents a variable-length string, encoded with its length preceding the string itself.
Edit: It looks like the length of a variable-length string uses the same data type as the preceding element. Thus, the maximum length of a variable-length string of bytes is 255; for 16-bit words it is 65535, and so on.
>>> import netstruct
>>> netstruct.pack(b"b$", b"Hello World!")
b'\x0cHello World!'
>>> netstruct.unpack(b"b$", b"\x0cHello World!")
[b'Hello World!']
An easy way I found to pack a string of variable length is:
pack('{}s'.format(len(string)), string)
When unpacking, it is much the same:
unpack('{}s'.format(len(data)), data)
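A minimal round trip of that idea (the string has to be bytes, and at unpack time the length is taken from the buffer itself):

import struct

msg = b"variable length"
packed = struct.pack('{}s'.format(len(msg)), msg)
(unpacked,) = struct.unpack('{}s'.format(len(packed)), packed)
assert unpacked == msg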
Here are some wrapper functions I wrote which help; they seem to work.
Here's the unpacking helper:
import re
import struct

def unpack_from(fmt, data, offset=0):
    # '@' (native byte order) is the default when no byte-order prefix is given
    (byte_order, fmt, args) = (fmt[0], fmt[1:], ()) if fmt and fmt[0] in ('@', '=', '<', '>', '!') else ('@', fmt, ())
    fmt = filter(None, re.sub("p", "\tp\t", fmt).split('\t'))
    for sub_fmt in fmt:
        if sub_fmt == 'p':
            (str_len,) = struct.unpack_from('B', data, offset)
            sub_fmt = str(str_len + 1) + 'p'
            sub_size = str_len + 1
        else:
            sub_fmt = byte_order + sub_fmt
            sub_size = struct.calcsize(sub_fmt)
        args += struct.unpack_from(sub_fmt, data, offset)
        offset += sub_size
    return args
Here's the packing helper:
def pack(fmt, *args):
    # '@' (native byte order) is the default when no byte-order prefix is given
    (byte_order, fmt, data) = (fmt[0], fmt[1:], b'') if fmt and fmt[0] in ('@', '=', '<', '>', '!') else ('@', fmt, b'')
    fmt = filter(None, re.sub("p", "\tp\t", fmt).split('\t'))
    for sub_fmt in fmt:
        if sub_fmt == 'p':
            (sub_args, args) = ((args[0],), args[1:]) if len(args) > 1 else ((args[0],), [])
            sub_fmt = str(len(sub_args[0]) + 1) + 'p'
        else:
            (sub_args, args) = (args[:len(sub_fmt)], args[len(sub_fmt):])
            sub_fmt = byte_order + sub_fmt
        data += struct.pack(sub_fmt, *sub_args)
    return data
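A quick round trip with these wrappers, using a Pascal-style 'p' field (assuming the Python 3 adjustments above):

buf = pack('>Hp', 10, b'hello')     # b'\x00\n\x05hello'
print(unpack_from('>Hp', buf))      # (10, b'hello')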
To pack use
packed=bytes('sample string','utf-8')
To unpack use
string=str(packed)[2:][:-1]
This works only for UTF-8 strings and is quite a simple workaround.
Nice, but it can't handle a numeric field count, such as '6B' for 'BBBBBB'. The solution would be to expand the format string in both functions before use. I came up with this:
def pack(fmt, *args):
    fmt = re.sub(r'(\d+)([^\ds])', lambda x: x.group(2) * int(x.group(1)), fmt)
    ...
And the same for unpack. Maybe not the most elegant, but it works :)
Another silly but very simple approach (PS: as others have mentioned, there is no pure pack/unpack support for this, so keep that in mind):
import struct

def pack_variable_length_string(s: str) -> bytes:
    str_bytes = s.encode('UTF-8')
    str_size_bytes = struct.pack('!Q', len(str_bytes))  # length of the encoded bytes, not the str
    return str_size_bytes + str_bytes

def unpack_variable_length_string(sb: bytes, offset=0) -> (str, int):
    str_size_bytes = struct.unpack('!Q', sb[offset:offset + 8])[0]
    return sb[offset + 8:offset + 8 + str_size_bytes].decode('UTF-8'), 8 + str_size_bytes + offset

if __name__ == '__main__':
    b = pack_variable_length_string('Worked maybe?') + \
        pack_variable_length_string('It seems it did?') + \
        pack_variable_length_string('Are you sure?') + \
        pack_variable_length_string('Surely.')
    next_offset = 0
    for i in range(4):
        s, next_offset = unpack_variable_length_string(b, next_offset)
        print(s)