Is there any way to add two bytes with overflow in Python?

I am using pySerial to read in data from an attached device. I want to calculate the checksum of each received packet. The packet is read in as a char array, with the actual checksum being the very last byte, at the end of the packet. To calculate the checksum, I would normally sum over the packet payload, and then compare it to the actual checksum.
Normally in a language like C, we would expect overflow, because the checksum itself is only one byte. I'm not sure about the internals of Python, but from my experience with the language it looks like it will default to a larger-size variable (maybe some internal bigint class or something). Is there any way to mimic the expected behavior of adding two chars, without writing my own implementation? Thanks.

Sure, just take the modulus of your result to fit it back in the size you want. You can do the modulus at the end or at every step. For example:
>>> payload = [100, 101, 102, 103, 104]  # arbitrary sequence of bytes
>>> sum(payload) % 256  # modulo 256 to make the answer fit in a single byte
254
That 254 would be your checksum.

To improve upon the earlier example, just bitwise-AND the sum with 0xFF; for a power-of-two modulus this gives the same result for non-negative values. I'm not sure whether Python performs this optimization by default or not.
sum(payload) & 0xFF

Summing the bytes and then taking the modulus, as in sum(payload) % 256 (or sum(payload) & 0xFF), is, in many programming languages, vulnerable to integer overflow, since there is a finite maximum value that integer types can represent.
But, since we are talking about Python, this is not technically an issue: Python integers are arbitrary-precision, so an integer overflow can't occur.
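For instance:
>>> sum([255] * 10**6)  # a sum far too large for a C char, but exact in Python
255000000
>>> sum([255] * 10**6) % 256  # reduce to a single byte only at the end
192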
If you want to perform the modulus operation on an element-by-element basis, you can use functools.reduce():
>>> payload = [100, 101, 102, 103, 104]  # arbitrary sequence of bytes
>>> import functools  # Python 3 moved reduce() from the builtins into functools
>>> functools.reduce(lambda x, y: (x + y) % 256, payload)
254
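Putting it together for the original pySerial problem, here is a minimal sketch of verifying a received packet, assuming the checksum byte is simply the payload sum modulo 256 (some protocols use a two's-complement or XOR checksum instead, so check your device's documentation):

def checksum_ok(packet):
    """True if the last byte equals the sum of the preceding bytes mod 256."""
    *payload, expected = packet
    return sum(payload) % 256 == expected

# Example: payload 100..104 sums to 510, and 510 % 256 == 254.
assert checksum_ok(bytes([100, 101, 102, 103, 104, 254]))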


Switching endianness in the middle of a struct.unpack format string

I have a bunch of binary data (the contents of a video game save-file, as it happens) where a part of the data contains both little-endian and big-endian integer values. Naively, without reading much of the docs, I tried to unpack it this way...
struct.unpack(
    '3sB<H<H<H<H4s<I<I32s>IbBbBbBbB12s20sBB4s',
    string_data
)
...and of course I got this cryptic error message:
struct.error: bad char in struct format
The problem is that struct.unpack format strings do not expect individual fields to be marked with endianness. The actually correct format-string here would be something like
struct.unpack(
    '<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s',
    string_data
)
except that this will flip the endianness of the third I field (parsing it as little-endian, when I really want to parse it as big-endian).
Is there an easy and/or "Pythonic" solution to my problem? I have already thought of three possible solutions, but none of them is particularly elegant. In the absence of better ideas I'll probably go with number 3:
1. I could extract a substring and parse it separately (note that struct.unpack() returns a tuple even for a single field, hence the [0]):
(my.f1, my.f2, ...) = struct.unpack('<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s', string_data)
my.f11 = struct.unpack('>I', string_data[56:60])[0]
2. I could swap the bytes of the field after the fact (a sketch of the swap32 helper follows below):
(my.f1, my.f2, ...) = struct.unpack('<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s', string_data)
my.f11 = swap32(my.f11)
3. I could just change my downstream code to expect this field to be represented differently: it's actually a bitmask, not an arithmetic integer, so it wouldn't be too hard to flip around all the bitmasks I'm using with it; but the big-endian versions of these bitmasks are more mnemonically relevant than the little-endian versions.
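swap32 is not a standard-library function; here is a minimal sketch of such a helper (one possible implementation, using struct itself to reverse the byte order of a 32-bit unsigned value):

import struct

def swap32(x):
    # Repack as big-endian, then reinterpret as little-endian (works both ways).
    return struct.unpack('<I', struct.pack('>I', x))[0]

assert swap32(0x01020304) == 0x04030201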
A little late to the party, but I just had the same problem. I solved it with a custom numpy dtype, which allows mixing fields with different endianness (see https://numpy.org/doc/stable/reference/generated/numpy.dtype.html):
import numpy as np

t = np.dtype('>u4,<u4')            # compound type: a big-endian and a little-endian 4-byte unsigned int
a = np.zeros(shape=1, dtype=t)     # create an array of length one with the above type
a[0][0] = 1                        # assign the first (big-endian) uint
a[0][1] = 1                        # assign the second (little-endian) uint
buf = a.tobytes()                  # buf == b'\x00\x00\x00\x01\x01\x00\x00\x00'
b = np.frombuffer(buf, dtype=t)    # yields array([(1, 1)], dtype=[('f0', '>u4'), ('f1', '<u4')])
c = np.frombuffer(buf, dtype=np.uint32)  # yields array([16777216, 1]) on a little-endian machine

Bit order in python's struct.pack

When packing bytes with python's struct.pack, I was surprised that although my byte order is little-endian, my bit order appears to be big-endian. My most significant bytes appear on the right side in the output below, but the most significant bits of each byte appear on the left. (I'm using BitArray from bitstring to display the bits.)
In[23]: BitArray(struct.pack('B', 1)).bin
Out[23]:'00000001'
In[24]: BitArray(struct.pack('H', 1)).bin
Out[24]:'0000000100000000'
In[25]: sys.byteorder
Out[25]:'little'
This surprises me because I read here that "Bit order usually follows the same endianness as the byte order for a given computer system. That is, in a big endian system the most significant bit is stored at the lowest bit address; in a little endian system, the least significant bit is stored at the lowest bit address."
Am I interpreting it correctly that my bit order is the reverse of my byte order here?
Also, I know you can change the byte order using the > and <, but I guess there is no way to change the bit order?
Edit: For context, right now I'm writing a Python implementation of TCP communication with an ATI NetFT sensor, based on the protocol description starting on page B - 76 here. But this same question comes up frequently in my work implementing serial and network communications with all sorts of sensors. In this case, the protocol description says things like: set bit 2 of byte 16 to 1 to bias the sensor. I've been finding that bit 0 in Python does not correspond to the bit 0 that controls the bias -- the bit order in the byte seems to be flipped.
No, Python supplies no way to reverse the bit order - but you don't need to. The article made you overly paranoid ;-)
The endianness of byte order is normally invisible to software. If, e.g., you read a 2-byte short in C, the underlying hardware delivers a big-endian result regardless of the physical storage convention. Store 258 (0x0102) and you read 258 back, regardless of the storage's physical byte order. The only way you can tell the difference is to read (or write) part of an N-byte value in a chunk of less than N bytes. That's common enough in network protocols and portable storage formats, but rare outside those.
Similarly, the only way you could detect the endianness of physical bit order is if the machine were bit-addressable, so you could read one bit at a time directly. I don't know of any current machine that supports bit addressing, and even if there were such a beast, C supports no direct bit-level access anyway. If you read a byte at a time, the hardware will deliver the bytes in big-endian bit order, again regardless of the physical bit storage order.
If, e.g., you're poking a bit at a time into a bit-level serial port, then you'll need to know the convention the specific hardware requires. But in that case struct.pack() is useless anyway - the smallest unit struct.pack() manipulates is a byte, and at that level hardware bit-level ordering is invisible. For example, your struct.pack('B', 1) will unpack as 1 regardless of the bit-level endianness of the machine you run it on.
Bits of Code
Since "general principles" don't seem to be enough here, and there was no specific code presented to work with, here are bits of code that may be useful.
As mentioned in a comment, if you want to reverse a byte's bit order, the simplest and fastest way is to precompute a list with 256 items, mapping a byte to its bit-reversed value:
br = [int("{:08b}".format(i)[::-1], 2) for i in range(256)]
assert sorted(br) == list(range(256))
Then, e.g.,
>>> br[0], br[1], br[2], br[254], br[255]
(0, 128, 64, 127, 255)
If you're working with bytes objects, the .translate() method can use this table (after converting it to a bytes object) to convert the whole object with one call:
reverse_table = bytes(br)
and then, e.g.,
>>> original = bytes([0, 1, 2, 3, 254, 255])
>>> print([i for i in original.translate(reverse_table)])
[0, 128, 64, 192, 127, 255]
If instead you're building bytes a bit at a time (as in "set bit 2 of byte 16 to 1"), you can build them in "reversed order" (when appropriate) from the start. To build a byte in LSB 0 order, "setting bit i" means
byte |= 1 << i
To build a byte in MSB 0 order instead, it's
byte |= 1 << (7-i)
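For instance, applying this to the question's "set bit 2 of byte 16 to 1" (a sketch only, assuming the device counts bits MSB-first, which would match the observation that LSB 0 numbering didn't control the bias):

packet = bytearray(20)       # hypothetical fixed-size command packet
packet[16] |= 1 << (7 - 2)   # set "bit 2" of byte 16 in MSB 0 numbering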
But without knowing the precise details of the API(s) you're using, and how you like to work, it's really not possible to guess at the precise code you need.

Proper way for converting to bigendian for network submission

I need to get an int through the network. Is this the proper way to convert to bytes in big-endian?
pack("I",socket.htonl(integer_value))
I unpack it as:
socket.ntohl(unpack("I",data)[0])
I noticed that pack/unpack also accept the < and > modifiers for endianness conversion, so I am not sure whether I could just use those directly instead, or whether htonl is safer.
You should use only the struct module for communicating with another system. If you use htonl first as well, you'll end up transmitting an indeterminate byte order.
Since you need to convert the integer into a string of bytes in order to send it to another system, you'll need to use struct.pack (because htonl just returns a different integer than the one passed as argument and you cannot directly send an integer). And in using struct.pack you must choose an endianness for that string of bytes (if you don't specify one, you'll get a default ordering which may not be the same on the receiving side so you really need to choose one).
Converting an integer to a sequence of bytes in a definite order is exactly what struct.pack("!I", integer_value) does and a sequence of bytes in a definite order is exactly what you need on the receiving end.
On the other hand, if you use struct.pack("!I", socket.htonl(integer_value)), what does that do? Well, first it puts the integer into big-endian order (network byte order), then it takes your already big-endian integer and converts it to bytes in "big-endian order". But, on a little endian machine, that will actually reverse the ordering again, and you will end up transmitting the integer in little-endian byte order if you do both those two operations.
But on a big-endian machine htonl is a no-op, and then you're converting the result into bytes in big-endian order.
So using htonl actually defeats the purpose, and a receiving machine would have to know the byte order used on the sending machine in order to decode it properly. Observe...
Little-endian box:
>>> print(socket.htonl(27))
452984832
>>> print(struct.pack("!I", 27))
b'\x00\x00\x00\x1b'
>>> print(struct.pack("!I", socket.htonl(27)))
b'\x1b\x00\x00\x00'
Big-endian box:
>>> print(socket.htonl(27))
27
>>> print(struct.pack("!I", 27))
b'\x00\x00\x00\x1b'
>>> print(struct.pack("!I", socket.htonl(27)))
b'\x00\x00\x00\x1b'
struct.pack() and struct.unpack() use '!' in format strings to mean network byte order, but it's simply an alias for '>' (big-endian).
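A minimal round trip using struct alone, with no htonl/ntohl anywhere, which decodes correctly on any receiving machine:

import struct

value = 27
wire = struct.pack('!I', value)   # 4 bytes in network (big-endian) order
assert wire == b'\x00\x00\x00\x1b'
assert struct.unpack('!I', wire)[0] == value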

Where are python bytearrays used?

I recently came across the data type called bytearray in Python. Could someone provide scenarios where bytearrays are required?
This answer has been shamelessly ripped off from here
Example 1: Assembling a message from fragments
Suppose you're writing some network code that is receiving a large message on a socket connection. If you know about sockets, you know that the recv() operation doesn't wait for all of the data to arrive. Instead, it merely returns what's currently available in the system buffers. Therefore, to get all of the data, you might write code that looks like this:
# remaining = number of bytes being received (determined already)
msg = b""
while remaining > 0:
    chunk = s.recv(remaining)  # Get available data
    msg += chunk               # Add it to the message
    remaining -= len(chunk)
The only problem with this code is that concatenation (+=) has horrible performance. Therefore, a common performance optimization in Python 2 is to collect all of the chunks in a list and perform a join when you're done. Like this:
# remaining = number of bytes being received (determined already)
msgparts = []
while remaining > 0:
    chunk = s.recv(remaining)  # Get available data
    msgparts.append(chunk)     # Add it to list of chunks
    remaining -= len(chunk)
msg = b"".join(msgparts)       # Make the final message
Now, here's a third solution using a bytearray:
# remaining = number of bytes being received (determined already)
msg = bytearray()
while remaining > 0:
    chunk = s.recv(remaining)  # Get available data
    msg.extend(chunk)          # Add to message
    remaining -= len(chunk)
Notice how the bytearray version is really clean. You don't collect parts in a list and you don't perform that cryptic join at the end. Nice.
Of course, the big question is whether or not it performs. To test this out, I first made a list of small byte fragments like this:
chunks = [b"x"*16]*512
I then used the timeit module to compare the following two code fragments:
# Version 1
msgparts = []
for chunk in chunks:
    msgparts.append(chunk)
msg = b"".join(msgparts)

# Version 2
msg = bytearray()
for chunk in chunks:
    msg.extend(chunk)
When tested, version 1 of the code ran in 99.8s whereas version 2 ran in 116.6s (a version using += concatenation takes 230.3s by comparison). So while performing a join operation is still faster, it's only faster by about 16%. Personally, I think the cleaner programming of the bytearray version might make up for it.
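For reference, here is a sketch of how such a comparison might be run with timeit (the absolute numbers above are from the original author's machine and will vary):

import timeit

setup = 'chunks = [b"x" * 16] * 512'
version1 = '''
msgparts = []
for chunk in chunks:
    msgparts.append(chunk)
msg = b"".join(msgparts)
'''
version2 = '''
msg = bytearray()
for chunk in chunks:
    msg.extend(chunk)
'''
print(timeit.timeit(version1, setup=setup, number=10000))
print(timeit.timeit(version2, setup=setup, number=10000))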
Example 2: Binary record packing
This example is a slight twist on the last one. Suppose you had a large Python list of integer (x, y) coordinates. Something like this:
points = [(1,2),(3,4),(9,10),(23,14),(50,90),...]
Now, suppose you need to write that data out as a binary encoded file consisting of a 32-bit integer length followed by each point packed into a pair of 32-bit integers. One way to do it would be to use the struct module like this:
import struct
f = open("points.bin", "wb")
f.write(struct.pack("I", len(points)))
for x, y in points:
    f.write(struct.pack("II", x, y))
f.close()
The only problem with this code is that it performs a large number of small write() operations. An alternative approach is to pack everything into a bytearray and only perform one write at the end. For example:
import struct
f = open("points.bin", "wb")
msg = bytearray()
msg.extend(struct.pack("I", len(points)))
for x, y in points:
    msg.extend(struct.pack("II", x, y))
f.write(msg)
f.close()
Sure enough, the version that uses bytearray runs much faster. In a simple timing test involving a list of 100000 points, it runs in about half the time of the version that makes a lot of small writes.
Example 3: Mathematical processing of byte values
The fact that bytearrays present themselves as arrays of integers makes it easier to perform certain kinds of calculations. In a recent embedded systems project, I was using Python to communicate with a device over a serial port. As part of the communications protocol, all messages had to be signed with a Longitudinal Redundancy Check (LRC) byte. An LRC is computed by taking an XOR across all of the byte values.
Bytearrays make such calculations easy. Here's one version:
message = bytearray(...) # Message already created
lrc = 0
for b in message:
lrc ^= b
message.append(lrc) # Add to the end of the message
Here's a version that increases your job security:
import functools
message.append(functools.reduce(lambda x, y: x ^ y, message))
And here's the same calculation in Python 2 without bytearrays:
message = "..."  # Message already created
lrc = 0
for b in message:
    lrc ^= ord(b)
message += chr(lrc)  # Add the LRC byte
Personally, I like the bytearray version. There's no need to use ord() and you can just append the result at the end of the message instead of using concatenation.
Here's another cute example. Suppose you wanted to run a bytearray through a simple XOR-cipher. Here's a one-liner to do it:
>>> key = 37
>>> message = bytearray(b"Hello World")
>>> s = bytearray(x ^ key for x in message)
>>> s
bytearray(b'm@IIJ\x05rJWIA')
>>> bytearray(x ^ key for x in s)
bytearray(b"Hello World")
>>>
Here is a link to the presentation
A bytearray is very similar to a regular Python string (str in Python 2.x, bytes in Python 3) but with one important difference: whereas strings are immutable, bytearrays are mutable, a bit like a list of single-character strings.
This is useful because some applications use byte sequences in ways that perform poorly with immutable strings. When you are making lots of little changes in the middle of large chunks of memory, as in a database engine or an image library, strings perform quite poorly, since each change requires copying the whole (possibly large) string. Bytearrays have the advantage of making that kind of change possible without copying the memory first.
But this particular case is the exception rather than the rule. Most uses involve comparing strings or string formatting. For the latter there's usually a copy anyway, so a mutable type would offer no advantage; and for the former, since immutable strings cannot change, you can compute a hash of the string and compare it as a shortcut to comparing each byte in order, which is almost always a big win. That is why the immutable type (str or bytes) is the default, and bytearray the exception for when you need its special features.
If you look at the documentation for bytearray, it says:
Return a new array of bytes. The bytearray type is a mutable sequence of integers in the range 0 <= x < 256.
In contrast, the documentation for bytes says:
Return a new “bytes” object, which is an immutable sequence of integers in the range 0 <= x < 256. bytes is an immutable version of bytearray – it has the same non-mutating methods and the same indexing and slicing behaviors.
As you can see, the primary distinction is mutability: str and bytes methods that "change" the sequence actually return a new object with the desired modification, whereas bytearray methods that change the sequence really do change it in place.
You would prefer using bytearray if you are editing a large object (e.g. an image's pixel buffer) through its binary representation and you want the modifications to be done in place for efficiency.
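For instance (a tiny illustration, not from the original answer):

buf = bytearray(b"\x00" * 1024)  # stand-in for a large pixel buffer
buf[512] = 0xFF                  # modified in place; the other 1023 bytes are never copied
assert buf[512] == 0xFF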
Wikipedia provides an example of XOR cipher using Python's bytearrays (docstrings reduced):
#!/usr/bin/python2.7
from os import urandom

def vernam_genkey(length):
    """Generating a key"""
    return bytearray(urandom(length))

def vernam_encrypt(plaintext, key):
    """Encrypting the message."""
    return bytearray([ord(plaintext[i]) ^ key[i] for i in xrange(len(plaintext))])

def vernam_decrypt(ciphertext, key):
    """Decrypting the message"""
    return bytearray([ciphertext[i] ^ key[i] for i in xrange(len(ciphertext))])

def main():
    myMessage = """This is a topsecret message..."""
    print 'message:', myMessage
    key = vernam_genkey(len(myMessage))
    print 'key:', str(key)
    cipherText = vernam_encrypt(myMessage, key)
    print 'cipherText:', str(cipherText)
    print 'decrypted:', vernam_decrypt(cipherText, key)
    if vernam_decrypt(vernam_encrypt(myMessage, key), key) == myMessage:
        print('Unit Test Passed')
    else:
        print('Unit Test Failed - Check Your Python Distribution')

if __name__ == '__main__':
    main()

Binary data with pyserial(python serial port)

The serial.write() method in pySerial seems to only send string data. I have arrays like [0xc0, 0x04, 0x00] and want to be able to send/receive them via the serial port. Are there any separate methods for raw I/O?
I think I might need to change the arrays to ['\xc0', '\x04', '\x00'], but even then the null character might pose a problem.
An alternative method, without using the array module:
def a2s(arr):
    """Array of integer byte values --> binary string"""
    return ''.join(chr(b) for b in arr)
You need to convert your data to a string:
"\xc0\x04\x00"
Null characters are not a problem in Python -- strings are not null-terminated; the zero byte behaves just like any other byte "\x00".
One way to do this:
>>> import array
>>> array.array('B', [0xc0, 0x04, 0x00]).tostring()
'\xc0\x04\x00'
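A note beyond the original answers: in Python 3, array.array.tostring() was renamed to tobytes() (the old name was removed in Python 3.9), and pySerial's write() accepts bytes-like objects directly, so no string conversion is needed at all:

data = bytes([0xC0, 0x04, 0x00])  # embedded null bytes are fine in a bytes object
ser.write(data)                   # ser is an already-open serial.Serial instance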
I faced a similar (but arguably worse) issue, having to send control bits through a UART from a Python script to test an embedded device. My data definition was "field1: 8 bits, field2: 3 bits, field3: 7 bits", etc. It turns out you can build a robust and clean interface for this using the bitstring library's BitArray class. Here's a snippet (minus the serial set-up):
from bitstring import BitArray

cmdbuf = BitArray(length=50)   # a 50-bit BitArray (length counts bits, not bytes)
cmdbuf.overwrite('0xAA', 0)    # init the marker byte at the head

Here's where it gets flexible. The command below replaces the 4 bits at bit position 23 with the 4 bits passed; note that it takes a binary bit value, given in string form. I can set or clear any bits at any location in the buffer this way, without having to worry about stepping on values in adjacent bytes or bits.

cmdbuf.overwrite('0b0110', 23)

# To send on the (previously opened) serial port; pySerial expects a
# bytes-like object, so convert the BitArray first
ser.write(cmdbuf.tobytes())
