Convert binary to signed, little-endian 16-bit integer in Python

Trying to convert a binary list into signed 16-bit little-endian integers.
input_data = [['1100110111111011','1101111011111111','0010101000000011'],['1100111111111011','1101100111111111','0010110100000011']]
Desired output = [[-1074, -34, 810], [-1703, -39, 813]]
This is what I've got so far. It's been adapted from: Hex string to signed int in Python 3.2?,
Conversion from HEX to SIGNED DEC in python
results = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    convert = [int(y[4:6] + y[2:4], 16) for y in hex_convert]
    results.append(convert)
print(results)
Output: [[64461, 65502, 810], [64463, 65497, 813]]
This works fine, but the above are unsigned integers. I need signed integers capable of handling negative values. I then tried a different approach:
results_2 = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    to_bytes = [bytes(j, 'utf-8') for j in hex_convert]
    split_bits = [int(k, 16) for k in to_bytes]
    convert_2 = [int.from_bytes(b, byteorder='little', signed=True) for b in to_bytes]
    results_2.append(convert_2)
print(results_2)
Output: [[108191910426672, 112589973780528, 56282882144304], [108191943981104, 112589235583024, 56282932475952]]
This result is even wilder than the first. I know my approach is wrong, and it doesn't help that I've never been able to get my head around binary conversion etc., but I feel I'm on the right path with:
(b, byteorder = 'little', signed = True)
but can't work out where I'm wrong. Any help explaining this concept would be greatly appreciated.

This result is even wilder than the first. I know my approach is wrong... but can't work out where I'm wrong.
The problem is in the conversion to bytes. Let's look at it a step at a time:
int(x, 2)
Fine; we treat the string as a base-2 representation of the integer value, and get that integer. Only problem is it's a) unsigned and b) big-endian.
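For instance, with one of the sample strings:
>>> int('1101111011111111', 2)  # 0xdeff, read as an unsigned big-endian value
57087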
hex(int(x,2))
What this does is create a string representation of the integer, in base 16, with a 0x prefix. Notably, there are two text characters per byte that we want. This is already heading us down the wrong path.
You might have thought of using hexadecimal because you've seen \xAB style escapes inside string representations. This is a completely different thing. The string '\xAB' contains one character. The string '0xAB' contains four.
From there, everything else is still nonsense. Converting to bytes with a text encoding just means that the text character 0 for example is replaced with the byte value 48 (since in UTF-8 it's encoded with a single byte with that value). For this data we get the same results with UTF-8 that we would by assuming plain ASCII (since UTF-8 is "ASCII transparent" and there are no non-ASCII characters in the text).
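To see that concretely (illustrating with one of the intermediate strings):
>>> bytes('0xdeff', 'utf-8')  # six text characters, not two data bytes
b'0xdeff'
>>> b'0xdeff'[0]  # the character '0' is byte value 48
48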
So how do we do it?
We want to convert the integer from the first step into the bytes used to represent it. Just as there is a .from_bytes class method allowing us to create an integer from underlying bytes, there is an instance method allowing us to get the bytes that would represent the integer.
So, we use .to_bytes, specifying the length, signedness and endianness that were assumed when we created the int from the binary string - that gives us bytes corresponding to that string. Then we re-create the integer from those bytes, this time specifying the proper signedness and endianness. The reason .to_bytes makes us specify a length is that the integer doesn't have a particular length - there is a minimum number of bytes required to represent it, but you could use as many more as you like. (This is especially important if you want to handle signed values, since it will do sign-extension automatically.)
Thus:
results_2 = []
for i in input_data:
    values = [int(x, 2) for x in i]
    as_bytes = [x.to_bytes(2, byteorder='big', signed=False) for x in values]
    reinterpreted = [int.from_bytes(x, byteorder='little', signed=True) for x in as_bytes]
    results_2.append(reinterpreted)
But let's improve the organization of the code a bit. I will first make a function to handle a single integer value, and then we can use comprehensions to process the list. In fact, we can use nested comprehensions for the nested list.
def as_signed_little(binary_str):
    # This time, taking advantage of positional args and default values.
    as_bytes = int(binary_str, 2).to_bytes(2, 'big')
    return int.from_bytes(as_bytes, 'little', signed=True)

# And now we can do:
results_2 = [[as_signed_little(x) for x in i] for i in input_data]
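A quick check against one of the sample values from the question:
>>> as_signed_little('1101111011111111')
-34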

Related

Pack into C types and obtain the binary value back

I'm using the following code to pack an integer into an unsigned short:
import struct

raw_data = 40
# Pack into little endian
data_packed = struct.pack('<H', raw_data)
Now I'm trying to unpack the result as follows. I use utf-16-le since the data is encoded as little-endian.
import binascii

def get_bin_str(data):
    bin_asc = binascii.hexlify(data)
    result = bin(int(bin_asc.decode("utf-16-le"), 16))
    trimmed_res = result[2:]
    return trimmed_res

print(get_bin_str(data_packed))
Unfortunately, it throws the following error,
result = bin(int(bin_asc.decode("utf-16-le"), 16))
ValueError: invalid literal for int() with base 16: '㠲〰'
How do I properly decode the little-endian bytes to binary data?
Use unpack to reverse what you packed. The data isn't UTF-encoded so there is no reason to use UTF encodings.
>>> import struct
>>> data_packed = struct.pack('<H', 40)
>>> data_packed.hex() # the two little-endian bytes are 0x28 (40) and 0x00 (0)
'2800'
>>> data = struct.unpack('<H',data_packed)
>>> data
(40,)
unpack returns a tuple, so index it to get the single value
>>> data = struct.unpack('<H',data_packed)[0]
>>> data
40
To print in binary, use string formatting; either of the following works. bin() doesn't let you specify the number of binary digits to display, and its 0b prefix needs to be removed if not desired.
>>> format(data,'016b')
'0000000000101000'
>>> f'{data:016b}'
'0000000000101000'
You have not said what you are trying to do, so let's assume your goal is to educate yourself. (If you are trying to pack data that will be passed to another program, the only reliable test is to check if the program reads your output correctly.)
Python does not have an "unsigned short" type, so the output of struct.pack() is a byte array. To see what's in it, just print it:
>>> data_packed = struct.pack('<H', 40)
>>> print(data_packed)
b'(\x00'
What's that? Well, the character (, which is decimal 40 in the ascii table, followed by a null byte. If you had used a number that does not map to a printable ascii character, you'd see something less surprising:
>>> struct.pack("<H", 11)
b'\x0b\x00'
Where 0x0b is 11 in hex, of course. Wait, I specified "little-endian", so why is my number on the left? The answer is, it's not. Python prints the byte string left to right because that's how English is written, but that's irrelevant. If it helps, think of strings as growing upwards: from low memory locations to high. The least significant byte comes first, which makes this little-endian.
Anyway, you can also look at the bytes directly:
>>> print(data_packed[0])
40
Yup, it's still there. But what about the bits, you say? For this, use bin() on each of the bytes separately:
>>> bin(data_packed[0])
'0b101000'
>>> bin(data_packed[1])
'0b0'
The two high bits you see are worth 32 and 8. Your number was less than 256, so it fits entirely in the low byte of the short you constructed.
What's wrong with your unpacking code?
Just for fun let's see what your sequence of transformations in get_bin_str was doing.
>>> binascii.hexlify(data_packed)
b'2800'
Um, all right. Not sure why you converted to hex digits, but now you have 4 bytes, not two. (28 is the number 40 written in hex, the 00 is for the null byte.) In the next step, you call decode and tell it that these 4 bytes are actually UTF-16; there's just enough for two unicode characters, let's take a look:
>>> b'2800'.decode("utf-16-le")
'㠲〰'
In the next step Python finally notices that something is wrong, but by then it does not make much difference because you are pretty far away from the number 40 you started with.
To correctly read your data as a UTF-16 character, call decode directly on the byte string you packed.
>>> data_packed.decode("utf-16-le")
'('
>>> ord('(')
40

How to use Hashlib to MD5 hash a number?

It seems everyone is doing this:
import hashlib
# initializing string
str = "GeeksforGeeks"
# encoding GeeksforGeeks using encode()
# then sending to md5()
result = hashlib.md5(str.encode())
However, I want to hash plain numbers. Something like
result = hashlib.md5(0)
#or
var = 5
result = hashlib.md5(var)
isn't working, and I've tried lots of other variations. What's the right syntax?
Hashes operate on a sequence of bytes.
An integer in Python is simply a logical value; it has no definite size or byte-wise representation. If you want to hash numbers, you need to decide what form to put the number in before hashing it.
The simplest option would be to hash the string representation of the number: call str, encode the result to bytes (md5 needs bytes, not text), and hash that. E.g.
var = 5
hash_input = str(var).encode()
result = hashlib.md5(hash_input)
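For example:
>>> import hashlib
>>> hashlib.md5(str(5).encode()).hexdigest()
'e4da3b7fbbce2345d7772b0674a318d5'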
Another option would be to choose a fixed size, and hash the binary representation of the number:
import struct

var = 5
hash_input = struct.pack('<I', var)  # Little-endian 32-bit unsigned
result = hashlib.md5(hash_input)
The correct way to do this totally depends on what exactly you're trying to accomplish, which you haven't told us.
Hashing plain numbers is ambiguous, to say the least.
The number should be converted to bytes before being fed to the digest algorithm. The first problem you'll run into is how many bytes the number occupies: it could be 4 bytes, 8 bytes, or whatever. Then comes endianness, the order of the bytes in memory. All of these would result in different digests for seemingly the same number. (For simplicity, I've assumed the number is an int.)
>>> hashlib.md5(b'4').hexdigest()
'a87ff679a2f3e71d9181a67b7542122c'
>>> i = 4
>>> hashlib.md5(i.to_bytes(2, 'big')).hexdigest()
'c244b9cdf7853b5693a295e384c07367'
>>> hashlib.md5(i.to_bytes(4, 'big')).hexdigest()
'ea4959eb64a1f09be580d950964f3843'
>>> hashlib.md5(i.to_bytes(8, 'big')).hexdigest()
'59cff542fae7e0c4267e45740a12c9a0'
So, your solution would be to convert the int to str and encode it so you can get a digest from it.
# either this
>>> hashlib.md5(b'4').hexdigest()
# or
>>> hashlib.md5(str(4).encode()).hexdigest()
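Wrapped up as a tiny helper (md5_number is just a name for illustration):
import hashlib

def md5_number(n):
    # Hash the decimal string form of the number
    return hashlib.md5(str(n).encode()).hexdigest()

print(md5_number(4))  # a87ff679a2f3e71d9181a67b7542122c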
Hope that helps.

struct.pack error in Python 3 - struct.error: argument for 's' must be a bytes object

I know this question has been asked before, and some of the suggestions seem to be about needing a b to make the string a byte literal. However, I'm passing hex code to the function as 0x414243 to save it as ABC.
def _pack(_data, size):
    numofbytes = size / 8
    print("Chars Expected: " + str(numofbytes))
    formatString = "{}s".format(int(numofbytes))
    print("Formatted String: " + formatString)
    struct.pack(formatString, _data)

_pack(0x414243, 24)
I'm not sure what to change here; I'm wondering if it's a problem with how I'm using the formatString variable. I want the function to be able to work out how many chars are in the passed data from the size: in this case 24 bits = 3 bytes, so it formats 3s and passes 0x414243 to convert to ABC.
Can anyone advise how to get past the error?
As the error message says, struct.pack() wants a group of bytes and you're giving it an integer.
If you want to be able to pass the data in as an integer, convert it to bytes before packing it:
_data = _data.to_bytes(numofbytes, "big") # or "little", depending on endianness
Or just pass the data in as bytes when you call it:
_pack(b"\x41\x42\x43", 24)
If you have a string containing hexadecimal, such as "0x414243", you can convert it to an integer, then on to bytes:
_data = int(_data, 16).to_bytes(numofbytes, "big")
You might use isinstance() to allow your function to accept any of these formats:
if isinstance(_data, str):
    _data = int(_data, 16)
if isinstance(_data, int):
    _data = _data.to_bytes(numofbytes, "big")
By the way, your calculation of the number of bytes will produce a floating-point answer if size is not a multiple of 8. A fractional number of bytes is an error. To address this:
numofbytes = size // 8 + bool(size % 8)
The + bool(size % 8) bit adds one to the result of the integer division if there are any bits left over.
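Putting all of that together, a sketch of a fixed version (keeping your names; the isinstance checks let it accept an int, bytes, or a hex string):
import struct

def _pack(_data, size):
    numofbytes = size // 8 + bool(size % 8)  # round bits up to whole bytes
    if isinstance(_data, str):
        _data = int(_data, 16)  # e.g. "414243" or "0x414243"
    if isinstance(_data, int):
        _data = _data.to_bytes(numofbytes, "big")
    return struct.pack("{}s".format(numofbytes), _data)

print(_pack(0x414243, 24))  # b'ABC'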

Converting bytes to signed numbers in Python

I am faced with a problem in Python and I think I don't understand how signed numbers are handled in Python. My logic works in Java, where everything is signed, so I need some help in Python.
I have some bytes that are coded in hex and I need to decode them and interpret them as numbers. The protocol is defined.
Say the input may look like:
raw = '016402570389FFCF008F1205DB2206CA'
And I decode like this:
import binascii

bin_bytes = binascii.a2b_hex(raw)
lsb = bin_bytes[5] & 0xff
msb = bin_bytes[6] << 8
aNumber = int(lsb | msb)
print(" X: " + str(aNumber / 4000.0))
After dividing by 4000.0, X can be in a range of -0.000025 to +0.25. This logic works when X is in the positive range; when X is expected to be negative, I am getting back a positive number. I think I am not handling "msb" correctly when it is a signed number. How should I handle negative signed numbers in Python?
Any tips much appreciated.
You can use Python's struct module to convert the byte string to integers. It takes care of endianness and sign extension for you. I guess you are trying to interpret this 16-byte string as 8 2-byte signed integers, in big-endian byte order. The format string for this is '>8h'. The > character tells Python to interpret the string as big-endian, 8 means 8 of the following data type, and h means signed short integers.
import struct
nums = struct.unpack('>8h', bin_bytes)
Now nums is a tuple of integers that you can process further.
I'm not quite sure if your data is little or big endian. If it is little-endian, you can use < to indicate that in the struct.unpack format string.
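For example, with the raw string from the question:
>>> import binascii, struct
>>> bin_bytes = binascii.a2b_hex('016402570389FFCF008F1205DB2206CA')
>>> struct.unpack('>8h', bin_bytes)
(356, 599, 905, -49, 143, 4613, -9438, 1738)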

str_to_a32 - What does this function do?

I need to rewrite some Python script in Objective-C. It's not that hard since Python is easily readable, but this piece of code puzzles me a bit.
def str_to_a32(b):
    if len(b) % 4:
        # pad to multiple of 4
        b += '\0' * (4 - len(b) % 4)
    return struct.unpack('>%dI' % (len(b) / 4), b)
What is this function supposed to do?
I'm not positive, but I'm using the documentation to take a stab at it.
Looking at the docs, we're going to return a tuple based on the format string:
Unpack the string (presumably packed by pack(fmt, ...)) according to the given format. The result is a tuple even if it contains exactly one item. The string must contain exactly the amount of data required by the format (len(string) must equal calcsize(fmt)).
The item coming in (b) is probably a byte buffer (represented as a string) - looking at the examples, they are represented with the \x escape, which consumes the next two characters as hex.
It appears the format string is
'>%dI' % (len(b) / 4)
The % and %d are going to put a number into the format string, so if the length of b is 32 the format string becomes
`>8I`
The first part of the format string is >, which the documentation says is setting the byte order to big-endian and size to standard.
The I says it will be an unsigned int with size 4 (docs), and the 8 in front of it means it will be repeated 8 times.
>IIIIIIII
So I think this is saying: take this byte buffer, make sure it's a multiple of 4 by appending as many 0x00s as is necessary, then unpack that into a tuple with as many unsigned integers as there are blocks of 4 bytes in the buffer.
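In Python 3 terms (where the buffer would be a bytes object rather than a str), the same idea looks like this:
>>> import struct
>>> b = b'ABCDEF'  # 6 bytes, not a multiple of 4
>>> b += b'\0' * (4 - len(b) % 4)  # pad to 8 bytes
>>> struct.unpack('>%dI' % (len(b) // 4), b)
(1094861636, 1162215424)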
Looks like it's supposed to take an input array of bytes represented as a string and unpack them as big-endian (the '>') unsigned ints (the 'I'). The formatting codes are explained in http://docs.python.org/2/library/struct.html
This takes a string and converts it into a tuple of unsigned integers. If you look at the python struct documentation you will see how it works. In a nutshell, it handles conversions between Python values and C structs represented as Python strings, for handling binary data stored in files (unceremoniously copied from the link provided).
In your case, the function takes a string b and adds some extra characters to make its length a multiple of the standard size of an unsigned int (see link), and then converts it into a tuple of integers using the big-endian representation of the characters. That is the '>' part; the I part says to use unsigned integers.
