I am trying to convert this hex string to the correct big-endian INT32 value, which would be:
ffd7c477 --> -2636681
I checked how it should look here:
http://www.scadacore.com/tools/programming-calculators/online-hex-converter/
I don't know how to convert it. This is where the latitude is:
payload = "1901000a03010aff01ff01300a01ffd7c4750016c0540322ed"
latitude = payload[28:36] = ffd7c477
Here I get the wrong unsigned value:
int(payload[28:36], 16)
This worked (Python 2): struct.unpack('>i', "ffd7c477".decode('hex'))
Since struct uses your processor's native byte order by default when no prefix is given (you can check your system's byte order with sys.byteorder), you'll have to explicitly specify that you want to treat the given value as big-endian. The struct module allows you to do this:
import struct, codecs
val = "ffd7c477"
struct.unpack("!i", codecs.decode(val, "hex"))[0]  # -2636681
In the format string passed to unpack, ! means to treat the bytes as big-endian (network byte order) and i means to interpret them as a signed 32-bit integer.
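On Python 3, the same conversion can also be done without struct, using int.from_bytes with signed=True to get the two's-complement interpretation:

```python
val = "ffd7c477"
# bytes.fromhex turns the hex string into raw bytes;
# signed=True interprets them as a two's-complement int32
result = int.from_bytes(bytes.fromhex(val), byteorder="big", signed=True)
print(result)  # -2636681
```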
Related
So basically, I generate 16 random bytes and then convert them to Base64. I need to transform this Base64 into an int.
I've searched all over the internet; I found out how to convert to hex and many other formats, but none seem to work.
This is the code I use to generate the nonce:
import base64
import os

nonce = base64.encodebytes(os.urandom(16))
I need a function a bit like parseInt() in JavaScript. The result needs to be between -9223372036854775808 and 9223372036854775807.
There is a builtin method to convert bytes to int:
int.from_bytes(nonce, "big") # big endian
int.from_bytes(nonce, "little") # little endian
Python docs: https://docs.python.org/3/library/stdtypes.html#int.from_bytes
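Note that calling int.from_bytes directly on the base64-encoded nonce will produce a much larger number than the signed 64-bit range mentioned in the question. One way to stay inside that range (an assumption about the intent, not part of the answer above) is to decode back to the raw bytes and use only 8 of them with signed=True:

```python
import base64
import os

nonce = base64.encodebytes(os.urandom(16))

raw = base64.decodebytes(nonce)                      # the original 16 random bytes
value = int.from_bytes(raw[:8], "big", signed=True)  # fits in a signed 64-bit int
assert -2**63 <= value < 2**63
```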
I have a bunch of binary data (the contents of a video game save-file, as it happens) where a part of the data contains both little-endian and big-endian integer values. Naively, without reading much of the docs, I tried to unpack it this way...
struct.unpack(
'3sB<H<H<H<H4s<I<I32s>IbBbBbBbB12s20sBB4s',
string_data
)
...and of course I got this cryptic error message:
struct.error: bad char in struct format
The problem is that struct.unpack format strings do not allow individual fields to be marked with endianness. The nearest valid format string here would be something like
struct.unpack(
'<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s',
string_data
)
except that this will flip the endianness of the third I field (parsing it as little-endian, when I really want to parse it as big-endian).
Is there an easy and/or "Pythonic" solution to my problem? I have already thought of three possible solutions, but none of them is particularly elegant. In the absence of better ideas I'll probably go with number 3:

1. I could extract a substring and parse it separately:
   (my.f1, my.f2, ...) = struct.unpack('<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s', string_data)
   my.f11 = struct.unpack('>I', string_data[56:60])[0]
2. I could byte-swap the field after the fact:
   (my.f1, my.f2, ...) = struct.unpack('<3sBHHHH4sII32sIbBbBbBbB12s20sBB4s', string_data)
   my.f11 = swap32(my.f11)
3. I could just change my downstream code to expect this field to be represented differently: it's actually a bitmask, not an arithmetic integer, so it wouldn't be too hard to flip around all the bitmasks I'm using with it; but the big-endian versions of these bitmasks are more mnemonically relevant than the little-endian versions.
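The swap32 helper in option 2 is not a builtin; a minimal sketch of one, using struct itself:

```python
import struct

def swap32(x):
    """Reverse the byte order of a 32-bit unsigned integer."""
    # Pack big-endian, then reinterpret the same four bytes as little-endian.
    return struct.unpack("<I", struct.pack(">I", x))[0]
```

For example, swap32(0x12345678) gives 0x78563412, and applying it twice returns the original value.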
A little late to the party, but I just had the same problem. I solved it with a custom numpy dtype, which allows mixing elements with different endianness (see https://numpy.org/doc/stable/reference/generated/numpy.dtype.html):
import numpy as np

t = np.dtype('>u4,<u4')        # compound type: two 4-byte unsigned ints with different byte orders
a = np.zeros(shape=1, dtype=t) # create an array of length one with the above type
a[0][0] = 1                    # assign the first uint (big-endian)
a[0][1] = 1                    # assign the second uint (little-endian)
buf = a.tobytes()              # buf is b'\x00\x00\x00\x01\x01\x00\x00\x00'
b = np.frombuffer(buf, dtype=t)          # yields array([(1, 1)], ...)
c = np.frombuffer(buf, dtype=np.uint32)  # on a little-endian machine: array([16777216, 1])
I need to get an int through the network. Is this the proper way to convert to bytes in big-endian?
pack("I",socket.htonl(integer_value))
I unpack it as:
socket.ntohl(unpack("I",data)[0])
I noticed that pack/unpack also accept < and > for endianness, so I am not sure if I could just use that directly instead, or if htonl is safer.
You should use only the struct module for communicating with another system. By using the htonl first, you'll end up with an indeterminate order being transmitted.
Since you need to convert the integer into a string of bytes in order to send it to another system, you'll need to use struct.pack (because htonl just returns a different integer than the one passed as argument and you cannot directly send an integer). And in using struct.pack you must choose an endianness for that string of bytes (if you don't specify one, you'll get a default ordering which may not be the same on the receiving side so you really need to choose one).
Converting an integer to a sequence of bytes in a definite order is exactly what struct.pack("!I", integer_value) does and a sequence of bytes in a definite order is exactly what you need on the receiving end.
On the other hand, if you use struct.pack("!I", socket.htonl(integer_value)), what does that do? Well, first it puts the integer into big-endian order (network byte order), then it takes your already big-endian integer and converts it to bytes in "big-endian order". But, on a little endian machine, that will actually reverse the ordering again, and you will end up transmitting the integer in little-endian byte order if you do both those two operations.
But on a big-endian machine htonl is a no-op, and then you're converting the result into bytes in big-endian order.
So using htonl actually defeats the purpose, and a receiving machine would have to know the byte order used on the sending machine in order to decode it properly. Observe...
Little-endian box:
>>> print(socket.htonl(27))
452984832
>>> print(struct.pack("!I", 27))
b'\x00\x00\x00\x1b'
>>> print(struct.pack("!I", socket.htonl(27)))
b'\x1b\x00\x00\x00'
Big-endian box:
>>> print(socket.htonl(27))
27
>>> print(struct.pack("!I", 27))
b'\x00\x00\x00\x1b'
>>> print(struct.pack("!I", socket.htonl(27)))
b'\x00\x00\x00\x1b'
struct.unpack() uses '!' in the format specifiers for network byte order. But it's the same as '>'...
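Putting this together, a minimal struct-only round trip looks like this (socket calls omitted; data stands in for whatever arrives over the wire):

```python
import struct

integer_value = 27

# Sender: pack into network (big-endian) byte order; no htonl needed.
data = struct.pack("!I", integer_value)   # b'\x00\x00\x00\x1b'

# Receiver: unpack with the same format; no ntohl needed.
received = struct.unpack("!I", data)[0]
assert received == integer_value
```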
I'm trying to send a float as a series of 4 bytes over serial.
I have code that looks like this which works:
ser.write(b'\xcd') #sending the byte representation of 0.1
ser.write(b'\xcc')
ser.write(b'\xcc')
ser.write(b'\x3d')
but I want to be able to send an arbitrary float.
I also want to be able to go through each byte individually so this won't do for example:
bytes = struct.pack('f',float(0.1))
ser.write(bytes)
because I want to check each byte.
I'm using python 2.7
How can I do this?
You can use the struct module to pack the float as binary data, then loop through each byte of the result and write it to your output. On Python 2.7 iterating over the packed str yields one-character strings, which is what ser.write expects:
import struct

value = 13.37  # arbitrary float
data = struct.pack('f', value)
for b in data:
    ser.write(b)
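Note that on Python 3 iterating over bytes yields ints, so ser.write(b) would fail there. A sketch that behaves the same on both versions, using bytearray so each byte can be inspected first (ser is assumed to be the pyserial object from the question, so the write is left commented out):

```python
import struct

value = 0.1
packed = struct.pack('<f', value)   # explicit little-endian, 4 bytes
for b in bytearray(packed):         # bytearray yields ints on both Python 2 and 3
    print(hex(b))                   # inspect each byte: 0xcd 0xcc 0xcc 0x3d
    # ser.write(bytes([b]))         # Python 3 (on Python 2: ser.write(chr(b)))
```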
In Python 2.7.5, I have a hex value 0xbba1, and I want to convert it to bytestring format.
>>> bytetoint = lambda bytestr: struct.unpack('H', bytestr)[0]
>>> hextobyte = lambda hexnum: struct.pack('H', hexnum)
>>> hextobyte(0xbba1)
'\xa1\xbb'
>>> hex(bytetoint('\xa1\xbb'))
'0xbba1'
Why are the first byte '\xa1' and the second byte '\xbb' switched in place?
How can I get the right bytestring from hex, or vice versa?
e.g. 0xbba1 -> '\xbb\xa1'
'\xbb\xa1' -> 0xbba1
It's a little-endian/big-endian thing. You can't really say the bytes are switched, because nothing in the int definition says what order the bytes representing it are laid out in.
The result you have is a perfectly usable little-endian representation. If you want to force big-endian, which may look better to a human reader, you can specify the byte order with >:
>>> import struct
>>> struct.pack('>H', 0xbba1)
'\xbb\xa1'
>>> hex(struct.unpack('>H', '\xbb\xa1')[0])
'0xbba1'
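On Python 3, the same round trip can also be done without struct, using the int methods:

```python
value = 0xbba1
data = value.to_bytes(2, "big")     # b'\xbb\xa1'
back = int.from_bytes(data, "big")  # 48033 == 0xbba1
assert back == value
```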
First read about endianness so that you understand where this problem is coming from. On a typical x86-based computer with a little-endian CPU, the correct in-memory representation of int(0xbba1) is the two bytes a1 bb, in that order.
If you really want to decode a byte string from the opposite big-endian order, see this section of the struct docs:
bytestring = '\xbb\xa1'
hex( struct.unpack('>H', bytestring)[0] )