The function I'm writing gets a checksum (format: '*76') as a string (isolated from an NMEA sentence). This checksum in string format is called 'Obs' (observed from the string). The function then computes the checksum from the rest of the sentence and gets that answer as an integer, which the terminal shows as hex (0x76); this will be called 'Com' (computed from the string). Now I need to convert one to the other to compare them against each other.
I've tried stuff like:
HexObs = hex(Obs) #with Obs as '0x76' and '0*76'
Which gives me an error.
and
StrCom = str(Com)
Which gives: '118'
I couldn't find any previous question that matched mine.
Does anyone know how to convert one to the other? Thanks in advance.
I think your problem is getting the original into an actual hex form
tobs = '76'
obs = hex(int('0x' + tobs, 16))
that will give you an actual hex value to compare
alternately you could use:
tobs = '76'
com = '0x76'
tcom = com[2:]
then compare tobs & tcom
To go from a string hex representation to an integer, use:
>>> int('0x76', 16)
118
The second argument is the base.
To go from an integer to a string hex representation, use:
>>> hex(118)
'0x76'
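Putting both answers together for the original checksum problem (a minimal sketch; the names Obs and Com come from the question, and stripping the leading '*' is an assumption about the NMEA field described there):

Obs = '*76'     # checksum field isolated from the NMEA sentence
Com = 0x76      # checksum computed from the sentence body (an int, i.e. 118)

# Either compare as integers...
if int(Obs[1:], 16) == Com:
    print("checksum OK (compared as integers)")

# ...or compare as two-digit hex strings.
if Obs[1:].upper() == format(Com, '02X'):
    print("checksum OK (compared as strings)")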
Trying to convert a binary list into signed 16-bit little-endian integers
input_data = [['1100110111111011','1101111011111111','0010101000000011'],['1100111111111011','1101100111111111','0010110100000011']]
Desired Output =[[-1074, -34, 810],[-1703, -39, 813]]
This is what I've got so far. It's been adapted from: Hex string to signed int in Python 3.2?,
Conversion from HEX to SIGNED DEC in python
results = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    convert = [int(y[4:6] + y[2:4], 16) for y in hex_convert]
    results.append(convert)
print(results)
output: [[64461, 65502, 810], [64463, 65497, 813]]
This works fine, but the above are unsigned integers. I need signed integers capable of handling negative values. I then tried a different approach:
results_2 = []
for i in input_data:
    hex_convert = [hex(int(x, 2)) for x in i]
    to_bytes = [bytes(j, 'utf-8') for j in hex_convert]
    split_bits = [int(k, 16) for k in to_bytes]
    convert_2 = [int.from_bytes(b, byteorder='little', signed=True) for b in to_bytes]
    results_2.append(convert_2)
print(results_2)
Output: [[108191910426672, 112589973780528, 56282882144304], [108191943981104, 112589235583024, 56282932475952]]
This result is even more wild than the first. I know my approach is wrong, and it doesn't help that I've never been able to get my head around binary conversion etc., but I feel I'm on the right path with:
(b, byteorder = 'little', signed = True)
but I can't work out where I'm wrong. Any help explaining this concept would be greatly appreciated.
This result is even more wild than the first. I know my approach is wrong... but can't work out where I'm wrong.
The problem is in the conversion to bytes. Let's look at it a step at a time:
int(x, 2)
Fine; we treat the string as a base-2 representation of the integer value, and get that integer. Only problem is it's a) unsigned and b) big-endian.
hex(int(x,2))
What this does is create a string representation of the integer, in base 16, with a 0x prefix. Notably, there are two text characters per byte that we want. This is already heading us down the wrong path.
You might have thought of using hexadecimal because you've seen \xAB style escapes inside string representations. This is a completely different thing. The string '\xAB' contains one character. The string '0xAB' contains four.
From there, everything else is still nonsense. Converting to bytes with a text encoding just means that the text character 0 for example is replaced with the byte value 48 (since in UTF-8 it's encoded with a single byte with that value). For this data we get the same results with UTF-8 that we would by assuming plain ASCII (since UTF-8 is "ASCII transparent" and there are no non-ASCII characters in the text).
So how do we do it?
We want to convert the integer from the first step into the bytes used to represent it. Just as there is a .from_bytes class method allowing us to create an integer from underlying bytes, there is an instance method allowing us to get the bytes that would represent the integer.
So, we use .to_bytes, specifying the length, signedness and endianness that was assumed when we created the int from the binary string - that gives us bytes that correspond to that string. Then, we re-create the integer from those bytes, except now specifying the proper signedness and endianness. The reason that .to_bytes makes us specify a length is because the integer doesn't have a particular length - there are a minimum number of bytes required to represent it, but you could use as many more as you like. (This is especially important if you want to handle signed values, since it will do sign-extension automatically.)
Thus:
results_2 = []
for i in input_data:
    values = [int(x, 2) for x in i]
    as_bytes = [x.to_bytes(2, byteorder='big', signed=False) for x in values]
    reinterpreted = [int.from_bytes(x, byteorder='little', signed=True) for x in as_bytes]
    results_2.append(reinterpreted)
But let's improve the organization of the code a bit. I will first make a function to handle a single integer value, and then we can use comprehensions to process the list. In fact, we can use nested comprehensions for the nested list.
def as_signed_little(binary_str):
    # This time, taking advantage of positional args and default values.
    as_bytes = int(binary_str, 2).to_bytes(2, 'big')
    return int.from_bytes(as_bytes, 'little', signed=True)

# And now we can do:
results_2 = [[as_signed_little(x) for x in i] for i in input_data]
Say I have 2 hex values that were taken in from keyboard input.
E.g. val1 = 0x000000 and val2 = 0xFF0000
and I do,
print(hex(val1))
print(hex(val2))
I get the right output for val2, but val1 is just 0x0. How do I get it to print the whole value?
By whole value, I mean, if the inputted value is 0x000000, I want to output the value as 0x000000 and not 0x0.
Use the format() built-in function to show the hex representation to the expected level of precision:
>>> format(53, '08x')
'00000035'
>>> format(1234567, '08x')
'0012d687'
The format code 08x means "eight lowercase hex digits padded with leading zeros".
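Applied to the values from the question (a small sketch; the six-digit width is an assumption based on the 0x000000 / 0xFF0000 inputs shown above):

val1 = 0x000000
val2 = 0xFF0000

# Pad to six hex digits and add the prefix by hand...
print('0x' + format(val1, '06x'))   # 0x000000
print('0x' + format(val2, '06x'))   # 0xff0000

# ...or let the '#' flag add the prefix; the width then counts the '0x' too.
print(format(val1, '#08x'))         # 0x000000
print(format(val2, '#08x'))         # 0xff0000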
You can pad the hex value to the specified number of digits by using the ljust method.
Whenever the string is less than the specified width, it'll append the specified filler character to extend it.
In your example, hex(0x0).ljust(8, '0') == "0x000000".
Strings that are already long enough are preserved so that 0xFF0000 will still work.
print(hex(0x000000).ljust(8, '0')) # Prints 0x000000
print(hex(0xFF0000).ljust(8, '0')) # Prints 0xFF0000
A couple of important things to note that have bitten me in the past:
Make sure your width includes the length of the leading "0x"
This is because the ljust method operates on raw text and doesn't know it's a hex string
If you give a width value shorter than you need, the strings won't be filled up enough and will have different lengths.
In other words, len(hex(0x000000).ljust(4, '0')) != len(hex(0xFF0000).ljust(4, '0')), because you need a width of 8 characters to fit both cases
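A quick demonstration of that pitfall (purely illustrative, using the values from the question; a width of 4 is deliberately too small here):

print(len(hex(0x000000).ljust(4, '0')))   # 4 -- padded, but only to 4 characters
print(len(hex(0xFF0000).ljust(4, '0')))   # 8 -- already longer than 4, left unchanged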
You say the user typed this input in on the keyboard. That means you already started with the strings you want, but you parsed integers out of the input and threw away the strings.
Don't do that. Keep the strings. Parse integers too if you need to do math, but keep the strings. Then you can just do
print(original_input)
without having to go through a reconstruction process and guess at how many leading zeros were originally input.
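A minimal sketch of that idea (the prompt text and variable names are just placeholders):

original_input = input("Enter a hex value: ")   # e.g. the user types 0x000000
value = int(original_input, 16)                 # parse only if you need to do math
# ... arithmetic with value ...
print(original_input)                           # echoes exactly what was typed, leading zeros intact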
I have a hex string, but I need to convert it to actual hex.
For example, I have this hex string:
3f4800003f480000
One way I could achieve my goal is by using escape sequences:
print("\x3f\x48\x00\x00\x3f\x48\x00\x00")
However, I can't do it this way, because I want to build my hex value from multiple variables.
My program's purpose is to:
take in a number for instance 100
multiply it by 100: 100 * 100 = 10000
convert it to hex 2710
add 0000
add 2710 again
add 0000 once more
The result I'm expecting is 2710000027100000. Now I need to pass this hexadecimal number as an argument to a function (as hexadecimal).
In Python, there is no separate 'hex' type; the hexadecimal notation of a number is represented as a str. You may check the type yourself by calling type() on the result of hex():
# v convert integer to hex
>>> type(hex(123))
<type 'str'>
But in order to mark the value as hexadecimal, Python prepends 0x to the string that represents the hexadecimal number. For example:
>>> hex(123)
'0x7b'
So, in your case in order to display your string as a hexadecimal equivalent, all you need is to prepend it with "0x" as:
>>> my_hex = "0x" + "3f4800003f480000"
This way, if you later want to convert it into some other form, say an integer (which, based on the nature of your problem statement, you'll definitely need), all you need to do is call int with base 16 as:
>>> int("0x3f4800003f480000", base=16)
4559894623774310400
In fact, Python's interpreter is smart enough: even if you don't prepend "0x", it will take care of it. For example:
>>> int("3f4800003f480000", base=16)
4559894623774310400
"0x" is all about representing the string is hexadecimal string in case someone is looking/debugging your code (in future), they'll get the idea. That's why it is preferred.
So my suggestion is to stick with Python's Hex styling, and don't convert it with escape characters as "\x3f\x48\x00\x00\x3f\x48\x00\x00"
From the Python's hex document :
Convert an integer number to a lowercase hexadecimal string prefixed with “0x”. If x is not a Python int object, it has to define an index() method that returns an integer.
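To tie this back to the steps listed in the question, here is one way that construction could look (a sketch only; the 4-digit width and the final int()/bytes.fromhex() calls are assumptions about what the receiving function expects):

n = 100 * 100                              # 10000
part = format(n, '04x')                    # '2710' -- hex digits, no '0x' prefix
hex_str = part + '0000' + part + '0000'
print(hex_str)                             # 2710000027100000

# If the function you call wants an integer, parse with base 16;
# if it wants raw bytes, bytes.fromhex() is another option.
as_int = int(hex_str, 16)
as_bytes = bytes.fromhex(hex_str)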
try binascii.unhexlify:
Return the binary data represented by the hexadecimal string hexstr.
example:
assert binascii.unhexlify('3f4800003f480000') == b"\x3f\x48\x00\x00\x3f\x48\x00\x00"
>>> hex(int('3f4800003f480000', 16))
'0x3f4800003f480000'
I'm fairly weak with structs but I have a feeling they're the best way to do this. I have a large string of binary data and need to pull 32 of those chars, starting at a specific index, and store them as an int. What is the best way to do this?
Since I need to start at an initial position, I have been playing with struct.unpack_from(). Based on the format table here, I thought the 'i' format, being 4 bytes, was exactly what I needed, but the code below executes and prints "(825307441,)", where I was expecting either the binary, decimal or hex form. Can anyone explain to me what 825307441 represents?
Also, is there a method of extracting the data in a similar fashion but returning it as a list instead of a tuple? Thank you
st = "1111111111111111111111111111111"
test = struct.unpack_from('i',st,0)
print test
Just use int
>>> st = "1111111111111111111111111111111"
>>> int(st,2)
2147483647
>>> int(st[1:4],2)
7
You can slice the string any way you want to get the indices you desire. Passing 2 to int tells int that the string you are passing it is in binary (base 2).
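As for the 825307441 the question asks about: struct read the first four characters of the string as raw bytes, and the character '1' is byte 0x31, so the unpacked value is 0x31313131. A quick check (an illustrative sketch added here, not part of the original answer; the .encode('ascii') is needed because struct wants bytes in Python 3):

import struct

st = "1111111111111111111111111111111"
# 'i' reads 4 bytes; the characters '1111' are the bytes 0x31 0x31 0x31 0x31
print(struct.unpack_from('<i', st.encode('ascii'), 0))   # (825307441,)
print(hex(825307441))                                    # 0x31313131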
How do I convert a floating-point number to base-16, with 8 hexadecimal digits per 32-bit floating-point number, in Python?
e.g.: input = 1.2717441261e+20, wanted output: 3403244E
If you want the byte values of the IEEE-754 representation, the struct module can do this:
>>> import struct
>>> f = 1.2717441261e+20
>>> struct.pack('f', f)
'\xc9\x9c\xdc`'
This is a string version of the bytes, which can then be converted into a string representation of the hex values:
>>> struct.pack('f', f).encode('hex')
'c99cdc60'
And, if you want it as a hex integer, parse it as such:
>>> s = struct.pack('f', f).encode('hex')
>>> int(s, 16)
3382500448
To display the integer as hex:
>>> hex(int(s, 16))
'0xc99cdc60'
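Note that .encode('hex') only works on Python 2 strings; in Python 3, bytes objects have a .hex() method instead. A small sketch of the same steps (the exact digit order depends on your machine's byte order, little-endian assumed here):

import struct

f = 1.2717441261e+20
s = struct.pack('f', f).hex()   # 'c99cdc60' on a little-endian machine
print(hex(int(s, 16)))          # 0xc99cdc60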
Note that this does not match the hex value in your question -- if your value is the correct one you want, please update the question to say how it is derived.
There are several possible ways to do so, but none of them leads to the result you wanted.
You can encode this float value into its IEEE binary representation. This indeed gives a 32-bit number (if you do it with single precision). But it leads to different results from yours, no matter which endianness I assume:
import struct
struct.pack("<f", 1.2717441261e+20).encode("hex")
# -> 'c99cdc60'
struct.pack(">f", 1.2717441261e+20).encode("hex")
# -> '60dc9cc9'
struct.unpack("<f", "3403244E".decode("hex"))
# -> (687918336.0,)
struct.unpack(">f", "3403244E".decode("hex"))
# -> (1.2213533295835077e-07,)
As that didn't match your result either, I'll take the approach from the other answers and include it here:
float.hex(1.2717441261e+20)
# -> '0x1.b939919e12808p+66'
This has nothing to do with 3403244E either, so maybe you want to clarify what exactly you mean.
There are surely other ways to do this conversion, but unless you specify which method you want, no one is likely to be able to help you.
There is something wrong with your expected output:
import struct
input = 1.2717441261e+20
buf = struct.pack(">f", input)
print ''.join("%x" % ord(c) for c in struct.unpack(">4c", buf) )
Output:
60dc9cc9
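For reference, a Python 3 version of the same snippet (a sketch; iterating over a bytes object already yields integers, so ord() is no longer needed):

import struct

value = 1.2717441261e+20
buf = struct.pack(">f", value)
print(''.join("%02x" % b for b in buf))   # 60dc9cc9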
Try float.hex(input) if input is already a float.
Try float.hex(input). This should convert a number into a string representing the number in base 16, and it works with floats, unlike hex(). However, the string will begin with 0x and will contain 13 digits after the (hexadecimal) point, so I can't help you with the 8-digits part.
Source: http://docs.python.org/2/library/stdtypes.html#float.hex