How to convert hex to IEEE floating point in Python

I want to convert this hex string '8436d4ccd436d3333' to an IEEE floating-point value. I've tried to do this with struct.unpack, but it requires a string argument of length 4:
struct.unpack('>f', binascii.unhexlify('8436d999a436e0000'))
I'm using this website to verify whether my conversion attempts are correct: https://gregstoll.dyndns.org/~gregstoll/floattohex/ but I can't find a way to do this.
Thanks for any help

At a guess, each hex string contains two single-precision floating-point values, not one, and the initial 8 is part of whatever message protocol is being used, not part of either of those floats. With that guess, I get some plausible-looking numbers:
>>> struct.unpack('>ff', binascii.unhexlify('436d4ccd436d3333'))
(237.3000030517578, 237.1999969482422)
>>> struct.unpack('>ff', binascii.unhexlify('436d999a436e0000'))
(237.60000610351562, 238.0)
And to reinforce the plausibility, here's what I get by encoding the corresponding 1-digit-past-the-decimal-point values:
>>> binascii.hexlify(struct.pack('>ff', 237.3, 237.2))
b'436d4ccd436d3333'
>>> binascii.hexlify(struct.pack('>ff', 237.6, 238.0))
b'436d999a436e0000'
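If that guess is right, a minimal sketch of a decoder (assuming the leading hex digit really is protocol framing and the remaining 16 hex digits are two big-endian binary32 values) could look like this:
import binascii
import struct

def decode_pair(msg_hex):
    # Assumption: the first hex digit is framing; the remaining 8 bytes are two '>f' floats.
    payload = binascii.unhexlify(msg_hex[1:])
    return struct.unpack('>ff', payload)

print(decode_pair('8436d4ccd436d3333'))   # -> (237.3000030517578, 237.1999969482422)
print(decode_pair('8436d999a436e0000'))   # -> (237.60000610351562, 238.0)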

Related

How to convert HEX string to Float (Little Endian)

I just learned Python (3.x) and I am stuck on converting a hex string to floats. I have this hex string of values:
'0x22354942F31AFA42CE6A494311518A43082CAF437C6BD4C35F78FA433BF10F442A5222448D3D3544200749C438295C4468AF6E4406B4804450518A4423B0934450E99CC4'
And I want to turn it into float.
I have tried to use this code:
bs=bytes.fromhex(row[2:])
fmt = '<' + ('H' * (len(bs) // 2))
res=struct.unpack(fmt, bs)
and it gives me the result of 13602.0,16969.0,6899.0,17146.0,27342.0,17225.0,20753.0,17290.0,11272.0,17327.0,27516.0,50132.0,30815.0,17402.0,61755.0,17423.0,21034.0,17442.0,15757.0,17461.0,1824.0,50249.0,10552.0,17500.0,44904.0,17518.0,46086.0,17536.0,20816.0,17546.0,45091.0,17555.0,59728.0,50332.0
After checking it, I found out that what I currently have is floats in base 16, while I need base 32 (or maybe not, because I am not sure what base/format), with expected float results of 50.3018875, 125.052635, 201.4172, 276.633331, 350.344, 424.839722, 500.9404, 575.7692, 649.2838, 724.961731, 804.1113, 880.644043, 954.7407, 1029.62573, 106.541, 1181.50427, 1255.291, which are the values I got from this Calculator Converter.
What should I change in the coding to get the expected results?
Thank you.
Let's break things down here, because you seem to be confused a bit with all of the juggling of representations. You have some hexadecimal string (that's base 16 encoding) of some binary data. That's your 0x22354942F31AFA42CE6A494311.... You correctly identified that you can convert this from its encoded form to python bytes with bytes.fromhex:
hex_encoded = '0x22354942F31AFA42CE6A494311518A43082CAF437C6BD4C35F78FA433BF10F442A5222448D3D3544200749C438295C4468AF6E4406B4804450518A4423B0934450E99CC4'
binary_data = bytes.fromhex(hex_encoded[2:]) # we do 2: to remove the leading '0x'
At this point, unless we know how binary_data was constructed, we can't do anything. But we can take some guesses. You know the first few numbers are floating-point values: 50.3018875, 125.052635, 201.4172, .... Typically floats are encoded using the IEEE 754 standard, which provides three different encodings of a floating-point number: binary16/half precision (16 bits), binary32/single precision (32 bits), and binary64/double precision (64 bits). You can see these in the struct documentation as the format codes 'e', 'f', and 'd', respectively. We can try each to see which (if any) your binary data is encoded as. By trial and error, we discover your data was encoded as little-endian 32-bit floats, so you can decode them with:
import struct

FLOAT = 'f'
fmt = '<' + FLOAT * (len(binary_data) // struct.calcsize(FLOAT))
numbers = struct.unpack(fmt, binary_data)
print(numbers)
Why did what you tried not work? Well, you used the format code 'H', which is for an unsigned short. That is an integer type, which is why you were getting back numbers with no fractional part!
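For reference, the trial-and-error step can be written out as a small sketch (nothing here beyond the hex string from the question; the 'e' code needs Python 3.6+):
import struct

hex_encoded = '0x22354942F31AFA42CE6A494311518A43082CAF437C6BD4C35F78FA433BF10F442A5222448D3D3544200749C438295C4468AF6E4406B4804450518A4423B0934450E99CC4'
binary_data = bytes.fromhex(hex_encoded[2:])

for code in 'efd':                                   # half, single, double precision
    size = struct.calcsize(code)
    if len(binary_data) % size:
        continue                                     # this width doesn't divide the data evenly
    fmt = '<' + code * (len(binary_data) // size)
    print(code, struct.unpack(fmt, binary_data)[:3])
# 'f' is the one that reproduces 50.3018875, 125.052635, 201.4172, ...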

struct.unpack with precision after decimal points

I am reading data from a binary file; it contains floating-point data of which I want only the first 6 digits after the decimal point, but it's printing a pretty long string.
self.dataArray.append(struct.unpack("f", buf)[0])
I tried this:
self.dataArray.append(struct.unpack(".6f", buf)[0])
But it didn't work.
Thanks in advance
A float isn't a string, and a string isn't a float.
All a float is is a number of bytes interpreted as a whole-number part and a fractional part; you only need a precision when you turn it into a string for display:
the_float = struct.unpack("f", buf)[0]
print "The Float String %0.6f"%(the_float)

How to convert floating point number in python?

How do I convert a floating-point number to base-16 (8 hexadecimal digits per 32-bit float) in Python?
e.g.: input = 1.2717441261e+20, wanted output: 3403244E
If you want the byte values of the IEEE-754 representation, the struct module can do this:
>>> import struct
>>> f = 1.2717441261e+20
>>> struct.pack('f', f)
'\xc9\x9c\xdc`'
This is a string version of the bytes, which can then be converted into a string representation of the hex values:
>>> struct.pack('f', f).encode('hex')
'c99cdc60'
And, if you want it as a hex integer, parse it as such:
>>> s = struct.pack('f', f).encode('hex')
>>> int(s, 16)
3382500448
To display the integer as hex:
>>> hex(int(s, 16))
'0xc99cdc60'
Note that this does not match the hex value in your question -- if your value is the correct one you want, please update the question to say how it is derived.
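The snippets above are Python 2, where struct.pack returns a str with an encode('hex') codec. On Python 3, struct.pack returns bytes, which have a .hex() method that gives the same result:
>>> import struct
>>> struct.pack('<f', 1.2717441261e+20).hex()
'c99cdc60'
>>> hex(int(struct.pack('<f', 1.2717441261e+20).hex(), 16))
'0xc99cdc60'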
There are several possible ways to do so, but none of them leads to the result you wanted.
You can encode this float value into its IEEE binary representation. This does indeed give a 32-bit number (if you do it with single precision), but it leads to results different from yours, no matter which endianness I assume:
import struct
struct.pack("<f", 1.2717441261e+20).encode("hex")
# -> 'c99cdc60'
struct.pack(">f", 1.2717441261e+20).encode("hex")
# -> '60dc9cc9'
struct.unpack("<f", "3403244E".decode("hex"))
# -> (687918336.0,)
struct.unpack(">f", "3403244E".decode("hex"))
# -> (1.2213533295835077e-07,)
As that didn't fit result-wise either, I'll take the suggestion from the other answers and include it here:
float.hex(1.2717441261e+20)
# -> '0x1.b939919e12808p+66'
That has nothing to do with 3403244E either, so maybe you want to clarify what exactly you mean.
There are surely other ways to do this conversion, but unless you specify which method you want, no one is likely to be able to help you.
There is something wrong with your expected output:
import struct
input = 1.2717441261e+20
buf = struct.pack(">f", input)
print ''.join("%02x" % ord(c) for c in struct.unpack(">4c", buf))  # %02x zero-pads bytes below 0x10
Output :
60dc9cc9
Try float.hex(input) if input is already a float.
Try float.hex(input). This should convert a number into a string representing the number in base 16, and works with floats, unlike hex(). The string will begin with 0x however, and will contain 13 digits after the decimal point, so I can't help you with the 8 digits part.
Source: http://docs.python.org/2/library/stdtypes.html#float.hex
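For completeness, float.hex() has an inverse, float.fromhex(), so the representation round-trips exactly (the hex string below is the one shown in the previous answer; note it is the 64-bit double form, not the 8-digit 32-bit form asked about):
>>> f = 1.2717441261e+20
>>> f.hex()
'0x1.b939919e12808p+66'
>>> float.fromhex('0x1.b939919e12808p+66') == f
True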

How to avoid rounding float numbers in python?

I have some JSON objects, and when I convert them into a dictionary in Python the values get rounded:
-112.07393329999999 -> -112.0739333
My code is:
import json

for line in open("c:\\myfile", "r+").readlines():
    d = json.loads(line)
    logtv = d['logtv']
It's just a more compact representation of the same (binary, 64-bit) number.
In [12]: -112.07393329999999 == -112.0739333
Out[12]: True
In [17]: struct.pack('d', -112.0739333)
Out[17]: 've\xbcR\xbb\x04\\\xc0'
In [18]: struct.pack('d', -112.07393329999999)
Out[18]: 've\xbcR\xbb\x04\\\xc0'
If you want this exact decimal, there's no way to represent it as a number in JSON. You'll have to use strings in your JSON, Decimals (or strings) in your Python, and decimal fields in your DB.
SQL Server Floats are at most 64-bit (53-bit significand) as well.
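A minimal sketch of the string-in-JSON approach (the logtv field comes from the question; the literal JSON line is just an example):
import json
from decimal import Decimal

line = '{"logtv": "-112.07393329999999"}'   # value shipped as a string, not a number
d = json.loads(line)
logtv = Decimal(d['logtv'])                 # exact decimal, no binary-float rounding
print(logtv)                                # -112.07393329999999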
The answer is: don't use floats. In most languages single-precision floats only have about 7 significant decimal digits, and doubles about 15-16 (note Python floats are doubles). Use decimals if you know the exact precision. For JSON, send the value as a string or as an int with an implied decimal point.
Soap box: floats are very much overused. Floats should not be used for anything that you wouldn't represent with scientific notation, as that is what they really are underneath.
Note: databases do not usually use floating-point numbers; they use fixed-point numbers, which is exactly what a decimal is.
Clarification
Before you write the JSON file, do something like:
import json
from decimal import Decimal

with open("c:\\myfile", "w") as my_file:
    for key in d:
        if isinstance(d[key], Decimal):
            d[key] = str(d[key])
    my_file.write(json.dumps(d))
Then, when reading the JSON back, you can put the value into the database as-is, or convert it back to a Decimal if you need to work with it further.
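And if the JSON you receive already stores the value as a number, you can still skip the float round-trip on the reading side by giving json.loads a parse_float hook (a sketch; the file is assumed to hold one JSON object per line, as in the question):
import json
from decimal import Decimal

with open("c:\\myfile", "r") as my_file:
    for line in my_file:
        d = json.loads(line, parse_float=Decimal)   # numeric floats come back as Decimal
        logtv = d['logtv']                          # e.g. Decimal('-112.07393329999999')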

Problems with decimals and scientific notation in Python 2.6.6

I'm having difficulty with decimal values that I need to use for arithmetic in some cases and as strings in others. Specifically I have a list of rates, ex:
rates=[0.1,0.000001,0.0000001]
And I am using these to specify the compression rates for images. I need to initially have these values as numbers because I need to be able to sort them to make sure they are in a specific order. I also want to be able to convert each of these values to strings so I can 1) embed the rate into the filename and 2) log the rates and other details in a CSV file. The first problem is that any float with more than 6 decimal places is in scientific format when converted to a string:
>>> str(0.0000001)
'1e-07'
So I tried using Python's Decimal module but it is also converting some floats to scientific notation (seemingly contrary to the docs I've read). Ex:
>>> Decimal('1.0000001')
Decimal('1.0000001')
# So far so good, it doesn't convert to scientific notation with 7 decimal places
>>> Decimal('0.0000001')
Decimal('1E-7')
# Scientific notation, back where I started.
I've also looked into string formatting as suggested in multiple posts, but I've not had any luck. Any suggestions and pointers are appreciated by this Python neophyte.
You have to specify the string format then:
["%.8f" % (x) for x in rates]
This yields ['0.10000000', '0.00000100', '0.00000010']. Works with Decimal, too.
'{0:f}'.format(Decimal('0.0000001'))
The above should work for you
See % formatting, especially the floating point conversions:
'e' Floating point exponential format (lowercase). (3)
'E' Floating point exponential format (uppercase). (3)
'f' Floating point decimal format. (3)
'F' Floating point decimal format. (3)
'g' Floating point format. Uses lowercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. (4)
'G' Floating point format. Uses uppercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. (4)
An example, using f format.
>>> ["%10.7f" %i for i in rates]
[' 0.1000000', ' 0.0000010', ' 0.0000001']
>>>
You can also use the newer (starting 2.6) str.format() method:
>>> ['{0:10.7f}'.format(i) for i in rates]
[' 0.1000000', ' 0.0000010', ' 0.0000001']
>>>
Using f-strings:
>>> rates = [0.1, 0.000001, 0.0000008]
>>> [f'{r:.7f}' for r in rates]
['0.1000000', '0.0000010', '0.0000008']
The string format {r:.7f} indicates the number of digits used after the decimal point, which in this case is 7.
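Tying this back to the original use case, here is a small sketch that keeps the rates as numbers for sorting and only formats them when building a filename (the naming scheme is made up for illustration):
rates = [0.1, 0.000001, 0.0000001]

for rate in sorted(rates, reverse=True):          # sort numerically first
    rate_str = '%.7f' % rate                      # '0.1000000', '0.0000010', '0.0000001'
    filename = 'compressed_%s.jp2' % rate_str     # hypothetical naming scheme
    print(filename)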
