I would like to write a script in Python that takes a float value, for example -37.32, and outputs its IEEE 754 representation (11000010000101010100011110101110). For the first bit, if the number is negative then it is 1, else it's 0. For the exponent, it's a matter of repeatedly dividing by 2 and taking the remainder, if I am correct. As for the mantissa, I have no idea how to calculate it. Returning a string seems the most reasonable approach, since it makes constructing each element of the IEEE representation easier.
Can someone help me with the approach I should take, or show a script tackling this problem and explain it to me?
The struct module can be used to convert a float to a sequence of bytes. Then it's just a matter of converting each byte to a binary string and joining them together.
>>> import struct
>>> ''.join('{:08b}'.format(b) for b in struct.pack('f', -37.32))
'10101110010001110001010111000010'
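Note that 'f' uses the machine's native byte order, so on little-endian hardware the bytes (and therefore the bit string) come out reversed relative to the representation in the question. Forcing big-endian order with '>f' should reproduce the expected string:
>>> ''.join('{:08b}'.format(b) for b in struct.pack('>f', -37.32))
'11000010000101010100011110101110'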
I'm stuck on how to write a function which takes as input either a
large string
large integer
and transforms it into a float.
The problem is that when I transform it into a float, it is missing characters.
#value= int(45264444.4444444444466666254188888888888526)
value= "45264444.4444444444466666254188888888888526"
float(value)
output: 45264444.44444445
A float cannot store arbitrary numbers; its precision is limited.
A float will store the few most significant digits of a number (in binary form) and discard the rest.
That is why you see the value rounded off.
For a more in-depth look at the problem, see for example https://0.30000000000000004.com/, or some general guide on floating-point arithmetic.
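If you need to keep every digit, the decimal module stores the number exactly as given (a minimal sketch; Decimal preserves whatever precision the input string carries):
>>> from decimal import Decimal
>>> Decimal("45264444.4444444444466666254188888888888526")
Decimal('45264444.4444444444466666254188888888888526')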
I'd like to calculate ⌊2^1918 · π⌋ + 124476 in Python, but I get this error when I do it using the following code:
import math

b = math.floor((2**1918) * math.pi) + 124476
print(b)
OverflowError: int too large to convert to float
How can you get this to work? In the end I'd just like to have it all as hexadecimal (if that helps with answering my question), but I was actually only trying to get it as an integer first :)
The right solution really depends on how precise the result needs to be. Since 2^1918 is already far too large for a standard floating-point container, it is not possible to get away with the direct calculation without losing all the precision below ~ 10^300.
In order to compute the desired result, you should use arbitrary-precision calculation techniques. You can implement the algorithms yourself or use one of the available libraries.
Assuming you are looking for the integer part of your expression, it will take about 600 decimal places to store the result precisely. Here is how you can get it using mpmath:
from mpmath import mp
mp.dps = 600
print(mp.floor(mp.power(2, 1918)*mp.pi + 124476))
74590163000744215664571428206261183464882552592869067139382222056552715349763159120841569799756029042920968184704590129494078052978962320087944021101746026347535981717869532122259590055984951049094749636380324830154777203301864744802934173941573749720376124683717094961945258961821638084501989870923589746845121992752663157772293235786930128078740743810989039879507242078364008020576647135087519356182872146031915081433053440716531771499444683048837650335204793844725968402892045220358076481772902929784589843471786500160230209071224266538164123696273477863853813807997663357545.0
Next, all you have to do is convert it to a hex representation (or extract the hex from its internal binary form), which is a matter for another subject :)
The basic problem is what the message says. Python integers can be arbitrarily large, larger even than the range of a float. 2**1918 in decimal contains 578 significant digits and is way bigger than the biggest float your IEEE754 hardware can represent. So the call just fails.
You could try looking at the mpmath module. It is designed for floating point arithmetic outside the bounds of what ordinary hardware can handle.
I think the problem can be solved without resorting to high-precision arithmetic. floor(x + m), where m is an integer, equals floor(x) + m. So in this case you are looking for floor(2**1918 * pi) plus the integer 124476. And floor(2**k * pi) is just the first k + 2 bits of pi (the binary expansion of pi starts 11.0010...). So just look up the first 1920 bits of pi, add the bits for 124476, and output as hex digits.
A spigot algorithm can generate digits of pi without using arbitrary precision. A quick web search turns up some Python implementations for generating digits in base 10. I didn't see anything about base 2, but the Bailey-Borwein-Plouffe formula generates base-16 digits, if I am not mistaken.
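As a quick sanity check of the bit-counting claim on a small exponent (a sketch; mpmath merely stands in for a lookup table of pi's bits, it is not needed for the idea itself):
>>> from mpmath import mp
>>> mp.prec = 60
>>> int(mp.floor(mp.pi * 2**10))   # floor(2**10 * pi)
3216
>>> bin(3216)                      # 12 bits: the first 10 + 2 bits of pi
'0b110010010000'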
The problem is that (2**1918) * math.pi attempts to convert the integer to a 64-bit float, whose range is insufficient. You can convert math.pi to a fraction to use arbitrary precision.
>>> import fractions, math
>>> math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
74590163000744212756918704280961225881025315246628098737524697383138220762542289800871336766911957454080350508173317171375032226685669280397906783245438534131599390699781017605377332298669863169044574050694427882869191541933796848577277592163846082732344724959222075452985644173036076895843129191378853006780204194590286508603564336292806628948212533561286572102730834409985441874735976583720122784469168008083076285020654725577288682595262788418426186598550864392013191287665258445673204426746083965447956681216069719524525240073122409298640817341016286940008045020172328756796
Note that arbitrary precision applies to the calculation; math.pi is defined only with 64-bit floating point precision. Use an external library, such as mpmath, if you need the exact value.
To convert this to a hexadecimal string, use hex or a string format:
>>> hex(math.floor((2**1918) * fractions.Fraction(math.pi) + 124476))
'0xc90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> '%x' % math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
'c90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> f'{math.floor((2**1918) * fractions.Fraction(math.pi) + 124476):X}'
'C90FDAA22168C0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001E63C'
For string formats, x produces lower-case hex whereas X produces upper-case.
I am struggling with a number-format problem in Python 3.6. My goal is to convert binary data from a file into printable decimal numbers. As an example, I need to convert two little-endian bytes in the form of a byte string...
b'\x12\00'
into its big-endian binary form...
0000000000010010
and finally to its 16-bit fixed-point Q15 decimal number form...
(1 / 4096) + (1 / 16384) = 0.00030517578 (basically, we've made the 2 bytes above human-readable)
In my failed attempts, the struct.unpack function seemed promising, but my low-level / number representation experience just isn't very mature at the moment.
Failed Attempt:
struct.unpack('<h', b'\x12\x00') # Yields (18,)
The above code gets me "18", which would be fine if the bytes represented an integer, but they do not.
Any help / advice would be appreciated. Thank you!
Answered by @jasonharper in the question comments:
The bytes do represent an integer - which has been shifted by 15 bits. Divide by 32768 (2**15) to get the actual Q15 value. (This doesn't match the value you calculated, but that's because you did the math wrong - the two set bits actually have place values of 1/2048 and 1/16384.)
I achieved the proper value via the following code:
struct.unpack('<h', b'\x12\x00')[0] / (2**15)
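For reference, the whole pipeline from the byte string to the Q15 value looks like this (a sketch; '<h' reads the two bytes as a signed little-endian 16-bit integer):
>>> import struct
>>> raw = struct.unpack('<h', b'\x12\x00')[0]
>>> '{:016b}'.format(raw)   # the big-endian binary form
'0000000000010010'
>>> raw / (2**15)           # shift the binary point 15 places left
0.00054931640625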
I have been working on a program in which I am trying to take a big binary number (as a string) and pack it into a file. I have tried for days to make this work. Here is the code I have written to pack the large binary string.
binaryRecieved="11001010101....(Shortened)"
f=open(fileName,'wb')
m=long(binaryRecieved,2)
struct.pack('i',m)
f.write(struct.pack('i',m))
f.close()
quit()
I am left with the error
struct.pack('i',x)
struct.error: integer out of range for 'i' format code
My integer is out of range, so I was wondering if there is a different way of going about this.
Thanks
Convert your bit string to a byte string: see for example this question Converting bits to bytes in Python. Note that struct.pack('c', ...) only packs a single byte; once you have a byte string, you can write it to the file directly, no struct needed.
For encoding m in big-endian order (like "ten" being written as "10" in normal decimal use) use:
def as_big_endian_bytes(i):
    out = bytearray()
    while i:
        out.append(i & 0xff)  # take the lowest byte
        i = i >> 8            # shift the next byte down
    out.reverse()             # most significant byte first
    return out
For encoding m in little-endian order (like "ten" being written as "01" in normal decimal use) use:
def as_little_endian_bytes(i):
    out = bytearray()
    while i:
        out.append(i & 0xff)  # lowest byte first
        i = i >> 8
    return out
Both functions work on numbers - as your code does - so the returned bytearray may be shorter than expected (because leading zeroes do not matter for a number).
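For example (hypothetical input 258 = 0x0102):
>>> as_big_endian_bytes(258)
bytearray(b'\x01\x02')
>>> as_little_endian_bytes(258)
bytearray(b'\x02\x01')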
For an exact representation of a binary-digit string (which is only possible if its length is divisible by 8) you would have to do:
def as_bytes(s):
    assert len(s) % 8 == 0
    out = bytearray()
    for i in range(0, len(s), 8):     # step through the string 8 bits at a time
        out.append(int(s[i:i+8], 2))  # parse each 8-bit chunk as one byte
    return out
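For example (hypothetical 16-bit input, i.e. two 8-bit chunks):
>>> as_bytes('1100101010110001')
bytearray(b'\xca\xb1')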
In struct.pack you have used 'i', which represents a 4-byte integer, so its range is limited. 'q' extends this to 8 bytes, but struct has no format code for an arbitrarily long Python integer, and packing one as a double with 'd' would silently lose precision.
See Python struct for more information.
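For completeness, on Python 3 an arbitrarily long integer can be serialized without struct at all via int.to_bytes (a sketch; like the number-based functions above, sizing by bit_length drops leading zero bytes, so use len(binaryRecieved) // 8 as the length if the string's width matters):
>>> m = int('11001010', 2)
>>> m.to_bytes((m.bit_length() + 7) // 8, 'big')
b'\xca'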
So I have a list of tuples of two floats each. Each tuple represents a range. I am going through another list of floats, which represent values to be fitted into the ranges. All of these floats are < 1 but positive, so precision matters. One of my tests to determine whether a value fits into a range is failing when it should pass. If I print the value and the range that is causing problems, I can tell this much:
curValue = 0.00145000000671
range = (0.0014500000067055225, 0.0020968749796738849)
The conditional that is failing is:
if curValue > range[0] and ... blah :
# do some stuff
From the values given by curValue and range, the test should clearly pass (don't worry about what is in the conditional). Now, if I print explicitly what the value of range[0] is, I get:
range[0] = 0.00145000000671
Which would explain why the test is failing. So my question, then, is: why does the float appear to change when it is accessed? It shows one precision when printed as part of a tuple and a different precision when printed on its own. Why would this be? What can I do to ensure my data maintains a consistent amount of precision across my calculations?
The float doesn't change. The built-in numeric types are all immutable. The cause of what you're observing is that:
print range[0] uses str on the float, which (up until very recent versions of Python) printed fewer digits of a float.
Printing a tuple (be it with repr or str) uses repr on the individual items, which gives a much more accurate representation (again, this is no longer true in recent releases, which use a better algorithm for both).
As for why the condition doesn't work out the way you expect, it's probably the usual culprit, the limited precision of floats. Try print repr(curValue), repr(range[0]) to see what Python decided was the closest possible representation of your float literal.
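To see the str/repr difference directly (a sketch under Python 2.x, where the question's output was produced; on Python 3.1+ both give the shortest round-tripping form):
>>> x = 0.0014500000067055225
>>> print x         # str: about 12 significant digits
0.00145000000671
>>> print repr(x)   # repr: enough digits to round-trip
0.0014500000067055225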
On modern PCs floats aren't arbitrarily precise. Even if you enter pi as a constant to 100 decimals, only a few of them are stored accurately. The same is happening to you. A Python float is a 64-bit IEEE 754 double with a 53-bit mantissa, which limits your precision (and in unexpected ways, because the limit is in base 2).
Please note that 0.00145000000671 isn't the exact value as stored by Python. Python only displays a few decimals of the complete stored float if you use print. If you want to see exactly how Python stores the float, use repr.
If you want better precision use the decimal module.
It isn't changing per se. Python is doing its best to store the data as a float, but the number is too precise for a float, so Python rounds it before it is ever accessed (in the very process of storing it). Funny how something so small is such a big pain.
You need to use an arbitrary-precision module such as Simple Python Fixed Point, or the decimal module.
Not sure it would work in this case, because I don't know whether Python is limiting the output or the storage itself, but you could try:
if curValue - range[0] > 0 and...