I am interacting with an API that returns floats. I am trying to calculate the number of decimal places with which the API created these floats.
For example:
# API returns the following floats.
>> 0.0194360600000000015297185740.....
>> 0.0193793800000000016048318230.....
>> 0.0193793699999999999294963970.....
# Quite clearly these are supposed to represent:
>> 0.01943606
>> 0.01937938
>> 0.01937937
# And are therefore ACTUALLY accurate to only 8 decimal places.
How can I identify that the floats are actually accurate to 8 decimal places? Once I do that, I can initialize a decimal.Decimal instance with the "true" values rather than the inaccurate floats.
Edit: The number of accurate decimal places returned by the API varies and is not always 8!
If you are using Python 2.7 or Python 3.1+, consider using the repr() builtin.
Here's how it works with your examples in a Python 3.6 interpreter.
>>> repr(0.0194360600000000015297185740)
'0.01943606'
>>> repr(0.0193793800000000016048318230)
'0.01937938'
>>> repr(0.0193793699999999999294963970)
'0.01937937'
This works because repr() gives the shortest representation of the number n that still satisfies float(repr(n)) == n.
Given the string representation returned by repr(), you can count the number of digits to the right of the decimal point.
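For example, here is a minimal sketch of that approach (the helper name decimal_places is just for illustration) that counts the digits after the decimal point in the round-trip representation and feeds that string to Decimal:
from decimal import Decimal

def decimal_places(x):
    s = repr(x)          # shortest string that round-trips, e.g. '0.01943606'
    if '.' not in s or 'e' in s or 'E' in s:
        return 0         # integers and scientific notation are not handled here
    return len(s.split('.')[1])

value = 0.0194360600000000015297185740   # parses to the same float as 0.01943606
print(decimal_places(value))             # 8
print(Decimal(repr(value)))              # 0.01943606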
Related
I'm starting with Python and I recently came across a dataset with big values.
One of my fields has a list of values that looks like this: 1.3212724310201994e+18 (note the e+18 by the end of the number).
How can I convert it to a floating-point number and remove the exponent without affecting the value?
First of all, the number is already a floating point number, and you do not need to change this. The only issue is that you want to have more control over how it is converted to a string for output purposes.
By default, floating point numbers above a certain size are converted to strings using exponential notation (with "e" representing "*10^"). However, if you want to convert it to a string without exponential notation, you can use the f format specifier, for example:
a = 1.3212724310201994e+18
print("{:f}".format(a))
gives:
1321272431020199424.000000
or using "f-strings" in Python 3:
print(f"{a:f}")
here the first f tells it to use an f-string and the :f is the floating point format specifier.
You can also specify the number of decimal places that should be displayed, for example:
>>> print(f"{a:.2f}") # 2 decimal places
1321272431020199424.00
>>> print(f"{a:.0f}") # no decimal places
1321272431020199424
Note that the internal representation of a floating-point number in Python uses 53 binary digits of accuracy (approximately one part in 10^16), so in this case, the value of your number of magnitude approximately 10^18 is not stored with accuracy down to the nearest integer, let alone any decimal places. However, the above gives the general principle of how you control the formatting used for string conversion.
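To make that concrete, here is a quick check (assuming CPython's 64-bit doubles; math.ulp requires Python 3.9+) showing that adjacent representable values at this magnitude are 256 apart, so the trailing digits printed by :f are not meaningful:
import math

a = 1.3212724310201994e+18
print(a + 1 == a)   # True: adding 1 cannot change the stored value
print(math.ulp(a))  # 256.0: the gap between adjacent floats at this magnitude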
You can use Decimal from the decimal module for each element of your data:
from decimal import Decimal
s = 1.3212724310201994e+18
print(Decimal(s))
Output:
1321272431020199424
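One caveat worth noting: Decimal(float) converts the binary value exactly, which is why the output above ends in ...424 rather than the digits of the literal. If your data arrives as text, passing the string to Decimal preserves the written digits instead; a small sketch:
from decimal import Decimal

s = 1.3212724310201994e+18
print(Decimal(s))                          # 1321272431020199424 (exact binary value)
print(Decimal("1.3212724310201994e+18"))   # 1.3212724310201994E+18 (digits as written)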
I am trying to return a number with 6 decimal places, regardless of what the number is.
For example:
>>> a = 3/6
>>> a
0.5
How can I take a and make it 0.500000 while preserving its type as a float?
I've tried
'{0:.6f}'.format(a)
but that returns a string. I'd like something that accomplishes this same task, but returns a float.
In the computer's memory, the float is stored as an IEEE 754 value: a bunch of binary data in a fixed format that looks nothing like the string of the number as you write it.
So when you manipulate it, it is still a float and has no notion of a number of decimals after the dot. It only gets one when you display it, and displaying it always means converting it to a string.
It is during that conversion to a string that you can specify how many decimals to show, and you do it with the string format you already wrote.
This question shows a slight misunderstanding on the nature of data types such as float and string.
A float in a computer has a binary representation, not a decimal one. The rendering to decimal that python is giving you in the console was converted to a string when it was printed, even if it's implicit by the print function. There is no difference between how a 0.5 and 0.5000000 is stored as a float in its binary representation.
When you are writing application code, it is best not to worry about the presentation until it gets to the end user where it must, somehow, be converted to a string if only implicitly. At that point you can worry about decimal places, or even whether you want it shown in decimal at all.
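A short illustration of that point, using the formatting from the question:
a = 3 / 6                 # 0.5 (Python 3 division)
s = '{0:.6f}'.format(a)   # a string: '0.500000'
print(s)                  # 0.500000
print(float(s) == a)      # True: converting back gives the same float;
                          # the extra zeros exist only in the string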
I have a list called scores of varying -log probabilities.
when I call this function:
maxState = scores.pop(scores.index(max(scores)))
and print maxState, I realize that the maxState loses its precision as a float. Is there a way I can get the maxState without losing precision?
ex: I print out the list scores: [-35.7971525669589, -34.67875545008369]
and print maxState, I get this: -34.6787554501
(You can see it's rounded)
You are confusing string presentation with actual contents. No precision is lost anywhere; only the string produced for your console uses a rounded value rather than showing you all the digits. And always remember that float numbers are binary approximations, not exact decimal values.
Python floats are formatted differently by the str() and repr() functions; in a list or other container, repr() is used, but when you print a float directly, str() is used.
If you don't like either option, format it explicitly with the format() function, specifying a precision:
print format(maxState, '.12f')
to print it with 12 decimals, for example.
Demo:
>>> maxState = -34.67875545008369
>>> repr(maxState)
'-34.67875545008369'
>>> str(maxState)
'-34.6787554501'
>>> format(maxState, '.8f')
'-34.67875545'
>>> format(maxState, '.12f')
'-34.678755450084'
The repr() output is roughly equivalent to using '.17g' as the format, while str() is equivalent to '.12g'; for the g specifier the precision is the number of significant digits, and the result switches between floating point notation (f) and scientific notation (e) depending on the exponent.
I say roughly because repr() aims to give you round-trippable output; see the change notes for Python 3.1 on float() representation, which were backported to Python 2.7:
What is new is how the number gets displayed. Formerly, Python used a simple approach. The value of repr(1.1) was computed as format(1.1, '.17g') which evaluated to '1.1000000000000001'. The advantage of using 17 digits was that it relied on IEEE-754 guarantees to assure that eval(repr(1.1)) would round-trip exactly to its original value. The disadvantage is that many people found the output to be confusing (mistaking intrinsic limitations of binary floating point representation as being a problem with Python itself).
The new algorithm for repr(1.1) is smarter and returns '1.1'. Effectively, it searches all equivalent string representations (ones that get stored with the same underlying float value) and returns the shortest representation.
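You can reproduce the behaviour described in those notes directly; a quick sketch that works the same on Python 2.7 and 3.x:
x = 1.1
print(format(x, '.17g'))    # 1.1000000000000001  (the old 17-digit style)
print(repr(x))              # 1.1                 (shortest round-tripping form)
print(float(repr(x)) == x)  # True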
I'm wondering what causes this behaviour. I haven't been able to find an answer that covers this. It is probably something simple and obvious, but it is not to me. I am using python 2.7.3 in Ubuntu.
In [1]: 2 == 1.9999999999999999
Out[1]: True
In [2]: 2 == 1.999999999999999
Out[2]: False
EDIT:
To clarify my question: is there a documented maximum number of 9's for which Python will evaluate the expression above as being equal to 2?
Python uses floating point representation
What a floating point value actually is, is a fixed-width binary number (called the "significand") plus a small integer that tells you how many powers of two to shift that value by (the "exponent"), plus a sign bit. Just like scientific notation, but in base 2 instead of 10.
The closest 64 bit floating point value to 1.9999999999999999 is 2.0, because 64 bit floating point values (so-called "double precision") use a 53-bit significand, which is equivalent to roughly 15-16 significant decimal digits. So the literal 1.9999999999999999 is just another way of writing 2.0. However, the closest value to 1.999999999999999 is less than 2.0 (I think it's 1.9999999999999988897769753748434595763683319091796875 exactly, but I'm too lazy to check that's correct; I'm just relying on Python's formatting code to be exact).
I don't actually know whether the use specifically of 64 bit floats is required by the Python language, or whether it is an implementation detail of CPython. But whatever size is used, the important thing is not the number of decimal places as such, but how close the nearest floating-point value of that size lies to your decimal literal. It will be closer for some literals than for others.
Hence, 1.9999999999999999 == 2 for the same reason that 2.0 == 2 (Python allows mixed-type numeric operations including comparison, and the integer 2 is equal to the float 2.0). Whereas 1.999999999999999 != 2.
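A quick way to see this in the interpreter (float.hex exposes the underlying bit pattern, assuming 64-bit doubles):
print((1.9999999999999999).hex())   # 0x1.0000000000000p+1
print((2.0).hex())                  # 0x1.0000000000000p+1 -- the same stored value
print((1.999999999999999).hex())    # a different, slightly smaller value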
Type coercion
>>> 2 == 2.0
True
And a consequence of the maximum number of decimal digits that can be faithfully represented by a Python float:
>>> import sys
>>> sys.float_info.dig
15
>>> 1.9999999999999999
2.0
More from the docs:
>>> float('9876543211234567')
9876543211234568.0
Note the ..68 at the end instead of the expected ..67.
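The cutoff comes from the 53-bit significand; a brief check (assuming CPython's 64-bit doubles):
import sys

print(sys.float_info.mant_dig)   # 53: bits in the significand
print(2 ** 53)                   # 9007199254740992: beyond this, not every
                                 # integer has an exact float representation
print(float(9876543211234567))   # 9876543211234568.0: rounded to the nearest
                                 # representable value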
This is due to the way floats are implemented in Python. To keep it short and simple: since floats are almost always an approximation and thus carry more digits than most people find useful, the Python interpreter displays a rounded value.
In more detail, floats are stored in binary. This means they are stored as fractions to the base 2, unlike decimal notation, where you write a number as fractions to the base 10. Most decimal fractions have no exact representation in binary, so they are stored rounded to a precision of 53 bits. This can produce surprising results when you expect exact decimal behaviour, e.g.:
>>> 0.1 + 0.2
0.30000000000000004
>>> round(2.675, 2)
2.67
See The docs on floats as well.
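If you want to see the exact value that actually gets stored, Decimal(float) converts the binary value without any rounding; a small illustration:
from decimal import Decimal

print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625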
Mathematically speaking, 2.0 does equal 1.9999... forever. They are two different ways of writing the same number.
However, in software, it's important to never compare two floats or decimals for equality - instead, subtract them, take the absolute value, and verify that the (always positive) difference is sufficiently low for your purposes.
E.g.:
if abs(value1 - value2) < 1e-10:   # 1e-10 is an illustrative tolerance
    pass  # they are close enough
else:
    pass  # they are not
You probably should set EPSILON = 1e-10 and use the symbolic constant instead of scattering 1e-10 throughout your code, or better still use a comparison function.
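If you are on Python 3.5 or newer, math.isclose is such a comparison function: it applies a relative tolerance (plus an optional absolute one), so you don't have to pick an epsilon by hand:
import math

print(math.isclose(2.0, 1.9999999999999999))   # True (they are the same float anyway)
print(0.1 + 0.2 == 0.3)                        # False
print(math.isclose(0.1 + 0.2, 0.3))            # True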
Using Python 2.7 here.
Can someone explain why this happens in the shell?
>>> 5.2-5.0
0.20000000000000018
Searching yielded things about different scales of numbers not producing the right results (a very small number and a very large number), but that seemed pretty general, and considering the numbers I'm using are of the same scale, I don't think that's why this happens.
EDIT: I suppose I didn't define that the "this thing happening" I meant was that it returns 0.2 ... 018 instead of simply resulting in 0.2. I get that print rounds, and removed the print part in the code snippet, as that was misleading.
You need to understand that 5.2-5.0 really is 0.20000000000000018, not 0.2. The standard explanation for this is found in What Every Computer Scientist Should Know About Floating-Point Arithmetic.
If you don't want to read all of that, just accept that 5.2, 5.0, and 0.20000000000000018 are all approximations, as close as the computer can get to the numbers you really want.
Python has some tricks to allow you to not know what every computer scientist should know and still get away with it. The main trick is that str(f)—that is, the human-readable rendition of a floating-point number—is truncated to 12 significant digits, so str(5.2-5.0) is "0.2", not "0.20000000000000018". But sometimes you need all the precision you can get, so repr(f)—that is, the machine-readable rendition—is not truncated, so repr(5.2-5.0) is "0.20000000000000018".
Now the only thing left to understand is what the interpreter shell does. As Ashwini Chaudhary explains, just evaluating something in the shell prints out its repr, while the print statement prints out its str.
The shell uses repr():
In [1]: print repr(5.2-5.0)
0.20000000000000018
In [2]: print str(5.2-5.0)
0.2
In [3]: print 5.2-5.0
0.2
The default implementation of float.__str__ limits the output to 12 significant digits.
Thus, the least significant digits are dropped and what is left is the value 0.2.
To print more digits (if available), use string formatting:
print '%f' % result # prints 0.200000
That defaults to 6 digits, but you can specify more precision:
print '%.16f' % result # prints 0.2000000000000002
Alternatively, Python offers a newer string formatting method too:
print '{0:.16f}'.format(result) # prints 0.2000000000000002
Why Python produces the 'imprecise' result in the first place has everything to do with the imprecise nature of floating point arithmetic. Use the decimal module instead if you need more predictable precision:
>>> from decimal import *
>>> getcontext().prec = 1
>>> Decimal(5.2) - Decimal(5.0)
Decimal('0.2')
Python has two different ways of converting an object to a string, the __str__ and __repr__ methods. __str__ is meant to be a normal string output and is used by print; __repr__ is meant to be a more exact representation and is what is displayed when you don't use print, or when you print the contents of a list or dictionary. __str__ rounds floating-point values.
As for why the actual result of the subtraction is 0.20000000000000018 rather than 0.2 exactly, it has to do with the internal representation of floating point. It's impossible to represent 5.2 exactly because it's an infinitely repeating binary number. The closest that you can come is approximately 5.20000000000000018.
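You can inspect those stored values exactly with Decimal(float), which converts the binary value without rounding:
from decimal import Decimal

print(Decimal(5.2))
# 5.20000000000000017763568394002504646778106689453125
print(Decimal(5.2 - 5.0))
# 0.20000000000000017763568394002504646778106689453125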