This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
I have the following list
alist = [0.141, 0.29, 0.585, 0.547, 0.233]
and I find the cumulative summation with
np.cumsum(alist)
array([ 0.141, 0.431, 1.016, 1.563, 1.796])
When I convert this array to a list, some decimal values show up. How can I avoid these decimals?
list(np.cumsum(alist))
[0.14099999999999999,
0.43099999999999994,
1.016,
1.5630000000000002,
1.7960000000000003]
This may be a duplicate, but I couldn't find the answer.
It's important to understand that floating point numbers are not stored in base 10 decimal format.
Therefore, you have to be crystal clear why you want to remove "the extra decimal places" that you see:
1. Change the formatting to make the numbers look prettier / consistent.
2. Work with precision as if you were performing base-10 arithmetic.
If the answer is (1), then use np.round.
If the answer is (2), then use Python's decimal module.
The below example demonstrates that np.round does not change the underlying representation of floats.
import numpy as np
alist = [0.141, 0.29, 0.585, 0.547, 0.233]
lst = np.round(np.cumsum(alist), decimals=3)
print(lst)
# [ 0.141 0.431 1.016 1.563 1.796]
np.set_printoptions(precision=20)
print(lst)
# [ 0.14099999999999998646 0.43099999999999999423 1.01600000000000001421
# 1.56299999999999994493 1.79600000000000004086]
np.set_printoptions(precision=3)
print(lst)
# [ 0.141 0.431 1.016 1.563 1.796]
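For option (2), a minimal sketch with the decimal module could look like the following, keeping the inputs as strings so they start out exact and using itertools.accumulate in place of np.cumsum:
from decimal import Decimal
from itertools import accumulate

alist = ['0.141', '0.29', '0.585', '0.547', '0.233']
# Decimal('0.141') is exactly 0.141, unlike the float literal 0.141
totals = list(accumulate(Decimal(x) for x in alist))
print(totals)
# [Decimal('0.141'), Decimal('0.431'), Decimal('1.016'), Decimal('1.563'), Decimal('1.796')]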
Related
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
I wanted values like
1, 1.02, 1.04, 1.06, 1.08, etc.
so I used numpy in Python:
y = [x for x in numpy.arange(1,2,0.02)]
I got the values:
1.0,
1.02,
1.04,
1.0600000000000001,
1.0800000000000001,
I have three questions here:
How do I get exactly the values 1, 1.02, 1.04, 1.06, 1.08, etc.?
Why do I get correct values for 1.02 and 1.04, but not for 1.06, which shows up as 1.0600000000000001?
How reliable can our programs be when we can't trust such basic operations, in programs that can run into thousands of lines of code and perform many calculations? How do we cope with such scenarios?
There are very similar questions that address problems with floating point in general and the numpy library in particular:
Is floating point math broken?
Why Are Floating Point Numbers Inaccurate?
While they address why such a thing happens, here I'm concerned more with how to deal with such scenarios in everyday programming, particularly with numpy in Python. Hence these questions.
First pitfall: do not confuse accuracy with printing policies.
In your example (with arange imported from numpy):
In [6]: [x.as_integer_ratio() for x in arange(1,1.1,0.02)]
Out[6]:
[(1, 1),
(2296835809958953, 2251799813685248),
(1170935903116329, 1125899906842624),
(2386907802506363, 2251799813685248),
(607985949695017, 562949953421312),
(2476979795053773, 2251799813685248)]
shows that only 1 has an exact float representation.
In [7]: ['{:1.18f}'.format(f) for f in arange(1,1.1,.02)]
Out[7]:
['1.000000000000000000',
'1.020000000000000018',
'1.040000000000000036',
'1.060000000000000053',
'1.080000000000000071',
'1.100000000000000089']
shows the internal accuracy.
In [8]: arange(1,1.1,.02)
Out[8]: array([ 1. , 1.02, 1.04, 1.06, 1.08, 1.1 ])
shows how numpy deals with printing: by default it rounds to at most 8 digits of precision and discards trailing zeros.
In [9]: [f for f in arange(1,1.1,.02)]
Out[9]: [1.0, 1.02, 1.04, 1.0600000000000001, 1.0800000000000001, 1.1000000000000001]
shows how Python deals with printing: repr rounds to at most 17 significant digits and discards trailing zeros after the first decimal digit.
Some advice for how to deal with such scenarios in everyday programming:
Keep in mind that each operation on floats can deteriorate accuracy. Natural float64 accuracy is roughly 1e-16, which is sufficient for a lot of applications. Subtraction is the most common source of precision loss, as in this example where the exact result is 0:
In [5]: [((1+10**(-exp))-1)/10**(-exp)-1 for exp in range(0,24,4)]
Out[5]:
[0.0,
-1.1013412404281553e-13,
-6.07747097092215e-09,
8.890058234101161e-05,
-1.0,
-1.0]
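One practical habit for everyday code, as a suggestion beyond the answer above: never test floats for exact equality; compare with a tolerance instead, for example with numpy.isclose / numpy.allclose.
import numpy as np

y = np.arange(1, 1.1, 0.02)
print(y[3] == 1.06)            # False: y[3] is really 1.0600000000000001
print(np.isclose(y[3], 1.06))  # True: equal within the default tolerance
print(np.allclose(y, [1, 1.02, 1.04, 1.06, 1.08, 1.1]))  # True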
This question already has answers here:
Python rounding error with float numbers [duplicate]
(2 answers)
Closed 8 years ago.
How do I convert a float value to another float value with some specified precision using Python?
>>> a = 0.1
but the actual value of a is
0.10000000000000001
and I need a to be exactly 0.1 as a float.
It is not possible to have exactly 0.1 when using the float type, due to how numbers are stored internally. This offsite resource will hopefully explain the internals more easily.
Note that when you use the Python console it may look like you have exactly 0.1, but this is not true, as the code below shows:
In [35]: a = 0.1
In [36]: print(a)
0.1 # Looks fine! But wait, let's print with 30 decimal places...
In [37]: print('{:.30f}'.format(a))
0.100000000000000005551115123126 # Uh oh.
This occurs because the console only prints a limited number of decimal places, and for 0.1 the point at which the stored value starts to deviate from 0.1 lies beyond that limit.
Python does have a decimal module which provides support for decimal numbers, as below:
import decimal
a = decimal.Decimal('0.1')
print(a)
# 0.1
print('{:.30f}'.format(a))
# 0.100000000000000000000000000000
Note that I've constructed the Decimal object using the string '0.1' as opposed to the float 0.1. This is because if I had used the float then the Decimal object would have contained the same "errors" that the float has.
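For illustration, constructing Decimal both ways shows the difference: the string gives the exact decimal value, while the float hands over its binary approximation.
from decimal import Decimal

print(Decimal('0.1'))  # 0.1
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625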
How about the round function?
You can also use the format function together with float, or the round function:
>>> a = float(format(0.10000000000000001, '.1f'))
>>> a
0.1
>>> round(0.10000000000000001,2)
0.1
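One caveat with round: the result is still a binary float, so it can occasionally surprise you. A well-known example from the Python documentation:
>>> round(2.675, 2)  # 2.675 is stored as 2.67499999999999982..., so it rounds down
2.67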
This question already has answers here:
Is floating point math broken?
(31 answers)
Is floating point arbitrary precision available?
(5 answers)
Closed 7 years ago.
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29, and likewise for .57 and .58, the reason being that Python returns float(29)/100 as 0.28999999999999998. But this isn't a systematic error, in the sense that it doesn't happen to every integer. So I created the following Python script:
import sys
n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a: print i, a
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be written as a finite sum of powers of two (equivalently, any fraction whose denominator in lowest terms is not a power of two) can't be represented exactly as a floating point number and has to be approximated. Sometimes the closest approximation will be less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
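To make that concrete with the numbers from the question: the closest double to 29/100 is slightly below 0.29, so multiplying by 100 and truncating with int() lands on 28.
>>> '{:.20f}'.format(29 / 100.0)
'0.28999999999999998002'
>>> 0.29 * 100
28.999999999999996
>>> int(0.29 * 100)
28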
It's very well known, due to the nature of floating point numbers.
If you want to do decimal arithmetic rather than floating point arithmetic, there are libraries to do this.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29.00')
>>> int(Decimal('29.00'))
29
In general, decimal is probably going overboard and will still have rounding errors in the cases where the number does not have a finite decimal representation (for example, any fraction whose denominator in lowest terms has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to always round before casting a floating point value to an int, unless you want a floor function.
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is to use the Fraction class from the fractions module, which doesn't approximate. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
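For example, a minimal sketch of the Fraction approach with the numbers from the question:
>>> from fractions import Fraction
>>> delta = Fraction(29, 100)  # exactly 29/100, no binary approximation
>>> int(delta * 100)
29
>>> float(delta)               # rounding only happens if you convert to float
0.29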
This question already has answers here:
Floating Point Limitations [duplicate]
(3 answers)
Closed 9 years ago.
I spent an hour today trying to figure out why
return abs(val-desired) <= 0.1
was occasionally returning False, despite val and desired having an absolute difference of <=0.1. After some debugging, I found out that -13.2 + 13.3 = 0.10000000000000142. Now I understand that CPUs cannot easily represent most real numbers, but this is an exception, because you can subtract 0.00000000000000142 and get 0.1, so it can be represented in Python.
I am running Python 2.7 on Intel Core architecture CPUs (this is all I have been able to test it on). I'm curious to know how I can store a value of 0.1 despite not being able to apply arithmetic to particular floating point values. val and desired are float values.
Yes, this can be a bit surprising:
>>> +13.3
13.300000000000001
>>> -13.2
-13.199999999999999
>>> 0.1
0.10000000000000001
All these numbers can be represented with some 16 digits of accuracy. So why:
>>> 13.3-13.2
0.10000000000000142
Why only 14 digits of accuracy in that case?
Well, that's because 13.3 and -13.2 have 16 significant digits of accuracy, which means 14 decimal places, since there are two digits before the decimal point. So the result also has only about 14 decimal places of accuracy, even though the computer can represent numbers with 16 significant digits.
If we make the numbers bigger, the accuracy of the result decreases further:
>>> 13000.3-13000.2
0.099999999998544808
>>> 1.33E10-13.2E10
-118700000000.0
In short, the accuracy of the result depends on the accuracy of the input.
"Now I understand that CPUs cannot easily represent most floating point numbers with high resolution", the fact you asked this question indicates that you don't understand. None of the real values 13.2, 13.3 nor 0.1 can be represented exactly as floating point numbers:
>>> "{:.20f}".format(13.2)
'13.19999999999999928946'
>>> "{:.20f}".format(13.3)
'13.30000000000000071054'
>>> "{:.20f}".format(0.1)
'0.10000000000000000555'
To directly address your question of "how do I store a value like 0.1 and do an exact comparison to it when I have imprecise floating-point numbers": the answer is to use a different type to represent your numbers. Python has a decimal module for doing decimal fixed-point and floating-point math instead of binary; in decimal, 0.1, -13.2, and 13.3 can all be represented exactly rather than approximately. You can also set a specific level of precision when doing calculations with decimal and discard digits below that level of significance.
val = decimal.Decimal(some calculation)
desired = decimal.Decimal(some other calculation)
return abs(val-desired) <= decimal.Decimal('0.1')
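With the concrete numbers from the question, a minimal sketch might look like this:
from decimal import Decimal

val = Decimal('13.3')
desired = Decimal('13.2')
print(val - desired)                         # 0.1, exactly
print(abs(val - desired) <= Decimal('0.1'))  # True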
The other common alternative is to use integers instead of floats by artificially multiplying by some power of ten.
return not int(abs(val-desired)*10)