Wrong results when dividing floats [duplicate] - python

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
I'm expecting 0.1 as a result but:
In [1]: 0.3 / 3
Out[1]: 0.09999999999999999
Tried with Decimal, nothing changed.
In [2]: from decimal import Decimal
In [3]: Decimal(0.3) / Decimal(3)
Out[3]: Decimal('0.09999999999999999629925658458')
What should I have to do to get correct result?

Just think about what the result of 0.3/3 is, if done non-numerically. It's 0.1, right? Whenever you perform a mathematical operation numerically (read: using a computer of some sort), you introduce rounding errors, which are unavoidable. They are due to the way computers represent and operate on binary floating-point numbers. The result Python is giving you is not really wrong; it is just subject to those rounding errors. The result you get is 0.1, only off by roughly 1e-16, which is machine precision. This is basically the best binary floating point can do.
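If you just need to treat the result as 0.1 despite that tiny error, compare with an explicit tolerance; and if you really need exact decimal arithmetic, Decimal only helps when it is built from strings, because Decimal(0.3) starts from the already-rounded binary float. A small sketch using only the standard library (math.isclose with its default tolerance is one common choice, not the only one):
import math
from decimal import Decimal

result = 0.3 / 3
print(result)                         # 0.09999999999999999
print(math.isclose(result, 0.1))      # True: equal within the default relative tolerance

# Decimal(0.3) inherits the rounding error baked into the binary float 0.3;
# constructing Decimals from strings keeps the values exactly decimal.
print(Decimal(0.3))                   # 0.2999999999999999888977697537...
print(Decimal('0.3') / Decimal('3'))  # 0.1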

Related

Sum() returns bad value when used in a list of numbers with many decimals [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 months ago.
a = [0.0021, 0.0087]
s = sum(a)
print(s)
Outcome: 0.010799999999999999
When executing the program above, the result is longer than expected and appears erroneous.
After performing multiple tests, including:
a = 0.0021
b = 0.0087
The result is the same. I tried different combinations of numbers and it seems that only these 2 have such an odd outcome.
This is a floating-point arithmetic error: 0.0021 and 0.0087 cannot be stored exactly as binary floats, so their sum is already slightly off before it is printed. If you want to see how CPython itself performs the addition, you can look into PyNumber_Add and the BINARY_ADD operation, but they are not the cause of the inexact result; the binary representation of the operands is.
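If an exactly decimal sum is actually required, the usual route is the decimal module with string inputs, so the values never pass through binary floats. A minimal sketch along those lines:
from decimal import Decimal

a = [Decimal('0.0021'), Decimal('0.0087')]
s = sum(a)   # sum() works with Decimal values; the default start value 0 is an int, which Decimal accepts
print(s)     # 0.0108, exactly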

Different division result python [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Python floating-point math is wrong [duplicate]
(2 answers)
Closed 4 years ago.
I got a strange result in Python 3.6.3. I tried some code and ran into this problem.
>>> a = 10**32
>>> print(a/1000/1000)
9.999999999999999e+25
As you can see, it is not exactly right, but if I go the other way, I get what I expect:
>>> print(a/1000000)
1e+26
Same thing with
>>> 10**26
>>> 10**31
Can somebody explain to me what's wrong? I tried writing it in one line, with no better result:
>>> a = 10**32
>>> a/1000/1000
9.999999999999999e+25
As you know, Python 3 division is no longer integer division (a//1000//1000 would have worked fine), so you're performing two floating-point divisions here, each introducing an (unnecessary) rounding error that accumulates.
>>> a/1000000
1e+26
This only performs one division, so the floating-point error is smaller, even though the result is now a float.
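Since 10**32 and both divisors are integers, floor division keeps the whole computation in exact integer arithmetic, as noted above. A quick illustration:
a = 10**32
print(a / 1000 / 1000)    # 9.999999999999999e+25  (two float divisions)
print(a / 1000000)        # 1e+26                  (one float division)
print(a // 1000 // 1000)  # 100000000000000000000000000  (exact integer result)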

Wrong division result in Python [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
This is my first time asking on Stack Overflow, and I ran into trouble while programming with Python 2.7.
Here I have a calculation:
1350/2.7
The exact answer should be 500, but Python gives 499.99999999999994.
I know that some numbers cannot be represented exactly in binary, which causes errors in floating-point calculations.
So can anybody give me some advice? How do I deal with this exactly?
You could use the Decimal module. However, in your specific case you can avoid the problem by multiplying both the numerator and the divisor by the same number so as to make the divisor an integer, like this:
(1350*10)/(2.7*10)
which is of course the same as:
13500/27
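The Decimal route mentioned above also gives an exact answer here, provided the values are written as strings so they are never rounded to binary floats first. A rough sketch:
from decimal import Decimal

print(Decimal('1350') / Decimal('2.7'))  # 500
print(1350 * 10 / (2.7 * 10))            # 500.0 -- the scaling trick shown above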
That is a representation error, see https://docs.python.org/2/tutorial/floatingpoint.html#representation-error
You can check whether it is 500.00 with:
eps = 1.0e-10
if abs(1350/2.7 - 500) < eps:
    ...
Or just use round(number[, ndigits]).
You can use Python's built-in round function, or math.ceil to round up:
$ python
>>> round(1350/2.7)
500.0
>>> import math
>>> math.ceil(1350/2.7)
500.0
Here is more of an explanation of why this happens

Sum of floats: unexpected result [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
The following output surprised me:
1.1 + 2.2
=> 3.3000000000000003
An unexpected extra digit showed up in the sum. The same does not happen for other addends, e.g.:
3.0 + 0.3
=> 3.3
I tried both in Python 2.7 and in 3.4, but the result is the same. What is the reason for this unexpected result of the sum?
Mainly because binary doesn't play well with decimal fractions (10 has the prime factor 5, which has no finite binary expansion for its reciprocal) and floats have limited precision.
Ultimately, when it comes down to it, computers work with binary numbers. Some fractional numbers do not translate as neatly as we would like into binary, so the stored value carries some left-over digital garbage.
For a more complete discussion, see python floating number and Limiting floats to two decimal points, but a reasonable solution might be to round to the desired precision, like:
>>> a = 1.1 + 2.2
>>> a = round(a,1)
>>> a
3.3
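To see where that extra digit comes from, you can print the values the floats actually store with a wide fixed-point format. This is just an illustrative check; the exact trailing digits depend on IEEE 754 doubles:
print(format(1.1, '.20f'))        # slightly above 1.1
print(format(2.2, '.20f'))        # slightly above 2.2
print(format(1.1 + 2.2, '.20f'))  # slightly above 3.3, so repr shows 3.3000000000000003
print(format(3.0 + 0.3, '.20f'))  # slightly below 3.3, and 3.3 is the shortest repr that round-trips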

How to round a float up on 5 [duplicate]

This question already has answers here:
Limiting floats to two decimal points
(35 answers)
Closed 7 years ago.
So I was surprised I didn't find anything regarding this.
I have a Python script which is testing a C++ program. It needs to format a float in the same way std::setprecision does. That is, a float like 1.265 should be rounded UP to 1.27 (2 dp).
Now I have the following code:
"{:.2f}".format(myFloat)
The issue is that numbers like 1.265 are rounded to 1.26 and my tests fail. setprecision rounds 1.265 to 1.27.
What is the best way to fix this issue?
You can use double rounding to overcome the inability of binary arithmetic to exactly represent a decimal value.
round(round(1.265, 3) + 0.0005, 2)
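An alternative to the double-rounding trick is to do the rounding in decimal arithmetic, where 1.265 really is a halfway case, and ask for half-up behaviour explicitly. A sketch using the standard decimal module; converting through str() takes the float at its printed decimal form, which is an assumption about what the C++ side intends, and the helper name format_2dp is just illustrative:
from decimal import Decimal, ROUND_HALF_UP

def format_2dp(x):
    # Quantize the decimal form of x to two places, rounding halves away from zero.
    return str(Decimal(str(x)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))

print(format_2dp(1.265))  # 1.27
print(format_2dp(1.264))  # 1.26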
