How to round a float up on 5 [duplicate] - python

This question already has answers here:
Limiting floats to two decimal points
(35 answers)
Closed 7 years ago.
So I was surprised I didn't find anything regarding this.
I have a Python script which is testing a C++ program. It needs to format a float in the same way std::setprecision does. That is, a float like 1.265 should be rounded UP to 1.27 (2 dp).
Now I have the following code:
"{:.2f}".format(myFloat)
The issue is that numbers like 1.265 are rounded to 1.26 and my tests fail. setprecision rounds 1.265 to 1.27.
What is the best way to fix this issue?

You can use double rounding to overcome the inability of binary arithmetic to exactly represent a decimal value.
round(round(1.265, 3) + 0.0005, 2)
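Another option, assuming a fixed number of decimal places is enough, is the standard-library decimal module, which lets you choose the rounding mode explicitly (ROUND_HALF_UP rounds halfway cases away from zero). The helper name format_half_up below is just for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

def format_half_up(x, places=2):
    # Convert via str() so we round the decimal digits as written,
    # not the slightly-off binary value stored in the float.
    exp = Decimal(1).scaleb(-places)  # Decimal('0.01') for places=2
    return str(Decimal(str(x)).quantize(exp, rounding=ROUND_HALF_UP))

print(format_half_up(1.265))  # '1.27'
print(format_half_up(1.264))  # '1.26'
```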

Related

Why does math.floor() sometimes round up? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 months ago.
It seems, from what I can tell, that Python 3.10.4's math.floor() will sometimes round up instead of down. This seems contrary to the purpose of the function.
Could someone please explain this?
Example:
>>> math.floor(0.9999999999999999)
0
>>> math.floor(0.99999999999999999)
1
floor() has nothing to do with this:
>>> 0.9999999999999999
0.9999999999999999
>>> 0.99999999999999999
1.0
That is, your second literal rounds up to 1.0 all on its own before math.floor() happens. Thus, math.floor() is flooring the number 1, not 0.99999999999999999. Floating-point literals are automatically converted to internal machine binary floating-point format, which has only 53 bits of precision ("IEEE 754 double" format on almost all machines).
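You can watch the conversion happen before floor() is even called; a quick sketch:

```python
import math

a = 0.9999999999999999    # 16 nines: the nearest double is just below 1.0
b = 0.99999999999999999   # 17 nines: the nearest double is exactly 1.0

print(a == 1.0)                      # False
print(b == 1.0)                      # True
print(math.floor(a), math.floor(b))  # 0 1
```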

why isn't 0.1235 * 10 equals 1.235 in python? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
I am using Python 3.6.8 and tried to multiply 0.1235 by 10, and the answer is 1.2349999999999999 rather than 1.235.
After importing the decimal module, when we multiply decimal.Decimal(0.1235) by 10 we get Decimal('1.234999999999999986677323704') rather than Decimal('1.235').
So how do I do precise float calculations in Python?
The value 0.1235 is a decimal fraction that must be converted to a binary fraction in memory, and that conversion is not exact. Please refer to Floating Point Arithmetic: Issues and Limitations in the Python tutorial.
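Note that Decimal(0.1235) inherits the binary error already baked into the float; constructing the Decimal from a string captures the exact decimal value, so the multiplication comes out clean:

```python
from decimal import Decimal

# From a float: starts from the binary approximation of 0.1235.
print(Decimal(0.1235))           # 0.12349999999999999866...
# From a string: starts from the exact decimal value.
print(Decimal('0.1235') * 10)    # 1.2350
```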

limited precision of floats in Python [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
I am running this code on Python (both 2.7, 3.x):
>>> 1.1 + 2.2
3.3000000000000003
>>> 1.1 + 2.3
3.4
Could someone explain how this works and what is happening?
Python's float implements IEEE 754 double precision. Unless a number can be written as a fraction whose denominator is a power of two, it cannot be represented exactly, only approximately, to roughly 16 significant decimal digits. So numbers like 1, 0.5, and 0.25 are represented exactly, but a number like 3.3 only approximately. The result is correct to about 16 digits, and then you get the trailing 3, which is the representation error showing through.
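A quick way to see which values are exact is fractions.Fraction, which recovers the exact rational value a float actually stores:

```python
from fractions import Fraction

print(Fraction(0.25))                        # 1/4: power-of-two denominator, exact
print(Fraction(1.1) == Fraction(11, 10))     # False: 1.1 stores a nearby binary fraction
print(float(Fraction(1.1) + Fraction(2.2)))  # the exact sum, rounded back to a double
```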

Sum of floats: unexpected result [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
The following output surprised me:
1.1 + 2.2
=> 3.3000000000000003
An unexpected small digit came up from the sum. The same does not happen for other addends, e.g.:
3.0 + 0.3
=> 3.3
I tried both in Python 2.7 and in 3.4, but the result is the same. What is the reason for this unexpected result of the sum?
Mainly because binary doesn't play well with decimal (10 has the prime factor 5, which 2 lacks) and floats have limited precision.
Ultimately, computers work with binary numbers, and some decimal fractions do not translate into binary as neatly as we would like. The resulting value carries some leftover digits from the approximation.
For a more complete discussion, see: python floating number and Limiting floats to two decimal points. A reasonable workaround is to round to the precision you actually need:
>>> a = 1.1 + 2.2
>>> a = round(a,1)
>>> a
3.3
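Note that the "clean" result of 3.0 + 0.3 is not actually exact either; repr() simply prints the shortest decimal string that round-trips to the same double. A quick sketch:

```python
# Both 3.3 and 3.0 + 0.3 land on the same double, so they compare equal
# and print as "3.3"; 1.1 + 2.2 lands one double away.
print(3.0 + 0.3 == 3.3)     # True
print(1.1 + 2.2 == 3.3)     # False
print(f"{3.0 + 0.3:.20f}")  # 3.29999999999999982236
```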

Wrong results when dividing floats [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
I'm expecting 0.1 as a result but:
In [1]: 0.3 / 3
Out[1]: 0.09999999999999999
Tried with Decimal, nothing changed.
In [2]: from decimal import Decimal
In [3]: Decimal(0.3) / Decimal(3)
Out[3]: Decimal('0.09999999999999999629925658458')
What do I have to do to get the correct result?
Just think about what the result of 0.3 / 3 is, if done non-numerically. It's 0.1, right? But any mathematical operation performed numerically (read: using a computer of some sort) introduces small errors, which are unavoidable; they are due to the way computers represent and operate on numbers. The result Python is giving you is not really wrong: it is simply subject to those arithmetical errors. What you get is 0.1, off by roughly 1e-16, which is on the order of machine epsilon; this is essentially the best a double can do. (Decimal didn't help here because Decimal(0.3) starts from the binary approximation already stored in the float 0.3; construct it from the string '0.3' instead.)
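As a sketch, Decimal gives the exact answer only when it is constructed from strings, since Decimal(0.3) inherits the float's binary approximation:

```python
from decimal import Decimal

print(Decimal(0.3) / Decimal(3))      # carries the float's binary error
print(Decimal('0.3') / Decimal('3'))  # 0.1
```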
