This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
I wish to generate a list with these values:
[0,0.01,0.02...0.49]
and I thought that this was the correct way to do it:
[0.01*i for i in range(50)]
But to my horror I observed the following:
>>> 35*0.01
0.35000000000000003
>>> float("0.35")
0.35
What am I doing wrong? If I understand correctly, it seems 35*0.01 does not give the closest floating-point representation of 0.35, but float("0.35") does.
To guarantee the nearest valid floating point number, you need to start with values that are exact. 0.01 isn't exact, but 100.0 is.
[i/100.0 for i in range(50)]
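A quick sketch (plain Python, my addition to the answer above) contrasting the two constructions:

```python
# The inexact way: 0.01 is not representable exactly in binary, and the
# multiplication error compounds on top of the representation error.
scaled = [0.01 * i for i in range(50)]

# The suggested way: i and 100.0 are both exact, and a single division
# is correctly rounded, so each result is the nearest possible float.
divided = [i / 100.0 for i in range(50)]

print(scaled[35])                     # 0.35000000000000003
print(divided[35])                    # 0.35
print(divided[35] == float("0.35"))   # True
```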
This question already has answers here:
Is floating point math broken?
(31 answers)
How to round a floating point number up to a certain decimal place?
(12 answers)
Closed 7 months ago.
from math import ceil
ceil((14.84 - 14.04)/0.08)
Output: 11
I expected the output to be 10 when calculated by hand, but running it in Python gives 11.
Is floating point math broken?
The value of your expression is actually 10.000000000000009 because of how floats are represented (see the linked question for more information). So even though it is only a tiny amount above 10, the ceiling function still rounds it up to 11.
You can try rounding the number to a decimal point that you trust to get the value you want:
from math import ceil
ceil(round((14.84 - 14.04)/0.08, 2))
Output: 10
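If you want the intermediate arithmetic itself to be exact rather than rounded after the fact, the standard-library decimal module is an alternative (a sketch, not part of the original answer). Decimals built from strings do the arithmetic in base 10, matching the hand calculation:

```python
from decimal import Decimal
from math import ceil

# With binary floats, (14.84 - 14.04) / 0.08 lands just above 10,
# so ceil() rounds it up to 11.
# With Decimal, the same arithmetic is done in base 10 and is exact here.
result = (Decimal("14.84") - Decimal("14.04")) / Decimal("0.08")
print(result)        # 10
print(ceil(result))  # 10
```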
This question already has answers here:
Is floating point math broken?
(31 answers)
Limiting floats to two decimal points
(35 answers)
Closed 5 years ago.
I have this function
a = 2
b = 6
c = .4
def f(n):
    if n == 1:
        return a
    if n == 2:
        return b
    else:
        return c*f(n-1) + (1-c)*f(n-2)
It's a simple recursive function with two base cases. f(1) yields 2, f(2) yields 6, which is good, but f(3) yields 3.6000000000000005, when it should be just 3.6. I don't understand where these extra digits are coming from. What might be causing this?
Welcome to the magic of floating point math. You can find a nice answer here: Is floating point math broken?
Spoiler: no, just difficult.
Using floating-point numbers in computers makes rounding necessary: there are infinitely many real numbers between 0.0 and 1.0, for example, so representing all of them would require infinite memory. Instead, the computer uses a finite set of floating-point numbers that can be represented exactly. If the result of a computation is not one of these representable numbers, it gets rounded and the result is not exact. That is what happens to you here.
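If you need the recursion to stay exact, one option (my addition, not from the answer above) is fractions.Fraction from the standard library, which represents 0.4 exactly as 2/5:

```python
from fractions import Fraction

a = 2
b = 6
c = Fraction(2, 5)  # exactly 0.4, unlike the binary float .4

def f(n):
    if n == 1:
        return a
    if n == 2:
        return b
    # All arithmetic here is exact rational arithmetic.
    return c * f(n - 1) + (1 - c) * f(n - 2)

print(f(3))          # 18/5
print(float(f(3)))   # 3.6
```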
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
I am using numpy.logspace(-4,1,6) to create [0.0001, 0.001, 0.01, 0.1, 1, 10].
I noticed the numbers generated with numpy.logspace have numeric errors.
print('{:.24f}'.format(np.logspace(-4, 1, 6)[3]))
prints "0.100000000000000005551115" instead of the "0.1" I expected.
Is there any way to eliminate numpy.logspace's numeric errors for integer powers of 10 (i.e. 10^n, where n is an integer)?
Thanks.
The simple answer is: no.
The somewhat longer answer is that no floating-point number with the value 0.1 exists, at least not exactly. The stored value is exact from the computer's point of view (within machine precision), but it is not 0.1 in the analytical/mathematical sense.
Check out: https://en.wikipedia.org/wiki/Floating-point_arithmetic for further info.
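In fact, the digits you printed are exactly what the nearest double to 0.1 looks like, so logspace is already returning the best possible value. A quick check in plain Python (my addition, no NumPy needed):

```python
# The float literal 0.1 itself, printed to 24 places, shows the same
# "error": the stored double is 0.100000000000000005551115...,
# because 1/10 has no finite binary representation.
print('{:.24f}'.format(0.1))   # 0.100000000000000005551115

# So the logspace value equals the float 0.1 exactly; the apparent
# error is the representation of 0.1 itself, not a logspace bug.
```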
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
Using Python 2.7. Here are the code and output; my purpose is simply to check whether a number is a perfect cube.
Source code:
x = 1728 ** (1.0/3)
print x
y = int(x)
print y
Output:
12.0
11
Because you're using floating-point arithmetic, the result is some very small fraction less than 12, and when you cast x to an int, the entire fractional part is discarded.
If what you want to do is round the number, use round().
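A sketch of a robust cube check (Python 3 syntax; the helper name is my own): round the floating-point root to the nearest integer, then verify with exact integer arithmetic.

```python
def is_perfect_cube(n):
    # The float cube root may land just below or just above the true
    # root, so round to the nearest integer and confirm by cubing it
    # exactly. (For astronomically large n, the float root can drift
    # by more than 0.5 and neighbouring integers may need checking.)
    r = round(n ** (1.0 / 3))
    return r ** 3 == n

print(is_perfect_cube(1728))   # True
print(is_perfect_cube(1729))   # False
```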
This question already has answers here:
python numpy arange unexpected results
(5 answers)
Closed 7 years ago.
Why does np.arange(5, 60, 0.1)[150] yield 19.999999999999947, but np.arange(5, 60, 0.5)[30] yield 20.0?
That's because floats (most of the time) cannot represent the exact value you put in. Try print("%.25f" % np.float64(0.1)), which prints 0.1000000000000000055511151: that's not exactly 0.1.
NumPy already provides a good helper for almost-equal floating-point comparisons: np.testing.assert_almost_equal, so you can test with np.testing.assert_almost_equal(20, np.arange(5, 60, 0.1)[150]).
The reason your second example gives the exact value is that 0.5 can be represented exactly as a float (2**(-1) = 0.5), so arithmetic with this value does not suffer from that floating-point problem.
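The accumulation effect can be reproduced without NumPy, and the standard-library math.isclose works for the comparison when NumPy's helpers aren't available (my sketch, not from the answer above):

```python
import math

# Repeatedly adding the inexact float 0.1 lets rounding error
# accumulate, much like building the 0.1-step range does.
x = 5.0
for _ in range(150):
    x += 0.1
print(x)   # slightly off from 20.0

# Repeatedly adding the exact float 0.5 stays exact: every partial
# sum is a multiple of 0.5 and is exactly representable.
y = 5.0
for _ in range(30):
    y += 0.5
print(y)   # 20.0

print(math.isclose(x, 20.0))   # True
```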