Python float approximations interpreted as actual values by Cython

Note: I already know that Python uses approximations for floating-point numbers. However, their behaviour seems inconsistent even after a subtraction (which should not increase the representation error, should it?). Furthermore, I would like to know how to fix the problem.
Python sometimes seems to use an approximation for the actual value of a float, as you can see in the 5th and 6th elements of the this_spectrum variable in the image. All values were read from a text file and should have only 2 decimals.
When you print them or use them in a Python calculation, they behave as if they have the intended value:
In: this_spectrum[4] == 1716.31
Out: True
However, after using them in a Cython script which calculates all pairwise distances between elements (i.e. performs a simple subtraction of values) and stores them in alphaMatrix, it seems the Python approximations were used instead of the actual values:
In: alphaMatrix[0][0]
Out: 161.08
In: alphaMatrix[0][0] == 161.08
Out: False
In: alphaMatrix[0][0] == 161.07999999999996
Out: True
Why is this happening, and what would be the proper way to fix this problem? Is this really a Cython problem/bug/feature(?) or is there something else going on?

Your problem is here: “a Cython script which calculates all pairwise distances…”
In essentially every floating-point operation (addition, subtraction, multiplication, division, or any "elementary" function such as square root, cosine, or logarithm), some rounding error may be introduced. For the basic arithmetic operations, the result is calculated as if the exact mathematical result were computed and then rounded to the nearest representable floating-point value. Ideally, the elementary functions would behave this way too, but most implementations have some additional error, because calculating these functions precisely is difficult.
As an example, consider calculating 1/3*18 using decimal floating-point with four digits. 1 and 3 in decimal floating-point have no error; they are representable as 1.000 and 3.000. But their quotient, ⅓, is not representable. When you do the division, the result is .3333. Then, when you multiply .3333 by 18, the exact result is 5.9994. Because our floating-point format has only four digits, we have to round this to some number we can represent. The nearest representable value is 5.999, so that is returned.
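The same idea can be reproduced with Python's decimal module, using a four-digit context to mirror the example (a sketch; the variable names are mine):
from decimal import Decimal, Context
ctx = Context(prec=4)                          # four significant digits, as in the example
third = ctx.divide(Decimal(1), Decimal(3))     # 0.3333 -- already rounded
result = ctx.multiply(third, Decimal(18))      # exact product is 5.9994, rounded to 5.999
print(third, result)                           # 0.3333 5.999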
When you compare a calculated value to a constant such as 161.08, you are comparing a value with several accumulated rounding errors to a value with only one rounding error (when the decimal numeral “161.08” in the source is converted to binary floating-point, there is a rounding error). So the values are different.
You often will not see this difference when numbers are printed with default precision because only a few digits are shown. This means the full internal value is rounded to a few digits for display, and that rounding conceals the differences. If you print the numbers with more precision, you will see the differences.
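For example, format with 17 significant digits, which is enough to distinguish any two doubles (the operands below are hypothetical stand-ins for the question's data):
x = 1716.31 - 1555.23          # hypothetical operands in the spirit of the question's data
print(format(x, '.17g'))       # 17 significant digits expose any accumulated error
print(format(161.08, '.17g'))  # the literal itself carries only one rounding error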

Related

Algorithm to check if a given integer is a power of two; I am using log and failing? [duplicate]


Mitigating Floating Point Approximation Issues with Numpy

My code is quite simple, and only 1 line is causing an issue:
np.tan(np.radians(rotation))
Instead of the expected output of 1 for rotation = 45, I get 0.9999999999999999. I understand that 0 and a ton of 9's is 1. In my use case, however, it seems like the type of thing that will definitely build up over iterations.
What is causing the floating point error: np.tan or np.radians, and how do I get the problem function to come out correctly regardless of floating point inaccuracies?
Edit:
I should clarify that I am familiar with floating-point inaccuracies. My concern is that as that number gets multiplied, added, and compared, the 1e-16 error suddenly becomes a tangible issue. I've normally been able to safely ignore floating-point issues, but now I am far more concerned about the build-up of error. I would like to reduce the possibility of such an error.
Edit 2:
My current solution is to just round to 8 decimal places because that's most likely enough. It's sort of a temporary solution because I'd much prefer a way to get around the IEEE decimal representations.
What is causing the floating point error: np.tan or np.radians, and how do I get the problem function to come out correctly regardless of floating point inaccuracies?
Both functions incur rounding error, since in neither case is the exact result representable in floating point.
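A quick way to see both roundings at work (the exact digits may vary slightly with the platform's math library):
import numpy as np
x = np.radians(45)          # pi/4 is irrational, so this is only the nearest double
print(float(x))             # 0.7853981633974483
print(float(np.tan(x)))     # typically 0.9999999999999999: tan of the rounded input, rounded again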
My current solution is to just round to 8 decimal places because that's most likely enough. It's sort of a temporary solution because I'd much prefer a way to get around the IEEE decimal representations.
The problem has nothing to do with decimal representation, and this will give worse results outside of the exact case you mention above, e.g.
>>> np.tan(np.radians(60))
1.7320508075688767
>>> round(np.tan(np.radians(60)), 8)
1.73205081
>>> np.sqrt(3) # sqrt is correctly rounded, so this is the closest float to the true result
1.7320508075688772
If you absolutely need higher accuracy than the 15 decimal digits you would get from code above, then you can use an arbitrary precision library like gmpy2.
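For instance, a minimal sketch with mpmath, an arbitrary-precision library in the same spirit as gmpy2:
from mpmath import mp, tan
mp.dps = 50                     # work with 50 decimal digits
print(tan(mp.pi * 60 / 180))    # tan(60 degrees); agrees with sqrt(3) to about 50 digits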
Take a look here: https://docs.scipy.org/doc/numpy/user/basics.types.html .
Standard dtypes in numpy do not go beyond 64 bits precision. From the docs:
Be warned that even if np.longdouble offers more precision than python
float, it is easy to lose that extra precision, since python often
forces values to pass through float. For example, the % formatting
operator requires its arguments to be converted to standard python
types, and it is therefore impossible to preserve extended precision
even if many decimal places are requested. It can be useful to test
your code with the value 1 + np.finfo(np.longdouble).eps.
You can increase precision with np.longdouble, but this is platform-dependent.
In Spyder (Windows):
np.finfo(np.longdouble).eps #same precision as float
>> 2.220446049250313e-16
np.finfo(np.longdouble).precision
>> 15
In Google Colab:
np.finfo(np.longdouble).eps #larger precision
>> 1.084202172485504434e-19
np.finfo(np.longdouble).precision
>> 18
print(np.tan(np.radians(45, dtype=np.float64), dtype=np.float64) - 1)
>> -1.1102230246251565e-16
print(np.tan(np.radians(45, dtype=np.longdouble), dtype=np.longdouble) - 1)
>> 0.0

Python and R are returning different results where they should be exactly the same [closed]

[Python numpy code]
In [171]: A1*b
Out[171]:
array([ -7.55603523e-01, 7.18519356e-01, 3.98628050e-03,
9.27047917e-04, -1.31074698e-03, 1.44455190e-03,
1.02676602e-03, 5.03891225e-02, -1.15752426e-03,
-2.43685270e-02, 5.88382307e-03, 2.63372861e-04])
In [172]: (A1*b).sum()
Out[172]: -1.6702134467139196e-16
[R code]
> cholcholT[2,] * b
[1] -0.7556035225 0.7185193560 0.0039862805 0.0009270479 -0.0013107470
[6] 0.0014445519 0.0010267660 0.0503891225 -0.0011575243 -0.0243685270
[11] 0.0058838231 0.0002633729
> sum(cholcholT[2,] * b)
[1] -9.616873e-17
The first block is the numpy code and the second is the R code. Up until the element-wise product of the two vectors, they return the same result. However, when I add the products up, the results differ. I believe it doesn't have to do with the precision settings of the two, since both are double-precision based. Why is this happening?
You are experiencing what is called catastrophic cancellation. You are subtracting numbers from each other which differ only very slightly. As a result you get numbers which have a very high error relative to their value. The error stems from rounding errors which are introduced when your system stores values which cannot be represented by the binary system accurately.
Intuitively, you can think of this as the same difficulties you have when writing 1/3 as a decimal number. You would have to write 0.3333... , so infinitely many 3s behind the decimal point. You cannot do this and your computer can't either.
So your computer has to round the numbers somewhere.
You can see the rounding errors if you use something like
"{:.20e}".format(0.1)
You will see that after the 16th digit or so the number you wanted to store (1.0000000000000000000...×10^-1) differs from the number the computer actually stores (1.0000000000000000555...×10^-1).
To see in which order of magnitude this inaccuracy lies, you can view the machine epsilon. In simplified terms, this value gives you the minimum amount relative to your value which you can add to your value so that the computer can still distinguish the result from the old value (so it gets not rounded away while storing the result in memory).
If you execute
import numpy as np
eps = np.finfo(float).eps
print(eps)   # 2.220446049250313e-16 for 64-bit floats
you can see that this value lies on the order of magnitude of 10^-16.
The computer represents floats in a form like SIGN|EXPONENT|FRACTION. To simplify greatly, if computer memory stored numbers in decimal format, a number like -0.0053 would be stored as 1|-2|.53|: the 1 is the negative sign, and -2 means 'FRACTION times 10^-2'.
If you sum up floats, the computer must bring each float to the same exponent in order to add/subtract the digits of the FRACTION. Therefore all your values will be represented in terms of the greatest exponent of your data, which is -1. Therefore your rounding error will be on the order of magnitude of 10^-16 × 10^-1, which is 10^-17. You can see that your result is in this order of magnitude as well, so it is very much influenced by the rounding errors of your terms.
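A small illustration with made-up numbers of how much the rounding of intermediate sums depends on the order of the terms:
vals = [1e16, 1.0, -1e16, -1.0]          # made-up values whose true sum is exactly 0
print(sum(vals))                         # 1e16 + 1.0 rounds back to 1e16, so the result is -1.0
print(sum(sorted(vals)))                 # same numbers, different order: here the result is 0.0
print(sum(vals) == sum(sorted(vals)))    # False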
You are using floats and applying arithmetic to them. Floating-point arithmetic is a dangerous thing because it almost always introduces a small rounding error. Depending on whether this error is rounded up or down, or simply "cut off" from the binary representation, different results may appear.

Inaccurate Logarithm in Python

I work daily with Python 2.4 at my company. I used the versatile logarithm function 'log' from the standard math library, and when I entered log(2**31, 2) it returned 31.000000000000004, which struck me as a bit odd.
I did the same thing with other powers of 2, and it worked perfectly. I ran 'log10(2**31) / log10(2)' and I got a round 31.0
I tried running the same original function in Python 3.0.1, assuming that it was fixed in a more advanced version.
Why does this happen? Is it possible that there are some inaccuracies in mathematical functions in Python?
This is to be expected with computer arithmetic. It is following particular rules, such as IEEE 754, that probably don't match the math you learned in school.
If this actually matters, use Python's decimal type.
Example:
from decimal import Decimal, Context
ctx = Context(prec=20)                 # 20 significant digits
two = Decimal(2)
# ln(2**31) / ln(2), evaluated entirely within the 20-digit context
ctx.divide(ctx.power(two, Decimal(31)).ln(ctx), two.ln(ctx))
You should read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
http://docs.sun.com/source/806-3568/ncg_goldberg.html
Always assume that floating point operations will have some error in them and check for equality taking that error into account (either a percentage value like 0.00001% or a fixed value like 0.00000000001). This inaccuracy is a given since not all decimal numbers can be represented in binary with a fixed number of bits precision.
Your particular case is not one of them if Python uses IEEE 754, since 31 should be easily representable even with single precision. It's possible, however, that it loses precision in one of the many steps it takes to calculate log₂(2³¹), simply because it doesn't have code to detect special cases like a direct power of two.
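In modern Python, math.isclose implements exactly that kind of tolerance-based comparison, with both relative and absolute tolerances:
import math
a = math.log(2**31, 2)                      # e.g. 31.000000000000004, as in the question
print(math.isclose(a, 31, rel_tol=1e-9))    # True even though a == 31 may be False
print(math.isclose(a, 31, abs_tol=1e-9))    # True: an absolute tolerance also works here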
Floating-point operations are generally not exact. They return a result that has an acceptable relative error for the language/hardware infrastructure.
In general, it's quite wrong to assume that floating-point operations are precise, especially with single precision. See the "Accuracy problems" section of the Wikipedia "Floating point" article. :)
IEEE double-precision floating-point numbers have a 53-bit significand (52 explicitly stored fraction bits). Since 10^15 < 2^53 < 10^16, a double has between 15 and 16 significant decimal figures. The result 31.000000000000004 is correct to 16 figures, so it is as good as you can expect.
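You can query these limits directly in CPython:
import sys
print(sys.float_info.mant_dig)   # 53: bits in the significand of a double
print(sys.float_info.dig)        # 15: decimal digits that are always preserved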
This is normal. I would expect log10 to be more accurate than log(x, y), since it knows exactly what the base of the logarithm is; also, there may be some hardware support for calculating base-10 logarithms.
floats are imprecise
I don't buy that argument, because exact powers of two are represented exactly on most platforms (with underlying IEEE 754 floating point).
So if we really want the log2 of an exact power of 2 to be exact, we can make it so.
I'll demonstrate it in Squeak Smalltalk, because it is easy to change the base system in that language, but the language does not really matter: floating-point computation is universal, and Python's object model is not that far from Smalltalk's.
For taking a log in base n, there is the log: function defined in Number, which naively uses the natural (Napierian) logarithm ln:
log: aNumber
"Answer the log base aNumber of the receiver."
^self ln / aNumber ln
self ln (taking the natural logarithm of the receiver), aNumber ln and / are three operations that each round their result to the nearest Float, and these rounding errors can accumulate... So the naive implementation is subject to the rounding error you observe, and I guess that Python's implementation of the log function is not much different.
((2 raisedTo: 31) log: 2) = 31.000000000000004
But if I change the definition like this:
log: aNumber
"Answer the log base aNumber of the receiver."
aNumber = 2 ifTrue: [^self log2].
^self ln / aNumber ln
provide a generic log2 in Number class:
log2
"Answer the base-2 log of the receiver."
^self asFloat log2
and this refinement in the Float class:
log2
"Answer the base 2 logarithm of the receiver.
Care to answer exact result for exact power of two."
^self significand ln / Ln2 + self exponent asFloat
where Ln2 is a constant (2 ln). With this, I effectively get an exact log2 for exact powers of two, because the significand of such a number is 1.0 (including subnormals, with Squeak's exponent/significand definition), and 1.0 ln = 0.0.
The implementation is quite trivial and should translate without difficulty to Python (probably in the VM); the runtime cost is very cheap, so it's just a matter of how important we think this feature is, or is not.
As I always say, the fact that floating-point results are rounded to the nearest (or whatever rounding direction) representable value is not a license to waste ULPs. Exactness has a cost, both in terms of runtime penalty and implementation complexity, so it's driven by trade-offs.
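A rough Python translation of the same idea (the function name is mine), using math.frexp to split a float into significand and exponent:
import math

def log2_via_frexp(x):
    # math.frexp returns (m, e) with x == m * 2**e and 0.5 <= m < 1,
    # so rescale to a significand in [1, 2) before taking the logarithm.
    m, e = math.frexp(x)
    return math.log(2 * m) / math.log(2) + (e - 1)

print(log2_via_frexp(2.0 ** 31))   # exactly 31.0, since log(1.0) == 0.0
print(math.log(2 ** 31, 2))        # may show 31.000000000000004
In current Python, math.log2 is usually more accurate than log(x, 2) and typically returns exact results for exact powers of two.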
The representation (float.__repr__) of a number in Python tries to return a string of digits that, when converted back, is as close to the real value as possible, given that IEEE 754 arithmetic is precise only up to a limit. In any case, if you printed the result, you wouldn't notice:
>>> from math import log
>>> log(2**31,2)
31.000000000000004
>>> print log(2**31,2)
31.0
print converts its arguments to strings (in this case, through the float.__str__ method), which caters for the inaccuracy by displaying fewer digits:
>>> log(1000000,2)
19.931568569324174
>>> print log(1000000,2)
19.9315685693
>>> 1.0/10
0.10000000000000001
>>> print 1.0/10
0.1
usuallyuseless' answer is very useful, actually :)
If you wish to calculate the highest power of 'k' that fits in a number 'n' (i.e. the largest exponent answer with k**answer <= n), the code below might be helpful:
import math
answer = math.ceil(math.log(n, k))
while k**answer > n:
    answer -= 1
NOTE: You shouldn't use 'if' instead of 'while' because that will give wrong results in some cases like n=2**51-1 and k=2. In this example with 'if' the answer is 51 whereas with 'while' the answer is 50, which is correct.

Significant figures in the decimal module

So I've decided to try to solve my physics homework by writing some Python scripts to solve problems for me. One problem I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:
>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")
But this doesn't:
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")
So two questions:
Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math?
Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
Changing the decimal working precision to 2 digits is not a good idea, unless you are absolutely only going to perform a single operation.
You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive, so the Python developers settled on a default of 28 digits.
Once you've arrived at the final result, what you probably want is quantize:
>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")
You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?
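As a hint of what the directed-rounding approach above looks like, here is a minimal sketch of a single interval operation with decimal (the function name is mine):
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def interval_div(a, b, prec=28):
    # Lower and upper bounds on a / b for positive a and b,
    # obtained by rounding once toward -inf and once toward +inf.
    with localcontext() as ctx:
        ctx.prec = prec
        ctx.rounding = ROUND_FLOOR
        lo = a / b
        ctx.rounding = ROUND_CEILING
        hi = a / b
    return lo, hi

print(interval_div(Decimal('1.00'), Decimal('3.00'), prec=4))
# (Decimal('0.3333'), Decimal('0.3334'))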
Decimals won't throw away decimal places like that. If you really want to limit the working precision to 2 significant digits, then try
decimal.getcontext().prec = 2
EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 decimal places).
Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:
def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    # %e shows one digit before the point plus 'precision' digits after it,
    # so use leadingDigits - 1 to get leadingDigits significant digits
    return float('%.*e' % (int(leadingDigits) - 1, x))
The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:
>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
Decimal defaults to 28 significant digits of precision.
The only way to limit the number of digits it returns is by altering the precision.
What's wrong with floating point?
>>> "%8.2e"% ( 1.0/3.0 )
'3.33e-01'
It was designed for scientific-style calculations with a limited number of significant digits.
If I understand Decimal correctly, the "precision" is the number of digits after the decimal point in decimal notation.
You seem to want something else: the number of significant digits. That is one more than the number of digits after the decimal point in scientific notation.
I would be interested in learning about a Python module that does significant-digits-aware floating-point computations.
