I'm writing unit tests for my simulation and want to check that for specific parameters the result, a numpy array, is zero. Due to calculation inaccuracies, small values are also accepted (1e-7). What is the best way to assert this array is close to 0 in all places?
np.testing.assert_array_almost_equal(a, np.zeros(a.shape)) and assert_allclose fail, as the relative tolerance needed here would be inf (or 1 if you switch the arguments).
I feel like np.testing.assert_array_almost_equal_nulp(a, np.zeros(a.shape)) is not precise enough: it compares the difference to the spacing of the values, so it is always true for nulp >= 1 and false otherwise, but it says nothing about the amplitude of a.
Using np.testing.assert_(np.all(np.absolute(a) < 1e-7)) (based on this question) does not give any of the detailed output I am used to from the other np.testing methods.
Is there another way to test this? Maybe another testing package?
If you compare a numpy array with all zeros, you can use the absolute tolerance, as the relative tolerance does not make sense here:
import numpy as np
from numpy.testing import assert_allclose

def test_zero_array():
    a = np.array([0, 1e-07, 1e-08])
    assert_allclose(a, 0, atol=1e-07)
The rtol value does not matter in this case, as it is multiplied by 0 when calculating the tolerance:
atol + rtol * abs(desired)
Update: Replaced np.zeros_like(a) with the simpler scalar 0. As pointed out by @hintze, np array comparisons also work against scalars.
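A minimal sketch (the array a is just a stand-in for the simulation output) showing that the atol argument does all the work here:

import numpy as np
from numpy.testing import assert_allclose

a = np.array([0.0, 1e-08, -5e-08])   # hypothetical near-zero simulation result

assert_allclose(a, 0, atol=1e-07)    # passes: |a_i - 0| <= atol + rtol * |0| = 1e-07
# assert_allclose(a, 0)              # would fail: default atol=0, so the allowed error is rtol * |0| = 0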
Given a float x, I would like to find the largest floating point number that is less than x. How can I do this in Python?
I've tried subtracting machine epsilon from x (x - numpy.finfo(float).eps), but this evaluates to x for sufficiently large floats, and I need the value I get back to be strictly less than x.
There's some information about how to do this in C# here, but I have no idea how to do the same bitwise conversion in Python. Anybody know how to do this, or have another method for getting the same value?
(Bigger-picture problem -- I'm trying to numerically find the root of an equation with a singularity at x, within the bounds 0 < root < x. The solver (Scipy's toms748 implementation) evaluates on the boundaries, and it can't handle nan or inf values, so I can't give it exactly x as a bound. I don't know how close the root might be to the bound, so I want to give a bound as close to x as possible without actually producing an infinite value and crashing the solver.)
You are describing the basic usage of numpy.nextafter.
>>> import numpy as np
>>> np.nextafter(1.5, 0.0) # biggest float smaller than 1.5
1.4999999999999998
>>> np.nextafter(1.5, 2.0) # smallest float bigger than 1.5
1.5000000000000002
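For the bigger-picture problem, a rough sketch of how this could be fed to the solver (the function f, the singularity at x = 2.0 and the lower bound are all made up for illustration):

import numpy as np
from scipy.optimize import toms748

x = 2.0
def f(y):
    return 1.0 / (y - x) + 1.0        # singular at y = x, root at y = 1.0

upper = np.nextafter(x, 0.0)          # largest float strictly less than x, so f(upper) is still finite
root = toms748(f, 1e-12, upper)
print(root)                           # ~1.0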
Given:
>>> a,b=2,3
>>> c,d=3,2
>>> def f(x,y): print(x,y)
I have an existing function (as in: it cannot be changed) that takes two positional parameters, and I want those arguments to always be passed in ascending order; i.e., the call should effectively be f(2,3) no matter which two values I use (f(a,b) should behave the same as f(c,d) in the example above).
I know that I could do:
>>> f(*sorted([c,d]))
2 3
Or I could do:
>>> f(*((a,b) if a<b else (b,a)))
2 3
(Note the need for the tuple parentheses in this form, because the comma has lower precedence than the ternary expression.)
Or,
def my_f(a, b):
    return f(a, b) if a < b else f(b, a)
All these seem kinda kludgy. Is there another syntax that I am missing?
Edit
I missed an 'old school' Python idiom: index a two-member tuple, relying on True == 1 and False == 0:
>>> f(*((a,b),(b,a))[a>b])
2 3
Also:
>>> f(*{True:(a,b), False:(b,a)}[a<b])
2 3
Edit 2
The reason for this silly exercise: numpy.isclose has the following usage note:
For finite values, isclose uses the following equation to test whether
two floating point values are equivalent.
absolute(a - b) <= (atol + rtol * absolute(b))
The above equation is not symmetric in a and b, so that isclose(a, b)
might be different from isclose(b, a) in some rare cases.
I would prefer that not happen.
I am looking for the fastest way to make sure that arguments to numpy.isclose are in a consistent order. That is why I am shying away from f(*sorted([c,d]))
Implemented my solution in case anyone else is looking.
def sort(f):
    def wrapper(*args):
        return f(*sorted(args))
    return wrapper

@sort
def f(x, y):
    print(x, y)

f(3, 2)
# prints: 2 3
Also, since @Tadhg McDonald-Jensen mentioned that you may not be able to change the function yourself, you can wrap it instead:
my_func = sort(f)
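For example, reusing the unmodified f from the question:

my_func = sort(f)   # f is the original, unchangeable two-argument function
my_func(3, 2)       # prints: 2 3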
You mention that your use-case is np.isclose. However, your approach isn't a good way to solve the real issue - though that's understandable given the poor argument naming of that function: it sort of implies that both arguments are interchangeable. If it were numpy.isclose(measured, expected, ...) (or something like it), it would be much clearer.
For example if you expect the value 10 and measure 10.51 and you allow for 5% deviation, then in order to get a useful result you must use np.isclose(10.51, 10, ...), otherwise you would get wrong results:
>>> import numpy as np
>>> measured = 10.51
>>> expected = 10
>>> err_rel = 0.05
>>> err_abs = 0.0
>>> np.isclose(measured, expected, err_rel, err_abs)
False
>>> np.isclose(expected, measured, err_rel, err_abs)
True
It's easy to see that the first one gives the correct result, because the measured value is not within the tolerance of the expected value. That's because the relative uncertainty is an "attribute" of the expected value, not of the value you compare it with!
So solving this issue by "sorting" the parameters is just wrong. That's a bit like swapping the numerator and denominator of a division because the denominator contains zeros and dividing by zero could give NaN, Inf, a warning or an exception... it definitely avoids the problem, but only by giving an incorrect result (the comparison isn't perfect, because with division it will almost always give a wrong result; with isclose it's rare).
This was a somewhat artificial example designed to trigger that behaviour. Most of the time it doesn't matter whether you use measured, expected or expected, measured, but in the few cases where it does, you can't solve it by swapping the arguments (except when you have no "expected" result, but that rarely happens - at least it shouldn't).
There was some discussion about this topic when math.isclose was added to the python library:
Symmetry (PEP 485)
[...]
Which approach is most appropriate depends on what question is being asked. If the question is: "are these two numbers close to each other?", there is no obvious ordering, and a symmetric test is most appropriate.
However, if the question is: "Is the computed value within x% of this known value?", then it is appropriate to scale the tolerance to the known value, and an asymmetric test is most appropriate.
[...]
This proposal [for math.isclose] uses a symmetric test.
So if your test falls into the first category and you like a symmetric test - then math.isclose could be a viable alternative (at least if you're dealing with scalars):
math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
[...]
rel_tol is the relative tolerance – it is the maximum allowed difference between a and b, relative to the larger absolute value of a or b. For example, to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-09, which assures that the two values are the same within about 9 decimal digits. rel_tol must be greater than zero.
[...]
In case this answer couldn't convince you and you still want to use a sorted approach, you should order by the absolute value of your arguments (i.e. *sorted([a, b], key=abs)). Otherwise you might get surprising results when comparing negative numbers:
>>> np.isclose(-10.51, -10, err_rel, err_abs) # -10.51 is smaller than -10!
False
>>> np.isclose(-10, -10.51, err_rel, err_abs)
True
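For comparison, a quick check with the same numbers that the symmetric math.isclose gives the same answer regardless of argument order:

import math

# The symmetric test scales the tolerance by the larger of |a| and |b|,
# so swapping the arguments cannot change the result:
print(math.isclose(10.51, 10, rel_tol=0.05))   # True
print(math.isclose(10, 10.51, rel_tol=0.05))   # True (same result either way)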
For only two elements in the tuple, the second one is the preferred idiom -- in my experience. It's fast, readable, etc.
No, there isn't really another syntax. There's also
(min(a,b), max(a,b))
... but this isn't particularly superior to the other methods; merely another way of expressing it.
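With the a, b and f from the question this reads as:

>>> f(min(a, b), max(a, b))
2 3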
Note after comment by dawg:
A class with custom comparison operators could return the same object for both min and max.
I need to estimate the size of a population, by finding the value of n which maximises scipy.misc.comb(n, a)/n**b where a and b are constants. n, a and b are all integers.
Obviously, I could just loop over range(SOME_HUGE_NUMBER), calculate the value for each n and break out of the loop once I pass the maximum of the curve. But I wondered if there is an elegant way of doing this with (say) numpy/scipy, or some other elegant way in pure Python (e.g. an integer equivalent of Newton's method)?
As long as your number n is reasonably small (smaller than approx. 1500), my guess for the fastest way to do this is to actually try all possible values. You can do this quickly by using numpy:
import numpy as np
import scipy.misc as misc
nMax = 1000
a = 77
b = 100
n = np.arange(1, nMax+1, dtype=np.float64)
val = misc.comb(n, a)/n**b
print("Maximized for n={:d}".format(int(n[val.argmax()]+0.5)))
# Maximized for n=181
This is not especially elegant, but rather fast for that range of n. The problem is that for n > 1484 the numerator already gets too large to be stored in a float, so this method fails because you run into overflows. But this is not only a problem of numpy.ndarray not working with Python integers. Even with them, you would not be able to compute:
misc.comb(10000, 1000, exact=True)/10000**1001
as you want a float result from the division of two numbers larger than the maximum a float in Python can hold (max_exp = 1024 on my system; see sys.float_info). You couldn't use your range approach in that case either. If you really want to do something like that, you will have to take more care numerically.
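One way to take that care (just a sketch, not part of the original answer): work with logarithms via scipy.special.gammaln, so that neither the binomial coefficient nor n**b is ever formed explicitly:

import numpy as np
from scipy.special import gammaln

a, b = 77, 100
nMax = 100000
n = np.arange(a, nMax + 1, dtype=np.float64)   # comb(n, a) is zero for n < a anyway

# log(comb(n, a)) = gammaln(n+1) - gammaln(a+1) - gammaln(n-a+1)
log_val = gammaln(n + 1) - gammaln(a + 1) - gammaln(n - a + 1) - b * np.log(n)
print("Maximized for n={:d}".format(int(n[log_val.argmax()])))
# Maximized for n=181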
You essentially have a nicely smooth function of n that you want to maximise. n is required to be integral but we can consider the function instead to be a function of the reals. In this case, the maximising integral value of n must be close to (next to) the maximising real value.
We could convert comb to a real function by using the gamma function and use numerical optimisation techniques to find the maximum. Another approach is to replace the factorials with Stirling's approximation. This gives a moderately complicated but tractable algebraic expression. This expression is not hard to differentiate and set to zero to find the extrema.
I did this and obtained
n * (b + (n-a) * log((n-a)/n) ) = a * b - a/2
This is not straightforward to solve algebraically but easy enough numerically (e.g. using Newton's method, as you suggest).
I may have made a mistake in the algebra, but I typed the a = 77, b = 100 example into Wolfram Alpha and got 180.58 so the approach seems to work.
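A quick numerical check of that equation (assuming the algebra above is right), using scipy.optimize.brentq with a hand-picked bracket:

import numpy as np
from scipy.optimize import brentq

a, b = 77, 100

def g(n):
    # n * (b + (n-a) * log((n-a)/n)) - (a*b - a/2), i.e. the derived condition rearranged to g(n) = 0
    return n * (b + (n - a) * np.log((n - a) / n)) - (a * b - a / 2)

n_real = brentq(g, a + 1, 100 * a)   # bracket chosen by hand; needs n > a for the log
print(n_real)                        # about 180.58, so the integer maximizer is n = 181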
I need to solve this:
Check if AT * n * A = n, where A is the test matrix, AT is the transposed test matrix and n = [[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]].
I don't know how to check for equality due to the numerical errors in the float multiplication. How do I go about doing this?
Current code:
import numpy

def trans(A):
    n = numpy.matrix([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]])
    c = numpy.matrix.transpose(A) * n * numpy.matrix(A)
I have then tried:
if c == n:
    return True
I have also tried assigning variables to every element of matrix and then checking that each variable is within certain limits.
Typically, the way that numerical-precision limitations are overcome is by allowing for some epsilon (or error-value) between the actual value and expected value that is still considered 'equal'. For example, I might say that some value a is equal to some value b if they are within plus/minus 0.01. This would be implemented in python as:
def float_equals(a, b, epsilon):
    return abs(a - b) < epsilon
Of course, for matrices entered as lists, this isn't quite so simple. We have to check whether all values are within the epsilon of their partner. One example solution would be as follows, assuming your matrices are standard python lists:
from itertools import product  # need this to generate the indexes

def matrix_float_equals(A, B, epsilon):
    return all(abs(A[i][j] - B[i][j]) < epsilon
               for i, j in product(xrange(len(A)), repeat=2))
all returns True iff all values in an iterable are truthy (an element-wise and). product generates the Cartesian product of its arguments, and the repeat keyword lets you reuse the same iterable; given a range repeated twice, it yields a tuple for every (i, j) index pair. Of course, this method of index generation assumes square, equally-sized matrices. For non-square matrices you have to get more creative, but the idea is the same.
However, as is typically the way in python, there are libraries that do this kind of thing for you. NumPy's allclose does exactly this: it compares two numpy arrays for equality element-wise within some tolerance. If you're working with matrices in python for numeric analysis, numpy is really the way to go; I would get familiar with its basic API.
If a and b are numpy arrays or matrices of the same shape, then you can use allclose:
if numpy.allclose(a, b):  # a is approximately equal to b
    # do something ...
This checks that for all i and j, |a_ij - b_ij| <= atol + rtol * |b_ij|, where the absolute tolerance atol defaults to 1e-08 and the relative tolerance rtol defaults to 1e-05. Thus it is safe to use even if your calculations introduce numerical errors.
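Applied to the question, a small sketch (the Lorentz boost below is just a made-up example of a matrix A that should satisfy the identity exactly, up to round-off):

import numpy as np

n = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]], dtype=float)

v = 0.5                               # hypothetical boost velocity (in units of c)
g = 1.0 / np.sqrt(1.0 - v**2)
A = np.array([[g,     g * v, 0, 0],
              [g * v, g,     0, 0],
              [0,     0,     1, 0],
              [0,     0,     0, 1]])

print(np.allclose(A.T @ n @ A, n))    # True, despite floating point round-off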
Basically I have an array that may vary between any two numbers, and I want to preserve the distribution while constraining it to the [0,1] space. The function to do this is very very simple. I usually write it as:
def to01(array):
    array -= array.min()
    array /= array.max()
    return array
Of course it can and should be more complex to account for tons of situations, such as all the values being the same (divide by zero) and float vs. integer division (use np.subtract and np.divide instead of operators). But this is the most basic.
The problem is that I do this very frequently across stuff in my project, and it seems like a fairly standard mathematical operation. Is there a built in function that does this in NumPy?
Don't know if there's a builtin for that (probably not, it's not really a difficult thing to do as is). You can use vectorize to apply a function to all the elements of the array:
import numpy

def to01(array):
    a = array.min()
    # ignore the RuntimeWarning from the division by zero when all values are equal
    with numpy.errstate(divide='ignore'):
        b = 1. / (array.max() - array.min())
    if not numpy.isfinite(b):
        b = 0
    return numpy.vectorize(lambda x: b * (x - a))(array)
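A quick sanity check of this version (the input values are just illustrative):

import numpy as np

print(to01(np.array([2.0, 5.0, 8.0])))   # [0.  0.5 1. ]
print(to01(np.array([3.0, 3.0, 3.0])))   # [0. 0. 0.]  (the all-equal case falls back to b = 0)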