So I have a 500k-element array of floating-point values. When I try:
np.log10(my_long_array)
270k of the numbers get replaced with nan, and they are not that small. For example:
In [1]: import numpy as np
In [2]: t = -0.055488893531690543
In [3]: np.log10(t)
/home/aydar/anaconda3/bin/ipython:1: RuntimeWarning: invalid value encountered in log10
#!/home/aydar/anaconda3/bin/python3
Out[3]: nan
In [4]: type(t)
Out[4]: float
What am I missing?
The logarithm of a negative number is undefined, hence the nan.
From the docs to numpy.log10:
Returns: y : ndarray
The logarithm to the base 10 of x, element-wise. NaNs are returned where x is negative.
The logarithm of a negative number is always undefined. The logarithmic function
y = log_b(x)
is the inverse of the exponential function
x = b^y
Since the base b is positive (b > 0), b raised to any real power y must be positive (b^y > 0), so the argument x must be positive (x > 0).
The real base-b logarithm of a negative number is undefined:
log_b(x) is undefined for x ≤ 0
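If the negative entries are expected, there are a few common ways to handle them; a small sketch (my_long_array stands in for the asker's array):

import numpy as np

my_long_array = np.array([0.5, -0.055488893531690543, 3.0])

# Option 1: keep only the positive entries before taking the log.
positive_logs = np.log10(my_long_array[my_long_array > 0])

# Option 2: take the log of the magnitudes.
magnitude_logs = np.log10(np.abs(my_long_array))

# Option 3: np.emath.log10 returns complex values for negative inputs
# instead of nan.
complex_logs = np.emath.log10(my_long_array)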
I need to write a function which calculates the following equation and rounds the answer to 2 decimal places: z = π·e^(x²) / (4y).
There are the following constraints: the input variables x and y are single values (that is, not a list/array); if a division by zero occurs, raise a ValueError; the output should be rounded to 2 decimal places. Hint: you can calculate e^x using the NumPy function np.exp(x).
I have done the following but still fail the ValueError test (test_question_3_ValueError, inputs: [0.5, 0]):
import numpy as np

def custom_function(x, y):
    # your code here
    a = np.pi * np.exp(x**2)
    b = 4 * y
    z = a / b
    if b < 0:
        raise ValueError("Div by zero")
    return round(z, 2)
First, the division by zero can only happen if y = 0: since b = 4*y, b is 0 only when y is 0. So you should change the if statement to check b == 0.
Another thing: that if statement should come before z is calculated, because you want to raise the error before any computation starts, stopping everything that follows.
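Putting both points together, a corrected version of the function might look like this (a sketch, keeping the original rounding behaviour):

import numpy as np

def custom_function(x, y):
    b = 4 * y
    # Check for division by zero before doing any arithmetic with b.
    if b == 0:
        raise ValueError("Division by zero")
    z = (np.pi * np.exp(x**2)) / b
    return round(z, 2)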
How do I get a "small enough" infinity that its product with zero is zero?
I'm using an integer programming solver for Python, and some of my variables have infinite cost. I can't use float('inf'), because float('inf') * 0 is nan.
You might use the largest finite number that float can hold:
In [8]: import sys
In [9]: sys.float_info.max
Out[9]: 1.7976931348623157e+308
In [10]: sys.float_info.max * 0
Out[10]: 0.0
Rather than looking for a "smaller infinity", which doesn't exist, it may be easier to trap the NaN and substitute zero for it. To do this, you can use the fact that NaN is not equal to any float value, even itself. This approach is convenient because generally you can use it at the end of a chain of calculations (since the NaN will cascade through and make the results all NaN).
possible_inf = float("+inf")
result = possible_inf * 0
result = result if result == result else 0
You could also just trap for the infinity itself earlier on:
possible_inf = float("+inf")
result = 0 if abs(possible_inf) == float("+inf") else possible_inf * 0
NaN isn't zero. You can use NumPy to check for the NaN value and convert it to zero:
from numpy import isnan
result = 0 if isnan(result) else result
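If the values live in a NumPy array, np.nan_to_num offers a one-call alternative: by default it replaces NaN with 0 and ±inf with the largest finite floats, and recent NumPy versions let you choose the substitutes explicitly. A small sketch:

import numpy as np

# inf * 0 produces nan (with a RuntimeWarning)
costs = np.array([np.inf, 1.5, np.nan]) * np.array([0.0, 2.0, 1.0])
cleaned = np.nan_to_num(costs, nan=0.0, posinf=0.0, neginf=0.0)
print(cleaned)  # [0. 3. 0.]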
I'm trying to increment a single-precision floating-point number by the smallest possible amount. I see there is a nextafter function, but I can't get it to work for single-precision numbers. Any suggestions?
Seems to work fine:
>>> import numpy as np
>>> x = np.float32(1.)
>>> y = np.nextafter(x, np.float32(2.))
>>> y
1.0000001
>>> type(y)
<class 'numpy.float32'>
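Relatedly, if what's needed is the size of that smallest step (the ULP) rather than the neighbouring value itself, np.spacing gives it directly:

>>> import numpy as np
>>> x = np.float32(1.)
>>> np.spacing(x)       # distance from x to the next representable float32
1.1920929e-07
>>> x + np.spacing(x)   # same value that np.nextafter(x, np.float32(2.)) gives
1.0000001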
Given a numerical value x, you can just do float(x).is_integer() to check whether it's an integer. Is there a way to do this for complex numbers?
I'm trying to use a list comprehension to take only the roots of a polynomial over a finite field which are integers:
[r for r in solve(f,domain=FiniteField(p)) if float(r).is_integer()]
but if the solve function returns complex roots, this doesn't work.
Does anyone know either how to check whether a given (possibly complex) number is an integer, or whether there's a SymPy function to get the integer roots of a polynomial over a finite field?
Use the ground_roots function along with the modulus keyword argument. This returns the roots modulo p, with multiplicities. Here's an example:
>>> from sympy import Symbol, ground_roots
>>> x = Symbol('x')
>>> f = x**5 + 7*x + 1
>>> p = 13
>>> ground_roots(f, modulus=p)
{-5: 1, 4: 2}
That says that the roots of f modulo 13 are -5 and 4, with the root 4 having multiplicity 2.
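As a quick sanity check (plain Python, no SymPy needed), both values are indeed roots of that polynomial modulo 13:

def poly(x):
    return x**5 + 7*x + 1

print(poly(-5) % 13, poly(4) % 13)  # prints: 0 0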
By the way, it looks to me as though complex numbers are a red herring here: the roots of an integral polynomial over a finite field can't naturally be regarded as complex numbers. The call to solve in your original post is ignoring the domain argument and is simply giving algebraic numbers (which can reasonably be interpreted as complex numbers) as its results, which is probably why you ended up looking at complex numbers. But these aren't going to help when trying to find the roots modulo a prime p.
float.is_integer(z.real) tells you whether the real part is an integer.
If you want to check whether the imaginary part is zero, you can do this:
In [17]: a=2 + 2j
In [18]: bool(a.imag)
Out[18]: True
In [19]: b=2 + 0j
In [20]: bool(b.imag)
Out[20]: False
Using a list comprehension as in your original question, one could do:
roots = [complex(r) for r in solve(f, domain=FiniteField(p))]
int_roots = [z.real for z in roots if z.real.is_integer() and not z.imag]
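Wrapped up as a single helper (the function name is mine, not from SymPy), the same check reads:

def is_integer_valued(r):
    # True if r, viewed as a complex number, has zero imaginary part
    # and an integral real part.
    z = complex(r)
    return z.imag == 0 and z.real.is_integer()

print(is_integer_valued(4 + 0j))  # True
print(is_integer_valued(2 + 2j))  # False
print(is_integer_valued(0.5))     # False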
After googling for a while, I'm posting here for help.
I have two float64 variables returned from a function.
Both of them are apparently 1:
>>> x, y = somefunc()
>>> print(x, y)
>>> if x < 1: print("x < 1")
>>> if y < 1: print("y < 1")
1.0 1.0
y < 1
The behavior changes when the variables are defined as float32, in which case the 'y < 1' line doesn't appear.
I tried setting
np.set_printoptions(precision=10)
expecting to see the differences between variables but even so, both of them appear as 1.0 when printed.
I am a bit confused at this point.
Is there a way to visualize the difference of these float64 numbers?
Can "if/then" be used reliably to check float64 numbers?
Thanks
Trevarez
The printed values do not show the exact stored values. In your case y is smaller than 1 when using float64 and greater than or equal to 1 when using float32. This is expected, since rounding error depends on the precision of the float type.
To avoid this kind of problem when dealing with floating-point numbers, you should always decide on a "minimum error", usually called epsilon, and instead of comparing for equality, check whether the result is within epsilon of the target value:
In [13]: epsilon = 1e-11
In [14]: number = np.float64(1) - 1e-16
In [15]: target = 1
In [16]: abs(number - target) < epsilon # instead of number == target
Out[16]: True
In particular, numpy already provides np.allclose, which can be useful to compare arrays for equality given a certain tolerance. It works even when the arguments aren't arrays (e.g. np.allclose(1 - 1e-16, 1) -> True).
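For example (using the default tolerances):

import numpy as np

a = np.array([1.0 - 1e-16, 2.0])
b = np.array([1.0, 2.0])

print(np.allclose(a, b))          # True: every element is within tolerance
print(np.allclose(1 - 1e-16, 1))  # True: scalars work too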
Note, however, that numpy.set_printoptions doesn't affect how np.float32/64 are printed. It affects only how arrays are printed:
In [1]: import numpy as np
In [2]: float(1) - 1e-16
Out[2]: 0.9999999999999999
In [3]: np.array([1 - 1e-16])
Out[3]: array([ 1.])
In [4]: np.set_printoptions(precision=16)
In [5]: np.array([1 - 1e-16])
Out[5]: array([ 0.9999999999999999])
In [6]: float(1) - 1e-16
Out[6]: 0.9999999999999999
Also note that doing print(y) or evaluating y in the interactive interpreter gives different results:
In [1]: import numpy as np
In [2]: float(1) - 1e-16
Out[2]: 0.9999999999999999
In [3]: print(np.float64(1) - 1e-16)
1.0
The difference is that print calls str while evaluating calls repr:
In [9]: str(np.float64(1) - 1e-16)
Out[9]: '1.0'
In [10]: repr(np.float64(1) - 1e-16)
Out[10]: '0.99999999999999989'
In [25]: import numpy
In [26]: x = numpy.float64("1.000000000000001")
In [27]: print(x, repr(x))
1.0 1.0000000000000011
In other words, you are seeing loss of precision in the printed output: the stored value is very slightly different from 1.
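One way to see the stored value in full is to format it explicitly instead of relying on str; a small sketch:

import numpy as np

y = np.float64(1) - 1e-16
print(y)            # str() may round this to 1.0, depending on the NumPy version
print(f"{y:.17g}")  # force 17 significant digits: 0.99999999999999989
print(repr(y))      # repr shows enough digits to round-trip the value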
Following the advice provided here, I summarize the answers this way:
To compare floats, the programmer has to define a minimum distance (eps) for two values to be considered different (eps = 1e-12, for example). With that in place, the conditions should be written like this (a short code sketch follows the list):
Instead of (x>a), use (x-a)>eps
Instead of (x<a), use (a-x)>eps
Instead of (x==a), use abs(x-a)<eps
This doesn't apply to comparisons between integers, since the smallest difference between distinct integers is 1.
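A minimal sketch of those rules in code (the eps value is an arbitrary choice):

eps = 1e-12
x, a = 1.0 - 1e-16, 1.0

greater = (x - a) > eps       # instead of x > a
smaller = (a - x) > eps       # instead of x < a
equal = abs(x - a) < eps      # instead of x == a

print(greater, smaller, equal)  # False False True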
Hope it helps others as it helped me.