Handling operations with infinities in Python

I have a piece of code that does a simple calculation.
import numpy as np

# Constants
R = 8.314462                 # gas constant, J/(mol K)
T = 298.15                   # temperature, K
e = -678.692                 # kJ/mol
e_overkbT = e*1000/(R*T)

# Independent variable
mu = np.linspace(-2000, 2000, 1000)   # kJ/mol
mu_overkbT = mu*1000/(R*T)

# Calculation
aa = np.exp(mu_overkbT - e_overkbT)
theta = aa/(1 + aa)
For negative values of 'mu', 'aa' is very small and thus the variable 'theta' is very close to 0. For positive values of 'mu', 'aa' is very large, so 'theta' approaches 1 (a large number divided by that same large number plus 1).
For large values of 'aa', Python rounds 'theta' to 1, which is fine. However, for large enough arguments np.exp eventually returns 'inf', so the final step of calculating 'theta' becomes 'inf'/'inf' and I get a runtime warning and 'nan'.
I need some way to handle this so that it gives me '1' as the result for 'theta'. I can't reduce the range of the variable 'mu' and stop before the error, because this calculation is inside a larger function that changes the value of 'e', so the overflow does not always occur at the same spot.
Thanks.

Such overflow happens very often when using the exponential function on large terms. Besides the very good comment noting that exp(x)/(1+exp(x)) = 1/(1+exp(-x)), another general approach, in case you don't find an easy transformation, is to use the logarithm to make intermediate numbers more manageable, and then reverse the operation at the end. This is especially useful with products of many large (or very small) terms, which become a simple sum after applying the logarithm.
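As a hypothetical illustration of the log-space trick (the 500 factors of 1e10 are made up): the naive product overflows, while the logarithm of the same quantity is a perfectly manageable sum.
import numpy as np

factors = np.full(500, 1e10)
naive = np.prod(factors)               # inf: 1e10**500 overflows the float range
log_product = np.sum(np.log(factors))  # 500 * log(1e10) ~ 11512.9, no overflow
# Keep working with the log, or exponentiate only when the result fits in a float.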

If you don't mind a dependency on SciPy, you can replace
aa = np.exp(mu_overkbT - e_overkbT)
theta = aa/(1+aa)
with
from scipy.special import expit
theta = expit(mu_overkbT - e_overkbT)
expit is an implementation of the logistic sigmoid function. It handles very large positive and negative numbers correctly. Note that 1/(1 + np.exp(-x)) will generate a warning for large negative values (but it still correctly returns 0):
In [148]: x = -1500
In [149]: 1/(1 + np.exp(-x))
<ipython-input-149-0afe09c93af3>:1: RuntimeWarning: overflow encountered in exp
1/(1 + np.exp(-x))
Out[149]: 0.0
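By contrast, continuing the session above, expit is quiet and correct at both extremes:
In [150]: from scipy.special import expit
In [151]: expit(np.array([-1500.0, 0.0, 1500.0]))
Out[151]: array([0. , 0.5, 1. ])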

Related

When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?

The fundamental theorem of algebra entails the existence of n complex roots for the equation z^n = a, where a is a real number, n is a positive integer, and z is a complex number. Some of these roots may also be real (i.e. a+bi with b = 0).
One example with multiple real roots is z^2 = 1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate. The solution z = -1 is obtained from z = sqrt(1) = sqrt(-1 * -1) = I * I = -1, where I is the imaginary unit.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^(1/3), which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know which root is systematically calculated by Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly used algorithm that happens to return a specific root. I have imagined an equivalence class such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since the general power is typically defined via the identity xᵃ = exp(a⋅log(x)), you get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
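A quick check of this (note that for real negative input, NumPy refuses the fractional power and returns nan rather than picking a complex branch):
import numpy as np

print(np.power(-1 + 0j, 1/3))        # ~ (0.5+0.8660j): exp(i*pi/3), the principal branch
print(np.exp(np.log(-1 + 0j) / 3))   # the same value, computed explicitly
print(np.power(-1.0, 1/3))           # nan, with an invalid-value RuntimeWarning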

Python Numerical Differentiation and the minimum value for h

I calculate the first derivative using the following code:
import numpy as np

def f(x):
    return np.exp(x)

def dfdx(x, h):
    # central difference approximation of f'(x)
    Df = (f(x + h) - f(x - h)) / (2 * h)
    return Df
For example, for x == 10 this works fine. But when I set h to around 1e-14 or below, Df starts to take values that are really far away from the expected value f(10), and the relative error between the expected value and Df becomes huge.
Why is that? What is happening here?
The evaluation of f(x) has, at best, a rounding error of |f(x)|*mu, where mu is the machine epsilon of the floating point type. The total error of the central difference formula is thus approximately
2*|f(x)|*mu/(2*h) + |f'''(x)|/6 * h^2
In the present case, the exponential function is equal to all of its derivatives, so that the error is proportional to
mu/h + h^2/6
which has a minimum at h = (3*mu)^(1/3), which for the double format with mu=1e-16 is around h=1e-5.
The precision is increased if, instead of 2*h, the actual difference (x+h)-(x-h) between the evaluation points is used in the denominator. This can be seen in a loglog plot of the distance to the exact derivative (plot not reproduced here).
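A minimal numerical check of this optimum (a sketch for f = exp at x = 10, whose exact derivative is exp(10)):
import numpy as np

x = 10.0
exact = np.exp(x)                     # f'(x) = exp(x) for f = exp
for h in np.logspace(-15, -1, 8):
    approx = (np.exp(x + h) - np.exp(x - h)) / (2 * h)
    print(f"h={h:.0e}  rel_err={abs(approx - exact) / exact:.2e}")
# The relative error is smallest near h ~ 1e-5 and grows on both sides.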
You are probably encountering numerical instability: for x = 10 and h ≈ 1e-13, the argument of np.exp is very close to 10 whether h is added or subtracted, so the small rounding errors in the two values of np.exp are scaled up significantly by the division by the very small 2*h.
In addition to the answer by @LutzL, I will add some information from the great book Numerical Recipes, 3rd Edition: The Art of Scientific Computing, chapter 5.7 on Numerical Derivatives, especially about the choice of the optimal h value for a given x (a sketch of two of these tips follows the list):
Always choose h so that x+h and x differ by an exactly representable number. Funny stuff like 1/3 should be avoided, except when x is equal to something along the lines of 14.3333333.
Round-off error is approximately epsilon * |f(x)| / h, where epsilon is the floating point accuracy; Python represents floating point numbers in double precision, so epsilon is about 1e-16. It may be larger for more complicated functions (where precision errors accumulate), though that is not your case.
Choice of optimal h: without getting into details, it would be sqrt(epsilon) * x for the simple forward case, except when your x is near zero (you will find more information in the book), which is your case; you may want to use higher x values then, and a complementary answer already covers this. In the case of f(x+h) - f(x-h), as in your example, it amounts to epsilon**(1/3) * x, so approximately 5e-6 times x, which may be a little difficult to choose for small values like yours. Quite close (if one can say so, bearing in mind floating point arithmetic...) to the practical results posted by @LutzL, though.
You may also use derivative formulas other than the symmetric one you are using: forward or backward evaluation, if the function is costly to evaluate and you have already calculated f(x). If your function is cheap to evaluate, you can evaluate it at more points and use higher-order methods to reduce the truncation error (see the five-point stencil on Wikipedia, as suggested in a comment to your question).
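Here is a small sketch of two of those tips, the exactly representable step size and a five-point stencil (the function and the point x = 10 are just the question's example):
import numpy as np

def representable_h(x, h):
    # Numerical Recipes trick: round h so that x+h and x differ
    # by an exactly representable amount.
    temp = x + h
    return temp - x

def dfdx_5pt(f, x, h):
    # five-point stencil: truncation error O(h**4) instead of O(h**2)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = 10.0
h = representable_h(x, 1e-3)
print(dfdx_5pt(np.exp, x, h), np.exp(x))   # both ~ 22026.4658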
This Python tutorial explains the reason behind the limited precision. In summary, decimals are ultimately represented in binary, and the precision is about 17 significant digits. So you are right that it gets fuzzy beyond 1e-14.

Logarithm over x

Since the following expansion for the logarithm holds:
log(1-x)=-x-x^2/2-x^3/3-...
one can calculate the following functions, which have removable singularities at x = 0:
log(1-x)/x=-1-x/2-...
(log(1-x)/x+1)/x=-1/2-x/3-...
((log(1-x)/x+1)/x+1/2)/x=-1/3-x/4-...
I am trying to use NumPy for these calculations, specifically the log1p function, which is accurate near x = 0. However, evaluating the expressions above directly still loses accuracy near x = 0 because of cancellation.
Do you have any ideas for any existing functions implementing these formulas or should I write one myself using the previous expansions, which will not be as efficient, however?
The simplest thing to do is something like
import numpy as np

def logf(x, eps=1e-6):
    # series expansion below the threshold, exact formula above it
    if abs(x) < eps:
        return -0.5 - x/3.
    else:
        return (1. + np.log1p(-x)/x)/x
and play a bit with the threshold eps.
If you want a numpy-like, vectorized solution, replace the if with np.where:
>>> np.where(np.abs(x) > eps, (1. + np.log1p(-x)/x)/x, -0.5 - x/3.)
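One caveat: np.where evaluates both branches, so at x = 0 the exact branch hits 0/0 and emits a warning even though the series value is returned. A small sketch wrapping this up with np.errstate (logf_vec is a made-up name):
import numpy as np

def logf_vec(x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        exact = (1. + np.log1p(-x)/x)/x   # accurate away from 0
    series = -0.5 - x/3.                  # two-term expansion near 0
    return np.where(np.abs(x) > eps, exact, series)

print(logf_vec([0.0, 1e-9, 0.5]))   # [-0.5, ~-0.5, ~-0.7726]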
Why not successively square the candidate, after initially extracting the exponent component? Whenever the square exceeds 2, divide it by two and set the bit in the mantissa of your result that corresponds to that iteration. This is a much quicker and simpler way of determining log base 2, which can then be transformed to base e or base 10 with a single multiplication.
Some predefined functions don't work at singularity points. One simple-minded solution is to compute the series directly by accumulating its terms.
For your example, the partial sum for log(1-x) would be:
s = 0.0
for k in range(1, n + 1):
    s += x**k / k
s = -s   # log(1-x) = -(x + x**2/2 + ... + x**n/n)
You keep adding terms until the last term falls below a small threshold.
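A threshold-based variant of the same idea (a sketch; the series converges only for |x| < 1, and log1m_series is a made-up name):
def log1m_series(x, tol=1e-16):
    # accumulate -x**k / k until the next term is negligible
    s = 0.0
    k = 1
    term = x          # x**k for k = 1
    while abs(term) > tol:
        s -= term / k
        k += 1
        term *= x     # now x**k
    return s

print(log1m_series(0.5))   # ~ -0.693147..., i.e. log(1 - 0.5)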

Avoiding overflow error for exp in numpy

I am implementing the following function in numpy:
import numpy as np

def weak_softmax(a):
    b = np.exp(a)
    return b/(1 + np.sum(b))
The size of array a is small, but its entries can sometimes be big, maybe as large as 1000. So I very often get the following warning because of overflow in the exponential function:
a=np.array([1000,1000])
a=weak_softmax(a)
The above code returns the vector a=[nan nan] and raises the following warning:
Warning: overflow encountered in exp
Is there any clever way to avoid this issue while still returning the array b as intended? All the entries of b are less than one, so I feel it must be possible to avoid the overflow with some trick.
You can simply divide the numerator and denominator by the same factor exp(c) for suitably sized c.
The following code uses np.finfo to check whether overflow may happen and to calculate c.
import numpy as np

def modified_soft_max(a, SAFETY=2.0):
    # a must be a float array so that np.finfo applies
    mrn = np.finfo(a.dtype).max   # largest representable number
    thr = np.log(mrn / a.size) - SAFETY
    amx = a.max()
    if amx > thr:
        # divide numerator and denominator by exp(amx - thr)
        b = np.exp(a - (amx - thr))
        return b / (np.exp(thr - amx) + b.sum())
    else:
        b = np.exp(a)
        return b / (1.0 + b.sum())
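For the example from the question (cast to float, since np.finfo needs a floating dtype):
a = np.array([1000.0, 1000.0])
print(modified_soft_max(a))   # [0.5 0.5], with no overflow warning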

Integer optimization/maximization in numpy

I need to estimate the size of a population, by finding the value of n which maximises scipy.misc.comb(n, a)/n**b where a and b are constants. n, a and b are all integers.
Obviously, I could just loop over range(SOME_HUGE_NUMBER), calculate the value for each n, and break out of the loop once the values start to decrease. But I wondered whether there is an elegant way of doing this with (say) numpy/scipy, or some other elegant way in pure Python (e.g. an integer equivalent of Newton's method)?
As long as your number n is reasonably small (smaller than approx. 1500), my guess is that the fastest way is simply to try all possible values. You can do this quickly using numpy:
import numpy as np
from scipy.special import comb   # was scipy.misc.comb in older SciPy versions

nMax = 1000
a = 77
b = 100
n = np.arange(1, nMax + 1, dtype=np.float64)
val = comb(n, a) / n**b
print("Maximized for n={:d}".format(int(n[val.argmax()] + 0.5)))
# Maximized for n=181
This is not especially elegant, but it is fast for that range of n. The problem is that for n > 1484 the numerator already gets too large to be stored in a float, and this method then fails with overflows. Nor is this only a problem of numpy.ndarray not working with Python integers. Even with exact integers you could not compute:
comb(10000, 1000, exact=True)/10000**1001
as you want a float result from a division of two numbers larger than the maximum a Python float can hold (about 1e308; see sys.float_info.max_10_exp). You couldn't use your range approach in that case either. If you really want to do something like that, you will have to take more care numerically, as sketched below.
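One standard way to take that care is to work with logarithms, e.g. via scipy.special.gammaln (a sketch; the argmax is unchanged because log is monotone):
import numpy as np
from scipy.special import gammaln

a, b = 77, 100
n = np.arange(a, 20001, dtype=np.float64)
# log(comb(n, a) / n**b) = lgamma(n+1) - lgamma(a+1) - lgamma(n-a+1) - b*log(n)
logval = gammaln(n + 1) - gammaln(a + 1) - gammaln(n - a + 1) - b * np.log(n)
print(int(n[logval.argmax()]))   # 181, with no overflow even for huge n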
You essentially have a nicely smooth function of n that you want to maximise. n is required to be integral, but we can consider the function as a function of the reals instead; the maximising integral value of n must then be adjacent to the maximising real value.
We could convert comb to a real function by using the gamma function and use numerical optimisation techniques to find the maximum. Another approach is to replace the factorials with Stirling's approximation. This gives a moderately complicated but tractable algebraic expression. This expression is not hard to differentiate and set to zero to find the extrema.
I did this and obtained
n * (b + (n-a) * log((n-a)/n) ) = a * b - a/2
This is not straightforward to solve algebraically but easy enough numerically (e.g. using Newton's method, as you suggest).
I may have made a mistake in the algebra, but I typed the a = 77, b = 100 example into Wolfram Alpha and got 180.58 so the approach seems to work.
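For completeness, here is a numerical check of that equation with SciPy's brentq (a bracketing root finder swapped in for Newton's method; the bracket [150, 250] is taken from the brute-force result above):
import numpy as np
from scipy.optimize import brentq

a, b = 77, 100

def g(n):
    # left side minus right side of n*(b + (n-a)*log((n-a)/n)) = a*b - a/2
    return n * (b + (n - a) * np.log((n - a) / n)) - (a * b - a / 2)

print(brentq(g, 150, 250))   # ~ 180.6, so the maximising integer is 180 or 181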
