Using extremely small floats in NumPy - python

I'm using Python 3 and trying to plot the half-life time of a process. The formula for this half-life time is -ln(2)/ln(1-f). In this formula, f is an extremely small number, of the order of 10^-17 most of the time, and often even smaller.
Because I have to plot a range of values of f, I have to repeat the calculation -ln(2)/(ln(1-f)) multiple times. I do this via the expression
np.log(2)/(-1*np.log(1-f))
When I plot the half-life time for many values of f, I find that for really small values of f, Python starts rounding 1-f to the same number, even though I input different values of f.
Is there any way I could increase the float precision so that Python could distinguish between the outputs of 1-f for small changes in f?

The result you want can be achieved using numpy.log1p. It computes log(1 + x) with better numerical precision than numpy.log(1 + x), or, as the docs say:
For real-valued input, log1p is accurate also for x so small that
1 + x == 1 in floating-point accuracy.
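As a quick illustration of what that means at this question's scale of f (not part of the original answer):
import numpy as np

f = 1e-20
print(np.log(1 - f))   # 0.0     -> 1 - f has already rounded to exactly 1.0
print(np.log1p(-f))    # -1e-20  -> the information about f is kept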
With this your code becomes:
import numpy as np
min_f, max_f = -32, -15
f = np.logspace(min_f, max_f, max_f - min_f + 1)
y = np.log(2)/(-1*np.log1p(-f))
This now evaluates consistently over the whole range, which can be checked by plotting it:
import matplotlib.pyplot as plt
plt.loglog(f, y)
plt.show()
This function will only stop working if your values of f leave the range of double-precision floats, i.e. below roughly 1e-308. That should be sufficient for any physical measurement (especially considering that there is such a thing as a smallest physical time-scale, the Planck time t_P = 5.39116(13)e-44 s).


Reducing redundancy for calculating large number of integrals numerically

I need to calculate the following integral on a 2D grid (x, y positions):
∫_0^r sqrt(1 - exp(-y^2)) dy
with r = sqrt(x^2 + y^2) and the 2D grid centered at x = y = 0.
The implementation is straightforward:
import numpy as np
from scipy import integrate
def integralFunction(x):
    def squareSaturation(y):
        return np.sqrt(1-np.exp(-y**2))
    return integrate.quad(squareSaturation,0,x)[0]
#vectorize function to apply function with integrals on np-array
integralFunctionVec = np.vectorize(integralFunction)
xmax = ymax = 5
Nx = Ny = 1024
X, Y = np.linspace(-xmax, xmax, Nx), np.linspace(-ymax, ymax, Ny)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2+Y**2)
Z = integralFunctionVec(R)
However, I'm currently working on a 1024x1024 grid and the calculation takes ~1.5 minutes. Now there is some redundancy in those calculations that I want to reduce to speed up the calculation. Namely:
As the grid is centered around r = 0, many values for r on the grid are the same. Due to symmetry only ~1/8 of all values are unique (for a square grid). One idea was to calculate the integral only for the unique values (found via np.unique) and then save them in a look-up table (hashmap?). Or I could cache the function values so that only new values are calculated (via @lru_cache). But does that actually work when I vectorize the function afterwards?
As the integral goes from 0 to r, the integral is often calculating integrals over intervals it has already calculated. E.g. if you calculate from 0 to 1 and afterwards from 0 to 2, only the interval from 1 to 2 is "new". But what would be the best way to utilize that? And would that even be a real performance boost using scipy.integrate.quad?
Do you have any feedback or other ideas to optimize this calculation?
You can use Numba to speed up the computation of quad. Here is an example:
import numpy as np
import numba as nb
import scipy
from scipy import integrate

@nb.cfunc('float64(float64)')
def numbaSquareSaturation(y):
    return np.sqrt(1-np.exp(-y**2))

squareSaturation = scipy.LowLevelCallable(numbaSquareSaturation.ctypes)

def integralFunction(x):
    return integrate.quad(squareSaturation,0,x)[0]

integralFunctionVec = np.vectorize(integralFunction)
xmax = ymax = 5
Nx = Ny = 1024
X, Y = np.linspace(-xmax, xmax, Nx), np.linspace(-ymax, ymax, Ny)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2+Y**2)
Z = integralFunctionVec(R)
This is about 25 times faster on my machine. The code is still suboptimal since the squareSaturation calls introduce a big overhead, but it seems SciPy does not provide a way to vectorize quad efficiently for your case. Note that using nb.cfunc + scipy.LowLevelCallable significantly speeds up the execution, as pointed out by @max9111.
As the grid is centered around r = 0, many values for r on the grid are the same. Due to symmetry only ~1/8 of all values are unique (for a square grid). One idea was to calculate the integral only for the unique values (found via np.unique) and then save them in a look-up table (hashmap?). Or I could cache the function values so that only new values are calculated (via @lru_cache). But does that actually work when I vectorize the function afterwards?
I do not expect this approach to be significantly faster, although not recomputing the values is indeed a good idea. Note that hashmaps are pretty slow, as is np.unique. I suggest just selecting a quarter of the input array R, something like R[0:R.shape[0]//2, 0:R.shape[1]//2]. Be careful if the shape is odd; a minimal sketch of that idea follows.
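Here is a rough sketch of the quarter-and-mirror idea, reusing R and integralFunctionVec from the question. It assumes Nx and Ny are even and the grid spans [-xmax, xmax] symmetrically, so R mirrors about its centre in both directions:
import numpy as np

Ny, Nx = R.shape
Z_quarter = integralFunctionVec(R[:Ny // 2, :Nx // 2])   # only 1/4 of the quad calls
Z_top = np.concatenate([Z_quarter, Z_quarter[:, ::-1]], axis=1)
Z = np.concatenate([Z_top, Z_top[::-1, :]], axis=0)      # full grid, rebuilt by mirroring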
As the integral goes from 0 to r, the integral is often calculating integrals over intervals it has already calculated. E.g. if you calculate from 0 to 1 and afterwards from 0 to 2, only the interval from 1 to 2 is "new". But what would be the best way to utilize that? And would that even be a real performance boost using scipy.integrate.quad?
This could help since the domain of each sub-integral is smaller and the function should be smoother on it, so SciPy should be faster to compute it. Even though quad would not do that automatically, you can also reduce the precision of the computed sub-intervals using the optional parameters of quad.
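For completeness, here is a rough sketch of how that reuse-of-earlier-intervals idea could be combined with np.unique. It is an illustration only, using a plain-Python integrand (square_saturation) and the grid R from the question, and is not benchmarked:
import numpy as np
from scipy import integrate

def square_saturation(y):
    return np.sqrt(1 - np.exp(-y**2))

# integrate once over each gap between consecutive unique radii, then accumulate
r_unique, inverse = np.unique(R, return_inverse=True)
gaps = np.empty_like(r_unique)
gaps[0] = integrate.quad(square_saturation, 0.0, r_unique[0])[0]
for k in range(1, r_unique.size):
    gaps[k] = integrate.quad(square_saturation, r_unique[k - 1], r_unique[k])[0]
cumulative = np.cumsum(gaps)              # integral from 0 up to each unique r
Z = cumulative[inverse].reshape(R.shape)  # map the values back onto the grid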

How do I numerically integrate a function thats a product of a lorentzian and a cosinus in Python?

I am new to stackoverflow and also quite new to Python. So, I hope to ask my question in an appropriate manner.
I am running Python code similar to this minimal example, with an example function that is a product of a Lorentzian with a cosine, which I want to integrate numerically:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
#minimal example:
omega_loc = 15
gamma = 5
def Lorentzian(w):
    #print(w)
    return (w**3)/((w/omega_loc) + 1)**2*(gamma/2)/((w-omega_loc)**2+(gamma/2)**2)

def intRe(t):
    return quad(lambda w: w**(-2)*Lorentzian(w)*(1-np.cos(w*t)),0,np.inf,limit=10000)[0]
plt.figure(1)
plot_range = np.linspace(0,100,1000)
plt.plot(plot_range, [intRe(t) for t in plot_range])
Independent of the upper limit of the integration, I never get the code to run and give me a result.
When I enable the print(w) line, it seems like the code just keeps probing the integral at seemingly random values of w in an infinite loop (?). The console also reports a roundoff error.
Is there a different way of doing numerical integration in Python that is better suited for this kind of function than the quad function, or did I make a more fundamental error?
Observations
Close to zero, (1 - cos(w*t)) / w**2 tends to 0/0; we can use its Taylor expansion t**2*(1/2 - (w*t)**2/24) there instead.
Towards infinity the Lorentzian varies only slowly while the cosine term makes the integrand oscillate indefinitely; the integral can be approximated by multiplying that term by a slowly decaying damping factor.
You are using a linearly spaced scale with many points; it is easier to visualize with w on a log scale.
The plot looks like this before damping the cosine term
I introduced two parameters to tune the attenuation of the oscillations:
def cosinus_term(w, t, damping=1e4*omega_loc):
    return np.where(abs(w*t) < 1e-6, t**2*(0.5 - (w*t)**2/24.0), (1-np.exp(-abs(w/damping))*np.cos(w*t))/w**2)

def intRe(t, damping=1e4*omega_loc):
    return quad(lambda w: cosinus_term(w, t, damping)*Lorentzian(w),0,np.inf,limit=10000)[0]
Plotting with the following code
plt.figure(1)
plot_range = np.logspace(-3,3,100)
plt.plot(plot_range, [intRe(t, 1e2*omega_loc) for t in plot_range])
plt.plot(plot_range, [intRe(t, 1e3*omega_loc) for t in plot_range])
plt.xscale('log')
It runs in less than 3 minutes here, and the two results are close to each other, especially towards the large end of the plotted range, suggesting that the damping does not affect the result too much.

Numerical inconsistency between loop and builtin function

I'm trying to compute the sum of an array of random numbers, but there seems to be an inconsistency between the result when I add one element at a time and the result of the built-in function. Furthermore, the error seems to increase when I decrease the data precision.
import torch
columns = 43*22
rows = 44
torch.manual_seed(0)
array = torch.rand([rows,columns], dtype = torch.float64)
array_sum = 0
for i in range(rows):
    for j in range(columns):
        array_sum += array[i, j]
torch.abs(array_sum - array.sum())
results in:
tensor(3.6380e-10, dtype=torch.float64)
using dtype = torch.float32 results in:
tensor(0.1426)
using dtype = torch.float16 results in (a whooping!):
tensor(18784., dtype=torch.float16)
I find it hard to believe no one has ever asked about it. Yet, I haven't found a similar question in SO.
Can anyone please help me find some explanation or the source of this error?
The first mistake is this: you should change the summation line to
array_sum += float(array[i, j])
For float64 this causes no problems; for the other dtypes it is a problem, as the explanation below will show.
To start with: when doing floating-point arithmetic, you should always keep in mind that there are small errors due to rounding. The simplest way to see this is in a Python shell:
>>> .1+.1+.1-.3
5.551115123125783e-17
But how do you take these errors into account?
When summing n positive numbers to a total tot, the analysis is fairly simple and the rule is:
error(tot) < tot * n * machine_epsilon
where the factor n is usually a gross over-estimation and the machine_epsilon is dependent on the type (representation size) of floating-point number.
It is approximately:
float64: 2*10^-16
float32: 1*10^-7
float16: 1*10^-3
And one would generally expect as an error approximately within a reasonable factor of tot*machine_epsilon.
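These machine-epsilon values can be checked directly, for example with torch.finfo:
import torch

for dt in (torch.float64, torch.float32, torch.float16):
    print(dt, torch.finfo(dt).eps)
# torch.float64 2.220446049250313e-16
# torch.float32 1.1920928955078125e-07
# torch.float16 0.0009765625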
For my tests (always roughly 40000 values summing to a total of roughly 20000) we get:
error(float64) = 3*10^-10 ≈ 80* 20000 * 2*10^-16
error(float32) = 1*10^-1 ≈ 50* 20000 * 1*10^-7
which is acceptable.
Then there is another problem with float16, where the machine epsilon is about 1e-4; you can see the problem with:
>>> ar = torch.ones([1], dtype=torch.float16) * 2048
>>> ar
tensor([2048.], dtype=torch.float16)
>>> ar[0] += .5
>>> ar
tensor([2048.], dtype=torch.float16)
Here the problem is that once the value 2048 is reached, the representation is not precise enough to add a value of 1 or less. More specifically: with a float16 you can represent the value 2048, and you can represent the value 2050, but nothing in between, because it has too few bits for that precision. By keeping the sum in a float64 variable, you overcome this problem. Fixing this, we get for float16:
error(float16) = 16 ≈ 8* 20000 * 1*10^-4
Which is large, but acceptable as a value relative to 20000 represented in float16.
If you ask yourself which of the two methods is 'right', the answer is neither of them: they are both approximations with the same precision but a different error.
But, as you probably guessed, using the sum() method is faster, better and more reliable.
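To see the accumulator effect in isolation, here is a minimal sketch (sizes chosen arbitrarily, not from the original post) that sums the same numbers once with a float16 running total and once with a double-precision one:
import torch

torch.manual_seed(0)
x = torch.rand(40000, dtype=torch.float64)

# float16 running total: once it is large, summands below its spacing are lost
s16 = torch.tensor(0.0, dtype=torch.float16)
for v in x:
    s16 = s16 + v.to(torch.float16)

# double-precision running total (what the float(...) fix effectively does)
s64 = 0.0
for v in x:
    s64 += float(v)

print(float(s16), s64, x.sum().item())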
You can use float(array[i][j]) in place of array[i][j] in order to ensure a ~0 difference between the loop-based sum and torch.sum(). The ~0 is easy to observe when the number of elements is taken into account, as shown in the two plots below.
The heatmaps below show the error per element = (absolute difference between torch.sum() and loop-based sum), divided by the number of elements. The heatmap value when using an array of r rows and c columns is computed as:
heatmap[r, c] = torch.abs(array_sum - array.sum())/ (r*c)
We vary the size of the array in order to observe how it affects the errors per element. Now, in the case of OP's code, the heatmaps show accumulating error with increasing size of matrix. However, when we use float(array[i,j]), the error is not dependent on the size of the array.
Top Image: when using array_sum += float(array[i][j])
Bottom Image: when using array_sum += (array[i][j])
The script used to generate these plots is reproduced below if someone wants to fiddle around with these.
import torch
import numpy as np
column_list = range(1,200,10)
row_list = range(1,200,10)
heatmap = np.zeros(shape=(len(row_list), len(column_list)))
for count_r, rows in enumerate(row_list):
    for count_c, columns in enumerate(column_list):
        ### OP's snippet start
        torch.manual_seed(0)
        array = torch.rand([rows,columns], dtype = torch.float16)
        array_sum = np.float16(0)
        for i in range(rows):
            for j in range(columns):
                array_sum += (array[i, j])
        ### OP's snippet end
        heatmap[count_r, count_c] = torch.abs(array_sum - array.sum())/ (rows*columns)
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
X = row_list
Y = column_list
Z = heatmap
df = pd.DataFrame(data=Z, columns=X, index=Y)
sns.heatmap(df, square=False)
plt.xlabel('number of rows')
plt.ylabel('number of columns')
plt.tight_layout()
plt.savefig('image_1.png', dpi=300)
plt.show()
You have hit the tip of a rather big iceberg with respect to storing high-precision values in a computer.
There are two concerns here: one is that Python always stores floats in double precision, so you have casting between two different data types, which leads to some of the odd behaviour. The second is how floating-point numbers work in general (you can read more here).
In general, when you store a number in a float you are "guaranteed" some number of significant figures, say 10; any digits after those 10 places will carry some error due to the precision they were stored at (often denoted ε). This means that if you have a sum of two numbers spanning 10 orders of magnitude, then ε will be significant in your answer, or (far more likely in this case) you will drop some of the values you care about because the total is much larger than one of the numbers you are adding. Below are some examples of this in numpy:
import numpy as np
val_v_small = float(0.0000000000001)  # plain Python floats; np.float was just an alias for float
val_small = float(1.000000001)
val_big = float(10000000)
print(val_big + val_small) # here we got an extra .000000001 from the ε of val_big
>>> 10000001.000000002
print(val_big + val_v_small) # here we dropped the value we care about (val_v_small) as it was truncated off val_big
>>> 10000000.0

numpy matrix dot product - unexpected result

I have a 4x4 matrix and a 4x1 vector. If I calculate the dot product by hand (in Excel) I get different values to the NumPy result. I expect it has to do with float values, but a difference of 6.7E-3, for example, seems too large to just be float error. What am I missing?
Isolated code result (see below):
[-3.24218399e-06 1.73591630e-04 -3.49611749e+04 1.90697291e+05]
With handcalculation (excel):
[-1.04791731E-11 7.08581638E-10 -3.49611670E+04 1.90697275E+05]
The input values for the matrix are pulled from code, where i do the same evaluation. There, result is:
[-2.09037901e-04 6.77221033e-03 -3.49612277e+04 1.90697438e+05]
isolated input values:
import numpy as np
arrX1 = np.array([
[-2.18181817e+01, 1.78512395e+03,-5.84222383e+04, 7.43555757e+05],
[ 8.92561977e+02,-6.81592780e+04, 2.07133390e+06,-2.43345520e+07],
[-9.73703971e+03, 6.90444632e+05,-1.96993992e+07, 2.21223199e+08],
[ 3.09814899e+04,-2.02787933e+06, 5.53057997e+07,-6.03335995e+08]],
dtype=np.float64)
arrX2 = np.array([0,-1.97479339E+00,-1.20681818E-01,-4.74107143E-03],dtype=np.float64)
print (np.dot(arrX1, arrX2))
#-> [-3.24218399e-06 1.73591630e-04 -3.49611749e+04 1.90697291e+05]
At a guess this is because you're pulling your values out of Excel with too little precision. The values in your question only have 9 significant figures, while the 64-bit floats used in Excel, and that you're requesting in NumPy, are good to about 15 digits.
Redoing the calculation with Python's arbitrary-precision Decimals gives me something very close to NumPy's answer:
from decimal import Decimal as D, getcontext
x = [1.78512395e+03,-5.84222383e+04, 7.43555757e+05]
y = [-1.97479339E+00,-1.20681818E-01,-4.74107143E-03]
# too much precision please
getcontext().prec = 50
sum(D(x) * D(y) for x, y in zip(x, y))
gets within ~4e-13 of the value from NumPy, which seems reasonable given the scale of the values involved.
np.allclose can be good to check whether things are relatively close, but it has relatively loose default bounds. If I redo the spreadsheet calculation with the numbers you gave, then allclose says everything is consistent.
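To see how sensitive this particular product is to digits beyond the ninth significant figure, you can perturb the matrix at roughly that level and watch the small components change by amounts comparable to the discrepancies above. This is an illustration only (the noise model is arbitrary), reusing arrX1 and arrX2 from the question:
import numpy as np

rng = np.random.default_rng(0)
# relative perturbation of roughly one part in 1e9, i.e. beyond 9 significant figures
noise = arrX1 * rng.uniform(-5e-10, 5e-10, arrX1.shape)
print(np.dot(arrX1, arrX2))
print(np.dot(arrX1 + noise, arrX2))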

Model I-V in Python

Method: perform an integral, as a function of E, which outputs a current for each voltage value used. This is repeated for an array of v_values. The equation, as implemented in the code below, is
I(V) = (1/(e*r)) * ∫ |E|/sqrt(E^2 - Δ^2) * |E+eV|/sqrt((E+eV)^2 - Δ^2) * [f(E+eV) - f(E)] dE
where f(E) = 1/(1 + exp(E/kT)) is the Fermi function.
Although the limits in this equation range from -inf to inf, they must be restricted so that (E+eV)^2 - Δ^2 > 0 and E^2 - Δ^2 > 0, to avoid the poles (here Δ_1 = Δ_2 = Δ). Therefore there are currently two integrals, with limits from -inf to -gap - e*v and from gap to inf.
However, I keep getting a math range error, although I believe I have excluded the troublesome E values by using the limits stated above. Pastie of errors: http://pastie.org/private/o3ugxtxai8zbktyxtxuvg
Apologies for the vagueness of this question. But, can anybody see obvious mistakes or code misuse?
My attempt:
from scipy import integrate
from numpy import *
import scipy as sp
import pylab as pl
import numpy as np
import math
e = 1.60217646*10**(-19)
r = 3000
gap = 400*10**(-6)*e
g = (gap)**2
t = 0.02
k = 1.3806503*10**(-23)
kt = k*t
v_values = np.arange(0,0.001,0.0001)
I=[]
for v in v_values:
    val, err = integrate.quad(lambda E:(1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))),-inf,(-gap-e*v)*0.9)
    I.append(val)
I = array(I)
I2=[]
for v in v_values:
    val2, err = integrate.quad(lambda E:(1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))),gap*0.9,inf)
    I2.append(val2)
I2 = array(I2)
I[np.isnan(I)] = 0
I2[np.isnan(I2)] = 0
pl.plot(v_values,I,'-b',v_values,I2,'-b')
pl.show()
This question is better suited for the Computational Science site. Still here are some points for you to think about.
First, the range of integration is the intersection of (-oo, -eV-gap) U (-eV+gap, +oo) and (-oo, -gap) U (gap, +oo). There are two possible cases:
if eV < 2*gap then the allowed energy values are in (-oo, -eV-gap) U (gap, +oo);
if eV > 2*gap then the allowed energy values are in (-oo, -eV-gap) U (-eV+gap, -gap) U (gap, +oo).
Second, you are working in a very low temperature region. With t equal to 0.02 K, the denominator in the Boltzmann factor is 1.7 µeV, while the energy gap is 400 µeV. In this case the value of the exponent is huge for positive energies and it soon goes off the limits of the double precision floating point numbers, used by Python. As this is the minimum possible positive energy, things would not get any better at higher energies. With negative energies the value would always be very close to zero. Note that at this temperature, the Fermi-Dirac distribution has a very sharp edge and resembles a reflected theta function. At E = gap you would have exp(E/kT) of approximately 6.24E+100. You would run out of range when E/kT > 709.78 or E > 3.06*gap.
Yet it makes no sense to go to such energies since at that temperature the difference between the two Fermi functions very quickly becomes zero outside the [-eV, 0] interval which falls entirely inside the gap for the given temperature when V < (2*gap)/e (0.8 mV). That's why one would expect that the current would be very close to zero when the bias voltage is less than 0.8 mV. When it is more than 0.8 mV, then the main value of the integral would come from the integrand in (-eV+gap, -gap), although some non-zero value would come from the region near the singularity at E = gap and some from the region near the singularity at E = -eV-gap. You should not avoid the singularities in the DoS, otherwise you would not get the expected discontinuities (vertical lines) in the I(V) curve (like the one shown on Wikipedia).
Rather, you have to derive equivalent approximate expressions in the vicinity of each singularity and integrate them instead.
As you can see, there are many special cases for the value of the integrand and you have to take them all into account when computing numerically. If you don't want to do that, you should probably turn to some other mathematical package like Maple or Mathematica. These have much more sophisticated numerical integration routines and might be able to directly handle your formula.
Note that this is not an attempt to answer your question but rather a very long comment that would not fit in any comment field.
The reason for the math range error is that your exponential goes to infinity. Taking v = 0.0009 and E = 5.18e-23, the expression exp((E + e*v) / kt) (I corrected the typo pointed out by Hristo Liev in your Python expression) is exp(709.984..) which is beyond the range you can represent with double precision numbers (up to ca. 1E308).
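As a side note (not part of the original answer): the overflow itself can be avoided by evaluating the Fermi factors with scipy.special.expit, which saturates to 0 or 1 instead of raising a range error:
from scipy.special import expit  # expit(x) = 1/(1 + exp(-x)), computed without overflow

def fermi(E, kt):
    # equivalent to 1/(1 + math.exp(E/kt)), but safe for very large E/kt
    return expit(-E / kt)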
Two additional notes:
As noted by others, you should probably rescale your equation by using a unit system which delivers numbers in a smaller range. Atomic units may be a possible choice, as they set e = 1, but I did not try to convert your equation into them. (Your timestep would then probably become quite large, as in atomic units the time unit is about 1/40 fs.)
Usually, one uses the exponential notation for float point numbers: e = 1.60217E-19 instead of e = 1.60217*10**(-19).
The best way to approach this problem in the end was to use a Heaviside function to prevent the E variable from exceeding the Δ variable.
