Overflow error encountered in double scalars - python

Hi, I am trying to do linear regression, and this is what happens when I try to run the code.
The code is:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('train.csv')
df = data.dropna()

X = np.array(df.x)
Y = np.array(df.y)

def compute_error(x, y, m, b):
    error = 0
    for i in range(len(x)):
        error += (y[i] - (m*x[i] + b))**2
    return error / float(len(x))

def step_graident_descent(x, y, m, b, alpha):
    N = float(len(x))
    b_graident = 0
    m_graident = 0
    for i in range(0, len(x)):
        x = X[i]
        y = Y[i]
        b_graident += (-2/N) * (y - (m*x + b))
        m_graident += (-2/N) * x*(y - (m*x + b))
    new_m = m - alpha*m_graident
    new_b = b - alpha*b_graident
    return new_m, new_b

def graident_decsent(x, y, m, b, num_itters, alpha):
    for i in range(num_itters):
        m, b = step_graident_descent(x, y, m, b, alpha)
    return m, b

def run():
    b = 0
    m = 0
    numberOfIttertions = 1000
    m, b = graident_decsent(X, Y, m, b, numberOfIttertions, 0.001)
    print(m, b)

if __name__ == '__main__':
    run()
and the error that I get is:
linearRegression.py:22: RuntimeWarning: overflow encountered in double_scalars
m_graident += (-2/N) * x*(y-(m*x+b))
linearRegression.py:21: RuntimeWarning: invalid value encountered in double_scalars
b_graident +=(-2/N) * (y-(m*x+b))
linearRegression.py:22: RuntimeWarning: invalid value encountered in double_scalars
m_graident += (-2/N) * x*(y-(m*x+b))
If anyone can help me I would be so grateful, since I have been stuck on this for about two months. Thank you!

Edit: tl;dr Solution
OK, so here is the minimal reproducible example that I was talking about. I replaced your X, Y with the following:
n = 10**2
X = np.linspace(0,10**6,n)
Y = 1.5*X+0.2*10**6*np.random.normal(size=n)
If I then run
b = 0
m = 0
numberOfIttertions = 1000
m, b = graident_decsent(X, Y, m, b, numberOfIttertions, 0.001)
I get exactly the problem you describe. The only surprising thing is how easy the solution is: I just replaced your alpha with 10**-14 and everything works fine.
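For completeness, here is the whole experiment as a runnable sketch. It assumes the functions from the question are defined in the same script (including their reliance on the module-level X and Y):

import numpy as np

np.random.seed(0)  # optional: make the noise sample repeatable
n = 10**2
X = np.linspace(0, 10**6, n)
Y = 1.5*X + 0.2*10**6*np.random.normal(size=n)

m, b = graident_decsent(X, Y, 0, 0, 1000, 10**-14)
print(m, b)  # finite values, with m close to the true slope 1.5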
Why and how to give a Minimal, Reproducible Example
Your example is not reproducible, since we don't have train.csv. Generally, both for understanding your problem yourself and for getting concrete answers, it is very helpful to have a very small example that people can run and tinker with. E.g., maybe you can think of a much shorter input to your regression that also triggers this error.
The first RuntimeWarning
But now to your question. Your first RuntimeWarning, i.e.
linearRegression.py:22: RuntimeWarning: overflow encountered in double_scalars
m_graident += (-2/N) * x*(y-(m*x+b))
means that x, and hence m_graident, are of type numpy.double (= numpy.float64). This datatype can store numbers in the range (-1.79769313486e+308, 1.79769313486e+308). Going beyond either end of that range is called an overflow. E.g.
np.double(1.79769313486e+308)
is still OK, but if you multiply it by, say, 1.1, you get your favorite runtime warning. Notice that this is 'just' a warning and the code still runs. But it can't give you a number back, since the result would be too big; instead it gives you inf.
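You can reproduce this in a couple of lines (the exact warning text varies a little between numpy versions):

import numpy as np

big = np.double(1.79769313486e+308)
print(big)        # still a finite float64
print(big * 1.1)  # RuntimeWarning: overflow ..., and the result is inf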
The other RuntimeWarnings
Ok but what does
linearRegression.py:21: RuntimeWarning: invalid value encountered in double_scalars
b_graident +=(-2/N) * (y-(m*x+b))
mean?
It comes from calculating with the infinity that I just mentioned. Some calculations with infinity are valid.
np.inf - 10**6    -> inf
np.inf + 10**6    -> inf
np.inf / 10**6    -> inf
np.inf * 10**6    -> inf
np.inf * (-10**6) -> -inf
1 / np.inf        -> 0
np.inf * np.inf   -> inf
but some are not and give nan i.e. not a number.
np.inf/np.inf
np.inf-np.inf
These are called indeterminate forms in math, since what you get out depends on how you arrived at the infinity. E.g.
(np.double(1e+309)+np.double(1e+309))-np.double(1e+309)
np.double(1e+309)-(np.double(1e+309)+np.double(1e+309))
are both of the form inf-inf, but you would expect different results from them.
Getting a nan is unfortunate, since calculations with nan always yield nan. And you can't use your gradients anymore once a nan gets in.
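A quick demonstration of how a single nan poisons everything downstream:

import numpy as np

bad = np.float64(np.inf) - np.float64(np.inf)  # invalid value -> nan
print(bad)               # nan
print(bad + 1, bad * 0)  # nan nan: any operation involving nan is nan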
Other resources
Another option is to use an existing implementation of linear regression, e.g. from scikit-learn. See:
scikit-learn linear regression reference
scikit-learn user guide on linear models
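As a sketch of what that would look like here (assuming the 1-D arrays X and Y from the question):

from sklearn.linear_model import LinearRegression

reg = LinearRegression().fit(X.reshape(-1, 1), Y)  # scikit-learn expects 2-D X
print(reg.coef_[0], reg.intercept_)                # fitted slope and intercept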

Related

Numpy Array operations returning NaN values; despite no NaN values in input

I am running a linear regression (using gradient descent analysis / GDA) on data imported from a .csv file (data_axis and data are exported dates and stock market prices, respectively). The code below returns [nan nan nan nan nan nan] as the theta value. The square error also returns nan.
Error messages: 'overflow encountered in multiply', 'invalid value encountered in add'
import numpy as np
import matplotlib.pyplot as plt

Xdata = np.array(data_axis)
Xdata = Xdata.reshape(-1, 2)
print(Xdata.shape)
Ydata = np.array(data[0:783])
Ydata = Ydata[::-1]
print(Ydata.shape)

def phi(x):
    return np.array([1, x[0], x[1], x[0]*x[0], x[1]*x[0], x[1]*x[1]])

def gda(X, Y):
    n = len(X)
    theta = np.zeros(len(X[0]))
    alpha = 0.5
    iterations = 1000
    for j in range(iterations):
        for i in range(len(X)):
            theta += alpha*(Y[i] - np.dot(theta, X[i]))*X[i]/n
    return theta

def linear_regression_with_features(X, Y, phi):
    phi_X = np.array([phi(x) for x in X])
    return gda(phi_X, Y)

theta = linear_regression_with_features(Xdata, Ydata, phi)

def h(theta, x):
    return np.dot(theta, phi(x))

error = np.array([Ydata[i] - h(theta, Xdata[i]) for i in range(len(Xdata))])
s_error = np.dot(error, error)
print('theta= ', theta)
print('square error= ', s_error)
plt.plot(Xdata, Ydata, 'co')
plt.plot(h(theta, Xdata), 'r-')
The code does successfully fit and plot a linear regression for randomly generated inputs Xdata = np.random.rand(783,2), Ydata = np.array([ 2-4*x[0]+3*x[1]+x[0]*x[0]+2*x[0]*x[1]-3*x[1]*x[1] for x in X]).
I checked that there are no NaN values in the .csv file. I searched for the error message, and read that some Python & NumPy operations involving extremely small or extremely large numbers may output a NaN value as the result. Could this be my issue? Or is there something else which may be fixed?
I found the problem: it was a combination of very small values, very large values, and input matrices of the wrong shape. Here, the X-axis input and Y-axis input had to have shapes (n,2) and (n,), respectively.
Rounding did not help, but rearranging the data from scratch did.
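For anyone hitting the same thing, a sketch of the shape sanity checks that would have caught this early (data_axis and data are the CSV columns from the question):

import numpy as np

Xdata = np.asarray(data_axis, dtype=float).reshape(-1, 2)  # must be (n, 2)
Ydata = np.asarray(data[0:783], dtype=float).ravel()       # must be (n,)
assert Xdata.shape[0] == Ydata.shape[0], "X and Y row counts must match"
assert np.isfinite(Xdata).all() and np.isfinite(Ydata).all()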

Multiple overflow warnings when using scipy.integrate.quad, minimize and derivative

I'm trying to solve a physics problem with Python. The goal is to find the a_n values that minimize the function epsilon (given as an image in the original post, not reproduced here):
And the code is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.misc import derivative

def TryN(x, an, N):
    out = 0
    for n in range(N+1):
        out += an[n]*(1-x)**(N-n+1)*(x+1)**(n+1)
    return out

def Bracket(f, g):
    dot = lambda x: f(x)*g(x)
    return quad(dot, -1, 1)[0]

def Energy(an, N):
    phip = lambda x: TryN(x, an, N)
    norm = np.sqrt(Bracket(phip, phip))
    phip2 = lambda x: derivative(phip, x, n = 2)
    energy = Bracket(phip, phip2)
    return energy/norm

N = 1
an = [1]*(N+1)
af = minimize(lambda a: Energy(a, N), an, method = 'Powell', tol = 1e-8).x
print(Energy(af, N))
But the result is:
Warning (from warnings module):
File "C:\Users\Abel\AppData\Local\Programs\Python\Python38\lib\site-packages\scipy\optimize\optimize.py", line 2366
if (w - xc) * (xb - w) > 0.0:
RuntimeWarning: overflow encountered in double_scalars
Warning (from warnings module):
File "C:\Users\Abel\AppData\Local\Programs\Python\Python38\lib\site-packages\scipy\optimize\optimize.py", line 2382
elif (w - wlim)*(wlim - xc) >= 0.0:
RuntimeWarning: overflow encountered in double_scalars
Warning (from warnings module):
File "C:\Users\Abel\AppData\Local\Programs\Python\Python38\lib\site-packages\scipy\optimize\optimize.py", line 2361
w = xb - ((xb - xc) * tmp2 - (xb - xa) * tmp1) / denom
RuntimeWarning: overflow encountered in double_scalars
Warning (from warnings module):
File "E:/Sync1/Estudios/Universidad/Grado en Física/3º Grado en Física/Mecanica_Cuantica_II/Practicas/Practica2/P22.py", line 17
dot = lambda x: f(x)*g(x)
RuntimeWarning: overflow encountered in double_scalars
Warning (from warnings module):
File "E:/Sync1/Estudios/Universidad/Grado en Física/3º Grado en Física/Mecanica_Cuantica_II/Practicas/Practica2/P22.py", line 18
return quad(dot, -1, 1)[0]
IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
nan
I think the problem is related to derivative, but I can't find the error. The result of print(Energy(af, N)) should be a positive number, and the function phi_p is similar to a sine function; the quality of the approximation depends on N.
The code looks fine. I think you should try different parameters and methods, depending on your problem.
For example, with methods other than Powell you get no warnings (e.g. method="L-BFGS-B"), but the minimized energy becomes negative.
Therefore, since the energy must be a positive number, you can try a method other than Powell that allows constraints.
In your case, lambda a: Energy(a, N) must be greater than or equal to zero.
Try to change:
af = minimize(lambda a: Energy(a, N), an, method = 'Powell', tol = 1e-8).x
into:
af = minimize(lambda a: Energy(a, N), an, method = "COBYLA", tol = 1e-8,
              constraints = {"type": "ineq", "fun": lambda a: Energy(a, N)}).x
Using your code with this change I get
af = [2.54114168e-11, -3.57836680e-10]
Energy(af, N) = -1.4143692851366237e-09
Using a higher N (a better approximation for phi), e.g. N = 4:
af = [0.07193712, 1.24003014, 0.38689775, 1.14349888, 0.26623006]
Energy(af, N) = -4.9576111873203776e-08
Whether this makes sense depends on your problem. I suggest you read through the documentation for minimize and derivative and try different combinations and parameters.
For example, you can try giving the derivative a smaller spacing to achieve higher precision.
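For example (a sketch, not a guaranteed fix: scipy.misc.derivative defaults to a spacing of dx=1.0, which is very coarse on the interval [-1, 1]), the second-derivative lambda inside Energy could become:

phip2 = lambda x: derivative(phip, x, dx=1e-4, n=2, order=5)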

What is causing this warning in symfit?

Consider this simple optimization code:
from symfit import *

x1 = 1
x2 = 4
p1, p2 = parameters('p1, p2')
model = p1*p2
constraints = [
    Eq(x1*p1 + (x2 - x1)*p2, 1),
    Ge(p1, p2),
    Ge(p2, 0)
]
fit = Fit(- model, constraints=constraints)
fit_result = fit.execute()
print(fit_result)
This gives me the following warning:
/home/user/.local/lib/python3.5/site-packages/symfit/core/fit.py:1783: RuntimeWarning: invalid value encountered in double_scalars
return 1 - SS_res/SS_tot
What is causing this?
This happens because symfit currently always tries to calculate the coefficient of determination (R^2) for the fit. In this case, since there is no data, both SS_res and SS_tot are zero, resulting in a 0/0, and hence the warning and the nan in fit_result.
We are now adding extra intelligence to FitResults such that it won't be calculated in the first place, but it is safe to ignore this warning.
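You can reproduce the warning in isolation, or silence it locally with np.errstate, for example:

import numpy as np

SS_res, SS_tot = np.float64(0), np.float64(0)
print(1 - SS_res/SS_tot)          # RuntimeWarning: invalid value ... -> nan

with np.errstate(invalid='ignore'):
    print(1 - SS_res/SS_tot)      # still nan, but without the warning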

RuntimeWarning: invalid value encountered in double_scalars app.launch_new_instance()

I am applying Euler's method to solve a differential equation. This is my code:
import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    return ((x**(2))*y)/((x**(4)) + (y**(4)))

di = 0.01
I = 100
x = np.linspace(-I, I, int(I/di) + 1)
w = np.zeros(len(x))
x[0], w[0]  # (no-op: this line does not set any initial condition)
for i in range(1, len(w)):
    w[i] = w[i - 1] + f(x[i - 1], w[i - 1])*di

plt.plot(x, w, label='approximation')
plt.xlabel("x")
plt.ylabel("y")
plt.show()
When I run the code I get this warning:
"C:\Users\USER\Anaconda3\lib\site-packages\ipykernel__main__.py:3: RuntimeWarning: invalid value encountered in double_scalars app.launch_new_instance()"
I would like to know how to solve this and make the code work.
Your code is running into a division-by-zero problem. Try this to convince yourself:
>>> def f(x, y):
...     return ((x**(2))*y)/((x**(4))+(y**(4)))
...
>>> I, di = 100, 0.01
>>> x = np.linspace(-I, I, int(I/di) + 1)
>>> w = np.zeros(len(x))
>>> i = len(x)//2 + 1
>>> i
5001
>>> f(x[i-1], w[i-1])
nan
It clearly emerges from the interactive session above that when i takes the value 5001 in the for loop, f(x[i-1], w[i-1]) yields nan, because x[i-1] and w[i-1] are both zero there. There are several solutions to this issue. For instance, to avoid NaN values you could check whether the denominator of the fraction returned by f() is zero before performing the division, and, if it is, return a conventional value of your choice (for example 0) instead. The following snippet implements such an approach through a conditional expression:
def f(x, y):
    return 0 if x == 0 and y == 0 else float(x**2*y)/(x**4 + y**4)
Alternatively, you could disable runtime warnings (but if you do so, be aware of the potential risks) by including this code in your script:
import warnings

def f(x, y):
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        return ((x**(2))*y)/((x**(4)) + (y**(4)))
The proposed workarounds avoid the RuntimeWarning but don't get your code working as you expect. Indeed, the calculated solution w is a vector in which all the elements are zero. I guess the reason your code is not working properly is that you never assigned w[0] an initial value different from 0. For example, if you simply add this line just before the for loop:
w[0] = 0.5
you get an (apparently meaningful) curve rather than a flat plot.
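Putting both fixes together, a minimal end-to-end version of the code above might look like this (a sketch, not a new method):

import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    # guard the 0/0 case at x == y == 0
    return 0 if x == 0 and y == 0 else (x**2 * y)/(x**4 + y**4)

di, I = 0.01, 100
x = np.linspace(-I, I, int(I/di) + 1)
w = np.zeros(len(x))
w[0] = 0.5  # nonzero initial condition

for i in range(1, len(w)):
    w[i] = w[i-1] + f(x[i-1], w[i-1])*di

plt.plot(x, w, label='approximation')
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()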
Hope this helps!

Solve a system of differential equations using Euler's method

I'm trying to solve a system of ordinary differential equations with Euler's method, but when I try to print the velocity I get
RuntimeWarning: overflow encountered in double_scalars
and instead of numbers I get nan (not a number). I think the problem might be in the definition of the acceleration, but I don't know for sure. I would really appreciate it if someone could help me.
from numpy import *
from math import pi, exp

d = 0.0005*10**-6
a = []
while d < 1.0*10**-6:
    d = d*2
    a.append(d)
D = array(a)

def a_particula(D, x, v, t):
    cc = ((1.00 + ((2.00*lam)/D))*(1.257 + (0.400*exp((-1.10*D)/(2.00*lam)))))
    return (g - ((densaire*g)/densparticula) - ((mu*18.0*v)/(cc*densparticula*(D**2.00))))

def euler(acel, D, x, v, tv, n=15):
    nv, xv, vv = tv.size, zeros_like(tv), zeros_like(tv)
    xv[0], vv[0] = x, v
    for k in range(1, nv):
        t, Dt = tv[k-1], (tv[k]-tv[k-1])/float(n)
        for i in range(n):
            a = acel(D, x, v, t)
            t, v = t+Dt, v+a*Dt
            x = x+v*Dt
        xv[k], vv[k] = x, v
    return (xv, vv)

g = 9.80
densaire = 1.225
lam = 0.70*10**-6
densparticula = 1000.00
mu = 1.785*10**-5
tv = linspace(0, 5, 50)
x, v = 0, 0  # initial conditions
for j in range(len(D)):
    xx, vv = euler(a_particula, D[j], x, v, tv)
    print(D[j], xx, vv)
In future, it would be helpful if you included the full warning message in your question, as it contains the line where the problem occurs:
tmp/untitled.py:15: RuntimeWarning: overflow encountered in double_scalars
return (g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
Overflow occurs when the magnitude of a variable exceeds the largest value that can be represented. In this case, double_scalars refers to a 64 bit float, which has a maximum value of:
print(np.finfo(float).max)
# 1.79769313486e+308
So there is a scalar value in the expression:
(g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
that is exceeding ~1.79e308. To find out which one, you can use np.errstate to raise a FloatingPointError when this occurs, then catch it and start the Python debugger:
...
    with errstate(over='raise'):
        try:
            ret = (g - ((densaire*g)/densparticula) - ((mu*18.0*v)/(cc*densparticula*(D**2.00))))
        except FloatingPointError:
            import pdb
            pdb.set_trace()
    return ret
...
From within the debugger you can then check the values of the various parts of this expression. The overflow seems to occur in:
(mu*18.0*v)/(cc*densparticula*(D**2.00))
The first time the warning occurs, (cc*densparticula*(D**2.00)) evaluates to 2.3210168586496022e-12, whereas (mu*18.0*v) evaluates to -9.9984582297025182e+299.
Basically, you are dividing a very large number by a very small number, and the magnitude of the result exceeds the maximum value that can be represented. This might be a problem with your math, or it might be that the inputs to your function are not reasonably scaled.
Your system reduces to
dv/dt = a = K - L*v
with K about 10 and L ranging, at first glance, between 1e+5 and 1e+10. The actual coefficients used confirm that:
D=1.0000e-09 K= 9.787995 L=1.3843070e+08
D=3.2000e-08 K= 9.787995 L=4.2570244e+06
D=1.0240e-06 K= 9.787995 L=9.0146813e+04
The Euler step for the velocity is
v[j+1] = v[j] + (K - L*v[j])*dt = (1 - L*dt)*v[j] + K*dt
For anything resembling the intended friction effect, i.e. the velocity falling toward K/L, one needs abs(1 - L*dt) < 1, and if possible 0 < 1 - L*dt < 1, that is, dt < 1/L. Which means here that dt < 1e-10.
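A quick numerical check of that stability factor (using the L from the D=1.0000e-09 row above):

L = 1.3843070e+08
for dt in (0.1, 1e-8, 1e-10):
    print(dt, abs(1 - L*dt))  # 1 or more: the explicit step cannot decay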
To be able to use larger time steps you need methods for stiff differential equations, which means implicit methods. The simplest ones are the implicit Euler method, the midpoint method and the trapezoidal method.
Because of the linearity, the midpoint and trapezoidal methods amount to the same formula
v[j+1] = v[j] + dt * ( K - L*(v[j]+v[j+1])/2 )
or
v[j+1] = ((1 - L*dt/2)*v[j] + K*dt) / (1 + L*dt/2)
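As a sketch (with K and L from the first row of the table above), both implicit updates stay bounded even at the original step size dt = 0.1:

K, L = 9.787995, 1.3843070e+08
dt = 0.1
v_ie = v_tr = 0.0
for _ in range(50):
    v_ie = (v_ie + K*dt)/(1 + L*dt)                   # implicit Euler
    v_tr = ((1 - L*dt/2)*v_tr + K*dt)/(1 + L*dt/2)    # trapezoidal
print(v_ie, v_tr, K/L)  # bounded; implicit Euler settles at K/L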
Of course, the simplest approach is to just integrate the ODE exactly:
(-L*v')/(K - L*v) = -L  =>  K - L*v(t) = C*exp(-L*t),  C = K - L*v(0)
v(t) = K/L + exp(-L*t)*(v(0) - K/L)
which integrates to
x(t) = x(0) + (K/L)*t + (1 - exp(-L*t))/L*(v(0) - K/L)
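In code, the closed-form solution can be evaluated directly, e.g. (a sketch in the same K, L notation):

import numpy as np

def exact_v(t, v0, K, L):
    return K/L + np.exp(-L*t)*(v0 - K/L)

def exact_x(t, x0, v0, K, L):
    return x0 + (K/L)*t + (1 - np.exp(-L*t))/L*(v0 - K/L)

t = np.linspace(0, 5, 50)
print(exact_v(t, 0.0, 9.787995, 1.3843070e+08)[-1])  # ~ K/L, the terminal velocity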
Alternatively, it is possible that you made an error transcribing the physical laws into formulas, so that the magnitudes of the constants are all wrong.
