I have to define two separate functions z(x, mu, c) and landau(x, A, mu, c).
z = (x-mu)/c and landau = 1.64872*A*np.exp(-0.5(z+np.exp(-z)))
Since they share variables, I have tried defining z inside the definition of landau, both as a nested function and as a plain expression; I have also tried defining z as a separate function outside landau. However, nothing I try seems to work: Python tells me either "'float' object is not callable" or "bad operand type for unary -: 'function'". Is there a quick fix for this?
landau(1,1,1,1) should give an answer that's roughly equal to 1
def landau(x, A, mu, c):  # Landau function with variables x, A, c and mu
    # A = amplitude
    # c = scale parameter
    # mu = position parameter
    def z(x, mu, c):
        return (x-mu)/c
    return 1.64872*A*np.exp(-0.5(z(x, mu, c)+np.exp(-z(x, mu, c))))
You missed a * in -0.5 * (z(x. At least I'm assuming it's supposed to be a multiplication.
import numpy as np

def landau(x, A, mu, c):  # Landau function with variables x, A, c and mu
    # A = amplitude
    # c = scale parameter
    # mu = position parameter
    return 1.64872 * A * np.exp(-0.5 * (z(x, mu, c) + np.exp(-z(x, mu, c))))

def z(x, mu, c):
    return (x - mu) / c

landau(1, 1, 1, 1)
0.9999992292814129
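For completeness, the nested definition from the question also works once the missing * is added; a minimal sketch:

import numpy as np

def landau(x, A, mu, c):
    def z(x, mu, c):
        return (x - mu) / c
    # the explicit * after -0.5 fixes the "'float' object is not callable" error
    return 1.64872 * A * np.exp(-0.5 * (z(x, mu, c) + np.exp(-z(x, mu, c))))

landau(1, 1, 1, 1)  # ~ 1.0, same as above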
I am trying to use a system of ODEs to fit a curve using Python. However, both equations depend on their own current values at every time point t.
If our two equations are
S'(t) = (a - b - c) * S(t)
R'(t) = (a - b) * R(t) + m * N(t)
how would I pass these to odeint?
So far this code is throwing the error
UnboundLocalError: local variable 'R' referenced before assignment
I am not sure how to get around this. Any suggestions or help would be greatly appreciated. Thanks!
from scipy.optimize import minimize
from scipy.optimize import curve_fit
from scipy import integrate
import numpy as np
import pylab as plt

xdata = np.array([0,1,2,3,6,7,8,9,10,13,14,15,16,17,22,23,24,27,28,29,30,31,34,35,36,37,38,41,42,43,44,45,48,49,50,51,52])
ydata = np.array([100, 97.2912199,91.08273896,86.28651363,70.58056853,49.00137427,47.50069587,48.22363999,47.22288896,42.59400221,29.35959158,30.47252256,30.85180297,33.44706703,41.93176936,44.2826702,46.51306081,57.32118321,62.58641733,53.23377307,50.4804287,51.59281055,67.82561566,61.28460679,65.49713333,67.99324793,74.55147631,104.5586471,98.97800534,94.31637549,99.0441014,109.6035876,151.0500311,135.2589923,135.2083231,145.1832811,169.0801019])
xdata = np.array(xdata, dtype=float)
ydata = np.array(ydata, dtype=float)

def model(y, x, a, b, c, m):
    S = (a - b - c) * y[0]
    R = (a - b) * R + x*(m *S)
    return S, R

def fit_odeint1(x, a, b, c, m):
    return integrate.odeint(model, (S0, R0), x, args=(a, b, c, m))[:,0]

def fit_odeint2(x, a, b, c, m):
    return integrate.odeint(model, (S0, R0), x, args=(a, b, c, m))[:,1]

S0 = ydata[0]
R0 = 0

popt, pcov = curve_fit(fit_odeint1, xdata, ydata)
fitted1 = fit_odeint1(xdata, *popt)
absError = fitted1 - ydata
SE = np.square(absError)  # squared errors
MSE = np.mean(SE)  # mean squared errors
RMSE = np.sqrt(MSE)  # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (np.var(absError) / np.var(ydata))
print("RMSE "+str(RMSE))
print("R^2 "+str(Rsquared))
print()

popt, pcov = curve_fit(fit_odeint2, xdata, ydata)
fitted2 = fit_odeint2(xdata, *popt)
absError = fitted2 - ydata
SE = np.square(absError)  # squared errors
MSE = np.mean(SE)  # mean squared errors
RMSE = np.sqrt(MSE)  # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (np.var(absError) / np.var(ydata))
print("RMSE "+str(RMSE))
print("R^2 "+str(Rsquared))
print()

plt.plot(xdata, ydata, 'D', color='purple')
plt.plot(xdata, fitted1, '--', color='lightcoral')
plt.plot(xdata, fitted2, '--', color='mediumaquamarine')
plt.show()
The UnboundLocalError happens because of your model function: you assign to the R variable while also using it in the expression on the right-hand side, where it does not yet have a value:
def model(y, x, a, b, c, m):
    S = (a - b - c) * y[0]
    R = (a - b) * R + x*(m *S)
    return S, R
The concept you are missing is that, in the R'(t) equation (R'(t) = (a - b) * R(t) + m * N(t)), R'(t) is different from R(t), so you should have different variables for them.
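For illustration, here is a minimal sketch of a corrected model that reads the current R(t) from the state vector y rather than from the unassigned local variable; note I am assuming N(t) corresponds to S(t), since that is what the original code multiplies by m:

def model(y, x, a, b, c, m):
    S, R = y                    # current values of S(t) and R(t)
    dSdt = (a - b - c) * S      # S'(t)
    dRdt = (a - b) * R + m * S  # R'(t), assuming N(t) means S(t)
    return dSdt, dRdt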
I have a few lines of code that don't converge. If anyone has an idea why, I would greatly appreciate it. The original function is defined in def f(x, y, b, m), and I need to find the parameters b and m.
np.random.seed(42)
x = np.random.normal(0, 5, 100)
y = 50 + 2 * x + np.random.normal(0, 2, len(x))

def f(x, y, b, m):
    return (1/len(x))*np.sum((y - (b + m*x))**2)  # it is supposed to be a sum operator

def dfb(x, y, b, m):  # partial derivative with respect to b
    return b - m*np.mean(x)+np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    return np.sum(x*y - b*x - m*x**2)

b0 = np.mean(y)
m0 = 0
alpha = 0.0001
beta = 0.0001
epsilon = 0.01

while True:
    b = b0 - alpha * dfb(x, y, b0, m0)
    m = m0 - alpha * dfm(x, y, b0, m0)
    if np.sum(np.abs(m-m0)) <= epsilon and np.sum(np.abs(b-b0)) <= epsilon:
        break
    else:
        m0 = m
        b0 = b

print(m, f(x, y, b, m))
Both derivatives got some signs mixed up:
def dfb(x, y, b, m):  # partial derivative with respect to b
    # return b - m*np.mean(x)+np.mean(y)
    #        ^-------------^------ these signs are incorrect
    return b + m*np.mean(x) - np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    #      v------ this should be negative
    return -np.sum(x*y - b*x - m*x**2)
In fact, these derivatives are still missing some constants:
dfb should be multiplied by 2
dfm should be multiplied by 2/len(x)
(This follows from differentiating f = (1/n) * sum((y - (b + m*x))**2): df/db = (2/n) * sum(b + m*x - y) = 2*(b + m*mean(x) - mean(y)), and df/dm = -(2/n) * sum(x*(y - b - m*x)).)
I imagine that's not too bad because the gradient is scaled by alpha anyway, but it could make the speed of convergence worse.
If you do use the correct derivatives, your code will converge after one iteration:
def dfb(x, y, b, m):  # partial derivative with respect to b
    return 2 * (b + m * np.mean(x) - np.mean(y))

def dfm(x, y, b, m):  # partial derivative with respect to m
    # Used `mean` here since (2/len(x)) * np.sum(...)
    # is the same as 2 * np.mean(...)
    return -2 * np.mean(x * y - b * x - m * x**2)
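A quick way to validate hand-derived gradients (my own suggestion, not part of the original answer) is to compare them against a central finite difference at an arbitrary test point:

# Finite-difference check of dfb; dfm can be checked the same way.
eps = 1e-6
b_test, m_test = 1.0, 1.0
fd_b = (f(x, y, b_test + eps, m_test) - f(x, y, b_test - eps, m_test)) / (2 * eps)
print(fd_b, dfb(x, y, b_test, m_test))  # the two numbers should agree closely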
I'm currently trying to simulate a PDE that includes a Brownian path (one of the terms implies that, when advancing one timestep dt, the change is weighted by a normally distributed variable with mean 0 and variance dt).
For this I used the Fast Fourier Transform to get a system of ODEs, which I can solve much more easily (at least that's what I thought). This led me to the following code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Defining some parameters a, b, c, which are included in the PDE
a = 10
b = 1.5
c = 20

# Creating the mesh
L = 100
N = 100
dx = L/N
x = np.arange(0, L, dx)
dt = 0.01
t = np.linspace(0, 1, 100)

# Frequency for the Fourier transformation
kappa = 2*np.pi*np.fft.fftfreq(N, d=dx)

# Initial condition for the function u and its fast Fourier transform
u0 = np.zeros_like(x)
u0[int((L/4-L/10)/dx):int((L/4+L/10)/dx)] = 2.5
u0[int((3*L/4-L/10)/dx):int((3*L/4+L/10)/dx)] = 2.5
u0hat = np.fft.fft(u0)
u0hat_ri = np.concatenate((u0hat.real, u0hat.imag))

# Define the function describing the transformation from the PDE to the system of ODEs
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    # Define the weighted change by the Brownian path B
    mean = [0]*len(uhat)
    diag = [0.1] * len(uhat)
    cov = np.diag(diag)
    B = np.random.multivariate_normal(mean, cov)
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat*B
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

# Solve the ODE with odeint
uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c))
uhat = uhat_ri[:, :N] + (1j) * uhat_ri[:, N:]
u = np.zeros_like(uhat)

# Inverse transform the solution
for k in range(len(t)):
    u[:, k] = np.fft.ifft(uhat[k, :])
u = u.real
This program works if I exclude the Brownian path B in func:
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
But it takes a long time to execute when B is included, and it also tells me:
C:\Users\leo_h\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
EDIT/ANSWER:
I solved the problem by moving the Brownian increment out of func. I guess it was just too much for odeint to cope with (or it generated a new Brownian path for each t?). That guess is consistent with how odeint works: its adaptive step-size control assumes a deterministic right-hand side, so a function that returns a fresh random value on every call defeats the error control and triggers the "Excess work done" warning.
mean = [0]*len(u0hat)
diag = [2] * len(u0hat)
cov = np.diag(diag)
B = np.random.multivariate_normal(mean, cov)

def func(uhat_ri, t, kappa, a, b, c, B):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*B*(1j)*kappa*uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c, B))
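For what it's worth (not part of the original answer): keeping B fixed makes the noise constant in time rather than a true Brownian path. If a time-varying path is needed, a fixed-step Euler-Maruyama loop avoids odeint's adaptive stepping altogether. A minimal sketch, reusing kappa, u0hat and the parameters a, b, c defined above:

# Euler-Maruyama: uhat <- uhat + drift*dt + diffusion*dW at each step
dt = 0.01
nt = 100
uhat = u0hat.copy()
for n in range(nt):
    dW = np.random.normal(0.0, np.sqrt(dt), size=uhat.shape)  # increments ~ N(0, dt)
    drift = (-a**2 * kappa**2 - c * 1j * kappa) * uhat
    uhat = uhat + drift * dt + b * 1j * kappa * uhat * dW
u = np.fft.ifft(uhat).real  # solution at the final time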
I am trying to do a curve fit for transient thermal data. I have equations that calculate the delta temperature at every timepoint. This delta has to be added to the temperature from the previous timepoint to get the temperature at any given timepoint, i.e. T_n = T_{n-1} + delta.
Expressed in terms of the example from scipy's documentation for curve_fit, it would be something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.exp(-b * x) + c  # + func(x[n-1], a, b, c) <<< need help here

xdata = np.linspace(0, 4, 50)
y = func(xdata, 2.5, 1.3, 0.5)
np.random.seed(1729)
y_noise = 0.2 * np.random.normal(size=xdata.size)
ydata = y + y_noise

popt, pcov = curve_fit(func, xdata, ydata)
print(popt)

plt.plot(xdata, ydata, 'b-', label='data')
plt.plot(xdata, func(xdata, *popt), 'r-',
         label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
Any lead on how to achieve this is really appreciated. Thanks!
If you have a differential equation, you need to integrate it before fitting it to data, unless the data is also differential, in which case you can fit directly.
The question seems to imply that in your case delta is given by a * np.exp(-b * x) + c, which makes the resulting y values easy to compute because curve_fit passes all the x values to func and expects it to return all the y values anyway.
def delta_func(x, a, b, c):
    return a * np.exp(-b * x) + c

def func(x, a, b, c):
    y = np.empty(x.shape)
    y[0] = delta_func(0, a, b, c)
    for i in range(1, len(x)):
        y[i] = y[i-1] + delta_func(x[i], a, b, c)
    return y
This was for illustration. You can obtain the same result with np.cumsum:
def func(x, a, b, c):
    return np.cumsum(a * np.exp(-b * x) + c)
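As a usage sketch (my own check, not from the original answer), the cumulative model fits with curve_fit exactly like the question's snippet:

xdata = np.linspace(0, 4, 50)
ydata = func(xdata, 2.5, 1.3, 0.5) + 0.2 * np.random.normal(size=xdata.size)
popt, pcov = curve_fit(func, xdata, ydata)  # recovers a, b, c from the noisy data

One detail: np.cumsum matches the loop version exactly only because the loop seeds y[0] with delta_func(0, ...) and here x[0] == 0; for a grid that does not start at 0, the two differ in the first element.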
I want to use scipy.optimize.broyden2; the problem is that my function doesn't just take an array as its argument, but several more parameters.
What should I do? Define global variables?
These are my functions:
import numpy as np

def F(S, I, R, alpha, beta):
    return [- beta * S * I, beta * S * I - alpha * R, alpha * R]

def euler(xi, xf, m, F, initial_values, alpha, beta):
    h = (xf - xi) / m
    t = np.linspace(xi, xf, m + 1)
    t = np.delete(t, 0)
    vect_y = [initial_values[0], initial_values[1], initial_values[2]]
    for i in range(len(t)):
        y_actual = [sum(x) for x in zip(vect_y, [element * h for element in F(vect_y[0], vect_y[1], vect_y[2], alpha, beta)])]
        vect_y = y_actual
    return vect_y
I want to use broyden2 with euler, where x0 would be initial_values.
As was suggested in the comments, you can use an auxiliary function that unpacks the list of arguments using the *list syntax and calls your main function with them. A minimal example is shown below, where f is the function whose root is being found.
from scipy.optimize import broyden2

def f(x, y, z):
    return [x-1, y-2, z-3]

broyden2(lambda X: f(*X), [0, 0, 0])
Output: array([ 1., 2., 3.])
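The same pattern extends to functions with extra fixed parameters; a sketch using functools.partial (the offset parameter is hypothetical, standing in for values like alpha and beta):

from functools import partial
from scipy.optimize import broyden2

def f(x, y, z, offset):
    # `offset` is an extra parameter beyond the unknowns x, y, z
    return [x - offset, y - 2*offset, z - 3*offset]

g = partial(f, offset=1.0)            # fix the extra parameter
broyden2(lambda X: g(*X), [0, 0, 0])  # unpack the unknowns as before

Applied to euler, you would fix xi, xf, m, F, alpha and beta the same way, so that the resulting function takes only initial_values.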