In Python I have a function with many parameters. I want to fit this function to a data set, but using only one parameter; the rest of the parameters I want to supply on my own. Here is an example:
def func(x, a, b):
    return a*x*x + b

for b in xrange(10):
    popt, pcov = curve_fit(func, x1, x2)
Here I want the fitting to be done only for a, while the parameter b takes the value of the loop variable. How can this be done?
You can wrap func in a lambda, as follows:
def func(x, a, b):
    return a*x*x + b

for b in xrange(10):
    popt, pcov = curve_fit(lambda x, a: func(x, a, b), x1, x2)
A lambda is an anonymous function, which in Python can only be used for simple one-line functions. It is normally used to reduce the amount of code when you don't need to assign a name to the function. A more detailed description is given in the official documentation: http://docs.python.org/tutorial/controlflow.html#lambda-forms
In this case, a lambda is used to fix one of the arguments of func. The newly created function accepts only two arguments: x and a, whereas b is fixed to the value taken from the local b variable. This new function is then passed into curve_fit as an argument.
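If you prefer not to write a lambda, functools.partial from the standard library achieves the same thing; here is a small sketch (not from the original answer), assuming x1 and x2 are your data arrays:

from functools import partial
from scipy.optimize import curve_fit

for b in range(10):
    # partial "bakes in" b as a keyword argument, leaving x and a free
    func_fixed_b = partial(func, b=b)
    # give an explicit p0 so curve_fit does not have to inspect the
    # wrapper's signature to count the free parameters
    popt, pcov = curve_fit(func_fixed_b, x1, x2, p0=[1.0])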
A better approach is to use lmfit, which provides a higher-level interface to curve fitting. Among other features, lmfit makes fitting parameters first-class objects that can have bounds or be explicitly fixed.
Using lmfit, this problem might be solved as:
from lmfit import Model
def func(x, a, b):
    return a*x*x + b

# create model
fmodel = Model(func)
# create parameters -- these are named from the function arguments --
# giving initial values
params = fmodel.make_params(a=1, b=0)
# fix b:
params['b'].vary = False
# fit parameters to data with various *static* values of b:
for b in range(10):
    params['b'].value = b
    result = fmodel.fit(ydata, params, x=x)
    print("b=%f, a=%f+/-%f, chi-square=%f" % (b, result.params['a'].value,
                                              result.params['a'].stderr,
                                              result.chisqr))
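If you want a fuller summary of each fit than the single print line above, the result object returned by fmodel.fit also provides a fit_report() method; a minimal usage sketch, placed inside the loop right after the fit:

# print parameter values, uncertainties, correlations and fit statistics
print(result.fit_report())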
Instead of using the lambda function, which might be less intuitive to digest, I would recommend specifying the SciPy curve_fit parameter bounds, which will force your parameter to be searched within custom boundaries.
All you have to do is let your variable a move between -inf and +inf and your variable b between (b - epsilon) and (b + epsilon).
In your example:
epsilon = 0.00001

def func(x, a, b):
    return a*x*x + b

for b in xrange(10):
    # supply an initial guess that lies inside the bounds; curve_fit's default
    # p0 of all ones is rejected once b moves away from 1
    popt, pcov = curve_fit(func, x1, x2, p0=[1, b],
                           bounds=((-np.inf, b-epsilon), (np.inf, b+epsilon)))
I effectively use Anton Beloglazov's solution, though I like to avoid using lambda functions for readability, so I do the following:
def func(x, a, b):
    return a*x*x + b

def helper(x, a):
    # helper picks up b from the enclosing scope (the loop variable below)
    return func(x, a, b)

for b in xrange(10):
    popt, pcov = curve_fit(helper, x1, x2)
This ends up being reminiscent of Rick Berg's answer, but I like having one function dedicated to the "physics" of the problem and a helper function to get the code to work.
Another way is to use upper and lower bounds that are (almost) identical to the initial value, differing only by a small eps.
Using the same example with initial conditions and bounds:
def func(x, a, b):
    return a*x*x + b

# a and b both free
popt, pcov = curve_fit(func, x1, x2,
                       p0=[1, 1],
                       bounds=[(-np.inf, -np.inf), (np.inf, np.inf)])

# a free; b effectively fixed to its initial value of 1
eps = 0.01
popt, pcov = curve_fit(func, x1, x2,
                       p0=[1, 1],
                       bounds=[(-np.inf, 1-eps), (np.inf, 1+eps)])
Remember to include an epsilon; otherwise the lower and upper bounds for b would be identical, which curve_fit does not accept.
There is a simpler option if you are willing/able to edit the original function.
Redefine your function as:
def func(x, a):
    return a*x*x + b
Then you can simply put it in your loop for parameter b:
for b in xrange(10):
    popt, pcov = curve_fit(func, x1, x2)
Caveat: the function needs to be defined in the same script in which it is called for this to work.
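If you would rather not rely on a module-level b at all, one alternative (a sketch of the same idea, not part of the original answer) is a small factory function that closes over b:

def make_func(b):
    # return a one-parameter model with this particular b baked in
    def func_b(x, a):
        return a*x*x + b
    return func_b

for b in range(10):
    popt, pcov = curve_fit(make_func(b), x1, x2)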
SciPy's curve_fit takes three positional arguments: func, xdata and ydata.
So an alternative approach (to using a function wrapper) is to treat 'b' as xdata (i.e. independent variable) by building a matrix that contains both your original xdata (x1) and a second column for your fixed parameter b.
Assuming x1 and x2 are arrays:
def func(xdata, a):
    x, b = xdata[:,0], xdata[:,1]   # extract your x and b
    return a*x*x + b

for b in xrange(10):
    xdata = np.zeros((len(x1), 2))  # initialize a matrix
    xdata[:,0] = x1                 # your original x-data
    xdata[:,1] = b                  # your fixed parameter
    popt, pcov = curve_fit(func, xdata, x2)  # x2 is your y-data
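If you prefer, the same matrix can be built in one call with np.column_stack; a small equivalent sketch for the body of the loop:

# stack x1 and a constant column filled with the current value of b
xdata = np.column_stack((x1, np.full_like(x1, b, dtype=float)))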
Related
I have an assignment for school. First of all, can you help me confirm that I have interpreted the question correctly? And does the code seem somewhat OK? There have been other tasks before this one, like creating the class with a two-dimensional function, writing the Newton method, and so on. And now this question. I'm not finished programming it, but I'm a bit stuck and I feel like I don't know exactly what to do. On what do I run my Newton method? On the point P? Do I create it like I have done in the plot method?
This is the question:
Write a method plot that checks the dependence of Newton’s method on
several initial vectors x0. This method should plot what is described
in the following steps:
• Use the meshgrid command to set up a grid of N^2 points in the set G = [a, b] × [c, d] (the parameters N, a, b, c and d are parameters of the methods). You obtain two matrices X and Y where a specific grid point is defined as p_ij = (X_ij, Y_ij).
import numpy as np
from numpy import array, zeros, cos, exp, pi
from numpy.linalg import solve, norm

class fractals2D(object):
    Allzeroes = []  # a list to add all stored values from each run of newtons method

    def __init__(self, f, x):
        self.f = f
        f0 = self.f(x)    # giving a variable name with the function to use in class
        n = len(x)        # for size of matrice
        jac = zeros([n])  # creates an array to use for jacobian matrice
        h = 1.e-8         # to set h for derivative
        self.jac = jac
        for i in range(n):  # creating loop to take partial derivatives of x and y from x in f
            temp = x[i]
            #print(x[i])
            x[i] = temp + h  # why setting x[i] two times?
            #print(x[i])
            f1 = f(x)
            x[i] = temp
            #print(x[i])
            jac[:,i] = (f1-f0)/h

    def Newtons_method(self, guess):
        f_val = f(guess)
        self.guess = guess
        for i in range(40):
            delta = solve(self.jac, -f_val)
            guess = guess + delta
            if norm((delta), ord=2) < 1.e-9:
                return guess  # a list for storing zeroes from one run

    def ZeroesMethod(self, point):
        point = self.guess
        self.Newtons_method(point)
        # adds zeroes from the run of newtons to a list to store them all
        self.Allzeroes.append(self.guess)
        return len(self.Allzeroes)  # returns how many zeroes are found

    def plot(self, N, a, b, c, d):
        x = np.linspace(a, b, N)
        y = np.linspace(c, d, N)
        P = [X, Y] = np.meshgrid(x, y)
        return P  # calling ZeroesMethod with our newly meshed point of several arrays

x0 = array([2.0, 1.0])  # creates an x and y value?
x1 = array([1, -5])
a = array([2, 8])
b = array([-2, -6])

def f(x):
    f = np.array(
        [x[0]**2 - x[1] + x[0]*cos(pi*x[0]),
         x[0]*x[1] + exp(-x[1]) - x[0]**(-1)])
This is the error message I'm receiving:
    delta = solve(self.jac,-f_val)
TypeError: bad operand type for unary -: 'NoneType'
I am trying to solve this system of differential equations as part of my assignment. I am not able to understand how I can put the condition for u into the code. In the code shown below, I arbitrarily provided
u = 5
2 dx(t)/dt = -x(t) + u(t)
5 dy(t)/dt = -y(t) + x(t)
u = 2 S(t-5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy library very well, but going by the example in the documentation I would try something like this:
def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x               # split the state into two variables
    u = K if t >= PT else 0  # this is the system input
    # here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx
x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0,40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You could define a step function with a simple lambda and simply capture it from the outer scope without even passing it to your function, but since that won't always be the case, we'll be explicit and pass it in.
Your next problem is the order of arguments in the function to integrate. As per the docs it must be (y, t, ...): first the state, then the time, then any extra args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving to the next part: the trouble is that you can't feed x in as an argument for y, because it is a vector of x(t) values at particular times, so y+x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass in an x function instead of the x values. Doing so requires interpolating the x values at the specific time values the solver asks for (which SciPy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because odeint returns x as a column vector; extrapolate because
# the solver may evaluate slightly outside the given time range
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = -(y + x(t))/5
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
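To compare the two solutions you can reuse the plotting calls from the question (a minimal usage sketch):

import matplotlib.pyplot as plt

plt.plot(t, x, 'r-', label='x(t)')
plt.plot(t, y, 'b*', label='y(t)')
plt.legend()
plt.show()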
Then you get a plot of x and y against t.
Sven's answer is more idiomatic for vector programming like scipy/numpy. But I hope my answer provides a clearer path from what you know already to a working solution.
I currently have a system of ODEs with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of values of "a", one for every time step, which I would like to use at the corresponding time steps when using the SciPy ODE solver. Is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args) where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points for which to solve for the ODE and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
    return a0 * t
Then you will have to modify fun which computes the derivative at each time step to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z  # a change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
Finally, note that u0, t and args remain unchanged and you can again call scipy.integrate.odeint(fun, u0, t, args).
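Putting it together for the three-state system above, the call might look like this (a sketch; a0, b, c and the initial condition are placeholder values, and fun and a are the functions defined above):

import numpy as np
from scipy.integrate import odeint

a0 = 1.0                 # placeholder constant used by a(t)
b, c = 0.1, 0.5          # placeholder constants
u0 = [1.0, 1.0, 1.0]     # placeholder initial condition
t = np.linspace(0.0, 10.0, 500)
sol = odeint(fun, u0, t, args=(a, b, c))  # a is the function, not a number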
A word about the correctness of this approach: the accuracy of the numerical integration may be affected, and I don't know precisely how (I have no theoretical guarantees), but here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate

tmax = 10.0

def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0

def func(x, t, a):
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)

fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this will help you.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured to be a "zero-order hold", i.e. a piecewise constant step function; a short sketch of this option is given below.
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
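A minimal sketch of the interpolation option, assuming a_values holds one value of a per data time step in t_data (hypothetical names; b and c are kept constant):

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t_data = np.linspace(0.0, 10.0, 11)    # times at which a is given (hypothetical)
a_values = np.linspace(1.0, 2.0, 11)   # one value of a per data time step (hypothetical)

# kind="previous" turns the interpolant into a zero-order hold (piecewise constant)
a_of_t = interp1d(t_data, a_values, kind="previous",
                  bounds_error=False, fill_value=(a_values[0], a_values[-1]))

def fun(u, t, a, b, c):
    x, y, z = u
    return [a(t)*x + y*z, b*(y - z), -x*y + c*y - z]

u0 = [1.0, 1.0, 1.0]
t = np.linspace(0.0, 10.0, 500)
sol = odeint(fun, u0, t, args=(a_of_t, 0.1, 0.5))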
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of (the parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions of the step size control mechanism are violated, which may result in a very small step size requiring a huge number of integration steps.
I also encountered a similar problem. In my case, the parameters a, b, and c are not direct functions of time, but are determined by x, y, and z at that time. So I have to get x, y, z at time t and calculate a, b, c for the integration step that gives x, y, z at t+dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.
I am a beginner/intermediate in Python. I have coded a 4th-order Runge-Kutta method (RK4) into Python. It is basically solving a pendulum, but that is not the point here.
I want to improve the RK4 method in the following way: I want to be able to pass the function f directly to the RK4 function, i.e. RK4(y_0, n, h) should become RK4(f,y_0,n,h). This would have the great advantage that I could use RK4 for other f functions that describe other systems, not just this one pendulum.
I have played around with just passing simple functions to RK4, but I am doing something wrong. How do I do this in Python?
import numpy as np

def RK4(y_0, n, h):
    # 4th order Runge-Kutta solver, takes as input
    # initial value y_0, the number of steps n and stepsize h
    # returns solution vector y and time vector t
    # right now function f is defined below
    t = np.linspace(0, n*h, n, endpoint=False)  # create time vector t
    y = np.zeros((n, len(y_0)))                 # create solution vector y
    y[0] = y_0                                  # assign initial value to first position in y
    for i in range(0, n-1):
        # compute Runge-Kutta weights k_1 till k_4
        k_1 = f(t[i], y[i])
        k_2 = f(t[i] + 0.5*h, y[i] + 0.5*h*k_1)
        k_3 = f(t[i] + 0.5*h, y[i] + 0.5*h*k_2)
        k_4 = f(t[i] + h, y[i] + h*k_3)
        # compute next y
        y[i+1] = y[i] + h / 6. * (k_1 + 2.*k_2 + 2.*k_3 + k_4)
    return t, y

def f(t, vec):
    theta = vec[0]
    omega = vec[1]
    omegaDot = -np.sin(theta) - omega + np.cos(t)
    result = np.array([omega, omegaDot])
    return result

test = np.array([0, 0.5])
t, y = RK4(test, 10, 0.1)
Python functions are objects too. You can pass them around like any other object:
>>> def foo(): print 'Hello world!'
...
>>> foo
<function foo at 0x10c4685f0>
>>> foo()
Hello world!
>>> bar = foo
>>> bar()
Hello world!
Simply pass a function as an extra parameter to your RK4 function and use that as a local variable.
You can pass a function to a function in Python just as you might expect:
def call_function(f):
    f()

def my_function():
    print "OK"

call_function(my_function)  # Prints OK
Maybe you should post your failing code?
It's very simple. Change the definition of the RK4 function like so:
def RK4(f, y_0, n, h):
Here, I have added an extra argument, the function.
Then, when you call RK4, pass the function:
t, y = RK4(f, test, 10, 0.1)
And now, of course, you can substitute different functions without having to re-write the integration code.
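For instance, with the modified signature you could reuse the same RK4 to integrate a completely different system, say simple exponential decay (a sketch; g and y0 are made-up names):

def g(t, vec):
    # dy/dt = -y, whose exact solution is y(t) = y0 * exp(-t)
    return np.array([-vec[0]])

y0 = np.array([1.0])
t, y = RK4(g, y0, 100, 0.1)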
Functions in Python are just another kind of object. You can pass them around just as you do more prosaic objects.
I am trying to create a function called calc(f, a, b) where f is an equation in the variable x, and I want to put this code within the function.
def calc(f, a, b):
    limits = [a, b]
    integral = odeint(lambda y, x: f, 0, limits)
    return integral[1]
This function gets the integral using the built in odeint function.
This is what I am trying to do
print calc(x**2, 0, 1)
where x^2 is the function to be integrated. My problem is that this function (x**2) needs to be passed to the odeint function right after y, x: f, where the f after the colon is the f from calc(f, a, b).
What I can't figure out is how I can pass f from the calc function's input to the odeint call inside it. It says that f isn't declared, and if I put it within strings it doesn't work.
When I run this function it doesn't work; I get this error:
NameError: name 'f' is not defined
I am not sure how to pass my equation to be integrated inside odeint
Thanks
If one were to rewrite the function calc as follows:
def calc(f, a, b):
    limits = [a, b]
    integral = odeint(lambda y, x: f(x), 0, limits)
    return integral[1][0]
Then one may use this function thus:
>>> calc(lambda x: x ** 2, 0, 1) # Integrate x ** 2 over the interval [0, 1] (expected answer: 0.333...)
0.33333335809177234
>>> calc(lambda x: x, 0, 1) # Integrate x over the interval [0, 1] (expected answer: 0.5)
0.50000001490120016
>>> calc(lambda x: 1, 0, 1) # Integrate 1 over the interval [0, 1] (expected answer: 1.0)
1.0
The odeint function from the scipy.integrate module has the signature:
odeint(func, y0, t, ...)
where: func is a callable that accepts parameters y, t0, ... and returns dy/dt at the given point; y0 is a sequence representing initial condition of y; t is a sequence that represents intervals to solve for y (t0 is the first item in the sequence).
It appears that you are solving a first-order differential equation of the form dy/dx = f(x) over the interval [a, b] where y0 = 0. In such a case, when you pass f (which accepts one argument) to the function odeint, you must wrap it in a lambda so that the passed-in function accepts two arguments (y and x--the y parameter is essentially ignored since you need not use it for a first-order differential equation).
I assume odeint is some function to which you are passing the lambda function. odeint will presumably call the lambda and needs to pass x and y to it. So the answer is, if you want odeint to call the function and pass it x and y, then you need to pass x and y to odeint as arguments, in addition to the function itself.
What exactly are you trying to do here? With more details and more code, we could probably get a better answer.
x cannot have two values; therefore, if you need two values, one of them must be named something else. Rename one of your variables.
Edit:
(smacking forehead): In calc(x**2, 0, 1), x**2 is not a function - it is an expression, which gets evaluated before being passed to calc - therefore it complains that it needs to know what x is (in order to calculate x**2).
Try
calc(lambda x: x**2, a, b)
instead. This is equivalent to
def unnamedfunction(x):
    return x**2

calc(unnamedfunction, a, b)
I'm not completely sure because odeint() is not a built-in Python function so I don't know much about it, but the first argument you're passing it in your code is not a function that computes x^2. An easy way to do something like that would be to pass a lambda function to calc that does that sort of calculation. For example:
def calc(f, a, b):
    limits = [a, b]
    # wrap f so that it matches the (y, x) call signature odeint expects
    integral = odeint(lambda y, x: f(x), 0, limits)
    return integral[1]

print calc(lambda x: x**2, 0, 1)