Solve system of coupled differential equations using scipy's solve_bvp - python

I want to solve a boundary value problem consisting of 7 coupled 2nd order differential equations. There are 7 functions, y1(x),...y7(x), and each of them is described by a differential equation of the form
d^2yi/dx^2 = -(1/x)*dyi/dx - Li(y1,...,y7) for 0 < a <= x <= b,
where Li is a function that gives a linear combination of y1,...,y7. We have boundary conditions for the first-order derivatives dyi/dx at x=a and for the functions yi at x=b:
dyi/dx(a) = Ai,
yi(b) = Bi.
So we can rewrite this as a system of 14 coupled 1st order ODEs:
dyi/dx = zi,
dzi/dx = -(1/x)*zi - Li(y1,...,y7),
zi(a) = Ai,
yi(b) = Bi.
I want to solve this system of equations using the Python function scipy.integrate.solve_bvp. However, I have trouble understanding what exactly should be the input arguments for the function as described in the documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html).
The first argument that this function requires is a callable fun(x, y). As I understand it, the input argument y must be an array consisting of the values of yi and zi, and the function should give as output the values of dyi/dx (i.e. zi) and dzi/dx. So my function would look like this (pseudocode):
def fun(x, y):
    y1, z1, y2, z2, ..., y7, z7 = y
    return [z1, -(1/x)*z1 - L1(y1,...,y7),
            ...,
            z7, -(1/x)*z7 - L7(y1,...,y7)]
Is that correct?
Then, the second argument for solve_bvp is a callable bc(ya, yb), which should evaluate the residuals of the boundary conditions. Here I really have trouble understanding how to define such a function. It is also not clear to me what exactly the arrays ya and yb are and what shape they should have.
The third argument is x, which is the 'initial mesh' with a shape (m,). Should x consist only of the points a and b, which is where we know the boundary conditions? Or should it be something else?
Finally, the fourth argument is y, which is the 'initial guess for the function values at the mesh nodes' and has shape (n, m). Its ith column corresponds with x[i]. I guess that the 1st row corresponds with y1, the 2nd with z1, the 3rd with y2, etc. Is that right? Furthermore, which values should be put here? We could put in the known boundary conditions at x=a and x=b, but we don't know what the functions look like at any other points. Furthermore, how does this y relate to the function bc(ya, yb)? Are the input arguments ya, yb somehow derived from this y?
Any help with understanding the syntax of solve_bvp and its application in this case would be greatly appreciated.

For the boundary condition, bc returns the residuals of the boundary equations. That is, transform the equations so that the right side is zero, and then return the vector of the left sides. ya and yb are the state vectors at the points x=a and x=b, so for your conditions the residuals are zi(a) - Ai and yi(b) - Bi.
For the initial state, it could be anything. However, "anything" will in most cases lead to a failure to converge due to a singular Jacobian or a mesh that grows too large. What the solver does is solve a large non-linear system of equations (see collocation method). As with any such system, convergence is most assured if you start close enough to the actual solution. So for the yi you can try
constant functions with value Bi, or
linear functions with slope Ai and value Bi at x=b,
with the values for zi given by the derivatives of these functions. There is no guarantee that this works: close to zero the basis solutions are close to the constant solution and the logarithm, so the closer a is to zero, the more singular the problem becomes.
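To make this concrete, here is a minimal sketch of how the whole call could be assembled for your system. It is not a drop-in solution: the matrix L (so that Li(y1,...,y7) is the i-th entry of L @ y), the boundary values A and B, and the interval [a, b] are placeholders you would replace with your own data.

import numpy as np
from scipy.integrate import solve_bvp

n = 7
L = np.eye(n)            # placeholder: Li(y1,...,y7) = (L @ y)[i]
A = np.zeros(n)          # placeholder: dyi/dx(a) = A[i]
B = np.ones(n)           # placeholder: yi(b) = B[i]
a, b = 1.0, 2.0          # placeholder interval

def fun(x, Y):
    # Y has shape (2n, m): even rows are the yi, odd rows are zi = dyi/dx
    y = Y[0::2]
    z = Y[1::2]
    dY = np.empty_like(Y)
    dY[0::2] = z
    dY[1::2] = -z / x - L @ y
    return dY

def bc(Ya, Yb):
    # Ya, Yb are the state vectors at x=a and x=b; residuals must vanish
    return np.concatenate([Ya[1::2] - A,    # zi(a) - Ai
                           Yb[0::2] - B])   # yi(b) - Bi

x = np.linspace(a, b, 50)                        # initial mesh
Y0 = np.zeros((2 * n, x.size))
Y0[0::2] = B[:, None] + A[:, None] * (x - b)     # linear guess for yi
Y0[1::2] = A[:, None]                            # matching constant guess for zi

sol = solve_bvp(fun, bc, x, Y0)
print(sol.status, sol.message)

Note that fun receives y with shape (n, m) and must return an array of the same shape, and that ya and yb in bc are simply the first and last columns of the current approximation, so that is how they relate to the initial guess y you pass in.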

Related

How to solve a system of differential equations in Python?

How do we solve a system of linear equations in Python and NumPy? We have a system of equations with known values on the right side of the equals sign. We write all the coefficients into the matrix matrix = np.array(...), write the right side into the vector vector = np.array(...), and then use the command np.linalg.solve(matrix, vector) to find the variables.
But if I have derivatives after the equal sign and I want to do the same with the system of differential equations, how can I implement this?
(Here the lambda values are known, and I need to find A.)
P.S. I saw the command y = odeint(f, y0, t) from the scipy library, but I did not understand how to set up my own function f when I have a matrix, what the initial values y0 are, and what t is.
You can solve your system with the compact form
import numpy as np
from scipy.integrate import odeint
t = np.arange(t0, tf, h)
solX = odeint(lambda X, t: M.dot(X), X0, t)
after setting the parameters and initial condition.
For more control, also set the absolute and relative error thresholds (atol and rtol) according to the scale of the state vector and the desired accuracy.
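For illustration, a minimal self-contained version with a hypothetical 2x2 matrix M and a made-up time grid (replace M, X0, t0, tf and h with your own values):

import numpy as np
from scipy.integrate import odeint

M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # hypothetical system matrix
X0 = np.array([1.0, 0.0])            # initial condition X(t0)
t = np.arange(0.0, 10.0, 0.01)       # t0=0, tf=10, step h=0.01

solX = odeint(lambda X, t: M.dot(X), X0, t, rtol=1e-8, atol=1e-10)
# solX has shape (len(t), len(X0)); column j is the j-th component over time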

Is there a way to integrate and get an array or a function instead of all the area under the curve?

I want to integrate the following equation:
d^2[Ψ(z)] / dz^2 = A * ρ(z)
Where Ψ (unknown) and ρ (known) are 1-D arrays and A is a constant.
I have already performed a Taylor expansion, i.e.
d^2[Ψ(z_0)] / dz^2 = [Ψ(z0+Δz) - 2Ψ(z0) + Ψ(z0-Δz)] / [Δz^2]
And I successfully solved it by building a matrix.
Now, I would like to know if there is a Python (preferably) or Matlab function that can solve this function without having to do a Taylor expansion.
I have tried numpy.trapz and scipy.integrate.quad, but it seems that these functions only return the area under the curve, i.e. a number, whereas I am interested in getting an array or a function (solving for Ψ).
What you want to do is solve the differential equation. Because it is a second-order differential equation, you should rewrite it as a system of first-order ODEs. So you have to create a function like this (assuming ρ is available as rho and numpy is imported as np):
def f(z, y):
    # y[0] = Psi, y[1] = dPsi/dz
    return np.array([y[1], A*rho(z)])
where y is a vector containing Ψ in the first position and its derivative in the second position. Then, f returns a vector containing the first and second derivatives of Ψ.
Once done that, you can use scipy.integrate.solve_ivp to solve the problem:
scipy.integrate.solve_ivp(f, [z_start, z_final], y0, method='RK45', t_eval=z_eval)
where y0 contains the initial conditions of Ψ (the value of Ψ and of its derivative at z_start), and z_eval is the array of points at which you want the solution stored. The solution will be an array containing the values of Ψ and its derivative.
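A rough, self-contained sketch of that recipe, assuming ρ is only known on a grid (here a made-up example array) and is interpolated to a callable, and assuming Ψ and its derivative are known at the first grid point:

import numpy as np
from scipy.integrate import solve_ivp

A = 1.0
z = np.linspace(0.0, 3.0, 301)                 # grid on which rho is known
rho_vals = z**3                                # placeholder: your rho array
rho = lambda zz: np.interp(zz, z, rho_vals)    # interpolate to arbitrary z

def f(zz, y):
    # y[0] = Psi, y[1] = dPsi/dz
    return [y[1], A * rho(zz)]

y0 = [0.0, 0.0]                                # assumed Psi(z[0]) and Psi'(z[0])
sol = solve_ivp(f, [z[0], z[-1]], y0, method='RK45', t_eval=z)
psi = sol.y[0]                                 # Psi evaluated at the points in z

If, as in your matrix approach, you only know boundary values at both ends rather than initial values at z_start, scipy.integrate.solve_bvp is the closer fit.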
Alternatively, you could integrate rho twice as an indefinite (cumulative) integral, e.g.
import numpy as np
x=np.arange(0,3.1,0.1)
rho = x**3 # replace your rho here....
# cumulative integral of rho up to each grid point
indef_integral1 = [np.trapz(rho[0:i], x[0:i]) for i in range(1, len(rho)+1)]
# integrate a second time to get Psi (up to the two integration constants)
indef_integral2 = [np.trapz(indef_integral1[0:i], x[0:i]) for i in range(1, len(indef_integral1)+1)]
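On a reasonably recent SciPy the same cumulative integrals can be written more directly with scipy.integrate.cumulative_trapezoid (named cumtrapz in older releases); a sketch reusing the x and rho arrays from above, with both integration constants left at zero:

from scipy.integrate import cumulative_trapezoid

# first cumulative integral of rho, then a second pass to get Psi;
# add your known boundary values of Psi and Psi' as integration constants
first = cumulative_trapezoid(rho, x, initial=0.0)
psi = cumulative_trapezoid(first, x, initial=0.0)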

Exponential fit with the least squares Python

I have a very specific task, where I need to find the slope of my exponential function.
I have two arrays, one denoting the wavelength range between 400 and 750 nm, the other the absorption spectrum. x = wavelengths, y = absorption.
My fit function should look something like that:
y_mod = float(a_440) * np.exp(-S*(x - 440.))
where S is the slope and in the image equals 0.016, which should be in the range of S values I should get (+/- 0.003). a_440 is the reference absorption at 440 nm, x is the wavelength.
[Modelled vs. original plot]
I would like to know how to define my function in order to get an exponential fit (not on log-transformed quantities) without guessing the S value beforehand.
What I've tried so far was to define the function in such way:
def func(x, a, b):
    return a * np.exp(-b * (x-440))
And it gives pretty nice matches (see the fitted vs. original plot).
What I'm not sure is whether this approach is correct or should I do it differently?
How would one also use least squares or absolute differences in y as the minimization criterion, in order to reduce the effect of outliers?
Is it possible to also add random noise to the data and recompute the fit?
Your situation is the same as the one described in the documentation for scipy's curve_fit.
The problem you're running into is that your definition of the function accepts only one argument, when it should receive three: x (the independent variable where the function is evaluated), plus a_440 and S.
Cleaning it up a bit, the function should look more like this:
def func(x, A, S):
    return A*np.exp(-S*(x-440.))
You might run into a warning about the covariance matrix. You can solve that by providing a decent starting point to curve_fit through the argument p0, which takes a list. In this case, for example, p0=[1, 0.01], so the fitting call would look like the following:
curve_fit(func, x, y, p0=[1,0.01])
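For completeness, a small self-contained sketch with synthetic data (all numbers are made up for illustration). It also shows one way to add random noise and refit, and how to read 1-sigma uncertainties on A and S off the covariance matrix:

import numpy as np
from scipy.optimize import curve_fit

def func(x, A, S):
    return A * np.exp(-S * (x - 440.0))

x = np.linspace(400.0, 750.0, 200)                   # wavelength grid (made up)
y_clean = func(x, 1.0, 0.016)                        # "true" spectrum for the example
rng = np.random.default_rng(0)
y = y_clean + rng.normal(scale=0.01, size=x.size)    # add random noise

popt, pcov = curve_fit(func, x, y, p0=[1, 0.01])
A_fit, S_fit = popt
perr = np.sqrt(np.diag(pcov))                        # 1-sigma uncertainties on A and S

For outliers, curve_fit uses plain least squares; if they are a problem, scipy.optimize.least_squares with a robust loss (e.g. loss='soft_l1') is one option.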

Reducing difference between two graphs by optimizing more than one variable in MATLAB/Python?

Suppose h is a function of x, y, z and t, and it gives us a simulated graph line (t, h). At the same time we also have an observed graph (observed values of h against t). How can I reduce the difference between the observed and simulated (t, h) graphs by optimizing the values of x, y and z? I want to change the simulated graph so that it matches the observed graph more and more closely, in MATLAB/Python. In the literature I have read that people have done the same thing with the Levenberg-Marquardt algorithm, but I don't know how to do it.
You are actually trying to fit the parameters x,y,z of the parametrized function h(x,y,z;t).
MATLAB
You're right that in MATLAB you should use either lsqcurvefit from the Optimization Toolbox or fit from the Curve Fitting Toolbox (I prefer the latter).
Looking at the documentation of lsqcurvefit:
x = lsqcurvefit(fun,x0,xdata,ydata);
It says in the documentation that you have a model F(x,xdata) with coefficients x and sample points xdata, and a set of measured values ydata. The function returns the least-squares parameter set x, with which your function is closest to the measured values.
Fitting algorithms usually need starting points; some implementations can choose them randomly, but in the case of lsqcurvefit this is what x0 is for. If you have
h = @(x,y,z,t) ... %// actual function here
t_meas = ... %// actual measured times here
h_meas = ... %// actual measured data here
then in the conventions of lsqcurvefit,
fun <--> @(params,t) h(params(1),params(2),params(3),t)
x0 <--> starting guess for [x,y,z]: [x0,y0,z0]
xdata <--> t_meas
ydata <--> h_meas
Your function h(x,y,z,t) should be vectorized in t, such that for vector input in t the return value is the same size as t. Then the call to lsqcurvefit will give you the optimal set of parameters:
x = lsqcurvefit(@(params,t) h(params(1),params(2),params(3),t),[x0,y0,z0],t_meas,h_meas);
h_fit = h(x(1),x(2),x(3),t_meas); %// best guess from curve fitting
Python
In python, you'd have to use the scipy.optimize module, and something like scipy.optimize.curve_fit in particular. With the above conventions you need something along the lines of this:
import scipy.optimize as opt
popt, pcov = opt.curve_fit(lambda t,x,y,z: h(x,y,z,t), t_meas, h_meas, p0=[x0,y0,z0])
Note that the p0 starting array is optional, but all parameters will be set to 1 if it's missing. The result you need is the popt array, containing the optimal values for [x,y,z]:
x,y,z = popt
h_fit = h(x,y,z,t_meas)
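Note that for an unconstrained problem curve_fit uses the Levenberg-Marquardt algorithm by default, which is the method you mention. A toy end-to-end sketch, with a made-up h purely for illustration:

import numpy as np
import scipy.optimize as opt

def h(x, y, z, t):
    # placeholder model; replace with your actual simulation, vectorized in t
    return x * np.exp(-y * t) + z

t_meas = np.linspace(0.0, 5.0, 50)
h_meas = h(2.0, 0.7, 0.3, t_meas) + np.random.normal(scale=0.02, size=t_meas.size)

x0, y0, z0 = 1.0, 1.0, 0.0                  # starting guesses
popt, pcov = opt.curve_fit(lambda t, x, y, z: h(x, y, z, t),
                           t_meas, h_meas, p0=[x0, y0, z0])
x_fit, y_fit, z_fit = popt
h_fit = h(x_fit, y_fit, z_fit, t_meas)      # best simulated curve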

Derivatives blow up in python

I am trying to find higher order derivatives of a dataset (x,y). x and y are 1D arrays of length N.
Let's say I generate them as :
xder0=np.linspace(0,10,1000)
yder0=np.sin(xder0)
I define the derivative function, which takes in two arrays (x, y) and returns (x1, y1), where y1 is the derivative calculated at each index as (y[i+1]-y[i])/(x[i+1]-x[i]) and x1 is just the mean of x[i+1] and x[i].
Here is the function that does it:
def deriv(x, y):
    delx = np.zeros((len(x)-1), dtype=np.longdouble)
    ydiff = np.zeros((len(x)-1), dtype=np.longdouble)
    for i in range(len(x)-1):
        delx[i] = (x[i+1]+x[i])/2.0
        ydiff[i] = (y[i+1]-y[i])/(x[i+1]-x[i])
    return delx, ydiff
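For reference, the same differencing can be written without the explicit loop; a vectorized equivalent would be:

def deriv(x, y):
    xm = (x[1:] + x[:-1]) / 2.0      # midpoints of consecutive x values
    dydx = np.diff(y) / np.diff(x)   # finite-difference slope on each interval
    return xm, dydx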
Now to calculate the first derivative, I call this function as:
xder1, yder1 = deriv(xder0, yder0)
Similarly for second derivative, I call this function giving first derivatives as input:
xder2, yder2 = deriv(xder1, yder1)
And it goes on:
xder3, yder3 = deriv(xder2, yder2)
xder4, yder4 = deriv(xder3, yder3)
xder5, yder5 = deriv(xder4, yder4)
xder6, yder6 = deriv(xder5, yder5)
xder7, yder7 = deriv(xder6, yder6)
xder8, yder8 = deriv(xder7, yder7)
xder9, yder9 = deriv(xder8, yder8)
Something peculiar happens after I reach order 7: the 7th-order derivative becomes very noisy! The earlier derivatives are all either sine or cosine functions, as expected, but the 7th order is a noisy sine, and hence all derivatives after that blow up.
Any idea what is going on?
This is a well-known stability issue with numerical interpolation using equally-spaced points; see the answers at http://math.stackexchange.com.
To overcome this problem you have to use non-equally-spaced points, such as the roots of a Legendre polynomial. The instability occurs because of the lack of information at the boundaries, so a higher concentration of points near the boundaries is required, as given by the roots of, say, Legendre polynomials or others with similar properties, such as Chebyshev polynomials.
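One way to act on this advice without changing the sampling is to fit a global Chebyshev series to the data and differentiate it analytically, instead of differencing the samples repeatedly; a sketch (the degree 30 is an arbitrary choice for this example):

import numpy as np

xder0 = np.linspace(0, 10, 1000)
yder0 = np.sin(xder0)

# fit a Chebyshev series to the samples, then take its 7th derivative analytically
cheb = np.polynomial.chebyshev.Chebyshev.fit(xder0, yder0, deg=30)
yder7_smooth = cheb.deriv(7)(xder0)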
