Python - Solve time-dependent matrix differential equation

I am facing some problems solving a time-dependent matrix differential equation.
The problem is that the time-dependent coefficient is not just following some time-dependent shape, rather it is the solution of another differential equation.
Up until now, I have considered the trivial case where my coefficient G(t) is just G(t)=pulse(t) where this pulse(t) is a function I define. Here is the code:
import numpy as np
from scipy.integrate import solve_ivp

# Matrix differential equation (kappa, E_0, t0, tau are parameters defined elsewhere)
def Leq(t, v, pulse):
    v = v.reshape(4, 4)                              # covariance matrix
    M = np.array([[-kappa, 0, E_0*pulse(t), 0],      # coefficient matrix
                  [0, -kappa, 0, -E_0*pulse(t)],
                  [E_0*pulse(t), 0, -kappa, 0],
                  [0, -E_0*pulse(t), 0, -kappa]])
    Drift = kappa*np.ones((4, 4), float)             # constant term
    dv = M.dot(v) + v.dot(M.T) + Drift               # dv/dt = M v + v M^T + D
    return dv.reshape(-1)                            # return vectorized matrix
# Pulse shape
def Gaussian(t):
    return np.exp(-(t - t0)**2.0 / tau**2.0)
# scipy solver
cov0 = np.zeros((4, 4), float)      # initial matrix
cov0 = cov0.reshape(-1)             # vectorize initial condition
Tmax = 20                           # max value for time
Nmax = 10000                        # number of steps
dt = Tmax/Nmax                      # increment of time
t = np.linspace(0.0, Tmax, Nmax+1)
Gaussian_sol = solve_ivp(Leq, [min(t), max(t)], cov0, t_eval=t, args=(Gaussian,))
And I get a nice result. The problem is that it is not exactly what I need. What I need is dot(G(t)) = -kappa*G(t) + pulse(t), i.e. the coefficient is itself the solution of another differential equation.
I have tried to implement this differential equation in a sort of vectorized way in Leq by returning another parameter G(t) that would be fed to M, but I was getting problems with the dimensions of the arrays.
Any idea of how should I proceed?
Thanks,

In principle you have the right idea; you just have to split and join the state and derivative vectors.
def Leq(t, u, pulse):
    v = u[:16].reshape(4, 4)   # covariance matrix
    G = u[16]                  # scalar coefficient G(t)
    ...                        # compute dv from v and G, and dG = -kappa*G + pulse(t)
    return np.concatenate([dv.flatten(), [dG]])
The initial vector likewise has to be such a composite, e.g. np.concatenate([cov0.flatten(), [G0]]).
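Putting that together, a minimal runnable sketch of the coupled system could look like this (the parameter values for kappa, E_0, t0, tau below are placeholders, not values from the question):
import numpy as np
from scipy.integrate import solve_ivp

kappa, E_0, t0, tau = 1.0, 1.0, 10.0, 2.0   # example values

def Gaussian(t):
    return np.exp(-(t - t0)**2.0 / tau**2.0)

def Leq(t, u, pulse):
    v = u[:16].reshape(4, 4)                # covariance matrix
    G = u[16]                               # coefficient G(t)
    M = np.array([[-kappa, 0, E_0*G, 0],
                  [0, -kappa, 0, -E_0*G],
                  [E_0*G, 0, -kappa, 0],
                  [0, -E_0*G, 0, -kappa]])
    Drift = kappa*np.ones((4, 4), float)
    dv = M.dot(v) + v.dot(M.T) + Drift      # dv/dt = M v + v M^T + D
    dG = -kappa*G + pulse(t)                # dG/dt = -kappa G + pulse(t)
    return np.concatenate([dv.flatten(), [dG]])

u0 = np.concatenate([np.zeros(16), [0.0]])  # v(0) = 0 and G(0) = 0
t = np.linspace(0.0, 20.0, 10001)
sol = solve_ivp(Leq, [t[0], t[-1]], u0, t_eval=t, args=(Gaussian,))
G_of_t = sol.y[16]                          # the coefficient's own trajectory
v_of_t = sol.y[:16].reshape(4, 4, -1)       # covariance matrices over time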

Related

How to couple a differential equation with a normal equation?

My problem is the following: I have a differential equation for X in dependence of z, dXdz. This equation depends on a temperature T. This temperature T in turn depends on dXdz through a separate equation, equation 2. Technically, I could calculate the derivative dXdz based on the value of T, and then calculate the next value of T from the second equation.
My question is, how do I implement this? For dXdz I can use the solve_ivp function from SciPy, but how can I implement the second equation?
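One way to do this is to resolve the coupling inside the right-hand-side function passed to solve_ivp: at each z, solve the algebraic relation T = g(z, X, dXdz) together with dXdz = f(z, X, T) for T, then return the resulting derivative. A minimal sketch, where f, g and all constants are made-up stand-ins for equations 1 and 2 (not taken from the question):
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def f(z, X, T):            # equation 1: dX/dz as a function of X and T (assumed form)
    return -0.1 * X * T

def g(z, X, dXdz):         # equation 2: T as a function of dX/dz (assumed form)
    return 300.0 + 50.0 * dXdz

def rhs(z, y):
    X = y[0]
    # solve the algebraic loop T = g(z, X, f(z, X, T)) for T at this z
    T = brentq(lambda T: T - g(z, X, f(z, X, T)), 0.0, 1000.0)
    return [f(z, X, T)]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0])   # X(0) = 1, assumed
If equation 2 gives T explicitly, the root solve can be replaced by direct substitution.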

How to solve a system of differential equations in Python?

How do we solve a system of linear equations in Python and NumPy? We have a system of equations with known values on the right side of the equals sign. We write all the coefficients into the matrix matrix = np.array(...), write the right side into the vector vector = np.array(...), and then use the command np.linalg.solve(matrix, vector) to find the variables.
But if I have derivatives after the equals sign and I want to do the same with a system of differential equations, how can I implement this?
(Here the lambdas are known values, and I need to find A.)
P.S. I saw the command y = odeint(f, y0, t) from the scipy library, but I did not understand how to set up my own function f if I have a matrix there, what the initial values y0 are, and what t is.
You can solve your system in the compact form
from numpy import arange
from scipy.integrate import odeint

t = arange(t0, tf, h)
solX = odeint(lambda X, t: M.dot(X), X0, t)
after setting the parameters and the initial condition.
For more control, also set the absolute and relative error tolerances (atol and rtol) according to the scale of the state vector and the desired accuracy.
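A complete version of this pattern, with an assumed 2x2 matrix and made-up lambda values (the question did not give the actual system):
import numpy as np
from scipy.integrate import odeint

lam1, lam2 = 0.5, 1.2                 # assumed known lambda values
M = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])        # coefficient matrix of dA/dt = M A
X0 = np.array([1.0, 0.0])             # initial condition
t = np.arange(0.0, 10.0, 0.01)        # t0 = 0, tf = 10, h = 0.01

solX = odeint(lambda X, t: M.dot(X), X0, t, rtol=1e-8, atol=1e-8)
# solX[i] is the state vector A at time t[i]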

Is there a way to integrate and get an array or a function instead of all the area under the curve?

I want to integrate the following equation:
d^2[Ψ(z)] / dz^2 = A * ρ(z)
where Ψ (unknown) and ρ (known) are 1-D arrays and A is a constant.
I have already performed a Taylor expansion, i.e.
d^2[Ψ(z0)] / dz^2 ≈ [Ψ(z0+Δz) - 2Ψ(z0) + Ψ(z0-Δz)] / Δz^2
and successfully solved it by building a matrix.
Now I would like to know if there is a Python (preferably) or Matlab function that can solve this equation without having to do a Taylor expansion.
I have tried numpy.trapz and scipy.integrate.quad, but it seems that these functions only return the area under the curve, i.e. a number, whereas I am interested in getting an array or a function (solving for Ψ).
What you want to do is solve the differential equation. Because it is a second-order differential equation, you should rewrite it as a system of first-order ODEs. So you have to create a function like this (assuming ρ = rho):
def f(z, y):
    return np.array([y[1], A*rho(z)])
where y is a vector containing Ψ in the first position and its derivative in the second position. f then returns a vector containing the first and second derivatives of Ψ.
Once that is done, you can use scipy.integrate.solve_ivp to solve the problem:
scipy.integrate.solve_ivp(f, [z_start, z_final], y0, method='RK45', t_eval=z_eval)
where y0 contains the initial conditions of Ψ (the value of Ψ and its derivative at z_start) and z_eval is the array of points at which you want the solution stored (note that it is passed as the t_eval keyword). The solution will be an array containing the values of Ψ and its derivative.
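Put together, a runnable version; A, rho, the z-range, and the initial values below are placeholders to be replaced with your data:
import numpy as np
from scipy.integrate import solve_ivp

A = 1.0                        # assumed constant
rho = lambda z: z**3           # assumed known density; replace with yours

def f(z, y):
    return np.array([y[1], A*rho(z)])

z_eval = np.linspace(0.0, 3.0, 31)
y0 = [0.0, 0.0]                # assumed Ψ(z_start) and Ψ'(z_start)
sol = solve_ivp(f, [z_eval[0], z_eval[-1]], y0, method='RK45', t_eval=z_eval)
psi = sol.y[0]                 # the array of Ψ values you were after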
You could also integrate rho twice using a cumulative (indefinite) integral, e.g.
import numpy as np
x = np.arange(0, 3.1, 0.1)
rho = x**3  # replace your rho here
# element i is the integral from x[0] up to x[i]
indef_integral1 = [np.trapz(rho[:i+1], x[:i+1]) for i in range(len(x))]
indef_integral2 = [np.trapz(indef_integral1[:i+1], x[:i+1]) for i in range(len(x))]
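The same cumulative integrals can be written more concisely with SciPy's cumulative_trapezoid (equivalent to the loop above, not part of the original answer):
from scipy.integrate import cumulative_trapezoid
indef_integral1 = cumulative_trapezoid(rho, x, initial=0)
indef_integral2 = cumulative_trapezoid(indef_integral1, x, initial=0)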

scipy.optimize.least_squares: minimize a sum of squared residuals

I have a least squares minimization problem whose objective is a sum of squared residual terms, where the parameters I want to optimize over are x and everything else is known.
scipy.optimize.least_squares has the following form:
scipy.optimize.least_squares(fun, x0)
where x0 is an initial condition and fun is a "Function which computes the vector of residuals"
After reading the documentation, I'm a little confused about what fun wants me to return.
If I do the summation inside fun, then I'm afraid that it would compute the RHS, which is not equivalent to the LHS (...or is it, when it comes to minimization?)
Thanks for any assistance!
According to the documentation of scipy.optimize.least_squares, the argument fun must return the vector of residuals from which the minimization proceeds, i.e. a one-dimensional array of shape (m,), where m is the number of residual terms. Do not square or sum the residuals inside fun: least_squares handles the squaring and summation on its own, so only the raw residuals must be supplied. (Returning the scalar sum of squared residuals is technically accepted as a vector of shape (1,), but then you are minimizing the square of that sum, which defeats the purpose of a least-squares solver.)
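A small sketch of the intended usage; the exponential model and the data here are invented for illustration:
import numpy as np
from scipy.optimize import least_squares

t_data = np.linspace(0.0, 1.0, 20)
y_data = 2.0*np.exp(-1.5*t_data)           # assumed data from a model y = a*exp(b*t)

def fun(x):
    # return the raw residuals; least_squares squares and sums them itself
    return x[0]*np.exp(x[1]*t_data) - y_data

res = least_squares(fun, x0=[1.0, -1.0])
print(res.x)                               # converges to approximately [2.0, -1.5]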

Stochastic integration with python

I want to numerically solve integrals that contain white noise.
Mathematically, white noise can be described by a variable X(t), which is a random variable with time average Avg[X(t)] = 0 and correlation function Avg[X(t)X(t')] = delta_distribution(t - t').
A simple example would be to calculate the integral over X(t) from t=0 to t=1. On average this is of course zero, but what I need are different realizations of this integral.
The problem is that this does not work with scipy.integrate.quad().
Are there any packages for python that deal with stochastic integrals?
This is a good starting point for numerical SDE methods: http://math.gmu.edu/~tsauer/pre/sde.pdf.
Here is a simple numpy solver for the stochastic differential equation dX_t = a(t, X_t) dt + b(t, X_t) dW_t, which I wrote for a class project last year. It is based on the forward Euler method for ordinary differential equations, and in practice it is fairly widely used for solving SDEs.
import numpy as np

def euler_maruyama(a, b, x0, t):
    N = len(t)
    x = np.zeros((N, len(x0)))
    x[0] = x0
    for i in range(N-1):
        dt = t[i+1] - t[i]
        dWt = np.random.normal(0, np.sqrt(dt))  # Brownian increment: mean 0, variance dt
        x[i+1] = x[i] + a(t[i], x[i])*dt + b(t[i], x[i])*dWt
    return x
Essentially, at each timestep, the deterministic part of the function is integrated using forward Euler, and the stochastic part is integrated by generating a normal random variable dWt with mean 0 and variance dt (i.e. standard deviation sqrt(dt)) and integrating the stochastic part with respect to it.
The reason we generate dWt like this comes from the definition of Brownian motion: if W is a Brownian motion, then W_t - W_s is normally distributed with mean 0 and variance t - s. So dWt is a discretization of the change in W over a small time interval.
This is the docstring for the function above:
Parameters
----------
a : callable a(t, X_t)
    t is scalar time and X_t is vector position
b : callable b(t, X_t)
    where t is scalar time and X_t is vector position
x0 : ndarray
    the initial position
t : ndarray
    list of times at which to evaluate trajectory

Returns
-------
x : ndarray
    positions of trajectory at each time in t
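As a quick usage example tied back to the question: the integral of white noise from t=0 to t=1 corresponds to the SDE dX_t = 0*dt + 1*dW_t, so one realization of the integral is the endpoint of one simulated path (the grid size here is chosen arbitrarily):
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
a = lambda t, x: 0.0                 # no drift
b = lambda t, x: 1.0                 # unit noise intensity
path = euler_maruyama(a, b, np.array([0.0]), t)
sample = path[-1, 0]                 # one realization; distributed N(0, 1)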
