Differential equations with coupled derivatives in Python

I am trying to solve a set of differential equations using sympy and scipy, but cannot figure out how to bring them in the appropriate form. In brief, I have a set of two coupled second order differential equations that I can re-write into a system of four first order differential equations of the form:
dot(x1) = x2
dot(x2) = f(x1, x2, x3, x4, dot(x4))
dot(x3) = x4
dot(x4) = g(x1, x2, x3, x4, dot(x2))
My problem is the coupling of the first derivatives. In Matlab, I can use a time- and state-dependent mass matrix to solve this, but in Python I have only seen approaches with a time-invariant mass matrix. Any advice would be much appreciated.
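One workaround is to write the system as M(t, y) @ ydot = F(t, y) and solve that linear system for ydot inside the right-hand-side callback, so the "mass matrix" may depend on time and state just like in Matlab. Below is a minimal sketch with hypothetical coefficients a, b, c, d (the real f and g are not given in the question): the couplings x2' = a*x1 + c*x4' and x4' = b*x3 + d*x2' are collected into a state-dependent matrix that is inverted at every evaluation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coupling coefficients; replace with the real f and g.
a, b, c, d = -1.0, -2.0, 0.1, 0.2

def rhs(t, y):
    x1, x2, x3, x4 = y
    # Mass matrix M(t, y): rows encode x2' - c*x4' = a*x1 and
    # x4' - d*x2' = b*x3. M may depend on t and y.
    M = np.array([[1, 0, 0,  0],
                  [0, 1, 0, -c],
                  [0, 0, 1,  0],
                  [0, -d, 0, 1]], dtype=float)
    F = np.array([x2, a * x1, x4, b * x3])
    # Solve M @ ydot = F for ydot at this point in time/state.
    return np.linalg.solve(M, F)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.5, 0.0])
```

This only requires that M stays invertible along the trajectory (here, 1 - c*d != 0).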

Related

Solve Integro-Differential equation in Python

I am trying to solve a physics problem that involves the following equation,
As you can see, sigma could be any function and K1 is a Bessel function. The differential equation has to be solved for Y(x), but x is involved in the integral. In addition, we have another parameter m being a free parameter in the integral and outside the integral. Do you have any idea to solve this?
Sympy does not solve this since it has no analytical solution.
My idea was the following: write the integral as a function of x and m.
from scipy.integrate import quad
import numpy as np

def get_integral(x, m):
    integrand = lambda s: f(s, x, m)
    integral = quad(integrand, 4 * m**2, np.inf)[0]
    return integral
Then write the derivative,
def dYdx(x, Y, m):
    x1 = f(x, m)
    x2 = get_integral(x, m)
    x3 = Y**2 - Y_0(x, m)**2
    return x1 * x2 * x3
So I use odeint to solve it like,
solution = odeint(dYdx, y0=0, t=x_array, tfirst=True, args=(m,))
But as you can see, I need to solve it for different values of x, s, and m, so that I end up with an array of the structure
solution_array = [x, Y, m, integral_value]
for each evaluation of x and m.
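The fragments above can be assembled into a runnable sketch. Note that f and Y_0 below are placeholder stand-ins (assumptions, since the actual expressions are not in the question), with scipy.special.kn playing the role of the Bessel function K1; only the overall structure (quad inside the ODE right-hand side, solved with odeint) is the point.

```python
import numpy as np
from scipy.integrate import quad, odeint
from scipy.special import kn  # modified Bessel function of the second kind

# Placeholder stand-ins for the problem's f and Y_0 -- replace with the
# real physics expressions.
def f(s, x, m):
    return np.exp(-s) * kn(1, np.sqrt(s) * x) / (1.0 + m)

def Y_0(x, m):
    return 1.0 / (1.0 + m * x)

def get_integral(x, m):
    # Integrate over s from 4*m**2 to infinity for fixed x and m.
    integrand = lambda s: f(s, x, m)
    return quad(integrand, 4 * m**2, np.inf)[0]

def dYdx(x, Y, m):
    return get_integral(x, m) * (Y**2 - Y_0(x, m)**2)

m = 0.5
x_array = np.linspace(0.1, 5.0, 50)
solution = odeint(dYdx, y0=0.0, t=x_array, tfirst=True, args=(m,))
```

To collect [x, Y, m, integral_value] rows, you can loop over the desired m values, re-run odeint for each, and evaluate get_integral(x, m) at the mesh points afterwards.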

Solve system of coupled differential equations using scipy's solve_bvp

I want to solve a boundary value problem consisting of 7 coupled 2nd order differential equations. There are 7 functions, y1(x),...y7(x), and each of them is described by a differential equation of the form
d^2yi/dx^2 = -(1/x)*dyi/dx - Li(y1,...,y7) for 0 < a <= x <= b,
where Li is a function that gives a linear combination of y1,...,y7. We have boundary conditions for the first-order derivatives dyi/dx at x=a and for the functions yi at x=b:
dyi/dx(a) = Ai,
yi(b) = Bi.
So we can rewrite this as a system of 14 coupled 1st order ODEs:
dyi/dx = zi,
dzi/dx = -(1/x)*zi - Li(y1,...,y7),
zi(a) = Ai,
yi(b) = Bi.
I want to solve this system of equations using the Python function scipy.integrate.solve_bvp. However, I have trouble understanding what exactly should be the input arguments for the function as described in the documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html).
The first argument that this function requires is a callable fun(x,y). As I understand it, the input argument y must be an array consisting of the values of yi and zi, and the function should output the values of zi and dzi/dx. So my function would look like this (pseudocode):
def fun(x, y):
    y1, z1, y2, z2, ..., y7, z7 = y
    return [z1, -(1/x)*z1 - L1(y1, ..., y7),
            ...,
            z7, -(1/x)*z7 - L7(y1, ..., y7)]
Is that correct?
Then, the second argument for solve_bvp is a callable bc(ya,yb), which should evaluate the residuals of the boundary conditions. Here I really have trouble understanding how to define such a function. It is also not clear to me what exactly the arrays ya and yb are and what shape they should have?
The third argument is x, which is the 'initial mesh' with a shape (m,). Should x consist only of the points a and b, which is where we know the boundary conditions? Or should it be something else?
Finally, the fourth argument is y, which is the 'initial guess for the function values at the mesh nodes' and has shape (n,m). Its ith column corresponds with x[i]. I guess that the 1st row corresponds with y1, the 2nd with z1, the 3rd with y2, etc. Is that right? Furthermore, which values should be put here? We could put in the known boundary conditions at x=a and x=b, but we don't know what the functions look like at any other points. Furthermore, how does this y relate to the function bc(ya,yb)? Are the input arguments ya,yb somehow derived from this y?
Any help with understanding the syntax of solve_bvp and its application in this case would be greatly appreciated.
For the boundary condition, bc returns the residuals of the equations. That is, transform the equations so that the right side is zero, and then return the vector of the left sides. ya,yb are the state vectors at the points x=a,b.
For the initial state, it could be anything. However, "anything" will in most cases lead to a failure to converge, due to a singular Jacobian or a mesh that grows too large. What the solver does is solve a large non-linear system of equations (see collocation method). As with any such system, convergence is most assured if you start close enough to the actual solution. So for the yi you can try
constant functions with value Bi
linear functions with slope Ai and value Bi at x=b
with the values for zi taken from the derivatives of these functions. There is no guarantee that this works, especially since close to zero the basis solutions are approximately the constant solution and the logarithm: the closer a is to zero, the more singular the problem becomes.
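To make the calling convention concrete, here is a minimal sketch for a single yi with L(y) = y and hypothetical boundary data a, b, A, B (all assumptions, since the real Li and constants are not given): fun stacks the right-hand sides row-wise, bc returns the two residuals, and the initial guess is a constant function with value B.

```python
import numpy as np
from scipy.integrate import solve_bvp

a, b = 0.5, 2.0   # domain endpoints (hypothetical values)
A, B = 0.0, 1.0   # boundary data: z(a) = y'(a) = A, y(b) = B

def fun(x, y):
    # y[0] is y_i, y[1] is z_i = dy_i/dx; here L(y) = y for illustration.
    return np.vstack([y[1], -(1.0 / x) * y[1] - y[0]])

def bc(ya, yb):
    # ya, yb are the state vectors at x = a and x = b.
    # Residuals: z(a) - A = 0 and y(b) - B = 0.
    return np.array([ya[1] - A, yb[0] - B])

x = np.linspace(a, b, 11)          # initial mesh: more than just a and b
y_guess = np.zeros((2, x.size))    # row 0: y, row 1: z; columns follow x
y_guess[0] = B                     # constant guess y = B, hence z = 0

sol = solve_bvp(fun, bc, x, y_guess)
```

For the full problem, fun would return 14 rows, bc would return 14 residuals, and y_guess would have 14 rows ordered to match the unpacking in fun.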

How to create interaction variable between only 1 variable with all other variables

When using the SciKit learn PolynomialFeatures package it is possible to do the following:
Take features: x1, x2, x3
Create interaction variables: [x1, x2, x3, x1x2, x2x3, x1x3] using PolynomialFeatures(interaction_only=True)
My problem is that I only want the interaction terms between x1 and all other terms meaning:
[x1, x2, x3, x1x2, x1x3], I do not want x2x3.
Is it possible to do this?
You could just make them yourself, no? PolynomialFeatures doesn't do anything particularly innovative.

Is there a way to setup and solve multiple numerical integration in scipy like in mathematica?

I would like to solve the following formulas numerically in Python;
In Mathematica, you can input multiple differential equations and solve it at the same time. Is there a way to do the similar thing with Scipy?
edit: yes, I have looked at scipy.integrate.odeint already, but I'm still not sure how I can solve multiple equations that depend on each other at the same time. Does anyone have a suggestion for that?
Eventually I figured out myself and I'm writing down for other people who might be clueless like me;
in odeint, you give three parameters: model, y0, and t, where model is a function that takes in y and t and returns dydt, y0 is the initial value of y, and t is the array of points you integrate over. If you have multiple differential equations that depend on each other, you can simply pass all of them to odeint together. In my case,
import numpy as np
from scipy.integrate import odeint

t = np.linspace(0, 20)            # range of t
y0 = [No_0, Na_0, Ni_0, Nn_0]     # initial condition for each N

def model(y, t):
    No, Na, Ni, Nn = y
    dNodt = -k_oa * No
    dNadt = k_oa * No - k_ai * Na
    dNidt = k_ai * Na - k_in * Ni
    dNndt = k_in * Ni
    return [dNodt, dNadt, dNidt, dNndt]

y = odeint(model, y0, t)
You can define multiple differential equations you want to solve within the model you define, and pass in to odeint along with the initial values for all species.
Have a look at SciPy's integration package:
https://docs.scipy.org/doc/scipy/reference/integrate.html
specifically at odeint, which solves systems of ODEs:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html#scipy.integrate.odeint

minimise function with constraints on non-optimisation variables

I need to optimise a function with a constraint on a variable which is calculated by solving a set of equations.
The optimisation parameters are the input variables to the system of equations and one of the calculated variables have a constraint.
Here is an extremely simplified example:
def opt(x):
    x1, x2, x3 = x
    z1 = x1 + x2 + x3
    z2 = z1**2
    ...
    z100 = f(x1, x2, x3, z1, ..., z99)
    return some_objective_function

minimise opt(x)
s.t. z100 < a
I am familiar with scipy.optimize.minimize but I can not set a constraint on z100 and it is extremely difficult to calculate a function for z100 with only the variables x1, x2, x3.
This is very broad, but some remarks:
and it is extremely difficult to calculate a function for z100 with only the variables x1, x2, x3
Where is the problem? Just use the args argument in the call to minimize and you can use all the objects you want.
I am familiar with scipy.optimize.minimize but I can not set a constraint on z100
This implies that z100 depends on the optimization variables (which makes it a constraint on optimization variables, contrary to the title; otherwise there would be no reason to use it at all, since only the optimization variables are being modified and no a-priori constraint logic is affected).
Just introduce an auxiliary-variable y0 and constraints:
y0 < a
y0 == f(...)
But: this might contradict the solver's assumptions about smoothness (depending on f, of course). So you should probably think again about your problem (and there is nothing more for us to do, as the structure of the problem is not posted).
And: equality constraints are sometimes a bit of a hassle in terms of numerical trouble.
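In practice, the simplest route is often to skip the auxiliary variable and make the constraint function re-run the chain z1 ... z100 itself. A minimal sketch, with a hypothetical chain() standing in for the real z100 computation and a placeholder objective:

```python
import numpy as np
from scipy.optimize import minimize

a = 10.0  # upper bound on the intermediate quantity (hypothetical)

def chain(x):
    # Stand-in for the chain z1 ... z100; returns the final value.
    x1, x2, x3 = x
    z1 = x1 + x2 + x3
    z2 = z1**2
    return z2 + x1 * x3   # hypothetical "z100"

def objective(x):
    return np.sum(np.asarray(x)**2)   # placeholder objective

# Inequality constraint in scipy's convention: fun(x) >= 0,
# so "z100 < a" becomes a - chain(x) >= 0.
con = {'type': 'ineq', 'fun': lambda x: a - chain(x)}
res = minimize(objective, x0=[0.5, 0.5, 0.5],
               constraints=[con], method='SLSQP')
```

This recomputes the chain once per constraint evaluation; if that chain is expensive, caching the intermediate zi between the objective and the constraint is worth considering.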
