How do we solve a system of linear equations in Python and NumPy:
We have a system of equations, with the known values on the right side of the equal signs. We write all the coefficients into a matrix, matrix = np.array(...), write the right side into a vector, vector = np.array(...), and then use the command np.linalg.solve(matrix, vector) to find the variables.
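For example, for the system 3x + y = 9 and x + 2y = 8:

import numpy as np

matrix = np.array([[3, 1],
                   [1, 2]])
vector = np.array([9, 8])
print(np.linalg.solve(matrix, vector))  # [2. 3.]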
But if I have derivatives after the equal sign and I want to do the same with a system of differential equations, how can I implement this?
(Here the lambda values are known, and I need to find A.)
P.S. I saw the command y = odeint(f, y0, t) from the scipy library, but I did not understand how to set up my own function f if I have a matrix there, what the initial values y0 are, and what t is.
You can solve your system with the compact form
import numpy as np
from scipy.integrate import odeint

t = np.arange(t0, tf, h)
solX = odeint(lambda X, t: M.dot(X), X0, t)
after setting the parameters and initial condition.
For advanced use, also set the absolute and relative error thresholds (the atol and rtol arguments) according to the scale of the state vector and the desired accuracy.
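For concreteness, here is a minimal runnable sketch; the matrix M, initial state X0, and time parameters are made-up placeholders to replace with your own values:

import numpy as np
from scipy.integrate import odeint

M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # assumed example coefficient matrix
X0 = np.array([1.0, 0.0])     # assumed initial state
t0, tf, h = 0.0, 10.0, 0.01   # assumed time span and step

t = np.arange(t0, tf, h)
solX = odeint(lambda X, t: M.dot(X), X0, t)  # solX[i] is the state at t[i]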
I want to solve a boundary value problem consisting of 7 coupled 2nd order differential equations. There are 7 functions, y1(x),...y7(x), and each of them is described by a differential equation of the form
d^2yi/dx^2 = -(1/x)*dyi/dx - Li(y1,...,y7) for 0 < a <= x <= b,
where Li is a function that gives a linear combination of y1,...,y7. We have boundary conditions for the first-order derivatives dyi/dx at x=a and for the functions yi at x=b:
dyi/dx(a) = Ai,
yi(b) = Bi.
So we can rewrite this as a system of 14 coupled 1st order ODEs:
dyi/dx = zi,
dzi/dx = -(1/x)*zi - Li(y1,...,y7),
zi(a) = Ai,
yi(b) = Bi.
I want to solve this system of equations using the Python function scipy.integrate.solve_bvp. However, I have trouble understanding what exactly should be the input arguments for the function as described in the documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html).
The first argument that this function requires is a callable fun(x,y). As I understand it, the input argument y must be an array consisting of the values of yi and zi, and fun gives as output the values of zi and dzi/dx. So my function would look like this (pseudocode):
def fun(x, y):
    y1, z1, y2, z2, ..., y7, z7 = y
    return [z1, -(1/x)*z1 - L1(y1,...,y7),
            ...,
            z7, -(1/x)*z7 - L7(y1,...,y7)]
Is that correct?
Then, the second argument for solve_bvp is a callable bc(ya,yb), which should evaluate the residuals of the boundary conditions. Here I really have trouble understanding how to define such a function. It is also not clear to me what exactly the arrays ya and yb are, and what shape they should have.
The third argument is x, which is the 'initial mesh' with a shape (m,). Should x consist only of the points a and b, which is where we know the boundary conditions? Or should it be something else?
Finally, the fourth argument is y, which is the 'initial guess for the function values at the mesh nodes' and has shape (n,m). Its ith column corresponds with x[i]. I guess that the 1st row corresponds with y1, the 2nd with z1, the 3rd with y2, etc. Is that right? Furthermore, which values should be put here? We could put in the known boundary conditions at x=a and x=b, but we don't know what the functions look like at any other points. Furthermore, how does this y relate to the function bc(ya,yb)? Are the input arguments ya, yb somehow derived from this y?
Any help with understanding the syntax of solve_bvp and its application in this case would be greatly appreciated.
For the boundary condition, bc returns the residuals of the equations. That is, transform the equations so that the right side is zero, and then return the vector of the left sides. ya,yb are the state vectors at the points x=a,b.
For the initial state, it could be anything. However, "anything" in most cases will lead to a failure to converge, due to a singular Jacobian or a too-large mesh. What the solver does is solve a large non-linear system of equations (see collocation method). As with any such system, convergence is most assured if you start close enough to the actual solution. So for the yi you can try
- constant functions with value Bi, or
- linear functions with slope Ai and value Bi at x=b,
with the values for zi given by the derivatives of these functions (see the sketch below). There is no guarantee that this works: close to x=0 the equation has basis solutions close to the constant solution and the logarithm, so the closer a is to zero, the more singular the problem becomes.
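Putting the pieces together, here is a minimal sketch of the whole setup. The coefficient matrix C defining the linear combinations Li, the boundary values A and B, and the interval [a, b] are made-up placeholders; note also that this ordering stacks all yi first and all zi after, rather than interleaving them:

import numpy as np
from scipy.integrate import solve_bvp

n = 7
C = np.eye(n)       # assumed coefficients: Li(y1,...,y7) = sum_j C[i,j]*yj
A = np.zeros(n)     # assumed values of dyi/dx at x=a
B = np.ones(n)      # assumed values of yi at x=b
a, b = 1.0, 2.0     # assumed interval

def fun(x, Y):
    # Y has shape (14, m): rows 0..6 hold yi, rows 7..13 hold zi = dyi/dx
    y, z = Y[:n], Y[n:]
    dz = -(1.0 / x) * z - C.dot(y)
    return np.vstack([z, dz])

def bc(Ya, Yb):
    # residuals: zi(a) - Ai and yi(b) - Bi must all be zero
    return np.concatenate([Ya[n:] - A, Yb[:n] - B])

x = np.linspace(a, b, 11)      # initial mesh
Y0 = np.zeros((2 * n, x.size))
Y0[:n] = B[:, None]            # constant initial guess yi = Bi, zi = 0
sol = solve_bvp(fun, bc, x, Y0)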
I want to integrate the following equation:
d^2[Ψ(z)] / dz^2 = A * ρ(z)
Where Ψ (unknown) and ρ (known) are 1-D arrays and A is a constant.
I have already performed a Taylor expansion, i.e.
d^2[Ψ(z0)] / dz^2 = [Ψ(z0+Δz) - 2Ψ(z0) + Ψ(z0-Δz)] / [Δz^2]
and successfully solved it by building a matrix.
Now, I would like to know if there is a Python (preferably) or Matlab function that can solve this equation without having to do a Taylor expansion.
I have tried numpy.trapz and scipy.integrate.quad, but it seems that these functions only return the area under the curve, i.e. a number, and I am interested to get an array or a function (solving for Ψ).
What you want to do is to solve the differential equation. Because it is a second-order differential equation, you should rewrite it as a system of first-order ODEs. So you have to create a function like this:
Assuming ρ = rho:

import numpy as np

def f(z, y):
    return np.array([y[1], A * rho(z)])
where y is a vector containing Ψ in the first position and its derivative in the second position. Then, f returns a vector containing the first and second derivatives of Ψ.
Once done that, you can use scipy.integrate.solve_ivp to solve the problem:
scipy.integrate.solve_ivp(f, [z_start, z_final], y0, method='RK45', t_eval=z_eval)
where y0 holds the initial conditions for Ψ (the value of Ψ and its derivative at z_start), and z_eval is the set of points where you want the solution stored (passed via the t_eval keyword). The solution will be an array containing the values of Ψ and its derivative at those points.
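A runnable sketch under assumed values (the constant A, the density rho, the interval, and the initial conditions are all placeholders):

import numpy as np
from scipy.integrate import solve_ivp

A = 1.0                        # assumed constant
rho = lambda z: np.exp(-z**2)  # assumed known density; replace with yours

def f(z, y):
    # y[0] = Psi, y[1] = dPsi/dz
    return [y[1], A * rho(z)]

z_eval = np.linspace(0.0, 5.0, 101)
y0 = [0.0, 0.0]                # assumed Psi and dPsi/dz at z_start
sol = solve_ivp(f, [z_eval[0], z_eval[-1]], y0, method='RK45', t_eval=z_eval)
Psi = sol.y[0]                 # Ψ evaluated at the points in z_eval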
You could integrate rho twice using an indefinite (cumulative) integral, e.g.

import numpy as np

x = np.arange(0, 3.1, 0.1)
rho = x**3  # replace your rho here....
# element i is the integral from x[0] up to x[i]
indef_integral1 = [np.trapz(rho[:i+1], x[:i+1]) for i in range(len(rho))]
indef_integral2 = [np.trapz(indef_integral1[:i+1], x[:i+1]) for i in range(len(x))]
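Equivalently, recent SciPy versions provide a cumulative trapezoid routine (called cumtrapz in older versions) that does this in one call:

from scipy.integrate import cumulative_trapezoid

indef_integral1 = cumulative_trapezoid(rho, x, initial=0)
indef_integral2 = cumulative_trapezoid(indef_integral1, x, initial=0)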
I would like to know how I can represent the third-derivative term \partial^3 v / \partial x^3 in FiPy (Python). I know that the diffusion term is represented as
DiffusionTerm(coeff=D)
and higher order diffusion terms as
DiffusionTerm(coeff=(Gamma1, Gamma2))
But I cannot figure out a way to represent this third derivative. Thanks
Is the vector v defined in terms of a (scalar) solution variable? If not, just write the term explicitly:
v.divergence.faceGrad.divergence
If v is a function of the solution variable (say \phi), then there's no mechanism to do this like there is with higher-order diffusion, but there really isn't a need (nor is there a need for higher-order diffusion). Split your equation into two 2nd order PDEs and couple them:
\partial \phi / \partial t = \nabla^2 \nabla\cdot\vec{v}
can be rewritten as
\partial \phi / \partial t = \nabla^2 \psi \\
\psi = \nabla\cdot\vec{v}
which would be
TransientTerm(var=phi) == DiffusionTerm(var=psi)
ImplicitSourceTerm(var=psi) == ConvectionTerm(coeff=v, var=???)
I'd need to know more about v and your full set of equations to advise further on what that ConvectionTerm should look like.
[notes added given the information that these terms arise from the Korteweg-de Vries equation]:
While it is not strictly true that v isn't a function of some phi in the KdV equation, there still is no way to put the \partial^3 v / \partial x^3 term into a form that FiPy can readily make use of. If v is scalar, then \partial^3 v / \partial x^3 is vector. If v is vector, then \partial^3 v / \partial x^3 is either scalar or tensor. There's no way to make the rank of this term consistent with the others unless you dot it with a unit vector, in which case it's just some source without an efficient implicit representation.
At the root, 1D equations are always misleading. It's critical to know what's a scalar and what's a vector. FiPy, as a finite volume code, is applying the divergence theorem when it solves, and so it is necessary to know when one is dealing with the divergence of a flux (which FiPy can treat implicitly) or just some random partial derivative (which it cannot).
Reading through the derivations of the KdV equation, it appears that so many long-wave approximations and variable substitutions have been made that any trace of vector calculus has been cast away. As a result, this is not a PDE that FiPy has efficient forms for. You can write v.faceGrad.divergence.grad.dot([[1]]), and FiPy should accept this, but it won't solve very effectively.
Further, since the KdV equations are about wave propagation and are essentially hyperbolic, FiPy really isn't well suited (some diffusive element is generally needed for the algorithms underlying FiPy to converge). You might take a look at Clawpack or hp-FEM.
I'm working with nonlinear systems of equations. These systems are generally a nonlinear vector differential equation.
I now want to work with functions, differentiate them with respect to time and to their time derivatives, and find equilibrium points by solving the nonlinear equations 0 = rhs(eqs).
Similar things are needed to calculate the Euler-Lagrange equations, where you need the derivative of L wrt. diff(x,t).
Now my question is, how do I implement this in Sympy?
My main two problems are that differentiating a Symbol x wrt. t, as in diff(x, t), gives 0. I can see that with

x = Symbol('x', real=True)
diff(x.subs(x, x(t)), t)  # because diff(x, t) => 0

and

diff(x**2, x)

it does kind of work.
However, with

x = Function('x')(t)
diff(x, t)

I get this to work, but then I cannot differentiate wrt. the function x itself, like

diff(x**2, x)  # does not work
Since I need these things all the time, not only for scalars but also for vectors (using jacobian), I really want this to be a clean and functional workflow.
With which kind of type should I initialize my mathematical functions in Sympy in order to avoid strange substitutions?
It only gets worse for matrices, where I cannot get

eqns = Matrix([f1 - 5, f2 + 1])
variabs = Matrix([f1, f2])
nonlinsolve(eqns, variabs)

to work as expected, since it only allows symbols as input. Is there an easy conversion here? Something like eqns.tolist(), which doesn't work either?
EDIT:
I just found this question, which was answered with a suggestion to use expressions and matrices. I want to be able to solve sets of nonlinear equations, build the Jacobian of a vector wrt. another vector, and differentiate wrt. functions as stated above. Can anyone point me in a direction to start a concise workflow for this purpose? I guess the most complex task is calculating the Lie derivative wrt. a vector or list of functions; the rest should be straightforward.
Edit 2:
def substi(expr, variables):
    return expr.subs({w: w(t) for w in variables})

would automate the substitution, such that substi(vector_expr, varlist_vector).diff(t) is not all 0.
Yes, one has to insert an argument in a function before taking its derivative. But after that, differentiation with respect to x(t) works for me in SymPy 1.1.1, and I can also differentiate with respect to its derivative. Example of Euler-Lagrange equation derivation:
t = Symbol("t")
x = Function("x")(t)
L = x**2 + diff(x, t)**2 # Lagrangian
EL = -diff(diff(L, diff(x, t)), t) + diff(L, x)
Now EL is 2*x(t) - 2*Derivative(x(t), t, t) as expected.
That said, there is a built-in function for Euler-Lagrange equations (from sympy.calculus.euler import euler_equations):
EL = euler_equations(L)
would yield the same result, except presented as a differential equation with right-hand side 0: [Eq(2*x(t) - 2*Derivative(x(t), t, t), 0)]
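As for the nonlinsolve part of the question, a sketch that appears to work is to pass plain lists instead of Matrix objects (f1, f2 here are ordinary Symbols):

from sympy import symbols, Matrix, nonlinsolve

f1, f2 = symbols('f1 f2', real=True)
eqns = Matrix([f1 - 5, f2 + 1])
variabs = Matrix([f1, f2])

# nonlinsolve expects iterables of expressions and symbols, not Matrix objects
sols = nonlinsolve(list(eqns), list(variabs))  # {(5, -1)}
J = eqns.jacobian(variabs)                     # Jacobian of eqns wrt. variabs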
The following defines x to be a function of t
import sympy as s
t = s.Symbol('t')
x = s.Function('x')(t)
This should solve your problem of diff(x,t) being evaluated as 0. But I think you will still run into problems later on in your calculations.
I also work with calculus of variations and Euler-Lagrange equations. In these calculations, x' needs to be treated as independent of x. So, it is generally better to use two entirely different variables for x and x' so as not to confuse Sympy with the relationship between those two variables. After we are done with the calculations in Sympy and we go back to our pen and paper we can substitute x' for the second variable.
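A minimal sketch of this two-variable approach (the names x, xdot and the simple Lagrangian are just for illustration):

from sympy import symbols, Function, Symbol, diff

t = Symbol('t')
x, xdot = symbols('x xdot', real=True)  # independent stand-ins for x and x'
L = x**2 + xdot**2                      # example Lagrangian

dL_dx = diff(L, x)
dL_dxdot = diff(L, xdot)

# substitute the time-dependent function back in before the final d/dt
xf = Function('x')(t)
EL = diff(dL_dxdot.subs({x: xf, xdot: diff(xf, t)}), t) - dL_dx.subs(x, xf)
# EL == 2*Derivative(x(t), (t, 2)) - 2*x(t)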
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write such a quotient as a sum of two simpler quotients times some constants (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so it never reaches 0.8.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1), and then using y = 0.8 gives t = 100 * (ln(0.8/0.2) - C). For example, with y(0) = 0.5 we get C = ln(1) = 0 and t = 100 * ln(4) ≈ 139, well inside the range 0 < t < 3000. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at zero time, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
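As a minimal sketch of this scheme applied to the example equation (with an assumed start value y(0) = 0.5):

y, t, dt = 0.5, 0.0, 0.1
while t < 3000:
    y_new = y + 0.01 * y * (1 - y) * dt  # one Euler step
    if y_new >= 0.8:
        # crossed the target: interpolate linearly between the last two points
        t_cross = t + dt * (0.8 - y) / (y_new - y)
        print("y reaches 0.8 near t =", t_cross)
        break
    y, t = y_new, t + dt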
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error arises in each integration step, and over time this builds up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, a family of integrators which remove successive orders of the error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately, of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straight away. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y[0] > 0.8:
        print("y(%f) = %f" % (t, y[0]))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value (y0 = 1 would be an equilibrium of this ODE, so y would never change)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the rows of ys correspond to the times in ts). (This may not work first time because I am constructing from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
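For instance, a small sketch of that interpolation step, working directly on the arrays from above (this assumes y increases through 0.8):

import numpy as np

idx = np.argmax(ys[:, 0] >= 0.8)  # first index where y >= 0.8 (0 if never reached)
if ys[idx, 0] >= 0.8 and idx > 0:
    # linear interpolation between the two bracketing time steps
    t_cross = np.interp(0.8, ys[idx-1:idx+1, 0], ts[idx-1:idx+1])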
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; Odeint on the scipy website has examples for systems such as coupled springs that can be solved for, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files