I am trying to implement both the explicit and implicit Euler methods to approximate a solution for the following ODE:
dx/dt = -kx, where k = cos(2 pi t), and x(0) = 1
Euler's methods use finite differencing to approximate a derivative:
dx/dt = (x(t+dt) - x(t)) / dt
The forward method explicitly calculates x(t+dt) from the previous solution x(t):
x(t+dt) = x(t) + f(x(t), t)*dt
The backward method is implicit: it finds the solution x(t+dt) by solving an equation involving the current state of the system x(t) and the later one x(t+dt):
x(t) = x(t+dt) - f(x(t+dt), t+dt)*dt
My code for approximating a solution to dx/dt = -kx, x(0) = 1 and plotting it alongside the actual solution is given below:
### Import Necessary Packages
import matplotlib.pyplot as plt
import numpy as np
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = (10.0, 6.0)
### Defining Basic Data
t0 = 0 # initial t
tf = 4*np.pi # final t
N = 1000 # number of points in the time grid
dt = (tf-t0)/(N-1) # time step size, matching the spacing of np.linspace below
t = np.linspace(t0,tf,N) # defining a vector of t values from t0 to tf
x0 = 1 # initial x
x = np.zeros([N]) # initializing array for x values
f = lambda x,t: -np.cos(2*np.pi*t)*x # defining f(x,t) on RHS of ODE
### Define a Function for Euler's Forward Method ###
def ForwardEuler(f,x0,t):
    x[0] = x0
    for i in range(N-1): # step from i to i+1, filling x[1]..x[N-1]
        x[i+1] = x[i] + f(x[i],t[i])*dt
    return x
# Plot Solution
forward = ForwardEuler(f,x0,t)
actual = np.exp(-np.sin(2*np.pi*t)/(2*np.pi)) # exact solution x(t) = exp(-sin(2*pi*t)/(2*pi))
plt.plot(t,actual,'r-',t,forward,'b-')
plt.legend(['Actual','Forward Euler'])
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.title("Solution to $x'=-kx$, $x$(0)=1")
plt.grid(True);plt.show()
My question lies in how to adapt the for-loop section of the code to implement the backward Euler method instead of the forward Euler method. I am having trouble with this since the update equation requires you to know x[i+1] in order to solve for x[i+1].
I believe the backwards for-loop would be what is given below, but I am unsure:
def BackwardEuler(f,x0,t):
    x[0] = x0
    for i in range(1,N-1):
        x[i] = x[i+1] - f(x[i+1],t[i+1])*dt
    return x
I have found very few resources online and am at a complete loss. Any help on this would be appreciated! Thank you!
Usually, for Backward Euler and Trapezoidal Rule, you write the expression as an equation (or a system of equations), then solve it (i.e. find its zeros). The zeros represent the value of x[i+1].
For example, for Backward Euler, the system is:
x[i+1] = x[i] + (f(x[i+1],t[i+1]))*dt
Which you can rewrite as:
x[i+1] - x[i] - dt*f(x[i+1], t[i+1]) = 0
The values x[i] and t[i+1] are known. The only unknown is x[i+1]. You can solve this system numerically (using something like fsolve), and the solution would be your x[i+1]. It is possible, of course, that you get more than one solution. You have to select the one that fits your problem (e.g. x cannot be an imaginary number, or x cannot be negative, etc.)
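For example, here is a minimal sketch of Backward Euler for the ODE in the question, reusing the question's f, t, dt, and N (using the forward-Euler value as the initial guess for fsolve is just a convenient choice; any nearby guess works):
from scipy.optimize import fsolve

def BackwardEuler(f, x0, t):
    x = np.zeros(N)
    x[0] = x0
    for i in range(N-1):
        # solve x[i+1] - x[i] - dt*f(x[i+1], t[i+1]) = 0 for x[i+1]
        g = lambda z: z - x[i] - dt*f(z, t[i+1])
        guess = x[i] + dt*f(x[i], t[i])  # forward-Euler predictor as initial guess
        x[i+1] = fsolve(g, guess)[0]
    return x
For this particular f the equation is linear in x[i+1], so it can also be solved by hand: x[i+1] = x[i] / (1 + dt*np.cos(2*np.pi*t[i+1])).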
The same technique can be applied for Trapezoidal Rule, with the system being:
x[i+1] - x[i] - (f(x[i],t[i]) + f(x[i+1],t[i+1]))*(dt/2) = 0
PS: Check out Computational Science StackExchange. It is more suitable for questions related to numerical and computational methods.
I have the following system of difference equations in two variables, Td and Ta, with time step delta_t:
Td(t+delta_t) = Td(t) + (delta_t * S / (M * C)) * (H * (Ta(t)-Td(t)) - L*P)
Ta(t+delta_t) = Ta(t) + (delta_t * H * S / (A * P)) * (Td(t+delta_t) - Ta(t))
Every other symbol you see can be treated as a constant that is re-calculated at each time step, dependent on Ta and Td. I have the initial values of Ta and Td, and I need to see the evolution of Ta, S, and Td with time.
Is there a way to convert this system into ODEs and solve it in Python? Thank you for helping out a beginner!
Your system has a similar structure to https://math.stackexchange.com/questions/4130422/ode-equivalent-to-a-system-of-difference-equations-discrete-to-continuous-time regarding the re-use of newly computed values
x(t+dt) = x(t) + dt*f(x(t),y(t))
y(t+dt) = y(t) + dt*g(x(t+dt),y(t))
For small values of dt the shift in the argument is of lower order, so you simply get
x'(t) = f(x(t),y(t))
y'(t) = g(x(t),y(t))
as the corresponding ODE system. If dt is larger and you want to come closer to the discrete numerical solution with the exact solution of the ODE, you would need to incorporate additional Taylor terms as done in the linked question. The resulting corrected system can be written as (all at time t)
x' = f(x-0.5*dt*f(x,y), y-0.5*dt*g(x,y))
y' = g(x+0.5*dt*f(x,y), y-0.5*dt*g(x,y))
The behavior of the system will really depend on if the other parameters are really (positive?) constants or functions depending on time t or additionally on the state (x,y) or (Ta, Td).
In code, if f and g are given
def FG(t, xy, dt):
    x, y = xy
    dx1, dy1 = dt*f(x,y), dt*g(x,y)
    return f(x-0.5*dx1, y-0.5*dy1), g(x+0.5*dx1, y-0.5*dy1)
where dt is passed as constant parameter (in the args argument of odeint or newer solve_ivp).
Example: Use f(x,y)=-4*x+2*y+7, g(x,y)=-3*y-x+2, dt=0.1, which converges visually around t=1.5. Comparing the "partitioned Euler" method, the exact integration of the (f,g) ODE system and the exact integration of the modified system gives the following plot
[Plot: 'x' markers = discrete iteration, dotted = unmodified (f,g) ODE, solid line = modified FG ODE]
As expected, the modified system is more faithful to the given discrete method.
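A minimal sketch of such a comparison might look as follows (f, g, and dt are the example values above; the args keyword of solve_ivp requires scipy >= 1.4):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

f = lambda x, y: -4*x + 2*y + 7
g = lambda x, y: -3*y - x + 2
dt = 0.1

def FG(t, xy, dt):
    x, y = xy
    dx1, dy1 = dt*f(x, y), dt*g(x, y)
    return [f(x - 0.5*dx1, y - 0.5*dy1), g(x + 0.5*dx1, y - 0.5*dy1)]

# the partitioned Euler iteration: the y-update re-uses the freshly computed x
ts = np.arange(0, 3.0 + dt/2, dt)
xs, ys = [0.0], [0.0]
for _ in ts[:-1]:
    xn = xs[-1] + dt*f(xs[-1], ys[-1])
    ys.append(ys[-1] + dt*g(xn, ys[-1]))  # uses xn, not the old x
    xs.append(xn)

sol = solve_ivp(FG, [0, 3.0], [0.0, 0.0], args=(dt,), dense_output=True)
plt.plot(ts, xs, 'x', label='discrete')
plt.plot(ts, sol.sol(ts)[0], label='modified FG ODE')
plt.legend(); plt.show()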
I am trying to solve a set of differential equations, but I have been having difficulty making this work. My differential equations contain an "i" subscript that represents numbers from 1 to n. I tried implementing a for-loop as follows, but I have been getting an index error (the error message is below). I have tried changing the initial conditions (y0) and other values, but nothing seems to work. In this code, I am using solve_ivp. The code is as follows:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import solve_ivp
def testmodel(t, y):
    X = y[0]
    Y = y[1]
    J = y[2]
    Q = y[3]
    a = 3
    S = 0.4
    K = 0.8
    L = 2.3
    n = 100
    for i in range(1,n+1):
        dXdt[i] = K**a+(Q[i]**a) - S*X[i]
        dYdt[i] = (K*X[i])-(L*Y[i])
        dJdt[i] = S*Y[i]-(K*Q[i])
        dQdt[i] = K*X[i]/L+J[i]
    return dXdt, dYdt, dJdt, dQdt
t_span= np.array([0, 120])
times = np.linspace(t_span[0], t_span[1], 1000)
y0 = 0,0,0,0
soln = solve_ivp(testmodel, t_span, y0, t_eval=times, vectorized=True)
t = soln.t
X = soln.y[0]
Y = soln.y[1]
J = soln.y[2]
Q = soln.y[3]
plt.plot(t, X,linewidth=2, color='red')
plt.show()
The error I get is
IndexError Traceback (most recent call last)
<ipython-input-107-3a0cfa6e42ed> in testmodel(t, y)
15 n = 100
16 for i in range(1,n+1):
--> 17 dXdt[i] = K**a+(Q[i]**a) - S*X[i]
IndexError: index 1 is out of bounds for axis 0 with size 1
I have scoured the web for a solution to this, but I have been unable to apply anything I found to this problem. I am not sure what I am doing wrong or what to actually change.
I have tried removing the "vectorized=True" argument, but then I get an error stating that I cannot index scalar variables. This is confusing because I do not think these values should be scalar. How do I resolve this problem? My ultimate goal is to plot the solutions of these differential equations. Thank you in advance.
It is nice that you provide the standard solver with a vectorized ODE function for multi-point evaluations. But the default method is the explicit RK45, and explicit methods do not use Jacobian matrices, so there is no need for multi-point evaluations to form difference quotients for the partial derivatives.
In essence, the coordinate arrays always have size 1, as the evaluation is at a single point; so, for instance, Q is an array of length 1, and the only valid index is 0. Remember, in all "true" programming languages, array indices start at 0. It is only some CAS script languages that use the "more mathematical" 1 as the index start. (Setting n=100 and ignoring the length of the arrays provided by the solver is wrong as well.)
You can avoid all that and shorten your routine by taking into account that the standard arithmetic operations are applied element-wise for numpy arrays, so
def testmodel(t, y):
    X, Y, J, Q = y
    a = 3; S = 0.4; K = 0.8; L = 2.3
    dXdt = K**a + Q**a - S*X
    dYdt = K*X - L*Y
    dJdt = S*Y - K*Q
    dQdt = K*X/L + J
    return dXdt, dYdt, dJdt, dQdt
Modifying your code for multiple compartments with the same dynamics
You need to pass the solver a flat vector of the state. The first design decision is how the compartments and their components are arranged in the flat vector. One variant that is most compatible with the existing code is to cluster the same components together. Then in the ODE function the first operation is to separate out these clusters.
X,Y,J,Q = y.reshape([4,-1])
This splits the input vector into 4 pieces of equal length. At the end you need to reverse this split so that the derivatives are again in a flat vector.
return np.concatenate([dXdt, dYdt, dJdt, dQdt])
Everything else remains the same, apart from the initial vector, which needs to have 4 segments of length N containing the data for the compartments. Here that could just be
y0 = np.zeros(4*N)
If the initial data is from any other source, and given in records per compartment, you might have to transpose the resulting array before flattening it.
Note that this construction is not vectorized, so leave that option unset in its default False.
For uniform interaction patterns like in a circle I recommend the use of numpy.roll to continue to avoid the use of explicit loops. For an interaction pattern that looks like a network one can use connectivity matrices and masks like in Using python built-in functions for coupled ODEs
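Put together, a sketch of the multi-compartment version might look like this (N = 100 is a placeholder; any coupling between compartments would go inside testmodel):
import numpy as np
from scipy.integrate import solve_ivp

N = 100  # number of compartments

def testmodel(t, y):
    X, Y, J, Q = y.reshape([4, -1])  # split the flat state into 4 blocks of length N
    a = 3; S = 0.4; K = 0.8; L = 2.3
    dXdt = K**a + Q**a - S*X  # element-wise over all compartments at once
    dYdt = K*X - L*Y
    dJdt = S*Y - K*Q
    dQdt = K*X/L + J
    return np.concatenate([dXdt, dYdt, dJdt, dQdt])

y0 = np.zeros(4*N)
soln = solve_ivp(testmodel, [0, 120], y0, t_eval=np.linspace(0, 120, 1000))
X = soln.y[:N]  # rows 0..N-1 hold the X component of every compartment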
I have data that I want to fit with polynomials. I have 200,000 data points, so I want an efficient algorithm. I want to use the numpy.polynomial package so that I can try different families and degrees of polynomials. Is there some way I can formulate this as a system of equations like Ax=b? Is there a better way to solve this than with scipy.minimize?
import numpy as np
from scipy.optimize import minimize as mini
x1 = np.random.random(2000)
x2 = np.random.random(2000)
y = 20 * np.sin(x1) + x2 - np.sin(30 * x1 - x2 / 10)
def fitness(x, degree=5):
    poly1 = np.polynomial.polynomial.polyval(x1, x[:degree])
    poly2 = np.polynomial.polynomial.polyval(x2, x[degree:])
    return np.sum((y - (poly1 + poly2)) ** 2)
# It seems like I should be able to solve this as a system of equations
# x = np.linalg.solve(np.concatenate([x1, x2]), y)
# minimize the sum of the squared residuals to find the optimal polynomial coefficients
x = mini(fitness, np.ones(10))
print(fitness(x.x))
Your intuition is right. You can solve this as a system of equations of the form Ax = b.
However:
The system is overdetermined and you want the least-squares solution, so you need to use np.linalg.lstsq instead of np.linalg.solve.
You can't use polyval because you need to separate the coefficients and powers of the independent variable.
This is how to construct the system of equations and solve it:
A = np.stack([x1**0, x1**1, x1**2, x1**3, x1**4, x2**0, x2**1, x2**2, x2**3, x2**4]).T
xx = np.linalg.lstsq(A, y, rcond=None)[0]
print(fitness(xx)) # test the result with original fitness function
Of course you can generalize over the degree:
A = np.stack([x1**p for p in range(degree)] + [x2**p for p in range(degree)]).T
With the example data, the least squares solution runs much faster than the minimize solution (800µs vs 35ms on my laptop). However, A can become quite large, so if memory is an issue minimize might still be an option.
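As a self-contained sketch of the generalized version (fit_two_polys is a made-up helper name):
def fit_two_polys(x1, x2, y, degree=5):
    # one column per power of each variable, powers 0..degree-1
    A = np.stack([x1**p for p in range(degree)] +
                 [x2**p for p in range(degree)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0]

xx = fit_two_polys(x1, x2, y)
print(fitness(xx))
Note that A contains two all-ones columns (x1**0 and x2**0), so it is rank-deficient; lstsq still returns a minimum-norm least-squares solution, with the constant term split between the two polynomials.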
Update:
Without any knowledge about the internals of the polynomial function things become tricky, but it is possible to separate terms and coefficients. Here is a somewhat ugly way to construct the system matrix A from a function like polyval:
def construct_A(valfunc, degree):
    columns1 = []
    columns2 = []
    for p in range(degree):
        c = np.zeros(degree)
        c[p] = 1
        columns1.append(valfunc(x1, c))
        columns2.append(valfunc(x2, c))
    return np.stack(columns1 + columns2).T
A = construct_A(np.polynomial.polynomial.polyval, 5)
xx = np.linalg.lstsq(A, y, rcond=None)[0]
print(fitness(xx)) # test the result with original fitness function
I have used numpy's polyfit and obtained a very good fit (using a 7th-order polynomial) for two arrays, x and y. My relationship is thus:
y(x) = p[0]* x^7 + p[1]*x^6 + p[2]*x^5 + p[3]*x^4 + p[4]*x^3 + p[5]*x^2 + p[6]*x^1 + p[7]
where p is the polynomial array output by polyfit.
Is there a way to reverse this method easily, so I have a solution in the form of:
x(y) = p[0]*y^n + p[1]*y^(n-1) + ... + p[n]*y^0
No, there is no easy way in general: closed-form solutions for the roots of arbitrary polynomials are not available at seventh order (or, indeed, at any order above four).
Doing the fit in the reverse direction is possible, but only on monotonically varying regions of the original polynomial. If the original polynomial has minima or maxima on the domain you are interested in, then even though y is a function of x, x cannot be a function of y because there is no 1-to-1 relation between them.
If you are (i) OK with redoing the fitting procedure, and (ii) OK with working piecewise on single monotonic regions of your fit at a time, then you could do something like this:
import numpy as np
# generate a random coefficient vector a
degree = 1
a = 2 * np.random.random(degree+1) - 1
# an assumed true polynomial y(x)
def y_of_x(x, coeff_vector):
    """
    Evaluate a polynomial with coeff_vector and degree len(coeff_vector)-1 using Horner's method.
    Coefficients are ordered by increasing degree, from the constant term at coeff_vector[0],
    to the linear term at coeff_vector[1], to the n-th degree term at coeff_vector[n].
    """
    b = 0
    for c in coeff_vector[::-1]:  # highest-degree coefficient first
        b = b * x + c
    return b
# generate some data
my_x = np.arange(-1, 1, 0.01)
my_y = y_of_x(my_x, a)
# verify that polyfit in the "traditional" direction gives the correct result
# [::-1] b/c polyfit returns coeffs in backwards order rel. to y_of_x()
p_test = np.polyfit(my_x, my_y, deg=degree)[::-1]
print(p_test, a)
# fit the data using polyfit but with y as the independent var, x as the dependent var
p = np.polyfit(my_y, my_x, deg=degree)[::-1]
# define x as a function of y
def x_of_y(yy, a):
    return y_of_x(yy, a)
# compare results
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(my_x, my_y, '-b', x_of_y(my_y, p), my_y, '-r')
Note: this code does not check for monotonicity but simply assumes it.
By playing around with the value of degree, you should see that the code only works well for all random values of a when degree=1. It occasionally does OK for other degrees, but not when there are lots of minima/maxima. It never does perfectly for degree > 1 because approximating parabolas with square-root functions doesn't always work, etc.
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a quotient with a factored denominator as a sum of simpler quotients times some constants (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so such initial values must be excluded.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Rearranging gives t = 100 * ( ln( y / (1-y) ) - C ), so unless the initial value of y is incredibly small, y will reach 0.8 well within the range 0 < t < 3000. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
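For example, with y(0) = 0.5 (the starting value used in the code below):
C = ln( 0.5 / (1 - 0.5) ) = ln(1) = 0
0.01 * t = ln( 0.8 / (1 - 0.8) ) = ln(4)
t = 100 * ln(4) ≈ 138.6
which lies comfortably inside 0 < t < 3000.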
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
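As a minimal sketch of this idea for the example ODE (dt = 0.1 and the starting value y0 = 0.5 are assumptions for illustration):
def find_crossing(target=0.8, y0=0.5, dt=0.1, t_max=3000):
    t, y = 0.0, y0
    while t < t_max:
        y_new = y + 0.01*y*(1 - y)*dt  # Euler step: y(t+dt) = y(t) + dy/dt * dt
        if (y - target)*(y_new - target) < 0:  # trajectory passed the target
            return t + dt*(target - y)/(y_new - y)  # interpolate between the two points
        t, y = t + dt, y_new
    return None  # target never reached

print(find_crossing())  # roughly 138.6 for this ODE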
Fortunately, of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straight away. For instance:
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]  # integrate() returns a length-1 array
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Besides PyDSTool there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value; note y = 1 is an equilibrium, so starting there y would never move
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
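For instance, a minimal sketch of that post-processing step (this relies on ys being monotonically increasing, which holds for this logistic ODE when 0 < y0 < 1):
ys = ys.ravel()  # odeint returns shape (len(ts), 1)
t_cross = np.interp(0.8, ys, ts)  # interpolate t as a function of y
print(t_cross)  # roughly 138.6 with y0 = 0.5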
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint page on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files