Parametric plot of the solution of a 2x2 differential system in Python and Mathematica

I've implemented a solution to the following system of equations
dx/dt = -t*y(t) - x(t)
dy/dt = 2*x(t) - y(t)^3
x(0) = y(0) = 1
0 <= t <= 20
first in Mathematica and afterwards in Python.
My code in Mathematica:
s = NDSolve[
  {x'[t] == -t*y[t] - x[t], y'[t] == 2 x[t] - y[t]^3, x[0] == y[0] == 1},
  {x, y}, {t, 0, 20}]
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 20}]
From that I get the following plot: [Plot 1: the Mathematica parametric plot over 0 <= t <= 20]
Later on I coded the same into python:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
g = lambda t: t
def f(z, t):
    xi = z[0]
    yi = z[1]
    gi = z[2]
    f1 = -gi*yi - xi
    f2 = 2*xi - yi**3
    return [f1, f2]
# Initial Conditions
x0 = 1.
y0 = 1.
g0 = g(0)
z0 = [x0,y0,g0]
t= np.linspace(0,20.,1000)
# Solve the ODEs
soln = odeint(f,z0,t)
x = soln[:,0]
y = soln[:,1]
plt.plot(x,y)
plt.show()
And this is the plot I get: [Plot 2: the Python/odeint result]
If one plots the Mathematica solution again over a smaller range:
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 6}]
one gets a result similar to the Python solution, only with the axes placed differently.
Why is there such a big difference in the plots? What am I doing wrong?
I suspect that my Python implementation of the model is wrong, especially where f1 is calculated. Or maybe the plot() function isn't suitable for plotting parametric curves like this.
Thanks.
ps: sorry for making your life hard by not slapping the images inside the text; I don't have enough reputation yet.

You're using t as a third component of your state vector, not as a separate parameter. The t argument in f(z,t) is never used; instead you use z[2], which will not follow the values of t that you define later (t = np.linspace(0, 20., 1000)). The lambda function for g doesn't help here either: you only use it once, to set g0, and never afterwards.
Simplify your code and remove that third component from your state vector (as well as the lambda function). For example:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def f(z, t):
    xi = z[0]
    yi = z[1]
    f1 = -t*yi - xi
    f2 = 2*xi - yi**3
    return [f1, f2]
# Initial Conditions
x0 = 1.
y0 = 1.
#t= np.linspace(0,20.,1000)
t = np.linspace(0, 10., 100)
# Solve the ODEs
soln = odeint(f,[x0,y0],t)
x = soln[:,0]
y = soln[:,1]
ax = plt.axes()
#plt.plot(x,y)
plt.plot(t,x)
# Put those axes at their 0 value position
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
#plt.axis([-0.085, 0.085, -0.05, 0.07])
plt.show()
I have commented out the actual plot you want and am instead plotting x versus t, since I feel that makes it easier to see that things are now correct. The figure I get: [figure: x plotted against t]
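To recover the parametric x-y curve itself (the analogue of the Mathematica ParametricPlot), here is a minimal self-contained sketch of the fixed model, using the same equations and initial conditions as above:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def f(z, t):
    x, y = z
    return [-t*y - x, 2*x - y**3]

t = np.linspace(0, 20., 1000)
soln = odeint(f, [1., 1.], t)      # x(0) = y(0) = 1
x, y = soln[:, 0], soln[:, 1]
plt.plot(x, y)                     # the parametric x-y curve
plt.xlabel('x')
plt.ylabel('y')
plt.show()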

Related

Is there a way to plot Nullclines of a nonlinear system of ODEs

So I am trying to plot the nullclines of a system of ODEs, but I can't seem to plot them in the correct way. I manage to plot them against time (t vs x and t vs y) but not as x vs y. I'm not really sure how to explain it, and I think it would be better to just show it. I am trying to replicate this. The equations and parameters are given, however this was done in a program called XPP (I'll post these at the bottom), and there are some parameters that I don't understand.
My entire code is:
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
# define system in terms of a Numpy array
def Sys(X, t=0):
    # here X[0] = x and X[1] = y
    # protein concentration is represented by y, and mRNA concentration by x
    return np.array([ (k1*S*Kd**p)/(Kd**p + X[1]**p) - kdx*X[0],
                      ksy*X[0] - (k2*ET*X[1])/(Km + X[1]) ])
#variables
k1=.1
S=1
Kd=1
kdx=.1
p=2
ksy=1
k2=1
ET=1
Km=1
# generate 100 linearly spaced time points
t = np.linspace(0, 50, 100)
# initial values
Sys0 = np.array([1, 0])
#Solves the ODE
X, infodict = integrate.odeint(Sys, Sys0, t, full_output = 1, mxstep = 50000)
#assigns appropriate equations to x and y
x,y = X.T
# plots the graph
fig = plt.figure(figsize=(15,5))
fig.subplots_adjust(wspace = 0.5, hspace = 0.3)
ax1 = fig.add_subplot(1,2,1)
ax1.plot(x, color="blue")
ax1.plot(y, color = 'red')
ax1.set_xlabel("Protein concentration")
ax1.set_ylabel("mRNA concentration")
ax1.set_title("Phase space")
ax1.grid()
The given equations and parameters (in XPP syntax, where a leading p marks a parameter line) are:
model for a simple negative feedback loop
protein (y) inhibits the synthesis of its mRNA (x)
dx/dt = k1*S*Kd^p/(Kd^p + y^p) - kdx*x
dy/dt = ksy*x - k2*ET*y/(Km + y)
p k1=0.1, S=1, Kd=1, kdx=0.1, p=2
p ksy=1, k2=1, ET=1, Km=1
# XP=y, YP=x, TOTAL=100, METH=stiff, XLO=0, XHI=4, YLO=0, YHI=1.05 (I don't exactly understand what is going on here)
Again, this uses a program called XPP or WINPP.
Any help with this would be appreciated. The original paper I am trying to replicate is: Design principles of biochemical oscillators by Bela Novak and John J. Tyson.
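A minimal sketch of one common way to get the nullclines, assuming the parameter values above: set each derivative to zero, solve for x as a function of y, and overlay the two curves in the x-y plane.
import numpy as np
import matplotlib.pyplot as plt

# parameters as given in the question
k1, S, Kd, kdx, p = 0.1, 1, 1, 0.1, 2
ksy, k2, ET, Km = 1, 1, 1, 1

y = np.linspace(0.0, 4.0, 400)
x_null = k1*S*Kd**p / (kdx*(Kd**p + y**p))   # dx/dt = 0 solved for x
y_null = k2*ET*y / (ksy*(Km + y))            # dy/dt = 0 solved for x

plt.plot(x_null, y, label='x-nullcline (dx/dt = 0)')
plt.plot(y_null, y, label='y-nullcline (dy/dt = 0)')
plt.xlabel('x (mRNA concentration)')
plt.ylabel('y (protein concentration)')
plt.legend()
plt.show()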

Python scipy.integrate.odeint failure for simple gravity simulation

I am trying to write a very simple gravity simulation of a mass orbiting the origin. I have used scipy.integrate.odeint to integrate the differential equations.
The problem is that I get the following error message:
ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
As well as this, something is clearly going wrong: the equations are not being integrated correctly and the motion is incorrect. Below is a plot of the motion for initial conditions that should give circular motion around the origin: [plot: the orbit spirals inward instead]
This is the code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
G=1
m=1
def f_grav(y, t):
    x1, x2, v1, v2 = y
    m = t
    dydt = [v1, v2, -x1*G*m/(x1**2+x2**2)**(3/2), -x2*G*m/(x1**2+x2**2)**(3/2)]
    return dydt
t = np.linspace(0, 100, 1001)
init = [0, 1, 1, 0]
ans = odeint(f_grav, init, t)
print(ans)
x = []
y = []
for i in range(100):
    x.append(ans[i][0])
    y.append(ans[i][1])
plt.plot(x, y)
plt.show()
Note that I have used this function before, and almost identical code for an SHM differential equation obtained correct results. Changing the numbers in t does not help. Does anyone have any idea why this may be failing so badly?
The incorrect motion is probably due to numerical instability. Either way, from the documentation of odeint:
Note: For new code, use scipy.integrate.solve_ivp to solve a differential equation.
solve_ivp takes just the integration boundaries and decides the evaluation points itself, so that the integration method remains stable for the equation. You can also choose the integration method.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
G=1
m=1
def f_grav(t, y):
    x1, x2, v1, v2 = y
    m = t  # note: this keeps the question's time-dependent mass (see the next answer)
    dydt = [v1, v2, -x1*G*m/(x1**2+x2**2)**(3/2), -x2*G*m/(x1**2+x2**2)**(3/2)]
    return dydt
domain = (0, 100)
init = [0, 1, 1, 0]
ans = solve_ivp(fun=f_grav, t_span=domain, y0=init)
plt.plot(ans['y'][0], ans['y'][1])
plt.show()
With that I'm not getting any warnings, and the simulation looks better (note that the function must take its parameters in the order (t, y)).
As has been worked out in the comments, the error was that the mass was set to the time. This growing mass contradicts the physics of the situation, but on the other hand it explains the spiraling down: the force remains central, so angular momentum is conserved while the growing mass steadily pulls the orbit inward.
The corrected and slightly streamlined code
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

G = 1
m = 1

def f_grav(y, t):
    x1, x2, v1, v2 = y
    r = np.hypot(x1, x2)
    F = G*m/r**3
    return [v1, v2, -x1*F, -x2*F]

t = np.linspace(0, 100, 1001)
init = [0, 1, 1, 0]
ans = odeint(f_grav, init, t)
print(ans)
x, y, _, _ = ans.T
plt.plot(0, 0, 'oy', ms=8)   # mark the central mass at the origin
plt.plot(x, y)
plt.axis('equal')
plt.show()
gives a visually perfect circle as orbit (for vx = 1, where (vx, 0) is the initial velocity).
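As a quick numerical check of the circularity, one can look at the spread of the orbital radius; continuing from the script above:
# the radius along the trajectory; for a perfect circle of radius 1,
# the minimum and maximum should both be very close to 1.0
r = np.hypot(x, y)
print(r.min(), r.max(), r.max() - r.min())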
There might be two main issues: you give the input parameters as integers instead of floats; in addition, you do not set the tolerance for the accuracy of the integration, and the output is not 'dense'. Something like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
G=1.
m=1.
def f_grav(t, w):
    x, y, z, vx, vy, vz = w
    fg = -G*m/(x**2. + y**2. + z**2.)**1.5
    dwdt = [vx, vy, vz, fg*x, fg*y, fg*z]
    return dwdt
n=100 # total number of orbits
tmin = 0.
tmax = float(n)*6.283185
domain = (tmin, tmax)
t_eval = np.linspace(tmin, tmax, int(tmax*1000))
init = [0., 1., 0., 1., 0., 0.]
ans = solve_ivp(fun=f_grav, t_span=domain, t_eval=t_eval, y0=init, dense_output=True, rtol=1.e-10)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.plot(ans['y'][0], ans['y'][1])
plt.show()
This should work properly; at least it does on my laptop.
I hope this is helpful.

Solve nonlinear equation in python

I am trying to find the fundamental TE mode of a dielectric waveguide. The way I try to solve it is to compute two functions and find their intersection on a graph. However, I am having trouble getting the intersection point from the plot.
My code:
import numpy as np
import matplotlib.pyplot as plt

def LHS(w):
    theta = 2*np.pi*1.455*10*10**(-6)*np.cos(np.deg2rad(w))/(900*10**(-9))
    if theta > (np.pi/2) or theta < 0:
        y1 = 0
    else:
        y1 = np.tan(theta)
    return y1

def RHS(w):
    y = ((np.sin(np.deg2rad(w)))**2 - (1.440/1.455)**2)**0.5/np.cos(np.deg2rad(w))
    return y
x = np.linspace(80, 90, 500)
LHS_vals = [LHS(v) for v in x]
RHS_vals = [RHS(v) for v in x]
# plot
fig, ax = plt.subplots(1, 1, figsize=(6,3))
ax.plot(x,LHS_vals)
ax.plot(x,RHS_vals)
ax.legend(['LHS','RHS'],loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylim(0,20)
plt.xlabel('degree')
plt.ylabel('magnitude')
plt.show()
I got a plot like this:
The intersection point is around 89 degrees; however, I am having trouble computing its exact x value. I have tried fsolve and solve, but in vain: they do not seem to return a solution when it is not the only one. Is it possible to find only the solutions with x in a certain range? Could someone give me a suggestion here? Thanks!
edit:
the equation (for m = 0) is
$\tan\!\left(\frac{2\pi n_1 d \cos\theta}{\lambda}\right) = \frac{\sqrt{\sin^2\theta - (n_2/n_1)^2}}{\cos\theta}$
with $n_1 = 1.455$, $n_2 = 1.440$, $d = 10\,\mu\mathrm{m}$ and $\lambda = 900\,\mathrm{nm}$ (the values used in the code), and I am trying to solve for $\theta$ by finding the intersection point.
edit:
One of the ways I tried is this:
from scipy.optimize import fsolve

def f(wy):
    w, y = wy
    z = np.array([y - LHS(w), y - RHS(w)])
    return z

fsolve(f, [85, 90])
However it gives me the wrong answer.
I also tried something like this:
import matplotlib.pyplot as plt
x = np.arange(85, 90, 0.1)
f = LHS(x)
g = RHS(x)
plt.plot(x, f, '-')
plt.plot(x, g, '-')
idx = np.argwhere(np.diff(np.sign(f - g)) != 0).reshape(-1) + 0
plt.plot(x[idx], f[idx], 'ro')
print(x[idx])
plt.show()
But it shows:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
First, you need to make sure that the function can actually handle numpy arrays. Several options for defining piecewise functions are shown in
Plot Discrete Distribution using np.linspace(). E.g.
def LHS(w):
    theta = 2*np.pi*1.455*10e-6*np.cos(np.deg2rad(w))/(900e-9)
    y1 = np.select([theta < 0, theta <= np.pi/2, theta > np.pi/2], [np.nan, np.tan(theta), np.nan])
    return y1
This already allows one to use the second approach: plotting a point at the index closest to the minimum of the difference between the two curves.
import numpy as np
import matplotlib.pyplot as plt
def LHS(w):
    theta = 2*np.pi*1.455*10e-6*np.cos(np.deg2rad(w))/(900e-9)
    y1 = np.select([theta < 0, theta <= np.pi/2, theta > np.pi/2], [np.nan, np.tan(theta), np.nan])
    return y1

def RHS(w):
    y = ((np.sin(np.deg2rad(w)))**2 - (1.440/1.455)**2)**0.5/np.cos(np.deg2rad(w))
    return y
x = np.linspace(82.1, 89.8, 5000)
f = LHS(x)
g = RHS(x)
plt.plot(x, f, '-')
plt.plot(x, g, '-')
idx = np.argwhere(np.diff(np.sign(f - g)) != 0).reshape(-1) + 0
plt.plot(x[idx], f[idx], 'ro')
plt.ylim(0,40)
plt.show()
One may then also use scipy.optimize.fsolve to get the actual solution.
idx = np.argwhere(np.diff(np.sign(f - g)) != 0)[-1]
from scipy.optimize import fsolve
h = lambda x: LHS(x)-RHS(x)
sol = fsolve(h,x[idx])
plt.plot(sol, LHS(sol), 'ro')
plt.ylim(0,40)
plt.show()
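Since the question asks about restricting the search to a certain range, a bracketing root finder is another option. A small sketch, assuming the array-safe LHS and RHS defined above; the bracket [89.15, 89.7] is an assumption read off the plot, and brentq requires the difference to change sign between the two endpoints:
from scipy.optimize import brentq

h = lambda x: LHS(x) - RHS(x)
sol = brentq(h, 89.15, 89.7)   # bracket taken from the plot
print(sol)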
Something quick and (very) dirty that seems to work (at least it gave a theta value of ~89 for your parameters): add the following to your code before the figure, after the RHS_vals = [RHS(v) for v in x] line:
# build a list of differences between the values of RHS and LHS
diff = [abs(r_val - l_val) for r_val, l_val in zip(RHS_vals, LHS_vals)]
# find the minimum of the absolute values of the differences,
# then find the index of this minimum and the angle at which it occurred
min_diff = min(diff)
print("Minimum difference {}".format(min_diff))
print("Theta = {}".format(x[diff.index(min_diff)]))
I looked in range of 85-90:
x = np.linspace(85, 90, 500)

How to extrapolate a function based on x,y values?

Ok, so I started with Python a few days ago. I mainly use it for data science, as I am an undergraduate chemistry student. Now I have a small problem on my hands: I have to extrapolate a function. I know how to make simple diagrams and graphs, so please try to explain it as simply as possible. I start off with:
from matplotlib import pyplot as plt
from matplotlib import style
style.use('classic')
x = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]
x2 = [0.707, 0.2, 0.057, 0.016, 0.00453]
y2 = [2.086, 7.525, 26.59375,87.03125, 375.9765625]
With these values I have to work out a way to extrapolate in order to get a y (or y2) value when x = 0. I know how to do this mathematically, but I would like to know whether Python can do this and how to do it. Is there a simple way? Can you maybe give me an example with my given values?
Thank you
Taking a quick look at your data,
from matplotlib import pyplot as plt
from matplotlib import style
style.use('classic')
x1 = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y1 = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]
plt.plot(x1, y1)
x2 = [0.707, 0.2, 0.057, 0.016, 0.00453]
y2 = [2.086, 7.525, 26.59375,87.03125, 375.9765625]
plt.plot(x2, y2)
This is definitely not linear. If you know what sort of function this follows, you may want to use scipy's curve fitting to get a best-fit function which you can then use.
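For instance, a minimal curve_fit sketch; the power-law model y = a * x**b here is an assumption, so swap in whatever functional form your chemistry suggests:
import numpy as np
from scipy.optimize import curve_fit

x1 = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y1 = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]

def model(x, a, b):
    # assumed power-law form
    return a * np.power(x, b)

popt, pcov = curve_fit(model, x1, y1, p0=(100.0, -0.1))
print(popt)   # best-fit a and b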
Edit:
If we convert the plots to log-log,
import numpy as np
plt.plot(np.log(x1), np.log(y1))
plt.plot(np.log(x2), np.log(y2))
they look pretty linear (if you squint a bit). Finding a best-fit line,
np.polyfit(np.log(x1), np.log(y1), 1)
# array([-0.05817402, 4.73809081])
np.polyfit(np.log(x2), np.log(y2), 1)
# array([-1.01664659, 0.36759068])
we can convert back to functions,
# f1:
# log(y) = -0.05817402 * log(x) + 4.73809081
# so
# y = (e ** 4.73809081) * x ** (-0.05817402)
def f1(x):
    return np.e ** 4.73809081 * x ** (-0.05817402)

xs = np.linspace(0.01, 0.8, 100)
plt.plot(x1, y1, xs, f1(xs))
# f2:
# log(y) = -1.01664659 * log(x) + 0.36759068
# so
# y = (e ** 0.36759068) * x ** (-1.01664659)
def f2(x):
    return np.e ** 0.36759068 * x ** (-1.01664659)

plt.plot(x2, y2, xs, f2(xs))
The second looks pretty darn good; the first still needs a bit of refinement (i.e. find a more representative function and curve-fit it). But you should have a pretty good picture of the process ;-)
Here's some example code that can hopefully help you get started on building a linear model for your purposes.
import numpy as np
from sklearn.linear_model import LinearRegression
from matplotlib import pyplot as plt
# sample data
x = [0.632455532, 0.178885438, 0.050596443, 0.014310835, 0.004047715]
y = [114.75, 127.5, 139.0625, 147.9492188, 153.8085938]
# linear model
lm = LinearRegression()
lm.fit(np.array(x).reshape(-1, 1), y)
test_x = np.linspace(0.01, 0.7, 100)
test_y = lm.predict(test_x.reshape(-1, 1))   # predict() expects a 2-D array
## try linear model with log(x)
lm2 = LinearRegression()
lm2.fit(np.log(np.array(x)).reshape(-1, 1), y)
test_y2 = lm2.predict(np.log(test_x).reshape(-1, 1))
# plot
plt.figure()
plt.plot(x, y, label='Given Data')
plt.plot(test_x, test_y, label='Linear Model')
plt.plot(test_x, test_y2, label='Log-Linear Model')
plt.legend()
Which produces the following:
As @Hugh Bothwell showed, the values you gave do not have a linear relationship. However, taking the log of x seems to produce a better fit.

What am I doing wrong in this Dopri5 implementation

I am totally new to Python, and I am trying to integrate the following ODE system:
$\dot{x} = -2x - y^2$
$\dot{y} = -y - x^2$
This results in an array of all zeros, though.
What am I doing wrong? It is mostly copied code, and it worked fine with another, uncoupled ODE.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
def fun(t, z):
    """
    Right-hand side of the differential equations
    dx/dt = -2*x - y**2
    dy/dt = -y - x**2
    """
    x, y = z
    f = [-2*x - y**2, -y - x**2]
    return f
# Create an `ode` instance to solve the system of differential
# equations defined by `fun`, and set the solver method to 'dop853'.
solver = ode(fun)
solver.set_integrator('dopri5')
# Set the initial value z(0) = z0.
t0 = 0.0
z0 = [0, 0]
solver.set_initial_value(z0, t0)
# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 2.5
N = 75
t = np.linspace(t0, t1, N)
sol = np.empty((N, 2))
sol[0] = z0
# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1
# Plot the solution...
plt.plot(t, sol[:,0], label='x')
plt.plot(t, sol[:,1], label='y')
plt.xlabel('t')
plt.grid(True)
plt.legend()
plt.show()
Your initial state z0 is [0, 0], and the time derivative (fun) at this state is also [0, 0]. The origin is therefore a fixed point: for this initial condition, the constant solution [0, 0] is correct for all times.
If you change your initial condition to some other value, you should observe a more interesting result.
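For example, a small self-contained sketch of the same system started from a nonzero state; the initial value [1.0, 0.5] is an arbitrary choice, and solve_ivp is used here just for brevity:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def fun(t, z):
    x, y = z
    return [-2*x - y**2, -y - x**2]

sol = solve_ivp(fun, (0.0, 2.5), [1.0, 0.5], dense_output=True)
t = np.linspace(0.0, 2.5, 75)
x, y = sol.sol(t)
plt.plot(t, x, label='x')
plt.plot(t, y, label='y')
plt.xlabel('t')
plt.legend()
plt.show()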
