I am using the Python module differint to solve the 3D Lorenz system. After running my system through differint's Riemann-Liouville operator with alpha = 1, the results do not match the original equations, even though they should. The code is below:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
import differint.differint as df
t = np.arange(1, 50, 0.01)

def Lorenz(state, t):
    # unpack the state vector
    x = state[0]
    y = state[1]
    z = state[2]
    a = 10; b = 8/3; c = 28
    xd = a*(y - x)
    yd = -y + c*x - x*z
    zd = -b*z + x*y
    return [xd, yd, zd]
state0 = [1,1,1]
state = odeint(Lorenz, state0, t)
# Simple Lorenz equation plot
plt.subplot(2, 2, 1)
plt.plot(state[:,0],state[:,1])
plt.subplot(2, 2, 2)
plt.plot(state[:,0],state[:,2])
plt.subplot(2, 2, 3)
plt.plot(state[:,1],state[:,2])
plt.show()
DF = df.RL(1, state, 0, len(t), len(t))
# Riemann-Liouville plots
state = DF
plt.subplot(2, 2, 1)
plt.plot(state[:,0],state[:,1])
plt.subplot(2, 2, 2)
plt.plot(state[:,0],state[:,2])
plt.subplot(2, 2, 3)
plt.plot(state[:,1],state[:,2])
plt.show()
Am I making a mistake anywhere, or is this the true result?
As you can see in equation (2), putting α = 1 should reproduce the non-fractional system (1). I am interested in computing equation (3) for different values of alpha.
I suspect my approach is incorrect, because what I am doing is first solving the system of differential equations with
state = odeint(Lorenz, state0, t)
followed by the differint module
DF = df.RL(1, state, 0, len(t), len(t))
Graphs for the Lorenz equations:
Graphs for the RL fractional operator with alpha = 1:
As you can see in these graphs, the trajectories are exactly the same, but the scaling is different.
The call to df.RL requires the function defining the differential equation; however, you passed the solution of the differential equation, state (the output of the odeint call). From the package site https://pypi.org/project/differint/:
def f(x):
    return x**0.5

DF = df.RL(0.5, f)
print(DF)
Can you try DF = df.RL(1, Lorenz, 0, len(t), len(t))?
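Alternatively, if the goal is to apply the operator to the odeint solution itself, here is a sketch that may help (an assumption on my part: the differint docs state that f_name may also be a list or 1-D array of function values). It differintegrates each component separately and passes the actual time domain rather than 0..len(t), since the domain determines the internal step size and therefore the scaling:
# Sketch, untested: apply df.RL to each state component over [t[0], t[-1]].
# Assumes df.RL accepts a 1-D array of sampled function values.
DF = np.column_stack(
    [df.RL(1, state[:, i], t[0], t[-1], len(t)) for i in range(3)]
)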
I had been given the harmonic oscillator equation in the form y'' = -w^2 * y with initial conditions y(0) = 1 and y'(0) = 0 and w = 2*pi, and I had to compare the numerical solution with the exact solution y(t) = cos(wt).
So the first part went smoothly and I saw that my result using solve_ivp gives me the exact same curve as the exact solution.
But then I had to compare the evolution of energy for RK45, RK23, and DOP853.
The energy is written E = 0.5 * k * y^2 + 0.5 * v^2, with k = w^2 and v the velocity, v = y'.
I was expecting to get a straight line, since my harmonic oscillator isn't damped, but I got a decreasing curve for each integration, and I do not know why. If anyone has any idea, I post my code here. Thanks in advance for the help!
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
# Exercise 1
# We first write the initial conditions
omega = 2*np.pi
omega_sq = omega**2
y_0 = [1., 0.]
t = np.linspace(0, 4, 100)  # Time interval between 0 and 4, as asked in the problem

def dY_dt(t, y):  # Definition of the system
    solution = [y[1], -omega_sq*y[0]]
    return solution

result = solve_ivp(dY_dt, (0, 100), y0=y_0, method='RK45', t_eval=t)
z = np.cos(omega*t) #Exact solution, for comparison
plt.plot(t, z, 'r', lw = 5) #Plot of the exact solution
plt.plot(t,result.y[0], 'b') #Plot of the analytical solution
# Exercise 2
def Energie(Y):  # Energy
    k = omega_sq
    E = 0.5*(result.y[1])**2 + 0.5*k*(result.y[0])**2
    return E

E = Energie(result)

# Legends
plt.ylabel("Position")
plt.xlabel("Time in seconds")
plt.title('Comparison between analytical and exact')
plt.legend(['Solution exacte', 'Solution solve_ivp'], loc='lower left')
plt.figure()
plt.plot(t, E)
plt.show()
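For reference, here is a minimal sketch of how the energy drift of the three integrators could be compared side by side (my adaptation of the code above, not part of the original post). Tightening rtol/atol shrinks the drift, which suggests the decreasing curve is numerical dissipation from the solver's default tolerances rather than physics:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

omega_sq = (2*np.pi)**2
t = np.linspace(0, 4, 100)

def dY_dt(t, y):
    return [y[1], -omega_sq*y[0]]

for method in ['RK45', 'RK23', 'DOP853']:
    res = solve_ivp(dY_dt, (0, 4), y0=[1., 0.], method=method,
                    t_eval=t, rtol=1e-10, atol=1e-12)
    # E = kinetic + potential, with unit mass and k = omega^2
    E = 0.5*res.y[1]**2 + 0.5*omega_sq*res.y[0]**2
    plt.plot(t, E, label=method)

plt.xlabel("Time in seconds")
plt.ylabel("Energy")
plt.legend()
plt.show()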
So I am trying to plot the nullclines of a system of ODEs, but I can't seem to plot them in the correct way. I manage to plot them against time (t vs x and t vs y), but not as x vs y. I'm not really sure how to explain it; I think it would be better to just show it. I am trying to replicate this. The equations and parameters are given; however, this was done in a program called XPP (I'll post these at the bottom), and there are some parameters that I don't understand.
My entire code is:
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
# define system in terms of a Numpy array
def Sys(X, t=0):
    # here X[0] = x and X[1] = y
    # protein concentration is represented by y, and mRNA concentration by x
    return np.array([(k1*S*Kd**p)/(Kd**p + X[1]**p) - kdx*X[0],
                     ksy*X[0] - (k2*ET*X[1])/(Km + X[1])])

# parameters
k1 = .1
S = 1
Kd = 1
kdx = .1
p = 2
ksy = 1
k2 = 1
ET = 1
Km = 1

# generate 100 linearly spaced numbers for the time axis
t = np.linspace(0, 50, 100)

# initial values
Sys0 = np.array([1, 0])

# solve the ODE
X, infodict = integrate.odeint(Sys, Sys0, t, full_output=1, mxstep=50000)

# assign the solution components to x and y
x, y = X.T

# plot the graph
fig = plt.figure(figsize=(15, 5))
fig.subplots_adjust(wspace=0.5, hspace=0.3)
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(x, color="blue")
ax1.plot(y, color='red')
ax1.set_xlabel("Protein concentration")
ax1.set_ylabel("mRNA concentration")
ax1.set_title("Phase space")
ax1.grid()
The given equations and parameters are:
model for a simple negative feedback loop
protein (y) inhibits the synthesis of its mRNA (x)
dx/dt = k1*S*Kd^p/(Kd^p + y^p) - kdx*x
dy/dt = ksy*x - k2*ET*y/(Km + y)
p k1=0.1, S=1, Kd=1, kdx=0.1, p=2
p ksy=1, k2=1, ET=1, Km=1
# XP=y, YP=x, TOTAL=100, METH=stiff, XLO=0, XHI=4, YLO=0, YHI=1.05 (I don't exactly understand what is going on here)
Again, this uses a program called XPP or WINPP.
Any help with this would be appreciated; the original paper I am trying to replicate this from is Design principles of biochemical oscillators by Bela Novak and John J. Tyson.
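For the nullclines themselves, one common approach is to evaluate dx/dt and dy/dt on a grid and draw their zero contours in the phase plane. Here is a sketch under my reading of the XPP line (XP/YP pick the axis variables, TOTAL the integration time, METH the integrator, and XLO/XHI, YLO/YHI the plot window; since XP=y and YP=x, protein goes on the horizontal axis); it assumes the Sys function, parameters, and x, y trajectory defined above:
# Sketch: the nullclines are the zero contours of dx/dt and dy/dt.
# Window taken from the XPP options: y in [0, 4], x in [0, 1.05].
yg, xg = np.meshgrid(np.linspace(0, 4, 400), np.linspace(0, 1.05, 400))
dxdt = (k1*S*Kd**p)/(Kd**p + yg**p) - kdx*xg
dydt = ksy*xg - (k2*ET*yg)/(Km + yg)

fig2, ax2 = plt.subplots()
ax2.contour(yg, xg, dxdt, levels=[0], colors='blue')  # x-nullcline
ax2.contour(yg, xg, dydt, levels=[0], colors='red')   # y-nullcline
ax2.plot(y, x, color='black')                         # trajectory on top
ax2.set_xlabel("Protein concentration (y)")
ax2.set_ylabel("mRNA concentration (x)")
plt.show()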
I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415+x)*(dydt)**2 - 50/1.006*x + 9.81*cos(y)
    dy2dt2 = (-9.81*1.006*sin(y) - 2*(dxdt)*(dydt))/(0.415+x)
    return np.array([x, y, dx2dt2, dy2dt2])
init = array([0,pi/18,0,0])
time = np.linspace(0.0,10.0,1000)
sol = odeint(deriv,init,time)
def plot(h, t):
    n, u, x, y = h
    n = (0.4+x)*sin(y)
    u = (0.4+x)*cos(y)
    return np.array([n, u, x, y])
init2 = array([0.069459271,0.393923101,0,pi/18])
time2 = np.linspace(0.0,10.0,1000)
sol2 = odeint(plot,init2,time2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:,0], sol2[:, 1], label = 'hi')
plt.legend()
plt.show()
where x and y are two variables, and I'm trying to convert x and y to the rectangular coordinates n (x-axis) and u (y-axis), and then graph n and u with n on the x-axis and u on the y-axis. However, when I graph the code above it gives me:
Instead, I should be getting an image somewhat similar to this:
The first part of the code, from "def deriv(z, t):" to "sol = odeint(deriv, ...)", is where the values of x and y are generated; using those values I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first solution should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return an array containing the right-hand sides of a system of first-order differential equations.
In your case, what you are solving is

x' = x
y' = y
dxdt' = (0.415 + x) * dydt^2 - (50/1.006) * x + 9.81 * cos(y)
dydt' = (-9.81 * 1.006 * sin(y) - 2 * dxdt * dydt) / (0.415 + x)

while instead you should be solving

x' = dxdt
y' = dydt
dxdt' = (0.415 + x) * dydt^2 - (50/1.006) * x + 9.81 * cos(y)
dydt' = (-9.81 * 1.006 * sin(y) - 2 * dxdt * dydt) / (0.415 + x)

To do so, change your code to this:
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])
init = array([0, pi / 18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)
plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates.
I'm not sure why you try to solve the ODE again instead of just doing this:
x = sol[:,0]
y = sol[:,1]
def plot(h):
    x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u])

n, u = plot((x, y))
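Graphing n against u is then just (a continuation of the snippet above):
plt.plot(n, u)
plt.xlabel("n")
plt.ylabel("u")
plt.show()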
As of now, what you are doing there is solving this system:

n' = (0.4 + x) * sin(y)
u' = (0.4 + x) * cos(y)
x' = x
y' = y

which leads to x = e^t and y = e^t, and hence n' = (0.4 + e^t) * sin(e^t) and u' = (0.4 + e^t) * cos(e^t).
Without going too much into detail, with some intuition you can see that this leads to an attractor, since the derivatives of n and u start to switch sign faster and with greater magnitude at an exponential rate, making n and u collapse onto an attractor, as shown by your plot.
If you are actually trying to solve another differential equation, I would need to see it in order to help you further.
This is what happens if you do the transformation and set the time to 1000:
I am comparing the amplitude and phase spectra in Matlab and NumPy. I think Matlab works correctly: NumPy computes the correct amplitude spectrum, but the phase spectrum is strange. How must I change the Python code to compute the FFT correctly with NumPy?
Matlab:
fs = 1e4;
dt = 1 / fs;
t = 0:dt:0.5;
F = 1e3;
y = cos(2*pi*F*t);
S = fftshift(fft(y) / length(y));
f_scale = linspace(-1, 1, length(y)) * (fs / 2);
a = abs(S);
phi = (angle(S));
subplot(2, 1, 1)
plot(f_scale, a)
title('amplitude')
subplot(2, 1, 2)
plot(f_scale, phi)
title('phase')
Python:
import numpy as np
import matplotlib.pyplot as plt
fs = 1e4
dt = 1 / fs
t = np.arange(0, 0.5, dt)
F = 1e3
y = np.cos(2*np.pi*F*t)
S = np.fft.fftshift(np.fft.fft(y) / y.shape[0])
f_scale = np.linspace(-1, 1, y.shape[0]) * (fs / 2)
a = np.abs(S)
phi = np.angle(S)
plt.subplot(2, 1, 1, title="amplitude")
plt.plot(f_scale, a)
plt.subplot(2, 1, 2, title="phase")
plt.plot(f_scale, phi)
plt.show()
Matlab output:
NumPy output:
It's a problem with understanding np.arange: it stops one dt before reaching the end value (the interval you pass is open on the right side). If you define
t = np.arange(0, 0.5+dt, dt)
everything will work fine.
As pointed out in another answer, to make the Python plot match the Matlab output, you have to adjust the t array to have the same values as the t array in the Matlab code.
However, if your intent was to have an integer number of periods in the signal, so the FFT has just two nonzero values (at ± the input frequency), then it is the Python code that is correct. The phase in the Python code looks strange because all the Fourier coefficients except those associated with the signal's frequency are (theoretically) 0. With finite precision arithmetic, the coefficients end up being numerical "noise" with very small amplitude and essentially random phase.
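If you want a readable phase plot in that case, a common trick (a sketch, continuing the Python snippet above) is to zero the phase wherever the amplitude is negligible:
# Zero the phase of bins whose amplitude is just numerical noise.
threshold = 1e-8 * a.max()
phi_clean = np.where(a > threshold, phi, 0.0)
plt.plot(f_scale, phi_clean)
plt.show()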
So I've got some data stored as two lists, and plotted them using
plot(datasetx, datasety)
Then I set a trendline
trend = polyfit(datasetx, datasety, 2)  # fit a quadratic (degree 2)
trendx = []
trendy = []
for a in range(datasetx[0], (datasetx[-1]+1)):
    trendx.append(a)
    trendy.append(trend[0]*a**2 + trend[1]*a + trend[2])
plot(trendx, trendy)
But I have a third list of data, which is the error in the original datasety. I'm fine with plotting the error bars, but what I don't know is how, using this, to find the error in the coefficients of the polynomial trendline.
So say my trendline came out to be 5x^2 + 3x + 4 = y, there needs to be some sort of error on the 5, 3 and 4 values.
Is there a tool using NumPy that will calculate this for me?
I think you can use the function curve_fit of scipy.optimize (documentation). A basic example of its usage:
import numpy as np
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a*x**2 + b*x + c
x = np.linspace(0,4,50)
y = func(x, 5, 3, 4)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
Following the documentation, pcov gives:
The estimated covariance of popt. The diagonals provide the variance
of the parameter estimate.
So in this way you can calculate an error estimate on the coefficients. To obtain the standard deviation, take the square root of the variance.
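For example, using the pcov from the snippet above:
# One-sigma uncertainties on the fitted coefficients.
perr = np.sqrt(np.diag(pcov))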
Now you have an error on the coefficients, but it is only based on the deviation between the ydata and the fit. In case you also want to account for an error on the ydata itself, the curve_fit function provides the sigma argument:
sigma : None or N-length sequence
If not None, it represents the standard-deviation of ydata. This
vector, if given, will be used as weights in the least-squares
problem.
A complete example:
import numpy as np
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a*x**2 + b*x + c
x = np.linspace(0,4,20)
y = func(x, 5, 3, 4)
# generate noisy ydata
yn = y + 0.2 * y * np.random.normal(size=len(x))
# generate the error on ydata (kept positive, since sigma represents a standard deviation)
y_sigma = 0.2 * y * np.abs(np.random.normal(size=len(x)))
popt, pcov = curve_fit(func, x, yn, sigma = y_sigma)
# plot
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(x, yn, yerr = y_sigma, fmt = 'o')
ax.plot(x, np.polyval(popt, x), '-')
ax.text(0.5, 100, r"a = {0:.3f} +/- {1:.3f}".format(popt[0], pcov[0,0]**0.5))
ax.text(0.5, 90, r"b = {0:.3f} +/- {1:.3f}".format(popt[1], pcov[1,1]**0.5))
ax.text(0.5, 80, r"c = {0:.3f} +/- {1:.3f}".format(popt[2], pcov[2,2]**0.5))
ax.grid()
plt.show()
Then something else, about using numpy arrays. One of the main advantages of using numpy is that you can avoid for loops, because operations on arrays are applied elementwise. So the for loop in your example can also be written as follows:
trendx = arange(datasetx[0], (datasetx[-1]+1))
trendy = trend[0]*trendx**2 + trend[1]*trendx + trend[2]
Where I use arange instead of range as it returns a numpy array instead of a list.
In this case you can also use the numpy function polyval:
trendy = polyval(trend, trendx)
I have not been able to find any way of getting the errors in the coefficients that is built into numpy or python. I have a simple tool that I wrote based on Sections 8.5 and 8.6 of John Taylor's An Introduction to Error Analysis. Maybe this will be sufficient for your task (note that the default return is the variance, not the standard deviation). You can get large errors (as in the provided example) because of significant covariance.
def leastSquares(xMat, yMat):
    '''
    Purpose
    -------
    Perform least squares using the procedure outlined in Sections 8.5 and 8.6
    of Taylor, solving the matrix equation X a = Y

    Examples
    --------
    >>> from scipy import matrix
    >>> xMat = matrix([[ 1,  5,  25],
    ...                [ 1,  7,  49],
    ...                [ 1,  9,  81],
    ...                [ 1, 11, 121]])
    >>> # matrix has rows of format [constant, x, x^2]
    >>> yMat = matrix([[142],
    ...                [168],
    ...                [211],
    ...                [251]])
    >>> a, varCoef, yRes = leastSquares(xMat, yMat)
    >>> # a is a column matrix, holding the three coefficients a, b, c,
    >>> # corresponding to the equation a + b*x + c*x^2

    Returns
    -------
    a: matrix
        best fit coefficients
    varCoef: matrix
        variance of derived coefficients
    yRes: matrix
        y-residuals of fit
    '''
    numMeas, numVars = xMat.shape
    xxMat = xMat.T * xMat
    xyMat = xMat.T * yMat
    xxMatI = xxMat.I
    aMat = xxMatI * xyMat
    yAvgMat = xMat * aMat
    yRes = yMat - yAvgMat
    var = (yRes.T * yRes) / (numMeas - numVars)
    varCoef = xxMatI.diagonal() * var[0, 0]
    return aMat, varCoef, yRes
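Since the matrix class used above has been deprecated in recent NumPy/SciPy releases, here is a sketch of the same computation with plain ndarrays (my adaptation, not part of the original tool):
import numpy as np

def least_squares_arrays(X, y):
    # Same computation as leastSquares above, with plain ndarrays.
    numMeas, numVars = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)   # (X^T X)^-1
    a = XtX_inv @ (X.T @ y)            # best-fit coefficients
    yRes = y - X @ a                   # y-residuals
    var = (yRes @ yRes) / (numMeas - numVars)
    varCoef = np.diag(XtX_inv) * var   # variance of each coefficient
    return a, varCoef, yRes

# Usage with the example from the docstring:
X = np.array([[1, 5, 25], [1, 7, 49], [1, 9, 81], [1, 11, 121]], dtype=float)
y = np.array([142., 168., 211., 251.])
a, varCoef, yRes = least_squares_arrays(X, y)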