Lyapunov exponents with JiTCODE - python

I am using JiTCODE to calculate the Lyapunov exponents of the Lorenz oscillator.
Here is a simple script following the documentation:
import numpy as np
import pylab as plt
from jitcode import jitcode_lyap, y

p, r, b = 10.0, 28.0, 8.0/3.0
x0 = np.array([10.0, 10.0, 5.0])

f = [
    p * y(1) - p * y(0),
    -y(0) * y(2) + r * y(0) - y(1),
    y(0) * y(1) - b * y(2)
]

n = len(f)
times = range(10, 1000, 10)
nstep = len(times)
lyaps = np.zeros((nstep, n))

ODE = jitcode_lyap(f, n_lyap=n)
ODE.set_integrator("vode")
ODE.set_initial_value(x0, 0.0)

for time in times:
    print(ODE.integrate(time)[1])
And I got the following error.
Generating, compiling, and loading C code.
generated C code for f
generated symbolic Jacobian
generated C code for Jacobian
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_ode.py:1009: UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
self.messages.get(istate, unexpected_istate_msg)))
Traceback (most recent call last):
File "main.py", line 29, in <module>
print(ODE.integrate(time)[1])
File "/home/abolfazl/.local/lib/python3.6/site-packages/jitcode/_jitcode.py", line 755, in integrate
super(jitcode_lyap, self).integrate(*args, **kwargs)
File "/home/abolfazl/.local/lib/python3.6/site-packages/jitcode/_jitcode.py", line 656, in integrate
return self.integrator.integrate(*args,**kwargs)
File "/home/abolfazl/.local/lib/python3.6/site-packages/jitcode/integrator_tools.py", line 131, in integrate
raise UnsuccessfulIntegration
jitcode.integrator_tools.UnsuccessfulIntegration
I don't think the equations themselves are the problem, because I have solved this system with plain jitcode; it is only jitcode_lyap that cannot solve it.

I got the answer from the developer:
Your problem is that your integration steps are too long. Since the tangent vectors are only re-normalised after every integration step, this causes numerical overflows in them as the Lorenz system has a much smaller time scale than the one used as an example in the documentation.
Using times = range(1, 100, 1) fixes this.
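For reference, a minimal sketch of the corrected loop with the shorter steps, using the same f, n and x0 as above; storing the estimates and averaging over the later ones is my own addition, not part of the developer's answer:

times = range(1, 100, 1)
lyaps = np.zeros((len(times), n))

ODE = jitcode_lyap(f, n_lyap=n)
ODE.set_integrator("vode")
ODE.set_initial_value(x0, 0.0)

for i, time in enumerate(times):
    lyaps[i] = ODE.integrate(time)[1]

# discard the initial transient and average the remaining estimates
print(np.mean(lyaps[20:], axis=0))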

Related

How can I save scipy.integrate.solve_ivp solution to a file to minimize load time?

Background:
I am currently using solve_ivp to solve a stiff system of ODEs. Below is a simple illustrative example that has been artificially altered (max_step=1e-3) to make the output file larger. The system of ODEs I actually care about is (in terms of file size) roughly equivalent to setting max_step=1e-4 for the system here.
Question:
How do I minimize the time required to load the .sol part of a solve_ivp solution?
Note:
I am not trying to optimize how long it takes to save the files (since I only need to do that once), but I need to minimize load time of the soln.sol objects because I will be doing it millions of times on a cluster.
Here is a simplified version of my current attempt.
File 1: create and save soln.sol
import numpy as np
from scipy.integrate import solve_ivp
import pickle

def deriv(t, y):
    x, y, z = y
    xdot = -0.04 * x + 1.e4 * y * z
    ydot = 0.04 * x - 1.e4 * y * z - 3.e7 * y**2
    zdot = 3.e7 * y**2
    return xdot, ydot, zdot

t0, tf = 0, 100
y0 = 1, 0, 0

soln = solve_ivp(deriv, (t0, tf), y0, method='Radau', dense_output=True, max_step=1e-3)
print(soln.nfev, 'evaluations required.')
print(soln.sol(100))
print(soln.sol(100)[1])

with open('file.pkl', 'wb') as file:
    pickle.dump(soln.sol, file)
File 2: load soln.sol and do stuff
import pickle
import scipy.integrate  # class definition speeds up load

# Open the file in binary mode
with open('file.pkl', 'rb') as file:
    # Call load method to deserialize
    myvar = pickle.load(file)

print(myvar)
print(myvar(100))
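This does not answer the storage question itself, but since only the load time matters here, a quick sketch of how one might time just the deserialization of the file.pkl produced above (the evaluation at t = 50 is arbitrary, only there to confirm the loaded interpolant works):

import pickle
import time
import scipy.integrate  # imported up front, as in File 2 above

start = time.perf_counter()
with open('file.pkl', 'rb') as file:
    sol = pickle.load(file)
print('Loaded dense output in %.4f s' % (time.perf_counter() - start))
print(sol(50.0))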

Trying to integrate a function this error shows: TypeError: only size-1 arrays can be converted to Python scalars

I am trying to do an integral over the first derivative of the Fermi-Dirac function f(E) and a transmission function t(E) to find a value for the conductance G. The problem I am having is that t(E) is a summation over several channels, so the integrand is vector-valued rather than scalar.
This is the code that I have produced, the error shows up in the integral.
import numpy as np
import scipy.constants as phys
import scipy.integrate as integrate
import math

E = np.expand_dims(np.linspace(0, 15, 10000), 1)
n = np.arange(0, 6)
h = 1/(1 + np.exp(-2*np.pi * (E-(n-0.5)*3)))

fermi = 2.5
kT = 0.2

def fermi_integral(E, fermi, T, n):
    return (np.exp((E-fermi)/(kT))/((np.exp((E-fermi)/(kT)) + 1)**2) * 1/(kT)) * h
# above function is the integral part of G; df/dE * t(E)

result = integrate.quad(fermi_integral, 0, np.inf, args=(kT, fermi, E))
# integrating the function from 0 to infinity
print('Result of integral;', result)

G = (-(2*math.e**2)/phys.Planck) * np.array(result)
# multiplying the constants outside the integral in
print('Result for G:', G)
I am looking for multiple values but have not been able to produce any.
Any help would be appreciated.
Edit:
The error shows as follows:
Traceback (most recent call last):
File ~\OneDrive\Documents\BSc_Project\Fermi.py:27 in <module>
result = integrate.quad(fermi_integral, 0, np.inf, args = (kT, fermi, E))
File ~\anaconda3\lib\site-packages\scipy\integrate\quadpack.py:351 in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File ~\anaconda3\lib\site-packages\scipy\integrate\quadpack.py:465 in _quad
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
TypeError: only size-1 arrays can be converted to Python scalars
scipy.integrate.quad does not support integration of vector-valued functions. The vectorized version is scipy.integrate.quad_vec, and you need SciPy 1.8.0 or newer for the "args" parameter.
Simply replacing quad with quad_vec works, but results in NaNs, although I think that might be a flaw in the logic, and not the code itself.
The answer was based on this SO question.
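A minimal sketch of that replacement, assuming SciPy 1.8.0+ for the args keyword; the constants and formulas are taken from the question, but the transmission t(E) is recomputed inside the integrand (so E is the only integration variable) and the derivative of the Fermi function is rewritten in an overflow-safe, mathematically equivalent form to avoid NaNs:

import numpy as np
from scipy.integrate import quad_vec

fermi, kT = 2.5, 0.2
n = np.arange(0, 6)

def integrand(E, fermi, kT):
    # transmission t(E): one value per channel n, so the integrand is vector-valued
    t = 1/(1 + np.exp(-2*np.pi*(E - (n - 0.5)*3)))
    # -df/dE of the Fermi-Dirac function, written with exp(-|u|) so it cannot overflow
    u = (E - fermi)/kT
    dfdE = np.exp(-abs(u))/(kT*(1 + np.exp(-abs(u)))**2)
    return dfdE * t

result, error = quad_vec(integrand, 0, np.inf, args=(fermi, kT))
print(result)  # one integral per channel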

Python: plotting an exponential on an axis

I'm currently working on a piece of code to model the evolution of the dark energy equation of state parameter w with the scale factor a. In order to do this I am solving a system of three coupled ODEs; however, the derivative used is with respect to e-foldings N = ln(a) (in the code x = w and ln(a) = t for simplicity). I have the following code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import math

plt.rc('text', usetex=True)
plt.rc('font', family='serif')

def f(s, t):
    p = 1.0
    G = 1.0 + (1.0/p)
    xm = 0
    x = s[0]
    y = s[1]
    z = s[2]
    dxdt = (x - 1.0)*(3.0*(1.0 + x) - z*math.sqrt(3.0*(1.0 + x)*y))
    dydt = -3.0*(x - xm)*y*(1.0 - y)
    dzdt = -math.sqrt(3.0*(1.0 + x)*y)*(G - 1.0)*(z**2)
    return [dxdt, dydt, dzdt]

t = np.linspace(0.0001, 1, 10000)
s0 = [-0.667, 0.01, 0.45]
s = odeint(f, s0, t)

plt.plot(t, s[:,0], 'b-')
plt.grid(True)
plt.xlabel('e-foldings, N = ln(a)')
plt.ylabel('Equation of state parameter w')
plt.show()
which gives me this plot.
This works fine; however, I want the x-axis in units of a rather than N = ln(a), and I can't figure out how to make that work. I've tried changing the plot line to plt.plot(math.exp(t), s[:,0], 'b-'), but I get the following error:
Traceback (most recent call last):
File "/Users/bradleyaldous/propr2.py", line 26, in <module>
plt.plot(math.exp(t),s[:,0],'b-')
TypeError: only size-1 arrays can be converted to Python scalars
[Finished in 6.0s]
Any help is greatly appreciated.
EDIT:
I've tried using np.exp() in the plot line like I did with the
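For reference, the error comes from math.exp only accepting a single float, whereas np.exp is applied elementwise to a whole array, so a plot line along these lines should do it (a sketch using the variables from the code above):

plt.plot(np.exp(t), s[:,0], 'b-')  # a = exp(N), computed elementwise over the array t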

python complex_ode pass matrix-valued parameters

I'm having some trouble with python's complex_ode solver.
I'm trying to solve the following equation:
dy/dt = -i*A*y - i*cos(Omega*t)*B*y
where A and B are NxN arrays and the unknown y is an Nx1 array, i is the imaginary unit and Omega is a parameter.
Here's my code:
import numpy as np
from scipy.integrate import ode, complex_ode

N = 3       # linear matrix dim
Omega = 1.0 # parameter

# define symmetric matrices A and B
A = np.random.ranf((N,N))
A = (A + A.T)/2.0
B = np.random.ranf((N,N))
B = (B + B.T)/2.0

# define RHS of ODE
def f(t, y, Omega, A, B):
    return -1j*A.dot(y) - 1j*np.cos(Omega*t)*B.dot(y)

# define list of parameters
params = [Omega, A, B]

# choose solver: need complex_ode for this ODE
#solver = ode(f)
solver = complex_ode(f)
solver.set_f_params(*params)
solver.set_integrator("dop853")

# set initial value
v0 = np.zeros((N,), dtype=np.float64)
v0[0] = 1.0

# check that the function f works properly
print f(0, v0, Omega, A, B)

# solve-check the ODE
solver.set_initial_value(v0)
solver.integrate(10.0)
print solver.successful()
Running this script produces the error
capi_return is NULL
Call-back cb_fcn_in___user__routines failed.
Traceback (most recent call last):
File "ode_test.py", line 37, in <module>
solver.integrate(10.0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 515, in integrate
y = ode.integrate(self, t, step, relax)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 388, in integrate
self.f_params, self.jac_params)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 946, in run
tuple(self.call_args) + (f_params,)))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 472, in _wrap
f = self.cf(*((t, y[::2] + 1j * y[1::2]) + f_args))
TypeError: f() takes exactly 5 arguments (2 given)
If instead I use solver = ode(f), ie. the real-valued solver, it runs fine. Except that it doesn't solve the ODE I want which is complex-valued :(
I then tried to reduce the number of parameters by making the matrices A and B global variables. This way the only parameter the function f accepts is Omega. The error changes to
capi_return is NULL
Call-back cb_fcn_in___user__routines failed.
Traceback (most recent call last):
File "ode_test.py", line 37, in <module>
solver.integrate(10.0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 515, in integrate
y = ode.integrate(self, t, step, relax)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 388, in integrate
self.f_params, self.jac_params)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 946, in run
tuple(self.call_args) + (f_params,)))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/_ode.py", line 472, in _wrap
f = self.cf(*((t, y[::2] + 1j * y[1::2]) + f_args))
TypeError: 'float' object has no attribute '__getitem__'
where I figured out that float refers to the parameter Omega [by trying an integer]. Again, "ode" alone works in this case as well.
Last, I tried the same complex-valued equation, but with A and B just numbers. I tried to pass them both as parameters, i.e. params = [Omega, A, B], as well as making them global variables, in which case params = [Omega]. The error is the same TypeError: 'float' object has no attribute '__getitem__' as above. And once again this problem does not occur for the real-valued "ode".
I know zvode is an alternative, but it appears to become quite slow for large N. In the real problem I have, A is a diagonal matrix but B is a non-sparse full matrix.
Any insights are much appreciated! I'm interested both in (i) alternative ways to solve this complex-valued ODE with array-valued parameters, and (ii) how to get complex_ode to run :)
Thanks!
It seems like the link that Reti43 posted contains the answer, so let me put it here for the benefit of future users:
from scipy.integrate import complex_ode
import numpy as np

N = 3
Omega = 1.0

class myfuncs(object):

    def __init__(self, f, fargs=[]):
        self._f = f
        self.fargs = fargs

    def f(self, t, y):
        return self._f(t, y, *self.fargs)

def f(t, y, Omega, A, B):
    return -1j*(A + np.cos(Omega*t)*B).dot(y)

A = np.random.ranf((N,N))
A = (A + A.T)/2.0
B = np.random.ranf((N,N))
B = (B + B.T)/2.0

v0 = np.zeros((N,), dtype=np.float64)
v0[0] = 1.0
t0 = 0

case = myfuncs(f, fargs=[Omega, A, B])
solver = complex_ode(case.f)
solver.set_initial_value(v0, t0)
solver.integrate([10.0])
print solver.successful()

"""
t1 = 10
dt = 1
while solver.successful() and solver.t < t1:
    solver.integrate(solver.t+dt)
    print(solver.t, solver.y)
"""
Could someone maybe comment on why this trick does the job?
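One plausible reading of why it works: complex_ode ends up calling the right-hand side with only (t, y), so any extra parameters have to be captured by the callable itself rather than passed via set_f_params, and the class above simply binds them in. Under that reading, functools.partial is a more compact sketch of the same idea (using the f, Omega, A, B, v0 and t0 defined above):

from functools import partial

# bind Omega, A and B so the solver only ever sees a function of (t, y)
solver = complex_ode(partial(f, Omega=Omega, A=A, B=B))
solver.set_initial_value(v0, t0)
solver.integrate(10.0)
print solver.successful()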

Division by zero? (in Newton iteration method)

I'm doing a Newton iteration to find T_a. Everything seems fine in the code except in one of the very first definitions.
My rho(T_a) returns a division by zero (it behaves as if T_a were zero, while it's just a variable). If I change the T_a in the equation to something like 100, everything runs smoothly.
Any idea why it's returning a division by zero?
from numpy import *
import numpy as np
import pylab
import scipy
from scipy.optimize import leastsq
from math import *
import matplotlib.pyplot as plt
from scipy import integrate
# THETA NOTATION:
#pi/2: substellar point
#-pi/2: antistellar point
#0: terminators
#define constants used in equations:
alb = 0.2 #constant albedo
F = 866 #J/s*m**2
R = 287.0 #J/K*kg
U = 5.0 #m/s
C_p = 1000 #J/K*kg
C_d = 0.0015
p1 = 10**4
p2 = 10**5.0
p3 = 10**6.0 #Pa
sig = 5.67*10**-8.0 #J/s*m**2*K**4 #Stefan-Boltzmann cst
def rho(T_a):
    p1 = 10000.0
    R = 287.0 #J/K*kg
    return (p1/(T_a*R))

def a(T_a):
    U = 5 #m/s
    C_p = 1000 #J/K*kg
    C_d = 0.0015
    return rho(T_a)*C_p*C_d*U

#################################################
##### PART 2 : check integrals equality
#################################################

#define the RHS and LHS of integral equality
def LHS(theta):
    return (1-alb)*F*np.sin(theta)*np.cos(theta)

#define the result of each integral
Left = integrate.quad(lambda theta: LHS(theta), 0, pi/2)[0]

#define a function 1-(result LHS/result RHS) >>> We look for the zero of this
x0 = 130.0 #guess a value for T_a
#T_a = 131.0

#Python way of solving for the zero of the function
#Define T_g in function of T_a, have RHS(T_a) return T_g**4 etc, have result_RHS(T_a) return int.RHS(T_a),
#have func(T_a) return result_LHS/result_RHS
def T_g(T_a, theta):
    return np.roots(array([(sig),0,0,(a(T_a)),((-a(T_a)*T_a)-LHS(theta))]))[3]

def RHS(theta, T_a):
    return sig*T_g(T_a,theta)**4*np.cos(theta)

def result_RHS(T_a, theta):
    return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]

def function(T_a, theta):
    return 1-((Left/result_RHS(T_a,theta)))

theta = np.arange(-pi/2, pi/2, pi/20)
T_a_0 = scipy.optimize.newton(function, x0, fprime=None, args=(theta,), tol=(10**-5), maxiter=50000)
Output:
Traceback (most recent call last):
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 85, in <module>
T_a_0 = scipy.optimize.newton(function,x0,fprime=None,args=(theta,),tol=(10**-5),maxiter=50000)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/zeros.py", line 120, in newton
q0 = func(*((p0,) + args))
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 81, in function
return 1-((Left/result_RHS(T_a,theta)))
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 78, in result_RHS
return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.py", line 247, in quad
retval = _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/integrate/quadpack.py", line 312, in _quad
return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 78, in <lambda>
return integrate.quad(lambda theta: RHS(T_a,theta), -pi/2, pi/2)[0]
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 75, in RHS
return sig*T_g(T_a,theta)**4*np.cos(theta)
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 72, in T_g
return np.roots(array([(sig),0,0,(a(T_a)),((-a(T_a)*T_a)-LHS(theta))]))[3]
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 38, in a
return rho(T_a)*C_p*C_d*U
File "/Users/jadecheclair/Documents/PHY479Y/FindT_a.py", line 32, in rho
return (p1/(T_a*R))
ZeroDivisionError: float division by zero
Your RHS function is defined slightly differently to all the others, in that it has theta first and T_a as its second argument:
def RHS(theta, T_a):
    return sig*T_g(T_a, theta)**4*np.cos(theta)
I think that's why you passed the arguments in the wrong order here:
lambda theta: RHS(T_a,theta)
Get them in the right order and you should be OK.
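For example, keeping RHS exactly as the question defines it, only the call inside the lambda needs its arguments swapped:

def result_RHS(T_a, theta):
    return integrate.quad(lambda theta: RHS(theta, T_a), -pi/2, pi/2)[0]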
As a side-note, some of your imports look like they could cause weird bugs:
from numpy import *
from math import *
Numpy and the math module have at least a few function names in common, like sqrt. It's safer to just do import math and import numpy as np, and access the functions through the module name. Otherwise what happens when you call sqrt could change depending on the order you do your imports in.
You reversed your parameters:
In result_RHS you call: RHS(T_a,theta), but the parameter definition of RHS is def RHS(theta,T_a)
Swap those in the definition and the error no longer occurs. Your definition should look like this:
def RHS(T_a, theta):
    return sig*T_g(T_a, theta)**4*np.cos(theta)
