complex ODE systems in scipy - python

I am having trouble solving the optical Bloch equations, which form a first-order ODE system with complex values. I have found that scipy may solve such a system, but its webpage offers too little information and I can hardly understand it.
I have 8 coupled first-order ODEs, and I should generate a function like:

def derv(y):
    # compute the time derivative of the elements in y
    # return the answers as an array

and then call complex_ode(derv).
My questions are:
1. My y is not a list but a matrix; how can I give a correct output that fits into complex_ode()?
2. complex_ode() needs a Jacobian; I have no idea how to start constructing one, and what type should it be?
3. Where should I put the initial conditions, as in the normal ode, and the time linspace?
This is scipy's complex_ode link:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.complex_ode.html
Could anyone provide me with more information so that I can learn a bit more?

I think we can at least point you in the right direction. The optical Bloch equation is a problem that is well understood in the scientific community, although not by me :-), so there are already solutions on the internet to this particular problem:
http://massey.dur.ac.uk/jdp/code.html
However, to address your needs: you spoke of using complex_ode, which I suppose is fine, but I think plain scipy.integrate.ode will work just as well, according to the documentation:
from scipy.integrate import ode

y0, t0 = [1.0j, 2.0], 0

def f(t, y, arg1):
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

def jac(t, y, arg1):
    return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
t1 = 10
dt = 1
while r.successful() and r.t < t1:
    r.integrate(r.t + dt)
    print(r.t, r.y)
You also get the added benefit of an older, more established, and better documented function. I am surprised you have 8 and not 9 coupled ODEs, but I'm sure you understand this better than I do. Yes, you are correct: your function should be of the form ydot = f(t, y), which you call derv(), but you're going to need to make sure your function takes at least two parameters, like derv(t, y). If your y is a matrix, no problem! Just reshape it in the derv(t, y) function like so:

Y = numpy.reshape(y, (num_rows, num_cols))

As long as num_rows*num_cols = 8, your number of ODEs, you should be fine. Then use the matrix in your computations. When you're all done, just be sure to return a flat vector and not a matrix, like:

out = numpy.reshape(Y, 8)
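Putting those pieces together, here is a minimal sketch of the whole pattern; the 2x4 shape and the placeholder dynamics are illustrative assumptions, not your Bloch terms:

import numpy as np
from scipy.integrate import ode

num_rows, num_cols = 2, 4              # hypothetical split; num_rows*num_cols = 8

def derv(t, y):
    Y = np.reshape(y, (num_rows, num_cols))  # treat the flat state as a matrix
    Ydot = -1j * Y                           # placeholder dynamics; your Bloch terms go here
    return np.reshape(Ydot, 8)               # hand back a flat vector

y0 = np.ones(8, dtype=complex)         # initial conditions go here...
t0, t1, dt = 0.0, 10.0, 0.1            # ...and the time grid is set by the stepping loop

r = ode(derv).set_integrator('zvode', method='bdf')  # no Jacobian supplied; that is allowed
r.set_initial_value(y0, t0)
while r.successful() and r.t < t1:
    r.integrate(r.t + dt)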
The Jacobian is not required, but it will likely allow the computation to proceed much more quickly. If you do not know how to compute it, you may want to consult Wikipedia or a calculus textbook. It's pretty simple, but it can be time-consuming. As for initial conditions, you should probably already know what those should be, whether they are complex- or real-valued. As long as you select values that are within reason, it shouldn't matter much.

Related

Scipy quad returns (near) zero, even with points argument

Sometimes scipy.integrate.quad wrongly returns values near 0. This has been addressed in this question, and it seems to happen when the integration routine fails to evaluate the function in the narrow range where it differs significantly from 0. In this and similar questions, the accepted solution was always to use the points parameter to tell scipy where to look. For me, however, this seems to actually make things worse.
Integration of exponential distribution pdf (answer should be just under 1):
from scipy import integrate
import numpy as np
t2=.01
#initial problem: fails when f is large
f=5000000
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2)
#>>>(3.8816838175855493e-22, 7.717972744727115e-22)
Now, the "fix" makes it fail, even on smaller values of f where the original worked:
f=2000000
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2)
#>>>(1.00000000000143, 1.6485317987792634e-14)
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=[t2])
#>>>(1.6117047218907458e-17, 3.2045611390981406e-17)
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=[t2,t2])
#>>>(1.6117047218907458e-17, 3.2045611390981406e-17)
What's going on here? How can I tell scipy what to do so that this will evaluate for arbitrary values of f?
It's not a generic solution, but I've been able to fix this for the given function by using points=np.geomspace(...) to guide the numerical algorithm toward heavier sampling in the interesting region near t2. I'll leave this open for a little bit to see if anyone finds a generic solution.
Generate random values for t2 and f, then check the min and max values over the subset that should be very close to 1:
>>> t2s=np.exp((np.random.rand(20)-.5)*10)
>>> fs=np.exp((np.random.rand(20)-.1)*20)
>>> min(integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=t2-np.geomspace(1/f,t2,40))[0] for f in fs for t2 in t2s if f>(1/t2)*10)
0.9999621825009719
>>> max(integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=t2-np.geomspace(1/f,t2,40))[0] for f in fs for t2 in t2s if f>(1/t2)*10)
1.000000288722783
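For reference, the same trick also recovers the original failing case. The exact integral is 1 - exp(-f*t2), which is 1 to machine precision for f = 5000000 and t2 = 0.01, so this sketch should print a value very close to 1 rather than ~4e-22:

import numpy as np
from scipy import integrate

t2 = .01
f = 5000000
val, err = integrate.quad(lambda t: f*np.exp(-f*(t2-t)), 0, t2,
                          points=t2 - np.geomspace(1/f, t2, 40))
print(val)  # expected: close to 1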

The most efficient way to encode max, min and abs in Z3

I have a system of non-linear integer inequalities that I want to solve. In it I need to compute the absolute value of integers and also the maximum/minimum of two integers.
Here is a toy example:
from z3 import *

set_option(verbose=10)
x, y, z, z1 = Ints('x y z z1')

def abs(x):
    return If(x >= 0, x, -x)

def max(x, y):
    return If(x >= y, x, y)

def min(x, y):
    return If(x <= y, x, y)

s = Solver()
s.add(x**2 + y**2 >= 26)
s.add(min(abs(y), abs(x)) > 5)
s.add(3*x**2 + 25*y**2 >= 100)
s.add(x*y - z*z1 < 10)
s.add(max(abs(z), abs(z1)) <= 10)
s.add(min(abs(z), abs(z1)) > 1)
s.check()
print(s.model())
My real system is more complicated and takes much longer to run.
I don't really understand how Z3 works under the hood, but I am worried that the way I have defined abs, max, and min using Python functions may make it hard for Z3 to solve the system of inequalities. Is there a better way, one that potentially allows Z3 to be more efficient?
The way you coded them is just fine. There's really no "better" way to code these operations.
Nonlinear problems are really difficult for SMT solvers. In fact, one way they solve these is to assume the values are "real" numbers, solve it, and then check to see if the model actually only consists of integers. Another trick is to reduce to bit-vectors: Assign larger and larger bit-sized vectors to variables and see if one can find a model. You can imagine that both of these techniques are good for "model finding" but are terrible at proving unsat. (For details see: How does Z3 handle non-linear integer arithmetic?)
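If you want to experiment with the bit-vector reduction yourself, here is a rough sketch of the same toy problem over 16-bit vectors. The width 16 is an arbitrary choice on my part, z3py's comparison operators are signed for bit-vectors, and overflow can produce spurious models, so treat this purely as an illustration of the idea:

from z3 import BitVec, If, Solver, sat

w = 16                                  # arbitrary width; widen it if this comes back unsat
x, y, z, z1 = (BitVec(n, w) for n in ('x', 'y', 'z', 'z1'))

def bv_abs(v):
    return If(v >= 0, v, -v)            # beware: abs of the most negative value overflows

s = Solver()
s.add(x*x + y*y >= 26)                  # write the squares out as products
s.add(If(bv_abs(x) <= bv_abs(y), bv_abs(x), bv_abs(y)) > 5)
s.add(3*x*x + 25*y*y >= 100)
s.add(x*y - z*z1 < 10)
s.add(If(bv_abs(z) >= bv_abs(z1), bv_abs(z), bv_abs(z1)) <= 10)
s.add(If(bv_abs(z) <= bv_abs(z1), bv_abs(z), bv_abs(z1)) > 1)
if s.check() == sat:
    print(s.model())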
If your problem is truly non-linear, perhaps an SMT solver just isn't the best tool for you. An actual theorem prover that has support for arithmetic theories might be a better choice, though of course that's an entirely different discussion.
One thing you can try is to "simplify" the problem. For instance, you seem to always be using abs(y) and abs(x); perhaps you can drop the abs terms and simply assert x > 0 and y > 0? Note that this is not a sound reduction: you are explicitly telling the solver to ignore all negative x and y values. But it might be "good enough" for your problem, since you may only care about the case where x and y are positive anyhow. This would help the solver, as it reduces the search space and gets rid of the conditional expressions; keep in mind, though, that you're asking a different question, and hence your solution space is now different. (It might even become unsat with the new constraint.)
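Concretely, reusing the Ints and the min/max/abs helpers from the question above, the restricted version might look like this (remember that it deliberately narrows the solution space):

s = Solver()
s.add(x > 0, y > 0)                 # explicitly ignore negative x and y
s.add(x**2 + y**2 >= 26)
s.add(min(x, y) > 5)                # abs is no longer needed for x and y
s.add(3*x**2 + 25*y**2 >= 100)
s.add(x*y - z*z1 < 10)
s.add(max(abs(z), abs(z1)) <= 10)
s.add(min(abs(z), abs(z1)) > 1)
print(s.check())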
Long story short: non-linear arithmetic is difficult, and the way you're coding min/max/abs is just fine. See if you can "simplify" the problem by not using them, perhaps by solving a related, slightly simpler problem. If that's not possible, I'm afraid you'll have to look beyond SMT solvers to handle your non-linear set of equations. (And none of that will be easy, of course, as the problem is inherently difficult. Again, read through How does Z3 handle non-linear integer arithmetic? for some extra details.)

Non Negative ODE Solutions with functools in R?

I am trying to implement an R algorithm dealing with non-negative ODE systems. I need something like ode45 in MATLAB to define states that have to be non-negative.
I asked about this already 3 years ago, but with no real solution; deSolve is still not the way to go. I found some Python code that looks very promising. Maybe this is possible in R as well. In the end I have to define a function wrapper, as with functools in Python. What it does is pretty simple. Here is the code of the Python wrapper:
from functools import wraps
import numpy as np

def wrap(f):
    @wraps(f)
    def wrapper(t, y, *args, **kwargs):
        low = y < 0                               # remember which states dipped below zero
        y = np.maximum(y, 0)                      # clip the state to zero
        result = f(t, y, *args, **kwargs)
        result[low] = np.maximum(result[low], 0)  # forbid a further decrease there
        return result
    return wrapper
I mean, in Python this is straightforward. The wrapper is applied to the right-hand side before it is handed to the integrator, so it is used in each step of the integration:

solution = scipy.integrate.odeint(wrap(f), y0, t, tfirst=True)
Is the same possible in R? I know there is a functools package with a functools function as well, but I have no clue whether this really works. Can I use events in deSolve for that?
I have been working on this project for 5 years now and I am out of ideas. I have used MATLAB, C++, and Python interfaces, but all of this is too slow; I need it in R. Thank you very much for your help!
deSolve does not support automatic non-negativity constraints, for good reasons. We have had such questions several times in the past, but in all these cases it turned out that the reason for the negative values was an incomplete model specification. The typical case is that something is exported from an empty pool. Because unwanted negative values are usually an indicator of an inadequate model specification, we do not (currently) consider adding a "non-negative" constraint in the future.
Example: in the following equation, X can become negative by model design:
dX/dt = -k
whereas the following cannot:
dX/dt = -k * X
If you need a linear decrease "most of the time" that reduces to zero shortly before X becomes zero, you can use a Monod-type safeguard (or something similar):
dX/dt = -k*X / (k2 + X)
The selection of k2 is relatively uncritical: it should be small enough not to influence the overall behavior, yet not too small compared to the numerical accuracy of the solver.
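The effect is easy to check numerically. Here is a minimal sketch (in Python/scipy, to match the wrapper code above; k and k2 are made-up values) comparing the raw linear decrease with the safeguarded version:

import numpy as np
from scipy.integrate import odeint

k, k2 = 1.0, 1e-4                 # illustrative rate and (small) safeguard constant

def raw(X, t):
    return -k                     # can drive X below zero

def safeguarded(X, t):
    return -k * X / (k2 + X)      # slows to zero as X approaches zero

t = np.linspace(0, 2, 201)
X_raw = odeint(raw, 1.0, t)
X_safe = odeint(safeguarded, 1.0, t)
print(X_raw.min(), X_safe.min())  # the first goes negative, the second should stay near zero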
Another method to avoid negative values is to work in log-transformed space. Here are some related threads:
https://stat.ethz.ch/pipermail/r-sig-dynamic-models/2010q2/000028.html
https://stat.ethz.ch/pipermail/r-sig-dynamic-models/2013q3/000222.html
https://stat.ethz.ch/pipermail/r-sig-dynamic-models/2016/000437.html
In addition, it is of course also possible to write your own wrapper in R.
Hope this helps.

fsolve problems with the starting point

I'm using fsolve in order to solve a nonlinear equation. My problem is that, depending on the starting point, the solutions change, and I am not sure that the ones I found are the most reasonable.
This is the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq, newton

A = np.arange(0.05, 0.95, 0.01)
PHI = np.deg2rad(np.arange(0, 90, 1))

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

B = np.zeros((len(A), len(PHI)))
for i in range(len(A)):
    for j in range(len(PHI)):
        a = A[i]
        Phi = PHI[j]
        b = fsolve(f, 1)
        B[i, j] = b
I fixed x0 = 1 because it seems to give the most reasonable values. But sometimes I think the method doesn't converge and the resulting values are too big.
What can I do to find the best solution?
Many thanks!
The eternal issue with turning non-linear solvers loose is having a really good understanding of your function, your initial guess, the solver itself, and the problem you are trying to address.
I note that there are many (a,Phi) combinations where your function does not have real roots. You should do some math, directed by the actual problem you are trying to solve, and determine where the function should have roots. Not knowing the actual problem, I can't do that for you.
Also, as noted in a (since deleted) answer, this is cyclical in b, so using a bounded solver (such as scipy.optimize.minimize with method='L-BFGS-B') might help to keep things under control. Note that to find roots with a minimizer you minimize the square of your function. If the minimum found is not close to zero (how close is for you to define, based on the problem), the actual roots might be a complex conjugate pair.
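As a sketch of that idea (the bounds [0, 2*pi] and the single (a, Phi) pair are assumptions on my part, chosen only for illustration):

import numpy as np
from scipy.optimize import minimize

def residual_sq(b, a, Phi):
    b = np.asarray(b).ravel()[0]    # minimize passes a length-1 array
    r = np.angle((1 + 3*a**4 - 3*a**2)
                 + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                 + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi
    return r**2                     # square the residual so roots become minima

a, Phi = 0.5, np.deg2rad(30.0)      # one illustrative (a, Phi) pair
res = minimize(residual_sq, x0=1.0, args=(a, Phi),
               method='L-BFGS-B', bounds=[(0.0, 2*np.pi)])
print(res.x, res.fun)               # a root exists only if res.fun is close to zero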
Good luck.

Best way to write a Python function that integrates a gaussian?

In attempting to use scipy's quad method to integrate a Gaussian (let's say there's a Gaussian method named gauss), I was having problems passing the needed parameters to gauss and leaving quad to do the integration over the correct variable. Does anyone have a good example of how to use quad with a multidimensional function?
But this led me to a grander question about the best way to integrate a Gaussian in general. I didn't find a Gaussian integral in scipy (to my surprise). My plan was to write a simple Gaussian function and pass it to quad (or maybe now a fixed-width integrator). What would you do?
Edit: Fixed-width meaning something like trapz that uses a fixed dx to calculate areas under a curve.
What I've come to so far is a make_gauss method that returns a lambda function that can then go into quad. This way I can make a normal function with the average and variance I need before integrating:
def make_gauss(N, sigma, mu):
    return (lambda x: N/(sigma * (2*numpy.pi)**.5) *
            numpy.e ** (-(x-mu)**2/(2 * sigma**2)))

quad(make_gauss(N=10, sigma=2, mu=0), -inf, inf)
When I tried passing a general Gaussian function (one that needs to be called with x, N, mu, and sigma) and filling in some of the values using quad like

quad(gen_gauss, -inf, inf, (10,2,0))

the parameters 10, 2, and 0 did NOT necessarily match N=10, sigma=2, mu=0, which prompted the more extended definition.
The erf(z) in scipy.special would require me to define exactly what t is initially, but it's nice to know it is there.
Okay, you appear to be pretty confused about several things. Let's start at the beginning: you mentioned a "multidimensional function", but then go on to discuss the usual one-variable Gaussian curve. This is not a multidimensional function: when you integrate it, you only integrate one variable (x). The distinction is important to make, because there is a monster called a "multivariate Gaussian distribution" which is a true multidimensional function and, if integrated, requires integrating over two or more variables (which uses the expensive Monte Carlo technique I mentioned before). But you seem to just be talking about the regular one-variable Gaussian, which is much easier to work with, integrate, and all that.
The one-variable Gaussian distribution has two parameters, sigma and mu, and is a function of a single variable we'll denote x. You also appear to be carrying around a normalization parameter n (which is useful in a couple of applications). Normalization parameters are usually not included in calculations, since you can just tack them back on at the end (remember, integration is a linear operator: int(n*f(x), x) = n*int(f(x), x) ). But we can carry it around if you like; the notation I like for a normal distribution is then
N(x | mu, sigma, n) := (n/(sigma*sqrt(2*pi))) * exp((-(x-mu)^2)/(2*sigma^2))
(read that as "the normal distribution of x given sigma, mu, and n is given by...") So far, so good; this matches the function you've got. Notice that the only true variable here is x: the other three parameters are fixed for any particular Gaussian.
Now for a mathematical fact: it is provably true that all Gaussian curves have the same shape, they're just shifted around a little bit. So we can work with N(x|0,1,1), called the "standard normal distribution", and just translate our results back to the general Gaussian curve. So if you have the integral of N(x|0,1,1), you can trivially calculate the integral of any Gaussian. This integral appears so frequently that it has a special name: the error function erf. Because of some old conventions, it's not exactly erf; there are a couple additive and multiplicative factors also being carried around.
If Phi(z) = integral(N(x|0,1,1), -inf, z); that is, Phi(z) is the integral of the standard normal distribution from minus infinity up to z, then it's true by the definition of the error function that
Phi(z) = 0.5 + 0.5 * erf(z / sqrt(2)).
Likewise, if Phi(z | mu, sigma, n) = integral(N(x | mu, sigma, n), -inf, z); that is, Phi(z | mu, sigma, n) is the integral of the normal distribution given parameters mu, sigma, and n from minus infinity up to z, then it's true by the definition of the error function that
Phi(z | mu, sigma, n) = (n/2) * (1 + erf((z - mu) / (sigma * sqrt(2)))).
Take a look at the Wikipedia article on the normal CDF if you want more detail or a proof of this fact.
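As a quick sanity check of that formula (a sketch reusing the make_gauss shape from the question; the particular numbers are arbitrary):

import math
from scipy.integrate import quad
from scipy.special import erf

def make_gauss(N, sigma, mu):
    return lambda x: (N / (sigma * math.sqrt(2*math.pi))
                      * math.exp(-(x - mu)**2 / (2*sigma**2)))

def Phi(z, mu, sigma, n):
    return (n/2) * (1 + erf((z - mu) / (sigma * math.sqrt(2))))

n, sigma, mu, z = 10, 2, 0, 1.5
numeric, _ = quad(make_gauss(n, sigma, mu), -math.inf, z)
print(numeric, Phi(z, mu, sigma, n))  # the two values should agree closely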
Okay, that should be enough background explanation. Back to your (edited) post. You say "The erf(z) in scipy.special would require me to define exactly what t is initially". I have no idea what you mean by this; where does t (time?) enter into this at all? Hopefully the explanation above has demystified the error function a bit and it's clearer now as to why the error function is the right function for the job.
Your Python code is OK, but I would prefer a closure over a lambda:

import math

def make_gauss(N, sigma, mu):
    k = N / (sigma * math.sqrt(2*math.pi))
    s = -1.0 / (2 * sigma * sigma)
    def f(x):
        return k * math.exp(s * (x - mu)*(x - mu))
    return f
Using a closure enables precomputation of the constants k and s, so the returned function will need to do less work each time it's called (which can be important if you're integrating it, which means it'll be called many times). Also, I have avoided any use of the exponentiation operator **, which is slower than just writing the squaring out, and hoisted the divide out of the inner loop, replacing it with a multiply. I haven't looked at their implementation in Python at all, but from my last time tuning an inner loop for pure speed using raw x87 assembly, I seem to remember that adds, subtracts, and multiplies take about 4 CPU cycles each, divides about 36, and exponentiation about 200. That was a couple of years ago, so take those numbers with a grain of salt; still, it illustrates their relative complexity. As well, calculating exp(x) the brute-force way is a very bad idea; there are tricks you can use when writing a good implementation of exp(x) that make it significantly faster and more accurate than a general a**b style exponentiation.
I've never used the numpy version of the constants pi and e; I've always stuck with the plain old math module's versions. I don't know why you might prefer either one.
I'm not sure what you're going for with the quad() call. quad(gen_gauss, -inf, inf, (10,2,0)) ought to integrate a renormalized Gaussian from minus infinity to plus infinity, and should always spit out 10 (your normalization factor), since the Gaussian integrates to 1 over the real line. Any answer far from 10 (I wouldn't expect exactly 10 since quad() is only an approximation, after all) means something is screwed up somewhere... hard to say what's screwed up without knowing the actual return value and possibly the inner workings of quad().
Hopefully that has demystified some of the confusion, and explained why the error function is the right answer to your problem, as well as how to do it all yourself if you're curious. If any of my explanation wasn't clear, I suggest taking a quick look at Wikipedia first; if you still have questions, don't hesitate to ask.
scipy ships with the "error function", aka Gaussian integral:
import scipy.special
help(scipy.special.erf)
The Gaussian distribution is also called a normal distribution. The cdf function in scipy's norm module does what you want.

from scipy.stats import norm
print(norm.cdf(0.0))
# 0.5

http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm
Why not just always do your integration from -infinity to +infinity, so that you always know the answer? (joking!)
My guess is that the only reason that there's not already a canned Gaussian function in SciPy is that it's a trivial function to write. Your suggestion about writing your own function and passing it to quad to integrate sounds excellent. It uses the accepted SciPy tool for doing this, it's minimal code effort for you, and it's very readable for other people even if they've never seen SciPy.
What exactly do you mean by a fixed-width integrator? Do you mean using a different algorithm than whatever QUADPACK is using?
Edit: For completeness, here's something like what I'd try for a Gaussian with a mean of 0 and a standard deviation of 1, integrated from 0 to +infinity:

from math import pi, exp, inf
from scipy.integrate import quad

mean = 0
sd = 1
quad(lambda x: 1 / (sd * (2 * pi) ** 0.5) * exp(x ** 2 / (-2 * sd ** 2)), 0, inf)

That's a little ugly because the Gaussian function is a little long, but it's still pretty trivial to write.
I assume you're handling multivariate Gaussians; if so, SciPy already has the function you're looking for: it's called MVNDIST ("MultiVariate Normal DISTribution"). The SciPy documentation is, as ever, terrible, so I can't even find where the function is buried, but it's in there somewhere. The documentation is easily the worst part of SciPy, and has frustrated me to no end in the past.
Single-variable Gaussians just use the good old error function, of which many implementations are available.
As for attacking the problem in general, yes, as James Thompson mentions, you just want to write your own gaussian distribution function and feed it to quad(). If you can avoid the generalized integration, though, it's a good idea to do so -- specialized integration techniques for a particular function (like MVNDIST uses) are going to be much faster than a standard Monte Carlo multidimensional integration, which can be extremely slow for high accuracy.
