I am currently solving a system of differential equations in Python using odeint to simulate charged particles in a field (the field source comes from this package):
import numpy as np
from scipy.integrate import odeint

time = np.linspace(0, 5, 1000)

def sm(x, t):
    # eta and Ez0 come from the field package mentioned above
    return np.array([x[1], eta * Ez0(x[0])])

traj = odeint(sm, [0, 1.], time)
It works fine, but I would like to stop the calculation as soon as x[0] < 0. For the moment I just freeze the evolution of the system:
def sm1(x, t):
    if x[0] < 0:
        return np.array([0, 0])
    else:
        return np.array([x[1], eta * Ez0(x[0])])

traj = odeint(sm1, [0, 1.], time)
but I guess there are better solutions. I've found this, but it seems to me that it fixes the number of steps, which is regrettable.
Any suggestion appreciated.
If you write a custom extension of the odeint function, you could have your right-hand-side function raise a particular exception when it's finished. Doing it in Python might make it substantially slower, but I think you could write the same thing in C or Cython. Note that I haven't tested the following.
class ThatsEnoughOfThat(Exception):
    pass

def custom_odeint(func, y0, t):  # + whatever parameters you need
    for timestep in t:
        try:
            # Do stuff. Call odeint/other scipy functions?
            pass
        except ThatsEnoughOfThat:
            break
    return completedstuff

def sm2(x, t):
    if x[0] < 0:
        raise ThatsEnoughOfThat
    return np.array([x[1], eta * Ez0(x[0])])
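For what it's worth (this is not part of the answer above), newer SciPy versions provide solve_ivp, whose terminal events stop the integration exactly when a condition crosses zero; a minimal sketch, assuming eta and Ez0 are defined as in the question:

from scipy.integrate import solve_ivp
import numpy as np

def rhs(t, x):  # note that solve_ivp uses (t, x) argument order, not (x, t)
    return [x[1], eta * Ez0(x[0])]

def hit_zero(t, x):  # event function: fires when x[0] crosses zero
    return x[0]
hit_zero.terminal = True   # stop the integration at the event
hit_zero.direction = -1    # only trigger when x[0] is decreasing

sol = solve_ivp(rhs, (0, 5), [0, 1.], events=hit_zero,
                t_eval=np.linspace(0, 5, 1000))
traj = sol.y.T  # same layout as the odeint result, truncated at the event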
I'm trying to write a function that generates the restrictions of a function g at a given point p.
For example, let's say g(x, y, z) = 2x + 3y + z and p = (5, 10, 15). I'm trying to create a function that would return [lambda x : g(x, 10, 15), lambda y: g(5, y, 15), lambda z: g(5, 10, z)]. In other words, I want to take my multivariate function and return a list of univariate functions.
I wrote some Python to describe what I want, but I'm having trouble figuring out how to pass the right inputs from p into the lambda properly.
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restriction = lambda x : g(p[0], p[1], ..., p[i-1], p[x], p[i+1], .... p[-1])
        restrictions.append(restriction)
    return restrictions
Purpose: I wrote a short function to estimate the derivative of a univariate function, and I'm trying to extend it to compute the gradient of a multivariate function by computing the derivative of each restriction function in the list returned by restriction_generator.
Apologies if this question has been asked before. I couldn't find anything after some searching, but I'm having trouble articulating my problem without all of this extra context. Another title for this question would probably be more appropriate.
Since @bandicoot12 requested some more solutions, I will try to fix up your proposed code. I'm not familiar with the ... notation, but I think this slight change should work:
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        # bind i at definition time (i=i); otherwise every lambda would see the final value of i
        restriction = lambda x, i=i: g(*p[:i], x, *p[i+1:])
        restrictions.append(restriction)
    return restrictions
Although I am not familiar with the ... notation, if I had to guess, your original code doesn't work because it probably always passes p[0]. Maybe it can be fixed by changing p[0], p[1], ..., p[i-1] to p[0], ..., p[i-1].
try something like this:
def foo(fun, p, i):
    def bar(x):
        q = list(p)   # work on a copy so the original point p is not modified
        q[i] = x
        return fun(*q)
    return bar
and
def restriction_generator(g, p):
    restrictions = []
    for i in range(len(p)):
        restrictions.append(foo(g, p, i))
    return restrictions
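Either version can then be checked against the example from the question and plugged into the derivative estimator mentioned there; a small illustrative sketch (the central-difference helper is only a stand-in for your own estimator):

def g(x, y, z):
    return 2 * x + 3 * y + z

p = (5, 10, 15)
restrictions = restriction_generator(g, p)
print(restrictions[1](10))   # g(5, 10, 15) = 55

def derivative(h, x0, eps=1e-6):
    # simple central difference
    return (h(x0 + eps) - h(x0 - eps)) / (2 * eps)

gradient = [derivative(r, x0) for r, x0 in zip(restrictions, p)]
print(gradient)              # approximately [2.0, 3.0, 1.0]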
I wrote a script in Python that finds the zero of a fairly complicated function using fsolve. The way it works is as follows. There is a class that simply stores the parameter of the function. The class has an evaluate method that returns a value based on the stored parameter and another method (inversion) that finds the parameter at which the function takes the supplied output.
The inversion method updates the parameter of the function at each iteration and it keeps on doing so until the mismatch between the value returned by the evaluate method and the supplied value is zero.
The issue that I am having is that while the value returned by the inversion method is correct, the parameter stored on the object is always 0 after the inversion method terminates. Oddly enough, this issue disappears if I use root instead of fsolve. As far as I know, fsolve is just a wrapper for root with some settings on the solver algorithm and some other things enforced.
Is this a known problem with fsolve or am I doing something dumb here? The script below demonstrates the issue I am having on the sine function.
from scipy.optimize import fsolve, root
from math import sin, pi

class invertSin(object):
    def __init__(self, x):
        self.x = x

    def evaluate(self):
        return sin(self.x)

    def arcsin_fsolve(self, y):
        def errorfunc(xnew):
            self.x = xnew
            return self.evaluate() - y
        soln = fsolve(errorfunc, 0.1)
        return soln

    def arcsin_root(self, y):
        def errorfunc(xnew):
            self.x = xnew
            return self.evaluate() - y
        soln = root(errorfunc, 0.1, method='anderson')
        return soln

myobject = invertSin(pi/2)
x0 = myobject.arcsin_fsolve(0.5)  # find x s.t. sin(x) = 0.5 using fsolve
print(x0)  # this prints pi/6
x0obj = myobject.x
print(x0obj)  # this always prints 0 no matter which function I invert

myobject2 = invertSin(pi/2)
x1 = myobject2.arcsin_root(0.5)  # find x s.t. sin(x) = 0.5 using root
print(x1)  # this prints pi/6
x1obj = myobject2.x
print(x1obj)  # this prints pi/6
If you add a print statement for xnew inside errorfunc, you will see that fsolve calls it with an array of one element rather than a scalar, so self.x ends up holding that array instead of a plain float. After the solver exits, that stored reference no longer reflects the final value, which is why you read back the wrong number.
Setting self.x = xnew[0] inside errorfunc restores the desired behavior.
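A minimal sketch of the corrected callback (only the assignment changes):

def arcsin_fsolve(self, y):
    def errorfunc(xnew):
        self.x = xnew[0]   # store the scalar element, not the solver's array
        return self.evaluate() - y
    soln = fsolve(errorfunc, 0.1)
    return soln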
I am trying to optimise the output of a function using the scipy basinhopping algorithm.
def acceptance_criteria(self, **kwargs):
    print "kwargs "
    print kwargs
    x = kwargs["x_new"]
    beta = x[0]
    alpha = [x[1], x[2], x[3], x[4], x[5], x[6]]
    print x
    inputnow = raw_input()
    beta_gamma_pass = beta != self.gamma
    beta_zero_pass = beta >= 0.0
    alpha1_pass = alpha[0] > 0.0
    alpha2_pass = alpha[1] > 0.0
    alpha3_pass = alpha[2] > 0.0
    alpha4_pass = alpha[3] > 0.0
    alpha5_pass = alpha[4] > 0.0
    alpha6_pass = alpha[5] > 0.0
    return beta_gamma_pass, beta_zero_pass, alpha1_pass, alpha2_pass, alpha3_pass, alpha4_pass, alpha5_pass, alpha6_pass
def variational_calculation(self):
    minimizer_kwargs = {"method": "BFGS"}
    initial_paramater_guesses = [2, 1.0, 1.0/2.0, 1.0/3.0, 1.0/4.0, 1.0/5.0, 1.0/6.0]
    ret = basinhopping(self.Calculate, initial_paramater_guesses, minimizer_kwargs=minimizer_kwargs, niter=200, accept_test=self.acceptance_criteria)
I am getting problems with NaNs and infs in my Calculate function.
This is due to invalid parameter values being used.
I have attempted to prevent this by using acceptance criteria, but the basinhopping routine does not call the accept_test function, so the criteria remain unimplemented.
Can anyone help me out as to why basinhopping isn't calling the accept_test function?
Thanks
EDIT:
In response to @sascha's comment:
There are fractional powers of parameters, and 1/parameter terms in the function.
Not limiting the range of the allowed parameter values gives complex and inf values in this case.
It is actually an eigenvalue problem, where I am trying to minimise the trace of the eigenvalues of a set of 18*18 matrices.
The matrix elements depend on the 7 parameters in a complex way with dozens of non linear terms.
I have never worked on anything more complex than polynomial regression before, so I am not familiar with the algorithms or their applicability at all.
However, the functions that I am trying to minimise are smooth as long as you avoid parameter values near the poles caused by the 1/parameter and 1/(parameter^n - constant) terms.
EDIT2:
QUESTION CLARIFICATION
The question here has nothing to do with the applicability of the basinhopping algorithm.
It is why this specific use of it, under Python 2.7 and the corresponding SciPy, does not call the accept_test function.
I can't say why your example doesn't work, but here's a similar but minimal example where it does call accept_test; maybe you can spot the difference:
import scipy
import numpy as np
from scipy.optimize import basinhopping

class MyClass:
    def Calculate(self, x):
        return np.dot(x, x)

    def acceptance_criteria(self, **kwargs):
        print("in accept test")
        return True

    def run(self):
        minimizer_kwargs = {"method": "BFGS"}
        initial_paramater_guesses = [2, 1.0, 1.0/2.0, 1.0/3.0, 1.0/4.0, 1.0/5.0, 1.0/6.0]
        ret = basinhopping(self.Calculate,
                           initial_paramater_guesses,
                           minimizer_kwargs=minimizer_kwargs,
                           niter=200,
                           accept_test=self.acceptance_criteria)

my_class = MyClass()
my_class.run()
I know this post is old, but it still shows up on Googling this question.
I was having the same issue, so I ran a test by modifying the code here a bit and adding a counter. My code minimizes 5 variables but requires all values to be greater than 0.5:
import numpy as np
from scipy.optimize import basinhopping

n = 0

def acceptance_criteria(**kwargs):
    print("in accept test")
    X = kwargs['x_new']
    for x in X:
        if x < .5:
            print('False!')
            return False
    return True

def f(x):
    global n
    print(n)
    n += 1
    return (x[0]**2 - np.sin(x[1])*4 + np.cos(x[2]**2) + np.sin(x[3]*5.0) - (x[4]**2 - 3*x[4]**3))**2

if __name__ == '__main__':
    res = basinhopping(f, [.5]*5, accept_test=acceptance_criteria)
It took about 100 iterations before entering the acceptance_criteria function.
If you are optimizing a function that takes a long time to run (as I was), then you might just need to give it more time to enter into the acceptance_test.
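One further observation (not from either answer): the SciPy documentation says accept_test should return a single True/False (or the string "force accept"), while the acceptance_criteria in the question returns a tuple of booleans, which is always truthy. A minimal sketch of the question's checks folded into one boolean, in the same style as the examples above:

def acceptance_criteria(self, **kwargs):
    x = kwargs["x_new"]
    beta, alpha = x[0], x[1:7]
    # accept the step only if every individual condition holds
    return bool(beta != self.gamma and beta >= 0.0 and all(a > 0.0 for a in alpha))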
Suppose I have a complex mathematical function with many input parameters P = [p1, ..., pn]. Suppose that I can factor the function into blocks, for example:
f(P) = f1(p1, p2) * f2(p2, ..., pn)
and maybe
f2(p2, ..., pn) = p2 * f3(p4) + f4(p5, ..., pn)
Suppose I have to evaluate f for many values of P, for example because I want to find the minimum of f. Suppose I have already computed f(P) and I need to compute f(P'), where P' is equal to P except for p1. In this case I don't have to recompute f2, f3, f4, but only f1.
Is there a library which helps me implement this kind of caching system? I know RooFit, but it is oriented toward statistical models built from blocks. I am looking for something more general. scipy / scikits and similar are preferred, but C++ libraries are also fine. Does this technique have a name?
If you can write these functions as pure functions (which means that they always return the same value for the same arguments and have no side effects), you can use memoization, which is a technique for caching the results of function calls.
try:
    from functools import lru_cache  # Python 3.2+
except ImportError:
    # Python 2 and Python 3.0-3.1: requires the third-party functools32 package
    from functools32 import lru_cache

@lru_cache(maxsize=None)
def f(arg):
    # expensive operations
    return result

x = f('arg that will take 10 seconds')  # takes 10 seconds
y = f('arg that will take 10 seconds')  # virtually instant
For illustration, or if you don't want to use functools32 on Python < 3.2:
def memoize(func):
    memo = {}
    def wrapper(*args):
        if args not in memo:
            memo[args] = func(*args)
        return memo[args]
    return wrapper

@memoize
def f(arg):
    # expensive operations
    return result

x = f('arg that will take 10 seconds')  # takes 10 seconds
y = f('arg that will take 10 seconds')  # virtually instant
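Applied to the block structure from the question (a minimal sketch; f1, f2 and the parameter split are just placeholders for the real blocks), only the blocks whose arguments actually changed get recomputed:

@memoize
def f1(p1, p2):
    # expensive block depending only on p1 and p2
    return p1 * p2

@memoize
def f2(*rest):
    # expensive block depending on p2, ..., pn
    return sum(rest)

def f(P):
    return f1(P[0], P[1]) * f2(*P[1:])

f((1.0, 2.0, 3.0, 4.0))  # computes f1 and f2
f((5.0, 2.0, 3.0, 4.0))  # only p1 changed, so only f1 is recomputed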
I am working on a project analyzing data and am trying to use a least squares method (built-in) to do so. I found a tutorial that provided code as an example and it works fine:
from numpy import arange, sin, pi, random
from scipy.optimize import leastsq

x = arange(0, 6e-2, 6e-2/30)
A, k, theta = 10, 1.0/3e-2, pi/6
y_true = A*sin(2*pi*k*x+theta)
y_meas = y_true + 2*random.randn(len(x))

def residuals(p, y, x):
    A, k, theta = p
    print("A type " + str(type(A)))
    print("k type " + str(type(k)))
    print("theta type " + str(type(theta)))
    print("x type " + str(type(x)))
    err = y - A*sin(2*pi*k*x+theta)
    return err

def peval(x, p):
    return p[0]*sin(2*pi*p[1]*x+p[2])

p0 = [8, 1/2.3e-2, pi/3]
plsq = leastsq(residuals, p0, args=(y_meas, x))
print(plsq[0])
However, when I try transferring this to my own code, it keeps throwing errors. I have been working on this for a while and have managed to eliminate, I think, all of the type mismatch issues that plagued me early on. As far as I can tell, the two pieces of code are now nearly identical, but I am getting the error
'unsupported operand type(s)' and can't figure out what to do next. Here is the section of my code that pertains to this question:
if (ls is not None):
    from scipy.optimize import leastsq
    p0 = [8, 1/2.3e-2, pi/3]

    def residuals(p, y, x):
        A, k, theta = p
        if (type(x) is list):
            x = asarray(x)
        err = y - A*sin(2*pi*k*x+theta)  # Point of error
        return err

    def peval(x, p):
        return p[0]*sin(2*pi*p[1]*x+p[2])

    plsq = leastsq(residuals, p0, args=(listRelativeCount, listTime))
    plsq_0 = peval(listTime, plsq[0])
Here listTime holds the x-values for the data in listRelativeCount. I have marked the line where the code is currently failing. Any help would be appreciated, as I have been stuck on this problem for over a month.
Three things are happening in the line you marked #Point of error: you are multiplying values, adding values, and applying the sin() function. "Unsupported operand type" means something is wrong in one of these operations, so you need to verify the types of the operands and make sure you know which function is being applied.
Are you sure you know the types (and dtypes, for ndarrays) of all the operands, including pi, x, theta and A?
Are you sure which sin function you are using? math.sin is not the same as np.sin, and they accept different operands.
Multiplying a list by a scalar (if your listTime variable really is a list) does something completely different from multiplying a scalar and an ndarray.
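For example (an illustrative snippet, not the OP's data):

import numpy as np

xs = [0.1, 0.2, 0.3]   # a plain Python list, like listTime might be
k = 1.0/2.3e-2

# k * xs raises a TypeError (a sequence cannot be multiplied by a float),
# and math.sin(xs) raises a TypeError because it only accepts a single number.

arr = np.asarray(xs)                # converting first makes everything element-wise
print(k * arr)                      # element-wise multiplication
print(np.sin(2 * np.pi * k * arr))  # np.sin works on whole arrays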
If it's unclear which operation is causing the error, try breaking up the expression:
err1 = 2*pi*k
err2 = err1*x
err3 = err2 + theta
err4 = sin(err3)
err5 = A*err4
err = y - err5
This ought to clarify what operation throws the exception.
This is an example of why it's often a better idea to use explicit package names, like np.sin() rather than sin().