Memoizing Recursive Class Instances that use Scipy Optimize - python

I am using Python 2.7, and have a program that solves a recursive optimization problem, that is, a dynamic programming problem. A simplified version of the code is:
from math import log
from scipy.optimize import minimize_scalar

class vT(object):
    def __init__(self,c):
        self.c = c
    def x(self,w):
        return w
    def __call__(self,w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self,c,vN):
        self.c = c
        self.vN = vN
    def objFunc(self,x,w):
        return -self.c*log(x) - self.vN(w - x)
    def x(self,w):
        x_star = minimize_scalar(self.objFunc,args=(w,),method='bounded',
                                 bounds=(1e-10,w-1e-10)).x
        return x_star
    def __call__(self,w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0,p3)
p1 = vt(2.0,p2)
w1 = 3.0
x1 = p1.x(w1)
w2 = w1 - x1
x2 = p2.x(w2)
w3 = w2 - x2
x3 = w3
x = [x1,x2,x3]
print('Optimal x when w1 = 3 is ' + str(x))
If enough periods are added, the program can begin to take a long time to run. When x1 = p1.x(w1) is run, p2 and p3 are evaluated multiple times by minimize_scalar. Also, when x2 = p2.x(w2) is run, we know the ultimate solution will involve evaluating p2 and p3 in ways that were already done in the first step.
I have two questions:
What's the best way to use a memoize wrapper on the vT and vt classes to speed up this program?
When minimize_scalar is run, will it benefit from this memoization?
In my actual application, the solutions currently take hours to compute, so speeding this up would be of great value.
UPDATE: A response below points out that the example above could be written without the use of classes, in which case the normal decorator approach could be used on plain functions. In my actual application, I do have to use classes, not functions. Moreover, my first question is whether the call of the function or method (when it's a class) inside of minimize_scalar will benefit from the memoization.

I found out the answer. Below is an example of how to memoize the program. There may be an even more efficient approach, but this one memoizes the methods of the class. Furthermore, when minimize_scalar is run, the memoize wrapper records the result each time it evaluates a function:
from math import log
from scipy.optimize import minimize_scalar
from functools import wraps

def memoize(obj):
    cache = obj.cache = {}
    @wraps(obj)
    def memoizer(*args, **kwargs):
        key = str(args) + str(kwargs)
        if key not in cache:
            cache[key] = obj(*args, **kwargs)
        return cache[key]
    return memoizer

class vT(object):
    def __init__(self,c):
        self.c = c
    @memoize
    def x(self,w):
        return w
    @memoize
    def __call__(self,w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self,c,vN):
        self.c = c
        self.vN = vN
    @memoize
    def objFunc(self,x,w):
        return -self.c*log(x) - self.vN(w - x)
    @memoize
    def x(self,w):
        x_star = minimize_scalar(self.objFunc,args=(w,),method='bounded',
                                 bounds=(1e-10,w-1e-10)).x
        return x_star
    @memoize
    def __call__(self,w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0,p3)
p1 = vt(2.0,p2)
x1 = p1.x(3.0)
len(p3.x.cache)  # how many times was p3.x evaluated?
Out[3]: 60
x2 = p2.x(3.0 - x1)
len(p3.x.cache)  # still 60: the second solve reused cached evaluations
Out[5]: 60
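As a side note (my suggestion, not from the original answer): the question targets Python 2.7, but on Python 3 the same effect can usually be had with the standard functools.lru_cache, which also works on methods as long as the arguments (and the instance, via its default hash) are hashable. A minimal sketch:

from functools import lru_cache
from math import log

class vT(object):
    def __init__(self, c):
        self.c = c
    @lru_cache(maxsize=None)  # caches on (self, w); self hashes by identity
    def x(self, w):
        return w
    @lru_cache(maxsize=None)
    def __call__(self, w):
        return self.c * log(self.x(w))

One caveat of this approach: because self is part of the cache key, the cache keeps instances alive for the lifetime of the class.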

Related

How can I formulate my optimization problem with Pymoo?

I want to formulate the objective function (a minimization problem): sum[sum[Ri*{Pi² + (Qi - Qcj*Xij)²} for j in range(Nc)] for i in range(N)], where P and Q are constants, Qc is a list of proposed solutions, and X is our decision variable (a binary variable). R=[0.2,0.4,0.5], P=[2,4,5], Q=[1,3,4], Qc=[0,1,3,4,5], N=3=len(P), Nc=5.
I'm trying to get the vector X which minimizes the objective function.
You can find my attempt here:
import numpy as np
from pymoo.core.problem import ElementwiseProblem

class Problem(ElementwiseProblem):
    def __init__(self,L,n_max,Q,P,T,R):
        super().__init__(n_var=len(L), n_obj=1, n_ieq_constr=1)
        self.L = L
        self.n_max = n_max
        self.Q = Q
        self.P = P
        self.T = T
        self.R = R
    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = ((np.sum(self.P))**2 + (np.sum(self.Q - self.L[x]))**2)*np.sum(self.R)
        out["G"] = np.sum(self.Q - self.L[x])

# create the actual problem to be solved
np.random.seed(1)
P = [2,3,4,5,6]
Q = [6,11,13,14,15]
R = [0.2,0.3,0.4,0.5,0.6]
L = np.array([12,13,14,15,16,17,18,19,2,3,4,5,6,7,8,9,10,11])
n_max = 5
problem = Problem(L, n_max, Q, P, T, R)  # note: T is used but never defined in this snippet
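For reference, the double-sum objective stated in the question can be written directly with NumPy broadcasting. The sketch below is illustrative only (the helper name objective and the trial matrix X are mine, not from the question), assuming X is a binary N×Nc matrix:

import numpy as np

R = np.array([0.2, 0.4, 0.5])   # constants from the question
P = np.array([2, 4, 5])
Q = np.array([1, 3, 4])
Qc = np.array([0, 1, 3, 4, 5])
N, Nc = len(P), len(Qc)

def objective(X):
    # sum_i sum_j R_i * (P_i**2 + (Q_i - Qc_j * X_ij)**2)
    inner = P[:, None]**2 + (Q[:, None] - Qc[None, :] * X)**2  # shape (N, Nc)
    return np.sum(R[:, None] * inner)

X = np.zeros((N, Nc), dtype=int)  # a trial binary decision matrix
print(objective(X))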

List of functions->single function (in python)

Say that I have a list of functions in Python: [f1, f2, f3].
How do I return a single function F := f3(f2(f1()))? (Notice that F is itself of function type.) I know it's possible with functools.reduce(), but I was wondering if there's another way to do it without using libraries.
edit:
Also, notice that the length of the list can be greater than 3
I tried:
def func(list):
    i = 1
    new_function = filters[0]
    while i <= len(filters)-1:
        new_function = filters[i](new_function)
        i += 1
    return new_function
but it doesn't work
The problem in your code is that filters[i](new_function) passes the previous function itself as an argument, rather than composing the calls.
I would suggest this recursive solution:
def chain(first, *rest):
    return lambda x: first(chain(*rest)(x) if rest else x)
Example use:
def inc(x):
    return x + 1

def double(x):
    return x * 2

def square(x):
    return x * x

f = chain(square, double, inc)
print(f(5))  # ((5+1)*2) ** 2 == 144
I see that in the code you tried, you never actually call the first of your functions. (I also assume your code actually starts with def func(filters):.)
Taking into account that f1() takes no parameters, while each of the rest takes the return value of the previous function, this should work:
def fit(funcs):
    v = funcs[0]()
    for f in funcs[1:]:
        v = f(v)
    return v

def f1():
    return 42

def f2(x):
    return x

def f3(x):
    return x

fs = [f1, f2, f3]
a = lambda: fit(fs)
print(a())
Output: 42
def get_single_func(func_list):
    def single_func(*args, **kwargs):
        ret = func_list[0](*args, **kwargs)
        for func in func_list[1:]:
            ret = func(ret)
        return ret
    return single_func
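Since the question mentions reduce, here for comparison is a sketch using functools.reduce (standard library, not a third-party dependency); the helper name compose is mine:

from functools import reduce

def compose(funcs):
    # folds [f1, f2, f3] into a single callable: x -> f3(f2(f1(x)))
    return reduce(lambda acc, f: (lambda *a, **kw: f(acc(*a, **kw))), funcs)

fs = [lambda: 42, lambda x: x + 1, lambda x: x * 2]
print(compose(fs)())  # (42 + 1) * 2 == 86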

Errors in a class when trying to call it

I have this code consisting of a class and a subclass. The base class implements the forward Euler method, while the subclass is Euler's midpoint method. These are for solving an ODE (x' = x(1/2 - x)). It doesn't seem to work: when I call the solver with
Euler = H.solve(6)
where 6 is the number of steps, I get an AttributeError:
AttributeError: 'int' object has no attribute 'size'
Could anyone help me make my code more robust and working, so I can plot the values later on? I really don't see what's wrong. My code below:
import numpy as np

class H:
    def __init__(self, f):
        self._f = f
    def initial(self, u0):
        self._u0 = u0
    def solve(self, time_points):
        n = time_points.size
        self._t = time_points
        self._u = np.zeros(n)
        self._u[0] = self._u0
        for k in range(n-1):
            self._k = k
            self._u[k+1] = self.advance()
        return self._u, self._t

class F(H):
    def ad(self):
        u = self._u; t = self._t; f = self._f; k = self._k
        dt = t[k+1] - t[k]
        u_k12 = u[k] + dt/2 * f(u[k], t[k])
        return u[k] + dt * f(u_k12, (t[k] + dt/2))
I think what's wrong is the way you use the class. The initial value is set with the initial method (u0); then you give the solve method an array of time points, not an integer. You can use np.linspace to generate the time points:
np.linspace(0, 3, 31)  # 31 points evenly spaced between 0 and 3
So it's like this:
def func(x, y):
    return x * y

time_points = np.linspace(0, 3, 31)
F_ = F(func)
F_.initial(6)
F_.solve(time_points)
Code:
class H:
    def __init__(self, f):
        self._f = f
    def initial(self, u0):
        self._u0 = u0
    def solve(self, time_points):
        n = time_points.size
        self._t = time_points
        self._u = np.zeros(n)
        self._u[0] = self._u0
        for k in range(n-1):
            self._u[k+1] = self.advance(k)
        return self._u, self._t
    def advance(self, k):
        ....

class F(H):
    def advance(self, k):
        dt = self._t[k+1] - self._t[k]  # step size
        u_k12 = self._u[k] + dt/2 * self._f(self._u[k], self._t[k])
        return self._u[k] + dt * self._f(u_k12, (self._t[k] + dt/2))
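For completeness, here is a minimal sketch of what the elided base-class advance (the ....) could look like for forward Euler, based on the update rule u_{k+1} = u_k + dt*f(u_k, t_k). The body below is my assumption, not part of the original answer:

class H:
    # ... __init__, initial, solve as above ...
    def advance(self, k):
        # forward Euler step: u_{k+1} = u_k + dt * f(u_k, t_k)
        dt = self._t[k+1] - self._t[k]
        return self._u[k] + dt * self._f(self._u[k], self._t[k])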

Is it possible to pass a class method reference to a njit function?

I am trying to improve the computation time of some of my code, so I am using the njit decorator from the numba module. In this example:
import numpy as np
from numba import jitclass, jit, njit
from numba import int32, float64
import matplotlib.pyplot as plt
import time

spec = [('V_init', float64),
        ('a', float64),
        ('b', float64),
        ('g', float64),
        ('dt', float64),
        ('NbODEs', int32),
        ('dydx', float64[:]),
        ('time', float64[:]),
        ('V', float64[:]),
        ('W', float64[:]),
        ('y', float64[:])]

@jitclass(spec, )
class FHNfunc:
    def __init__(self,):
        self.V_init = .04
        self.a = 0.25
        self.b = 0.001
        self.g = 0.003
        self.dt = .01
        self.NbODEs = 2
        self.dydx = np.zeros(self.NbODEs)
        self.y = np.zeros(self.NbODEs)
    def Eul(self,):
        self.deriv()
        self.y += (self.dydx * self.dt)
    def deriv(self,):
        self.dydx[0] = self.V_init - self.y[0]*(self.a-(self.y[0]))*(1-(self.y[0])) - self.y[1]
        self.dydx[1] = self.b * self.y[0] - self.g * self.y[1]
        return

@njit(fastmath=True)
def solve1(FH1,FHEuler,tp):
    V = np.zeros(len(tp), )
    W = np.zeros(len(tp), )
    for idx, t in enumerate(tp):
        FHEuler()
        V[idx] = FH1.y[0]
        W[idx] = FH1.y[1]
    return V,W

if __name__ == "__main__":
    FH1 = FHNfunc()
    FHEuler = FH1.Eul
    dt = .01
    tp = np.linspace(0, 1000, num = int((1000)/dt))
    t0 = time.time()
    [V1,W1] = solve1(FH1,FHEuler,tp)
    print(time.time() - t0)
    plt.figure()
    plt.plot(tp,V1)
    plt.plot(tp,W1)
    plt.show()
I would like to pass a reference to the class method, FHEuler = FH1.Eul, but it crashes with this error:
This error may have been caused by the following argument(s):
- argument 1: cannot determine Numba type of <class 'method'>
So is it possible to pass a reference to a class method to an njit function? Or does a workaround exist?
Numba cannot handle the method as an argument. An alternative is to compile the function first and then use an inner function to handle the other arguments, calling the compiled input function inside it. Try this please:
def solve1(FH1,FHEuler,tp):
    FHEuler_f = njit(FHEuler)
    @njit(fastmath=True)
    def inner(FH1_x, tp_x):
        V = np.zeros(len(tp_x), )
        W = np.zeros(len(tp_x), )
        for idx, t in enumerate(tp_x):
            FHEuler_f()
            V[idx] = FH1_x.y[0]
            W[idx] = FH1_x.y[1]
        return V,W
    return inner(FH1, tp)
Passing the function might not be necessary. This one seems to work:
@njit(fastmath=True)
def solve1(FH1,tp):
    FHEuler = FH1.Eul
    V = np.zeros(len(tp), )
    W = np.zeros(len(tp), )
    for idx, t in enumerate(tp):
        FHEuler()
        V[idx] = FH1.y[0]
        W[idx] = FH1.y[1]
    return V,W
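As an aside (my addition, not from the answers above): recent Numba versions can pass an @njit-compiled function itself as an argument to another @njit function, which is often the cleanest workaround when the callback does not need to be a method. A minimal sketch, assuming a reasonably recent Numba:

from numba import njit

@njit
def euler_step(y, dydx, dt):
    # one explicit Euler update
    return y + dydx * dt

@njit(fastmath=True)
def integrate(step, y0, dydx, dt, n):
    # 'step' is itself an @njit function passed in as an argument
    y = y0
    for _ in range(n):
        y = step(y, dydx, dt)
    return y

print(integrate(euler_step, 0.0, 1.0, 0.01, 100))  # 100 steps of dy/dt = 1 -> ~1.0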

Compute once, use multiple times within Python class

I am trying to define a class within which a function of many variables is optimized. Normally I'm working with ~500-1000 variables. In this class, I need to pass the function and its derivative to scipy's minimize to find the x0 which minimizes the function.
The following is a simple working example of the concept, and it works fine. But as you can see, both the function (f) and its derivative (df) depend on another function g (in this example g looks trivial and could be written another way, but the actual functions are much more complicated).
I was wondering if I can calculate g only once at each iteration and then reuse that value within the class, considering that f and df get called by minimize multiple times, so at each step g would be re-evaluated as well.
Thanks!
import numpy as np
from scipy.optimize import minimize

class Minimization(object):
    '''A class to optimize a function'''
    def __init__(self,x,y):
        self.x = x
        self.y = y
        self.p = np.array([x,y])
    def g(self,x,y):
        return x-y
    def f(self,p):
        return (self.g(*p) - 1)**2
    def df(self,p):
        fprime = 2*(self.g(*p) - 1)
        return np.array([fprime,-fprime])
    def optimize(self):
        P1 = minimize(fun=self.f, x0=self.p, args=(), method='Newton-CG', jac=self.df)
        return P1

m = Minimization(2,4)
m.optimize()
# fun: 0.0
# jac: array([ 0., -0.])
# message: 'Optimization terminated successfully.'
# nfev: 3
# nhev: 0
# nit: 2
# njev: 6
# status: 0
# success: True
# x: array([ 3.5, 2.5])
What you want is called "memoizing". When the function g calculates a value it stores the result in a dictionary, indexed by the arguments x, y. Every time g is called it checks the dictionary to see if the value it needs is already stored there. If you need to reset the values, you clear the dictionary. Something like this:
class Minimization(object):
    '''A class to optimize a function'''
    def __init__(self,x,y):
        self.x = x
        self.y = y
        self.p = np.array([x,y])
        self.cache = {}  # previously computed values of g
    def g(self,x,y):
        cache_index = (x, y)
        if cache_index in self.cache:  # check cache first
            return self.cache[cache_index]
        value = x - y
        self.cache[cache_index] = value  # save for later
        return value
    def f(self,p):
        return (self.g(*p) - 1)**2
    def df(self,p):
        fprime = 2*(self.g(*p) - 1)
        return np.array([fprime,-fprime])
    def optimize(self):
        self.cache.clear()  # blow the cache
        P1 = minimize(fun=self.f, x0=self.p, args=(), method='Newton-CG', jac=self.df)
        return P1
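As an illustrative check (my addition, not part of the original answer), you can inspect the cache after optimizing to see how many distinct points g was actually evaluated at:

m = Minimization(2, 4)
res = m.optimize()
print(res.x)         # array([3.5, 2.5])
print(len(m.cache))  # number of distinct (x, y) points at which g was evaluated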
To complement Paul's answer, you could define a class aggregating caching-like methods that you then (re)use as decorators.
import functools as ft  # <------ used to keep the methods' docstrings

class Cache(object):
    def __init__(self):
        self._cache = {}
    @classmethod
    def _property(cls, meth):
        @property
        @ft.wraps(meth)
        def __property(cls):
            meth_name = meth.__name__
            if meth_name not in cls._cache:
                cls._cache[meth_name] = meth(cls)
            return cls._cache[meth_name]
        return __property
    @classmethod
    def _method(cls, meth):
        @ft.wraps(meth)
        def __method(cls, *args, **kwargs):
            meth_key = '{}_{}'.format(meth.__name__, args)  # <---- keyed as a string so as to avoid unhashable-type errors
            if meth_key not in cls._cache:
                cls._cache[meth_key] = meth(cls, *args, **kwargs)
            return cls._cache[meth_key]
        return __method
And then use the class Cache as an ancestor of Minimization, as follows:
import numpy as np
from scipy.optimize import minimize

class Minimization(Cache):  # <---------- inherits from Cache instead of object
    '''A class to optimize a function'''
    def __init__(self,x,y):
        super(Minimization,self).__init__()
        self.x0 = x  # I changed the names because, as it stands,
        self.y0 = y  # these attributes are actually used as first guesses
        self.p0 = np.array([x,y])  # for the resolution process
    @Cache._method
    def g(self, x, y):
        return x - y
    #@Cache._method
    def f(self,p):
        return (self.g(*p) - 1)**2
    #@Cache._method
    def df(self,p):
        fprime = 2*(self.g(*p) - 1)
        return np.array([fprime,-fprime])
    @Cache._property
    def optimized(self):  # <----- renamed to optimized to reflect that it is a property
        return minimize(fun=self.f, x0=self.p0, args=(), method='Newton-CG', jac=self.df)
Use Case (tested under Python 2.7.11 and 3.6.1)
>>> m = Minimization(2,4)
>>> # Take care to clear the cache with m._cache.clear() if optimized has already been called and you changed one of its "dependencies".
>>> # Alternatively, you may simply remove the @Cache._property decorator.
>>> m.optimized
status: 0
success: True
njev: 6
nfev: 3
fun: 0.0
x: array([ 3.5, 2.5])
message: 'Optimization terminated successfully.'
nhev: 0
jac: array([ 0., -0.])
Without having looked too deeply at the code itself, here is a sample class demonstrating how to compute a value once and avoid recomputing it on each invocation. You could also make this a property.
class StackOverflow:
    def __init__(self, value=None):
        self._value = value
    def compute_value(self):
        if self._value is None:
            self._value = 100  # compute value here
        return self._value
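Following up on the property suggestion, here is a minimal sketch of the same lazy-computation idea exposed as a read-only property (my illustration, not from the answer):

class StackOverflow:
    def __init__(self):
        self._value = None

    @property
    def value(self):
        # computed on first access, then reused on every later access
        if self._value is None:
            self._value = 100  # expensive computation here
        return self._value

s = StackOverflow()
print(s.value)  # computes once
print(s.value)  # served from the cached attribute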
