How to use scipy `minimize` on a difference between two vectors? - python

I have two vectors w1 and w2 (each of length 100), and I want to minimize the sum of their absolute differences, i.e.:
import numpy as np

def diff(w: np.ndarray) -> float:
    """Get the sum of absolute differences in the vector w.

    Args:
        w: A flattened vector of length 200, with the first 100 elements
            pertaining to w1, and the last 100 elements pertaining to w2.

    Returns:
        Sum of absolute differences.
    """
    return np.sum(np.absolute(w[:100] - w[-100:]))
I need to write diff() as taking only one argument, since scipy.optimize.minimize requires the array passed to the x0 argument to be one-dimensional.
As for constraints, I have:

1. w1 is fixed and not allowed to change.
2. w2 is allowed to change.
3. The sum of absolute values of w2 is between 0.1 and 1.1: 0.1 <= sum(abs(w2)) <= 1.1.
4. |w2_i| < 0.01 for any element i in w2.
I am confused as to how we code these constraints using the Bounds and LinearConstraint objects. What I've tried so far is the following:
from scipy.optimize import minimize, Bounds, LinearConstraint

bounds = Bounds(lb=[-0.01] * 200, ub=[0.01] * 200)  # constraint #4
lc = LinearConstraint([[1] * 200], [0.1], [1.1])  # constraint #3
res = minimize(
    fun=diff,
    method='trust-constr',
    x0=w,  # my flattened vector containing w1 in the first 100 elements, and w2 in the last 100 elements
    bounds=bounds,
    constraints=(lc)
)
My logic for the bounds variable comes from constraint #4, and for the lc variable from constraint #3. However, I know I've coded this wrong, because the lower and upper bounds are of length 200, which seems to indicate they are applied to both w1 and w2, whereas I only want to apply the constraints to w2 (I get the error ValueError: operands could not be broadcast together with shapes (200,) (100,) if I try to change the length of the arrays in Bounds from 200 to 100).
The shapes and argument types for LinearConstraint are especially confusing to me, but I did try to follow the scipy example.
This current implementation never seems to finish; it just hangs forever.
How do I properly implement bounds and LinearConstraint so that it satisfies my constraints list above, if that is even possible?

Your problem can easily be formulated as a linear optimization problem (LP). You only need to reformulate all absolute values of the optimization variables.
Changing the notation slightly (x is now the optimization variable w2 and w is just your given vector w1), your problem reads as
min |w_1 - x_1| + ... + |w_N - x_N|

s.t. lb <= |x_1| + ... + |x_N| <= ub    (3)

     |x_i| <= 0.01 - eps                (4)  (models the strict inequality)
where eps is just a sufficiently small number in order to model the strict inequality.
Let's consider the constraint (3). Here, we add additional positive variables z and define z_i = |x_i|. Then, we replace all absolute values |x_i| by z_i and impose the constraints -x_i <= z_i <= x_i which model the relationship z_i = |x_i|. Similarly, you can proceed with the objective and the constraint (4). The latter is by the way trivial and equivalent to -(0.01 - eps) <= x_i <= 0.01 - eps.
In the end, your optimization problem should read (assuming that all your w_i are positive):
min u_1 + ... + u_N

s.t. lb <= z_1 + ... + z_N <= ub
     -x <= z <= x
     -0.01 + eps <= x <= 0.01 - eps
     -(w - x) <= u <= w - x
     0 <= z
     0 <= u
with 3*N optimization variables x_1, ..., x_N, u_1, ..., u_N, z_1, ..., z_N. It isn't hard to write these constraints as a matrix-vector product A_ineq * x <= b_ineq. Then, you can solve it by scipy.optimize.linprog as follows:
import numpy as np
from scipy.optimize import linprog

n = 100
w = np.abs(np.random.randn(n))
eps = 1e-10
lb = 0.1
ub = 1.1

# linear constraints: A_ub * (x, z, u)^T <= b_ub
A_ineq = np.block([
    [np.zeros(n), np.ones(n), np.zeros(n)],
    [np.zeros(n), -np.ones(n), np.zeros(n)],
    [-np.eye(n), np.eye(n), np.zeros((n, n))],
    [-np.eye(n), -np.eye(n), np.zeros((n, n))],
    [ np.eye(n), np.zeros((n, n)), -np.eye(n)],
    [ np.eye(n), np.zeros((n, n)), np.eye(n)],
])
b_ineq = np.hstack((ub, -lb, np.zeros(n), np.zeros(n), w, w))

# bounds: lower <= (x, z, u)^T <= upper
lower = np.hstack(((-0.01 + eps) * np.ones(n), np.zeros(n), np.zeros(n)))
upper = np.hstack((( 0.01 - eps) * np.ones(n), np.inf * np.ones(n), np.inf * np.ones(n)))
bounds = [(l, u) for (l, u) in zip(lower, upper)]

# objective: c^T * (x, z, u)
c = np.hstack((np.zeros(n), np.zeros(n), np.ones(n)))

# solve the problem (the bounds need to be passed explicitly,
# otherwise linprog defaults to 0 <= x)
res = linprog(c, A_ub=A_ineq, b_ub=b_ineq, bounds=bounds, method="highs")

# your solution
x = res.x[:n]
print(res.message)
print(x)
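As a quick sanity check (reusing x and w from the script above), you can verify the recovered solution against the original constraints from the question:

# sanity check: the recovered w2 should satisfy the constraints from the question
print("0.1 <= sum |x_i| <= 1.1:", 0.1 <= np.abs(x).sum() <= 1.1)
print("all |x_i| < 0.01:", bool(np.all(np.abs(x) < 0.01)))
print("objective sum |w_i - x_i|:", np.sum(np.abs(w - x)))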
Some notes in arbitrary order:
It's highly recommended to solve linear optimization problems with linprog instead of minimize. The former provides an interface to HiGHS, a high-performance LP solver that outperforms all the algorithms under the hood of minimize. However, it's also worth mentioning that minimize is meant to be used for nonlinear optimization problems.
In case your values w are not all positive, we need to change the formulation.

You can (and perhaps should, for clarity) use the args argument in minimize, and provide the fixed vector as an extra argument to your function.
If you set up your equation as follows:
def diff(w2, w1):
    return np.sum(np.absolute(w1 - w2))
and your constraints with
bounds = Bounds(lb=[-0.01] * 100, ub=[0.01] * 100)  # constraint #4
lc = LinearConstraint([[1] * 100], [0.1], [1.1])  # constraint #3
and then do
res = minimize(
    fun=diff,
    method='trust-constr',
    x0=w1,
    args=(w2,),
    bounds=bounds,
    constraints=[lc]
)
Then:
print(res.success, res.status, res.nit, np.abs(res.x).sum(), all(np.abs(res.x) < 0.01))
yields (for me at least)
(True, 1, 17, 0.9841520351691752, True)
which seems to be what you want.
Note that my test inputs are:
w1 = (np.arange(100) - 50) / 1000
w2 = np.ones(100, dtype=float)
which may or may not be favourable to the fitting procedure.
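For reference, here are the pieces above assembled into one self-contained script (same code, just combined):

import numpy as np
from scipy.optimize import minimize, Bounds, LinearConstraint

def diff(w2, w1):
    return np.sum(np.absolute(w1 - w2))

# test inputs as above
w1 = (np.arange(100) - 50) / 1000
w2 = np.ones(100, dtype=float)

bounds = Bounds(lb=[-0.01] * 100, ub=[0.01] * 100)
lc = LinearConstraint([[1] * 100], [0.1], [1.1])

res = minimize(fun=diff, method='trust-constr', x0=w1, args=(w2,),
               bounds=bounds, constraints=[lc])
print(res.success, res.status, res.nit, np.abs(res.x).sum(),
      all(np.abs(res.x) < 0.01))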


scipy.optimize.minimize choosing parameters that defy constraints

I am running scipy.optimize.minimize trying to maximize the likelihood for left-truncated data on a Gompertz distribution. Since the data is left-truncated at 1, I get this likelihood:
# for a single point x_i, the left-truncated log-likelihood is:
# ln(tau) + tau*(ln(theta) - ln(x_i)) - (theta / x_i) ** tau - ln(x_i) - ln(1 - exp(-(theta / d) ** tau))
def to_minimize(args, data, d=1):
    theta, tau = args
    if tau <= 0 or theta <= 0 or theta / d < 0 or np.exp(-(theta / d) ** tau) >= 1:
        print('ERROR')
    term1 = len(data) * (np.log(tau) + tau * np.log(theta) - np.log(1 - np.exp(-(theta / d) ** tau)))
    term2 = 0
    for x in data:
        term2 += (-(tau + 1) * np.log(x)) - (theta / x) ** tau
    return term1 + term2
This will fail in all instances where the if statement is true. In other words, tau and theta have to be strictly positive, and theta ** tau must be sufficiently far away from 0 so that np.exp(-theta ** tau) is "far enough away" from 1, since otherwise the logarithm will be undefined.
These are the constraints which I thus defined. I used the notation with a dict instead of a NonlinearConstraint object, since it seems that this method accepts strict inequalities (np.exp(-x[0] ** x[1]) must be strictly less than 1). Maybe I have misunderstood the documentation on this.
def constraints(x):
    return [1 - np.exp(-(x[0]) ** x[1])]
To maximize the likelihood, I minimize the negative likelihood.
opt = minimize(lambda args: -to_minimize(args, data),
               x0=np.array((1, 1)),
               constraints={'type': 'ineq', 'fun': constraints},
               bounds=np.array([(1e-15, 10), (1e-15, 10)]))
As I take it, the two arguments should then never be chosen in a way such that my code fails. Yet, the algorithm tries to move theta very close to its lower bound and tau very close to its upper bound so that the logarithm becomes undefined.
What makes my code fail?
Both forms of constraints, i.e. NonlinearConstraint and dict constraints, don't support strict inequalities. Typically, one therefore uses g(x) >= c + Ɛ to model the strict inequality g(x) > c, where Ɛ is a sufficiently small number.
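Applied to the constraints function from the question, a minimal sketch of this trick (the value of eps is chosen arbitrarily):

eps = 1.0e-8
# dict-style 'ineq' constraints require fun(x) >= 0, so the strict
# inequality 1 - exp(-(theta ** tau)) > 0 is modeled as >= eps
con = {'type': 'ineq', 'fun': lambda x: constraints(x)[0] - eps}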
Note also that it is not guaranteed that each iteration lies inside the feasible region. Internally, most of the methods try to bring it back into the feasible region by simple clipping of the bounds. In cases where this doesn't work, you can try NonlinearConstraint's keep_feasible option and then use the trust-constr method:
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

def con_fun(x):
    return 1 - np.exp(-(x[0]) ** x[1])

# 1.0e-8 <= con_fun <= np.inf
con = NonlinearConstraint(con_fun, 1.0e-8, np.inf, keep_feasible=True)
x0 = np.array((1., 1.))
bounds = np.array([(1e-5, 10), (1e-5, 10)])
opt = minimize(lambda args: -to_minimize(args, data),
               x0=x0, constraints=(con,),
               bounds=bounds, method="trust-constr")

How to fit a piecewise (alternating linear and constant segments) function to a parabolic function?

I have a function, for example f(x) = k * x^(1/a) (as defined in the code below), but this can be something else as well, like a quadratic or logarithmic function. I am only interested in the domain x >= 1. The parameters of the function (a and k in this case) are known as well.
My goal is to fit a continuous piecewise function to this, which contains alternating segments of linear functions (i.e. sloped straight segments, each with an intercept of 0) and constants (i.e. horizontal segments joining the sloped segments together). The first and last segments are both sloped. And the number of segments should be pre-selected between around 9-29 (that is, 5-15 linear steps + 4-14 constant plateaus).
Formally:

The input function: f(x) = k * x^(1/a)

The fitted piecewise function (alternating sloped and constant segments, with slopes b, constants c, and breakpoints r):

g(x) = b_1 * x  for x <= r_1
g(x) = c_1      for r_1 < x <= r_2
g(x) = b_2 * x  for r_2 < x <= r_3
...
I am looking for the optimal resulting parameters (c,r,b) (in terms of least squares) if the segment numbers (n) are specified beforehand.
The resulting constants (c) and the breakpoints (r) should be whole natural numbers, and the slopes (b) should be values rounded to two decimal places.
I have tried to do the fitting numerically using the pwlf package with a segmented constant model, and further processed the resulting constant model with some graphical intuition to "slice" the constant steps with the slopes. It works to some extent, but I am sure this is suboptimal from both a fitting and a computational-efficiency perspective. It takes multiple minutes to generate a fit with 8 slopes on the range of 1-50000. I am sure there must be a better way to do this.
My idea would be that, instead of using only numerical methods/ML, the fact that we have the algebraic form of the input function could be exploited in some way, at least through algebraic transforms (integrals), to get to a simpler optimization problem.
import numpy as np
import matplotlib.pyplot as plt
import pwlf

# The input function
def input_func(x, k, a):
    return np.power(x, 1/a) * k

x = np.arange(1, 5e4)
y = input_func(x, 1.8, 1.3)
plt.plot(x, y);

def pw_fit(func, x_r, no_seg, *fparams):
    # working on the specified range
    x = np.arange(1, x_r)
    y_input = func(x, *fparams)
    my_pwlf = pwlf.PiecewiseLinFit(x, y_input, degree=0)
    res = my_pwlf.fit(no_seg)
    yHat = my_pwlf.predict(x)
    # Function values at the breakpoints
    y_isec = func(res, *fparams)
    # Slope values at the breakpoints
    slopes = np.round(y_isec / res, decimals=2)
    slopes = slopes[1:]
    # For the first slope value, I use the intersection of the first constant plateau and the input function
    first_isec = np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0]
    slopes = np.insert(slopes, 0, np.round(y_input[first_isec] / first_isec, decimals=2))
    plateaus = np.unique(np.round(yHat))
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    slopes = np.delete(slopes, to_del + 1)
    plateaus = np.delete(plateaus, to_del)
    breakpoints = [np.ceil(plateaus[0] / slopes[0])]
    for idx, j in enumerate(slopes[1:-1]):
        breakpoints.append(np.floor(plateaus[idx] / j))
        breakpoints.append(np.ceil(plateaus[idx + 1] / j))
    breakpoints.append(np.floor(plateaus[-1] / slopes[-1]))
    return slopes, plateaus, breakpoints

slo, plat, breaks = pw_fit(input_func, 50000, 8, 1.8, 1.3)

# The piecewise function itself
def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)

y_output = pw_calc(x, slo, plat, breaks)
plt.plot(x, y, y_output);
(Not important, but I think the fitted piecewise function is not continuous as it is. Intervals should be x<=r1; r1<x<=r2; ....)
As Anatolyg has pointed out, it looks to me that in the optimal solution (for the function posted at least, and probably for any function whose derivative is nonzero), the horizontal segments will collapse to a point or to the minimum segment length (in this case 1).
EDIT---------------------------------------------
The behavior above can only be valid if the slopes are allowed an intercept. If the intercepts are zero, as posted in the question, one consideration must be taken into account: is the initial parabolic function defined at zero or nearby? Imagine the function y = 0.001*sqrt(x-1000); then the segments defined as b*x will have a slope close to zero and will be so similar to the constant segments that the best fit will be just the single zero-intercept line that best fits the whole function.
Provided that the function is defined at zero or nearby, you can start by approximating the curve just by linear segments (with intercepts), as in the sketch after this list:

1. Divide the function domain into N intervals (equal intervals, or intervals whose size is a function of the average curvature (or second derivative) of the function along the domain).
2. Do a linear fit/regression in each interval.
3. For each interval, if a point (or bunch of points) at the extreme of the interval is better fitted by the line of the neighboring interval than by the line of its own interval, reassign this point to the neighboring interval.
4. Repeat from 2) until no extreme points are moved.
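A minimal sketch of steps 1-4, assuming the curve is sampled as arrays x and y and using np.polyfit for the per-interval regressions (the helper name refit_intervals is made up for illustration):

import numpy as np

def refit_intervals(x, y, N, max_iter=100):
    # step 1: equal-size partition; interval i covers x[edges[i]:edges[i+1]]
    edges = np.linspace(0, len(x), N + 1).astype(int)
    fits = []
    for _ in range(max_iter):
        # step 2: degree-1 polyfit (slope + intercept) per interval
        fits = [np.polyfit(x[a:b], y[a:b], 1)
                for a, b in zip(edges[:-1], edges[1:])]
        moved = False
        # step 3: hand a boundary point to the neighbor whose line fits it better
        # (for brevity, this sketch only moves points across the right boundary)
        for i in range(N - 1):
            j = edges[i + 1] - 1          # last point of interval i
            if j - edges[i] < 2:          # keep at least two points per interval
                continue
            err_own = abs(np.polyval(fits[i], x[j]) - y[j])
            err_nb = abs(np.polyval(fits[i + 1], x[j]) - y[j])
            if err_nb < err_own:
                edges[i + 1] -= 1
                moved = True
        # step 4: stop when no extreme points were moved
        if not moved:
            break
    return edges, fits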
Linear regressions might be optimized not to calculate all the covariance matrices from scratch on each iteration, but just by adding the contributions of the moved points to the previous covariance matrices.
Then each linear segment (LSi) is replaced by a combination of a small constant segment at the beginning (Cbi), a linear segment without intercept (Si), and another constant segment at the end (Cei). These segments are easy to calculate, as Si will contain the middle point of LSi, and Cbi and Cei will have respectively the begin and end values of the segment LSi. Then the intervals of each segment have to be calculated as intersections between lines.
With this, the constant end segment will be collinear with the constant begin segment of the next interval, so they will merge, resulting in a series of constant and linear segments interleaved.
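A small sketch of that replacement step, assuming each fitted line is given as y = m*x + q on the interval [x0, x1] (the helper name split_segment is made up):

def split_segment(m, q, x0, x1):
    xm = 0.5 * (x0 + x1)
    cb = m * x0 + q        # Cb: value of the fitted line at the interval start
    ce = m * x1 + q        # Ce: value of the fitted line at the interval end
    b = (m * xm + q) / xm  # S: zero-intercept slope through the middle point of LS
    r1 = cb / b            # breakpoint where Cb meets b*x
    r2 = ce / b            # breakpoint where b*x meets Ce
    return cb, b, ce, r1, r2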
But this would be a floating-point starting solution. Next, you will have to apply all the roundings, which will mess up the segments quite a lot, as the conditions of integer intervals and zero-intercept linear segments can conflict strongly. In fact, b, c, r are not totally independent: if c_i and r_i+1 are known, then b_i+1 is already fixed (b_i+1 = c_i / r_i+1, since the sloped segment must meet the preceding plateau at the breakpoint).
If nothing is broken so far, the final task will be to minimize the error/cost function (I assume that it will be the integral of the error between the parabolic function and the segments). My guess is that gradients here will be quite a pain, as if you change, for example, one ci, all the rest of the bj and cj will have to adapt as well due to the integer-interval restriction. However, if you can generalize the derivatives between parameters (how much do I have to adapt bi+1 if ci changes by a unit), you can propagate the change of one parameter to all the other parameters and have a kind of gradient. Then, for each interval, you can estimate what the ideal parameter would be, and by averaging over all intervals calculate the best gradient step. Let me illustrate this:
Assuming first that r parameters are fixed, if I change c1 by one unit, b2 changes by 0.1, c2 changes by -0.2 and b3 changes by 0.2. This would be the gradient.
Then I estimate, comparing with the parabolic curve, that c1 should increase 0.5 (to reduce the cost by 10 points), b2 should increase 0.2 (to reduce the cost by 5 points), c2 should increase 0.2 (to reduce the cost by 6 points) and b3 should increase 0.1 (to reduce the cost by 9 points).
Finally, the gradient step would be (0.5/1·10 + 0.2/0.1·5 - 0.2/(-0.2)·6 + 0.1/0.2·9)/(10 + 5 + 6 + 9)~= 0.45. Thus, c1 would increase 0.45 units, b2 would increase 0.45·0.1, and so on.
When you add the r parameters to the pot, as integer intervals do not have a proper derivative, the calculation is not straightforward. However, you can consider the r parameters as floating points, calculate and apply the gradient step, and then apply the roundings.
We can integrate the squared error function for linear and constant pieces and let SciPy optimize it. Python 3:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize

xl = 1
xh = 50000
a = 1.3
p = 1 / a
n = 8

def split_b_and_c(bc):
    return bc[::2], bc[1::2]

def solve_for_r(b, c):
    r = np.empty(2 * n)
    r[0] = xl
    r[1:-1:2] = c / b[:-1]
    r[2::2] = c / b[1:]
    r[-1] = xh
    return r

def linear_residual_integral(b, x):
    return (
        (x ** (2 * p + 1)) / (2 * p + 1)
        - 2 * b * x ** (p + 2) / (p + 2)
        + b ** 2 * x ** 3 / 3
    )

def constant_residual_integral(c, x):
    return x ** (2 * p + 1) / (2 * p + 1) - 2 * c * x ** (p + 1) / (p + 1) + c ** 2 * x

def squared_error(bc):
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    linear = np.sum(
        linear_residual_integral(b, r[1::2]) - linear_residual_integral(b, r[::2])
    )
    constant = np.sum(
        constant_residual_integral(c, r[2::2])
        - constant_residual_integral(c, r[1:-1:2])
    )
    return linear + constant

def evaluate(x, b, c, r):
    i = 0
    while x > r[i + 1]:
        i += 1
    return b[i // 2] * x if i % 2 == 0 else c[i // 2]

def main():
    bc0 = (xl + (xh - xl) * np.arange(1, 4 * n - 2, 2) / (4 * n - 2)) ** (
        p - 1 + np.arange(2 * n - 1) % 2
    )
    bc = scipy.optimize.minimize(
        squared_error, bc0, bounds=[(1e-06, None) for i in range(2 * n - 1)]
    ).x
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    X = np.linspace(xl, xh, 1000)
    Y = [evaluate(x, b, c, r) for x in X]
    plt.plot(X, X ** p)
    plt.plot(X, Y)
    plt.show()

if __name__ == "__main__":
    main()
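For reference, the two residual integrals above are just the expanded antiderivatives of the squared error of x^p against each kind of piece, which squared_error evaluates between consecutive breakpoints and sums:

∫ (x^p - b*x)^2 dx = x^(2p+1)/(2p+1) - 2*b*x^(p+2)/(p+2) + b^2*x^3/3
∫ (x^p - c)^2 dx   = x^(2p+1)/(2p+1) - 2*c*x^(p+1)/(p+1) + c^2*x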
I have tried to come up with a new solution myself, based on the idea of @Amo Robb, where I have partitioned the domain and curve-fitted a dual piece (constant and linear together, with the help of np.maximum). I have used 1 / f'(x) as the function to designate the breakpoints, but I know this is arbitrary and does not provide a global optimum. Maybe there is some optimal function for these breakpoints. But this solution is OK for me, as it might be appropriate to have a better fit at the first segments, at the expense of the error in the later segments. (The task itself is actually a cost-based retail margin calculation {supply price -> added margin}, as the retail POS software can only work with such a piecewise margin function.)
The answer from @David Eisenstat is the correct optimal solution if the parameters are allowed to be floats. Unfortunately, the POS software cannot use floats. It is OK to round up the c-s and r-s afterwards. But the b-s should be rounded to two decimals, as those are inputted as percents, and this constraint would ruin the optimal solution with long floats. I will try to further improve my solution with both Amo's and David's valuable input. Thank you for that!
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# The input function f(x)
def input_func(x, k, a):
    return np.power(x, 1/a) * k

# 1 / f'(x)
def one_per_der(x, k, a):
    return a / (k * np.power(x, 1/a - 1))

# 1 / f'(x) inverted
def one_per_der_inv(x, k, a):
    return np.power(a / (x * k), a / (1 - a))

def segment_fit(start, end, y, first_val):
    b, _ = curve_fit(lambda x, b: np.maximum(first_val, b * x), np.arange(start, end), y[start-1:end-1])
    b = float(np.round(b, decimals=2))
    bp = np.round(first_val / b)
    last_val = np.round(b * end)
    return b, bp, last_val

def pw_fit(end_range, no_seg, **fparams):
    y_bps = np.linspace(one_per_der(1, **fparams), one_per_der(end_range, **fparams), no_seg + 1)[1:]
    x_bps = np.round(one_per_der_inv(y_bps, **fparams))
    y = input_func(x, **fparams)
    slopes = [np.round(float(curve_fit(lambda x, b: x * b, np.arange(1, x_bps[0]), y[:int(x_bps[0])-1])[0]), decimals=2)]
    plats = [np.round(x_bps[0] * slopes[0])]
    bps = []
    for i, xbp in enumerate(x_bps[1:]):
        b, bp, last_val = segment_fit(int(x_bps[i] + 1), int(xbp), y, plats[i])
        slopes.append(b); bps.append(bp); plats.append(last_val)
    breaks = sorted(list(x_bps) + bps)[:-1]
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    breaks_to_del = np.concatenate((to_del * 2, to_del * 2 + 1))
    slopes = np.delete(slopes, to_del + 1)
    plats = np.delete(plats[:-1], to_del)
    breaks = np.delete(breaks, breaks_to_del)
    return slopes, plats, breaks

def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)

fparams = {'k': 1.8, 'a': 1.2}
end_range = 5e4
no_steps = 10

x = np.arange(1, end_range)
y = input_func(x, **fparams)

slopes, plats, breaks = pw_fit(end_range, no_steps, **fparams)
y_output = pw_calc(x, slopes, plats, breaks)
plt.plot(x, y_output, y);

Using python built-in functions for coupled ODEs

THIS PART IS JUST BACKGROUND IF YOU NEED IT
I am developing a numerical solver for the Second-Order Kuramoto Model. The functions I use to find the derivatives of theta and omega are given below.
# n-dimensional change in theta
def d_theta(omega):
    return omega

# n-dimensional change in omega
def d_omega(K, A, P, alpha, mask, n):
    def layer1(theta, omega):
        T = theta[:, None] - theta
        A[mask] = K[mask] * np.sin(T[mask])
        return -alpha * omega + P - A.sum(1)
    return layer1
These equations return vectors.
QUESTION 1
I know how to use odeint for two dimensions (y, t). For my research I want to use a built-in Python function that works for higher dimensions.
QUESTION 2
I do not necessarily want to stop after a predetermined amount of time. I have other stopping conditions in the code below that will indicate whether the system of equations converges to the steady state. How do I incorporate these into a built-in Python solver?
WHAT I CURRENTLY HAVE
This is the code I am currently using to solve the system. I just implemented RK4 with constant time stepping in a loop.
# This function randomly samples initial values in the domain and returns whether the solution converged
# Inputs:
#   f         change in theta (d_theta)
#   g         change in omega (d_omega)
#   tol_ss    when the norm of omega is below this tolerance, the solution is said to converge
#   tol_step  when the step-to-step change is below this tolerance, the solution is said to converge
#   h         size of the time step
#   max_iter  maximum number of steps Runge-Kutta will perform before giving up
#   max_laps  maximum number of laps the solution can do before giving up
#   fixed_t   vector of fixed points of theta
#   fixed_o   vector of fixed points of omega
#   n         number of dimensions
#   theta     initial theta vector
#   omega     initial omega vector
# Outputs:
#   converges true if the nodes restabilize, false otherwise
def kuramoto_rk4_wss(f, g, tol_ss, tol_step, h, max_iter, max_laps, fixed_o, fixed_t, n):
    def layer1(theta, omega):
        lap = np.zeros(n, dtype=int)
        converges = False
        i = 0
        tau = 2 * np.pi
        while i < max_iter:  # perform RK4 with constant time step
            p_omega = omega
            p_theta = theta
            T1 = h * f(omega)
            O1 = h * g(theta, omega)
            T2 = h * f(omega + O1/2)
            O2 = h * g(theta + T1/2, omega + O1/2)
            T3 = h * f(omega + O2/2)
            O3 = h * g(theta + T2/2, omega + O2/2)
            T4 = h * f(omega + O3)
            O4 = h * g(theta + T3, omega + O3)
            theta = theta + (T1 + 2*T2 + 2*T3 + T4)/6  # take theta time step
            mask2 = np.array(np.where(np.logical_or(theta > tau, theta < 0)))  # find which nodes left [0, 2pi]
            lap[mask2] = lap[mask2] + 1  # increment the lap counter
            theta[mask2] = np.mod(theta[mask2], tau)  # take the modulus
            omega = omega + (O1 + 2*O2 + 2*O3 + O4)/6
            if max_laps in lap:  # if any generator rotates this many times it probably won't converge
                break
            elif np.any(omega > 12):  # if any of the generators is rotating this fast, it probably won't converge
                break
            elif (np.linalg.norm(omega) < tol_ss and  # assert the nodes are sufficiently close to the equilibrium
                  np.linalg.norm(omega - p_omega) < tol_step and  # assert change in omega is small
                  np.linalg.norm(theta - p_theta) < tol_step):  # assert change in theta is small
                converges = True
                break
            i = i + 1
        return converges
    return layer1
Thanks for your help!
You can wrap your existing functions into a function accepted by odeint (with option tfirst=True) and solve_ivp as
def odesys(t, u):
    theta, omega = u[:n], u[n:]  # or: theta, omega = u.reshape(2, -1)
    return [*f(omega), *g(theta, omega)]  # or: np.concatenate([f(omega), g(theta, omega)])

u0 = [*theta0, *omega0]
t = np.linspace(t0, tf, timesteps + 1)
u = odeint(odesys, u0, t, tfirst=True)
# or
res = solve_ivp(odesys, [t0, tf], u0, t_eval=t)
The scipy methods pass numpy arrays and convert the return value into the same, so you do not have to care about that in the ODE function. The variant in the comments uses explicit numpy functions.
While solve_ivp does have event handling, using it for a systematic collection of events is rather cumbersome. It would be easier to advance some fixed step, do the normalization and termination detection, and then repeat this.
If you want to increase efficiency somewhat later on, use the stepper classes behind solve_ivp directly.
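A minimal sketch of that advance/normalize/check loop, reusing odesys, u0 and n from above (the chunk length dt, the time limit, and the tolerance are made-up values; the convergence test mirrors the tol_ss check from the question):

import numpy as np
from scipy.integrate import solve_ivp

u = np.asarray(u0, dtype=float)
t, dt, converged = 0.0, 1.0, False
while not converged and t < 1.0e4:  # hard time limit as a fallback
    sol = solve_ivp(odesys, [t, t + dt], u)  # advance one fixed chunk
    u, t = sol.y[:, -1], sol.t[-1]
    u[:n] = np.mod(u[:n], 2 * np.pi)  # normalize theta back into [0, 2*pi)
    converged = np.linalg.norm(u[n:]) < 1e-6  # e.g. omega close to steady state
print("converged:", converged)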

How to add several constraints to differential_evolution?

I have the same problem as in this question, but I want to add not just one but several constraints to the optimization problem.
So e.g. I want to maximize x1 + 5 * x2 with the constraints that the sum of x1 and x2 is smaller than 5 and x2 is smaller than 3 (needless to say, the actual problem is far more complicated and cannot just be thrown into scipy.optimize.minimize like this one; it just serves to illustrate the problem...).
I can do an ugly hack like this:
from scipy.optimize import differential_evolution
import numpy as np

def simple_test(x, more_constraints):
    # check whether all constraints evaluate to True
    if all(map(eval, more_constraints)):
        return -1 * (x[0] + 5 * x[1])
    # if not all constraints evaluate to True, return a positive number
    return 10

bounds = [(0., 5.), (0., 5.)]
additional_constraints = ['x[0] + x[1] <= 5.', 'x[1] <= 3']
result = differential_evolution(simple_test, bounds, args=(additional_constraints,), tol=1e-6)
print(result.x, result.fun, sum(result.x))
This will print
[ 1.99999986 3. ] -16.9999998396 4.99999985882
as one would expect.
Is there a better/more straightforward way to add several constraints than using the rather 'dangerous' eval?
An example is something like this:

additional_constraints = [lambda x: x[0] + x[1] <= 5., lambda x: x[1] <= 3]

def simple_test(x, more_constraints):
    # check whether all constraints evaluate to True
    if all(constraint(x) for constraint in more_constraints):
        return -1 * (x[0] + 5 * x[1])
    # if not all constraints evaluate to True, return a positive number
    return 10
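With the constraints as a list of callables, a usage sketch mirroring the call from the question (bounds and imports as above, no eval needed):

bounds = [(0., 5.), (0., 5.)]
result = differential_evolution(simple_test, bounds,
                                args=(additional_constraints,), tol=1e-6)
print(result.x, result.fun, sum(result.x))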
There is a proper solution to the problem described in the question: enforcing multiple nonlinear constraints with scipy.optimize.differential_evolution.
The proper way is by using the scipy.optimize.NonlinearConstraint class.
Below I give a non-trivial example of optimizing the classic Rosenbrock function inside a region defined by the intersection of two circles.
import numpy as np
from scipy import optimize

# Rosenbrock function
def fun(x):
    return 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

# Function defining the nonlinear constraints:
# 1) x^2 + (y - 3)^2 < 4
# 2) (x - 1)^2 + (y + 1)^2 < 13
def constr_fun(x):
    r1 = x[0]**2 + (x[1] - 3)**2
    r2 = (x[0] - 1)**2 + (x[1] + 1)**2
    return r1, r2

# No lower limit on constr_fun
lb = [-np.inf, -np.inf]
# Upper limit on constr_fun
ub = [4, 13]

# Bounds are irrelevant for this problem, but are needed
# for differential_evolution to compute the starting points
bounds = [[-2.2, 1.5], [-0.5, 2.2]]

nlc = optimize.NonlinearConstraint(constr_fun, lb, ub)
sol = optimize.differential_evolution(fun, bounds, constraints=nlc)

# Accurate solution by Mathematica
true = [1.174907377273171, 1.381484428610871]
print(f"nfev = {sol.nfev}")
print(f"x = {sol.x}")
print(f"err = {sol.x - true}\n")
This prints the following with default parameters:
nfev = 636
x = [1.17490808 1.38148613]
err = [7.06260962e-07 1.70116282e-06]
Here is a visualization of the function (contours) and the feasible region defined by the nonlinear constraints (shading inside the green line). The constrained global minimum is indicated by the yellow dot, while the magenta one shows the unconstrained global minimum.
This constrained problem has an obvious local minimum at (x, y) ~ (-1.2, 1.4) on the boundary of the feasible region which will make local optimizers fail to converge to the global minimum for many starting locations. However, differential_evolution consistently finds the global minimum as expected.

Need help fixing my implementation of RK4

I'd appreciate it if someone more experienced with implementation would help me spot the logical flaw in my current code. For the past couple of hours I've been stuck with the implementation and testing of various step sizes for the following RK4 function to solve the Lotka-Volterra differential equation.
I did my absolute best to ensure readability of the code and comment out crucial steps, so the code below should be clear.
import matplotlib.pyplot as plt
import numpy as np

def model(state, t):
    """
    A function that creates a 1x2 array containing the Lotka-Volterra differential equations.
    Parameter assignment/convention:
        a   natural growth rate of the prey
        b   chance of being eaten by a predator
        c   dying rate of the predators per week
        d   chance of catching a prey
    """
    x, y = state  # corresponds to the current state (initial conditions at t[0]);
                  # consider it as a vector too
    a = 0.08
    b = 0.002
    c = 0.2
    d = 0.0004
    return np.array([x*(a - b*y), -y*(c - d*x)])  # corresponds to [dx/dt, dy/dt]
def rk4(f, x0, t):
    """
    4th-order Runge-Kutta method implementation to solve x' = f(x,t) with x(t[0]) = x0.
    INPUT:
        f  - function of x and t equal to dx/dt.
        x0 - the initial condition(s).
             Specifies the value of x at t = t[0] (initial).
             Can be a scalar or a vector (NumPy array).
             Example: [x0, y0] = [500, 20]
        t  - a time vector (array) at which the values of the solution are computed.
             t[0] is considered as the initial time point.
             The step size h is dependent on the time vector; choosing more points
             will result in a smaller step size.
    OUTPUT:
        x  - an array containing the solution evaluated at each point in the t array.
    """
    n = len(t)
    x = np.array([x0] * n)  # creating an array of length n
    for i in range(n - 1):
        h = t[i+1] - t[i]  # step size, dependent on time vector
        # starting below - the implementation of the RK4 algorithm:
        # for further information visit http://en.wikipedia.org/wiki/Runge-Kutta_methods
        # k1 is the increment based on the slope at the beginning of the interval (same as Euler)
        # k2 is the increment based on the slope at the midpoint of the interval
        # k3 is AGAIN the increment based on the slope at the midpoint
        # k4 is the increment based on the slope at the end of the interval
        k1 = f(x[i], t[i])
        k2 = f(x[i] + 0.5 * h * k1, t[i] + 0.5 * h)
        k3 = f(x[i] + 0.5 * h * k2, t[i] + 0.5 * h)
        k4 = f(x[i] + h * k3, t[i] + h)
        # finally computing the weighted average and storing it in the x-array
        x[i+1] = x[i] + h * ((k1 + 2.0 * (k2 + k3) + k4) / 6.0)
    return x
################################################################
# just the graphical output

# initial conditions for the system
x0 = 500
y0 = 20

# vector of times
t = np.linspace(0, 200, 150)

result = rk4(model, [x0, y0], t)
plt.plot(t, result)
plt.xlabel('Time')
plt.ylabel('Population Size')
plt.legend(('x (prey)', 'y (predator)'))
plt.title('Lotka-Volterra Model')
plt.show()
The current output looks 'okay-ish' on a small interval and then goes berserk. Oddly enough, the code seems to perform better when I choose a larger step size rather than a small one, which suggests that my implementation must be wrong, or maybe my model is off. I couldn't spot the error myself.
Output (wrong): [figure omitted]
And this is the desired output, which can easily be obtained by using one of SciPy's integration modules. Note that on the time interval [0, 50] the simulation seems correct; then it gets worse with every step.
Unfortunately, you fell into the same trap I've occasionally fallen into: your initial x0 array contains integers, and thus, all resulting x[i] values will be converted to an integer after calculation.
Why is that? Because int is the type of your initial conditions:
x0 = 500
y0 = 20
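A tiny demonstration of the trap (made-up values): assigning a float result back into an integer NumPy array silently truncates it.

import numpy as np

x = np.array([500, 20])  # integer dtype is inferred from the values
x[0] = x[0] + 0.7        # the float result 500.7 is truncated on assignment
print(x[0])              # prints 500 -- the increment is lost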
The solution is, of course, to explicitly make them floats:
x0 = 500.
y0 = 20.
So why does scipy do it correctly when you feed it integer starting values? It probably converts them to float before starting the actual calculation. You could, for example, do:
x = np.array([x0] * n, dtype=float)
and then you're still safe to use integer initial conditions without problems.
At least this way, the conversion is done inside the function once and for all, and if you ever use it again half a year later (or someone else uses it), you can't fall into that trap again.
