Optimizing with more variables in Python

I have two simple arrays in Python, and I would like to minimize the sum-product of these arrays with respect to a given target value by changing the values in the first array.
Here is an example:
import numpy as np
from scipy.optimize import fmin

def func2(params):
    a, b, c = params
    arr1 = [a, b, c]
    arr2 = [150, 200, 230]
    res = sum(np.multiply(arr1, arr2))
    tar = 2
    error = res - tar
    return error

initial_guess = [0.0025, 0.0030, 0.0035]
finarr = fmin(func2, initial_guess)
print(finarr)
The code above runs, but I get wrong results: the numbers in the first array should be approximately 0.0027, 0.0033 and 0.0040.
I would be grateful if someone can help me.
Thank you.

You need to return the absolute value of the error in func2.
error = abs(res - tar)
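For reference, a minimal corrected version of the script (the same code as in the question, with only the return value changed):

import numpy as np
from scipy.optimize import fmin

def func2(params):
    a, b, c = params
    arr1 = [a, b, c]
    arr2 = [150, 200, 230]
    res = sum(np.multiply(arr1, arr2))
    tar = 2
    # Minimize the distance to the target rather than the signed difference,
    # otherwise fmin just drives the objective towards minus infinity.
    return abs(res - tar)

initial_guess = [0.0025, 0.0030, 0.0035]
finarr = fmin(func2, initial_guess)
print(finarr)

Note that with three unknowns and a single target value the problem is under-determined: any combination whose sum-product equals 2 minimizes this objective, so the exact values fmin returns will depend on the initial guess.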

Related

weird behavior of numba guvectorize

I wrote a function to test numba.guvectorize. This function takes the product of two NumPy arrays and computes the sum over the last axis, as follows:
from numba import guvectorize, float64
import numpy as np

@guvectorize([(float64[:], float64[:], float64)], '(n),(n)->()')
def g(x, y, res):
    res = np.sum(x * y)
However, the above guvectorize function returns wrong results as shown below:
>>> a = np.random.randn(3,4)
>>> b = np.random.randn(3,4)
>>> np.sum(a * b, axis=1)
array([-0.83053829, -0.15221319, -2.27825015])
>>> g(a, b)
array([4.67406747e-310, 0.00000000e+000, 1.58101007e-322])
What might be causing this problem?
Function g() receives an uninitialized array through the res parameter. Assigning a new value to it doesn't modify the original array passed to the function.
You need to replace the contents of res (and declare it as an array):
@guvectorize([(float64[:], float64[:], float64[:])], '(n),(n)->()')
def g(x, y, res):
    res[:] = np.sum(x * y)
The function operates on 1D vectors and returns a scalar (hence the signature (n),(n)->()); guvectorize does the job of dealing with 2D inputs and returning a 1D output.
>>> a = np.random.randn(3,4)
>>> b = np.random.randn(3,4)
>>> np.sum(a * b, axis=1)
array([-3.1756397 , 5.72632531, 0.45359806])
>>> g(a, b)
array([-3.1756397 , 5.72632531, 0.45359806])
But the original Numpy function np.sum is already vectorized and compiled, so there is little speed gain in using guvectorize in this specific case.
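If you want to check that claim on your own machine, a quick timing sketch (the actual numbers will depend on hardware and array sizes):

import timeit

setup = """
import numpy as np
from numba import guvectorize, float64

@guvectorize([(float64[:], float64[:], float64[:])], '(n),(n)->()')
def g(x, y, res):
    res[:] = np.sum(x * y)

a = np.random.randn(3, 4)
b = np.random.randn(3, 4)
g(a, b)  # call once so everything is compiled before timing starts
"""

print(timeit.timeit('np.sum(a * b, axis=1)', setup=setup, number=10000))
print(timeit.timeit('g(a, b)', setup=setup, number=10000))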
Your a and b arrays are 2-dimensional, while your guvectorized function has a signature that accepts 1D arrays and returns a 0D scalar. You have to modify it to accept 2D arrays and return a 1D array.
In one case you call np.sum with axis=1 and in the other without it; you have to do the same thing in both cases.
Also, instead of res = ... use res[...] = .... This may not be the actual problem in the guvectorize case, but it is a general pitfall in NumPy code: you have to assign values into the array instead of rebinding the variable reference.
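A quick plain-NumPy illustration of the difference between rebinding the name and assigning into the array (a minimal sketch, independent of guvectorize):

import numpy as np

def rebind(res):
    # Rebinds the local name only; the caller's array is untouched.
    res = np.array([1.0, 2.0])

def assign_inplace(res):
    # Writes into the existing buffer, so the caller sees the change.
    res[...] = np.array([1.0, 2.0])

out = np.zeros(2)
rebind(out)
print(out)            # [0. 0.]
assign_inplace(out)
print(out)            # [1. 2.]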
In my case I also added the cache=True parameter to the guvectorize decorator; it just speeds things up by caching and re-using the compiled Numba code instead of re-compiling it on every run.
The full, modified and corrected code is below:
from numba import guvectorize, float64
import numpy as np

@guvectorize([(float64[:, :], float64[:, :], float64[:])], '(n, m),(n, m)->(n)', cache = True)
def g(x, y, res):
    res[...] = np.sum(x * y, axis = 1)

# Test
np.random.seed(0)
a = np.random.randn(3, 4)
b = np.random.randn(3, 4)
print(np.sum(a * b, axis = 1))
print(g(a, b))
Output:
[ 2.57335386 3.41749149 -0.42290296]
[ 2.57335386 3.41749149 -0.42290296]

SciPy Linprog() Optimization

I am trying to solve the following optimization problem:
The objective function is: minimize x1 + 2*x2
Constraints: x1 + 4*x2 >= 50, x1 >= 0, x2 >= 0
This is a linear problem, so we use the linprog() function.
The answer needs to be an integer; how do I set up the constraints so that the answer is not a decimal?
import numpy as np
from scipy import optimize
c = [1, 2]
A = [[-1,-4]]
b = -50
x_bounds = (0, None)
y_bounds = (0, None)
result = optimize.linprog(c, A_ub = A, b_ub = b, bounds=[x_bounds, y_bounds] )
print(result)
It is not possible to do integer programming with scipy.
This problem has already been discussed; please check this post: post

NLopt minimize eigenvalue, Python

I have matrices whose elements can be defined as arithmetic expressions, and I have written Python code to optimize parameters in these expressions in order to minimize particular eigenvalues of the matrix. I have used scipy to do this, but I was wondering whether it is possible with NLopt, as I would like to try a few more of the algorithms it offers (derivative-free variants).
In scipy I would do something like this:
import numpy as np
from scipy.linalg import eig
from scipy.optimize import minimize

def my_func(x):
    y, w = x
    arr = np.array([[y+w, -2], [-2, w-2*(w+y)]])
    ev, ew = eig(arr)
    return ev[0]

x0 = np.array([10, 3.45])  # Initial guess
minimize(my_func, x0)
In NLopt I have tried this:
import numpy as np
from scipy.linalg import eig
import nlopt

def my_func(x, grad):
    arr = np.array([[x[0]+x[1], -2], [-2, x[1]-2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0]

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)
opt.set_lower_bounds([1.0, 1.0])
opt.set_min_objective(my_func)
opt.set_xtol_rel(1e-7)
x = opt.optimize([10.0, 3.5])
minf = opt.last_optimum_value()
print "optimum at ", x[0], x[1]
print "minimum value = ", minf
print "result code = ", opt.last_optimize_result()
This returns:
ValueError: nlopt invalid argument
Is NLopt able to process this problem?
my_func should return a double; the posted sample returns a complex value:
>>> print(type(ev[0]))
<class 'numpy.complex128'>
>>> ev[0]
(13.607794065928395+0j)
The correct version of my_func:
def my_func(x, grad):
    arr = np.array([[x[0]+x[1], -2], [-2, x[1]-2*(x[1]+x[0])]])
    ev, ew = eig(arr)
    return ev[0].real
updated sample returns:
optimum at [ 1. 1.]
minimum value = 2.7015621187164243
result code = 4
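Putting the question's setup together with the corrected objective, a minimal end-to-end sketch (assuming the nlopt package is installed):

import numpy as np
from scipy.linalg import eig
import nlopt

def my_func(x, grad):
    arr = np.array([[x[0] + x[1], -2], [-2, x[1] - 2 * (x[1] + x[0])]])
    ev, ew = eig(arr)
    # NLopt objectives must return a plain float, so take the real part.
    return ev[0].real

opt = nlopt.opt(nlopt.LN_BOBYQA, 2)
opt.set_lower_bounds([1.0, 1.0])
opt.set_min_objective(my_func)
opt.set_xtol_rel(1e-7)

x = opt.optimize([10.0, 3.5])
minf = opt.last_optimum_value()
print("optimum at", x[0], x[1])
print("minimum value =", minf)
print("result code =", opt.last_optimize_result())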

Python SciPy linprog optimization fails with status 3

Trying to minimize a simple linear function with linprog. The coefficients are the elements of arr2 multiplied by -1. There are only inequality constraints for each variable, such as -1 <= x1 <= 1, -2 <= x2 <= 2 and so on.
If I choose not to specify bounds in linprog:
from scipy.optimize import linprog
import numpy as np
import pandas as pd
numdim = 28
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([
    19.53, 128.97, 3538, 931.8, 0.1825, 150.88, 10315, 0.8109, 3.9475, 3022,
    31.77, 10323, 110.93, 220, 2219.5, 119.2, 703.6, 616, 338, 84.67,
    151.13, 111.28, 29.515, 29.67, 158800, 167.15, 0.06802, 1179
])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
I get the following result:
fun: -4327476.2887400016
message: 'Optimization failed. The problem appears to be unbounded.'
status: 3
I've tried changing the last line to:
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(-1000, 1000))
The numbers specified as bounds are random. The output is:
fun: -4327476.2887400296
message: 'Optimization terminated successfully.'
status: 0
which gives a slightly different result and the desired status.
My question is: am I misusing the library, and if so, how? Which answer is correct? I expected this code to work without specifying the bounds parameter, and I cannot use that parameter because these simple constraints differ for each variable.
I use Python 2.7 and SciPy 0.17.1. Many thanks in advance.
Update
constr_a should be a matrix according to the documentation (https://docs.scipy.org/doc/scipy/reference/optimize.linprog-simplex.html), and it actually is one in the code. To make sure the syntax is correct, we can cut the number of dimensions down to 2:
from scipy.optimize import linprog
import numpy as np
import pandas as pd

numdim = 2
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([19.53, 128.97])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print constr_a
print constr_b
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
and this will work.
The constr_a list is not properly formed. It is an array of arrays instead of an array of scalars. This might be leading to an improper lower bound, causing the optimization to fail.
Perhaps
constr_a.append(constr_default)
should be
constr_a.append(constr_default[i])
Inspect both bound arrays to make sure they have the proper form and values.
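As a side note on the per-variable bounds: linprog's bounds argument also accepts a sequence of (min, max) pairs, one per variable, so constraints like -1 <= x1 <= 1, -2 <= x2 <= 2 can be expressed directly instead of through A_ub rows. A minimal sketch using the reduced two-variable example from the update:

from scipy.optimize import linprog
import numpy as np

# Two-variable version of the problem, as in the update above.
arr2 = np.array([19.53, 128.97])
c = -arr2  # same objective coefficients as arr1 * arr2 with arr1 = -1

# One (lower, upper) pair per variable: -1 <= x1 <= 1, -2 <= x2 <= 2.
bounds = [(-(i + 1), i + 1) for i in range(len(arr2))]

print(linprog(c, bounds=bounds))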

fsolve - mismatch between input and output

I'm trying to solve an overdetermined system of equations with three unknowns. I'm able to get a solution with fsolve and lsqnonlin in MATLAB by building the system of equations in a for loop.
But in python using scipy, I'm getting the following error message:
fsolve: there is a mismatch between the input and output shape of the 'func' argument 'fnz'
The code is given below:
from xlrd import open_workbook
import numpy as np
from scipy import optimize

g = [0.5, 1, 1.5]
wb = open_workbook('EThetaValuesA.xlsx')
sheet = wb.sheet_by_index(0)
y = sheet.col_values(0, 1)
t1 = sheet.col_values(1, 1)
t2 = sheet.col_values(2, 1)
t3 = sheet.col_values(3, 1)

def fnz(g):
    i = 0
    sol = [0 for i in range(len(t1))]
    x1 = g[0]
    x2 = g[1]
    x3 = g[2]
    print len(t1)
    for i in range(len(t1)):
        # various sets of t1, t2 and t3 give the various equations
        print i
        sol[i] = x1 + t1[i]/(x2*t2[i] + x3*t3[i]) - y[i]
    return sol

Anz = optimize.fsolve(fnz, g)
print Anz
Could anyone please suggest where I'm going wrong? Thank you in advance.
The exception means that the result of the fnz() call does not have the same dimension as the input g, which is a list of 3 elements, or equivalently an array of shape (3,).
To illustrate the problem, if we define:
def fnz(g):
    return [2, 3, 5]

Anz = optimize.fsolve(fnz, g)
There will not be such an exception. But this will:
def fnz(g):
    return [2, 3, 4, 5]

Anz = optimize.fsolve(fnz, g)
In your code, the result of fnz() has the same length as t1, which I am sure has more than 3 elements, hence the mismatch.
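Since the system is overdetermined (one equation per data row but only three unknowns), the closer SciPy analogue of MATLAB's lsqnonlin is scipy.optimize.least_squares, which accepts a residual vector longer than the parameter vector. A minimal sketch with made-up t1, t2, t3 and y arrays standing in for the Excel columns (the data values here are placeholders, not from the question):

import numpy as np
from scipy.optimize import least_squares

# Placeholder data standing in for the columns read from EThetaValuesA.xlsx.
t1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
t2 = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
t3 = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([2.0, 2.5, 3.0, 3.5, 4.0])

def residuals(g):
    x1, x2, x3 = g
    # One residual per data row; the vector may be longer than the parameter vector.
    return x1 + t1 / (x2 * t2 + x3 * t3) - y

result = least_squares(residuals, x0=[0.5, 1.0, 1.5])
print(result.x)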
