Optimization of two vectors in Python

I need to optimize two vectors x and y. The objective function is a function of these two vectors, f(x, y), and x and y are also related by a - x/y = 0. Is there a well-known method to solve this in Python?

Well, your question is quite general; it would be great if you could provide more details. But I grabbed a code snippet from here, which you can edit. scipy has an optimize module with several functions for minimizing objective functions.
import numpy as np
from scipy.optimize import minimize

def f(v):
    # minimize passes a single array of parameters, so unpack x and y from it
    x, y = v
    return 10 - x / y  # example objective from the snippet; replace with your own f(x, y)

# initial values
x0 = 1.3
y0 = 0.5

res = minimize(f, [x0, y0])  # pass method='...' to choose a specific algorithm
print(res.x)
If you provide more info, such as which algorithm you want to use, I can give you more precise code.
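For the constraint a - x/y = 0 mentioned in the question, scipy.optimize.minimize can also handle it directly as an equality constraint with the SLSQP method. Here is a minimal sketch, with a hypothetical value for a and a placeholder objective standing in for your real f(x, y):
from scipy.optimize import minimize

a = 2.0  # hypothetical value of the constant in the constraint a - x/y = 0

def f(v):
    x, y = v
    return (x - 3)**2 + (y - 1)**2  # placeholder objective, not your real f(x, y)

# equality constraint a - x/y = 0; the SLSQP method supports 'eq' constraints
cons = [{'type': 'eq', 'fun': lambda v: a - v[0] / v[1]}]

res = minimize(f, x0=[1.3, 0.5], method='SLSQP', constraints=cons)
print(res.x)                    # optimal (x, y)
print(a - res.x[0] / res.x[1])  # constraint residual, should be ~0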

Related

Is there a method for finding the root of a function represented as an ndarray?

I have a function represented as an ndarray, i.e. y = f(x), where y and x are two ndarrays.
I am searching for a method that finds the roots of f(x).
Reading the scipy documentation, I was only able to find methods that work on user-defined functions, like scipy.optimize.root_scalar. I thought about using scipy.interpolate.interp1d to get an interpolated version of my function to use in scipy.optimize.root_scalar, but I'm not sure it can work and it seems pretty complicated.
Is there some other function that I can use instead?
You have to interpolate the function defined by your NumPy arrays, because all the solvers require a function that can return a value for any input x, not just the values in your array. But this is not complicated; here is an example:
import numpy as np
from scipy import optimize
from scipy import interpolate

# our xs and ys
xs = np.array([0, 2, 5])
ys = np.array([-3, -1, 2])

# interpolated function
f = interpolate.interp1d(xs, ys)

sol = optimize.root_scalar(f, bracket=[xs[0], xs[-1]])
print(f'root is {sol.root}')

# check
f0 = f(sol.root)
print(f'value of function at the root: f({sol.root})={f0}')
output:
root is 3.0
value of function at the root: f(3.0)=0.0
You may also want to interpolate with higher-degree polynomials for more accurate root-finding, e.g. How to perform cubic spline interpolation in python?
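As a sketch of that idea, here is the same approach with a cubic interpolant on made-up data (interp1d needs at least four points for kind='cubic'):
import numpy as np
from scipy import interpolate, optimize

# hypothetical samples of an unknown smooth function with a root near x = 2
xs = np.linspace(0, 5, 8)
ys = xs**2 - 4

f_cubic = interpolate.interp1d(xs, ys, kind='cubic')
sol = optimize.root_scalar(f_cubic, bracket=[xs[0], xs[-1]])
print(f'cubic-interpolated root: {sol.root}')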

Is there an alternative to scipy.optimize.lsq_linear for solving a non-square matrix equation by minimizing the L1 norm instead?

I wrote a Python script that imports some data, which I then manipulate to get a non-square matrix A. I then used the following code to solve the matrix equation.
from scipy.optimize import lsq_linear
X = lsq_linear(A_normalized, Y, bounds=(0, np.inf), method='bvls')
As you can see I used this particular method because I require all the X coefficients to be positive. However, I realized that lsq_linear from scipy.optimize minimizes the L2 norm of AX - Y to solve the equation. I was wondering if anyone knows of an alternative to lsq_linear that solves the equation by minimizing the L1 norm instead. I have looked in the scipy documentation, but so far I haven't had any luck finding such an alternative myself.
(Note that I actually know what Y is and what I am trying to solve for is X).
Edit: After trying various suggestions from the comments section, and after much frustration, I finally managed to sort of make it work using cvxpy. However, there is a problem with it. First of all, the elements of X are supposed to all be positive, but that is not the case. Moreover, when I multiply the matrix A_normalized with X, the result is not equal to Y. My code is below. Any suggestions on what I can do to fix it would be highly appreciated. (By the way, my original use of lsq_linear in the code above gave me an X that satisfied A_normalized*X = Y.)
import cvxpy as cp
from cvxpy.atoms import norm
import numpy as np

Y = specificity
x = cp.Variable(22)

objective = cp.Minimize(norm(A_normalized @ x - Y, 1))
constraints = [0 <= x]
prob = cp.Problem(objective, constraints)
result = prob.solve()

print("Optimal value", result)
print("Optimal var")
X = x.value  # a NumPy ndarray
print(X)
print(A_normalized @ X == Y)  # check whether the fit reproduces Y
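One common way to minimize the L1 norm under a nonnegativity constraint without cvxpy is to recast the problem as a linear program and hand it to scipy.optimize.linprog. Below is a minimal sketch with made-up data; A and Y here merely stand in for A_normalized and specificity:
import numpy as np
from scipy.optimize import linprog

# hypothetical stand-ins for A_normalized and Y
rng = np.random.default_rng(0)
A = rng.random((10, 4))
Y = rng.random(10)
m, n = A.shape

# minimize ||A x - Y||_1 with x >= 0, rewritten as a linear program:
#   minimize sum(t)  subject to  A x - Y <= t,  -(A x - Y) <= t,  x >= 0, t >= 0
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([Y, -Y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # linprog's default bounds are (0, None) for every variable
x_l1 = res.x[:n]
print(x_l1, np.abs(A @ x_l1 - Y).sum())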

Solving an ODE with a random variable with a custom pdf

I have the following ODE:
where p is a probability, and y is a random variable with pdf:
epsilon is a small cut-off value (typically 0.0001).
I am looking to numerically solve this system in Python, for t = 0 to about 500.
Is there a way I can implement this using numpy/scipy?
There's an annoying lack of quality Python packages for SDE integration. You have kind of an unusual setup, in that your random variable has an explicit dependence on your integrand (at least it appears that y depends on p). Because of this, it'll probably be hard to find a preexisting implementation that meets your needs.
Fortunately, the simplest method for SDE integration, Euler-Maruyama, is very easy to implement, as in the eulmar function below:
from matplotlib import pyplot as plt
import numpy as np
def eulmar(func, randfunc, x0, tinit, tfinal, dt):
    times = np.arange(tinit, tfinal + dt, dt)
    x = np.zeros(times.size, dtype=float)
    x[0] = x0
    # step forward: deterministic update from func plus a random kick from randfunc
    for i, t in enumerate(times[1:]):
        x[i+1] = x[i] + func(x[i], t) + randfunc(x[i], t)
    return times, x
You could then use eulmar to integrate your SDE like so:
def func(x, t):
    return 1 - 2*x

def randfunc(x, t):
    return np.random.rand()

times, x = eulmar(func, randfunc, 0, 0, 500, 5)
plt.plot(times, x)
You'll have to supply your own randfunc however. Like above, it should be a function that takes x and t as arguments and returns a single sample from your random variable y. If you're having trouble coming up with a way to generate samples of y, since you know the PDF you can always use rejection sampling (though it does tend to be fairly inefficient).
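For concreteness, a rejection sampler can be as small as the sketch below. The pdf used here is only a hypothetical example built around the epsilon cut-off mentioned in the question, not a stand-in for your actual density:
import numpy as np

def sample_y(pdf, y_min, y_max, pdf_max, rng):
    # propose uniformly on [y_min, y_max], accept with probability pdf(y)/pdf_max
    while True:
        y = rng.uniform(y_min, y_max)
        if rng.uniform(0, pdf_max) <= pdf(y):
            return y

# hypothetical pdf proportional to 1/y on [epsilon, 1]
epsilon = 1e-4
pdf = lambda y: 1.0 / (y * np.log(1.0 / epsilon))

rng = np.random.default_rng()
print(sample_y(pdf, epsilon, 1.0, pdf_max=pdf(epsilon), rng=rng))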
Notes
This is not a particularly efficient implementation of Euler-Maruyama. For example, the random samples are usually generated all at once (e.g. np.random.rand(500)). However, you can't pre-generate your random samples here, since y depends on p.

Estimate Euclidean transformation with python

I want to do something similar to what in image analysis would be a standard 'image registration' using features.
I want to find the best transformation that maps a set of 2D coordinates A onto another one, B.
But I want to add the extra constraint that the transformation is a 'rigid/Euclidean transformation', meaning that there is no scaling, only translation and rotation.
Normally, allowing scaling, I would do:
import numpy as np
from skimage import io, transform

destination = np.array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])
source = np.array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])

T = transform.estimate_transform('similarity', source, destination)
I believe estimate_transform under the hood just solves a least squares problem.
But I want to add the constraint of no scaling.
Is there any function in skimage or other packages that solves this?
Probably I need to write my own optimization problem with scipy, CVXOPT or cvxpy.
Any help phrasing/implementing this optimization problem?
EDIT:
My implementation, thanks to Stefan van der Walt's answer:
import numpy as np
from scipy.optimize import minimize

def obj_fun(pars, x, src):
    # sum of squared distances between the target points x and the transformed source points
    theta, tx, ty = pars
    H = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0, 0, 1]])
    src1 = np.c_[src, np.ones(src.shape[0])]
    return np.sum((x - src1.dot(H.T)[:, :2])**2)

def apply_transform(pars, src):
    theta, tx, ty = pars
    H = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0, 0, 1]])
    src1 = np.c_[src, np.ones(src.shape[0])]
    return src1.dot(H.T)[:, :2]

# dst and src are the destination/source coordinate arrays from above
res = minimize(obj_fun, [0, 0, 0], args=(dst, src), method='Nelder-Mead')
With that extra constraint you are no longer solving a linear least squares problem, so you'll have to use one of SciPy's minimization functions. The inner part of your minimization would set up a matrix H:
H = np.array([[np.cos(theta), -np.sin(theta), tx],
              [np.sin(theta),  np.cos(theta), ty],
              [0, 0, 1]])
Then, you would compute the distance
|x_target - H.dot(x_source)|
for all data-points and sum the errors. Now, you have a cost function that you can send to the minimization function. You probably will also want to make use of RANSAC, which is available as skimage.measure.ransac, to reject outliers.
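If your scikit-image version ships EuclideanTransform, the RANSAC step can look roughly like the sketch below (residual_threshold is a made-up value you would tune for your data):
import numpy as np
from skimage.measure import ransac
from skimage.transform import EuclideanTransform

src = np.array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])
dst = np.array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])

# robustly fit a rotation + translation, rejecting outlying point pairs
model, inliers = ransac((src, dst), EuclideanTransform,
                        min_samples=2, residual_threshold=0.5)
print(model.rotation, model.translation, inliers)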
skimage now provides native support in the transform module.
http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.estimate_transform
I find it somewhat easier than OpenCV. There is an extensive set of functions covering all use cases.
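With a recent scikit-image, the rigid case is available directly through the 'euclidean' transform type, so the whole estimation becomes a one-liner. A sketch using the points from the question:
import numpy as np
from skimage import transform

destination = np.array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])
source = np.array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])

# 'euclidean' restricts the fit to rotation + translation (no scaling)
T = transform.estimate_transform('euclidean', source, destination)
print(T.rotation, T.translation)
print(T(source))  # the source points mapped by the estimated transform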

Orthogonal regression fitting in scipy least squares method

The leastsq method in the scipy library fits a curve to some data. The method assumes that in this data the Y values depend on some X argument, and it minimizes the distance between the curve and each data point along the Y axis only (dy).
But what if I need to minimize the distance along both axes (dy and dx)?
Is there some way to implement this calculation?
Here is a code sample using the one-axis calculation:
import numpy as np
from scipy.optimize import leastsq

xData = [some data...]
yData = [some data...]

def mFunc(p, x, y):
    return y - (p[0]*x**p[1])  # takes into account only the y axis

plsq, pcov = leastsq(mFunc, [1, 1], args=(xData, yData))
print(plsq)
I recently tried the scipy.odr library, and it returns proper results only for linear functions. For other functions, like y = a*x^b, it returns wrong results. This is how I use it:
from scipy.odr import Model, Data, ODR

def f(p, x):
    return p[0]*x**p[1]

myModel = Model(f)
myData = Data(xData, yData)
myOdr = ODR(myData, myModel, beta0=[1, 1])
myOdr.set_job(fit_type=0)  # if fit_type=2 is set, it returns the same as leastsq
out = myOdr.run()
out.pprint()
This returns wrong results, not the desired ones, and for some input data they are not even close to the real values.
Maybe there is some special way of using it; what am I doing wrong?
I've found the solution. Scipy's Odrpack works normally, but it needs a good initial guess for correct results. So I divided the process into two steps.
First step: find the initial guess using the ordinary least squares method.
Second step: substitute this initial guess into ODR as the beta0 parameter.
It works very well with an acceptable speed.
Thank you guys, your advice directed me to the right solution.
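Here is a sketch of that two-step recipe on made-up power-law data, assuming curve_fit for the initial guess (any ordinary least squares routine would do):
import numpy as np
from scipy.optimize import curve_fit
from scipy.odr import Model, Data, ODR

# hypothetical noisy data following y = a * x**b
rng = np.random.default_rng(0)
xData = np.linspace(1, 10, 50)
yData = 2.5 * xData**1.3 + rng.normal(scale=0.5, size=xData.size)

# step 1: ordinary least squares for the initial guess
p0, _ = curve_fit(lambda x, a, b: a * x**b, xData, yData, p0=[1, 1])

# step 2: orthogonal distance regression seeded with that guess
def f(p, x):
    return p[0] * x**p[1]

myOdr = ODR(Data(xData, yData), Model(f), beta0=p0)
myOdr.set_job(fit_type=0)  # full ODR
out = myOdr.run()
print(out.beta)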
scipy.odr implements the Orthogonal Distance Regression. See the instructions for basic use in the docstring and documentation.
If/when you are able to invert the function described by p, you may just include x - pinverted(y) in mFunc, combined I guess as sqrt(a^2 + b^2), so (pseudocode):
return sqrt( (y - p[0]*x**p[1])**2 + (x - pinverted(y))**2 )
For example, for
y = k*x + m, p = [m, k],
pinv = [-m/k, 1/k]:
return sqrt( (y - (p[0] + x*p[1]))**2 + (x - (pinv[0] + y*pinv[1]))**2 )
But what you ask for is in some cases problematic. For example, if a polynomial (or your x^j) curve has a minimum ym and you have a point (x, y) with y lower than ym, what kind of value do you want to return? There's not always a solution.
You can use the ONLS package in R.
