Multiple variables and arguments in SciPy's optimize.fmin - python

I wish to use SciPy's optimize.fmin function to find the minimum of a function which depends both on the variables I wish to minimize over and on parameters that do not change (are not optimized over).
I am able to do this when optimizing over a single variable here:
from scipy import optimize

c1 = 4
c2 = -1

def f(x, c1, c2):
    return x**2 + c1 + c2

guess_f = 1
minimum = optimize.fmin(f, guess_f, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
However, I cannot get this to work when I add another variable to minimize over:
def g(x, y, c1, c2):
    return x*y + c1 + c2

guess_g = [1, 1]
minimum2 = optimize.fmin(g, guess_g, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
I get the following error message:
TypeError: g() missing 1 required positional argument: 'c2'
I did find Multiple variables in SciPy's optimize.minimize, where the solution presented is to group the variables to be optimized over into a single array. I tried something like this below:
def g(params, c1, c2):
    x, y = params
    # print(params)
    return x*y + c1*x + c2

guess_g = [1, 1]
minimum2 = optimize.fmin(g, guess_g, args=(c1, c2), maxfun=4000, maxiter=4000, ftol=1e-2, xtol=1e-4)
I do not receive a TypeError, but I do get the "Warning: Maximum number of function evaluations has been exceeded." message, along with a RuntimeWarning: overflow encountered in double_scalars. (Additionally, I tried using the optimize.minimize command to do the same thing, but was unable to get it to work when adding the extra arguments; I do not post that code here, as the question is already getting long.)
So this does not seem to be the correct way to do this.
How do I go about optimizing with optimize.fmin function over multiple variables, while also giving my function additional arguments?
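For reference, a minimal sketch of the packing pattern with a function that actually has a finite minimum (the quadratic below is my own illustration, not from the question); the mechanics of grouping the optimization variables into one array and passing the fixed parameters through args are the same:

from scipy import optimize

c1, c2 = 4, -1

def h(params, c1, c2):
    # params holds every quantity being optimized over; c1, c2 stay fixed
    x, y = params
    return (x - c1)**2 + (y - c2)**2   # finite minimum at (c1, c2)

guess_h = [1, 1]
minimum_h = optimize.fmin(h, guess_h, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
# minimum_h should come out approximately [4, -1]

Note that the second attempt in the question already uses the right calling convention; the overflow warning appears because x*y + c1*x + c2 has no finite minimum, so the simplex diverges rather than converging.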

Related

Can I pass additional arguments into a scipy bvp function or boundary conditions? / SciPy questions from MATLAB

I'm working on converting code that solves a BVP (Boundary Value Problem) from MATLAB to Python (SciPy). However, I'm having a little bit of trouble. I want to pass a few arguments into the function and the boundary conditions; in MATLAB, it's something like:
solution = bvp4c(@(x,y)fun(x,y,args), @(ya,yb)boundarycond(ya,yb,arg1,args), solinit);
Where arg1 is a value and args is a structure or a class instance. So I've been trying to do this in SciPy, and the closest I've gotten is something like:
solBVP = solve_bvp(func(x,y,args), boundC(x,y,arg1,args), x, y)
But then it errors out saying that y is not defined (which it isn't, because it's the vector of first order DEs).
Has anyone tried to pass additional arguments into solve_bvp? If so, how did you manage to do it? I have a workaround right now essentially putting args and arg1 as global variables, but that seems incredibly sketchy if I want to make this code somewhat modular.
Thank you!
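For what it's worth, the usual way around this (a sketch assuming func and boundC take the extra parameters the way the attempt above suggests) is to wrap the calls in lambdas, so that solve_bvp only ever sees the two-argument forms it expects; as far as I can tell, solve_bvp itself has no args keyword:

from scipy.integrate import solve_bvp

# Close over the extra parameters instead of passing them positionally
solBVP = solve_bvp(lambda x, y: func(x, y, args),
                   lambda ya, yb: boundC(ya, yb, arg1, args),
                   x, y)

This avoids the global-variable workaround and keeps the module parameters explicit at the call site.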

Error while using the fit_desoto function, caused by optimize.root (SciPy)

While using the fit_desoto function to estimate parameters for further solar module calculations, I received the following error:
RuntimeError: Parameter estimation failed:
The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
The error comes from the optimize.root function of the SciPy library, which is used within the fit_desoto function.
Even if I use only 1s for the initial values of the fit_desoto function, I get this error.
The code for the function call in the main program:
self.mp_desoto_fit = pvlib.ivtools.sdm.fit_desoto(
    v_mp=module['Vmpo'], i_mp=module['Impo'],
    v_oc=module['Voco'], i_sc=module['Isco'],
    alpha_sc=module['Aisc'], beta_voc=module['Bvoco'],
    cells_in_series=module['Cells_in_Series'],
    EgRef=1.121, dEgdT=-0.0002677, temp_ref=25, irrad_ref=1000,
    root_kwargs={'options': {'col_deriv': 0, 'xtol': 1.49012e-05, 'maxfev': 0,
                             'band': None, 'eps': None, 'factor': 100, 'diag': None}})
The root_kwargs in the function call of the fit_desoto function influence the solver of the root function, but different properties for the solver don't fix the problem.
Do you have any ideas on this issue?
What I already tried:
Checked all kinds of initial values for the fit_desoto function.
Chose different options for the root function solver.
Debugged the code and checked the variables inside the root function; no NaN values or similar occur while the function is calculating.
Thanks in advance
The problem within the function isn't solved, but it can be bypassed by using the fit_cec_sam function, which is quite similar to the fit_desoto function.
The problem within the fit_desoto function is caused by the initial guess.
Check out the answers on GitHub for details:
https://github.com/pvlib/pvlib-python/issues/1226
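A rough sketch of that workaround, reusing the module dictionary from the question. The exact parameter list of pvlib.ivtools.sdm.fit_cec_sam may differ between pvlib versions and it requires the NREL-PySAM package, so treat this as an outline and check the pvlib docs rather than copying it verbatim:

# Sketch only -- celltype and gamma_pmp are assumptions, not values from the original call
cec_fit = pvlib.ivtools.sdm.fit_cec_sam(
    celltype='monoSi',                          # assumed cell technology
    v_mp=module['Vmpo'], i_mp=module['Impo'],
    v_oc=module['Voco'], i_sc=module['Isco'],
    alpha_sc=module['Aisc'], beta_voc=module['Bvoco'],
    gamma_pmp=gamma_pmp,                        # power temperature coefficient, not present in the fit_desoto call
    cells_in_series=module['Cells_in_Series'],
    temp_ref=25)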

How to use scipy.optimize.fmin with a vector instead of a scalar

When using Scipy's fmin function, I keep encountering the error message: ValueError: setting an array element with a sequence
I have seen that this question has been asked already some times, and I have read interesting posts such as:
ValueError: setting an array element with a sequence
Scipy optimize fmin ValueError: setting an array element with a sequence
Scipy minimize fmin - problems with syntax
...and have tried implementing the suggested solutions, such as adding '*args' to the cost function, appending the variables in the cost function to a list, and vectorising the variables. But nothing has worked for me so far.
I am quite new to programming in Python, so it is possible that I have read the solution and not known how to apply it.
A simplified version of the code, which I used to try to find the problem, is as follows:
import numpy as np
import scipy.optimize
from scipy.optimize import fmin

fcm28 = 40
M_test = np.array([32.37, 62.54, 208, 410, 802])
R_test = np.array([11.95, 22.11, 33.81, 39.18, 50.61])
startParams = np.array([fcm28, 1, 1])

def func(xarray):
    x = xarray[0]
    y = xarray[1]
    z = xarray[2]
    expo3 = x*np.exp(-(y/M_test)**z)
    cost = expo3 - R_test
    return cost

### If I write the following lines of code:
# xarray = (100, 290, 0.3)
# print(func(xarray))
# >> [ 2.557 -1.603 -0.684  1.423 -2.755]   # I would obtain this output

func_optimised = fmin(func, x0=[fcm28, 1, 1], xtol=0.000001)
Objective: I am trying to fit an exponential function 'expo3' to 5 adjustment points, defined by the vector 'M_test' on the horizontal axis and 'R_test' on the vertical axis.
What I am trying to minimise is the difference between the function 'expo3' and the adjustment points.
So, the exponential curve is meant to pass as close as possible to the adjustment points.
I obtain the following error message:
File "Example2.py", line 20, in <module>
func_optimised=fmin(func,x0=[fcm28,1,1],xtol=0.000001)
File "/home/.../python3.6/site-packages/scipy/optimize/optimize.py", line 443, in fmin
res=_minimize_neldermead(func,x0,args,callback=callback,**opts)
File "/home/.../python3.6/site-packages/scipy/optimize/optimize.py" line 586, in _minimize_neldermead
fsim[k] = func(sim[k])
ValueError: setting an array element with a sequence.
Can fmin be used to accomplish this task? Are there any viable alternatives?
Any help on how to solve this would be really appreciated.
As noted in the comments, your function must return a single value. Assuming that you want to perform a classic least-squares fit, you could modify func to return just that:
def func(...):
    # ... identical lines skipped
    cost = sum((expo3 - R_test)**2)
    return cost
With that change, func_optimised becomes:
array([1.10633369e+02, 3.85674857e+02, 2.97121854e-01])
# or approximately (110.6, 385.6, 0.3)
Just as a pointer: you could alternatively use scipy.optimize.curve_fit for basically doing the same thing, but with a nicer API that allows you to directly provide the function skeleton + the sample points to fit.
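A minimal sketch of that curve_fit alternative, reusing M_test, R_test, and fcm28 from the question (the model function name below is my own):

import numpy as np
from scipy.optimize import curve_fit

def expo3_model(M, x, y, z):
    # model evaluated at the sample points; curve_fit builds the least-squares cost itself
    return x*np.exp(-(y/M)**z)

popt, pcov = curve_fit(expo3_model, M_test, R_test, p0=[fcm28, 1, 1])
print(popt)  # fitted (x, y, z), expected to be close to the fmin result above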

How to fix some runtime warning in python?

I want to perform a fit using the following Fit_Curve function, but I get the runtime warning shown below, which may arise when the value of x is too small. However, I don't want to change the value of x in my code. I read an answer related to this question saying that I need to perform some simplification of the fitting function. Is there any other, or easier, way to get rid of this? Is it just a warning or is it an error? Does this warning affect my final result, i.e. the fit?
def Fit_Curve(x, a, beta, b, gamma):
    return (a*(x**beta)*np.exp(-b*(x**gamma)))

matrix_check.py:105: RuntimeWarning: divide by zero encountered in power
  return (a*(x**beta)*np.exp(-b*(x**gamma)))
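For context, a small sketch (my own example values, not from the question) of what triggers this particular warning and one way to silence it locally: raising a zero element of x to a negative power produces inf and the "divide by zero encountered in power" warning, and any resulting inf or NaN will propagate into the fit:

import numpy as np

x = np.array([0.0, 1.0, 2.0])
a, beta, b, gamma = 2.0, -0.5, 0.1, 1.5      # assumed values; beta < 0 with x == 0 is the culprit here

with np.errstate(divide='ignore'):           # suppress the warning for this block only
    y = a*(x**beta)*np.exp(-b*(x**gamma))

print(y)  # first element is inf, which would distort any subsequent fitting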

How should I use @pm.stochastic in PyMC?

Fairly simple question: how should I use @pm.stochastic? I have read some blog posts claiming that @pm.stochastic expects a negative log value:
@pm.stochastic(observed=True)
def loglike(value=data):
    # some calculations that generate a numeric result
    return -np.log(result)
I tried this recently but got really bad results. Since I also noticed that some people used np.log instead of -np.log, I gave that a try and it worked much better. What is @pm.stochastic really expecting? I'm guessing there was some confusion about the required sign due to a very popular example that uses something like np.log(1/(1+t_1-t_0)), which was written as -np.log(1+t_1-t_0).
Another question: what is this decorator doing with the value argument? As I understand it, we start with some proposed value for the priors that need to enter into the likelihood, and the idea of @pm.stochastic is basically to produce some number to compare this likelihood to the number generated in the previous iteration of the sampling process. The likelihood should receive the value argument and some values for the priors, but I'm not sure that this is all value is doing, because it's the only required argument and yet I can write:
@pm.stochastic(observed=True)
def loglike(value=[1]):
    data = [3, 5, 1]  # some data
    # some calculations that generate a numeric result
    return np.log(result)
And as far as I can tell, that produces the same result as before. Maybe it works this way because I added observed=True to the decorator. If I had tried this on a stochastic variable with observed=False (the default), value would be changed in each iteration, trying to obtain a better likelihood.
@pm.stochastic is a decorator, so it is expecting a function. The simplest way to use it is to give it a function that includes value as one of its arguments, and returns a log-likelihood.
You should use the @pm.stochastic decorator to define a custom prior for a parameter in your model. You should use the @pm.observed decorator to define a custom likelihood for data. Both of these decorators will create a pm.Stochastic object, which takes its name from the function it decorates, and has all the familiar methods and attributes (here is a nice article on Python decorators).
Examples:
A parameter a that has a triangular distribution a priori:
@pm.stochastic
def a(value=.5):
    if 0 <= value < 1:
        return np.log(1. - value)
    else:
        return -np.inf
Here value=.5 is used as the initial value of the parameter, and changing it to value=1 raises an exception, because it is outside of the support of the distribution.
A likelihood b that has a normal distribution centered at a, with a fixed precision:
@pm.observed
def b(value=[.2, .3], mu=a):
    return pm.normal_like(value, mu, 100.)
Here value=[.2,.3] is used to represent the observed data.
I've put this together in a notebook that shows it all in action here.
Yes, the confusion is easy to make, since the function decorated with @pm.stochastic returns a log-likelihood, which is essentially the opposite of an error. So you take the negative log of your custom error function and return THAT as your log-likelihood.
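In other words (a sketch with hypothetical names, assuming a positive error measure that you want to be small):

@pm.stochastic(observed=True)
def loglike(value=data):
    err = my_error_function(value)   # hypothetical positive error measure
    return -np.log(err)              # larger error -> lower log-likelihood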
