Finding the maximum value of an unknown target function, given samples - Python

I have a function that takes 4 variables and returns a single float value in the range [0, 1].
I want to know which inputs maximize the function's output. However, the function runs slowly, so I have only taken 1000 random samples, i.e. 1000 (input, output) tuples.
Is there a good method to predict the inputs that maximize my function from these tuples? I don't mind running the function a few more times, but not many.
Thanks in advance.

No, there is no general method to do what you're asking.
Global optimization is a collection of techniques (and a whole field of study) used to minimize (or, equivalently, maximize) a function based on some of its general properties. Without more information about the underlying function, naive random sampling (as you're doing) is a 'reasonable' approach.
Your best bet is to find additional information about the character of your function mapping (is the output spiky or smoothly varying with the input? Are there lots of minima, or just a few?), or just keep sampling.
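In case it helps, here is a minimal sketch of that keep-sampling approach: draw more random inputs, evaluate, and keep the best tuple seen so far. expensive_function and the input bounds are placeholders for your real function and its domain.

# Minimal keep-sampling sketch; expensive_function and the bounds are placeholders.
import random

def expensive_function(a, b, c, d):
    # Stand-in for the slow black-box function returning a value in [0, 1].
    return (a * b - c * d) % 1.0

best_inputs, best_value = None, float("-inf")
for _ in range(1000):
    x = tuple(random.uniform(0.0, 1.0) for _ in range(4))  # adjust to your input domain
    y = expensive_function(*x)
    if y > best_value:
        best_inputs, best_value = x, y

print("best inputs so far:", best_inputs, "value:", best_value)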

Related

Optimization of a scalar that is not in a function? Marginal increase in Python

I am working on a programming task that includes a randomly given parameter that I want to optimize. The tricky part is that this parameter is not part of a function and cannot be expressed as such. The task:
I am given values from 4 functions, but I don't have f(x) = ...; I only have the x and y values of each function, which have some noise. I can't fit a curve to recover each function because the four functions are too different from one another to construct a single algorithm that detects the data model.
I am then given test points (not part of any function), which have to be assigned to one of the 4 functions depending on their distance to the function. The criteria depend on a random factor, i.e. if a test point is within a corridor of sqrt(2)*std of a function, it can be assigned to this function. In doing so, some points cannot be assigned to any of the functions.
I now want to find the ideal factor for the distance so that we can assign as many points as possible to one of the functions, with the constraint that none of the points is assigned to more than one function.
I am a beginner, so I tried to solve it with marginal increases in a while loop:
(m is a dictionary)
while m["multiple_matches"]==0:
factor += 0.001
m=map_with_factor(factor)
print(factor)
if m["multiple_matches"]==1:
factor -= 0.0001
m_opt=map_with_factor(factor)
I get decent results, but I feel like there has to be a better way to solve this!
Is there a way to tell Python to increase the factor marginally, or to turn this into an optimization problem?
As I said, I cannot turn the code in the map_with_factor() function into a mathematical function...
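For illustration only, here is one way the same search could be phrased as a bisection on the factor. It assumes map_with_factor() exists as above and that its "multiple_matches" entry stays 0 up to some threshold factor (i.e. feasibility is monotone in the factor); the search bounds are placeholders.

def largest_feasible_factor(lo, hi, tol=1e-4):
    # Largest factor in [lo, hi] for which no point matches two functions.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if map_with_factor(mid)["multiple_matches"] == 0:
            lo = mid   # still feasible: move the lower bound up
        else:
            hi = mid   # infeasible: move the upper bound down
    return lo

factor_opt = largest_feasible_factor(0.0, 10.0)   # placeholder search bounds
m_opt = map_with_factor(factor_opt)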

Evaluating a custom function when a solution is proposed by OR-Tools

I am pretty new to OR-Tools in Python. I have worked through several tutorial examples, but I am facing issues trying to model my problem.
Let's say we have a bin packing problem, in which I need to find the fewest bins that will hold all the items based on their weight. In this typical problem we would want to minimize the number of bins used. But let's say we have an additional objective: to maximize the "quality" of the bin. Here's the problem: to evaluate the quality of a bin, we need to call a non-linear function that takes the items in that bin and returns a quality. I guess I cannot use a multi-objective approach with CP-SAT, so we could model it by weighting both objectives.
The problem I am facing is thus the following:
I cannot set the 'quality' as a variable because it depends on the current solution (the items assigned to a bin).
How can I do that? By assigning a callback? Is it possible?
Depending on the "current" solution is not a problem. You could add a "quality" variable, which depends on the values of the variables representing the bins and their contents, and uses the solver's primitives to calculate the desired quantity.
This might not be possible for just any function, but the solver's primitives do allow some forms of non-linear calculations (just as an example, you can calculate abs(x) or x^2 (ref)).
So, for instance, you could have a quality variable which calculates the (number of bins used)^2.
Once you get to a form of quality calculation that works within the solver, you can go back to one of the approaches for handling more than a single objective, such as a weighted sum.
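As a minimal sketch of that idea (not a full bin-packing model), the following assumes hypothetical bin_used booleans, derives a quality variable from them with a solver primitive, and folds both goals into a single weighted objective; the weights and the num_bins_used >= 2 constraint are placeholders for the real packing constraints.

from ortools.sat.python import cp_model

NUM_BINS = 5
model = cp_model.CpModel()

# Hypothetical decision variables: whether each bin is used at all.
bin_used = [model.NewBoolVar("bin_used_%d" % b) for b in range(NUM_BINS)]
num_bins_used = model.NewIntVar(0, NUM_BINS, "num_bins_used")
model.Add(num_bins_used == sum(bin_used))
model.Add(num_bins_used >= 2)   # stands in for the real packing constraints

# "Quality" derived from the solution via a solver primitive:
# here, (number of bins used)^2, as in the example above.
quality = model.NewIntVar(0, NUM_BINS * NUM_BINS, "quality")
model.AddMultiplicationEquality(quality, [num_bins_used, num_bins_used])

# Single weighted objective combining both goals (weights are arbitrary here).
model.Minimize(10 * num_bins_used - quality)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("bins used:", solver.Value(num_bins_used), "quality:", solver.Value(quality))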

bayespy - does it work online?

How do I define the chain if it is intended to work online, i.e. if the length of the chain changes at each iteration in a cumulative manner?
For example, the BayesPy categorical Markov chain function asks for the number of states in the chain if it cannot infer it (and providing it means the length is fixed). Likewise, the evidence variables observed with observe() from the bayespy.inference.vmp.nodes.expfamily.ExponentialFamily module need to have the same length as the chain, but in my case the data associated with such variables is known only at runtime.

scipy.optimize.minimize only iterates some variables

I have written Python (2.7.3) code in which I aim to create a weighted sum of 16 data sets and compare the result to some expected value. My problem is to find the weighting coefficients that will produce the best fit to the model. To do this, I have been experimenting with scipy's optimize.minimize routines, but have had mixed results.
Each of my individual data sets is stored as a 15x15 ndarray, so their weighted sum is also a 15x15 array. I define my own 'model' of what the sum should look like (also a 15x15 array), and quantify the goodness of fit between my result and the model using a basic least squares calculation.
R = np.sum(np.abs(model / np.max(model) - myresult)**2)
'myresult' is produced as a function of some set of parameters 'wts'. I want to find the set of parameters 'wts' which will minimise R.
To do so, I have been trying this:
res = minimize(get_best_weightings, wts, bounds=bnds, method='SLSQP', options={'disp': True, 'eps': 100})
Where my objective function is:
import numpy as np

def get_best_weightings(wts):
    # Split the flat parameter vector into real and imaginary weights.
    wts_tr = wts[0:16]
    wts_ti = wts[16:32]
    # portlist, originalwtsr, originalwtsi, modelbeam and make_weighted_beam
    # are defined elsewhere in the script.
    for i, j in enumerate(portlist):
        originalwtsr[j] = wts_tr[i]
        originalwtsi[j] = wts_ti[i]
    realwts = originalwtsr
    imagwts = originalwtsi
    myresult = make_weighted_beam(realwts, imagwts, 1)
    # Least-squares difference between the normalised model and the result.
    R = np.sum((np.abs(modelbeam / np.max(modelbeam) - myresult))**2)
    return R
The input (wts) is an ndarray of shape (32,), and the output, R, is just a scalar, which should get smaller as my fit gets better. By my understanding, this is exactly the sort of problem ("Minimization of scalar function of one or more variables.") that scipy.optimize.minimize is designed to handle (http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html).
However, when I run the code, although the optimization routine seems to iterate over different values of all the elements of wts, only a few of them seem to 'stick', i.e. all but four of the values are returned with the same values as my initial guess. To illustrate, I plot the values of my initial guess for wts (in blue) and the optimized values (in red). You can see that for most elements, the two lines overlap.
Image: http://imgur.com/p1hQuz7
Changing just these few parameters is not enough to get a good answer, and I can't understand why the other parameters aren't also being optimised. I suspect that maybe I'm not understanding the nature of my minimization problem, so I'm hoping someone here can point out where I'm going wrong.
I have experimented with a variety of minimize's built-in methods (I am by no means committed to SLSQP, or certain that it's the most appropriate choice), and with a variety of 'step sizes' eps. The bounds I am using for my parameters are all (-4000, 4000). I only have scipy version 0.11, so I haven't tested a basinhopping routine to get the global minimum (this needs 0.12). I have looked at scipy.optimize.brute, but haven't tried implementing it yet - thought I'd check if anyone can steer me in a better direction first.
Any advice appreciated! Sorry for the wall of text and the possibly (probably?) idiotic question. I can post more of my code if necessary, but it's pretty long and unpolished.
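For reference, here is a minimal self-contained toy run of the same call pattern (32 parameters, per-parameter bounds, SLSQP) against a quadratic placeholder objective with a known optimum; on a well-scaled problem like this every parameter moves away from the initial guess, which can help separate optimizer behaviour from issues in the real objective function.

import numpy as np
from scipy.optimize import minimize

target = np.linspace(-1000, 1000, 32)   # known optimum, one value per parameter

def toy_objective(wts):
    # Scalar least-squares cost, analogous in shape to the real R.
    return np.sum((wts - target) ** 2)

wts0 = np.zeros(32)
bnds = [(-4000, 4000)] * 32
res = minimize(toy_objective, wts0, bounds=bnds, method='SLSQP',
               options={'disp': True})
print("final cost: %s" % res.fun)
print("first few recovered parameters: %s (target %s)" % (res.x[:4], target[:4]))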

Find a random method that best fits a list of values

I have a list of many float numbers, representing the duration of an operation performed several times.
For each type of operation, I have a different trend in numbers.
I'm aware of many random generators presented in some Python modules, like numpy.random.
For example, there are binomial, exponential, normal, Weibull, and so on...
I'd like to know if there's a way, given a list of values, to find the random generator that best fits each list of numbers that I have,
i.e. the generator (with its params) that best fits the trend of the numbers in the list.
That's because I'd like to automate the generation of time lengths for each operation, so that I can simulate it over n years, without having to find by hand which method best fits which list of numbers.
EDIT: In other words, trying to clarify the problem:
I have a list of numbers. I'm trying to find the probability distribution that best fits the array of numbers I already have. The only problem I see is that each probability distribution has input params that may influence the result, so I'll have to figure out how to set these params automatically, trying to best fit the list.
Any idea?
You might find it better to think about this in terms of probability distributions, rather than thinking about random number generators. You can then think in terms of testing goodness of fit for your different distributions.
As a starting point, you might try constructing probability plots for your samples. Probably the easiest in terms of the math behind it would be to consider a Q-Q plot. Using the random number generators, create a sample of the same size as your data. Sort both of these, and plot them against one another. If the distributions are the same, then you should get a straight line.
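As a sketch of that Q-Q procedure, with data as a placeholder for your list of numbers and an exponential distribution as the candidate:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.exponential(scale=2.0, size=500)   # stand-in for your numbers

# Sample of the same size from the candidate distribution.
candidate = np.random.exponential(scale=np.mean(data), size=len(data))

# Sort both samples and plot them against one another.
plt.scatter(np.sort(candidate), np.sort(data), s=10)
lims = [0, max(candidate.max(), data.max())]
plt.plot(lims, lims, 'r--', label='y = x (perfect agreement)')
plt.xlabel('sorted candidate sample')
plt.ylabel('sorted data')
plt.legend()
plt.show()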
Edit: To find appropriate parameters for a statistical model, maximum likelihood estimation is a standard approach. Depending on how many samples of numbers you have and the precision you require, you may well find that just playing with the parameters by hand will give you a "good enough" solution.
Why using random numbers for this is a bad idea has already been explained. It seems to me that what you really need is to fit the distributions you mentioned to your points (for example, with a least squares fit), then check which one fits the points best (for example, with a chi-squared test).
EDIT: Adding a reference to a numpy least squares fitting example
Given a parameterized univariate distribution (e.g. the exponential depends on lambda, or the gamma depends on theta and k), the way to find the parameter values that best fit a given sample of numbers is called the maximum likelihood procedure. It is not a least squares procedure, which would require binning and thus losing information! Some Wikipedia distribution articles give expressions for the maximum likelihood estimates of the parameters, but many do not, and even the ones that do are missing expressions for error bars and covariances. If you know calculus, you can derive these results yourself by expressing the log likelihood of your data set in terms of the parameters, setting its first derivative to zero to maximize it, and using the inverse of the curvature matrix at the maximum as the covariance matrix of your parameters.
Given two different fits to two different parameterized distributions, the way to compare them is called the likelihood ratio test. Basically, you just pick the one with the larger log likelihood.
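As a sketch of that comparison in Python: scipy.stats distributions fit their parameters by maximum likelihood by default, and the log-likelihoods of the fits can be compared directly; data is a placeholder for your list of numbers.

import numpy as np
from scipy import stats

data = np.random.gamma(shape=2.0, scale=1.5, size=500)   # stand-in for your numbers

candidates = {
    'expon': stats.expon,
    'gamma': stats.gamma,
    'norm': stats.norm,
    'weibull_min': stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(data)                       # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))   # log-likelihood at the fit
    print("%-12s params=%s log-likelihood=%.1f" % (name, np.round(params, 3), loglik))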
Gabriel, if you have access to Mathematica, parameter estimation is built in:
In[43]:= data = RandomReal[ExponentialDistribution[1], 10]
Out[43]= {1.55598, 0.375999, 0.0878202, 1.58705, 0.874423, 2.17905, \
0.247473, 0.599993, 0.404341, 0.31505}
In[44]:= EstimatedDistribution[data, ExponentialDistribution[la],
ParameterEstimator -> "MaximumLikelihood"]
Out[44]= ExponentialDistribution[1.21548]
In[45]:= EstimatedDistribution[data, ExponentialDistribution[la],
ParameterEstimator -> "MethodOfMoments"]
Out[45]= ExponentialDistribution[1.21548]
However, it is easy to work out what the maximum likelihood method requires the parameter to be.
In[48]:= Simplify[
D[LogLikelihood[ExponentialDistribution[la], {x}], la], x > 0]
Out[48]= 1/la - x
Setting the sum of these terms over the data to zero, i.e. sum_i (1/la - x_i) = 0, gives la = 1/Mean[data]. Similar equations can be worked out for other distribution families and coded in the language of your choice.
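For example, in Python the same exponential estimate is just the reciprocal of the sample mean (using the data from the Mathematica session above):

import numpy as np

data = np.array([1.55598, 0.375999, 0.0878202, 1.58705, 0.874423,
                 2.17905, 0.247473, 0.599993, 0.404341, 0.31505])

la_hat = 1.0 / np.mean(data)
print(la_hat)   # ~1.21548, matching the Mathematica output above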
