How to tune the parameters of the following system of ODEs? - python
I have a system of ODEs of the form

dB_1/dt = 0.775416 * B_1(t) + rho_1
dB_2/dt = 0.308968 * B_2(t) + rho_2

(the coefficient matrix is the diagonal matrix used in the code below).
In essence, I have the solutions B_1(t) and B_2(t) at t = 5 and I am interested in finding the unknown parameters rho_1 and rho_2. The approach I took was: 1) define the function corresponding to the system above; 2) integrate it with solve_ivp and subtract the result from the true values of B_1(t) and B_2(t); 3) use fsolve to find values of rho_1 and rho_2 such that the difference between the true values of B_1(t) and B_2(t) and the values obtained with the tuned rho_1 and rho_2 is the zero vector. The code I have implemented for this purpose is the following:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

t_eval = np.arange(0, 5)

def fun(t, s, rho_1, rho_2):
    return np.dot(np.array([0.775416, 0, 0, 0.308968]).reshape(2, 2), s) + np.array([rho_1, rho_2]).reshape(2, 1)

def fun2(t, rho_1, rho_2):
    res = solve_ivp(fun, [0, 5], y0=[0, 0], t_eval=t_eval, args=(rho_1, rho_2), vectorized=True)
    sol = res.y[:, 4] - np.array([0.01306365, 0.00589119])
    return sol

root = fsolve(fun2, [0, 0])
However, I am not sure whether fsolve is inappropriate for this purpose or whether there is something wrong with my code, as I get the following error:
fun2() missing 2 required positional arguments: 'rho_1' and 'rho_2'
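A minimal sketch (not from the original post) of one way to restructure this so fsolve can call it: fsolve passes a single array of unknowns, so the residual function has to accept one vector and unpack rho_1 and rho_2 itself. Note also that with t_eval = np.arange(0, 5) the last stored point corresponds to t = 4, not t = 5; taking the value at the end of the integration interval avoids that. The sketch assumes SciPy >= 1.4 for the args keyword of solve_ivp.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

A = np.array([[0.775416, 0.0], [0.0, 0.308968]])
target = np.array([0.01306365, 0.00589119])   # observed B_1(5), B_2(5) from the question

def rhs(t, s, rho_1, rho_2):
    # right-hand side of the ODE system: dB/dt = A @ B + [rho_1, rho_2]
    return A @ s + np.array([rho_1, rho_2])

def residual(rho):
    # fsolve hands over one parameter vector; unpack it here
    rho_1, rho_2 = rho
    res = solve_ivp(rhs, [0, 5], y0=[0, 0], args=(rho_1, rho_2))
    return res.y[:, -1] - target              # solution at t = 5 minus the observed values

root = fsolve(residual, x0=[0.0, 0.0])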
Related
Retrieve scipy minimize function lowest error
Is there a way to directly retrieve the minimized error after scipy.minimize has converged, or must that be coded into the cost function? It seems I can only retrieve the converged coefficients.

def errorFunction(params, series, loss_function, slen=12):
    alpha, beta, gamma = params
    breakUps = int(len(series) / slen)
    end = breakUps * slen
    test = series[end:]
    errors = []
    for i in range(2, breakUps + 1):
        model = HoltWinters(series=series[:i * 12], slen=slen,
                            alpha=alpha, beta=beta, gamma=gamma, n_preds=len(test))
        model.triple_exponential_smoothing()
        predictions = model.result[-len(test):]
        actual = test
        error = loss_function(predictions, actual)
        errors.append(error)
    return np.mean(np.array(errors))

opt = scipy.optimize.minimize(errorFunction, x0=x,
                              args=(train, mean_squared_log_error),
                              method="L-BFGS-B",
                              bounds=((0, 1), (0, 1), (0, 1)))

# gets the converged values
optimal_values = opt.x

# I would like to know what the error from errorFunction is when using the opt.x values,
# without having to manually run the script again.
# Is the minimum error stored somewhere in the returned object opt?
From what I understand from the documentation of the function scipy.optimize.minimize, the result is returned as an OptimizeResult object. From the documentation of this class (here), it has an attribute fun that is the "value of the objective function". So opt.fun should give you the result you are looking for. (There are more values you can retrieve, like the Jacobian opt.jac, the Hessian opt.hess, etc., as described in the documentation.)
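For instance, a toy sketch (my own minimal example with a stand-in quadratic objective, not the Holt-Winters setup from the question):

import numpy as np
from scipy.optimize import minimize

# stand-in for errorFunction: a simple quadratic with a known minimum
objective = lambda p: np.sum((p - np.array([0.3, 0.6, 0.9])) ** 2)

opt = minimize(objective, x0=np.zeros(3), method="L-BFGS-B",
               bounds=((0, 1), (0, 1), (0, 1)))

print(opt.x)    # converged parameters
print(opt.fun)  # objective value at opt.x -- the minimized error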
Least squares fit in python for 3d surface
I would like to fit my surface equation to some data. I already tried scipy.optimize.leastsq, but since I cannot specify bounds it gives me unusable results. I also tried scipy.optimize.least_squares, but it gives me an error:

ValueError: too many values to unpack

My equation is:

f(x, y, z) = (x - A + y - B)/2 + sqrt(((x - A - y + B)/2)^2 + C*z^2)

Parameters A, B, C should be found so that the equation above is as close as possible to zero when the following points are used for x, y, z:

[[-0.071, -0.85, 0.401],
 [-0.138, -1.111, 0.494],
 [-0.317, -0.317, -0.317],
 [-0.351, -2.048, 0.848]]

The bounds are A > 0, B > 0, C > 1. How should I obtain such a fit, and what is the best tool in Python to do that? I searched for examples of fitting 3D surfaces, but most examples of function fitting are about lines or flat surfaces.
I've edited this answer to provide a more general example of how this problem can be solved with scipy's general optimize.minimize method as well as scipy's optimize.least_squares method. First let's set up the problem:

import numpy as np
import scipy.optimize

# ===============================================
# SETUP: define common components of the problem

def our_function(coeff, data):
    """
    The function we care to optimize.

    Args:
        coeff (np.ndarray): are the parameters that we care to optimize.
        data (np.ndarray): the input data
    """
    A, B, C = coeff
    x, y, z = data.T
    return (x - A + y - B) / 2 + np.sqrt(((x - A - y + B) / 2) ** 2 + C * z ** 2)

# Define some training data
data = np.array([
    [-0.071, -0.85, 0.401],
    [-0.138, -1.111, 0.494],
    [-0.317, -0.317, -0.317],
    [-0.351, -2.048, 0.848]
])

# Define training target
# This is what we want the target function to be equal to
target = 0

# Make an initial guess as to the parameters
# either a constant or random guess is typically fine
num_coeff = 3
coeff_0 = np.ones(num_coeff)
# coeff_0 = np.random.rand(num_coeff)

This isn't strictly least squares, but how about something like this? This solution is like throwing a sledgehammer at the problem. There probably is a way to use least squares to get a solution more efficiently using an SVD solver, but if you're just looking for an answer scipy.optimize.minimize will find you one.

# ===============================================
# FORMULATION #1: a general minimization problem
# Here the bounds and error are all specified within the general objective function
def general_objective(coeff, data, target):
    """
    General function that simply returns a value to be minimized.
    The coeff will be modified to minimize whatever the output of this function
    may be.
    """
    # Constraints to keep coeff above 0
    if np.any(coeff < 0):
        # If any constraint is violated return infinity
        return np.inf
    # The function we care about
    prediction = our_function(coeff, data)
    # (optional) L2 regularization to keep coeff small
    # (optional) reg_amount = 0.0
    # (optional) reg = reg_amount * np.sqrt((coeff ** 2).sum())
    losses = (prediction - target) ** 2
    # (optional) losses += reg
    # Return the average squared error
    loss = losses.sum()
    return loss

general_result = scipy.optimize.minimize(general_objective, coeff_0,
                                         method='Nelder-Mead',
                                         args=(data, target))
# Test what the squared error of the returned result is
coeff = general_result.x
general_output = our_function(coeff, data)
print('====================')
print('general_result =\n%s' % (general_result,))
print('---------------------')
print('general_output = %r' % (general_output,))
print('====================')

The output looks like this:

====================
general_result =
 final_simplex: (array([[  2.45700466e-01,   7.93719271e-09,   1.71257109e+00],
       [  2.45692680e-01,   3.31991619e-08,   1.71255150e+00],
       [  2.45726858e-01,   6.52636219e-08,   1.71263360e+00],
       [  2.45713989e-01,   8.06971686e-08,   1.71260234e+00]]), array([ 0.00012404,  0.00012404,  0.00012404,  0.00012404]))
           fun: 0.00012404137498459109
       message: 'Optimization terminated successfully.'
          nfev: 431
           nit: 240
        status: 0
       success: True
             x: array([  2.45700466e-01,   7.93719271e-09,   1.71257109e+00])
---------------------
general_output = array([ 0.00527974, -0.00561568, -0.00719941,  0.00357748])
====================

I found in the documentation that all you need to do to adapt this to actual least squares is to specify the function that computes the residuals.
# ===============================================
# FORMULATION #2: a special least squares problem
# Here all that is needed is a function that computes the vector of residuals
# the optimization function takes care of the rest
def least_squares_residuals(coeff, data, target):
    """
    Function that returns the vector of residuals between the predicted values
    and the target value. Here we want each predicted value to be close to zero.
    """
    A, B, C = coeff
    x, y, z = data.T
    prediction = our_function(coeff, data)
    vector_of_residuals = (prediction - target)
    return vector_of_residuals

# Here the bounds are specified in the optimization call
bound_gt = np.full(shape=num_coeff, fill_value=0, dtype=float)
bound_lt = np.full(shape=num_coeff, fill_value=np.inf, dtype=float)
bounds = (bound_gt, bound_lt)

lst_sqrs_result = scipy.optimize.least_squares(least_squares_residuals, coeff_0,
                                               args=(data, target), bounds=bounds)
# Test what the squared error of the returned result is
coeff = lst_sqrs_result.x
lst_sqrs_output = our_function(coeff, data)
print('====================')
print('lst_sqrs_result =\n%s' % (lst_sqrs_result,))
print('---------------------')
print('lst_sqrs_output = %r' % (lst_sqrs_output,))
print('====================')

The output here is:

====================
lst_sqrs_result =
 active_mask: array([ 0, -1,  0])
        cost: 6.197329866927735e-05
         fun: array([ 0.00518416, -0.00564099, -0.00710112,  0.00385024])
        grad: array([ -4.61826888e-09,   3.70771396e-03,   1.26659198e-09])
         jac: array([[-0.72611025, -0.27388975,  0.13653112],
       [-0.74479565, -0.25520435,  0.1644325 ],
       [-0.35777232, -0.64222767,  0.11601263],
       [-0.77338046, -0.22661953,  0.27104366]])
     message: '`gtol` termination condition is satisfied.'
        nfev: 13
        njev: 13
  optimality: 4.6182688779976278e-09
      status: 1
     success: True
           x: array([  2.46392438e-01,   5.39025298e-17,   1.71555150e+00])
---------------------
lst_sqrs_output = array([ 0.00518416, -0.00564099, -0.00710112,  0.00385024])
====================
Use optimize.minimize from scipy with 2 variables and interpolated function
I didn't find a way to use scipy's optimize.minimize with a multidimensional function. In nearly all examples an analytical function is optimized, while my function is interpolated. The test data set looks like this:

x = np.array([2000, 2500, 3000, 3500])
y = np.array([10, 15, 25, 50])
z = np.array([10, 12, 17, 19, 13, 13, 16, 20, 17, 60, 25, 25, 8, 35, 15, 20])
data = np.array([x, y, z])

while the function is like F(x, y) = z. What I want to know is what happens at f(2200, 12) and what the global maximum is in the range of x (2000:3500) and y (10:50). The interpolation works fine, but finding the global maximum doesn't work so far.

The interpolation

self.F2 = interp2d(xx, -yy, z, kind, bounds_error=False)

yields

<scipy.interpolate.interpolate.interp2d object at 0x0000000002C3BBE0>

I tried to optimize via:

x0 = [(2000, 3500), (10, 50)]
res = scipy.optimize.minimize(self.F2, x0, method='Nelder-Mead')

An exception is thrown:

TypeError: __call__() missing 1 required positional argument: 'y'

I think that the optimizer can't handle the object from the interpolation. In the examples people use lambda to get values from their function. What do I have to do in my case?

Best, Alex
First, to find the global maximum (instead of the minimum) you need to interpolate your function with the opposite sign:

F2 = interp2d(x, y, -z)

Second, the callable in minimize takes a single tuple of arguments, while an interp2d object needs the input coordinates as separate positional arguments. Therefore we cannot pass the interp2d object to minimize directly; we need a wrapper that unpacks the tuple of arguments from minimize and feeds it to interp2d:

f = lambda x: F2(*x)

And third, to use minimize you need to specify an initial guess for the minimum (and bounds, in your case). Any reasonable point will do:

x0 = (2200, 12)
bounds = [(2000, 3500), (10, 50)]
print(minimize(f, x0, method='SLSQP', bounds=bounds))

This yields:

  status: 0
 success: True
    njev: 43
    nfev: 243
     fun: array([-59.99999488])
       x: array([ 2500.00002708,    24.99999931])
 message: 'Optimization terminated successfully.'
     jac: array([ 0.07000017,  1.        ,  0.        ])
     nit: 43
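Putting those pieces together, here is a self-contained sketch of the wrapper pattern, under the assumption of an older SciPy where interp2d is still available (it has since been deprecated and removed) and using the grid from the question:

import numpy as np
from scipy.interpolate import interp2d
from scipy.optimize import minimize

x = np.array([2000, 2500, 3000, 3500])
y = np.array([10, 15, 25, 50])
z = np.array([10, 12, 17, 19, 13, 13, 16, 20, 17, 60, 25, 25, 8, 35, 15, 20])

F2 = interp2d(x, y, -z, kind='linear')    # negate z so a minimizer finds the maximum

f = lambda p: F2(p[0], p[1]).item()       # wrapper: unpack the parameter vector, return a scalar

res = minimize(f, x0=(2200, 12), method='SLSQP', bounds=[(2000, 3500), (10, 50)])
print(res.x, -res.fun)                    # location of the maximum and its (positive) value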
One more possible solution (hope you get the idea): one more function (f) is created, and the values being minimized are passed to it as a single parameter vector.

from scipy.optimize import minimize

x = data.Height.values
y = data.Weight.values

def f(params):
    w0, w1 = params
    return mse(w0, w1, x, y)

optimum = minimize(f, (0, 0), method='L-BFGS-B', bounds=((-100, 100), (-5, 5)))

w0 = optimum.x[0]
w1 = optimum.x[1]

I also tried an implementation with a lambda function, but had no luck.
Estimate formants using LPC in Python
I'm new to signal processing (and numpy, scipy, and matlab for that matter). I'm trying to estimate vowel formants with LPC in Python by adapting this matlab code: http://www.mathworks.com/help/signal/ug/formant-estimation-with-lpc-coefficients.html

Here is my code so far:

#!/usr/bin/env python
import sys
import numpy
import wave
import math
from scipy.signal import lfilter, hamming
from scikits.talkbox import lpc

"""
Estimate formants using LPC.
"""

def get_formants(file_path):
    # Read from file.
    spf = wave.open(file_path, 'r')  # http://www.linguistics.ucla.edu/people/hayes/103/Charts/VChart/ae.wav

    # Get file as numpy array.
    x = spf.readframes(-1)
    x = numpy.fromstring(x, 'Int16')

    # Get Hamming window.
    N = len(x)
    w = numpy.hamming(N)

    # Apply window and high pass filter.
    x1 = x * w
    x1 = lfilter([1., -0.63], 1, x1)

    # Get LPC.
    A, e, k = lpc(x1, 8)

    # Get roots.
    rts = numpy.roots(A)
    rts = [r for r in rts if numpy.imag(r) >= 0]

    # Get angles.
    angz = numpy.arctan2(numpy.imag(rts), numpy.real(rts))

    # Get frequencies.
    Fs = spf.getframerate()
    frqs = sorted(angz * (Fs / (2 * math.pi)))

    return frqs

print(get_formants(sys.argv[1]))

Using this file as input, my script returns this list:

[682.18960189917243, 1886.3054773107765, 3518.8326108511073, 6524.8112723782951]

I didn't even get to the last steps where they filter the frequencies by bandwidth, because the frequencies in the list aren't right. According to Praat, I should get something like this (this is the formant listing for the middle of the vowel):

Time_s     F1_Hz       F2_Hz        F3_Hz        F4_Hz
0.164969   731.914588  1737.980346  2115.510104  3191.775838

What am I doing wrong? Thanks very much.

UPDATE: I changed

x1 = lfilter([1., -0.63], 1, x1)

to

x1 = lfilter([1], [1., 0.63], x1)

as per Warren Weckesser's suggestion and am now getting

[631.44354635609318, 1815.8629524985781, 3421.8288991389031, 6667.5030877036006]

I feel like I'm missing something, since F3 is very off.

UPDATE 2: I realized that the order being passed to scikits.talkbox.lpc was off due to a difference in sampling frequency. I changed it to:

Fs = spf.getframerate()
ncoeff = 2 + Fs / 1000
A, e, k = lpc(x1, ncoeff)

Now I'm getting:

[257.86573127888488, 774.59006835496086, 1769.4624576002402, 2386.7093679399809, 3282.387975973973, 4413.0428174593926, 6060.8150432549655, 6503.3090645887842, 7266.5069407315023]

Much closer to Praat's estimation!
The problem had to do with the order being passed to the lpc function. The rule of thumb is 2 + fs / 1000, where fs is the sampling frequency, according to: http://www.phon.ucl.ac.uk/courses/spsci/matlab/lect10.html
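As a side note (my addition, not part of the original answer): under Python 3, Fs / 1000 is a float, and the LPC order should be an integer, so the rule of thumb might be written as follows, reusing the variable names from the question's script:

Fs = spf.getframerate()        # sampling frequency of the wav file
ncoeff = 2 + int(Fs / 1000)    # rule-of-thumb LPC order, cast to int for Python 3
A, e, k = lpc(x1, ncoeff)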
I have not been able to get the results you expect, but I do notice two things which might cause some differences:

1. Your code uses [1, -0.63] where the MATLAB code from the link you provided has [1 0.63].
2. Your processing is applied to the entire x vector at once instead of to smaller segments of it (see where the MATLAB code does this: x = mtlb(I0:Iend);).

Hope that helps.
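A rough sketch of what analyzing a single short frame might look like (my addition, reusing the variable names from the question's script; the ~30 ms frame centered on the signal is an assumption):

Fs = spf.getframerate()
frame_len = int(0.03 * Fs)                                # roughly 30 ms of samples
mid = len(x) // 2
segment = x[mid - frame_len // 2 : mid + frame_len // 2]  # frame around the middle of the vowel
w = numpy.hamming(len(segment))
x1 = segment * w
# then apply the pre-emphasis filter and LPC to x1 as in the original script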
There are at least two problems:

1. According to the link, the "pre-emphasis filter is a highpass all-pole (AR(1)) filter". The signs of the coefficients given there are correct: [1, 0.63]. If you use [1, -0.63], you get a lowpass filter.
2. You have the first two arguments to scipy.signal.lfilter reversed.

So, try changing this:

x1 = lfilter([1., -0.63], 1, x1)

to this:

x1 = lfilter([1.], [1., 0.63], x1)

I haven't tried running your code yet, so I don't know if those are the only problems.
Logistic regression using SciPy
I am trying to code up logistic regression in Python using the SciPy fmin_bfgs function, but am running into some issues. I wrote functions for the logistic (sigmoid) transformation and the cost function, and those work fine (I have used the optimized values of the parameter vector found via canned software to test the functions, and those match up). I am not that sure of my implementation of the gradient function, but it looks reasonable. Here is the code:

# purpose: logistic regression
import numpy as np
import scipy.optimize

# prepare the data
data = np.loadtxt('data.csv', delimiter=',', skiprows=1)
vY = data[:, 0]
mX = data[:, 1:]
intercept = np.ones(mX.shape[0]).reshape(mX.shape[0], 1)
mX = np.concatenate((intercept, mX), axis = 1)
iK = mX.shape[1]
iN = mX.shape[0]

# logistic transformation
def logit(mX, vBeta):
    return((1/(1.0 + np.exp(-np.dot(mX, vBeta)))))

# test function call
vBeta0 = np.array([-.10296645, -.0332327, -.01209484, .44626211, .92554137,
                   .53973828, 1.7993371, .7148045])
logit(mX, vBeta0)

# cost function
def logLikelihoodLogit(vBeta, mX, vY):
    return(-(np.sum(vY*np.log(logit(mX, vBeta)) + (1-vY)*(np.log(1-logit(mX, vBeta))))))
logLikelihoodLogit(vBeta0, mX, vY)  # test function call

# gradient function
def likelihoodScore(vBeta, mX, vY):
    return(np.dot(mX.T,
                  ((np.dot(mX, vBeta) - vY)/
                   np.dot(mX, vBeta)).reshape(iN, 1)).reshape(iK, 1))
likelihoodScore(vBeta0, mX, vY).shape  # test function call

# optimize the function (without gradient)
optimLogit = scipy.optimize.fmin_bfgs(logLikelihoodLogit,
                                      x0 = np.array([-.1, -.03, -.01, .44, .92, .53, 1.8, .71]),
                                      args = (mX, vY), gtol = 1e-3)

# optimize the function (with gradient)
optimLogit = scipy.optimize.fmin_bfgs(logLikelihoodLogit,
                                      x0 = np.array([-.1, -.03, -.01, .44, .92, .53, 1.8, .71]),
                                      fprime = likelihoodScore,
                                      args = (mX, vY), gtol = 1e-3)

The first optimization (without gradient) ends with a whole lot of stuff about division by zero. The second optimization (with gradient) ends with a matrices-not-aligned error, which probably means I have got the way the gradient is to be returned wrong. Any help with this is appreciated. If anyone wants to try this, the data is included below.
low,age,lwt,race,smoke,ptl,ht,ui 0,19,182,2,0,0,0,1 0,33,155,3,0,0,0,0 0,20,105,1,1,0,0,0 0,21,108,1,1,0,0,1 0,18,107,1,1,0,0,1 0,21,124,3,0,0,0,0 0,22,118,1,0,0,0,0 0,17,103,3,0,0,0,0 0,29,123,1,1,0,0,0 0,26,113,1,1,0,0,0 0,19,95,3,0,0,0,0 0,19,150,3,0,0,0,0 0,22,95,3,0,0,1,0 0,30,107,3,0,1,0,1 0,18,100,1,1,0,0,0 0,18,100,1,1,0,0,0 0,15,98,2,0,0,0,0 0,25,118,1,1,0,0,0 0,20,120,3,0,0,0,1 0,28,120,1,1,0,0,0 0,32,121,3,0,0,0,0 0,31,100,1,0,0,0,1 0,36,202,1,0,0,0,0 0,28,120,3,0,0,0,0 0,25,120,3,0,0,0,1 0,28,167,1,0,0,0,0 0,17,122,1,1,0,0,0 0,29,150,1,0,0,0,0 0,26,168,2,1,0,0,0 0,17,113,2,0,0,0,0 0,17,113,2,0,0,0,0 0,24,90,1,1,1,0,0 0,35,121,2,1,1,0,0 0,25,155,1,0,0,0,0 0,25,125,2,0,0,0,0 0,29,140,1,1,0,0,0 0,19,138,1,1,0,0,0 0,27,124,1,1,0,0,0 0,31,215,1,1,0,0,0 0,33,109,1,1,0,0,0 0,21,185,2,1,0,0,0 0,19,189,1,0,0,0,0 0,23,130,2,0,0,0,0 0,21,160,1,0,0,0,0 0,18,90,1,1,0,0,1 0,18,90,1,1,0,0,1 0,32,132,1,0,0,0,0 0,19,132,3,0,0,0,0 0,24,115,1,0,0,0,0 0,22,85,3,1,0,0,0 0,22,120,1,0,0,1,0 0,23,128,3,0,0,0,0 0,22,130,1,1,0,0,0 0,30,95,1,1,0,0,0 0,19,115,3,0,0,0,0 0,16,110,3,0,0,0,0 0,21,110,3,1,0,0,1 0,30,153,3,0,0,0,0 0,20,103,3,0,0,0,0 0,17,119,3,0,0,0,0 0,17,119,3,0,0,0,0 0,23,119,3,0,0,0,0 0,24,110,3,0,0,0,0 0,28,140,1,0,0,0,0 0,26,133,3,1,2,0,0 0,20,169,3,0,1,0,1 0,24,115,3,0,0,0,0 0,28,250,3,1,0,0,0 0,20,141,1,0,2,0,1 0,22,158,2,0,1,0,0 0,22,112,1,1,2,0,0 0,31,150,3,1,0,0,0 0,23,115,3,1,0,0,0 0,16,112,2,0,0,0,0 0,16,135,1,1,0,0,0 0,18,229,2,0,0,0,0 0,25,140,1,0,0,0,0 0,32,134,1,1,1,0,0 0,20,121,2,1,0,0,0 0,23,190,1,0,0,0,0 0,22,131,1,0,0,0,0 0,32,170,1,0,0,0,0 0,30,110,3,0,0,0,0 0,20,127,3,0,0,0,0 0,23,123,3,0,0,0,0 0,17,120,3,1,0,0,0 0,19,105,3,0,0,0,0 0,23,130,1,0,0,0,0 0,36,175,1,0,0,0,0 0,22,125,1,0,0,0,0 0,24,133,1,0,0,0,0 0,21,134,3,0,0,0,0 0,19,235,1,1,0,1,0 0,25,95,1,1,3,0,1 0,16,135,1,1,0,0,0 0,29,135,1,0,0,0,0 0,29,154,1,0,0,0,0 0,19,147,1,1,0,0,0 0,19,147,1,1,0,0,0 0,30,137,1,0,0,0,0 0,24,110,1,0,0,0,0 0,19,184,1,1,0,1,0 0,24,110,3,0,1,0,0 0,23,110,1,0,0,0,0 0,20,120,3,0,0,0,0 0,25,241,2,0,0,1,0 0,30,112,1,0,0,0,0 0,22,169,1,0,0,0,0 0,18,120,1,1,0,0,0 0,16,170,2,0,0,0,0 0,32,186,1,0,0,0,0 0,18,120,3,0,0,0,0 0,29,130,1,1,0,0,0 0,33,117,1,0,0,0,1 0,20,170,1,1,0,0,0 0,28,134,3,0,0,0,0 0,14,135,1,0,0,0,0 0,28,130,3,0,0,0,0 0,25,120,1,0,0,0,0 0,16,95,3,0,0,0,0 0,20,158,1,0,0,0,0 0,26,160,3,0,0,0,0 0,21,115,1,0,0,0,0 0,22,129,1,0,0,0,0 0,25,130,1,0,0,0,0 0,31,120,1,0,0,0,0 0,35,170,1,0,1,0,0 0,19,120,1,1,0,0,0 0,24,116,1,0,0,0,0 0,45,123,1,0,0,0,0 1,28,120,3,1,1,0,1 1,29,130,1,0,0,0,1 1,34,187,2,1,0,1,0 1,25,105,3,0,1,1,0 1,25,85,3,0,0,0,1 1,27,150,3,0,0,0,0 1,23,97,3,0,0,0,1 1,24,128,2,0,1,0,0 1,24,132,3,0,0,1,0 1,21,165,1,1,0,1,0 1,32,105,1,1,0,0,0 1,19,91,1,1,2,0,1 1,25,115,3,0,0,0,0 1,16,130,3,0,0,0,0 1,25,92,1,1,0,0,0 1,20,150,1,1,0,0,0 1,21,200,2,0,0,0,1 1,24,155,1,1,1,0,0 1,21,103,3,0,0,0,0 1,20,125,3,0,0,0,1 1,25,89,3,0,2,0,0 1,19,102,1,0,0,0,0 1,19,112,1,1,0,0,1 1,26,117,1,1,1,0,0 1,24,138,1,0,0,0,0 1,17,130,3,1,1,0,1 1,20,120,2,1,0,0,0 1,22,130,1,1,1,0,1 1,27,130,2,0,0,0,1 1,20,80,3,1,0,0,1 1,17,110,1,1,0,0,0 1,25,105,3,0,1,0,0 1,20,109,3,0,0,0,0 1,18,148,3,0,0,0,0 1,18,110,2,1,1,0,0 1,20,121,1,1,1,0,1 1,21,100,3,0,1,0,0 1,26,96,3,0,0,0,0 1,31,102,1,1,1,0,0 1,15,110,1,0,0,0,0 1,23,187,2,1,0,0,0 1,20,122,2,1,0,0,0 1,24,105,2,1,0,0,0 1,15,115,3,0,0,0,1 1,23,120,3,0,0,0,0 1,30,142,1,1,1,0,0 1,22,130,1,1,0,0,0 1,17,120,1,1,0,0,0 1,23,110,1,1,1,0,0 1,17,120,2,0,0,0,0 1,26,154,3,0,1,1,0 1,20,106,3,0,0,0,0 1,26,190,1,1,0,0,0 1,14,101,3,1,1,0,0 1,28,95,1,1,0,0,0 1,14,100,3,0,0,0,0 
1,23,94,3,1,0,0,0 1,17,142,2,0,0,1,0 1,21,130,1,1,0,1,0
Your problem is that the function you are trying to minimise, logLikelihoodLogit, will return NaN for values very close to your initial estimate. It will also try to evaluate negative logarithms and run into other problems. fmin_bfgs doesn't know about this; it will try to evaluate the function for such values and get into trouble.

I suggest using a bounded optimisation instead. You can use scipy's optimize.fmin_l_bfgs_b for this. It uses a similar algorithm to fmin_bfgs, but it supports bounds in the parameter space. You call it similarly, just add a bounds keyword. Here's a simple example of how you'd call fmin_l_bfgs_b:

from scipy.optimize import fmin_bfgs, fmin_l_bfgs_b

# list of bounds: each item is a tuple with the (lower, upper) bounds
bd = [(0, 1.), ...]
test = fmin_l_bfgs_b(logLikelihoodLogit, x0=x0, args=(mX, vY), bounds=bd,
                     approx_grad=True)

Here I'm using an approximate gradient (it seemed to work fine with your data), but you can pass fprime as in your example (I don't have time to check its correctness). You'll know your parameter space better than I do; just make sure to build the bounds array for all the meaningful values that your parameters can take.
Here is the answer I sent back to the SciPy list where this question was cross-posted. Thanks to @tiago for his answer. Basically, I reparametrized the likelihood function. Also, I added a call to the check_grad function.

#=====================================================
# purpose: logistic regression
import numpy as np
import scipy as sp
import scipy.optimize
import matplotlib as mpl
import os

# prepare the data
data = np.loadtxt('data.csv', delimiter=',', skiprows=1)
vY = data[:, 0]
mX = data[:, 1:]
# mX = (mX - np.mean(mX))/np.std(mX)  # standardize the data; if required
intercept = np.ones(mX.shape[0]).reshape(mX.shape[0], 1)
mX = np.concatenate((intercept, mX), axis = 1)
iK = mX.shape[1]
iN = mX.shape[0]

# logistic transformation
def logit(mX, vBeta):
    return((np.exp(np.dot(mX, vBeta))/(1.0 + np.exp(np.dot(mX, vBeta)))))

# test function call
vBeta0 = np.array([-.10296645, -.0332327, -.01209484, .44626211, .92554137,
                   .53973828, 1.7993371, .7148045])
logit(mX, vBeta0)

# cost function
def logLikelihoodLogit(vBeta, mX, vY):
    return(-(np.sum(vY*np.log(logit(mX, vBeta)) + (1-vY)*(np.log(1-logit(mX, vBeta))))))
logLikelihoodLogit(vBeta0, mX, vY)  # test function call

# different parametrization of the cost function
def logLikelihoodLogitVerbose(vBeta, mX, vY):
    return(-(np.sum(vY*(np.dot(mX, vBeta) - np.log((1.0 + np.exp(np.dot(mX, vBeta))))) +
                    (1-vY)*(-np.log((1.0 + np.exp(np.dot(mX, vBeta))))))))
logLikelihoodLogitVerbose(vBeta0, mX, vY)  # test function call

# gradient function
def likelihoodScore(vBeta, mX, vY):
    return(np.dot(mX.T, (logit(mX, vBeta) - vY)))
likelihoodScore(vBeta0, mX, vY).shape  # test function call

# check that the analytical gradient is close to the numerical gradient
sp.optimize.check_grad(logLikelihoodLogitVerbose, likelihoodScore, vBeta0, mX, vY)

# optimize the function (without gradient)
optimLogit = scipy.optimize.fmin_bfgs(logLikelihoodLogitVerbose,
                                      x0 = np.array([-.1, -.03, -.01, .44, .92, .53, 1.8, .71]),
                                      args = (mX, vY), gtol = 1e-3)

# optimize the function (with gradient)
optimLogit = scipy.optimize.fmin_bfgs(logLikelihoodLogitVerbose,
                                      x0 = np.array([-.1, -.03, -.01, .44, .92, .53, 1.8, .71]),
                                      fprime = likelihoodScore,
                                      args = (mX, vY), gtol = 1e-3)
#=====================================================
I was facing the same issues. When I experimented with different algorithm implementations in scipy.optimize.minimize, I found that for finding optimal logistic regression parameters for my data set, Newton conjugate gradient proved helpful. The call can be made like this:

Result = scipy.optimize.minimize(fun = logLikelihoodLogit,
                                 x0 = np.array([-.1, -.03, -.01, .44, .92, .53, 1.8, .71]),
                                 args = (mX, vY),
                                 method = 'TNC',
                                 jac = likelihoodScore)
optimLogit = Result.x