Python/Numpy/Scipy: solving non-linear least squares with grouped covariates?

Basically, I'm trying to make this function happen:
where I'm solving for beta; gamma, alpha, and x are all from the data.
Originally I just used the summary statistic mean(xi/gamma_i), which meant that everything in that summation could be pre-calculated, and I would then hand a simple NumPy array to the non-linear optimizer. But now there is no way to pre-calculate the summary statistic, because it's not clear how beta will affect f when f changes in response to alpha_i. So I'm not sure how to present that array. Is it possible to embed those covariates as lists (NumPy object arrays) so that I can still pass a NumPy array, and then unpack the lists inside the residual function? Am I going about this the wrong way?
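One possible layout (a minimal sketch, not from the original post; the group structure, variable names, and the placeholder form of f are all assumptions) is to keep the per-group covariates in an ordinary Python list and close over it in the residual function, so that only the flat beta vector is handed to scipy.optimize.least_squares:

import numpy as np
from scipy.optimize import least_squares

# Hypothetical data layout: one (x_i, gamma_i, alpha_i, y_i) tuple per group,
# where x_i may have a different length in each group.
groups = [
    (np.array([1.0, 2.0, 3.0]), 0.5, 1.2, 4.0),
    (np.array([0.5, 1.5]), 0.8, 0.9, 2.5),
]

def residuals(beta):
    # One residual per group; the group data is captured by closure, so
    # least_squares only ever sees the flat parameter vector beta.
    res = []
    for x_i, gamma_i, alpha_i, y_i in groups:
        # Placeholder model: replace with the actual f(beta, alpha_i, x_i, gamma_i).
        f_i = np.mean(x_i / gamma_i) * np.exp(-beta[0] * alpha_i) + beta[1]
        res.append(f_i - y_i)
    return np.asarray(res)

beta0 = np.array([0.1, 0.0])
fit = least_squares(residuals, beta0)
print(fit.x)

Because the groups live in a plain Python list, ragged per-group x arrays are fine, and there is no need to squeeze them into a single rectangular (or object-dtype) NumPy array.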

Related

Kalman Filtering in Python

I've been working on designing a Kalman filter for a few weeks now, but I'm pretty sure I'm making a major error because my results are terrible. My common sense tells me it's because I'm using an already-existing matrix as my predicted state instead of using a transition matrix, but I'm not sure how to solve that if it is indeed the issue. By the way, this is my first time using Kalman filtering, so I may be missing basic stuff.
Here is a detailed explanation:
I have 2 datasets of 81036 observations each, with each observation including 6 datapoints (i.e., I end up with 2 matrices of shape 81036 x 6). The first dataset is the measured state and the other one is the predicted state. I want to end up with Python code that filters the data using both states, and I need the final covariance and error estimates. Here's the main part of my code:
import numpy as np

# nb of observations
nn = 81036
# nb of datapoints
ns = 6

# import
ps = np.genfromtxt('.......csv', delimiter=',')
ms = np.genfromtxt('.......csv', delimiter=',')

## Kalman filtering with covariance
# initialize data (lazy initialization using means of columns)
xi = np.mean(ms, axis=0)

for i in np.arange(nn):
    # errors
    d = ms[i, :] - xi
    d2 = ps[i, :] - xi
    # covariance matrices
    P = np.zeros((ns, ns))
    R = np.zeros((ns, ns))
    for j in np.arange(ns):
        for s in np.arange(ns):
            P[j, s] = d[j] * d[s]
            R[j, s] = d2[j] * d2[s]
    # Gain
    k = P * (P + R) ** -1
    # Update estimate
    xi = xi + np.matmul(k, d2)
    # Uncertainty/error
    I = np.identity(ns)
    mlt = np.matmul((I - k), P)
    mlt = np.matmul(mlt, (I - k).T)
    mlt2 = np.matmul(k, R)
    mlt2 = np.matmul(mlt2, k.T)
    Er = mlt + mlt2
When I run this code, my filtered state xi goes through the roof, so I'm pretty sure this is not the correct code. I've tried to fix it in several ways (e.g., I tried to calculate the covariance matrix in the standard way I'm used to, D'D/n, and I tried to remove my predicted state matrix and simply add random noise to my measured state instead), but nothing seems to work. I also tried some available libraries for Kalman filtering (as well as libraries in Matlab and R), but they either work in 1D only or need me to specify variables like the transition matrix, which I don't have. I'm at my wits' end here, so I'd appreciate any help.
I've found the solution to this issue. Huge props to Kani for their comment, as it pointed me in the right direction.
It turns out that the issue is simply in the calculation of k. Although the equation is correct, the matrix inversion was misbehaving numerically because of the very small values in some instances of R and P. To solve this, I used the pseudoinverse instead, so the line for calculating k became:
k = P @ np.linalg.pinv(P + R)
Note that this might not be as accurate as the inverse in other cases, but it does the trick here.
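For reference, a minimal self-contained sketch of that corrected step (the stand-in data and the outer-product covariances here are assumptions mirroring the question's loop, not the asker's real datasets):

import numpy as np

rng = np.random.default_rng(0)
ns = 6

# Stand-in error vectors and covariance estimates, shaped like those in the
# question (d and d2 are length-ns vectors; P and R are ns x ns outer products).
d = rng.normal(size=ns)
d2 = rng.normal(size=ns)
P = np.outer(d, d)
R = np.outer(d2, d2)
xi = np.zeros(ns)

# Corrected gain: a pseudoinverse instead of an ordinary inverse, so the step
# stays finite even when P + R is (numerically) singular.
k = P @ np.linalg.pinv(P + R)

# State update and Joseph-form error covariance, as in the question's loop.
xi = xi + k @ d2
I = np.identity(ns)
Er = (I - k) @ P @ (I - k).T + k @ R @ k.T
print(xi)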

How to change scipy curve_fit/least_squares step size?

I have a Python function that takes a bunch (1 or 2) of arguments and returns a 2D array. I have been trying to use scipy's curve_fit and least_squares to optimize the input arguments so that the resulting 2D array matches another, pre-made 2D array. I ran into the problem of both methods returning my initial guess as the converged solution. After ripping much hair from my head, I figured out the issue: the small increment that the optimizer adds to the initial guess is too small to make any difference in the 2D array that my function returns (the cell values in the array are quantized, not continuous), and hence scipy assumes it has reached convergence (or a local minimum) at the initial guess.
I was wondering if there is a way around this (such as forcing it to use a bigger increment while guessing).
Thanks.
I ran into a very similar problem recently, and it turns out that these kinds of optimizers only work for continuously differentiable functions. That's why they return the initial parameters: the function you want to fit cannot be differentiated. In my case, I could make my fit function differentiable by first fitting a polynomial to it before plugging it into the curve_fit optimizer.
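If the quantization is mild, another possible workaround (a sketch on invented toy data, not from the answer above) is to enlarge the relative finite-difference step that scipy.optimize.least_squares uses when estimating the Jacobian, via its diff_step argument, so that each trial step actually changes the quantized output:

import numpy as np
from scipy.optimize import least_squares

# Toy 'quantized' model: rounding makes the output piecewise constant, so the
# default (tiny) finite-difference step sees a locally flat objective.
x = np.linspace(0, 10, 50)
y_obs = np.round(2.0 * x + 1.0)

def residuals(p):
    return np.round(p[0] * x + p[1]) - y_obs

# A relative step of ~10% per parameter is large enough to cross quantization
# boundaries here; tune it to the granularity of your own output.
fit = least_squares(residuals, x0=[1.0, 0.0], diff_step=0.1)
print(fit.x)

curve_fit forwards extra keyword arguments to the underlying solver (so diff_step can be passed through when method='trf'), but smoothing the model first, as described above, is the more robust fix when the output is strongly quantized.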

SciPy: n-dimensional interpolation of sparse data

I currently have a collection of n-dimensional data points, each with a value associated with it (n typically will range from 2 to 4).
I would like to employ some form of non-linear interpolation on the data points I am supplied with, so that I can try to minimise this value. Of course, I am open to better methods of minimising the value.
At the moment, I have code that works for 1D and 2D arrays:
mesh = np.meshgrid(*[i['grid2'] for i in self.cambParams], indexing='ij')
chi2 = griddata(data[:,:-1], data[:,-1], tuple(mesh), method='cubic')
However, scipy.interpolate.griddata only supports linear interpolation for grids above 2D, which makes the interpolation useless for minimisation, since the minimum will always be one of the supplied data points. Does anyone know of an alternative interpolation method that might work, or a better way of solving the problem in general?
Cheers
Received a tip from an external source that works, so I'm posting the answer in case it helps anyone in the future.
SciPy has an Rbf interpolation method (radial basis function) which allows better-than-linear interpolation in arbitrary dimensions.
Taking a variable data with rows of (x1, x2, x3, ..., xn, v) values, the following modification to the code in the original post allows for interpolation:
from scipy.interpolate import Rbf

# Build one RBF interpolant over the columns (x1, ..., xn, v) of data.
rbfi = Rbf(*data.T)
mesh = np.meshgrid(*[i['grid2'] for i in self.cambParams], indexing='ij')
chi2 = rbfi(*mesh)
The documentation here is useful, and there is a simple and easy to follow example here, which will make more sense than the code snippet above.
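A small self-contained illustration of the same idea (the 3-D test function, sample points, and evaluation grid below are invented for the example; the original post's data and cambParams grids are not reproduced):

import numpy as np
from scipy.interpolate import Rbf

# Scattered 3-D sample points with a value v at each point.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 3))
v = np.sum(pts**2, axis=1)  # stand-in 'chi2' values, minimised at the origin

# Fit one radial-basis-function interpolant over (x1, x2, x3, v).
rbfi = Rbf(pts[:, 0], pts[:, 1], pts[:, 2], v)

# Evaluate on a regular grid and locate the grid point with the smallest value.
axes = [np.linspace(-1, 1, 25)] * 3
mesh = np.meshgrid(*axes, indexing='ij')
chi2 = rbfi(*mesh)
imin = np.unravel_index(np.argmin(chi2), chi2.shape)
print([m[imin] for m in mesh], chi2[imin])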

scipy.optimize.minimize only iterates some variables

I have written python (2.7.3) code wherein I aim to create a weighted sum of 16 data sets, and compare the result to some expected value. My problem is to find the weighting coefficients which will produce the best fit to the model. To do this, I have been experimenting with scipy's optimize.minimize routines, but have had mixed results.
Each of my individual data sets is stored as a 15x15 ndarray, so their weighted sum is also a 15x15 array. I define my own 'model' of what the sum should look like (also a 15x15 array), and quantify the goodness of fit between my result and the model using a basic least squares calculation.
R=np.sum(np.abs(model/np.max(model)-myresult)**2)
'myresult' is produced as a function of some set of parameters 'wts'. I want to find the set of parameters 'wts' which will minimise R.
To do so, I have been trying this:
res = minimize(get_best_weightings, wts, bounds=bnds, method='SLSQP', options={'disp': True, 'eps': 100})
Where my objective function is:
def get_best_weightings(wts):
    wts_tr = wts[0:16]
    wts_ti = wts[16:32]
    for i, j in enumerate(portlist):
        originalwtsr[j] = wts_tr[i]
        originalwtsi[j] = wts_ti[i]
    realwts = originalwtsr
    imagwts = originalwtsi
    myresult = make_weighted_beam(realwts, imagwts, 1)
    R = np.sum((np.abs(modelbeam / np.max(modelbeam) - myresult)) ** 2)
    return R
The input (wts) is an ndarray of shape (32,), and the output R is just a scalar, which should get smaller as my fit gets better. By my understanding, this is exactly the sort of problem ("Minimization of scalar function of one or more variables.") that scipy.optimize.minimize is designed to solve (http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html).
However, when I run the code, although the optimization routine seems to iterate over different values of all the elements of wts, only a few of them seem to 'stick'. I.e., all but four of the values are returned unchanged from my initial guess. To illustrate, I plot the values of my initial guess for wts (in blue) and the optimized values (in red). You can see that for most elements the two lines overlap.
Image:
http://imgur.com/p1hQuz7
Changing just these few parameters is not enough to get a good answer, and I can't understand why the other parameters aren't also being optimised. I suspect that maybe I'm not understanding the nature of my minimization problem, so I'm hoping someone here can point out where I'm going wrong.
I have experimented with a variety of minimize's built-in methods (I am by no means committed to SLSQP, nor certain that it's the most appropriate choice), and with a variety of 'step sizes' eps. The bounds I am using for my parameters are all (-4000, 4000). I only have SciPy 0.11, so I haven't tested the basinhopping routine for finding the global minimum (that needs 0.12). I have looked at scipy.optimize.brute, but haven't tried implementing it yet - thought I'd check whether anyone can steer me in a better direction first.
Any advice appreciated! Sorry for the wall of text and the possibly (probably?) idiotic question. I can post more of my code if necessary, but it's pretty long and unpolished.
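As a sanity check on the call pattern itself (a toy sketch with an invented 32-parameter quadratic objective, not the asker's beam model), the same SLSQP setup does move every parameter when the objective genuinely responds to each weight; if this works but the real problem does not, the issue is more likely in how make_weighted_beam responds to individual weights (or in their scaling relative to eps) than in the minimize call:

import numpy as np
from scipy.optimize import minimize

# Toy stand-in for get_best_weightings: a quadratic with a known minimum, so
# every one of the 32 parameters should be adjusted by the optimizer.
target = np.linspace(-5, 5, 32)

def objective(wts):
    return np.sum((wts - target) ** 2)

wts0 = np.zeros(32)
bnds = [(-4000, 4000)] * 32
res = minimize(objective, wts0, bounds=bnds, method='SLSQP',
               options={'disp': True, 'eps': 1e-3})
print(np.allclose(res.x, target, atol=1e-2))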

Numpy or Scipy way to do polynomial fitting in 2 dimensions

I have some data that looks like this
What is the typical way to do a polynomial fit of z as a function of x and y? I have used numpy.polyfit in the past to do similar things in 2 dimensions, so I suppose I could just iterate through all the points and then fit those answers with another 1D polyfit. However, it seems there should be a more straightforward way.
By the way the picture shows 2 different sets of data that would be fit with different equations.
It seems to me that what you really want is to fit a surface (linear or spline) in terms of z(x, y), but you have only a single line of data. This is like solving for two unknowns with only one equation: how could you decide whether the difference in your red line from A to B was caused by the change in PSI or by the change in V?
My suggestions:
- Fit a surface to your existing dataset. You will get something (a sketch of one way to do this follows after the list).
- Try to get more data so that you can fit a more accurate surface.
- Do what you first wanted: fit a function separately in each dimension, combine them, and use the best of the three functions (the one fitted for PSI, the one fitted for V, and the combined one).
- Try to combine your PSI and V factors, with some fancy physics-based trick, into one significant factor that contains both of them.
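For the surface-fitting suggestion, a minimal sketch of one common approach: build a 2-D polynomial design matrix with numpy.polynomial.polynomial.polyvander2d and solve for the coefficients by least squares (the sample data below is invented; PSI and V would stand in for x and y):

import numpy as np
from numpy.polynomial import polynomial as P

# Invented scattered samples of z over (x, y); replace with the real PSI/V/z data.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = rng.uniform(0, 5, 100)
z = 1.5 + 0.8 * x - 0.3 * y + 0.05 * x * y + rng.normal(0, 0.1, 100)

# Degree-2 polynomial in each variable: design matrix of all terms x**i * y**j.
deg = (2, 2)
A = P.polyvander2d(x, y, deg)
coeffs = np.linalg.lstsq(A, z, rcond=None)[0]
C = coeffs.reshape(deg[0] + 1, deg[1] + 1)

# Evaluate the fitted surface at the sample points to check the fit.
z_fit = P.polyval2d(x, y, C)
print(np.max(np.abs(z_fit - z)))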
