How can I fit a polynomial to an empirical data set using Python such that it fits the "top" of the data -- i.e. for every value of x, the output of the function is greater than the largest y at that x -- while at the same time keeping that excess as small as possible, so the curve hugs the data? An example of what I'm referring to is seen in the image below:
You need to use cvxopt to find the coordinates of the efficient frontier, which is a quadratic programming problem, then feed those coordinates into numpy's polyfit to get the polynomial fitting the frontier. This Quantopian blog post does both: https://blog.quantopian.com/markowitz-portfolio-optimization-2/
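As a minimal sketch of the second step only, assuming you already have the frontier (envelope) coordinates in arrays frontier_x and frontier_y from the quadratic-programming step (these names and the data below are placeholders, not from the blog post):

import numpy as np

# frontier_x, frontier_y: assumed coordinates of the upper envelope
# obtained from the cvxopt / QP step (placeholder values here)
frontier_x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
frontier_y = np.array([1.0, 1.8, 2.1, 2.0, 1.6])

degree = 2                                           # choose to taste
coeffs = np.polyfit(frontier_x, frontier_y, degree)  # least-squares polynomial fit
poly = np.poly1d(coeffs)                             # callable polynomial

print(poly(frontier_x))                              # evaluate the fit at the frontier points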
I am trying to solve a statistics-related real-world problem with Python and am looking for input on my ideas: I have N random vectors from an m-dimensional normal distribution. I have no information about the means and the covariance matrix of the underlying distribution; in fact, even that it is a normal distribution is only an assumption, albeit a very plausible one. I want to compute an approximation of the mean vector and covariance matrix of the distribution. The number of random vectors is on the order of 100 to 300, and the dimensionality of the normal distribution is somewhere between 2 and 5. The time for the calculation should ideally not exceed 1 minute on a standard home computer.
I am currently considering three approaches and would be happy about any suggestions for other approaches, or preferences among these three:
Fitting: Make a multi-dimensional histogram of all random vectors and fit a multi-dimensional normal distribution to the histogram. Problem with this approach: the covariance matrix has many entries, which could be a problem for the fitting process?
Invert the cumulative distribution function: Make a multi-dimensional histogram as an approximation of the density function of the random vectors, then integrate it to get a multi-dimensional cumulative distribution function. For one dimension this is invertible, and one could use the inverse CDF to distribute random numbers like in the original distribution. Problem: for the multi-dimensional case the CDF is not invertible(?), and I don't know if this approach still works then.
Bayesian: Use Bayesian statistics with some normal distribution as the prior and update it for every observation. The result should always again be a normal distribution. Problem: I think this is computationally expensive? Also, I don't want the later updates to have more impact on the resulting distribution than the earlier ones.
Also, maybe there is some library which has this task already implemented? I did not find exactly this in NumPy or SciPy; maybe someone has an idea where else to look?
If the simple estimates described in the "Parameter estimation" section of the Wikipedia article on the multivariate normal distribution are sufficient for your needs, you can use numpy.mean to compute the mean and numpy.cov to compute the sample covariance matrix.
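A minimal sketch of those sample estimates, assuming samples is an (N, m) array with one m-dimensional random vector per row (the data below is a placeholder):

import numpy as np

# samples: (N, m) array, one m-dimensional random vector per row (placeholder data)
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 3))

mean_vec = samples.mean(axis=0)           # sample mean vector, shape (m,)
cov_mat = np.cov(samples, rowvar=False)   # sample covariance matrix, shape (m, m)

print(mean_vec)
print(cov_mat)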
As a simple model to represent a knowledge network and learn about properties of weighted graphs, I computed the cosine similarity between Wikipedia articles.
I am now looking at the distribution of the similarity weights for each article (see pictures).
In the pictures, you can see that the curve changes derivative around a certain value (perhaps from exponential to linear). I would like to fit the curve and extract the value where the derivative visibly (or expectedly) changes, so that I can divide similar articles into two sets: the "most similar" (left of the threshold) and the "others" (right of the threshold).
I want to fit the curve for each article's distribution; compare each distribution to the mean distribution over all articles; and compare each distribution to the distribution of a random weighted network.
(Your suggestions are most welcome in defining a working procedure: I would like to use this as a toy model to then learn how a network, or an article, may evolve in time.)
My background is user experience with a twist of data science. I wish to better understand which model may describe the distribution of values I observed, a proper way to compare distributions, and the Python (or Mathematica 11) tools to fit the curve and obtain the derivative at each point.
Which model do you suggest to describe the distribution of observed similarity values between objects in a weighted network (here, a collaborative knowledge base is represented as a weighted network, where the weight is the similarity value of two given articles)? Should I expect an exponential? A Poissonian? Why?
How do I compute a curve fit and extract the derivative of the curve at a given point (Python or Mathematica 11)?
Working in Mathematica, suppose your data is in the list data. Then if you want to find the cubic polynomial that best fits your data, use the Fit function:
Fit[data, {1, x, x^2, x^3}, x]
In general the usage for the Fit command looks like
Fit["data set", "list of functions", "independent variable"]
where Mathematica tries to fit a linear combination of the functions in that list to your data set. I'm not sure what to say about what sort of curve we would expect this data to be best modeled by, but just remember that any smooth function can be approximated to arbitrary precision by a polynomial with sufficiently many terms. So if you have the computational power to spare, just let your list of functions be a long list of powers of x. Although it does look like you have an asymptote at x=0, so maybe allow there to be a 1/x term in there to capture that. And then of course you can use Plot to plot your curve on top of your data to compare them visually.
Now to get this best fit curve as a function in Mathematica that you can take a derivative of:
f[x_] := Fit[data, {1, x, x^2, x^3}, x]
And then the obvious change you are talking about occurs when the second derivative is zero, so to get that x value:
NSolve[f''[x] == 0, x]
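For the Python side of the question, a rough equivalent of this workflow is to fit a cubic with numpy.polyfit, take the second derivative, and find its root; this is only a sketch under the same idea as the Mathematica code above, and x and y below are made-up placeholders for your similarity data:

import numpy as np

# x, y: placeholder arrays standing in for the observed distribution
x = np.linspace(0.1, 2.0, 50)
y = 1.0 / x + 0.3 * x                   # made-up data, for illustration only

poly = np.poly1d(np.polyfit(x, y, 3))   # best-fit cubic, as in Fit[...] above
second = poly.deriv(2)                  # second-derivative polynomial
inflection = np.roots(second.coeffs)    # x values where f''(x) == 0

print(inflection)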
I'm currently using the curve_fit function of the scipy.optimize package in Python, and I know that if you take the square root of the diagonal entries of the covariance matrix returned by curve_fit, you get the standard deviation of the parameters that curve_fit calculated. What I'm not sure about is what exactly this standard deviation means. It's an approximation using a Hessian matrix as far as I understand, but what would the exact calculation be? The standard deviation of a Gaussian bell curve tells you what percentage of the area is within a certain range of the curve, so I assumed that for curve_fit it tells you how many data points lie between certain parameter values, but apparently that isn't right...
I'm sorry if this should be basic knowledge for curve fitting, but I really can't figure out what the standard deviations mean. They express an error on the parameters, but those parameters are calculated as the best possible fit for the function; it's not as if there were a whole collection of optimal parameters whose average we take and whose standard deviation we then report. There's only one optimal value, so what is there to compare it with? I guess my question really comes down to this: how can I manually and accurately calculate these standard deviations, instead of just getting an approximation from a Hessian matrix?
The variance in the fitted parameters represents the uncertainty in the best-fit value based on the quality of the fit of the model to the data. That is, it describes by how much the value could change away from the best-fit value and still have a fit that is almost as good as the best-fit value.
With the standard definition of chi-square,
chi_square = ( ( (data - model)/epsilon )**2 ).sum()
and reduced_chi_square = chi_square / (ndata - nvarys) (where data is the array of data values, model is the array of calculated model values, epsilon is the uncertainty in the data, ndata is the number of data points, and nvarys the number of variables), a good fit should have reduced_chi_square around 1, or chi_square around ndata - nvarys. (Note: not 0 -- the fit will not be perfect, as there is noise in the data.)
The variance in the best-fit value for a variable gives the amount by which you can change the value (and re-optimize all other values) and increase chi-square by 1. That gives the so-called '1-sigma' value of the uncertainty.
As you say, these values are expressed in the diagonal terms of the covariance matrix returned by scipy.optimize.curve_fit (the off-diagonal terms give the correlations between variables: if the value of one variable is changed away from its optimal value, how would the others respond to make the fit better?). This covariance matrix is built using the trial values and derivatives near the solution as the fit is being done -- it captures the "curvature" of the parameter space (i.e., how much chi-square changes when a variable's value changes).
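As a minimal sketch of these quantities in code, assuming you already have popt and pcov from curve_fit, plus arrays ydata, ymodel and yerr (all names here are placeholders):

import numpy as np

# popt, pcov come from scipy.optimize.curve_fit; ydata, ymodel, yerr are
# the data, the model evaluated at popt, and the data uncertainties.
def fit_statistics(popt, pcov, ydata, ymodel, yerr):
    ndata = len(ydata)
    nvarys = len(popt)
    chi_square = (((ydata - ymodel) / yerr) ** 2).sum()
    reduced_chi_square = chi_square / (ndata - nvarys)
    perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties on the parameters
    return chi_square, reduced_chi_square, perr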
You can calculate these uncertainties by hand. The lmfit library (https://lmfit.github.io/lmfit-py/) has routines to more explicitly explore the confidence intervals of variables from least-squares minimization or curve-fitting. These are described in more detail at
https://lmfit.github.io/lmfit-py/confidence.html. It's probably easiest to use lmfit for the curve-fitting rather than trying to re-implement the confidence interval code for curve_fit.
I'm doing a fit of a set of results to a predicted function. The function might be interpreted as linear, but I might have to change it a little, so I am doing curve fitting instead of linear regression. I use the curve_fit function in scipy. Here is how I use it:
kappa = 1
alpha=2
popt,pcov = curve_fit(fitFunc1,self.X[0:3],self.Y[0:3],sigma=self.Err[0:3],p0=[kappa,alpha])
and here is fitFunc1
import numpy as np
from numpy import log, pi

def fitFunc1(X, kappa, alpha):
    out = []
    for x in X:
        y = log(kappa)        # log of the amplitude parameter
        y += 4 * log(pi)
        y += alpha * x        # the slope parameter enters linearly
        y -= 2 * log(2)
        out.append(-y)
    return np.array(out)
Here is an example of the fit. The green line is a matlab fit, the red one is a scipy fit. I perform the fit over the first three points.
You are using non-linear fitting routines to fit the data, not linear least-squares as invoked by A\b. The result is that the matlab and/or scipy minimization routines are getting stuck in local minima during the optimizations, leading to different results.
You should get the same results (to within numerical precision) if you apply logs to the raw data prior to linear fitting with A\b (in matlab).
edit
Inspecting the function fitFunc1, it looks like the x/y data have already been transformed prior to the fit within scipy.
I performed a linear fit with the data shown, using matlab. The result using linear least squares with the operation polyfit(x,y,1) (essentially a linear fit) is very similar to the scipy result:
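For reference, the equivalent of that matlab polyfit(x,y,1) step in Python is a one-liner with numpy; this is only a sketch, and the arrays below are placeholders standing in for the (already transformed) data:

import numpy as np

# placeholder data standing in for the (already transformed) x/y arrays
x = np.array([0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

slope, intercept = np.polyfit(x, y, 1)   # linear least-squares fit
y_fit = slope * x + intercept            # evaluate the fitted line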
In any case, the data look piecewise linear, so a better solution may be to attempt a piecewise linear fit. On the other hand, the log transformation can do all sorts of unwanted things, so performing nonlinear fits on the original data without a log transform may be the best solution.
If you don't mind a little bit of extra work, I suggest using PyMinuit or iMinuit; both are minimisation packages based on SEAL Minuit.
You can then minimise a chi-square function or maximise the likelihood of your data with respect to your fit function. They also provide all the errors and everything else you would like to know about the fit.
Hope this helps! xD
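As a rough sketch of what this looks like with a recent iminuit (2.x) and its built-in least-squares cost function; the model and data below are placeholders, not the question's fit function:

import numpy as np
from iminuit import Minuit
from iminuit.cost import LeastSquares

# placeholder model and data; substitute your own fit function and arrays
def model(x, kappa, alpha):
    return kappa * x + alpha

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
yerr = np.full_like(y, 0.3)

cost = LeastSquares(x, y, yerr, model)   # chi-square cost function
m = Minuit(cost, kappa=1.0, alpha=0.0)   # starting values for the parameters
m.migrad()                               # run the minimiser
m.hesse()                                # parameter uncertainties from the Hessian

print(m.values)   # best-fit parameters
print(m.errors)   # 1-sigma uncertainties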
Using scipy's interpolate.splprep function I can get a parametric spline on a parameter u, but the domain of u is not the line integral of the spline; it is a piecewise linear connection of the input coordinates. I've tried integrate.splint, but that just gives the individual integrals over u. Obviously, I can numerically integrate a bunch of Cartesian differential distances, but I was wondering whether there is a closed-form method for getting the length of a spline or spline segment (using scipy or numpy) that I was overlooking.
Edit: I am looking for a closed-form solution or a very fast way to converge to a machine-precision answer. I have all but given up on the numerical root-finding methods and am now primarily after a closed-form answer. If anyone has any experience integrating elliptic functions or can point me to a good resource (other than Wolfram), that would be great.
I'm going to try Maxima to get the indefinite integral of what I believe is the function for one segment of the spline. I cross-posted this on MathOverflow.
Because both x and y are cubic parametric functions, there isn't a closed-form solution in terms of simple functions. Numerical integration is the way to go: either integrate the arc-length expression or simply add up line-segment lengths, depending on the accuracy you are after and how much effort you want to exert.
An accurate and fast "Adding length of line segments" method:
Using recursive subdivision (a form of de Casteljau's algorithm) to generate points can give you a highly accurate representation with a minimal number of points.
Only subdivide a subdivision further if it fails to meet a criterion. Usually the criterion is based on the length of the polygon joining the control points (the hull or cage).
For a cubic, this usually means comparing the closeness of |P0P1| + |P1P2| + |P2P3| to |P0P3|, where P0, P1, P2 and P3 are the control points that define your Bezier segment.
You can find some Delphi code here:
link text
It should be relatively easy to convert to Python.
It will generate the points. The code already calculates the length of the segments in order to test the criteria. You can simply accumulate those length values along the way.
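A minimal sketch of this recursive-subdivision length estimate in Python, assuming one spline segment is given as four cubic Bezier control points; the tolerance test is the hull-vs-chord criterion described above, and the example control points are placeholders:

import numpy as np

def cubic_bezier_length(p0, p1, p2, p3, tol=1e-9):
    # chord: straight-line distance between the end points
    chord = np.linalg.norm(p3 - p0)
    # hull: length of the control polygon P0P1 + P1P2 + P2P3
    hull = (np.linalg.norm(p1 - p0) + np.linalg.norm(p2 - p1)
            + np.linalg.norm(p3 - p2))
    if hull - chord < tol:
        # control polygon is nearly straight: take the average of hull and chord
        return 0.5 * (hull + chord)
    # de Casteljau split at t = 0.5 into two half-curves, recurse on each half
    p01, p12, p23 = 0.5 * (p0 + p1), 0.5 * (p1 + p2), 0.5 * (p2 + p3)
    p012, p123 = 0.5 * (p01 + p12), 0.5 * (p12 + p23)
    p0123 = 0.5 * (p012 + p123)
    return (cubic_bezier_length(p0, p01, p012, p0123, tol)
            + cubic_bezier_length(p0123, p123, p23, p3, tol))

# example: a quarter-circle-like arc given by placeholder control points
pts = [np.array(p, float) for p in [(0, 0), (0.55, 0), (1, 0.45), (1, 1)]]
print(cubic_bezier_length(*pts))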
You can integrate the function sqrt(x'(u)**2 + y'(u)**2) over u, where you calculate the derivatives x' and y' of your coordinates with scipy.interpolate.splev (der=1). The integration can be done with one of the routines from scipy.integrate (quad, which uses adaptive Gauss-Kronrod quadrature, is precise; romberg is generally faster). This should be more precise, and probably faster, than adding up lots of small distances (which is equivalent to integrating with the rectangle rule).
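A minimal sketch of this integration approach; the sine-curve data here is just a placeholder for your own sampled coordinates:

import numpy as np
from scipy import interpolate, integrate

# placeholder curve; substitute your own x, y samples
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)

tck, u = interpolate.splprep([x, y], s=0)   # parametric spline, u normalized to [0, 1]

def speed(t):
    # |r'(t)| = sqrt(x'(t)**2 + y'(t)**2) from the spline derivatives
    dx, dy = interpolate.splev(t, tck, der=1)
    return np.hypot(dx, dy)

length, abserr = integrate.quad(speed, 0, 1)
print(length)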