I am aware of the existence of this and this on the topic. However, this time I would like to settle on an actual implementation in Python.
My only problem is that the elbow point seems to change between different runs of my code. Observe the two plots shown in this post: while they look visually similar, the value of the elbow point changed significantly. Both curves were generated from an average of 20 different runs, and even then there is a significant shift in the elbow point. What precautions can I take to make sure the value falls within a certain bound?
My attempt is shown below:
import collections

def elbowPoint(points):
    # Discrete second derivative at each interior index i
    secondDerivative = collections.defaultdict(lambda: 0)
    for i in range(1, len(points) - 1):
        secondDerivative[i] = points[i + 1] + points[i - 1] - 2 * points[i]
    # The elbow is the index whose second derivative is largest
    elbow_point = max(secondDerivative, key=secondDerivative.get)
    return elbow_point
points = [0.80881476685027154, 0.79457906121371058, 0.78071124401504677, 0.77110686192601441, 0.76062373158581287, 0.75174963969985187, 0.74356408965979193, 0.73577573557299236, 0.72782434749305047, 0.71952590556748364, 0.71417942487824781, 0.7076502559300516, 0.70089375208028415, 0.69393584640497064, 0.68550490458450741, 0.68494440529025913, 0.67920157634796108, 0.67280267176628761]
max_point = elbowPoint(points)
It sounds like your actual concern is how to smooth your data, since it contains noise. In that case, perhaps you should fit a curve to the data first and then find the elbow of the fitted curve.
Whether this will work depends on the source of the noise and on whether the noise is important for your application. By the way, you may want to check how sensitive your fit is to your data by seeing how it changes (or hopefully doesn't) when a point is omitted from the fit. Obviously, with a high enough polynomial degree you will always get a good fit to a specific set of data, but you are presumably interested in the general case.
I have no idea if this approach is acceptable; intuitively, though, I'd think that sensitivity to small errors is bad. Ultimately, by fitting a curve you are saying that the underlying process is, in the ideal case, modelled by the curve, and any deviation from the curve is error/noise.
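As a rough illustration of that idea (a sketch of my own, reusing the points list and elbowPoint function from the question; the polynomial degree of 4 is an arbitrary choice):

import numpy as np

x = np.arange(len(points))
coeffs = np.polyfit(x, points, 4)            # fit a low-order polynomial to the curve
smoothed = np.polyval(coeffs, x)             # evaluate it on the same grid
smoothed_elbow = elbowPoint(list(smoothed))  # elbow of the smoothed curve

Because the polynomial averages out run-to-run noise, the elbow of the smoothed curve should move around far less between runs than the elbow of the raw averages.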
I am trying to fit a data set, which may follow a gaussian or a lorentzian, with scipy's optimize curve_fit function.
I am getting the warning:
"OptimizeWarning: Covariance of the parameters could not be estimated
warnings.warn('Covariance of the parameters could not be estimated',"
The data set looks like this (plot attached) and, as you can see, it may fit a gaussian.
My code is:
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, b, c, d):
    return a * np.exp(-((x - b) ** 2) / c) + d

def lorentzian(x, a, b, c):
    return a / (((x - b) ** 2 + a ** 2) * np.pi) + c

# Load the measured data and fit the lorentzian to it
x, y_data = np.loadtxt('in 0.6 out 0.6.dat', unpack=True)
popt, pcov = curve_fit(lorentzian, x, y_data)
Thank you!
You're getting this warning because the fitting algorithm couldn't find an appropriate solution: no matter where it moved the parameters, the fit quality didn't change. If you provide an initial guess, you're more likely to reach the solution. Given that the function parameters are relatively easy to read off the curve, you could provide most of them. For example, the center (which you called b) is around 545.5. Wikipedia also has a relationship for the value at the maximum for a slightly different form of your equation, one which lacks the c parameter that shifts the curve upwards. Providing the guess p0 = (0.1, 545.5, 0) and a bound of (0, 1E10), you get something much closer to your results, yet still unsatisfactory (next time, please provide the data array; I had to use a point extractor to plot this).
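A minimal sketch of that suggestion (my own illustration, reusing the lorentzian, x and y_data defined in the question; the exact guess and bounds are just the values quoted above and may need tuning):

from scipy.optimize import curve_fit

p0 = (0.1, 545.5, 0)    # rough guesses for (a, b, c), with b read off the plot
bounds = (0, 1e10)      # same lower/upper bound applied to every parameter
popt, pcov = curve_fit(lorentzian, x, y_data, p0=p0, bounds=bounds)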
Now, notice how you're supposed to reach a maximum value of 40, yet that seems unattainable by your model. I took the liberty of normalizing your model simply by dividing it by its maximum value and trying to fit again. I don't know if this is the appropriate normalization, but this is just to illustrate the difference. This time, the result is much more satisfactory:
Still, I think a lorentzian is a bit too narrow for your curve (especially evident if you set c to 0), which looks much more like a gaussian (given that you provided its definition but didn't use it, I guess you were planning to try it later).
Note how I didn't have to normalize y.
In summary:
Provide an initial guess to your fitting algorithm, and bounds if possible.
Always plot your models and data to see what's going on (see the sketch after this list).
Be aware of the limits of your model: what values it can and can't reach. Use this to evaluate whether a fit is even possible in the first place.
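For the second point, a quick matplotlib sketch of my own (assuming x, y_data, popt and lorentzian as defined above):

import matplotlib.pyplot as plt

plt.plot(x, y_data, 'o', label='data')               # raw measurements
plt.plot(x, lorentzian(x, *popt), '-', label='fit')  # fitted model
plt.legend()
plt.show()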
I would like to know whether in Python, and more precisely in the lmfit library, there is an option for fitting data by parts. I would like to fit data defined on different ranges and then obtain a single overall fit.
Thank you
Without a more concrete example, it is hard to give a concrete answer. But, if I understand your question correctly, you are looking to do a fit to one specific region of your data, then a fit (probably with a different functional form) to another region of your data, and then perhaps combine the multiple regions to get a final fit.
If that is correct, then yes, this can be done with lmfit (and probably with other libraries as well). Let's say you want to fit data that is sort of peak-like with an exponentially decaying background. First, isolate a region around that peak (it doesn't have to be perfect) and fit a peak (say, a Gaussian) to it. Then fit an exponential decay to all the data except the peak area. (Aside: numpy.where can be very useful in identifying the regions.) Finally, combine the two and fit the whole curve to peak + background.
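A rough sketch of that workflow with lmfit's built-in models (my own illustration; the region boundaries are placeholders, and x and y are assumed to be NumPy arrays holding your data):

import numpy as np
from lmfit.models import GaussianModel, ExponentialModel

# Hypothetical split around the peak; adjust the limits to your data.
peak_region = np.where((x > 540) & (x < 560))
bkg_region = np.where((x < 540) | (x > 560))

# Fit the peak region alone to get starting values for the Gaussian.
gauss = GaussianModel(prefix='g_')
g_fit = gauss.fit(y[peak_region], gauss.guess(y[peak_region], x=x[peak_region]), x=x[peak_region])

# Fit the rest of the data to an exponentially decaying background.
bkg = ExponentialModel(prefix='e_')
e_fit = bkg.fit(y[bkg_region], bkg.guess(y[bkg_region], x=x[bkg_region]), x=x[bkg_region])

# Combine the two models and refit the whole curve, starting from the
# parameters found on the individual regions.
model = gauss + bkg
params = g_fit.params
params.update(e_fit.params)
result = model.fit(y, params, x=x)
print(result.fit_report())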
If that is too vague and doesn't point you in the right direction, please make the question more specific.
I'm analyzing a set of data and I need to find the regression for it. The number of data points in the dataset is low (~15) and I decided to use robust linear regression for the job. The problem is that the procedure is selecting some points as outliers that do not seem to be that influential. Here is a scatter plot of the data, with influence used as marker size:
Points B and C (shown with red circles in the figure) are selected as outliers, while point A, which has far higher influence, is not. Although point A does not change the general trend of the regression, it basically defines the slope along with the point with the highest X, whereas points B and C only affect the significance of the slope. So my question has two parts:
1) What is the RLM package's method for selecting outliers, given that the most influential point is not selected, and do you know of other packages that have the kind of outlier selection I have in mind?
2) Do you think that point A is an outlier?
RLM in statsmodels is limited to M-estimators. The default Huber norm is only robust to outliers in y, but not in x, i.e. not robust to bad influential points.
See for example http://www.statsmodels.org/devel/examples/notebooks/generated/robust_models_1.html (cell In [51] and after).
Redescending norms like bisquare are able to remove bad influential points, but the solution is a local optimum and needs appropriate starting values. Methods that have a high breakdown point and are robust to x outliers, like LTS, are currently not available in statsmodels nor, AFAIK, anywhere else in Python. R has a more extensive suite of robust estimators that can handle these cases. Some extensions to add more methods and models to statsmodels.robust are in currently stalled pull requests.
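For reference, a minimal sketch of switching RLM from the default Huber norm to the redescending bisquare (Tukey biweight) norm (my own illustration; x and y stand for your data arrays):

import statsmodels.api as sm

X = sm.add_constant(x)    # design matrix with an intercept column
huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
bisquare_fit = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()

# Weights near zero mark the observations the bisquare norm effectively discards.
print(bisquare_fit.weights)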
In general and to answer the second part of the question:
In specific cases it is often difficult to declare or identify an observation as an outlier. Very often researchers use robust methods to indicate outlier candidates that need further investigation. One possible reason, for example, could be that the "outliers" were sampled from a different population. A purely mechanical, statistical identification might not be appropriate in many cases.
In this example: If we fit a steep slope and drop point A as an outlier, then points B and C might fit reasonably well and are not identified as outliers. On the other hand, if A is a reasonable point based on extra information, then maybe the relationship is nonlinear.
My guess is that LTS will declare A as the only outlier and fit a steep regression line.
I am studying physics and ran into a really interesting problem. I'm not an expert in programming, so please take this into account while reading.
I really hope someone can help me with this problem, because I have been struggling with it for about 2 months now without any success.
So here is my problem:
I have a bunch of data sets (more than 2, fewer than 20) from numerical calculations. Each set is given as x against measurement values. I have a set of sensors and want to find the best positions x for my sensors such that the integral of the interpolation through the sensor values comes as close as possible to the integral of the numerical data set.
As this sounds like a typical mathematical problem, I started to look for relevant theorems, but I did not find anything.
So I started to write a Python program based on the SLSQP minimizer. I chose it because it can handle bounds and constraints. (Note that there is always a sensor at 0 and one at 1.)
Constraints: the sensor array must stay sorted at all times, such that x_i < x_{i+1}, and the interval of x is normalized to [0, 1].
Before doing an overall optimization I started to look for good starting points and searched for maxima, minima and linear regions of my data sets. But an optimization over 40 values turned out to deliver bad results.
In my second attempt I searched for these points and defined certain regions, then optimized each region with 1 to 40 sensors. I compared the results and decided which region was worth putting more sensors into. In the last step I wanted to do an overall optimization again. But this idea didn't turn out to be the proper solution either, because the optimization had convergence problems as well.
The big problem was that my optimizer broke the boundaries. I handled this by interrupting the optimization, because once the boundaries were broken the final result was not correct. If this happens, I reset my initial setup to a homogeneous distribution. After this there are normally no boundary violations, but the result also tends to be a homogeneous distribution, which is often clearly not the optimal one.
As my algorithm works for simple examples but fails for more complex data, I think there is a general problem and not just some error in my code. Does anyone have an idea how to move on, or know of some theory on this matter?
The attached plot shows the regions in different colors. The function is shown at the bottom and the sensor positions are represented as dots. Dots at y = 1 are from the optimization with one sensor, y = 2 represents the results of the optimization with 2 variables, and so on. As the program reaches higher sensor numbers, the whole thing gets more and more homogeneous.
It is easy to see that if n is the number of sensors and n goes to infinity, you end up with a totally homogeneous distribution. But as far as I can see, this should not happen for just 10 sensors.
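For what it's worth, here is a minimal sketch of the kind of setup described above (my own illustration, not the asker's code): SLSQP with bounds, an ordering constraint, and an objective that compares the integral of the linear interpolation through the sensor positions with the integral of the full numerical data set. The arrays x_data and y_data are placeholders.

import numpy as np
from scipy.optimize import minimize

# Placeholder numerical data on [0, 1]
x_data = np.linspace(0.0, 1.0, 401)
y_data = np.sin(3 * np.pi * x_data) ** 2 + x_data

target = np.trapz(y_data, x_data)   # integral of the full data set
n_free = 8                          # free sensors between the fixed ones at 0 and 1

def objective(x_free):
    # Sensors: fixed endpoints plus the free positions, kept sorted for interpolation.
    xs = np.concatenate(([0.0], np.sort(x_free), [1.0]))
    ys = np.interp(xs, x_data, y_data)   # sensor readings taken from the data
    approx = np.trapz(ys, xs)            # integral of the piecewise-linear interpolation
    return (approx - target) ** 2

# Ordering constraint x_i + eps <= x_{i+1}, expressed as inequalities g(x) >= 0.
constraints = [{'type': 'ineq', 'fun': lambda x, i=i: x[i + 1] - x[i] - 1e-3}
               for i in range(n_free - 1)]
x0 = np.linspace(0.0, 1.0, n_free + 2)[1:-1]   # homogeneous starting distribution
res = minimize(objective, x0, method='SLSQP',
               bounds=[(0.0, 1.0)] * n_free, constraints=constraints)
print(np.concatenate(([0.0], np.sort(res.x), [1.0])))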
I'm trying to use KernelPCA for reducing the dimensionality of a dataset to 2D (both for visualization purposes and for further data analysis).
I experimented with computing KernelPCA using an RBF kernel at various values of gamma, but the result is unstable:
(each frame is a slightly different value of gamma, varying continuously from 0 to 1)
Looks like it is not deterministic.
Is there a way to stabilize it/make it deterministic?
Code used to generate transformed data:
from sklearn.decomposition import KernelPCA

def pca(X, gamma1):
    # Kernel PCA with an RBF kernel; gamma1 controls the kernel width.
    kpca = KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=gamma1)
    X_kpca = kpca.fit_transform(X)
    # X_back = kpca.inverse_transform(X_kpca)
    return X_kpca
KernelPCA should be deterministic and evolve continuously with gamma. It is different from RBFSampler, which does have built-in randomness in order to provide an efficient (more scalable) approximation of the RBF kernel.
However, what can change in KernelPCA is the order of the principal components: in scikit-learn they are returned sorted in order of descending eigenvalue, so if you have 2 eigenvalues close to each other it could be that the order changes with gamma.
My guess (from the gif) is that this is what is happening here: the axes along which you are plotting are not constant, so your data seems to jump around.
Could you provide the code you used to produce the gif?
I'm guessing it is a plot of the data points along the 2 first principal components but it would help to see how you produced it.
You could try to further inspect it by looking at the values of kpca.alphas_ (the eigenvectors) for each value of gamma.
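For instance, something along these lines (a rough sketch of my own; the attributes are called alphas_ and lambdas_ in older scikit-learn versions and eigenvectors_ and eigenvalues_ in newer ones):

import numpy as np
from sklearn.decomposition import KernelPCA

for gamma in np.linspace(0.05, 1.0, 5):
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=gamma).fit(X)
    # Leading eigenvalues and the first few eigenvector coefficients for this gamma
    print(gamma, kpca.lambdas_[:2], kpca.alphas_[:3, 0])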
Hope this makes sense.
EDIT: As you noted, it looks like the points are reflected across an axis; the most plausible explanation is that one of the eigenvectors flips sign (note this does not affect the eigenvalue).
I put together a simple gist to reproduce the issue (you'll need a Jupyter notebook to run it). You can see the sign flipping when you change the value of gamma.
As a complement, note that this kind of discrepancy happens only because you fit the KernelPCA object several times. Once you have settled on a particular gamma value and fit kpca once, you can call transform several times and get consistent results.
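If the goal is just a stable animation across gamma values, one possible workaround (my own suggestion, not something built into the library) is to fix a sign convention after each fit, e.g. force the largest-magnitude coefficient of each projected component to be positive:

import numpy as np

def fix_signs(X_kpca):
    # Flip each column so that its largest-magnitude entry is positive,
    # removing the arbitrary sign of the corresponding eigenvector.
    rows = np.argmax(np.abs(X_kpca), axis=0)
    signs = np.sign(X_kpca[rows, np.arange(X_kpca.shape[1])])
    return X_kpca * signs

Applying fix_signs to the output of pca(X, gamma1) for each gamma should at least remove the sudden mirror flips, though the component-ordering issue mentioned above can still cause jumps.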
For the classical PCA the docs mention that:
Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.
I don't know about the behavior of a single KernelPCA object that you would fit several times (I did not find anything relevant in the docs).
It does not apply to your case though as you have to fit the object with several gamma values.
So... I can't give you a definitive answer on why KernelPCA is not deterministic. The behavior resembles the differences I've observed between the results of PCA and RandomizedPCA. PCA is deterministic, but RandomizedPCA is not, and sometimes the eigenvectors are flipped in sign relative to the PCA eigenvectors.
That leads me to my vague idea of how you might get more deterministic results... maybe: use RBFSampler with a fixed seed:
from sklearn.decomposition import PCA
from sklearn.kernel_approximation import RBFSampler

def pca(X, gamma1):
    # Approximate the RBF kernel feature map with a fixed random seed,
    # then apply ordinary (deterministic) PCA to the mapped features.
    kernvals = RBFSampler(gamma=gamma1, random_state=0).fit_transform(X)
    X_kpca = PCA().fit_transform(kernvals)
    return X_kpca