I have a rectilinear (not regular) grid of data (x, y, V), where V is the value at position (x, y). I would like to use this data source to interpolate my results, so that I can fill in the gaps and plot the interpolated values (inside the range) later. (I also need griddata-like functionality to query arbitrary values inside the range.)
I looked at the SciPy documentation and here.
Here is what I tried, and the result:
It clearly doesn't match the data.
# INTERPOLATION ATTEMPT?
from scipy.interpolate import Rbf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
edges = np.linspace(-0.05, 0.05, 100)
centers = edges[:-1] + np.diff(edges[:2])[0] / 2.
XI, YI = np.meshgrid(centers, centers)
# use RBF
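# (x, y and z used below are defined in the DATA section further down)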
rbf = Rbf(x, y, z, epsilon=2)
ZI = rbf(XI, YI)
# plot the result
plt.subplots(1,figsize=(12,8))
X_edges, Y_edges = np.meshgrid(edges, edges)
lims = dict(cmap='viridis')
plt.pcolormesh(X_edges, Y_edges, ZI, shading='flat', **lims)
plt.scatter(x, y, 200, z, edgecolor='w', lw=0.1, **lims)
#decoration
plt.title('RBF interpolation?')
plt.xlim(-0.05, 0.05)
plt.ylim(-0.05, 0.05)
plt.colorbar()
plt.show()
For reference, here is my data (extracted); it has a circular pattern that I need the interpolation to recognize.
#DATA
experiment1raw = np.array([
[0,40,1,11.08,8.53,78.10,2.29],
[24,-32,2,16.52,11.09,69.03,3.37],
[8,-32,4,14.27,10.68,71.86,3.19],
[-8,-32,6,10.86,9.74,76.69,2.72],
[-24,-32,8,6.72,12.74,77.08,3.45],
[32,-24,9,18.49,13.67,64.32,3.52],
[-32,-24,17,6.72,12.74,77.08,3.45],
[16,-16,20,13.41,21.33,59.92,5.34],
[0,-16,22,12.16,14.67,69.04,4.12],
[-16,-16,24,9.07,13.37,74.20,3.36],
[32,-8,27,19.35,17.88,57.86,4.91],
[-32,-8,35,6.72,12.74,77.08,3.45],
[40,0,36,19.25,20.36,54.97,5.42],
[16,0,39,13.41,21.33,59.952,5.34],
[0,0,41,10.81,19.55,64.37,5.27],
[-16,0,43,8.21,17.83,69.34,4.62],
[-40,0,46,5.76,13.43,77.23,3.59],
[32,8,47,15.95,23.61,54.34,6.10],
[-32,8,55,5.97,19.09,70.19,4.75],
[16,16,58,11.27,26.03,56.36,6.34],
[0,16,60,9.19,24.94,60.06,5.79],
[-16,16,62,7.10,22.75,64.57,5.58],
[32,24,65,12.39,29.19,51.17,7.26],
[-32,24,73,5.40,24.55,64.33,5.72],
[24,32,74,10.03,31.28,50.96,7.73],
[8,32,76,8.68,30.06,54.34,6.92],
[-8,32,78,6.88,28.78,57.84,6.49],
[-24,32,80,5.83,26.70,61.00,6.46],
[0,-40,81,7.03,31.55,54.40,7.01],
])
#Atomic Percentages are set here
Cr1 = experiment1raw[:,3]
Mn1 = experiment1raw[:,4]
Fe1 = experiment1raw[:,5]
Co1 = experiment1raw[:,6]
#COORDINATE VALUES IN PRE-T
x_pret = experiment1raw[:,0]/1000
y_pret = experiment1raw[:,1]/1000
#important translation
x = -y_pret
y = -x_pret
z = Cr1
You used too large an epsilon in Rbf (epsilon=2, while your coordinates only span about ±0.05). The best bet is to leave it at its default and let SciPy calculate an appropriate value from the data. See the implementation here.
So setting default epsilon:
rbf = Rbf(x, y, z)
I got a pretty good interpolation for your data (subjective opinion).
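Since the question also asks for griddata-style lookups, here is a minimal sketch (my addition, not part of the original answer) using scipy.interpolate.griddata on the same variables; the probe points are made up for illustration:
from scipy.interpolate import griddata

# interpolate onto the same (XI, YI) mesh built above
ZI_grid = griddata((x, y), z, (XI, YI), method='cubic')
# query arbitrary points inside the range the same way
probe = np.array([[0.01, -0.02], [0.0, 0.0]])  # hypothetical probe points
vals = griddata((x, y), z, probe, method='cubic')
Note that for the linear and cubic methods griddata returns NaN outside the convex hull of the input points, whereas Rbf extrapolates.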
I'm currently trying to fit data to this function to extract "e/lambda":
To do so, I tried (for the first time) to fit the data using Python, and I rearranged the fit function a little:
import matplotlib.pyplot as plt
import scipy.optimize as optimize
import numpy as np
# data
Io = np.array([0.3,0.5,1.4,2.9,3.8])
Is = np.array([2.7,2.7,2.7,2.7,2.7])
R = Io/Is
T = np.array([0.,50,70,80,85])
F = R/R[0]
plt.plot(T, F, 'ro', label="original data")
# curvefit
## a = np.exp(e/lambda)
def func(T, a):
    return a * ((np.exp((np.cos(T) - 1) / np.cos(T))
                 - np.exp((1 - np.cos(T)) / np.cos(T)**2))
                / (np.exp((np.cos(T) - 1) / np.cos(T))
                   - np.exp((1 - np.cos(T)) / np.cos(T))))
popt, pcov = optimize.curve_fit(func, T, F, maxfev=100000)
t = np.linspace(0,85)
plt.plot(t, func(t, *popt), label="Fitted Curve")
plt.legend(loc='upper left')
plt.show()
However, I'm getting this message: "Optimal parameters not found: Number of calls to function has reached maxfev = 100000".
This might be more of a mathematical issue, since I've successfully tried this code with another function:
def func(T, a, b, c):
    return a + np.exp(b*T - c)
Does anyone know if it is possible to fit this function using its "true" form?
Thanks!
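Two things in the posted code are worth checking before blaming the mathematics (my reading, not a confirmed fix from this thread). First, np.cos expects radians, while T holds degrees. Second, at T = 0 both the numerator and the denominator reduce to exp(0) - exp(0) = 0, so func returns NaN there, and curve_fit cannot minimize residuals containing NaN. Also, if the comment a = np.exp(e/lambda) is taken literally, the rearranged model should use powers of a (since exp(u*e/lambda) = a**u) rather than a as a prefactor. A minimal sketch with all three addressed:
import numpy as np
import scipy.optimize as optimize

def func(T_deg, a):
    c = np.cos(np.radians(T_deg))                # np.cos expects radians
    num = a**((c - 1) / c) - a**((1 - c) / c**2)
    den = a**((c - 1) / c) - a**((1 - c) / c)
    return num / den

mask = T > 0                                     # drop the 0/0 point at T = 0
popt, pcov = optimize.curve_fit(func, T[mask], F[mask],
                                p0=[1.1], bounds=(0, np.inf))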
I'm trying to fit an asymmetric Gaussian to this data: http://ge.tt/99iNaL53 (csv file).
I have tried a skewed Gaussian model from lmfit, and also a spline, but I'm not able to get the Gaussian model to fit well, and the splines are not what I'm looking for (I don't want the spline to fit the data exactly, as shown below, and altering the level of smoothing isn't helping).
Here is code using the above data that produces the plot below. The second figure is an example of what I'm trying to achieve with the goal of reading the rise and decay time from the fit.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline
from scipy.interpolate import UnivariateSpline
from lmfit.models import SkewedGaussianModel
data = np.loadtxt('data.csv', delimiter=',')
x = data[:,0]
y = data[:,1]
# Skewed Gaussian fit
model = SkewedGaussianModel()
params = model.make_params(amplitude=400, center=3, sigma=7, gamma=1)
result = model.fit(y, params, x=x)
# Cubic Spline
cs = CubicSpline(x, y)
x_range = np.arange(x[0], x[-1], 0.1)
# Univariate spline (degree k = 1)
us = UnivariateSpline(x, y, k=1)
# Univariate spline (degree k = 5)
us2 = UnivariateSpline(x, y, k=5)
plt.scatter(x, y, marker = '^', color = 'k', linewidth = 0.5, s = 10, label = 'data')
plt.plot(x_range, cs(x_range), label = 'Cubic Spline')
plt.plot(x_range, us(x_range), label = 'Univariate Spline, k = 1')
plt.plot(x_range, us2(x_range), label = 'Univariate Spline, k = 5')
plt.plot(x, result.best_fit, color = 'red', label = 'Skewed Gaussian Attempt')
plt.xlabel('x')
plt.ylabel('y')
plt.yscale('log')
plt.ylim(1,500)
plt.legend()
plt.show()
Is there a question here? I don't see one, actually.
That result from lmfit is the best fit to a skewed Gaussian model.
You've chosen to plot the result on a log-scale. That completely changes the view of the quality of the fit or what is not fit well.
It seems like you're expecting a better fit, but not *too* good a fit. Well, it looks like your data is not perfectly represented by a single skewed Gaussian, and it seems like you weren't expecting it to be. You could try different forms for the model function, say a skewed Lorentzian or something. But your data has that low-x shoulder, which definitely does not appear in your uncited example figure.
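If you do want to capture that shoulder, one option is a composite lmfit model, for example a skewed Gaussian plus a second Gaussian; this is a sketch on my part, and the prefixes and initial values below are guesses, not fitted values:
from lmfit.models import SkewedGaussianModel, GaussianModel

model = SkewedGaussianModel(prefix='main_') + GaussianModel(prefix='sh_')
params = model.make_params(main_amplitude=400, main_center=3, main_sigma=7,
                           main_gamma=1, sh_amplitude=50, sh_center=1, sh_sigma=1)
result = model.fit(y, params, x=x)
print(result.fit_report())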
I wrote something for J. Chem. Ed. [1] that involved fitting asymmetric Gaussian functions to data; you can find the core repo here [2]. Below is a snippet showing how I went about fitting a data set, where x = data[:,0] and y = data[:,1], to the type of function you're working with:
import numpy as np
from scipy.optimize import leastsq
from scipy.special import erf
initials = [6.5, 13, 1, 0]  # initial guess
def asymGaussian(x, p):
    amp = (p[0] / (p[2] * np.sqrt(2 * np.pi)))
    spread = np.exp((-(x - p[1]) ** 2.0) / (2 * p[2] ** 2.0))
    skew = (1 + erf((p[3] * (x - p[1])) / (p[2] * np.sqrt(2))))
    return amp * spread * skew
def residuals(p, y, x):
    return y - asymGaussian(x, p)
# executes least-squares regression analysis to optimize initial parameters
cnsts = leastsq(
    residuals,
    initials,
    args=(
        data[:, 1],  # y values
        data[:, 0]   # x values
    ))[0]
y = asymGaussian(data[:, 0], cnsts)
Finally, just plot y against data[:,0]. Hope this helps!
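For the plotting itself, something like this (plain matplotlib, my addition) overlays the fit on the raw points:
import matplotlib.pyplot as plt

plt.plot(data[:, 0], data[:, 1], 'k.', label='data')
plt.plot(data[:, 0], y, 'r-', label='asymmetric Gaussian fit')
plt.legend()
plt.show()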
[1] https://pubs.acs.org/doi/10.1021/acs.jchemed.9b00818
[2] https://github.com/1mikegrn/pyGC
I have very few data points, and I want to create a line to best fit them when plotted on a semilogy scale. I have tried curve_fit and cubic interpolation from SciPy, but neither seems very reasonable to me compared to the data trend.
I would kindly ask you to check if there is a more efficient way to create a straight-line fit for the data. Extrapolation could probably do it, but I did not find good documentation on extrapolation in Python.
Your help is much appreciated.
import sys
import os
import numpy
import matplotlib.pyplot as plt
from pylab import *
from scipy.optimize import curve_fit
import scipy.optimize as optimization
from scipy.interpolate import interp1d
from scipy import interpolate
Mass500 = numpy.array([ 13.938 , 13.816, 13.661, 13.683, 13.621, 13.547, 13.477, 13.492, 13.237,
13.232, 13.07, 13.048, 12.945, 12.861, 12.827, 12.577, 12.518])
y500 = numpy.array([ 7.65103978e-06, 4.79865790e-06, 2.08218909e-05, 4.98385924e-06,
5.63462673e-06, 2.90785458e-06, 2.21166794e-05, 1.34501705e-06,
6.26021870e-07, 6.62368879e-07, 6.46735547e-07, 3.68589447e-07,
3.86209019e-07, 5.61293275e-07, 2.41428755e-07, 9.62491134e-08,
2.36892162e-07])
plt.semilogy(Mass500, y500, 'o')
# interpolation
f2 = interp1d(Mass500, y500, kind='cubic')
plt.semilogy(Mass500, f2(Mass500), '--')
# curve-fit
def line(x, a, b):
    return 10**(a*x + b)
# Initial guess.
x0 = numpy.array([1.e-6, 1.e-6])
print(optimization.curve_fit(line, Mass500, y500, x0))
popt, pcov = curve_fit(line, Mass500, y500)
print(popt)
plt.semilogy(Mass500, line(Mass500, popt[0], popt[1]), 'r-')
plt.legend(['data', 'cubic', 'curve-fit'], loc='best')
show()
There are many regression functions available in numpy and scipy.
scipy.stats.linregress is one of the simpler functions, and it returns common linear regression parameters.
Here are two options for fitting semi-log data:
Plot Transformed Data
Rescale Axes and Transform Input/Output Function Values
Given
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
# Data
mass500 = np.array([
13.938 , 13.816, 13.661, 13.683,
13.621, 13.547, 13.477, 13.492,
13.237, 13.232, 13.07, 13.048,
12.945, 12.861, 12.827, 12.577,
12.518
])
y500 = np.array([
7.65103978e-06, 4.79865790e-06, 2.08218909e-05, 4.98385924e-06,
5.63462673e-06, 2.90785458e-06, 2.21166794e-05, 1.34501705e-06,
6.26021870e-07, 6.62368879e-07, 6.46735547e-07, 3.68589447e-07,
3.86209019e-07, 5.61293275e-07, 2.41428755e-07, 9.62491134e-08,
2.36892162e-07
])
Code
Option 1: Plot Transformed Data
# Regression Function
def regress(x, y):
    """Return a tuple of predicted y values and parameters for linear regression."""
    p = stats.linregress(x, y)
    b1, b0, r, p_val, stderr = p
    y_pred = np.polyval([b1, b0], x)
    return y_pred, p
# Plotting
x, y = mass500, np.log(y500) # transformed data
y_pred, _ = regress(x, y)
plt.plot(x, y, "mo", label="Data")
plt.plot(x, y_pred, "k--", label="Pred.")
plt.xlabel("Mass500")
plt.ylabel("log y500") # label axis
plt.legend()
Output
A simple approach is to plot transformed data and label the appropriate log axes.
Option 2: Rescale Axes and Transform Input/Output Function Values
Code
x, y = mass500, y500 # data, non-transformed
y_pred, _ = regress(x, np.log(y)) # transformed input
plt.plot(x, y, "o", label="Data")
plt.plot(x, np.exp(y_pred), "k--", label="Pred.") # transformed output
plt.xlabel("Mass500")
plt.ylabel("y500")
plt.semilogy()
plt.legend()
Output
A second option is to alter the axes to semi-log scales (via plt.semilogy()). Here the non-transformed data naturally appears linear. Also notice the labels represent the data as-is.
To make an accurate regression, all that remains is to transform the data passed into the regression function (via np.log(x) or np.log10(x)) in order to return the proper regression parameters. This transformation is immediately reversed when plotting predicted values, using the complementary operation, i.e. np.exp(x) or 10**x.
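For instance, the base-10 version of Option 2 changes only the two transform calls (a small sketch reusing the regress helper above):
y_pred10, _ = regress(x, np.log10(y))  # transform input with log10
plt.plot(x, 10**y_pred10, "k--")       # reverse with 10**x when plotting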
If you want a line that will look good on log-y scale, then fit a line to the logarithms of the y-values.
def line(x, a, b):
    return a*x + b
popt, pcov = curve_fit(line, Mass500, np.log10(y500))
plt.semilogy(Mass500, 10**line(Mass500, popt[0], popt[1]), 'r-')
This is it; I only left out the cubic interpolation part which didn't seem relevant.
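Equivalently, since the model is linear in a and b after taking logs, np.polyfit gives the same line without curve_fit. A minimal sketch, assuming NumPy is imported as np:
a, b = np.polyfit(Mass500, np.log10(y500), 1)     # slope and intercept in log10 space
plt.semilogy(Mass500, 10**(a*Mass500 + b), 'r-')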
I would like to fit multiple Gaussian curves to Mass spectrometry data in Python. Right now I'm fitting the data one Gaussian at a time -- literally one range at a time.
Is there a more streamlined way to do this? Is there a way I can run the data through a loop and plot a Gaussian at each peak? I'm guessing there has to be a better way, but I've combed through the internet without finding one.
My graph for two Gaussians is shown below.
My example data can be found at: http://txt.do/dooxv
And here's my current code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as opt
from scipy.interpolate import interp1d
RGAdata = np.loadtxt("/Users/ilenemitchell/Desktop/RGAscan.txt", skiprows=14)
RGAdata=RGAdata.transpose()
x=RGAdata[0]
y=RGAdata[1]
# graph labels
plt.ylabel('ion current')
plt.xlabel('mass/charge ratio')
plt.xticks(np.arange(min(RGAdata[0]), max(RGAdata[0])+2, 2.0))
plt.ylim([10**-12.5, 10**-9])
plt.title('RGA Data Jul 25, 2017')
plt.semilogy(x, y,'b')
# fitting a Gaussian to a peak
def gauss(x, a, mu, sig):
    return a*np.exp(-(x-mu)**2/(2*sig**2))
fitx=x[(x>40)*(x<43)]
fity=y[(x>40)*(x<43)]
mu=np.sum(fitx*fity)/np.sum(fity)
sig=np.sqrt(np.sum(fity*(fitx-mu)**2)/np.sum(fity))
print (mu, sig, max(fity))
popt, pcov = opt.curve_fit(gauss, fitx, fity, p0=[max(fity),mu, sig])
plt.semilogy(x, gauss(x, popt[0],popt[1],popt[2]), 'r-', label='fit')
# second Gaussian
fitx2=x[(x>26)*(x<31)]
fity2=y[(x>26)*(x<31)]
mu=np.sum(fitx2*fity2)/np.sum(fity2)
sig=np.sqrt(np.sum(fity2*(fitx2-mu)**2)/np.sum(fity2))
print (mu, sig, max(fity2))
popt2, pcov2 = opt.curve_fit(gauss, fitx2, fity2, p0=[max(fity2),mu, sig])
plt.semilogy(x, gauss(x, popt2[0],popt2[1],popt2[2]), 'm', label='fit2')
plt.show()
In addition to Alex F's answer, you need to identify peaks and analyze their surroundings to determine the xmin and xmax values.
If you have done that, you can use this slightly refactored code, with the loop inside it plotting all relevant data:
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as opt
from scipy.interpolate import interp1d
def _gauss(x, a, mu, sig):
    return a*np.exp(-(x-mu)**2/(2*sig**2))

def gauss(x, y, xmin, xmax):
    fitx = x[(x>xmin)*(x<xmax)]
    fity = y[(x>xmin)*(x<xmax)]
    mu = np.sum(fitx*fity)/np.sum(fity)
    sig = np.sqrt(np.sum(fity*(fitx-mu)**2)/np.sum(fity))
    print(mu, sig, max(fity))
    popt, pcov = opt.curve_fit(_gauss, fitx, fity, p0=[max(fity), mu, sig])
    return _gauss(x, popt[0], popt[1], popt[2])
# Load data and define x - y
RGAdata = np.loadtxt("/Users/ilenemitchell/Desktop/RGAscan.txt", skiprows=14)
x, y = RGAdata.T
# Create the plot
fig, ax = plt.subplots()
ax.semilogy(x, y, 'b')
# Plot the Gaussians between xmin and xmax
for xmin, xmax in [(40, 43), (26, 31)]:
yG = gauss(x, y, xmin, xmax)
ax.semilogy(x, yG)
# Prettify the graph
ax.set_xlabel("mass/charge ratio")
ax.set_ylabel("ion current")
ax.set_xticks(np.arange(min(x), max(x)+2, 2.0))
ax.set_ylim([10**-12.5, 10**-9])
ax.set_title("RGA Data Jul 25, 2017")
plt.show()
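To generate the (xmin, xmax) windows automatically instead of hard-coding them, which is the peak-identification step mentioned above, scipy.signal.find_peaks is one option. This sketch is my own suggestion; the height threshold and the fixed half-width of two mass units are guesses to tune:
from scipy.signal import find_peaks

peaks, _ = find_peaks(y, height=1e-11)            # indices of candidate peaks
windows = [(x[p] - 2, x[p] + 2) for p in peaks]   # assumed half-width of 2
for xmin, xmax in windows:
    ax.semilogy(x, gauss(x, y, xmin, xmax))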
Here's some sample code for identifying peaks in a data set, to get you started. You can find a link to all the examples here.
import numpy as np
import peakutils
cb = np.array([-0.010223, ... ])
indexes = peakutils.indexes(cb, thres=0.02/max(cb), min_dist=100)
# [ 333 693 1234 1600]
interpolatedIndexes = peakutils.interpolate(range(0, len(cb)), cb, ind=indexes)
# [ 332.61234263 694.94831376 1231.92840845 1600.52446335]
You may find the lmfit module (https://lmfit.github.io/lmfit-py/) helpful. It provides a pre-built GaussianModel class for fitting a peak to a single Gaussian, and it supports adding multiple models (not necessarily Gaussians, but also other peak models and other functions that may be useful for backgrounds, and so forth) into a composite model that can be fit at once.
Lmfit supports fixing parameters or giving them bounds, so you could build a model as a sum of Gaussians whose positions are fixed, or whose centroids are limited to vary within some range (so the peaks cannot get confused). In addition, you can impose simple mathematical constraints on parameter values, so that you might require that all peak widths be the same size (or related in some simple form).
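As a minimal sketch of those constraint mechanisms (the centers, bounds, and starting values below are hypothetical, chosen only to show the API):
from lmfit.models import GaussianModel

model = GaussianModel(prefix='g1_') + GaussianModel(prefix='g2_')
params = model.make_params(g1_center=41.5, g1_sigma=0.5, g1_amplitude=1e-10,
                           g2_center=28.5, g2_sigma=0.5, g2_amplitude=1e-10)
params['g1_center'].set(min=40, max=43)   # bound a centroid to a window
params['g2_sigma'].set(expr='g1_sigma')   # constrain the two widths to be equal
result = model.fit(y, params, x=x)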
In particular, you might look at https://lmfit.github.io/lmfit-py/builtin_models.html#example-3-fitting-multiple-peaks-and-using-prefixes for an example of a fit using two Gaussians and a background function.
For peak finding, I've found scipy.signal.find_peaks_cwt to be pretty good.
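A minimal call, with a widths range that is just a guess to tune to your peak sizes:
import numpy as np
from scipy import signal

peak_indices = signal.find_peaks_cwt(y, widths=np.arange(1, 10))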