I have an energy spectrum from a cosmic ray detector. The spectrum follows an exponential curve but it will have broad (and maybe very slight) lumps in it. The data, obviously, contains an element of noise.
I'm trying to smooth out the data and then plot its gradient.
So far I've been using the SciPy spline function to smooth it and then np.gradient().
As you can see from the picture, the gradient function's method is to find the differences between each point, and it doesn't show the lumps very clearly.
I basically need a smooth gradient graph. Any help would be amazing!
I've tried 2 spline methods:
import numpy as np
from scipy import interpolate
from scipy.interpolate import spline  # note: spline() was removed in SciPy 1.3

def smooth_data(y, x, factor):
    """Interpolate y onto a grid `factor` times finer."""
    print("smoothing data by interpolation...")
    xnew = np.linspace(min(x), max(x), int(factor * len(x)))
    smoothy = spline(x, y, xnew)
    return smoothy, xnew

def smooth2_data(y, x, factor):
    xnew = np.linspace(min(x), max(x), int(factor * len(x)))
    f = interpolate.UnivariateSpline(x, y)  # note: f is built but never used below
    g = interpolate.interp1d(x, y)
    return g(xnew), xnew
Edit: I also tried numerical differentiation by minimising a total-variation-style functional (smooth_data is the same function as above):
from scipy.integrate import simps
from scipy.optimize import fmin

def minim(u, f, k):
    """Functional to be minimised to find the optimum u. f is the original, u is the approximation."""
    integral1 = abs(np.gradient(u))
    part1 = simps(integral1)
    part2 = simps(u)              # note: this is a scalar, so part2 - f broadcasts over all of f
    integral2 = abs(part2 - f)**2.
    part3 = simps(integral2)
    F = k * part1 + part3
    return F

def fit(data_x, data_y, denoising, smooth_fac):
    smy, xnew = smooth_data(data_y, data_x, smooth_fac)
    y0, xnnew = smooth_data(smy, xnew, 1. / smooth_fac)
    y0 = list(y0)
    data_y = list(data_y)
    data_fit = fmin(minim, y0, args=(data_y, denoising), maxiter=1000, maxfun=1000)
    return data_fit
However, it just returns the same graph again!
There is an interesting method published on this: Numerical Differentiation of Noisy Data. It should give you a nice solution to your problem. More details are given in another, accompanying paper. The author also gives Matlab code that implements it; an alternative implementation in Python is also available.
If you want to pursue the interpolation-with-splines method, I would suggest adjusting the smoothing factor s of scipy.interpolate.UnivariateSpline().
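For instance, a minimal sketch (x and y are assumed to hold the spectrum from the question, and the value of s needs tuning to your noise level):
from scipy.interpolate import UnivariateSpline

# s sets the smoothing trade-off: larger s -> smoother curve, looser fit
spl = UnivariateSpline(x, y, k=3, s=0.5)
dy = spl.derivative()(x)   # the spline's analytic first derivative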
Another solution would be to smooth your function through convolution (say with a Gaussian).
The paper I linked to claims to prevent some of the artifacts that come up with the convolution approach (the spline approach might suffer from similar difficulties).
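For illustration, here is a minimal sketch of the convolution approach (assuming x is evenly spaced and y is the noisy spectrum; sigma is in samples and needs tuning):
import numpy as np
from scipy.ndimage import gaussian_filter1d

smoothed = gaussian_filter1d(y, sigma=5)   # smooth by convolving with a Gaussian kernel
dy = np.gradient(smoothed, x)              # gradient of the smoothed curve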
I won't vouch for the mathematical validity of this; it looks like the paper from LANL that EOL cited would be worth looking into. Anyway, I’ve gotten decent results using SciPy’s splines’ built-in differentiation when using splev.
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from scipy.interpolate import splrep, splev

x = np.arange(0, 2, 0.008)                                  # 250 points
data = np.polynomial.polynomial.polyval(x, [0, 2, 1, -2, -3, 2.6, -0.4])
noise = np.random.normal(0, 0.1, 250)
noisy_data = data + noise

# Smoothing spline of degree 5; s sets the smoothing trade-off
f = splrep(x, noisy_data, k=5, s=3)

#plt.plot(x, data, label="raw data")
#plt.plot(x, noise, label="noise")
plt.plot(x, noisy_data, label="noisy data")
plt.plot(x, splev(x, f), label="fitted")
plt.plot(x, splev(x, f, der=1)/10, label="1st derivative")  # /10 only rescales for display
#plt.plot(x, splev(x, f, der=2)/100, label="2nd derivative")
plt.hlines(0, 0, 2)
plt.legend(loc=0)
plt.show()
You can also use scipy.signal.savgol_filter.
Example
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import savgol_filter
from random import random

# generate data
x = np.array(range(100))/10
y = np.sin(x) + np.array([random()*0.25 for _ in x])

# deriv=1 differentiates with respect to the sample index; pass delta=x[1]-x[0]
# (here 0.1) to get dy/dx directly instead of rescaling by hand afterwards
dydx = savgol_filter(y, window_length=11, polyorder=2, deriv=1)

# Plot result
plt.plot(x, y, label='Original signal')
plt.plot(x, dydx*10, label='1st Derivative')        # *10 undoes the 0.1 sample spacing
plt.plot(x, np.cos(x), label='Expected 1st Derivative')
plt.legend()
plt.show()
I have been thinking about this for a long time, but I can't figure out what the problem is. I hope you can help me. Thank you.
The Gaussian function:
F(w) = 1/(√(2π)·s) · e^(−(w−μ)²/(2s²))
Code:
import numpy as np
from matplotlib import pyplot as plt
from math import pi

def F_S(w, mu, sig):
    return np.exp(-np.power(w - mu, 2) / (2 * np.power(sig, 2))) / (np.power(2 * pi, 0.5) * sig)

w = np.linspace(-5, 5, 100)
plt.plot(w, np.real(np.fft.fft(F_S(w, 0, 1))))
plt.show()
As was mentioned before, you want the absolute value, not the real part.
A minimal example, showing the real/imaginary and abs/phase spectra:
import numpy as np
import matplotlib.pyplot as p
%matplotlib inline
n=1001 # add 1 to keep the interval a round number when using linspace
t = np.linspace(-5, 5, n ) # presumed to be time
dt=t[1]-t[0] # time resolution
print(f'sampling every {dt:.3f} sec , so at {1/dt:.1f} Sa/sec, max. freq will be {1/2/dt:.1f} Hz')
y = np.exp(-(t**2)/0.01) # signal in time
fr= np.fft.fftshift(np.fft.fftfreq(n, dt)) # shift helps with sorting the frequencies for better plotting
ft=np.fft.fftshift(np.fft.fft(y)) # fftshift only necessary for plotting in sequence
p.figure(figsize=(20,12))
p.subplot(231)
p.plot(t,y,'.-')
p.xlabel('time (secs)')
p.title('signal in time')
p.subplot(232)
p.plot(fr,np.abs(ft), '.-',lw=0.3)
p.xlabel('freq (Hz)')
p.title('spectrum, abs');
p.subplot(233)
p.plot(fr,np.real(ft), '.-',lw=0.3)
p.xlabel('freq (Hz)')
p.title('spectrum, real');
p.subplot(235)
p.plot(fr,np.angle(ft), '.-', lw=0.3)
p.xlabel('freq (Hz)')
p.title('spectrum, phase');
p.subplot(236)
p.plot(fr,np.imag(ft), '.-',lw=0.3)
p.xlabel('freq (Hz)')
p.title('spectrum, imag');
You have to change from the time scale to the frequency scale.
When you take an FFT you get the symmetric transformation, i.e. a mirror of the positive curve onto the negative side. Usually, you only look at the positive side.
Also, you should take care with the sample rate: the FFT is designed to transform time-domain input to the frequency domain, so the time step of the input matters. Pass it as np.fft.fftfreq(n, d=timestep) for your sample rate, as sketched below.
If you simply want to take the FFT of a normally distributed signal, here is another question about it with some good explanations of why you are getting this behaviour:
Fourier transform of a Gaussian is not a Gaussian, but thats wrong! - Python
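To illustrate both points, here is a small sketch building on the question's F_S (the d= argument gives a physical frequency axis, and slicing keeps only the positive side):
import numpy as np
import matplotlib.pyplot as plt

w = np.linspace(-5, 5, 100)
dt = w[1] - w[0]                        # sample spacing of the input grid
spec = np.abs(np.fft.fft(F_S(w, 0, 1)))
freqs = np.fft.fftfreq(len(w), d=dt)    # frequency of each FFT bin

half = len(w) // 2                      # positive-frequency half only
plt.plot(freqs[:half], spec[:half])
plt.xlabel('frequency')
plt.show()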
There are two mistakes in your code:
Don't take the real part; take the absolute value when plotting.
From the docs:
If A = fft(a, n), then A[0] contains the zero-frequency term (the mean
of the signal), which is always purely real for real inputs. Then
A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains
the negative-frequency terms, in order of decreasingly negative
frequency.
You can rearrange the elements with np.fft.fftshift.
The working code:
import numpy as np
from matplotlib import pyplot as plt
from math import pi
from scipy.fftpack import fftshift

def F_S(w, mu, sig):
    return np.exp(-np.power(w - mu, 2) / (2 * np.power(sig, 2))) / (np.power(2 * pi, 0.5) * sig)

w = np.linspace(-5, 5, 100)
plt.plot(w, fftshift(np.abs(np.fft.fft(F_S(w, 0, 1)))))
plt.show()
You might also want to scale the x axis into actual frequencies (see np.fft.fftfreq).
Using PyWavelets and Matplotlib's specgram on a signal gives more detailed plots with pywt.dwt than with pywt.cwt. How can I get a pywt.cwt spectrogram in a similar way?
With dwt:
import numpy as np
import pywt
import pywt.data
import matplotlib.pyplot as plot
from scipy import signal
from scipy.io import wavfile

# datamean is my 1-D signal (loaded elsewhere, e.g. from a wav file)
bA, bD = pywt.dwt(datamean, 'db2')
powerSpectrum, freqenciesFound, time, imageAxis = plot.specgram(bA, NFFT=387, Fs=100)
plot.xlabel('Time')
plot.ylabel('Frequency')
plot.show()
with this spectrogram plot:
https://imgur.com/a/bYb8bBS
With cwt:
widths = np.arange(1,5)
coef, freqs = pywt.cwt(datamean, widths,'morl')
powerSpectrum, freqenciesFound, time, imageAxis = plot.specgram(coef, NFFT = 129, Fs=100)
plot.xlabel('Time')
plot.ylabel('Frequency')
plot.show()
with this spectrogram plot:
https://imgur.com/a/GIINzJp
and for better results:
sig = datamean
widths = np.arange(1, 31)
cwtmatr = signal.cwt(sig, signal.ricker, widths)
plot.imshow(cwtmatr, extent=[-1, 1, 1, 5], cmap='PRGn', aspect='auto',
            vmax=abs(cwtmatr).max(), vmin=-abs(cwtmatr).max())
plot.show()
with this spectrogram plot:
https://imgur.com/a/TnXqgGR
How can I get a spectrogram plot for cwt (spectrogram plots 2 and 3) with a style similar to the first one?
The first spectrogram plot seems to have much more detail than the third.
This would be better as a comment, but since I lack the Karma to do that:
You don't want to make a spectrogram with wavelets, but a scalogram instead. What it looks like you're doing above is projecting your data in a scale subspace (that correlates to frequency), then taking those scales and finding the frequency content of them which is not what you probably want.
The detail and approximation coefficients are what you would want to use directly. Unfortunately, PyWavelets doesn't have a simple plotting function to do this for you, AFAIK. Matlab does, and their help page may be illuminating if I fail.
import numpy as np
import pywt
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

def scalogram(data):
    wave = 'db4'
    coeff = pywt.wavedec(data, wave)       # multilevel DWT: [cA_n, cD_n, ..., cD_1]
    levels = len(coeff)
    lengths = [len(co) for co in coeff]
    col = np.max(lengths)
    im = np.ones([levels, col])
    col = col.astype(float)
    for level in range(levels):
        y = coeff[level]
        if lengths[level] < col:
            # stretch shorter coefficient rows across the full width
            x = col / (lengths[level] + 1) * np.arange(1, len(y) + 1)
            xi = np.linspace(0, int(col), int(col))
            yi = griddata(points=x, values=y, xi=xi, method='nearest')
        else:
            yi = y
        im[level, :] = yi
    im[im == 0] = np.nan
    tiles = sum(lengths) - lengths[0]
    return im, tiles

Wxx, tiles = scalogram(data)               # data: the 1-D signal to analyse
IM = plt.imshow(np.log10(abs(Wxx)), aspect='auto')
plt.show()
There are better ways of doing that, but it works. This produces a square matrix similar to a spectrogram in Wxx, and tiles is simply a count of the number of time-frequency tilings, to compare with the number used in an STFT.
I've attached a picture of what these tilings look like.
I have two lists:
# on x-axis:
# list1:
[70.434654, 37.147266, 8.5787086, 161.40877, -27.31284, 80.429482, -81.918106, 52.320129, 64.064552, -156.40771, 12.37026, 15.599689, 166.40984, 134.93636, 142.55002, -38.073524, -38.073524, 123.88509, -82.447571, 97.934402, 106.28793]
# on y-axis:
# list2:
[86683.961, -40564.863, 50274.41, 80570.828, 63628.465, -87284.016, 30571.402, -79985.648, -69387.891, 175398.62, -132196.5, -64803.133, -269664.06, 36493.316, 22769.121, 25648.252, 25648.252, 53444.855, 684814.69, 82679.977, 103244.58]
I need to fit a sine curve, a + b·sin(2π·x + c), to the data points obtained by plotting list1 (on the x-axis) against list2 (on the y-axis), using Python.
I am not able to get any good result. Can anyone help me with suitable code and an explanation?
Thanks!
This is my graph after plotting list1 (x-axis) against list2 (y-axis).
Well, if you used lmfit, setting up and running your fit would look like this:
xdeg = [70.434654, 37.147266, 8.5787086, 161.40877, -27.31284, 80.429482, -81.918106, 52.320129, 64.064552, -156.40771, 12.37026, 15.599689, 166.40984, 134.93636, 142.55002, -38.073524, -38.073524, 123.88509, -82.447571, 97.934402, 106.28793]
y = [86683.961, -40564.863, 50274.41, 80570.828, 63628.465, -87284.016, 30571.402, -79985.648, -69387.891, 175398.62, -132196.5, -64803.133, -269664.06, 36493.316, 22769.121, 25648.252, 25648.252, 53444.855, 684814.69, 82679.977, 103244.58]
import numpy as np
from lmfit import Model
import matplotlib.pyplot as plt
def sinefunction(x, a, b, c):
return a + b * np.sin(x*np.pi/180.0 + c)
smodel = Model(sinefunction)
result = smodel.fit(y, x=xdeg, a=0, b=30000, c=0)
print(result.fit_report())
plt.plot(xdeg, y, 'o', label='data')
plt.plot(xdeg, result.best_fit, '*', label='fit')
plt.legend()
plt.show()
That is assuming your X data is in degrees, and that you really intended to convert that to radians (as numpy's sin() function requires).
But that just addresses the mechanics of how to do the fit (and I'll leave the display of results up to you - it seems like you may need the practice).
The fit result is terrible, because these data are not sinusoidal. They are also not well ordered, which isn't a problem for doing the fit, but does make it harder to see what is going on.
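As an aside, here is a small sketch of sorting by x before plotting, which makes the fitted curve easier to read (the fit itself is unaffected by ordering):
import numpy as np

order = np.argsort(xdeg)
plt.plot(np.array(xdeg)[order], np.array(y)[order], 'o', label='data')
plt.plot(np.array(xdeg)[order], result.best_fit[order], '-', label='fit')
plt.legend()
plt.show()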
I have some data, y vs x, which I would like to interpolate at a finer resolution xx using a cubic spline.
Here is my dataset:
import numpy as np
import scipy
print(np.version.version)     # 1.9.2
print(scipy.version.version)  # 0.15.1
x = np.array([0.5372973, 0.5382103, 0.5392305, 0.5402197, 0.5412042, 0.54221, 0.543209,
0.5442277, 0.5442277, 0.5452125, 0.546217, 0.5472153, 0.5482086,
0.5492241, 0.5502117, 0.5512249, 0.5522136, 0.5532056, 0.5532056,
0.5542281, 0.5552039, 0.5562125, 0.5567836])
y = np.array([0.01, 0.03108, 0.08981, 0.18362, 0.32167, 0.50941, 0.72415, 0.90698,
0.9071, 0.97955, 0.99802, 1., 0.97863, 0.9323, 0.85344, 0.72936,
0.56413, 0.36997, 0.36957, 0.17623, 0.05922, 0.0163, 0.01, ])
xx = np.array([0.5372981, 0.5374106, 0.5375231, 0.5376356, 0.5377481, 0.5378606,
0.5379731, 0.5380856, 0.5381981, 0.5383106, 0.5384231, 0.5385356,
0.5386481, 0.5387606, 0.5388731, 0.5389856, 0.5390981, 0.5392106,
0.5393231, 0.5394356, 0.5395481, 0.5396606, 0.5397731, 0.5398856,
0.5399981, 0.5401106, 0.5402231, 0.5403356, 0.5404481, 0.5405606,
0.5406731, 0.5407856, 0.5408981, 0.5410106, 0.5411231, 0.5412356,
0.5413481, 0.5414606, 0.5415731, 0.5416856, 0.5417981, 0.5419106,
0.5420231, 0.5421356, 0.5422481, 0.5423606, 0.5424731, 0.5425856,
0.5426981, 0.5428106, 0.5429231, 0.5430356, 0.5431481, 0.5432606,
0.5433731, 0.5434856, 0.5435981, 0.5437106, 0.5438231, 0.5439356,
0.5440481, 0.5441606, 0.5442731, 0.5443856, 0.5444981, 0.5446106,
0.5447231, 0.5448356, 0.5449481, 0.5450606, 0.5451731, 0.5452856,
0.5453981, 0.5455106, 0.5456231, 0.5457356, 0.5458481, 0.5459606,
0.5460731, 0.5461856, 0.5462981, 0.5464106, 0.5465231, 0.5466356,
0.5467481, 0.5468606, 0.5469731, 0.5470856, 0.5471981, 0.5473106,
0.5474231, 0.5475356, 0.5476481, 0.5477606, 0.5478731, 0.5479856,
0.5480981, 0.5482106, 0.5483231, 0.5484356, 0.5485481, 0.5486606,
0.5487731, 0.5488856, 0.5489981, 0.5491106, 0.5492231, 0.5493356,
0.5494481, 0.5495606, 0.5496731, 0.5497856, 0.5498981, 0.5500106,
0.5501231, 0.5502356, 0.5503481, 0.5504606, 0.5505731, 0.5506856,
0.5507981, 0.5509106, 0.5510231, 0.5511356, 0.5512481, 0.5513606,
0.5514731, 0.5515856, 0.5516981, 0.5518106, 0.5519231, 0.5520356,
0.5521481, 0.5522606, 0.5523731, 0.5524856, 0.5525981, 0.5527106,
0.5528231, 0.5529356, 0.5530481, 0.5531606, 0.5532731, 0.5533856,
0.5534981, 0.5536106, 0.5537231, 0.5538356, 0.5539481, 0.5540606,
0.5541731, 0.5542856, 0.5543981, 0.5545106, 0.5546231, 0.5547356,
0.5548481, 0.5549606, 0.5550731, 0.5551856, 0.5552981, 0.5554106,
0.5555231, 0.5556356, 0.5557481, 0.5558606, 0.5559731, 0.5560856,
0.5561981, 0.5563106, 0.5564231, 0.5565356, 0.5566481, 0.5567606])
I am trying to fit using the scipy InterpolatedUnivariateSpline method, interpolated with a 3rd order spline k=3, and extrapolated as zeros ext='zeros':
import scipy.interpolate as interp
yspline = interp.InterpolatedUnivariateSpline(x,y, k=3, ext='zeros')
yvals = yspline(xx)
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'ko', label='Values')
ax.plot(xx, yvals, 'b-.', lw=2, label='Spline')
plt.xlim([min(x), max(x)])
However, as you can see in this image, my spline returns NaN values :(
Is there a reason? I am pretty sure my x values are all increasing, so I am stumped as to why this is happening. I have many other datasets I am fitting using this method, and it only fails on this specific set of data.
Any help is greatly appreciated.
Thank you for reading.
EDIT!
The solution was that I had duplicate x values, with differing y values!
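A sketch of one way to drop the duplicated x values before building the spline (keeps the first y at each duplicated x):
import numpy as np

xu, idx = np.unique(x, return_index=True)   # unique, strictly increasing x
yspline = interp.InterpolatedUnivariateSpline(xu, y[idx], k=3, ext='zeros')
yvals = yspline(xx)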
For this interpolation, you should rather use scipy.interpolate.interp1d with the argument kind='cubic' (see a related SO question).
I have yet to find a use case where InterpolatedUnivariateSpline can be used in practice (or maybe I just don't understand its purpose). With your code I get the following:
So the interpolation works but shows extremely strong oscillations, making it unusable, which is typically the result I was getting with this interpolation method in the past. With a lower-order spline (e.g. k=1) it works better, but then you lose the advantage of cubic interpolation.
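A minimal sketch of that suggestion, using the question's x, y and xx (interp1d does not extrapolate by default, so fill_value mimics ext='zeros'; like the spline, it still needs strictly increasing x, so deduplicate first):
from scipy.interpolate import interp1d

f = interp1d(x, y, kind='cubic', bounds_error=False, fill_value=0.0)
yvals = f(xx)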
I've also encountered the problem of InterpolatedUnivariateSpline returning NaN values. But in my case the reason was not duplicates in the x array; it was that the values in x were decreasing, while the docs state that values "must be increasing".
So, in such a case, instead of original x and y one must supply them reversed: x[::-1] and y[::-1].
I'm trying to fit some data points in order to find the center of a circle. All of the following points are noisy data points around the circumference of the circle:
data = [(2.2176383052987667, 4.218574252410221),
(3.3041214516913033, 5.223500807396272),
(4.280815855023374, 6.461487709813785),
(4.946375258539319, 7.606952538212697),
(5.382428804463699, 9.045717060494576),
(5.752578028217334, 10.613667377465823),
(5.547729017414035, 11.92662513852466),
(5.260208374620305, 13.57722448066025),
(4.642126672822957, 14.88238955729078),
(3.820310290976751, 16.10605425390148),
(2.8099420132544024, 17.225880123445773),
(1.5731539516426183, 18.17052077121059),
(0.31752822350872545, 18.75261434891438),
(-1.2408437559671106, 19.119355580780265),
(-2.680901948575409, 19.15018791257732),
(-4.190406775175328, 19.001321726517297),
(-5.533990404926917, 18.64857428377178),
(-6.903383826792998, 17.730112542165955),
(-8.082883753215347, 16.928080323602334),
(-9.138397388219254, 15.84088004983959),
(-9.92610373064812, 14.380575762984085),
(-10.358670204629814, 13.018017342781242),
(-10.600053524240247, 11.387283417089911),
(-10.463673966507077, 10.107554951600699),
(-10.179820255235496, 8.429558128401448),
(-9.572153386953028, 7.1976672709797676),
(-8.641475289758178, 5.8312286526738175),
(-7.665976739804268, 4.782663065707469),
(-6.493033077746997, 3.8549965442534684),
(-5.092340806635571, 3.384419909199452),
(-3.6530364510489073, 2.992272643733981),
(-2.1522365767310796, 3.020780664301393),
(-0.6855406924835704, 3.0767643753777447),
(0.7848958776292426, 3.6196842530995332),
(2.0614188482646947, 4.32795711960546),
(3.2705467984691508, 5.295836809444288),
(4.359297538484424, 6.378324784240816),
(4.981264502955681, 7.823851404553242)]
I was trying to use a library like SciPy, but I'm having trouble using the available functions.
For example, there is:
# == METHOD 2 ==
from numpy import sqrt
from scipy import optimize

method_2 = "leastsq"

def calc_R(xc, yc):
    """Calculate the distance of each 2D point from the center (xc, yc)."""
    return sqrt((x - xc)**2 + (y - yc)**2)

def f_2(c):
    """Calculate the algebraic distance between the data points and the mean circle centered at c = (xc, yc)."""
    Ri = calc_R(*c)
    return Ri - Ri.mean()

# x, y and the initial guess x_m, y_m are assumed to be defined beforehand
center_estimate = x_m, y_m
center_2, ier = optimize.leastsq(f_2, center_estimate)

xc_2, yc_2 = center_2
Ri_2 = calc_R(*center_2)
R_2 = Ri_2.mean()
residu_2 = sum((Ri_2 - R_2)**2)
But this seems to assume that x and y are already defined as separate arrays? Any ideas on how to plug my data example into this function?
Your data points seem fairly clean and I see no outliers, so many circle fitting algorithms will work.
I recommend you to start with the Coope method, which works by magically linearizing the problem:
(X-Xc)² + (Y-Yc)² = R² is rewritten as
2 Xc X + 2 Yc Y + R² - Xc² - Yc² = X² + Y², then
A X + B Y + C = X² + Y², solved by linear least squares.
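A minimal sketch of that linearized solve with NumPy, using the data list from the question (back-substitution: A = 2·Xc, B = 2·Yc, C = R² − Xc² − Yc²):
import numpy as np

pts = np.asarray(data)
X, Y = pts[:, 0], pts[:, 1]

# Solve A*X + B*Y + C = X² + Y² in the least-squares sense
M = np.column_stack([X, Y, np.ones_like(X)])
A, B, C = np.linalg.lstsq(M, X**2 + Y**2, rcond=None)[0]

xc, yc = A / 2.0, B / 2.0
r = np.sqrt(C + xc**2 + yc**2)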
As a follow-up to Bas Swinckels' post, I figured I'd post my code implementing the Halir and Flusser method of fitting an ellipse:
https://github.com/bdhammel/least-squares-ellipse-fitting
pip install lsq-ellipse
Now make the data into a numpy array using
import numpy as np
data=np.array(data)
Using the above code you can find the center with the following method.
from ellipse import LsqEllipse
import numpy as np
import matplotlib.pyplot as plt
import statistics
from statistics import mean
from matplotlib.patches import Ellipse
lsqe = LsqEllipse()
lsqe.fit(data)
center, width, height, phi = lsqe.as_parameters()
plt.close('all')
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
ax.axis('equal')
ax.plot(data[:,0], data[:,1], 'ro', label='test data', zorder=1)
ellipse = Ellipse(xy=center, width=2*width, height=2*height, angle=np.rad2deg(phi),
edgecolor='b', fc='None', lw=2, label='Fit', zorder = 2)
ax.add_patch(ellipse)
print('center of fitted circle =',center, '\n','radius =', mean([width,height]),'+/- stddev=',statistics.stdev([width,height]))
plt.legend()
plt.show()
Here we just take the average of the height and width of this ellipse as the radius of the circular fit, and their standard deviation as the error. This can be modified as needed.
I do not have any experience fitting circles, but I have worked with the more general case of fitting ellipses. Doing this in a correct way with noisy data is not trivial. For this problem, the algorithm described in Numerically stable direct least squares fitting of ellipses by Halir and Flusser works pretty well. The paper includes Matlab code, which should be straightforward to translate to Numpy. Maybe you could use this algorithm to fit an ellipse and then take the average of the two axis as the radius or so. Some of the references in the paper also mention fitting circles, you might want to look those up.
I know this is an old question, but in 2019 there's a circle-fitting library in Python called circle-fit.
pip install circle-fit
You can use one of two algorithms to solve: least_squares_circle or hyper_fit.
import circle_fit as cf
xc, yc, r, _ = cf.least_squares_circle(data)
then you get xc, yc as the coordinate pair for the solution circle center.
Ian Coope's algorithm (paper here) linearizes the problem using a variable substitution. It's the most generally robust and fastest algorithm I've come across for circle fitting so far. I've implemented the algorithm in python in scikit-guess. Using the function skg.nsphere_fit for a one-line solution:
>>> import skg
>>> r, c = skg.nsphere_fit(data)
>>> r
8.138962707494084
>>> c
array([-2.45595128, 11.04352796])
Here is a plot of the result:
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 1000, endpoint=True)
plt.scatter(*np.array(data).T, color='r')
plt.plot(r * np.cos(t) + c[0], r * np.sin(t) + c[1])
plt.axis('equal')
plt.show()
plt.show()