My data file is shared in the following link.
We can plot this data using the following script.
import numpy as np
import matplotlib.pyplot as plt

def read_datafile(file_name):
    data = np.loadtxt(file_name, delimiter=',')
    return data

data = read_datafile('mah_data.csv')
x = data[:, 0]  # first column: t
y = data[:, 1]  # second column: s
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.set_title("Data")
ax1.set_xlabel('t')
ax1.set_ylabel('s')
ax1.plot(x, y, c='r', label='My data')
leg = ax1.legend()
plt.show()
How can we detect peaks in Python? I can't find a suitable peak detection algorithm.
You can use the argrelextrema function in scipy.signal to return the indices of the local maxima or local minima of an array. This works for multi-dimensional arrays as well by specifying the axis.
from scipy.signal import argrelextrema
ind_max = argrelextrema(z, np.greater) # indices of the local maxima
ind_min = argrelextrema(z, np.less) # indices of the local minima
maxvals = z[ind_max]
minvals = z[ind_min]
More specifically, one can use argrelmax or argrelmin to find the local maxima or local minima. These are thin wrappers around argrelextrema with the comparator built in, so they take only the array (plus the optional axis argument for multi-dimensional arrays).
from scipy.signal import argrelmax, argrelmin
ind_max = argrelmax(z)  # indices of the local maxima
ind_min = argrelmin(z)  # indices of the local minima
maxvals = z[ind_max]
minvals = z[ind_min]
For more details, one can refer to this link: https://docs.scipy.org/doc/scipy/reference/signal.html#peak-finding
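As a quick self-contained check (the snippets above assume z is already a NumPy array; here it is a toy signal):
import numpy as np
from scipy.signal import argrelextrema

# Toy signal with obvious local extrema.
z = np.sin(np.linspace(0, 4 * np.pi, 200))
ind_max = argrelextrema(z, np.greater)
ind_min = argrelextrema(z, np.less)
print(z[ind_max])  # values at the local maxima (close to 1.0)
print(z[ind_min])  # values at the local minima (close to -1.0)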
Try using PeakUtils (http://pythonhosted.org/PeakUtils/). Here is my solution to your question using PeakUtils.
import pandas as pd
import peakutils
data = pd.read_csv("mah_data.csv", header=None)
ts = data[0:10000][1]  # Second column of the CSV, first 10000 rows
print(ts[0:10])  # Print the first 10 values, for quick testing
# Check the PeakUtils docs for all the parameters.
# indices holds the positions in ts where peaks appear.
indices = peakutils.indexes(ts, thres=0.4, min_dist=1000)
print(indices)
You should also check out peak finding in SciPy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks_cwt.html)
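A minimal sketch of that SciPy alternative, assuming ts is the 1-D series loaded above (the widths range is a guess that should bracket the expected peak width in samples):
import numpy as np
from scipy.signal import find_peaks_cwt

# Wavelet-based peak finding; widths is the range of peak widths to try.
peak_indices = find_peaks_cwt(ts, widths=np.arange(100, 1000))
print(peak_indices)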
Try the findpeaks library.
pip install findpeaks
I cannot find the data attached, but suppose the data is a vector stored in data:
import pandas as pd
data = pd.read_csv("mah_data.csv", header=None).values
# Import library
from findpeaks import findpeaks
# If the resolution of your data is low, I would recommend the ``lookahead``
# parameter, and if your data is "bumpy", also the ``smooth`` parameter.
fp = findpeaks(lookahead=1, interpolate=10)
# Find peaks
results = fp.fit(data)
# Make plot
fp.plot()
# Results with respect to original input data.
results['df']
# Results based on interpolated smoothed data.
results['df_interp']
I am trying to implement a Butterworth filter with Python in a Jupyter notebook. I wrote this code following a tutorial.
The data come from a CSV file called Samples.csv.
The data in Samples.csv look like this:
998,4778415
1009,209592
1006,619094
1001,785406
993,9426543
990,1408991
992,736118
995,8127334
1002,381664
1006,094429
1000,634799
999,3287747
1002,318812
999,3287747
1004,427698
1008,516733
1007,964781
1002,680906
1000,14449
994,257009
The column is called Euclidian Norm. The data range from 0 to 1679.286158 and there are 1838 rows.
This is the code in Jupyter:
from scipy.signal import filtfilt
from scipy import stats
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy

def plot():
    data = pd.read_csv('Samples.csv', sep=";", decimal=",")
    sensor_data = data[['Euclidian Norm']]
    sensor_data = np.array(sensor_data)
    time = np.linspace(0, 1679.286158, 1838)
    plt.plot(time, sensor_data)
    plt.show()
    filtered_signal = bandPassFilter(sensor_data)
    plt.plot(time, sensor_data)
    plt.show()

def bandPassFilter(signal):
    fs = 4000.0
    lowcut = 20.0
    highcut = 50.0
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    order = 2
    b, a = scipy.signal.butter(order, [low, high], 'bandpass', analog=False)
    y = scipy.signal.filtfilt(b, a, signal, axis=0)
    return y

plot()
My problem is that nothing changes in my data. It doesn't filter my data; the graph of the filtered data is the same as the source data. Does anyone know what could be wrong?
The first graph is the source data and the second graph is the filtered one. They look very similar, like the same graph.
I can't comment yet, so answering here: you never use filtered_signal, and you call plt.plot with the same arguments twice.
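A minimal sketch of the corrected plot() (same names as in the question; the only change is plotting the filtered signal the second time):
def plot():
    data = pd.read_csv('Samples.csv', sep=";", decimal=",")
    sensor_data = np.array(data[['Euclidian Norm']])
    time = np.linspace(0, 1679.286158, 1838)
    filtered_signal = bandPassFilter(sensor_data)
    plt.plot(time, sensor_data)
    plt.show()
    plt.plot(time, filtered_signal)  # plot the filtered signal, not the raw data again
    plt.show()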
Here's one of my implementations with added interpolation, very similar to yours:
import numpy as np
import scipy.signal
import scipy.interpolate
import matplotlib.pyplot as plt

def butterFit(data, freq, order=2):
    b, a = scipy.signal.butter(order, freq)  # Get coefficients for filtfilt
    return scipy.signal.filtfilt(b, a, data)

def plotFilteredSplines(timeframe, data, amount_points):
    # Generate evenly spread indices for the data points.
    indices = np.arange(0, len(data), amount_points)
    cutoff_freq = 2 / (2 / 10 * len(timeframe))
    # Reshape the data with butter :)
    data = butterFit(data, cutoff_freq)
    # Plot filtered data.
    plt.plot(timeframe, data, '-.')
    interpol_x = np.linspace(timeframe[0], timeframe[-1], 100)
    # Get the cubic spline approximation function.
    interpolation = scipy.interpolate.interp1d(timeframe, data, kind='cubic')
    # Plot the interpolation over the extended time frame.
    plt.plot(interpol_x, interpolation(interpol_x), '-r')
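Hypothetical usage with the question's data (time and sensor_data as defined above; amount_points chosen arbitrarily):
time = np.linspace(0, 1679.286158, 1838)
plotFilteredSplines(time, sensor_data.ravel(), amount_points=50)
plt.show()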
I am having difficulties accessing (the right) data when using HoloViews/Bokeh, either for connected plots showing a different aspect of the dataset, or just for customising a plot with dynamic access to the data as plotted (say, a tooltip).
TL;DR: How do I add a projection plot of my dataset (a different set of dimensions, linked to the main plot, like a marginal distribution but not restricted to a histogram or distribution)? A similar solution would probably also answer a related question I asked here on SO.
Let me give an example (straight from an ipynb, should be quite reproducible):
import numpy as np
import random, pandas as pd
import bokeh
import datashader as ds
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize
hv.extension('bokeh')
With the imports set up, let's create a dataset (N target: 10e12 ;) to use with datashader. Besides the key dimensions, I really need some value dimensions (here z and z2).
N = int(10e6)
x_r = (0,100)
y_r = (100,2000)
z_r = (0,10e8)
x = np.random.randint(x_r[0]*1000,x_r[1]*1000,size=(N, 1))
y = np.random.randint(y_r[0]*1000,y_r[1]*1000,size=(N, 1))
z = np.random.randint(z_r[0]*1000,z_r[1]*1000,size=(N, 1))
z2 = np.ones((N,1)).astype(int)
df = pd.DataFrame(np.column_stack([x,y,z,z2]), columns=['x','y','z','z2'])
df[['x','y','z']] = df[['x','y','z']].div(1000, axis=0)
df
Now I plot the data, rasterized, and also activate the tooltip to see the defaults. Sure, x/y is trivial, but as I said, I care about the value dimensions. It shows z2 as x_y z2. I have a related question here on SO about value-dimension access for tooltips with the same sort of data.
from matplotlib.cm import get_cmap
palette = get_cmap('viridis')
# palette_inv = palette.reversed()
p = hv.Points(df, ['x','y'], ['z','z2'])
P = rasterize(p, aggregator=ds.sum("z2"), x_range=(0,100)).opts(cmap=palette)
P.opts(tools=["hover"]).opts(height=500, width=500, xlim=(0,100), ylim=(100,2000))
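A hedged sketch for customising that tooltip (my assumption, based on the default x_y z2 label the hover shows; a Bokeh field name containing a space needs the @{...} form):
from bokeh.models import HoverTool

# Custom hover: $x/$y are cursor coordinates, @{x_y z2} the aggregated value.
hover = HoverTool(tooltips=[('x', '$x'), ('y', '$y'), ('sum z2', '@{x_y z2}')])
P.opts(tools=[hover]).opts(height=500, width=500, xlim=(0,100), ylim=(100,2000))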
Now I can add a histogram or a marginal distribution, which is pretty close to what I want, but there are issues with this soon past the trivial defaults (e.g. P << hv.Distribution(p, kdims=['y']) or P.hist(dimension='y', weight_dimension='x_y z', num_bins=2000, normed=True)).
Both are close approaches, but do not give me the other value dimension I'd like to visualise. If I try to access the other value dimension ('x_y z'), this fails. Also, the 'x_y z2' way seems very clumsy; is there a better way?
When I do something like this, my browser/notebook extension blows up, of course.
transformed = p.transform(x=hv.dim('z'))
P << hv.Curve(transformed)
So how do I access all my data in the right way?
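For what it's worth, a hedged sketch of one direction to try (my own assumption, not a verified answer): build the projection as its own rasterized Points over (z, y) and adjoin it, rather than transforming the already-rasterized plot:
# Assumed approach: a separate rasterized projection over (z, y), adjoined
# to the main plot; whether the axes stay linked depends on the shared y dimension.
p_zy = hv.Points(df, ['z', 'y'], ['z2'])
P_zy = rasterize(p_zy, aggregator=ds.sum('z2')).opts(width=200)
P << P_zy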
I'm trying to solve a correlation problem where I need to find where a pattern sequence appears inside a signal sequence. At some point I was able to find the correct solution, but then I started optimising the code and the working version wasn't saved. Now the cross-correlation just won't come out right and I don't know why. I have restarted the kernel multiple times.
Here is the code and the links to the text files that contain the signal and the pattern.
https://drive.google.com/file/d/1tBzHMUfmcx_gGR0arYPaQ5GB9MybXKRv/view?usp=sharing
https://drive.google.com/file/d/1TeSe9t8TeVHEp2BxKXYz6Ndlpah--yLg/view?usp=sharing
import numpy as np
import matplotlib.pyplot as plt
patron = np.loadtxt('patron.txt', delimiter=',', skiprows=1)
senal = np.loadtxt('señal.txt', delimiter=',', skiprows=1)
Fs=100
ts = np.arange(0,len(senal))
plt.figure(figsize=(20,8))
plt.subplot(3,1,1)
plt.plot(ts,patron)
plt.subplot(3,1,2)
plt.plot(ts,senal)
corr = np.correlate(senal,patron,"same")
print(np.where(corr == np.amax(corr))) #this should be where correlation reaches its maximum value, and where the functions are most "similar"
plt.subplot(3,1,3)
plt.plot(ts,corr, 'r')
How do I know I had it right? I plotted the senal sequence shifted 799 places (the value I had when the code was right) with:
plt.plot(np.roll(senal, 799))  # np.roll returns a shifted copy, so plot its result
which resulted in this graph. It looks pretty intuitive that the maximum correlation turned up at index 799:
Hello, I flipped the patron and senal arguments in the correlate function and it seems right:
import numpy as np
import matplotlib.pyplot as plt
patron = np.loadtxt('patron.txt', delimiter=',', skiprows=1)
senal = np.loadtxt('señal.txt', delimiter=',', skiprows=1)
Fs=100
ts = np.arange(0,len(senal))
plt.figure(figsize=(20,8))
plt.subplot(3,1,1)
plt.plot(ts,patron)
plt.subplot(3,1,2)
plt.plot(ts,senal)
corr = np.correlate(patron,senal,'same')
print(np.argmax(corr)) #this should be where correlation reaches its maximum value, and where the functions are most "similar"
plt.subplot(3,1,3)
plt.plot(corr, 'r')
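As a cross-check, a minimal sketch using scipy.signal.correlate in 'valid' mode (my addition; it assumes patron is shorter than senal, so the argmax directly gives the offset where the pattern starts):
import numpy as np
from scipy.signal import correlate

# 'valid' mode slides the pattern across the signal without zero-padding,
# so the argmax is the starting offset of the best match.
corr_valid = correlate(senal, patron, mode='valid')
print(np.argmax(corr_valid))  # should land near the expected shift (799)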
I have to analyse a PPG signal. I found something to find the peaks, but I can't use the values of the heights. They are stored in what looks like a dictionary of arrays and I don't know how to extract the values from it. I tried dict.values(), but that didn't work.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

data = pd.read_excel('test_heartpy.xlsx')
arr = np.array(data)
time = arr[1:,0] # time in s
ECG = arr[1:,1] # ECG
PPG = arr[1:,2] # PPG
filtered = savgol_filter(PPG, 251, 3)
plt.plot(time, filtered)
plt.xlabel('Time (in s)')
plt.ylabel('PPG')
plt.grid('on')
The PPG signal looks like this. To search for the peaks I used:
# searching peaks
from scipy.signal import find_peaks
peaks, heights_peak_0 = find_peaks(PPG, height=0.2)
heights_peak = heights_peak_0.values()
plt.plot(PPG)
plt.plot(peaks, np.asarray(PPG)[peaks], "x")
plt.plot(np.zeros_like(PPG), "--", color="gray")
plt.title("PPG peaks")
plt.show()
print(heights_peak_0)
print(heights_peak)
print(peaks)
Printing:
{'peak_heights': array([0.4822998 , 0.4710083 , 0.43884277, 0.46728516, 0.47094727,
0.44702148, 0.43029785, 0.44146729, 0.43933105, 0.41400146,
0.45318604, 0.44335938])}
dict_values([array([0.4822998 , 0.4710083 , 0.43884277, 0.46728516, 0.47094727,
0.44702148, 0.43029785, 0.44146729, 0.43933105, 0.41400146,
0.45318604, 0.44335938])])
[787 2513 4181 5773 7402 9057 10601 12194 13948 15768 17518 19335]
Signal with highlighted peaks looks like this.
heights_peak_0 is the properties dict returned by scipy.signal.find_peaks.
You can find more information about what is returned in the scipy.signal.find_peaks documentation.
You can extract the array containing all the heights of the peaks with heights_peak_0["peak_heights"].
# the following will give you an array with the values of peaks
heights_peak_0['peak_heights']
# peaks are the indices where find_peaks found peaks in the original signal, so you can also get the peak values this way:
PPG[peaks]
According to the docs, the find_peaks() function returns a tuple consisting of the peak indices themselves and a properties dict. As you are only interested in the peak values, you can simply ignore the second element of the tuple and only use the first one.
Assuming you want to have the 'coordinates' of your peaks you could then combine the peak heights (y-values) with its positions (x-values) like so (based on the first code snippet given in the docs):
import matplotlib.pyplot as plt
from scipy.misc import electrocardiogram
from scipy.signal import find_peaks
x = electrocardiogram()[2000:4000]
peaks, _ = find_peaks(x, distance=150)
peaks_x_values = peaks
peaks_y_values = x[peaks]
peak_coordinates = list(zip(peaks_x_values, peaks_y_values))
print(peak_coordinates)
plt.plot(x)
plt.plot(peaks_x_values, peaks_y_values, "x")
plt.show()
Printing:
[(65, 0.705), (251, 1.155), (431, 1.705), (608, 1.96), (779, 1.925), (956, 2.09), (1125, 1.745), (1292, 1.37), (1456, 1.2), (1614, 0.81), (1776, 0.665), (1948, 0.665)]
I want to apply the Hough transform to stock prices (an array of numbers).
I read the OpenCV and scikit-image docs and examples, but found nothing on how to apply the transformation to arrays of numbers instead of images.
I created a 2-D array from the data. The first dimension is x (simply the index of the data) and the second dimension is the close price.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pywt as wt
from skimage.transform import (hough_line, hough_line_peaks,probabilistic_hough_line)
from matplotlib import cm
path = "22-31May-100Tick.csv"
df = pd.read_csv(path)
y = df.Close.values
x = np.arange(0,len(y),1)
data = []
for i in x:
    a = [i, y[i]]
    data.append(a)
data = np.array(data)
How is it possible to apply the transformation with OpenCV or scikit-image?
Thank you.
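A hedged sketch of one way to bridge the gap (my own assumption, not a confirmed answer): the scikit-image Hough functions operate on binary images, so the series can be rasterized onto a 2-D grid first and hough_line run on that.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Rasterize the (index, price) points onto an H x W binary image;
# H is an arbitrary vertical resolution chosen here for illustration.
H, W = 500, len(y)
rows = np.interp(y, (y.min(), y.max()), (H - 1, 0)).astype(int)  # invert so high price = top
img = np.zeros((H, W), dtype=bool)
img[rows, np.arange(W)] = True

# Standard straight-line Hough transform on the rasterized series.
h, theta, d = hough_line(img)
for _, angle, dist in zip(*hough_line_peaks(h, theta, d)):
    print(f"line: angle={np.rad2deg(angle):.1f} deg, distance={dist:.1f}")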