I have a numpy array named data_saw that contains thousands of floats.
When visualized, it looks like this
My task is to shift each element before a gap down by 180 (each gap here is around 180), so that I get a single continuously growing line without gaps.
I've ended up looping through the array and checking for a gap at every index (the last element is used for comparison only and is not needed in further calculations, so it is simply deleted after the alignment):
for i in range(1, len(data_saw)):
    if data_saw[i - 1] > data_saw[i]:
        data_saw[:i] -= 180
data_saw = np.delete(data_saw, -1)
I'm trying to find out whether there are more correct ways to do this with a numpy array. Are there any?
np.diff will tell you when the gap is bigger than some threshold
mask = np.diff(data_saw) < -90
To make mask the same size as data_saw, prepend a zero, because the result of diff is always smaller than the input by one element. For what I have in mind, you'll also want to convert to an integer type:
offset = np.concatenate(([0], mask)).cumsum()
To normalize the data, just add 180 * offset plus some arbitrary bias:
data_fixed = data_saw + 180 * offset
To keep the last segment at its original value, subtract the final offset:
data_fixed = data_saw + 180 * (offset - offset[-1])
More generally, subtracting offset[k] keeps the segment containing index k at its original value.
You can use a similar method to adjust data not only with arbitrary numbers of gaps, but even arbitrary gap sizes above some threshold.
First, compute the indices corresponding to the original mask using np.flatnonzero:
delta = np.diff(data_saw)
indices = np.flatnonzero(delta < -90)
Now you can simply fill in the bad elements of delta, for example with the average of the two surrounding elements:
delta[indices] = 0.5 * (delta[indices - 1] + delta[indices + 1])
The fixed data is the cumulative sum (with a zero prepended):
data_fixed = np.concatenate(([0], delta)).cumsum() + data_saw[0]
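For reference, here is a minimal end-to-end sketch of both approaches above on synthetic sawtooth-like data (the array here is made up purely for illustration):
import numpy as np

# Synthetic data: three growing segments separated by drops of about 180
data_saw = np.concatenate([np.arange(0, 30, dtype=float),
                           np.arange(30, 60, dtype=float) - 180,
                           np.arange(60, 90, dtype=float) - 360])

# Fixed-size gaps: count the drops and add 180 per drop
mask = np.diff(data_saw) < -90
offset = np.concatenate(([0], mask)).cumsum()
fixed_constant = data_saw + 180 * offset

# Arbitrary gap sizes: patch the bad differences and re-integrate
delta = np.diff(data_saw)
indices = np.flatnonzero(delta < -90)
delta[indices] = 0.5 * (delta[indices - 1] + delta[indices + 1])
fixed_general = np.concatenate(([0], delta)).cumsum() + data_saw[0]

print(np.all(np.diff(fixed_constant) >= 0))  # True: monotonically increasing
print(np.all(np.diff(fixed_general) >= 0))   # True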
Not sure if it is more correct, but this uses numpy's native methods.
import numpy as np
import matplotlib.pyplot as plt
array = np.arange(90)
array = np.concatenate([array[:30], array[30:] + 180, array[60:] + 2 * 180 + 30])
plt.plot(array)
adjusted = array - np.concatenate([[0], ((np.diff(array) >= 180).cumsum() * 180)])
plt.plot(adjusted)
I am working on finding the frequencies from a given dataset and I am struggling to understand how np.fft.fft() works. I thought I had a working script but ran into a weird issue that I cannot understand.
I have a dataset that is roughly sinusoidal and I wanted to understand what frequencies the signal is composed of. Once I took the FFT, I got this plot:
However, when I take the same dataset, slice it in half, and plot the same thing, I get this:
I do not understand why the frequency drops from 144kHz to 128kHz when it is technically the same dataset, just with a smaller length.
I can confirm a few things:
The step size between data points is 0.001.
I have tried interpolation with little luck.
If I slice the second half of the dataset, I get a different frequency as well.
If my dataset is indeed composed of both 128 and 144kHz, then why doesn't the 128 peak show up in the first plot?
What is even more confusing is that I am running a script with pure sine waves without issues:
import numpy as np
import matplotlib.pyplot as plt

T = 0.001
fs = 1 / T
def find_nearest_ind(data, value):
    return (np.abs(data - value)).argmin()
x = np.arange(0, 30, T)
ff = 0.2
y = np.sin(2 * ff * np.pi * x)
x = x[:len(x) // 2]
y = y[:len(y) // 2]
n = len(y) # length of the signal
k = np.arange(n)
T = n / fs
frq = k / T * 1e6 / 1000 # two sides frequency range
frq = frq[:len(frq) // 2] # one side frequency range
Y = np.fft.fft(y) / n # dft and normalization
Y = Y[:n // 2]
frq = frq[:50]
Y = Y[:50]
fig, (ax1, ax2) = plt.subplots(2)
ax1.plot(x, y)
ax1.set_xlabel("Time (us)")
ax1.set_ylabel("Electric Field (V / mm)")
peak_ind = find_nearest_ind(abs(Y), np.max(abs(Y)))
ax2.plot(frq, abs(Y))
ax2.axvline(frq[peak_ind], color = 'black', linestyle = '--', label = F"Frequency = {round(frq[peak_ind], 3)}kHz")
plt.legend()
plt.xlabel('Freq(kHz)')
ax1.title.set_text('dV/dX vs. Time')
ax2.title.set_text('Frequencies')
fig.tight_layout()
plt.show()
Here is a breakdown of your code, with some suggestions for improvement, and extra explanations. Working through it carefully will show you what is going on. The results you are getting are completely expected. I will propose a common solution at the end.
First set up your units correctly. I assume that you are dealing with seconds, not microseconds. You can adjust later as long as you stay consistent.
Establish the period and frequency of the sampling. This means that the Nyquist frequency for the FFT will be 500Hz:
T = 0.001 # 1ms sampling period
fs = 1 / T # 1kHz sampling frequency
Make a time domain of 30,000 points. The one-sided spectrum will then contain 15,000 bins spanning 0 to 500Hz, which implies a frequency resolution of 500Hz / 15k ≈ 0.0333Hz.
x = np.arange(0, 30, T) # time domain
n = x.size # number of points: 30000
Before doing anything else, we can define our frequency domain right here. I prefer a more intuitive approach than the one you are using: that way you don't have to redefine T or introduce the auxiliary variable k. But as long as the results are the same, it does not really matter:
F = np.linspace(0, 1 - 1/n, n) / T # Notice F[1] = 0.03333, as predicted
Now define the signal. You picked ff = 0.2, i.e. 0.2Hz. Since 0.2 / 0.0333 = 6, you would expect to see your peak exactly in bin index 6 (F[6] == 0.2). To better illustrate what is going on, let's take ff = 0.22 instead. This will bleed the spectrum into neighboring bins.
ff = 0.22
y = np.sin(2 * np.pi * ff * x)
Now take the FFT:
Y = np.fft.fft(y) / n
maxbin = np.abs(Y).argmax() # 7
maxF = F[maxbin] # 0.23333333: This is the nearest bin
Since your frequency bins are ~0.033Hz wide, the best resolution you can expect is about half a bin, ±0.017Hz. For your real data, which has much lower resolution, the error is much larger.
Now let's take a look at what happens when you halve the data size. Among other things, the frequency resolution becomes coarser. You now have the same maximum frequency of 500Hz spread over 7.5k bins instead of 15k, so the resolution drops to 0.0667Hz per bin:
n2 = n // 2 # 15000
F2 = np.linspace(0, 1 - 1 / n2, n2) / T # F2[1] = 0.06666
Y2 = np.fft.fft(y[:n2]) / n2
Take a look at what happens to the frequency estimate:
maxbin2 = np.abs(Y2).argmax() # 3
maxF2 = F2[maxbin2] # 0.2: This is the nearest bin
Hopefully, you can see how this applies to your original data. You have a resolution of ~16.1kHz per bin with the full data, and ~32.2kHz per bin with the half data. So your original result is within ~±8kHz of the true peak, while the second one is within ~±16kHz. The true frequency is therefore somewhere between 136kHz and 144kHz. Another way to look at it is to compare the bins that you showed me:
full: 128.7 144.8 160.9
half: 96.6 128.7 160.9
When you take out exactly half of the data, you drop every other frequency bin. If your peak was originally closest to 144.8kHz, and you drop that bin, it will end up in either 128.7 or 160.9.
Note: Based on the bin numbers you show, I suspect that your computation of frq is a little off. Notice the 1 - 1/n in my linspace expression. You need that to get the right frequency axis: the last bin is (1 - 1/n) / T, not 1 / T, no matter how you compute it.
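If you'd rather not build that axis by hand, a quick cross-check sketch using np.fft.fftfreq (same n and T as above) shows it produces the same bin spacing:
import numpy as np

n = 30000
T = 0.001

F = np.linspace(0, 1 - 1/n, n) / T  # frequency axis built by hand, as above
F_alt = np.fft.fftfreq(n, d=T)      # numpy's helper; the second half wraps to negative frequencies

print(np.allclose(F[:n//2], F_alt[:n//2]))  # True: the one-sided halves agree
print(F[1], F_alt[1])                       # 0.0333... for both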
So how to get around this problem? The simplest solution is to do a parabolic fit on the three points around your peak. That is usually a sufficiently good estimator of the true frequency in the data when you are looking for essentially perfect sinusoids.
def peakF(F, Y):
    index = np.abs(Y).argmax()
    # Compute the offset on the normalized domain [-1, 0, 1], not on F[index - 1:index + 2]
    y = np.abs(Y[index - 1:index + 2])
    # Vertex of the parabola through the three points: the offset from zero,
    # which is the scaled offset from F[index]
    vertex = 0.5 * (y[0] - y[2]) / (y[0] - 2 * y[1] + y[2])
    # F[1] is the bin resolution
    return F[index] + vertex * F[1]
In case you are wondering how I got the formula for the parabola: I solved the system with x = [-1, 0, 1] and y = Y[index - 1:index + 2]. The matrix equation is
[(-1)^2  -1  1]   [a]   [Y[index - 1]]
[  0^2    0  1] * [b] = [Y[index]    ]
[  1^2    1  1]   [c]   [Y[index + 1]]
Computing the offset using a normalized domain and scaling afterwards is almost always more numerically stable than using whatever huge numbers you have in F[index - 1:index + 2].
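As a quick sanity check of that closed-form vertex, here is a small sketch comparing it against np.polyfit on made-up magnitudes:
import numpy as np

y = np.array([0.26, 0.39, 0.12])  # |Y| at bins index-1, index, index+1 (made-up values)
a, b, c = np.polyfit([-1, 0, 1], y, 2)
vertex_polyfit = -b / (2 * a)                                   # vertex of a*x**2 + b*x + c
vertex_closed = 0.5 * (y[0] - y[2]) / (y[0] - 2 * y[1] + y[2])  # closed form used above
print(np.isclose(vertex_polyfit, vertex_closed))                # True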
You can plug the results in the example into this function to see if it works:
>>> peakF(F, Y)
0.2261613409657391
>>> peakF(F2, Y2)
0.20401580936430794
As you can see, the parabolic fit gives an improvement, however slight. There is no replacement for just increasing frequency resolution through more samples though!
I have a large field of 2D-position data, given as two arrays x and y, where len(x) == len(y). I would like to return the array of indices idx_masked at which (x[idx_masked], y[idx_masked]) is masked by an N x N int array called mask. That is, mask[x[idx_masked], y[idx_masked]] == 1. The mask array consists of 0s and 1s only.
I have come up with the following solution, but it (specifically, the last line below) is very slow, given that I have N x N = 5000 x 5000, repeated 1000s of times:
import numpy as np
import matplotlib.pyplot as plt
# example mask of one corner of a square
N = 100
mask = np.zeros((N, N))
mask[0:10, 0:10] = 1
# example x and y position arrays in arbitrary units
x = np.random.uniform(0, 1, 1000)
y = np.random.uniform(0, 1, 1000)
x_bins = np.linspace(np.min(x), np.max(x), N)
y_bins = np.linspace(np.min(y), np.max(y), N)
x_bin_idx = np.digitize(x, x_bins)
y_bin_idx = np.digitize(y, y_bins)
idx_masked = np.ravel(np.where(mask[y_bin_idx - 1, x_bin_idx - 1] == 1))
plt.imshow(mask[::-1, :])
plt.scatter(x, y, color='red')
plt.scatter(x[idx_masked], y[idx_masked], color='blue')
Is there a more efficient way of doing this?
Given that mask overlays your field with identically-sized bins, you do not need to define the bins explicitly: *_bin_idx can be determined at each location with a simple floor division, since you know that each bin is 1 / N in size. I would recommend using the known range 0 to 1 (what you passed into np.random.uniform) for the total width rather than x.max() - x.min(), provided of course that you know the expected extent of the data.
x0 = 0 # or x.min()
x1 = 1 # or x.max()
x_bin = (x1 - x0) / N
x_bin_idx = ((x - x0) // x_bin).astype(int)
# ditto for y
This will be faster and simpler than digitizing, and avoids the extra bin at the beginning.
For most purposes, you do not need np.where. 90% of the questions asking about it (including this one) should not be using where. If you want a fast way to access the necessary elements of x and y, just use a boolean mask. The mask is simply
selection = mask[x_bin_idx, y_bin_idx].astype(bool)
If mask is already boolean (which it should be anyway), the expression mask[x_bin_idx, y_bin_idx] is sufficient. It results in an array of the same size as x_bin_idx and y_bin_idx (which are the same size as x and y) containing the mask value for each of your points. You can use the mask as
x[selection] # Elements of x in mask
y[selection] # Elements of y in mask
If you absolutely need the integer indices, where is still not your best option.
indices = np.flatnonzero(selection)
OR
indices = selection.nonzero()[0]
If your goal is simply to extract values from x and y, I would recommend stacking them together into a single array:
coords = np.stack((x, y), axis=1)
This way, instead of having to apply indices twice, you can extract the values with just
coords[selection, :]
OR
coords[indices, :]
Depending on the relative densities of mask and x and y, either the boolean masking or linear indexing may be faster. You will have to time some relevant cases to get a better intuition.
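A rough sketch for timing the two options yourself (sizes and densities here are placeholders; substitute your real ones):
import timeit
import numpy as np

N = 5000
mask = np.zeros((N, N), dtype=bool)
mask[0:500, 0:500] = True

x = np.random.uniform(0, 1, 100_000)
y = np.random.uniform(0, 1, 100_000)
x_bin_idx = np.clip((x * N).astype(int), 0, N - 1)
y_bin_idx = np.clip((y * N).astype(int), 0, N - 1)

selection = mask[x_bin_idx, y_bin_idx]
indices = np.flatnonzero(selection)

print(timeit.timeit(lambda: x[selection], number=100))  # boolean masking
print(timeit.timeit(lambda: x[indices], number=100))    # integer (fancy) indexing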
I have the following code snippet (for Hough circle transform):
for r in range(1, 11):
    for t in range(0, 360):
        trad = np.deg2rad(t)
        b = x - r * np.cos(trad)
        a = y - r * np.sin(trad)
        b = np.floor(b).astype('int')
        a = np.floor(a).astype('int')
        A[a, b, r - 1] += 1
Where A is a 3D array of shape (height, width, 10), and
height and width represent the size of a given image.
My goal is to convert the snippet exclusively to numpy code.
My attempt is this:
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_cos_t = np.cos(arr_t)
arr_sin_t = np.sin(arr_t)
arr_rcos = arr_r[..., np.newaxis] * arr_cos_t[np.newaxis, ...]
arr_rsin = arr_r[..., np.newaxis] * arr_sin_t[np.newaxis, ...]
arr_a = (y - arr_rsin).flatten().astype('int')
arr_b = (x - arr_rcos).flatten().astype('int')
Where x and y are two scalar values.
I am having trouble converting the increment part: A[a, b, r] += 1. I thought about it this way: A[a, b, r] counts the number of occurrences of the triple (a, b, r), so one idea was to use a Cartesian product (but the arrays are too large).
Any tips or tricks I can use?
Thank you very much!
Edit: after filling A, I need (a, b, r) = argmax(A). The tuple (a, b, r) identifies a circle and its value in A represents the confidence value, so I want the tuple with the highest value in A. This is part of the voting step of the Hough circle transform: finding circle parameters with unknown radius.
Method #1
Here's one way leveraging broadcasting to get the counts and update A (this assumes the a and b values computed in the intermediate steps are positive ones) -
d0,d1,d2 = A.shape
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_b = np.floor(x - arr_r[:,None] * np.cos(arr_t)).astype('int')
arr_a = np.floor(y - arr_r[:,None] * np.sin(arr_t)).astype('int')
idx = (arr_a*d1*d2) + (arr_b * d2) + (arr_r-1)[:,None]
A.flat[:idx.max()+1] += np.bincount(idx.ravel())
# OR A.flat += np.bincount(idx.ravel(), minlength=A.size)
Method #2
Alternatively, we could avoid bincount to replace the last step in approach #1, like so -
idx.ravel().sort()
idx.shape = (-1)
grp_idx = np.flatnonzero(np.concatenate(([True], idx[1:]!=idx[:-1],[True])))
A.flat[idx[grp_idx[:-1]]] += np.diff(grp_idx)
Improvement with numexpr
We could also leverage numexpr module for faster sine, cosine computations, like so -
import numexpr as ne
arr_r2D = arr_r[:,None]
arr_b = ne.evaluate('floor(x - arr_r2D * cos(arr_t))').astype(int)
arr_a = ne.evaluate('floor(y - arr_r2D * sin(arr_t))').astype(int)
Alternatively, np.add.at can perform the unbuffered accumulation directly:
np.add.at(A, (arr_a, arr_b, (arr_r - 1)[:, None]), 1)
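Regarding the edit about argmax(A): once A has been filled by any of the methods above, a minimal sketch to recover the (a, b, r) tuple with the highest vote count is:
a_best, b_best, r_idx = np.unravel_index(np.argmax(A), A.shape)
r_best = r_idx + 1  # radii were stored at index r - 1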
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see whether there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency if you prefer) in each radius. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2xRadius square
for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])    # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area.....dont ask
    for x in range(iRes):
        for y in range(iRes):
            if (x**2 + y**2) < (i * iRes)**2:
                if (x**2 + y**2) >= (i * iRes)**2:  # Checks for radius, and puts in cartesian space
                    radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                    radJ[x+iRes, -y+iRes] = (n-m) / a
                    radJ[-x+iRes, y+iRes] = (n-m) / a
                    radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (the distance from the middle pixel) and the array to rotate, respectively, I would suggest:
from scipy import interpolate
import numpy as np
y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2,len(y)/2,len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01) # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, y in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + y ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and compute a symmetric array ri_data_r that has the same length as your data array and contains the distance between each data point and the middle of the array. The code does this automatically.
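As a side note, since interp1d accepts array arguments, the double loop can be avoided; a sketch, reusing X, Y and interpol_index from above (the result matches the loop version up to a transpose, which is identical here because the radius grid is symmetric):
# Evaluate the interpolator on the whole radius grid at once
R_grid = np.sqrt(X ** 2 + Y ** 2)
profilegrid_vectorized = interpol_index(R_grid)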
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbour). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, and is therefore faster but also less precise and needs much more memory, because it stores every 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one would just keep adding values with each tilt, ending up outside the original range. This was "solved" by clipping the matrix to a global_min and global_max (consult the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please, read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp
def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of the time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = tilt.min() if global_min > tilt.min() else global_min
        global_max = tilt.max() if global_max < tilt.max() else global_max
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np
def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size / 2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size / 2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size - 1)
    ind_y = np.clip(ind_y, 0, vector.size - 1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
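create_circular_mask is not included in the answer; a minimal sketch of such a helper, assuming it should return a boolean mask matching the call above, could look like this:
import numpy as np

def create_circular_mask(h, w, center=None, radius=None):
    # Boolean mask that is True inside a circle of the given radius
    if center is None:
        center = ((w - 1) / 2, (h - 1) / 2)
    if radius is None:
        radius = min(center[0], center[1], w - 1 - center[0], h - 1 - center[1])
    Y, X = np.ogrid[:h, :w]
    dist = np.sqrt((X - center[0]) ** 2 + (Y - center[1]) ** 2)
    return dist <= radius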
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
I have an array of CSV values representing a digital output. It has been gathered with an analog oscilloscope, so it is not a perfect digital signal. I'm trying to filter the data to get a clean digital signal for calculating the periods (which may vary).
I would also like to define the maximum error I get from this filtering.
Something like this:
Idea
Apply a threshold to the data. Here is pseudocode:
for data_point_raw in data_array:
    if data_point_raw < 0.8:
        data_point_perfect = LOW
    elif data_point_raw > 2:
        data_point_perfect = HIGH
    else:
        # area between thresholds
        if previous_data_point_perfect == LOW:
            data_point_perfect = LOW
        if previous_data_point_perfect == HIGH:
            data_point_perfect = HIGH
There are two problems bothering me.
This seems like a common problem in digital signal processing, but I haven't found a predefined standard function for it. Is this an OK way to perform the filtering?
How would I get the maximum error?
Here's a bit of code that might help.
from __future__ import division
import numpy as np
def find_transition_times(t, y, threshold):
    """
    Given the input signal `y` with samples at times `t`,
    find the times where `y` increases through the value `threshold`.

    `t` and `y` must be 1-D numpy arrays.

    Linear interpolation is used to estimate the time `t` between
    samples at which the transitions occur.
    """
    # Find where y crosses the threshold (increasing).
    lower = y < threshold
    higher = y >= threshold
    transition_indices = np.where(lower[:-1] & higher[1:])[0]

    # Linearly interpolate the time values where the transition occurs.
    t0 = t[transition_indices]
    t1 = t[transition_indices + 1]
    y0 = y[transition_indices]
    y1 = y[transition_indices + 1]
    slope = (y1 - y0) / (t1 - t0)
    transition_times = t0 + (threshold - y0) / slope

    return transition_times

def periods(t, y, threshold):
    """
    Given the input signal `y` with samples at times `t`,
    find the time periods between the times at which the
    signal `y` increases through the value `threshold`.

    `t` and `y` must be 1-D numpy arrays.
    """
    transition_times = find_transition_times(t, y, threshold)
    deltas = np.diff(transition_times)
    return deltas

if __name__ == "__main__":
    import matplotlib.pyplot as plt

    # Time samples
    t = np.linspace(0, 50, 501)
    # Use a noisy time to generate a noisy y.
    tn = t + 0.05 * np.random.rand(t.size)
    y = 0.6 * (1 + np.sin(tn) + (1./3) * np.sin(3*tn) + (1./5) * np.sin(5*tn) +
               (1./7) * np.sin(7*tn) + (1./9) * np.sin(9*tn))

    threshold = 0.5
    deltas = periods(t, y, threshold)
    print("Measured periods at threshold %g:" % threshold)
    print(deltas)
    print("Min:  %.5g" % deltas.min())
    print("Max:  %.5g" % deltas.max())
    print("Mean: %.5g" % deltas.mean())
    print("Std dev: %.5g" % deltas.std())

    trans_times = find_transition_times(t, y, threshold)

    plt.plot(t, y)
    plt.plot(trans_times, threshold * np.ones_like(trans_times), 'ro-')
    plt.show()
The output:
Measured periods at threshold 0.5:
[ 6.29283207 6.29118893 6.27425846 6.29580066 6.28310224 6.30335003]
Min: 6.2743
Max: 6.3034
Mean: 6.2901
Std dev: 0.0092793
You could use numpy.histogram and/or matplotlib.pyplot.hist to further analyze the array returned by periods(t, y, threshold).
This is not an answer to your question, just a suggestion that may help. I'm writing it here because I can't put an image in a comment.
I think you should normalize the data somehow before any processing.
After normalization to the range 0...1, you should apply your filter.
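A minimal sketch of such a normalization (data here stands for your raw oscilloscope samples as a 1-D numpy array):
import numpy as np

# `data` is assumed to already hold the raw samples from your CSV
normalized = (data - data.min()) / (data.max() - data.min())  # rescale to 0...1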
If you're really only interested in the period, you could plot the Fourier transform: you'll have a peak at the frequency where the signal occurs (and so you have the period). The wider the peak in the Fourier domain, the larger the error in your period measurement.
import numpy as np
data = np.asarray(my_data)
spectrum = np.abs(np.fft.fft(data))
# the dominant period is 1 / (frequency of the largest peak in `spectrum`)
Your filtering is fine; it's basically the same as a Schmitt trigger, but the main problem you might have with it is speed. The benefit of using NumPy is that it can be as fast as C, whereas with your approach you have to iterate over each element in Python.
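If you do want to keep the two-threshold (hysteresis) behaviour itself rather than switch methods, here is one way it could be vectorized; a sketch only, with the threshold values taken from your pseudocode:
import numpy as np

def hysteresis(raw, low=0.8, high=2.0):
    # Vectorized Schmitt-trigger style discretization: samples above `high`
    # become 1, samples below `low` become 0, and samples in between inherit
    # the most recent decided state.
    state = np.full(raw.shape, -1, dtype=int)      # -1 marks "undecided"
    state[raw > high] = 1
    state[raw < low] = 0
    decided = np.flatnonzero(state >= 0)
    if decided.size == 0:
        return np.zeros_like(state)
    # For every sample, index of the latest decided sample at or before it
    last = np.maximum.accumulate(np.where(state >= 0, np.arange(raw.size), 0))
    out = state[last]
    out[:decided[0]] = state[decided[0]]           # backfill the leading run
    return out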
You can achieve something similar using the median filter from SciPy. The following should achieve a similar result (and not be dependent on any magnitudes):
import numpy
import scipy.signal

filtered = scipy.signal.medfilt(raw)
filtered = numpy.where(filtered > numpy.mean(filtered), 1, 0)
You can tune the strength of the median filtering with medfilt(raw, n_samples); n_samples defaults to 3 and must be odd.
As for the error, that's going to be very subjective. One way would be to discretise the signal without filtering and then compare for differences. For example:
discrete = numpy.where(raw > numpy.mean(raw), 1, 0)
errors = np.count_nonzero(filtered != discrete)
error_rate = errors / len(discrete)