Python rising/falling edge oscilloscope-like trigger

I'm trying to detect rising and/or falling edges in a numpy vector, based on a trigger value. This is kinda like how oscilloscope triggering works.
The numpy vector contains floating point values. The trigger itself is a floating point value. I would expect this to work as such:
import numpy as np
data = np.array([-1, -0.5, 0, 0.5, 1, 1.5, 2])
trigger = rising_edge(data, 0.3)
print(trigger)
[3]
In other words, it would work like np.where, returning a vector containing the positions where the condition is true.
I know I can simply iterate over the vector and get the same result (which is what I'm doing now), but it isn't ideal, as you can imagine. Is there some functionality built into NumPy that can do this using optimized C code? Or maybe in some other library?
Thank you.

We could compare one-off slices (the array against itself shifted by one) to the trigger for smaller-than and greater-than, like so -
In [41]: data = np.array([-1, -0.5, 0, 0.5, 1, 1.5, 2, 0, 0.5])
In [43]: trigger_val = 0.3
In [44]: np.flatnonzero((data[:-1] < trigger_val) & (data[1:] > trigger_val))+1
Out[44]: array([3, 8])
If you would like to include equality as well, i.e. <= or >=, simply add that to the comparison.
To detect both rising and falling edges, add the comparison the other way around as well -
In [75]: data = np.array([-1, -0.5, 0, 0.5, 1, 1.5, 2, 0.5, 0])
In [76]: trigger_val = 0.3
In [77]: mask1 = (data[:-1] < trigger_val) & (data[1:] > trigger_val)
In [78]: mask2 = (data[:-1] > trigger_val) & (data[1:] < trigger_val)
In [79]: np.flatnonzero(mask1 | mask2)+1
Out[79]: array([3, 8])
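For reference, here is a small sketch of my own (not part of the original answer) that packages those comparisons into reusable functions, using the >=/<= equality variants mentioned above:
import numpy as np

def rising_edge(data, trigger_val):
    # first sample at/above the trigger that follows a sample below it
    return np.flatnonzero((data[:-1] < trigger_val) & (data[1:] >= trigger_val)) + 1

def falling_edge(data, trigger_val):
    # first sample at/below the trigger that follows a sample above it
    return np.flatnonzero((data[:-1] > trigger_val) & (data[1:] <= trigger_val)) + 1

data = np.array([-1, -0.5, 0, 0.5, 1, 1.5, 2, 0.5, 0])
print(rising_edge(data, 0.3))   # [3]
print(falling_edge(data, 0.3))  # [8]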

So I was just watching the latest 3Blue1Brown video on convolution when I realized a new way of doing this:
def rising_edge(data, thresh):
    sign = data >= thresh
    pos = np.where(np.convolve(sign, [1, -1]) == 1)
    return pos
So, get all the positions where the data is larger or equal to the threshold, do a convolution over it with [1, -1], and then just find where the convolution returns a 1 for a rising edge. Want a falling edge? Look for -1 instead.
Pretty neat, if I do say so myself. And it's about 5-10% faster.
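For completeness, the falling-edge counterpart hinted at above could look like this (my sketch, not the author's code). Note that the full convolution is one element longer than the data, so a trace that ends above the threshold also flags a boundary hit at index len(data), which you may want to drop:
def falling_edge(data, thresh):
    sign = data >= thresh
    # the -1 entries of the convolution mark high-to-low transitions
    pos = np.where(np.convolve(sign, [1, -1]) == -1)
    return pos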


Getting the right sign when calculating repeated sign switches in numpy array

I am trying to simulate a grid of spins in python that can change their orientation (represented by the sign):
>>> import numpy as np
>>> spin_values = np.random.choice([-1, 1], (2, 2))
>>> spin_values
array([[-1,  1],
       [ 1,  1]])
I then draw two sets of random indices into that grid for spins that have a certain probability of switching their orientation, say:
>>> i = np.array([1, 1])
>>> j = np.array([0, 0])
>>> switches = np.array([-1, -1])
Here i and j contain the indices that might change, and switches indicates whether they do switch (-1) or keep their orientation (1). My idea for calculating the new orientations was:
>>> spin_values[i, j] *= switches
When a spin orientation only changes once this works fine. However, when it is supposed to change twice (as with the example values) it only changes once, therefore giving me a wrong result.
>>> spin_values
array([[-1,  1],
       [-1,  1]])
How could I get the right results while having a short run time (this has to be done many times on a bigger grid)?
I would use numpy.unique to get the counts of the unique index pairs and multiply by (-1) ** count:
idx, cnt = np.unique(np.vstack([i, j]), axis=1, return_counts=True)
spin_values[tuple(idx)] *= (-1) ** cnt
Updated spin_values:
array([[-1,  1],
       [ 1,  1]])
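Here is a minimal end-to-end sketch of that idea (my own expansion, not part of the original answer), which also drops the entries where switches == 1 before counting, since only the actual flips matter:
import numpy as np

spin_values = np.array([[-1, 1],
                        [ 1, 1]])
i = np.array([1, 1])
j = np.array([0, 0])
switches = np.array([-1, -1])

flip = switches == -1                    # keep only the indices that really flip
idx, cnt = np.unique(np.vstack([i[flip], j[flip]]), axis=1, return_counts=True)
spin_values[tuple(idx)] *= (-1) ** cnt   # an even count leaves the spin unchanged
print(spin_values)
# [[-1  1]
#  [ 1  1]]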

Map numbers to their percentiles

I would like to apply the result of numpy.percentile to its argument, i.e., map every number in the input vector to its quantile.
E.g., if v=np.array([1,2,3,4]), and I want just two quantiles (bigger and smaller than the median), I would get np.array([0,0,1,1]) telling me that the first two elements of v are smaller than the median and the last two are bigger than the median.
Note that I am interested in, say, deciles, not just the median!
In other words, @PaulPanzer hit the nail on the head:
np.digitize(v, np.percentile(v, quantiles))
Thanks!
(v > np.percentile(v, 50)).astype(int)
Out[93]:
array([0, 0, 1, 1])
Use np.digitize:
perc = np.percentile(data, q)
indices = np.digitize(data, perc)
Example q = [25,50,75], data = np.arange(8):
indices
# array([0, 0, 1, 1, 2, 2, 3, 3])
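Following the same pattern, a quick sketch for the deciles the question ultimately asks about (my example values, not from the answer):
import numpy as np

v = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
deciles = np.percentile(v, np.arange(10, 100, 10))  # nine cut points
print(np.digitize(v, deciles))                      # decile index (0-9) of each value
# [0 1 2 3 4 5 6 7 8 9]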

Wrapped (circular) 2D interpolation in Python

I have angular data on a domain that is wrapped at pi radians (i.e. 0 = pi). The data are 2D, where one dimension represents the angle. I need to interpolate this data onto another grid in a wrapped way.
In one dimension, the np.interp function takes a period kwarg (for NumPy 1.10 and later):
http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html
This is exactly what I need, but I need it in two dimensions. I'm currently just stepping through columns in my array and using np.interp, but this is of course slow.
Anything out there that could achieve this same outcome but faster?
An explanation of how np.interp works
Use the source, Luke!
The numpy doc for np.interp makes the source particularly easy to find, since it has the link right there, along with the documentation. Let's go through this, line by line.
First, recall the parameters:
"""
x : array_like
The x-coordinates of the interpolated values.
xp : 1-D sequence of floats
The x-coordinates of the data points, must be increasing if argument
`period` is not specified. Otherwise, `xp` is internally sorted after
normalizing the periodic boundaries with ``xp = xp % period``.
fp : 1-D sequence of floats
The y-coordinates of the data points, same length as `xp`.
period : None or float, optional
A period for the x-coordinates. This parameter allows the proper
interpolation of angular x-coordinates. Parameters `left` and `right`
are ignored if `period` is specified.
"""
Let's take a simple example of a triangular wave while going through this:
xp = np.array([-np.pi/2, -np.pi/4, 0, np.pi/4])
fp = np.array([0, -1, 0, 1])
x = np.array([-np.pi/8, -5*np.pi/8]) # Peskiest points possible }:)
period = np.pi
Now, I start off with the period != None branch in the source code, after all the type-checking happens:
# normalizing periodic boundaries
x = x % period
xp = xp % period
This just ensures that all values of x and xp supplied are between 0 and period. So, since the period is pi, but we specified x and xp to be between -pi/2 and pi/2, this will adjust for that by adding pi to all values in the range [-pi/2, 0), so that they effectively appear after pi/2. So our xp now reads [pi/2, 3*pi/4, 0, pi/4].
asort_xp = np.argsort(xp)
xp = xp[asort_xp]
fp = fp[asort_xp]
This is just ordering xp in increasing order. This is especially required after performing that modulo operation in the previous step. So, now xp is [0, pi/4, pi/2, 3*pi/4]. fp has also been shuffled accordingly, [0, 1, 0, -1].
xp = np.concatenate((xp[-1:]-period, xp, xp[0:1]+period))
fp = np.concatenate((fp[-1:], fp, fp[0:1]))
return compiled_interp(x, xp, fp, left, right) # Paraphrasing a little
np.interp does linear interpolation. When trying to interpolate between two points a and b present in xp, it only uses the values of f(a) and f(b) (i.e., the values of fp at the corresponding indices). So what np.interp is doing in this last step is to take the point xp[-1] and put it in front of the array, and take the point xp[0] and put it after the array, but after subtracting and adding one period respectively. So you now have a new xp that looks like [-pi/4, 0, pi/4, pi/2, 3*pi/4, pi]. Likewise, fp[0] and fp[-1] have been concatenated around, so fp is now [-1, 0, 1, 0, -1, 0].
Note that after the modulo operations, x had been brought into the [0, pi] range too, so x is now [7*pi/8, 3*pi/8]. Which lets you easily see that you'll get back [-0.5, 0.5].
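Putting the whole 1-D walkthrough together as a quick check (my own verification snippet):
import numpy as np

xp = np.array([-np.pi/2, -np.pi/4, 0, np.pi/4])
fp = np.array([0, -1, 0, 1])
x = np.array([-np.pi/8, -5*np.pi/8])
print(np.interp(x, xp, fp, period=np.pi))  # [-0.5  0.5]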
Now, coming to your 2D case:
Say you have a grid and some values. Let's just take all values to be between [0, pi] off the bat so we don't need to worry about modulos and shufflings.
xp = np.array([0, np.pi/4, np.pi/2, 3*np.pi/4])
yp = np.array([0, 1, 2, 3])
period = np.pi
# Put x on the 1st dim and y on the 2nd dim; f is linear in y
fp = np.array([0, 1, 0, -1])[:, np.newaxis] + yp[np.newaxis, :]
# >>> fp
# array([[ 0,  1,  2,  3],
#        [ 1,  2,  3,  4],
#        [ 0,  1,  2,  3],
#        [-1,  0,  1,  2]])
We now know that all you need to do is to add xp[[-1]] in front of the array and xp[[0]] at the end, adjusting for the period. Note how I've indexed using the singleton lists [-1] and [0]. This is a trick to ensure that dimensions are preserved.
xp = np.concatenate((xp[[-1]]-period, xp, xp[[0]]+period))
fp = np.concatenate((fp[[-1], :], fp, fp[[0], :]))
Finally, you are free to use scipy.interpolate.interpn to achieve your result. Let's get the value at x = pi/8 for all y:
from scipy.interpolate import interpn
interp_points = np.hstack(( (np.pi/8 * np.ones(4))[:, np.newaxis], yp[:, np.newaxis] ))
result = interpn((xp, yp), fp, interp_points)
# >>> result
# array([ 0.5, 1.5, 2.5, 3.5])
interp_points has to be specified as an Nx2 matrix of points: the first dimension runs over the points you want interpolated values at, and the second dimension gives the x- and y-coordinate of each point. See this answer for a detailed explanation.
If you want to get the value outside of the range [0, period], you'll need to modulo it yourself:
x = 21 * np.pi / 8
x_equiv = x % period # Now within [0, period]
interp_points = np.hstack(( (x_equiv * np.ones(4))[:, np.newaxis], yp[:, np.newaxis] ))
result = interpn((xp, yp), fp, interp_points)
# >>> result
# array([-0.5, 0.5, 1.5, 2.5])
Again, if you want to generate interp_points for a bunch of x- and y- values, look at this answer.
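If it helps, the padding steps above can be collected into a small helper. This is only a sketch, under the assumption that xp is already sorted and lies within [0, period) and that fp has x on its first axis:
import numpy as np
from scipy.interpolate import interpn

def periodic_interpn(xp, yp, fp, x, y, period=np.pi):
    # pad one period on each side of the x-axis so interpn can wrap around
    xp_ext = np.concatenate((xp[[-1]] - period, xp, xp[[0]] + period))
    fp_ext = np.concatenate((fp[[-1], :], fp, fp[[0], :]))
    pts = np.column_stack((np.asarray(x) % period, y))
    return interpn((xp_ext, yp), fp_ext, pts)
With the original (unpadded) xp, yp, fp from the 2-D example above, periodic_interpn(xp, yp, fp, 21*np.pi/8 * np.ones(4), yp) reproduces the [-0.5, 0.5, 1.5, 2.5] result.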

Randomize part of an array

I'm working on a project involving binary patterns (here np.arrays of 0 and 1).
I'd like to modify a random subset of these and return several altered versions of the pattern in which a given fraction of the values has been changed (like mapping a function to a random subset of an array of fixed size).
For example: take the pattern [0 0 1 0 1] with rate 0.2 and return [[0 1 1 0 1] [1 0 1 0 1]].
It seems possible using auxiliary arrays and iterating with a condition, but is there a "clean" way to do it?
Thanks in advance!
The map function works on boolean arrays too. You could add the subsample logic to your function, like so:
import numpy as np
rate = 0.2
f = lambda x: np.random.choice((True, x),1,p=[rate,1-rate])[0]
a = np.array([0,0,1,0,1], dtype='bool')
map(f, a)
# This will output array a with on average 20% of the elements changed to "1"
# it can be slightly more or less than 20%, by chance.
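The same effect can also be had without map at all; here is a one-line sketch of my own (not from the answer) in which each element is forced to True with probability rate, independently:
flipped = a | (np.random.rand(a.size) < rate)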
Or you could rewrite a map function, like so:
import numpy as np
def map_bitarray(f, b, rate):
    '''
    maps function f on a random subset of b
    :param f: the function, should take a binary array of size <= len(b)
    :param b: the binary array
    :param rate: the fraction of elements that will be replaced
    :return: the modified binary array
    '''
    c = np.copy(b)
    num_elem = len(c)
    idx = np.random.choice(range(num_elem), int(num_elem * rate), replace=False)
    c[idx] = f(c[idx])
    return c
f = lambda x: True
b = np.array([0,0,1,0,1], dtype='bool')
map_bitarray(f, b, 0.2)
# This will output array b with exactly 20% of the elements changed to "1"
rate=0.2
repeats=5
seed=[0,0,1,0,1]
realizations=np.tile(seed,[repeats,1]) ^ np.random.binomial(1,rate,[repeats,len(seed)])
Use np.tile() to generate a matrix from the seed row.
np.random.binomial() to generate a binomial mask matrix with your requested rate.
Apply the mask with the xor binary operator ^
EDIT:
Based on @Jared Goguen's comments, if you want to change exactly 20% of the bits, you can build the mask by choosing the elements to change at random:
seed = [1, 0, 1, 0, 1]
rate = 0.2
repeats = 10
mask_list = []
for _ in range(repeats):
    y = np.zeros(len(seed), np.int32)
    y[np.random.choice(len(seed), int(rate * len(seed)), replace=False)] = 1
    mask_list.append(y)
mask = np.vstack(mask_list)
realizations = np.tile(seed, [repeats, 1]) ^ mask
So, there's already an answer that provides sequences where each element has a random transition probability. However, it seems like you might want an exact fraction of the elements to change instead. For example, [1, 0, 0, 1, 0] can change to [1, 1, 0, 1, 0] or [0, 0, 0, 1, 0], but not [1, 1, 1, 1, 0].
The premise, based off of xvan's answer, uses the bit-wise xor operator ^. When a bit is xor'd with 0, its value will not change. When a bit is xor'd with 1, it will flip. From your question, it seems like you want to change len(seq)*rate bits in the sequence. First create a mask that contains len(seq)*rate ones. To get an altered sequence, xor the original sequence with a shuffled version of mask.
Here's a simple, inefficient implementation:
import numpy as np
def edit_sequence(seq, rate, count):
    length = len(seq)
    change = int(length * rate)
    mask = [0]*(length - change) + [1]*change
    return [seq ^ np.random.permutation(mask) for _ in range(count)]

rate = 0.2
seq = np.array([0, 0, 1, 0, 1])
print(edit_sequence(seq, rate, 5))
# [0, 0, 1, 0, 0]
# [0, 1, 1, 0, 1]
# [1, 0, 1, 0, 1]
# [0, 1, 1, 0, 1]
# [0, 0, 0, 0, 1]
I don't really know much about NumPy, so maybe someone with more experience can make this efficient, but the approach seems solid.
Edit: Here's a version that times about 30% faster:
def edit_sequence(seq, rate, count):
    mask = np.zeros(len(seq), dtype=int)
    mask[:int(len(seq) * rate)] = 1
    output = []
    for _ in range(count):
        np.random.shuffle(mask)
        output.append(seq ^ mask)
    return output
It appears that this updated version scales very well with the size of seq and the value of count. Using dtype=bool in seq and mask yields another 50% improvement in the timing.
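For what it's worth, here is a fully vectorized sketch of the same exact-fraction idea (my own assumption of what a faster variant could look like, not from the answer): ranking independent random draws per row yields a mask with exactly int(rate * len(seq)) ones in each row.
import numpy as np

def edit_sequence_vec(seq, rate, count):
    seq = np.asarray(seq, dtype=bool)
    change = int(len(seq) * rate)
    # each row of the argsort is an independent random permutation of 0..len(seq)-1,
    # so "< change" leaves exactly `change` True entries per row at random positions
    mask = np.argsort(np.random.rand(count, len(seq)), axis=1) < change
    return seq ^ mask

print(edit_sequence_vec([0, 0, 1, 0, 1], 0.2, 5).astype(int))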

How do I iterate through a paired list when using map and lambda?

I'm stuck on how to iterate through a paired list while I'm using the map and lambda functions. I want to create a series of histograms based on a central location: the distances of selected locations (x, y) to the center and the number of times a particular distance appears. But I keep getting an index out of range error and I don't understand why. I'm not sure how to iterate through locations when I need to pick out the two values of each pair. The whole thing works except the n part.
Sorry for not being clearer: locations=numpy.array((x,y)) holds the locations produced from a boolean array, i.e. the specific positions I wanted to test instead of the whole array. The values (x, y) produced form a two-row array where the values I want are paired column-wise. The code before this was:
def detect_peaks(data):
    average = numpy.average(data)*2
    local_max = data > average
    return local_max

(x,y) = numpy.where(detect_peaks(data))
for test_x in range(0, 8):
    for test_y in range(0, 8):
        distances = []
        locations = numpy.array((x,y))
        central = numpy.array((test_x,test_y))
        [map(lambda x1: distances.append(numpy.linalg.norm(locations[(x1,0), (x1,1)]-central)), n) for n in locations]
        histogram = numpy.histogram(distances, bins=10)
I'll rewrite the map/lambda function and come back. Thanks!
Is this what you want? x and y are arrays of int, not float.
Not that I like the double for loops; they should be replaced with a more vectorized algorithm, but to keep the change minimal and highlight the problematic line, here it is:
>>> a2
array([[ 0.92607265,  1.26155686,  0.31516174,  0.91750943,  0.30713193],
       [ 1.0601752 ,  1.10404664,  0.67766044,  0.36434503,  0.64966887],
       [ 1.29878971,  0.66417322,  0.48084284,  1.0956822 ,  0.27142741],
       [ 0.27654032,  0.29566566,  0.33565457,  0.29749312,  0.34113315],
       [ 0.33608323,  0.25230828,  0.41507646,  0.37872512,  0.60471438]])
>>> numpy.where(detect_peaks(a2))
(array([0, 2]), array([1, 0]))
>>> def func1(locations):  # You don't need to unpack the numpy.where result.
        for test_x in range(0, 4):
            for test_y in range(0, 4):
                locations = numpy.array(locations)
                central = numpy.array((test_x, test_y))
                # Vectorization is almost always better.
                # Be careful: iterating an array means iterating over its rows, so transpose it first.
                distances = list(map(numpy.linalg.norm, (locations - central.reshape((2, -1))).T))
                histogram = numpy.histogram(distances, bins=10)
                print('central point:', central)
                print('distance lst:', distances)
                print(histogram)
                print('-------------------------')
And the result:
>>> func1(numpy.where(detect_peaks(a2)))
central point: [0 0]
distance lst: [1.0, 2.0]
(array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1]), array([ 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. ]))
-------------------------
central point: [0 1]
distance lst: [0.0, 2.2360679774997898]
(array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1]), array([ 0. , 0.2236068 , 0.4472136 , 0.67082039, 0.89442719, 1.11803399, 1.34164079, 1.56524758, 1.78885438, 2.01246118, 2.23606798]))
-------------------------
(and so on for the remaining central points)
Came up with this:
def detect_peaks(arrayfinal):
    average = numpy.average(arrayfinal)
    local_max = arrayfinal > average
    return local_max

def dist(distances, center, n):
    distance = numpy.linalg.norm(n - center)
    distances.append(distance)

def histotest():
    peaks = numpy.where(detect_peaks(arrayfinal))
    ordered = list(zip(peaks[0], peaks[1]))
    for test_x in range(0, 2048):
        for test_y in range(0, 2048):
            distances = []
            center = numpy.array((test_x, test_y))
            [dist(distances, center, n) for n in ordered]
            histogram = numpy.histogram(distances, bins=30)
            print(histogram)
It appears to work, but I like yours better.
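As a rough follow-up to the vectorization remark above, here is a sketch of my own (the name all_histograms and the parameters are assumptions, not from the question) that computes all the distances with broadcasting instead of the double loop:
import numpy as np

def all_histograms(data, grid_size=8, bins=10):
    peaks = np.argwhere(data > np.average(data) * 2)       # (P, 2) peak coordinates
    gx, gy = np.mgrid[0:grid_size, 0:grid_size]            # every central point on the grid
    centers = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (G, 2)
    # (G, P) matrix of distances from every center to every peak, via broadcasting
    dists = np.linalg.norm(peaks[None, :, :] - centers[:, None, :], axis=2)
    return [np.histogram(row, bins=bins) for row in dists]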
