I'm currently trying to replace the for-loops in this code chunk with vectorized operations in Numpy:
def classifysignal(samplemat, binedges, nbinmat, nodatacode):
    ndata, nsignals = np.shape(samplemat)
    classifiedmat = np.zeros(shape=(ndata, nsignals))
    ncounts = 0
    for i in range(ndata):
        for j in range(nsignals):
            classifiedmat[i,j] = nbinmat[j]
            for e in range(nbinmat[j]):
                if samplemat[i,j] == nodatacode:
                    classifiedmat[i,j] == nodatacode
                    break
                elif samplemat[i,j] <= binedges[j, e]:
                    classifiedmat[i,j] = e
                    ncounts += 1
                    break
    ncounts = float(ncounts/nsignals)
    return classifiedmat, ncounts
However, I'm having a little trouble conceptualizing how to replace the third for-loop (i.e. the one beginning with for e in range(nbinmat[j])), since it entails comparing individual elements of two separate matrices before assigning a value, with the indices of these elements (i and e) being completely decoupled from each other. Is there a simple way to do this using whole-array operations, or would sticking with for-loops be best?
PS: My first Stack Overflow question, so if anything's unclear or more details are needed, please let me know! Thanks.
Without some concrete examples and explanation it's hard (or at least a lot of work) to figure out what you are trying to do, especially in the inner loop. So let's tackle a few pieces and try to simplify them.
In [59]: C=np.zeros((3,4),int)
In [60]: N=np.arange(4)
In [61]: C[:]=N
In [62]: C
Out[62]:
array([[0, 1, 2, 3],
[0, 1, 2, 3],
[0, 1, 2, 3]])
means that classifiedmat[i,j] = nbinmat[j] can be moved out of the loops
classifiedmat = np.zeros(samplemat.shape)
classifiedmat[:] = nbinmat
and
In [63]: S=np.arange(12).reshape(3,4)
In [64]: C[S>8]=99
In [65]: C
Out[65]:
array([[ 0, 1, 2, 3],
[ 0, 1, 2, 3],
[ 0, 99, 99, 99]])
suggests that
if samplemat[i,j] == nodatacode:
    classifiedmat[i,j] == nodatacode
could be replaced with
classifiedmat[samplemat==nodatacode] = nodatacode
I haven't worked out whether the loop and break modify this replacement or not.
A possible model for the inner loop is:
In [83]: B=np.array((np.arange(4),np.arange(2,6))).T
In [84]: for e in range(2):
   ....:     C[S<=B[:,e]]=e
   ....:
In [85]: C
Out[85]:
array([[ 1, 1, 1, 1],
[ 0, 1, 2, 3],
[ 0, 99, 99, 99]])
You could also compare all values of S and B with:
In [86]: S[:,:,None]<=B[None,:,:]
Out[86]:
array([[[ True, True],
[ True, True],
[ True, True],
[ True, True]],
[[False, False],
[False, False],
[False, False],
[False, False]],
[[False, False],
[False, False],
[False, False],
[False, False]]], dtype=bool)
The fact that you are iterating over:
for e in range(nbinmat[j]):
may throw off all these equivalences. I'm not going to try to figure out its significance. But maybe I've given you some ideas.
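For what it's worth, here is one way the pieces above might fit together into a fully vectorized version. This is only a sketch under assumptions I can't verify from the question: that binedges is a 2D array with one row of ascending edges per signal, and that the classifiedmat[i,j] == nodatacode line was meant as an assignment (as the masked replacement above assumes).

import numpy as np

def classifysignal_vec(samplemat, binedges, nbinmat, nodatacode):
    ndata, nsignals = samplemat.shape
    # (ndata, nsignals, nedges): is sample (i, j) at or below edge e of signal j?
    below = samplemat[:, :, None] <= binedges[None, :, :]
    # ignore edges beyond each signal's own bin count nbinmat[j]
    edge_ids = np.arange(binedges.shape[1])
    below &= edge_ids[None, None, :] < nbinmat[None, :, None]
    hit = below.any(axis=2)              # at least one edge matched
    first_edge = below.argmax(axis=2)    # index of the first matching edge
    classifiedmat = np.where(hit, first_edge, nbinmat)      # default is nbinmat[j]
    nodata = samplemat == nodatacode
    classifiedmat = np.where(nodata, nodatacode, classifiedmat)
    ncounts = float((hit & ~nodata).sum()) / nsignals       # mirrors the loop's counter
    return classifiedmat, ncounts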
Well, if you want to use vector operations you need to solve the problem using linear algebra. I can't rethink the problem for you, but the general approach I would take is something like:
res = Subtract samplemat from binedges
res = Normalize values in res to 0 and 1 (use clip?), i.e. if > 0 then 1, else 0.
ncount = sum ( res )
classifiedMat = res * binedges
And so on.
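A very literal NumPy translation of that outline might look like the sketch below; the shapes are assumptions (binedges as a 2D array of edges per signal), and it only illustrates the idea rather than reproducing the original function.

res = binedges[None, :, :] - samplemat[:, :, None]   # subtract samplemat from binedges
res = (res > 0).astype(int)                          # normalize to 0/1: 1 if > 0, else 0
ncount = res.sum()
classifiedmat = res * binedges                        # binedges broadcasts over every sample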
Related
I have many very large padded numpy 2d arrays, simplified to array A, shown below. Array Z is the basic pad array:
A = np.array(([1 , 2, 3], [2, 3, 4], [0, 0, 0], [0, 0, 0], [0, 0, 0]))
Z = np.array([0, 0, 0])
How to count the number of pads in array A in the simplest / fastest pythonic way?
This works (zCount=3), but seems verbose, loopy and unpythonic:
zCount = 0
for a in A:
    if a.any() == Z.any():
        zCount += 1
zCount
Also tried a one-line list comprehension, which doesn't work (don't know why not):
[zCount += 1 for a in A if a.any() == Z.any()]
zCount
Also tried a list count, but 'truth value of array with more than one element is ambiguous':
list(A).count(Z)
Have searched for a simple numpy expression without success. np.count_nonzero gives full elementwise boolean for [0]. Is there a one-word / one-line counting expression for [0, 0, 0]? (My actual arrays are approx. shape (100,30) and I have up to millions of these. I am trying to deal with them in batches, so any simple time savings generating a count would be helpful). thx
Try:
>>> np.equal(A, Z).all(axis=1).sum()
3
Step by step:
>>> np.equal(A, Z)
array([[False, False, False],
[False, False, False],
[ True, True, True],
[ True, True, True],
[ True, True, True]])
>>> np.equal(A, Z).all(axis=1)
array([False, False, True, True, True])
>>> np.equal(A, Z).all(axis=1).sum()
3
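Since Z here is the all-zero pad row, an equivalent shortcut (my addition, valid only under that assumption) is to count the rows with no nonzero element:
>>> (~A.any(axis=1)).sum()
3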
Let's say I have a 2D array, e.g.:
In [136]: ary
array([[6, 7, 9],
[0, 2, 5],
[3, 3, 4],
[2, 2, 8],
[3, 4, 9],
[0, 5, 7],
[2, 4, 9],
[3, 5, 7],
[7, 8, 8],
[0, 2, 3]])
I want to calculate the overlap with a 1D vector, fast.
I can almost do it with (8 ms on a big array):
(ary == match) # .sum(axis=1).argsort()[::-1]
The problem with it is that it only matches if both Position and Value match.
match == [6,5,4]
array([[ True, False, False],
[False, False, False],
[False, False, True],
[False, False, False],
[False, False, False],
[False, True, False],
[False, False, False],
[False, True, False],
[False, False, False],
[False, False, False]])
E.g. the 5 in the 2nd position of the 1D vector did not match the 5 in the 3rd column of the 2nd row.
It works with .isin():
np.isin(ary,match,assume_unique=True).sum(axis=1).argsort()[::-1][:5]
but it is slow on big arrays ((200000, 10), ~20 ms).
Help me extend the first case so that it can match a value at any position of the 1D vector against the row.
The expected result is row indexes ordered by overlap count; let's use [2,5,4] because it has more matches:
In [147]: np.isin(ary,[2,5,4],assume_unique=True)
Out[147]:
array([[False, False, False],
[False, True, True],
[False, False, True],
[ True, True, False],
[False, True, False],
[False, True, False],
[ True, True, False],
[False, True, False],
[False, False, False],
[False, True, False]])
Overlap :
In [149]: np.isin(ary,[2,5,4],assume_unique=True).sum(axis=1)
Out[149]: array([0, 2, 1, 2, 1, 1, 2, 1, 0, 1])
Order by overlap :
In [148]: np.isin(ary,[2,5,4],assume_unique=True).sum(axis=1).argsort()[::-1]
Out[148]: array([6, 3, 1, 9, 7, 5, 4, 2, 8, 0])
See: rows 6, 3, 1 have overlap 2, which is why they come first.
Variants:
#could be from 1000,10,10 to 2000,100,20 .. ++
def data(cells=2000,seg=100,items=10):
    ary = np.random.randint(0,cells,(cells*seg,items))
    rnd = np.random.randint(0,cells*seg)
    return ary, ary[rnd]

def best2(match,ary): #~20ms, (200000,10)
    return np.isin(ary,match,assume_unique=True).sum(axis=1).argsort()[::-1][:5]

def best3(match,ary): #Corralien ~20ms
    return np.logical_or.reduce(np.ravel(ary) == match[:, None], axis=0).reshape(ary.shape).sum(axis=1).argsort()[::-1][:5]
Can this be sped up using numba+cuda or cupy on a GPU?
The main problem with all the approaches so far is that they create huge temporary arrays while only 5 items are ultimately important. Numba can be used to compute the arrays on the fly (with efficient JIT-compiled loops), avoiding some temporary arrays. Moreover, a full sort is not required, as only the top 5 items need to be retrieved; a partition can be used instead. It is even possible to use a faster approach, since only the 5 selected items matter and not the others. Here is the resulting code:
import numba as nb
import numpy as np

@nb.njit('int32[::1](int32[::1], int32[:,::1])')
def computeScore(match, ary):
    n, m = ary.shape
    assert m == match.shape[0]
    tmp = np.empty(n, dtype=np.int32)
    for i in range(n):
        s = 0
        # Count the number of matching items (with repetition)
        for j in range(m):
            # Find a match
            item = ary[i, j]
            found = False
            for k in range(m):
                found |= item == match[k]
            s += found
        tmp[i] = s
    return tmp
def best4(match, ary):
    n, m = ary.shape
    score = computeScore(match, ary)
    bestItems = np.argpartition(score, n-5)[n-5:]  # sadly not supported by Numba yet
    order = np.argsort(-score[bestItems])          # bestItems is not sorted and likely needs to be
    return bestItems[order]
Note that best4 can give results different from best2 when the matching score (stored in tmp) is tied between multiple items. This is due to the sorting algorithm, which is not stable by default in NumPy (the kind parameter can be used to adapt this behaviour). This is also true for the partition algorithm, although NumPy does not seem to provide a stable partition algorithm yet.
This code should be faster than the other implementations, but not by a large margin. One issue is that Numba (and most C/C++ compilers, like the one used to compile NumPy) does not manage to generate fast code here, since it does not know the value of m at compile time. As a result, the most aggressive optimizations (e.g. loop unrolling and use of SIMD instructions) can hardly be applied. You can help Numba using assertions or escaping conditionals.
Moreover, the code can be parallelized using multiple threads to make it much faster on mainstream platforms. Note that the parallelized version may not be faster on small data or on all platforms, since creating threads introduces an overhead that can be bigger than the actual computation.
Here is the resulting implementation:
@nb.njit('int32[::1](int32[::1], int32[:,::1])', parallel=True)
def computeScoreOpt(match, ary):
    n, m = ary.shape
    assert m == match.shape[0]
    assert m == 10
    tmp = np.empty(n, dtype=np.int32)
    for i in nb.prange(n):
        # This enables Numba to assume m=10 in the following code
        # and generate very efficient code for this specific case.
        # The assert should be enough, but the internals of Numba
        # prevent the information from being propagated to this portion
        # of the code when it is parallelized.
        if m != 10: continue
        s = 0
        for j in range(m):
            item = ary[i, j]
            found = False
            for k in range(m):
                found |= item == match[k]
            s += found
        tmp[i] = s
    return tmp
def best5(match, ary):
    n, m = ary.shape
    score = computeScoreOpt(match, ary)
    bestItems = np.argpartition(score, n-5)[n-5:]
    order = np.argsort(-score[bestItems])
    return bestItems[order]
Here are the timings on my machine with the example dataset:
best2: 18.2 ms
best3: 17.8 ms
best4 (sequential -- default): 12.0 ms
best4 (parallel): 3.1 ms
best5 (sequential): 3.2 ms
best5 (parallel -- default): 1.2 ms
The fastest implementation is 15 times faster than the original reference implementation.
Note that if m is greater than about 30, it should be better to use a more advanced set-based algorithm. An alternative solution is to sort match first and then use np.isin in the i-based loop in this case.
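For completeness, a hypothetical way to run these against the data() generator from the question (the int32 casts are only needed because of the explicit signatures in the @nb.njit decorators above):

ary, match = data()                          # ~(200000, 10) array plus one of its rows
ary = ary.astype(np.int32)
match = np.ascontiguousarray(match.astype(np.int32))
print(best4(match, ary))
print(best5(match, ary))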
Use broadcasting and np.logical_or.reduce:
# match = np.array(match)
>>> np.logical_or.reduce(np.ravel(ary) == match[:, None], axis=0) \
.reshape(ary.shape)
array([[ True, False, False],
[False, False, True],
[False, False, True],
[False, False, False],
[False, True, False],
[False, True, False],
[False, True, False],
[False, True, False],
[False, False, False],
[False, False, False]])
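To get the top-5 row indexes as in the question's best3, the same mask can simply be reduced and sorted (this follow-up just mirrors the question's code):

overlap = np.logical_or.reduce(np.ravel(ary) == match[:, None], axis=0).reshape(ary.shape).sum(axis=1)
top5 = overlap.argsort()[::-1][:5]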
Performance
match = np.array([6, 5, 4])
ary = np.random.randint(0, 10, (200000, 10))
%timeit np.logical_or.reduce(np.ravel(ary) == match[:, None], axis=0).reshape(ary.shape)
7.49 ms ± 174 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I have a boolean mask shaped (M, N). Each column in the mask may have a different number of True elements, but is guaranteed to have at least two. I want to find the row index of the last two such elements as efficiently as possible.
If I only wanted one element, I could do something like (M - 1) - np.argmax(mask[::-1, :], axis=0). However, that won't help me get the second-to-last index.
I've come up with an iterative solution using np.where or np.nonzero:
M = 4
N = 3
mask = np.array([
    [False, True, True],
    [True, False, True],
    [True, False, True],
    [False, True, False]
])

result = np.zeros((2, N), dtype=np.intp)
for col in range(N):
    result[:, col] = np.flatnonzero(mask[:, col])[-2:]
This creates the expected result:
array([[1, 0, 1],
[2, 3, 2]], dtype=int64)
I would like to avoid the final loop. Is there a reasonably vectorized form of the above? I am looking for specifically two rows, which are always guaranteed to exist. A general solution for arbitrary element counts is not required.
An argsort does it -
In [9]: np.argsort(mask,axis=0,kind='stable')[-2:]
Out[9]:
array([[1, 0, 1],
[2, 3, 2]])
Another with cumsum -
c = mask.cumsum(0)
out = np.where((mask & (c>=c[-1]-1)).T)[1].reshape(-1,2).T
Specifically for exactly two rows, one way with argmax -
c = mask.copy()
idx = len(c)-c[::-1].argmax(0)-1
c[idx,np.arange(len(idx))] = 0
idx2 = len(c)-c[::-1].argmax(0)-1
out = np.vstack((idx2,idx))
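A quick, illustrative check (not from the original answer) that the three variants agree on the question's example mask:

import numpy as np

mask = np.array([
    [False, True, True],
    [True, False, True],
    [True, False, True],
    [False, True, False]
])

out1 = np.argsort(mask, axis=0, kind='stable')[-2:]

c = mask.cumsum(0)
out2 = np.where((mask & (c >= c[-1] - 1)).T)[1].reshape(-1, 2).T

c = mask.copy()
idx = len(c) - c[::-1].argmax(0) - 1
c[idx, np.arange(len(idx))] = 0
idx2 = len(c) - c[::-1].argmax(0) - 1
out3 = np.vstack((idx2, idx))

assert (out1 == out2).all() and (out2 == out3).all()   # all give [[1, 0, 1], [2, 3, 2]]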
I have the following array:
a= [[2,3,50], [5,6,5], [8,10,5], [1,3,51] , [8,10,12]]
I would like to compare rows and remove those having nearly identical values.
For instance [2,3,50] and [1,3,51] are almost identical (the difference in each value is less than 1).
At the end, I should get the following array:
a= [[2,3,50], [5,6,5], [8,10,5], [8,10,12]]
where [1,3,51] has been removed.
Is there an efficient way to do this in Python, avoiding multiple loops?
Best
We can perform an outer subtraction on the first axes of two versions of a, take the absolute value, and check whether all values along the last (common) axis are less than or equal to the threshold of 1. This gives us a 2D mask. We select the upper triangular part of that mask to make sure close-by pairs are not counted more than once, which also resets the diagonal entries that correspond to each row paired with itself. Finally, we check whether each column has at least one match; those are the close-by rows we need to remove. Hence, invert the mask and use it to select rows off a.
The implementation would be -
a[~np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1).any(0)]
A sample run with step-by-step execution should help clarify.
Input array :
In [112]: a
Out[112]:
array([[ 2, 3, 50],
[ 5, 6, 5],
[ 8, 10, 5],
[ 1, 3, 51],
[ 8, 10, 12]])
Steps :
In [114]: (np.abs(a[:,None,:]-a)<=1).all(2)
Out[114]:
array([[ True, False, False, True, False],
[False, True, False, False, False],
[False, False, True, False, False],
[ True, False, False, True, False],
[False, False, False, False, True]])
In [115]: np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1)
Out[115]:
array([[False, False, False, True, False],
[False, False, False, False, False],
[False, False, False, False, False],
[False, False, False, False, False],
[False, False, False, False, False]])
In [116]: np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1).any(0)
Out[116]: array([False, False, False, True, False])
In [117]: ~np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1).any(0)
Out[117]: array([ True, True, True, False, True])
In [118]: a[~np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1).any(0)]
Out[118]:
array([[ 2, 3, 50],
[ 5, 6, 5],
[ 8, 10, 5],
[ 8, 10, 12]])
Just to do one more round of verification, let's set the last row to be another close-by of the second-to-last row. This should lead to the last row being removed too. Hence -
In [120]: a[-1] = [0,3,52]
In [122]: a
Out[122]:
array([[ 2, 3, 50],
[ 5, 6, 5],
[ 8, 10, 5],
[ 1, 3, 51],
[ 0, 3, 52]])
In [123]: a[~np.triu((np.abs(a[:,None,:]-a)<=1).all(2),1).any(0)]
Out[123]:
array([[ 2, 3, 50],
[ 5, 6, 5],
[ 8, 10, 5]])
With one loop, for memory efficiency
We can use one loop to save on memory, being more efficient on that front, and also make use of slicing in the process -
n = len(a)
mask = np.zeros(n, dtype=bool)
for i in range(n-1):
    mask[i+1:] |= (np.abs(a[i+1:]-a[i])<=1).all(1)
out = a[~mask]
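As a quick sanity check (mine, not from the answer), the loop version selects the same rows as the one-liner on the sample input:

import numpy as np

a = np.array([[2, 3, 50], [5, 6, 5], [8, 10, 5], [1, 3, 51], [8, 10, 12]])

n = len(a)
mask = np.zeros(n, dtype=bool)
for i in range(n-1):
    mask[i+1:] |= (np.abs(a[i+1:] - a[i]) <= 1).all(1)

one_liner = a[~np.triu((np.abs(a[:, None, :] - a) <= 1).all(2), 1).any(0)]
assert (a[~mask] == one_liner).all()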
A few questions about the definition of the problem.
First: Suppose we have an array [ ... a1 ... a2 ... ], where a1 and a2 are "nearly identical"; which one do we remove? This is pretty easy to resolve: pick the first one.
Second: Suppose we have an array [ b1 ... bN ] where bi and bi+1 are nearly identical but bi and bi+2 are not nearly identical. Which ones do we remove? In this case, I guess you could remove all the odd entries or all the even entries.
Third: what about a midway situation where you have a mix-and-match of nearly identical successive pairs? What's the prescription?
I think the problem is related to the fact that "nearly identical" is not transitive, unlike "strictly identical". This suggests the following approach, which uses a somewhat different criterion for "nearly identical": define a hash function that maps rows into "OK rows", for example by rounding all elements of the row down to an even number. Then define "nearly identical" to mean all rows that map to the same "OK row". You could build a map from each "OK row" to the list of nearly identical rows in a, and return the first element of each list.
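A rough sketch of that hashing idea (my own illustration; the even-rounding rule is just the example above, and note that rows straddling a rounding boundary land in different buckets, which is part of the "somewhat different criterion" caveat):

import numpy as np

a = np.array([[2, 3, 50], [5, 6, 5], [8, 10, 5], [1, 3, 51], [8, 10, 12]])

buckets = {}
for row in a:
    key = tuple((row // 2) * 2)   # the "OK row": each element rounded down to an even number
    buckets.setdefault(key, []).append(row)

kept = np.array([rows[0] for rows in buckets.values()])   # first row of each bucket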
Perhaps it would help to have a little more context for the question. For example, I'm working on a problem involving a large number of time series. I'd like to predict the next value in each series using SARIMA. However, the cost of building one SARIMA model for each series is prohibitive, so what I do (in brief) is cluster the series using K-Means clustering, with a value of K such that building K SARIMA models is acceptable. In this case, what I'm hoping is that different series in the same cluster are "nearly identical enough" that one prediction serves for both.
I'm trying to delete specific rows in my numpy array that follow certain conditions.
This is an example:
a = np.array ([[1,1,0,0,1],
[0,0,1,1,1],
[0,1,0,1,1],
[1,0,1,0,1],
[0,0,1,0,1],
[1,0,1,0,0]])
I want to be able to delete all rows where specific columns are zero; this array could be a lot bigger.
In this example, if the first two elements are zero, or if the last two elements are zero, the row will be deleted.
It could be any combination, not only the first elements or the last ones.
This should be the final result:
a = np.array ([[1,1,0,0,1],
[0,1,0,1,1],
[1,0,1,0,1]])
For example, if I try:
a[:,0:2] == 0
After reading:
Remove lines with empty values from multidimensional-array in php
and this question: How to delete specific rows from a numpy array using a condition?
But they don't seem to apply to my case, or probably I'm not understanding something here, as nothing works in my case.
This gives me all rows where the first two columns are zero (True, True):
array([[False, False],
[ True, True],
[ True, False],
[False, True],
[ True, True],
[False, True]])
and for the last two columns being zero, the last row should be deleted too. So at the end I will only be left with 3 rows.
a[:,3:5] == 0
array([[ True, False],
[False, False],
[False, False],
[ True, False],
[ True, False],
[ True, True]])
I'm trying something like this, but I don't understand how to tell it to only give me the rows that meet the condition:
(a[a[:,0:2]] == 0).all(axis=1)
array([[ True, True, False, False, False],
[False, False, True, True, False],
[False, False, False, False, False],
[False, False, False, False, False],
[False, False, True, True, False],
[False, False, False, False, False]])
(a[((a[:,0])& (a[:,1])) ] == 0).all(axis=1)
and this shows everything as False.
Could you please guide me a bit?
Thank you.
Just adding to the question: it won't always be the first 2 or the last 2 columns. If my matrix has 35 columns, it could be columns 6 to 10, and then columns 20 and 25. A user will be able to decide which columns are used to decide which rows get deleted.
Try this
idx0 = (a[:,0:2] == 0).all(axis=1)
idx1 = (a[:,-2:] == 0).all(axis=1)
a[~(idx0 | idx1)]
The first two steps build boolean masks for the rows that match your filtering criteria. Then do an or (|) operation and a not (~) operation to obtain the final rows you want.
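Since the question adds that the columns are chosen by the user, the same pattern generalizes directly; a hedged sketch (the groups variable is my own illustration, not part of the question):

groups = [[0, 1], [3, 4]]                  # e.g. first two and last two columns
drop = np.zeros(len(a), dtype=bool)
for cols in groups:
    drop |= (a[:, cols] == 0).all(axis=1)
a[~drop]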
If I understood correctly you could do something like this:
import numpy as np
a = np.array([[1, 1, 0, 0, 1],
[0, 0, 1, 1, 1],
[0, 1, 0, 1, 1],
[1, 0, 1, 0, 1],
[0, 0, 1, 0, 1],
[1, 0, 1, 0, 0]])
left = np.count_nonzero(a[:, :2], axis=1) != 0
a = a[left]
right = np.count_nonzero(a[:, -2:], axis=1) != 0
a = a[right]
print(a)
Output
[[1 1 0 0 1]
[0 1 0 1 1]
[1 0 1 0 1]]
Or, a shorter version:
left = np.count_nonzero(a[:, :2], axis=1) != 0
right = np.count_nonzero(a[:, -2:], axis=1) != 0
a = a[(left & right)]
Use the following mask:
mask = np.any(a[:,:2], axis=1) & np.any(a[:,-2:], axis=1)
If you want to select the matching rows directly:
a[mask]
If you want to create a new array:
np.delete(a, np.where(~mask), axis=0)