Dropping array rows that DUPLICATE defined column elements of other array rows - python

Consider the NumPy array sample below:
import numpy as np
arr = np.array([[1,2,5, 4,2,7, 5,2,9],
                [4,4,1, 4,2,0, 3,6,4],
                [1,2,1, 4,2,2, 5,2,0],
                [1,2,7, 2,4,1, 5,2,8],
                [1,2,9, 4,2,8, 5,2,1],
                [4,2,0, 4,4,1, 5,2,4],
                [4,4,0, 4,2,6, 3,6,6],
                [1,2,1, 4,2,2, 5,2,0]])
PROBLEM: We are concerned only with the first TWO columns of each element triplet. I want to remove array rows that duplicate these two elements of each triplet (in the same order).
In the example above, the rows with indices 0, 2, 4, and 7 are all of the form [1,2,_, 4,2,_, 5,2,_]. So we should keep arr[0] and drop the other three. Similarly, arr[6] is dropped because it has the same pattern as arr[1], namely [4,4,_, 4,2,_, 3,6,_].
In the example given, the output should look like:
[[1,2,5, 4,2,7, 5,2,9],
 [4,4,1, 4,2,0, 3,6,4],
 [1,2,7, 2,4,1, 5,2,8],
 [4,2,0, 4,4,1, 5,2,4]]
The part I'm struggling with most is that the solution should be general enough to handle arrays of 3, 6, 9, 12... columns (always a multiple of 3, and we are always interested in duplications of the first two columns of each triplet).

If you can create an array with only the values you are interested in, you can pass that to np.unique(), which has a return_index option.
One way to get the groups you want is to delete every third column. Pass that to np.unique() and get the indices:
import numpy as np
arr = np.array([[1,2,5, 4,2,7, 5,2,9],
                [4,4,1, 4,2,0, 3,6,4],
                [1,2,1, 4,2,2, 5,2,0],
                [1,2,7, 2,4,1, 5,2,8],
                [1,2,9, 4,2,8, 5,2,1],
                [4,2,0, 4,4,1, 5,2,4],
                [4,4,0, 4,2,6, 3,6,6],
                [1,2,1, 4,2,2, 5,2,0]])
unique_cols = np.delete(arr, slice(2, None, 3), axis=1)  # drop every third column
vals, indices = np.unique(unique_cols, axis=0, return_index=True)  # first occurrence of each pattern
arr[sorted(indices)]  # sort to keep the original row order
output:
array([[1, 2, 5, 4, 2, 7, 5, 2, 9],
       [4, 4, 1, 4, 2, 0, 3, 6, 4],
       [1, 2, 7, 2, 4, 1, 5, 2, 8],
       [4, 2, 0, 4, 4, 1, 5, 2, 4]])
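As a quick sanity check (my own toy example, not from the question), the same approach handles other widths that are a multiple of 3 unchanged, e.g. a 6-column array:
arr6 = np.array([[1,2,5, 4,2,7],
                 [1,2,0, 4,2,3],   # same (1,2,_, 4,2,_) pattern as row 0
                 [3,3,1, 4,2,0]])
unique_cols = np.delete(arr6, slice(2, None, 3), axis=1)
vals, indices = np.unique(unique_cols, axis=0, return_index=True)
arr6[sorted(indices)]
# array([[1, 2, 5, 4, 2, 7],
#        [3, 3, 1, 4, 2, 0]])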

2D Vectorization of unique values per row with condition

Consider the array and function definition shown:
import numpy as np
a = np.array([[2, 2, 5, 6, 2, 5],
              [1, 5, 8, 9, 9, 1],
              [0, 4, 2, 3, 7, 9],
              [1, 4, 1, 1, 5, 1],
              [6, 5, 4, 3, 2, 1],
              [3, 6, 3, 6, 3, 6],
              [0, 2, 7, 6, 3, 4],
              [3, 3, 7, 7, 3, 3]])
def grpCountSize(arr, grpCount, grpSize):
    count = [np.unique(row, return_counts=True) for row in arr]
    valid = [np.any(np.count_nonzero(row[1] == grpSize) == grpCount) for row in count]
    return valid
The point of the function is to build a mask selecting the rows of array a that have exactly grpCount groups of elements, each holding exactly grpSize identical elements.
For example:
# which rows have exactly 1 group that holds exactly 2 identical elements?
out = a[grpCountSize(a, 1, 2)]
As expected, the code outputs out = [[2, 2, 5, 6, 2, 5], [3, 3, 7, 7, 3, 3]].
The 1st output row has exactly 1 group of 2 (i.e. 5,5), while the 2nd output row also has exactly 1 group of 2 (i.e. 7,7).
Similarly:
# which rows have exactly 2 groups that each hold exactly 3 identical elements?
out = a[grpCountSize(a, 2, 3)]
This produces out = [[3, 6, 3, 6, 3, 6]], because only this row has exactly 2 groups, each holding exactly 3 identical elements (i.e. 3,3,3 and 6,6,6).
PROBLEM: My actual arrays have just 6 columns, but they can have many millions of rows. The code works perfectly as intended, but it is VERY SLOW for long arrays. Is there a way to speed this up?
np.unique sorts the array, which makes it less efficient for your purpose. Use np.bincount instead; that way you will most likely save some time (depending on your array's shape and the values in it; note that np.bincount only accepts non-negative integers). You also will not need np.any anymore:
def grpCountSize(arr, grpCount, grpSize):
    count = [np.bincount(row) for row in arr]
    valid = [np.count_nonzero(row == grpSize) == grpCount for row in count]
    return valid
Another way that might save even more time is to use the same number of bins for all rows and create one array:
def grpCountSize(arr, grpCount, grpSize):
    m = arr.max()
    count = np.stack([np.bincount(row, minlength=m+1) for row in arr])
    return (count == grpSize).sum(1) == grpCount
Yet another upgrade is to use a vectorized 2D bin count from this post. For example (note that the Numba solutions tested in the post above are faster; I just provide the NumPy solution as an example. You can replace the function with any of the ones suggested in the linked post):
def grpCountSize(arr, grpCount, grpSize):
    count = bincount2D_vectorized(arr)
    return (count == grpSize).sum(1) == grpCount

# from the post above
def bincount2D_vectorized(a):
    # offset each row's values into its own block of bins, then do a single
    # flat bincount and reshape back to (number of rows, N)
    N = a.max() + 1
    a_offs = a + np.arange(a.shape[0])[:, None] * N
    return np.bincount(a_offs.ravel(), minlength=a.shape[0] * N).reshape(-1, N)
All of the solutions above give the same output:
a[grpCountSize(a, 1, 2)]
#array([[2, 2, 5, 6, 2, 5],
#       [3, 3, 7, 7, 3, 3]])
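For completeness, here is a rough sketch of what a Numba version in the spirit of the linked post might look like (my own untested adaptation, not one of the benchmarked solutions; it assumes numba is installed and the array holds small non-negative integers):
import numpy as np
from numba import njit

@njit
def grpCountSize_numba(arr, grpCount, grpSize):
    n_rows, n_cols = arr.shape
    counts = np.zeros(arr.max() + 1, dtype=np.int64)  # reused per-row bin buffer
    out = np.zeros(n_rows, dtype=np.bool_)
    for i in range(n_rows):
        counts[:] = 0
        for j in range(n_cols):
            counts[arr[i, j]] += 1
        hits = 0
        for c in counts:
            if c == grpSize:
                hits += 1
        out[i] = hits == grpCount
    return out
Usage is the same as above, e.g. a[grpCountSize_numba(a, 1, 2)].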

Searching a Numpy array column for 3 or more consecutive values, then taking a value from another column

Essentially I want to scan a numpy array column for 3 or more consecutive values. If there are, I would like to take a value from a different column of the rows where the run of consecutive values starts and ends.
Example
arr = np.array([[2, 7, 2, 1],
                [1, 2, 3, 4],
                [4, 6, 6, 4],
                [8, 2, 6, 4],
                [9, 3, 1, 4],
                [2, 7, 2, 1]])
From the array above, I want to scan the 4th column to see if the value 4 occurs 3 or more times in a row. If it does, I want to take the value from the second col where the run starts and where it ends and store them in another array. In this case it would be 3 and 1.
You can achieve this using pandas, by shifting the column you want to compare to detect changes and counting how many times each value repeats.
You did not specify what should happen if the same sequence of numbers repeats more than once, so I will provide a generic solution for that case. If you know beforehand that the same sequence cannot be repeated again, you can probably simplify this solution.
# Imports and define data
import numpy as np
import pandas as pd
data = [[2, 7, 2, 1],
        [1, 2, 3, 4],
        [4, 6, 6, 4],
        [8, 2, 6, 4],
        [9, 3, 1, 4],
        [2, 7, 2, 1]]
df = pd.DataFrame(data, columns=['A', 'B', 'C', 'D'])
# Compare the last column, see where we have a change and label it 1
df['shift'] = df['D'].shift()
df['change'] = np.where(df['D'] == df['shift'], 0, 1)
# Assign a group number for each change (in case same sequence repeats later)
df['group'] = df['change'].cumsum()
# Map each group number to its run length and assign it back to df
consecutives = df.groupby('group')['D'].count()
df['num_consecutives'] = df['group'].map(consecutives)
# Filter for runs of 3 or more, then group by the "group" col and the last
# col (in case the same value forms several runs), so you can identify each
# run separately and take the first and last values of the col of interest.
# You mention 3 and 1, so I assume that's the third col.
df[df['num_consecutives'] >= 3].groupby(['group', 'D'])['C'].agg(['first', 'last'])
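Tracing this through on the sample data: the run of 4s in column D spans rows 1 through 4 (group number 2), so the final line should produce something like:
#          first  last
# group D
# 2     4      3     1
i.e. the run starts at C=3 and ends at C=1, matching the expected 3 and 1.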

Sorting 2D array by the first n rows

How can I sort the columns of an array in NumPy by the first two rows?
For example,
A = np.array([[9, 2, 2],
              [4, 5, 6],
              [7, 0, 5]])
And I'd like to sort columns by the first two rows, such that I get back:
A = np.array([[2, 2, 9],
              [5, 6, 4],
              [0, 5, 7]])
Thank you!
One approach is to transform the 2D array over which we want to take the argsort into an easier-to-handle 1D array. One idea: normalize the rows that matter for the sorting, multiply them by successively decreasing powers of 10, sum them, and then use argsort (note: this method will be numerically unstable for high values of k; it is meant for values of k up to ~20):
def sort_on_first_k_rows(x, k):
    # normalize each of the first k rows so that its max value is 1
    a = (x[:k, :] / x[:k, :].max(1, keepdims=True)).astype('float64')
    # multiply each row by 10^n, for n = k-1, k-2, ..., 0, so that the
    # contribution of each row to the sorting is captured in the final sum
    a_pow = a * 10**np.arange(a.shape[0] - 1, -1, -1)[:, None]
    # sort with argsort on the resulting sum
    return x[:, a_pow.sum(0).argsort()]
Checking with the shared example:
sort_on_first_k_rows(A, 2)
array([[2, 2, 9],
       [5, 6, 4],
       [0, 5, 7]])
Or with another example:
A = np.array([[9, 2, 2, 1, 5, 2, 9],
              [4, 7, 6, 0, 9, 3, 3],
              [7, 0, 5, 0, 2, 1, 2]])
sort_on_first_k_rows(A, 2)
array([[1, 2, 2, 2, 5, 9, 9],
       [0, 3, 6, 7, 9, 3, 4],
       [0, 1, 5, 0, 2, 2, 7]])
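As an aside (not part of the original answer), NumPy's np.lexsort handles this kind of multi-key sort directly and exactly, with no numerical-stability caveat. It treats its last key as the primary one, so reversing the first k rows gives the same lexicographic column sort; a minimal sketch (the function name is my own):
def sort_on_first_k_rows_lexsort(x, k):
    # lexsort sorts by the last key first, so reverse the k key rows
    return x[:, np.lexsort(x[:k][::-1])]

sort_on_first_k_rows_lexsort(A, 2)
# array([[1, 2, 2, 2, 5, 9, 9],
#        [0, 3, 6, 7, 9, 3, 4],
#        [0, 1, 5, 0, 2, 2, 7]])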
The pandas library is very flexible for sorting DataFrames - but only based on columns. So I suggest transposing and converting your array to a DataFrame like this (note that you need to specify column names in order to define the sorting criteria later):
import pandas as pd
df = pd.DataFrame(A.transpose(), columns=['col' + str(i) for i in range(len(A))])
Then sort it and convert it back like this:
A_new = df.sort_values(['col0', 'col1'], ascending=[True, True]).to_numpy().transpose()
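Checking against the original example, this should reproduce the expected result:
A_new
# array([[2, 2, 9],
#        [5, 6, 4],
#        [0, 5, 7]])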

Indexing in NumPy: Access every other group of values

The [::n] indexing option in numpy provides a very useful way to index every nth item in a list. However, is it possible to use this feature to extract multiple values, e.g. every other pair of values?
For example:
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
And I want to extract every other pair of values i.e. I want to return
a[[0, 1, 4, 5, 8, 9]]
Of course the index could be built using loops or something, but I wonder if there's a faster way to use ::-style indexing in numpy while also specifying the width of the group to take on every nth iteration.
Thanks
With the length of the array being a multiple of the window size -
In [29]: W = 2 # window-size
In [30]: a.reshape(-1,W)[::2].ravel()
Out[30]: array([0, 1, 4, 5, 8, 9])
Explanation with breaking-down-the-steps -
# Reshape to split into W-sized groups
In [43]: a.reshape(-1,W)
Out[43]:
array([[ 0,  1],
       [ 2,  3],
       [ 4,  5],
       [ 6,  7],
       [ 8,  9],
       [10, 11]])
# Use stepsize to select every other pair starting from the first one
In [44]: a.reshape(-1,W)[::2]
Out[44]:
array([[0, 1],
       [4, 5],
       [8, 9]])
# Flatten for desired output
In [45]: a.reshape(-1,W)[::2].ravel()
Out[45]: array([0, 1, 4, 5, 8, 9])
If you are okay with a 2D output, skip the last step, as that will still be a view into the input and virtually free at runtime. Let's verify the view part -
In [47]: np.shares_memory(a,a.reshape(-1,W)[::2])
Out[47]: True
For the generic case where the length is not necessarily a multiple of the window size, we can use a masking-based approach -
In [64]: a[(np.arange(len(a))%(2*W))<W]
Out[64]: array([0, 1, 4, 5, 8, 9])
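For example, with a length that is not a multiple of 2*W (my own extension of the sample) -
In [65]: b = np.arange(13)
In [66]: b[(np.arange(len(b))%(2*W))<W]
Out[66]: array([ 0,  1,  4,  5,  8,  9, 12])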
You can also do it by reshaping the array into an n x 3 matrix, slicing out the first two elements of each row, and finally flattening the reshaped array (note that this takes the first two values of every group of three, which is a slightly different pattern than every other pair):
a.reshape((-1,3))[:,:2].flatten()
resulting in:
array([ 0,  1,  3,  4,  6,  7,  9, 10])

Apply same permutation for every row in a 2D numpy array

To permute a 1D array A I know that you can run the following code:
import numpy as np
A = np.random.permutation(A)
I have a 2D array and want to apply exactly the same permutation to every row of the array. Is there any way to tell numpy to do that for you?
Generate a random permutation of the column indices of A and index into the columns of A, like so -
A[:,np.random.permutation(A.shape[1])]
Sample run -
In [100]: A
Out[100]:
array([[3, 5, 7, 4, 7],
       [2, 5, 2, 0, 3],
       [1, 4, 3, 8, 8]])
In [101]: A[:,np.random.permutation(A.shape[1])]
Out[101]:
array([[7, 5, 7, 4, 3],
       [3, 5, 2, 0, 2],
       [8, 4, 3, 8, 1]])
Actually, you do not need to do this if you just want to shuffle whole rows; from the documentation:
If x is a multi-dimensional array, it is only shuffled along its first
index.
So, taking Divakar's array:
a = np.array([[3, 5, 7, 4, 7],
              [2, 5, 2, 0, 3],
              [1, 4, 3, 8, 8]])
you can just do: np.random.permutation(a) and get something like:
array([[2, 5, 2, 0, 3],
       [3, 5, 7, 4, 7],
       [1, 4, 3, 8, 8]])
P.S. If you need to perform column permutations instead, just do np.random.permutation(a.T).T. Similar things apply to multi-dimensional arrays.
It depends on what you mean by every row.
If you want to permute all values (regardless of row and column), reshape your array to 1D, permute it, and reshape back to 2D.
If you want to permute each row independently, without mixing elements between rows, you need to loop through the first axis and call permutation:
for i in range(len(A)):
    A[i] = np.random.permutation(A[i])
It can probably be done more concisely somehow, but this is one way to do it.
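On newer NumPy (1.20+), the Generator API can also do this per-row shuffle without the Python loop; a small sketch of that alternative:
rng = np.random.default_rng()
A = rng.permuted(A, axis=1)  # each row is shuffled independently of the others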
