Extract K left-most columns from a 2D tensor with mask - python

Let's suppose we have 2 tensors like
A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
B = [[True, True, True, True],
     [True, False, True, True]]
I want to extract the K left-most columns from A where the corresponding boolean mask in B is True. In the above example, if K=2, the result should be
C = [[1, 2],
     [5, 7]]
6 is not included in C because its corresponding boolean mask is False.
I was able to do that with the following code:
batch_size = 2
C = tf.zeros((batch_size, K), tf.int32)
for batch_idx in tf.range(batch_size):
    a = A[batch_idx]
    b = B[batch_idx]
    tmp = tf.boolean_mask(a, b)
    tmp = tmp[:K]
    C = tf.tensor_scatter_nd_update(
        C, [[batch_idx]], tf.expand_dims(tmp, axis=0))
But I don't want to iterate over A and B with a for loop.
Is there any way to do this with matrix operators only?

Not sure if it will work for all corner cases, but you could try using tf.ragged.boolean_mask:
import tensorflow as tf
A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
B = [[True, True, True, True],
     [True, False, True, True]]
K = 2
tmp = tf.ragged.boolean_mask(A, B)
C = tmp[:, :K].to_tensor()
which gives
tf.Tensor(
[[1 2]
 [5 7]], shape=(2, 2), dtype=int32)
and for K = 3:
tf.Tensor(
[[1 2 3]
 [5 7 8]], shape=(2, 3), dtype=int32)
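If you'd rather avoid ragged tensors entirely, the same left-packing can be done with a stable argsort that pushes masked-out positions to the end of each row. Here is a NumPy sketch of the idea (in TF the equivalent would be tf.argsort with stable=True plus tf.gather, but I haven't benchmarked it):

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])
B = np.array([[True, True, True, True],
              [True, False, True, True]])
K = 2

# A stable argsort of ~B moves the False (masked-out) columns to the end of
# each row while keeping the relative order of the True columns.
order = np.argsort(~B, axis=1, kind="stable")
C = np.take_along_axis(A, order[:, :K], axis=1)
# C -> [[1, 2], [5, 7]]
```

Note this only gives the same result as the ragged version when every row has at least K True entries; rows with fewer True values would pick up masked elements instead of padding with zeros.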

Tensorflow - mask tensor element with condition

I have two tensors: one is a 3d tensor with shape [batch_size, timestep, feature], the other is a 2d tensor with shape [batch_size, timestep]
eg.
data = tf.constant([[[0, 0], [1, 1], [2, 2]], [[3, 3], [4, 4], [5, 5]]], dtype=tf.float32) # shape=(2, 3, 2)
mask = tf.constant([[True, True, False], [False, True, True]]) # shape=(2, 3)
and I would like to apply the mask to the data
#Desired output (mask timestep with value -1)
[[[0, 0], [1, 1], [-1, -1]], [[-1, -1], [4, 4], [5, 5]]]
Are there any solutions with a tensorflow built-in function or other workarounds to do this?
I prefer to use the tf.tile() operation to expand the mask:
data = tf.constant([[[0, 0], [1, 1], [2, 2]], [[3, 3], [4, 4], [5, 5]]], dtype=tf.float32)
mask = tf.constant([[True, True, False], [False, True, True]])
mask_expand = tf.tile(tf.expand_dims(mask, axis=-1), multiples=[1,1, tf.shape(data)[-1]])
minus_ones = tf.fill(tf.shape(data), tf.constant(-1, dtype=data.dtype))
data = tf.where(mask_expand, data, minus_ones)
Here is a simple workaround (changing the shape of the mask); maybe there is a better method, but I can't figure one out right now.
# reshape mask to the same shape with data
batch_size, total_timestep, feature_dimension = tf.shape(data)
# mask = [[[True], [True], [False]], [[False], [True], [True]]]
mask = tf.reshape(mask, [batch_size, total_timestep, 1]) # shape=(2, 3, 1)
# mask = [[[True, True], [True, True], [False, False]], [[False, False], [True, True], [True, True]]]
mask = tf.broadcast_to(mask, [batch_size, total_timestep, feature_dimension]) # shape=(2, 3, 2)
# adapt mask
data = tf.where(mask, data, tf.constant(-1, dtype=data.dtype))
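For comparison, the same pattern works in plain NumPy: adding a trailing axis lets the (2, 3) mask broadcast against the (2, 3, 2) data without an explicit tile or broadcast_to.

```python
import numpy as np

data = np.array([[[0, 0], [1, 1], [2, 2]],
                 [[3, 3], [4, 4], [5, 5]]], dtype=float)
mask = np.array([[True, True, False],
                 [False, True, True]])

# mask[..., None] has shape (2, 3, 1) and broadcasts over the feature axis.
out = np.where(mask[..., None], data, -1.0)
# out -> [[[0, 0], [1, 1], [-1, -1]], [[-1, -1], [4, 4], [5, 5]]]
```

The same trailing-axis trick works in TF with tf.where, since tf.where also broadcasts its arguments.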

How exactly does numpy.where() select the elements in this example?

From the numpy docs:
>>> np.where([[True, False], [True, True]],
... [[1, 2], [3, 4]],
... [[9, 8], [7, 6]])
array([[1, 8],
[3, 4]])
Am I right in assuming that the [[True, False], [True, True]] part is the condition and [[1, 2], [3, 4]] and [[9, 8], [7, 6]] are x and y respectively, according to the docs' parameters?
Then how exactly is the function choosing the elements in the following examples?
Also, why is the element type in these examples a list?
>>> np.where([[True, False,True], [False, True]], [[1, 2,56], [3, 4]], [[9, 8,79], [7, 6]])
array([list([1, 2, 56]), list([3, 4])], dtype=object)
>>> np.where([[False, False,True,True], [False, True]], [[1, 2,56,69], [3, 4]], [[9, 8,90,100], [7, 6]])
array([list([1, 2, 56, 69]), list([3, 4])], dtype=object)
In the first case, each term is a (2,2) array (or rather a list that can be made into such an array). For each True in the condition, it returns the corresponding term in x, i.e. [[1, -], [3, 4]], and for each False, the term from y, i.e. [[-, 8], [-, -]] (dashes marking positions filled from the other array).
In the second case, the lists are ragged:
In [1]: [[True, False,True], [False, True]]
Out[1]: [[True, False, True], [False, True]]
In [2]: np.array([[True, False,True], [False, True]])
Out[2]: array([list([True, False, True]), list([False, True])], dtype=object)
The array is (2,), holding 2 lists. When cast to boolean it becomes a 2-element array with both elements True; only an empty list would produce False.
In [3]: _.astype(bool)
Out[3]: array([ True, True])
The where then returns just the x values.
This second case is understandable, but pathological.
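To see this concretely (note that recent NumPy versions require an explicit dtype=object to build a ragged array at all):

```python
import numpy as np

# Ragged rows cannot form a 2-D array; with dtype=object we get a 1-D
# array holding two Python lists.
cnd = np.array([[True, False, True], [False, True]], dtype=object)
assert cnd.shape == (2,)

# Cast to bool: bool() of a non-empty list is True, so both elements
# become True, and np.where then selects everything from x.
assert cnd.astype(bool).tolist() == [True, True]
```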
more details
Let's demonstrate where in more detail, with a simpler case. Same condition array:
In [57]: condition = np.array([[True, False], [True, True]])
In [58]: condition
Out[58]:
array([[ True, False],
[ True, True]])
The single-argument version, which is equivalent to condition.nonzero():
In [59]: np.where(condition)
Out[59]: (array([0, 1, 1]), array([0, 0, 1]))
Some find it easier to visualize the transpose of that tuple - the 3 pairs of coordinates where condition is True:
In [60]: np.argwhere(condition)
Out[60]:
array([[0, 0],
[1, 0],
[1, 1]])
Now the simplest version with 3 arguments, with scalar values.
In [61]: np.where(condition, True, False) # same as condition
Out[61]:
array([[ True, False],
[ True, True]])
In [62]: np.where(condition, 100, 200)
Out[62]:
array([[100, 200],
[100, 100]])
A good way of visualizing this action is with two masked assignments.
In [63]: res = np.zeros(condition.shape, int)
In [64]: res[condition] = 100
In [65]: res[~condition] = 200
In [66]: res
Out[66]:
array([[100, 200],
[100, 100]])
Another way to do this is to initialize an array with the y value(s), and then use the nonzero where result to fill in the x value(s).
In [69]: res = np.full(condition.shape, 200)
In [70]: res
Out[70]:
array([[200, 200],
[200, 200]])
In [71]: res[np.where(condition)] = 100
In [72]: res
Out[72]:
array([[100, 200],
[100, 100]])
If x and y are arrays, not scalars, this masked assignment will require refinements, but hopefully for a start this will help.
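For array-valued x and y, one sketch of that refinement is to copy y and overwrite the True positions with the matching elements of x:

```python
import numpy as np

condition = np.array([[True, False], [True, True]])
x = np.array([[1, 2], [3, 4]])
y = np.array([[9, 8], [7, 6]])

res = y.copy()
res[condition] = x[condition]   # the boolean mask picks matching positions
# res -> [[1, 8], [3, 4]], identical to np.where(condition, x, y)
```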
np.where(condition, x, y)
It checks the condition elementwise: where it is True it returns the element from x, otherwise the element from y.
np.where([[True, False], [True, True]],
[[1, 2], [3, 4]],
[[9, 8], [7, 6]])
Here your condition is [[True, False], [True, True]]
x = [[1, 2], [3, 4]]
y = [[9, 8], [7, 6]]
The first condition is True, so it returns 1 instead of 9.
The second condition is False, so it returns 8 instead of 2.
After reading about broadcasting as @hpaulj suggested, I think I know how the function works.
It will try to broadcast the 3 arrays; if the broadcast is successful, it will use the True and False values to pick elements from either x or y.
In the example
>>>np.where([[True, False,True], [False, True]], [[1, 2,56], [3, 4]], [[9, 8,79], [7, 6]])
We have
cnd=np.array([[True, False,True], [False, True]])
x=np.array([[1, 2,56], [3, 4]])
y=np.array([[9, 8,79], [7, 6]])
Now
>>>x.shape
Out[7]: (2,)
>>>y.shape
Out[8]: (2,)
>>>cnd.shape
Out[9]: (2,)
So all three are just arrays with 2 elements (of type list), even the condition (cnd). So both [True, False, True] and [False, True] will be evaluated as True, and both elements will be selected from x.
>>>np.where([[True, False,True], [False, True]], [[1, 2,56], [3, 4]], [[9, 8,79], [7, 6]])
Out[10]: array([list([1, 2, 56]), list([3, 4])], dtype=object)
I also tried it with a more complex example (a 2x2x2 broadcast) and it still explains the behavior.
np.where([[[True,False],[True,True]], [[False,False],[True,False]]],
[[[12,45],[10,50]], [[100,10],[17,81]]],
[[[90,93],[85,13]], [[12,345], [190,56,34]]])
Where
cnd=np.array([[[True,False],[True,True]], [[False,False],[True,False]]])
x=np.array([[[12,45],[10,50]], [[100,10],[17,81]]])
y=np.array( [[[90,93],[85,13]], [[12,345], [190,56,34]]])
Here cnd and x have the shape (2,2,2) and y has the shape (2,2).
>>>cnd.shape
Out[14]: (2, 2, 2)
>>>x.shape
Out[15]: (2, 2, 2)
>>>y.shape
Out[16]: (2, 2)
Now, as @hpaulj commented, y will be broadcast to (2,2,2).
And it'll probably look like this:
>>>cnd
Out[6]:
array([[[ True, False],
[ True, True]],
[[False, False],
[ True, False]]])
>>>x
Out[7]:
array([[[ 12, 45],
[ 10, 50]],
[[100, 10],
[ 17, 81]]])
>>>np.broadcast_to(y,(2,2,2))
Out[8]:
array([[[list([90, 93]), list([85, 13])],
[list([12, 345]), list([190, 56, 34])]],
[[list([90, 93]), list([85, 13])],
[list([12, 345]), list([190, 56, 34])]]], dtype=object)
And the result can be easily predicted to be
>>>np.where([[[True,False],[True,True]], [[False,False],[True,False]]], [[[12,45],[10,50]], [[100,10],[17,81]]],[[[90,93],[85,13]], [[12,345], [190,56,34]]])
Out[9]:
array([[[12, list([85, 13])],
[10, 50]],
[[list([90, 93]), list([85, 13])],
[17, list([190, 56, 34])]]], dtype=object)

Check if 2d array exists in 3d array in Python?

I have a 3d array with shape (1000, 12, 30), and I have a list of 2d arrays of shape (12, 30). What I want to do is check if these 2d arrays exist in the 3d array. Is there a simple way in Python to do this? I tried the keyword in but it doesn't work.
There is a way in numpy; you can do it with np.all:
a = np.random.rand(3, 1, 2)
b = a[1][0]
np.all(np.all(a == b, 1), 1)
Out[612]: array([False, True, False])
Solution from @bnaecker:
np.all(a == b, axis=(1, 2))
If you only want to check whether it exists or not:
np.any(np.all(a == b, axis=(1, 2)))
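A small self-contained sanity check of this approach:

```python
import numpy as np

a = np.arange(12).reshape(3, 2, 2)   # stack of three 2x2 blocks
b = a[1].copy()                      # a 2x2 array known to be in the stack

matches = np.all(a == b, axis=(1, 2))   # one bool per 2x2 block
assert matches.tolist() == [False, True, False]
assert np.any(matches)                                # b occurs in a
assert not np.any(np.all(a == b + 100, axis=(1, 2)))  # a shifted copy does not
```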
Here is a fast method (previously used by @DanielF as well as @jaime and others, no doubt) that uses a trick to benefit from short-circuiting: view-cast template-sized blocks to single elements of dtype void. When comparing two such blocks, numpy stops after the first difference, yielding a huge speed advantage.
>>> def in_(data, template):
...     dv = data.reshape(data.shape[0], -1).view(f'V{data.dtype.itemsize*np.prod(data.shape[1:])}').ravel()
...     tv = template.ravel().view(f'V{template.dtype.itemsize*template.size}').reshape(())
...     return (dv==tv).any()
Example:
>>> a = np.random.randint(0, 100, (1000, 12, 30))
>>> check = a[np.random.randint(0, 1000, (10,))]
>>> check += np.random.random(check.shape) < 0.001
>>>
>>> [in_(a, c) for c in check]
[True, True, True, False, False, True, True, True, True, False]
# compare to other method
>>> (a==check[:, None]).all((-1,-2)).any(-1)
array([ True, True, True, False, False, True, True, True, True,
False])
This gives the same result as the "direct" numpy approach, but is almost 20x faster:
>>> from timeit import timeit
>>> kwds = dict(globals=globals(), number=100)
>>>
>>> timeit("(a==check[:, None]).all((-1,-2)).any(-1)", **kwds)
0.4793281531892717
>>> timeit("[in_(a, c) for c in check]", **kwds)
0.026218891143798828
Numpy
Given
a = np.arange(12).reshape(3, 2, 2)
lst = [
np.arange(4).reshape(2, 2),
np.arange(4, 8).reshape(2, 2)
]
print(a, *lst, sep='\n{}\n'.format('-' * 20))
[[[ 0 1]
[ 2 3]]
[[ 4 5]
[ 6 7]]
[[ 8 9]
[10 11]]]
--------------------
[[0 1]
[2 3]]
--------------------
[[4 5]
[6 7]]
Notice that lst is a list of arrays as per OP. I'll make that a 3d array b below.
Use broadcasting: following the broadcasting rules, I want the dimensions of a to be (1, 3, 2, 2) and b to be (2, 1, 2, 2).
b = np.array(lst)
x, *y = b.shape
c = np.equal(
    a.reshape(1, *a.shape),
    np.array(lst).reshape(x, 1, *y)
)
I'll use all to produce a (2, 3) array of truth values and np.where to find out which among the a and b sub-arrays are actually equal.
i, j = np.where(c.all((-2, -1)))
This is just a verification that we achieved what we were after. We are supposed to observe that for each paired i and j values, the sub-arrays are actually the same.
for t in zip(i, j):
    print(a[t[0]], b[t[1]], sep='\n\n')
    print('------')
[[0 1]
[2 3]]
[[0 1]
[2 3]]
------
[[4 5]
[6 7]]
[[4 5]
[6 7]]
------
in
However, to complete OP's thought on using in
a_ = a.tolist()
list(filter(lambda x: x.tolist() in a_, lst))
[array([[0, 1],
[2, 3]]), array([[4, 5],
[6, 7]])]
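For comparison, a more compact way to write the same broadcast comparison uses None to insert the new axes instead of reshape (a sketch with small arrays):

```python
import numpy as np

a = np.arange(12).reshape(3, 2, 2)   # three 2x2 blocks
b = np.arange(8).reshape(2, 2, 2)    # two 2x2 templates, both present in a

# b[:, None] is (2, 1, 2, 2) and a[None] is (1, 3, 2, 2); the comparison
# broadcasts to (2, 3, 2, 2), and .all over the last two axes leaves a
# (template, block) truth table.
hits = (b[:, None] == a[None]).all(axis=(-2, -1))
i, j = np.nonzero(hits)
# i -> [0, 1], j -> [0, 1]: template 0 matches block 0, template 1 block 1
```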

Using numpy.where() to iterate through a matrix

There's something about numpy.where() I do not understand:
Let's say I have a 2D numpy ndarray:
import numpy as np
twodim = np.array([[1, 2, 3, 4], [1, 6, 7, 8], [1, 1, 1, 12], [17, 3, 15, 16], [17, 3, 18, 18]])
Now, I would like to create a function which "checks" this numpy array for a variety of conditions.
array([[ 1, 2, 3, 4],
[ 1, 6, 7, 8],
[ 1, 1, 1, 12],
[17, 3, 15, 16],
[17, 3, 18, 18]])
For example, which entries in this array have (A) even numbers (B) greater than 7 (C) divisible by 3?
I would like to use numpy.where() for this, and iterate through each entry of this array, finally finding the elements which match all conditions (if such an entry exists):
even_entries = np.where(twodim % 2 == 0)
greater_seven = np.where(twodim > 7 )
divisible_three = np.where(twodim % 3 == 0)
How does one do this? I am not sure how to iterate through Booleans...
I could access the indices of the matrix (i,j) via
np.argwhere(even_entries)
We could do something like
import numpy as np
twodim = np.array([[1, 2, 3, 4], [1, 6, 7, 8], [1, 1, 1, 12], [17, 3, 15, 16], [17, 3, 18, 18]])
even_entries = np.where(twodim % 2 == 0)
greater_seven = np.where(twodim > 7 )
divisible_three = np.where(twodim % 3 == 0)
for row in even_entries:
    for item in row:
        if item:  # equivalent to `if item == True`
            ...
for row in greater_seven:
    for item in row:
        if item:  # equivalent to `if item == True`
            ...
for row in divisible_three:
    for item in row:
        if item:  # equivalent to `if item == True`
            ...  # something like print(np.argwhere())
Any advice?
EDIT1: Great ideas below. As @hpaulj mentions, "Your tests produce a boolean matrix of the same shape as twodim".
This is a problem I'm running into as I toy around---not all conditionals produce matrices the same shape as my starting matrix. For instance, let's say I'm comparing whether an array element has a matching element to its left or right (i.e. horizontally):
twodim[:, :-1] == twodim[:, 1:]
That results in a (5,3) Boolean array, whereas our original matrix is a (5,4) array
array([[False, False, False],
[False, False, False],
[ True, True, False],
[False, False, False],
[False, False, True]], dtype=bool)
If we do the same vertically, that results in a (4,4) Boolean array, whereas the original matrix is (5,4)
twodim[:-1] == twodim[1:]
array([[ True, False, False, False],
[ True, False, False, False],
[False, False, False, False],
[ True, True, False, False]], dtype=bool)
If we wished to know which entries have both vertical and horizontal pairs, it is non-trivial to figure out which dimension we are in.
Your tests produce a boolean matrix of the same shape as twodim:
In [487]: mask3 = twodim%3==0
In [488]: mask3
Out[488]:
array([[False, False, True, False],
[False, True, False, False],
[False, False, False, True],
[False, True, True, False],
[False, True, True, True]], dtype=bool)
As other answers noted, you can combine tests logically, using the elementwise operators & (and) and | (or).
np.where is the same as np.nonzero (in this use), and just returns the coordinates of the True values, as a tuple of 2 arrays.
In [489]: np.nonzero(mask3)
Out[489]:
(array([0, 1, 2, 3, 3, 4, 4, 4], dtype=int32),
array([2, 1, 3, 1, 2, 1, 2, 3], dtype=int32))
argwhere returns the same values, but as a transposed 2d array.
In [490]: np.argwhere(mask3)
Out[490]:
array([[0, 2],
[1, 1],
[2, 3],
[3, 1],
[3, 2],
[4, 1],
[4, 2],
[4, 3]], dtype=int32)
Both the mask and tuple can be used to index your array directly:
In [494]: twodim[mask3]
Out[494]: array([ 3, 6, 12, 3, 15, 3, 18, 18])
In [495]: twodim[np.nonzero(mask3)]
Out[495]: array([ 3, 6, 12, 3, 15, 3, 18, 18])
The argwhere can't be used directly for indexing, but may be more suitable for iteration, especially if you want the indexes as well as the values:
In [496]: for i,j in np.argwhere(mask3):
.....: print(i,j,twodim[i,j])
.....:
0 2 3
1 1 6
2 3 12
3 1 3
3 2 15
4 1 3
4 2 18
4 3 18
The same thing with where requires a zip:
for i,j in zip(*np.nonzero(mask3)): print(i,j,twodim[i,j])
BUT in general in numpy we try to avoid iteration. If you can use twodim[mask] directly your code will be much faster.
Logical combinations of the boolean masks are easier to produce than combinations of the where indices. To use the indices I'd probably resort to set operations (union, intersect, difference).
As for a reduced-size test, you have to decide how that maps onto the original array (and other tests). e.g.
A (5,3) mask (difference between columns):
In [505]: dmask=np.diff(twodim, 1).astype(bool)
In [506]: dmask
Out[506]:
array([[ True, True, True],
[ True, True, True],
[False, False, True],
[ True, True, True],
[ True, True, False]], dtype=bool)
It can index 3 columns of the original array
In [507]: twodim[:,:-1][dmask]
Out[507]: array([ 1, 2, 3, 1, 6, 7, 1, 17, 3, 15, 17, 3])
In [508]: twodim[:,1:][dmask]
Out[508]: array([ 2, 3, 4, 6, 7, 8, 12, 3, 15, 16, 3, 18])
It can also be combined with 3 columns of another mask:
In [509]: dmask & mask3[:,:-1]
Out[509]:
array([[False, False, True],
[False, True, False],
[False, False, False],
[False, True, True],
[False, True, False]], dtype=bool)
It is still easier to combine tests in the boolean array form than with where indices.
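Following up on the edit about horizontal and vertical pairs: one option (a sketch, not the only convention) is to pad each reduced mask back to the original (5, 4) shape, marking an entry as paired if it equals either neighbour along that axis:

```python
import numpy as np

twodim = np.array([[1, 2, 3, 4], [1, 6, 7, 8], [1, 1, 1, 12],
                   [17, 3, 15, 16], [17, 3, 18, 18]])

h = twodim[:, :-1] == twodim[:, 1:]   # (5, 3): equals right neighbour
v = twodim[:-1] == twodim[1:]         # (4, 4): equals neighbour below

# An element has a horizontal pair if it equals its left OR right neighbour.
has_h = np.zeros(twodim.shape, bool)
has_h[:, :-1] |= h
has_h[:, 1:] |= h
# Same idea vertically.
has_v = np.zeros(twodim.shape, bool)
has_v[:-1] |= v
has_v[1:] |= v

both = has_h & has_v
# only twodim[2, 0] (a 1 with a 1 to its right and a 1 above) qualifies
```

Once the masks share the original shape, they combine with & and | like any other test.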
import numpy as np
twodim = np.array([[1, 2, 3, 4], [1, 6, 7, 8], [1, 1, 1, 12], [17, 3, 15, 16], [17, 3, 18, 18]])
condition = (twodim % 2. == 0.) & (twodim > 7.) & (twodim % 3. == 0.)
location = np.argwhere(condition)
for i in location:
    print(i, twodim[i[0], i[1]], end=' ')
>>> [2 3] 12 [4 2] 18 [4 3] 18
If you want to find where all three conditions are satisfied:
import numpy as np
twodim = np.array([[1, 2, 3, 4], [1, 6, 7, 8], [1, 1, 1, 12], [17, 3, 15, 16], [17, 3, 18, 18]])
mask = (twodim % 2 == 0) & (twodim > 7) & (twodim % 3 == 0)
print(twodim[mask])
[12 18 18]
Not sure what you want in the end: whether all elements in the row must satisfy the condition (and you want to find those rows), or whether you want individual elements.

Matrix row difference, output a boolean vector

I have an m x 3 matrix A and its row subset B (n x 3). Both are sets of indices into another, large 4D matrix; their data type is dtype('int64'). I would like to generate a boolean vector x, where x[i] = True if B does not contain row A[i,:].
There are no duplicate rows in either A or B.
I was wondering if there's an efficient way how to do this in Numpy? I found an answer that's somewhat related: https://stackoverflow.com/a/11903368/265289; however, it returns the actual rows (not a boolean vector).
You could follow the same pattern as shown in jterrace's answer, except use np.in1d instead of np.setdiff1d:
import numpy as np
np.random.seed(2015)
m, n = 10, 5
A = np.random.randint(10, size=(m,3))
B = A[np.random.choice(m, n, replace=False)]
print(A)
# [[2 2 9]
# [6 8 5]
# [7 8 0]
# [6 7 8]
# [3 8 6]
# [9 2 3]
# [1 2 6]
# [2 9 8]
# [5 8 4]
# [8 9 1]]
print(B)
# [[2 2 9]
# [1 2 6]
# [2 9 8]
# [3 8 6]
# [9 2 3]]
def using_view(A, B, assume_unique=False):
    Ad = np.ascontiguousarray(A).view([('', A.dtype)] * A.shape[1])
    Bd = np.ascontiguousarray(B).view([('', B.dtype)] * B.shape[1])
    return ~np.in1d(Ad, Bd, assume_unique=assume_unique)
print(using_view(A, B, assume_unique=True))
yields
[False True True True False False False False True True]
You can use assume_unique=True (which can speed up the calculation) since there are no duplicate rows in A or B.
Beware that A.view(...) will raise
ValueError: new type not compatible with array.
if A.flags['C_CONTIGUOUS'] is False (i.e. if A is not a C-contiguous array).
Therefore, in general we need to use np.ascontiguousarray(A) before calling view.
As B.M. suggests, you could instead view each row using the "void" dtype:
def using_void(A, B):
    dtype = 'V{}'.format(A.dtype.itemsize * A.shape[-1])
    Ad = np.ascontiguousarray(A).view(dtype)
    Bd = np.ascontiguousarray(B).view(dtype)
    return ~np.in1d(Ad, Bd, assume_unique=True)
This is safe to use with integer dtypes. However, note that
In [342]: np.array([-0.], dtype='float64').view('V8') == np.array([0.], dtype='float64').view('V8')
Out[342]: array([False], dtype=bool)
so using np.in1d after viewing as void may return incorrect results for arrays
with float dtype.
Here is a benchmark of some of the proposed methods:
import numpy as np
np.random.seed(2015)
m, n = 10000, 5000
# Note A may contain duplicate rows,
# so don't use assume_unique=True for these benchmarks.
# In this case, using assume_unique=False does not improve the speed much anyway.
A = np.random.randint(10, size=(2*m,3))
# make A not C_CONTIGUOUS; the view methods fail for non-contiguous arrays
A = A[::2]
B = A[np.random.choice(m, n, replace=False)]
def using_view(A, B, assume_unique=False):
    Ad = np.ascontiguousarray(A).view([('', A.dtype)] * A.shape[1])
    Bd = np.ascontiguousarray(B).view([('', B.dtype)] * B.shape[1])
    return ~np.in1d(Ad, Bd, assume_unique=assume_unique)

from scipy.spatial import distance
def using_distance(A, B):
    return ~np.any(distance.cdist(A, B) == 0, 1)

from functools import reduce
def using_loop(A, B):
    pred = lambda i: A[:, i:i+1] == B[:, i]
    return ~reduce(np.logical_and, map(pred, range(A.shape[1]))).any(axis=1)

from pandas.core.groupby import get_group_index, _int64_overflow_possible
from functools import partial
def using_pandas(A, B):
    shape = [1 + max(A[:, i].max(), B[:, i].max()) for i in range(A.shape[1])]
    assert not _int64_overflow_possible(shape)
    encode = partial(get_group_index, shape=shape, sort=False, xnull=False)
    a1, b1 = map(encode, (A.T, B.T))
    return ~np.in1d(a1, b1)

def using_void(A, B):
    dtype = 'V{}'.format(A.dtype.itemsize * A.shape[-1])
    Ad = np.ascontiguousarray(A).view(dtype)
    Bd = np.ascontiguousarray(B).view(dtype)
    return ~np.in1d(Ad, Bd)

# Sanity check: make sure all the functions return the same result
for func in (using_distance, using_loop, using_pandas, using_void):
    assert (func(A, B) == using_view(A, B)).all()
In [384]: %timeit using_pandas(A, B)
100 loops, best of 3: 1.99 ms per loop
In [381]: %timeit using_void(A, B)
100 loops, best of 3: 6.72 ms per loop
In [378]: %timeit using_view(A, B)
10 loops, best of 3: 35.6 ms per loop
In [383]: %timeit using_loop(A, B)
1 loops, best of 3: 342 ms per loop
In [379]: %timeit using_distance(A, B)
1 loops, best of 3: 502 ms per loop
Since there are only 3 columns, one solution would be to just reduce across columns:
>>> a
array([[2, 2, 9],
[6, 8, 5],
[7, 8, 0],
[6, 7, 8],
[3, 8, 6],
[9, 2, 3],
[1, 2, 6],
[2, 9, 8],
[5, 8, 4],
[8, 9, 1]])
>>> b
array([[2, 2, 9],
[1, 2, 6],
[2, 9, 8],
[3, 8, 6],
[9, 2, 3]])
>>> from functools import reduce
>>> pred = lambda i: a[:, i:i+1] == b[:,i]
>>> reduce(np.logical_and, map(pred, range(a.shape[1]))).any(axis=1)
array([ True, False, False, False, True, True, True, True, False, False], dtype=bool)
though this would create an m x n intermediate array which may not be memory efficient.
Alternatively, if the values are indices, i.e. non-negative integers, you may use pandas.groupby.get_group_index to reduce to one-dimensional arrays. This is an efficient algorithm which pandas uses internally for groupby operations; the only caveat is that you may need to verify that there will not be any integer overflow:
>>> from pandas.core.groupby import get_group_index, _int64_overflow_possible
>>> from functools import partial
>>> shape = [1 + max(a[:, i].max(), b[:, i].max()) for i in range(a.shape[1])]
>>> assert not _int64_overflow_possible(shape)
>>> encode = partial(get_group_index, shape=shape, sort=False, xnull=False)
>>> a1, b1 = map(encode, (a.T, b.T))
>>> np.in1d(a1, b1)
array([ True, False, False, False, True, True, True, True, False, False], dtype=bool)
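If you want to stay within the public NumPy API, the same row-encoding idea can be sketched with np.ravel_multi_index (again assuming non-negative integers whose encoded range doesn't overflow):

```python
import numpy as np

a = np.array([[2, 2, 9], [6, 8, 5], [7, 8, 0], [6, 7, 8], [3, 8, 6],
              [9, 2, 3], [1, 2, 6], [2, 9, 8], [5, 8, 4], [8, 9, 1]])
b = np.array([[2, 2, 9], [1, 2, 6], [2, 9, 8], [3, 8, 6], [9, 2, 3]])

# Treat each row as coordinates into a virtual grid and encode it as a
# single integer; equal rows get equal codes.
shape = np.maximum(a.max(axis=0), b.max(axis=0)) + 1
a1 = np.ravel_multi_index(a.T, shape)
b1 = np.ravel_multi_index(b.T, shape)
x = ~np.isin(a1, b1)   # True where the row of a does NOT appear in b
```

This reproduces the [False True True True False False False False True True] vector from above (np.isin is the modern spelling of np.in1d).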
You can treat A and B as two sets of XYZ points and calculate the Euclidean distances between them with scipy.spatial.distance.cdist. The zero distances are the ones of interest to us, since a distance is zero exactly when the rows are equal. This distance calculation is supposed to be a pretty efficient implementation, so hopefully we have an efficient solution for our case. The implementation to find such a boolean output would look like this:
from scipy.spatial import distance
out = ~np.any(distance.cdist(A,B)==0,1)
# OR np.all(distance.cdist(A,B)!=0,1)
Sample run -
In [582]: A
Out[582]:
array([[0, 2, 2],
[1, 0, 3],
[3, 3, 3],
[2, 0, 3],
[2, 0, 1],
[1, 1, 1]])
In [583]: B
Out[583]:
array([[2, 0, 3],
[2, 3, 3],
[1, 1, 3],
[2, 0, 1],
[0, 2, 2],
[2, 2, 2],
[1, 2, 3]])
In [584]: out
Out[584]: array([False, True, True, False, False, True], dtype=bool)
