How to elegantly drop unnecessary elements in numpy?

I have an ndarray of shape [batch_size, seq_len, num_features]. However, some elements at the end of the sequence dimension are unnecessary, so I want to drop them and merge the sequence dimension into the batch dimension. For example, the ndarray a I want to manipulate is
batch_size = 2
seq_len = 3
num_features = 1
a = np.random.randn(batch_size, seq_len, num_features)
mask = np.ones((batch_size, seq_len), dtype=bool)  # np.bool is deprecated; use the builtin bool
mask[0][1:] = 0
mask[1][2:] = 0
"""
>>> a = [[[-0.3908401 ]
[ 0.89686512]
[ 0.07594243]]
[[-0.12256737]
[-1.00838131]
[ 0.56543754]]]
mask=[[ True False False]
[ True True False]]
"""
where mask indicates whether the elements in a are useful. I can get what I want using the following code:
res = []
for seq, m in zip(a, mask):
    res.append(seq[:sum(m)])
np.concatenate(res, axis=0)
"""
>>>array([[0.08676509],
[0.47162315],
[0.98070665]])
"""
I'm wondering if there is a more elegant way to do this in numpy?

Not sure if this is what you're asking, but the results look fine:
res = a[mask]

Since the batch and seq dimensions are going to be merged anyway, you can reshape a to a 2D array of shape (batch_size * seq_len, num_features) and flatten mask to 1D of length batch_size * seq_len.
Then simply filter the relevant rows using a boolean index. See the code:
mask2d = mask.reshape(-1) # or mask.ravel()
a2d = a.reshape(-1, num_features)
result = a2d[mask2d]
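
As a quick sanity check, all three approaches agree on the question's setup; a minimal sketch (note the loop matches the boolean-index versions here only because each mask is a prefix of True values):
import numpy as np

batch_size, seq_len, num_features = 2, 3, 1
a = np.random.randn(batch_size, seq_len, num_features)
mask = np.ones((batch_size, seq_len), dtype=bool)
mask[0][1:] = 0
mask[1][2:] = 0
# Loop version from the question
loop_res = np.concatenate([seq[:m.sum()] for seq, m in zip(a, mask)], axis=0)
# Boolean indexing over the first two dims, and the explicit reshape variant
assert np.array_equal(a[mask], loop_res)
assert np.array_equal(a.reshape(-1, num_features)[mask.ravel()], loop_res)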

Related

How to use tf.gather_nd for multi-dimensional tensor

I don't fully understand how I should use tf.gather_nd() to pick elements along some axis of a multi-dimensional tensor. Let's take a small example (if I get an answer for this simple example, it will also solve my more complex original problem). Say I have an RGB image and I am trying to pick the smallest pixel value along the channel dimension (the last dimension if the data order is (B,H,W,C)). I know this can be done with tf.reduce_min(x, axis=-1), but I would like to know whether the same thing is also possible with tf.argmin() and tf.gather_nd()?
from skimage import data
import tensorflow as tf
import numpy as np
# Load RGB image from skimage, cast it to float32 and put it in order (B,H,W,C)
image = data.astronaut()
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, axis=0)
# Take minimum pixel value of each channel in a way number 1
min_along_channels_1 = tf.reduce_min(image, axis=-1)
# Take minimum pixel value of each channel in a way number 2
# The goal is that min_along_channels_1 is equal to min_along_channels_2
idxs = tf.argmin(image, axis=-1)
min_along_channels_2 = tf.gather_nd(image, idxs) # This line gives error :(
You will have to use tf.meshgrid, which creates a rectangular grid from two one-dimensional arrays representing the indices along the first and second dimensions, since tf.gather_nd needs to know exactly where to extract values across the dimensions. Here is a simplified example:
import tensorflow as tf
image = tf.random.normal((1, 4, 4, 3))
image = tf.squeeze(image, axis=0)
idx = tf.argmin(image, axis=-1)
ij = tf.stack(tf.meshgrid(
    tf.range(image.shape[0], dtype=tf.int64),
    tf.range(image.shape[1], dtype=tf.int64),
    indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(idx, axis=-1)], axis=-1)
result = tf.gather_nd(image, gather_indices)
print('First option -->', tf.reduce_min(image, axis=-1))
print('Second option -->', result)
First option --> tf.Tensor(
[[-0.53245485 -0.29117298 -0.64434254 -0.8209638 ]
[-0.9386176 -0.5993224 -0.597746 -1.5392851 ]
[-0.5478666 -1.5280861 -1.0344954 -1.920418 ]
[-0.5580688 -1.425873 -1.9276617 -1.0668412 ]], shape=(4, 4), dtype=float32)
Second option --> tf.Tensor(
[[-0.53245485 -0.29117298 -0.64434254 -0.8209638 ]
[-0.9386176 -0.5993224 -0.597746 -1.5392851 ]
[-0.5478666 -1.5280861 -1.0344954 -1.920418 ]
[-0.5580688 -1.425873 -1.9276617 -1.0668412 ]], shape=(4, 4), dtype=float32)
Or with your example:
from skimage import data
import tensorflow as tf
import numpy as np
image = data.astronaut()
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, axis=0)
min_along_channels_1 = tf.reduce_min(image, axis=-1)
image = tf.squeeze(image, axis=0)
idx = tf.argmin(image, axis=-1)
ij = tf.stack(tf.meshgrid(
    tf.range(image.shape[0], dtype=tf.int64),
    tf.range(image.shape[1], dtype=tf.int64),
    indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(idx, axis=-1)], axis=-1)
min_along_channels_2 = tf.gather_nd(image, gather_indices)
print(tf.equal(min_along_channels_1, min_along_channels_2))
tf.Tensor(
[[[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]
...
[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]]], shape=(1, 512, 512), dtype=bool)
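
As a side note, newer TensorFlow versions also accept a batch_dims argument in tf.gather_nd, which avoids building the meshgrid indices by hand; a minimal sketch (assuming TF 2.x):
import tensorflow as tf

image = tf.random.normal((4, 4, 3))
idx = tf.argmin(image, axis=-1)  # shape (4, 4)
# The first two dims are treated as batch dims, so idx only indexes the channel dim
min_along_channels = tf.gather_nd(image, idx[..., tf.newaxis], batch_dims=2)
tf.debugging.assert_near(tf.reduce_min(image, axis=-1), min_along_channels)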

How can you specify a different matrix permutation for each element in a batch?

I have a constant symmetric matrix A with shape (50, 50) and inputs x with shape (batch_size, 50), where each entry is an integer in [0, 49] - these correspond to indices in A.
I wish to create a new tensor with shape (batch_size, 50, 50) where each element of the batch is the matrix A permuted according to the ordering given in the corresponding input x. Each input contains a different ordering of the integers from 0 to 49.
The only way I've thought of doing this does not work, and I fear it would be inefficient even if it didn't give an error:
# Given x and A
b = 4  # example batch size
x = np.zeros((b, 50), dtype=int)
for i in range(b):
    x[i, :] = np.random.permutation(50)
rand_mat = np.random.rand(50, 50)
A = np.matmul(rand_mat, np.transpose(rand_mat))  # a random symmetric matrix
# do permutation
batch_size = x.shape[0]  # infer batch size from inputs
permuted_matrices = np.zeros((batch_size, 50, 50))
for i in range(batch_size):
    permuted_matrices[i, :, :] = A[:, x[i, :]][x[i, :], :]  # permute both rows and columns according to x[i, :]
But when I call my layer, I get the error TypeError: 'Tensor' object cannot be interpreted as an integer (because of the for loop). If I use tf.shape(x)[0] instead of x.shape[0], then I get TypeError: Expected int32, got None of type 'NoneType' instead (because of np.zeros). Is there a TensorFlow function I could use that would make this easier?
Use gather() and gather_nd():
r = 50 # use 10 to check
batch_size = 10
x = tf.random.uniform((batch_size, r), 0, r, tf.int32)
A = tf.range(r * r)
A = tf.reshape(A, (r, r))
ind = x[..., tf.newaxis]
output = tf.gather_nd(A, ind) # permute rows
output = tf.transpose(output, (0, 2, 1))
output = tf.gather(output, x, axis=1, batch_dims=1) # permute columns
output = tf.transpose(output, (0, 2, 1))
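
To convince yourself the gather/transpose recipe really computes A[x[i]][:, x[i]] for every batch element, you can compare it against per-sample indexing; a quick check continuing from the code above:
for i in range(batch_size):
    expected = tf.gather(tf.gather(A, x[i]), x[i], axis=1)  # A[x[i]][:, x[i]]
    tf.debugging.assert_equal(output[i], expected)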

Efficient boolean masking with Tensorflow SparseTensors

So, I want to mask out entire rows of a SparseTensor. This would be easy to do with tf.boolean_mask, but there isn't an equivalent for SparseTensors. Currently, something that is possible is for me to just go through all of the indices in SparseTensor.indices and filter out all of the ones that aren't a masked row, e.g.:
masked_indices = list(filter(lambda index: masked_rows[index[0]], indices))
where masked_rows is a 1D array of whether or not the row at that index is masked.
However, this is really slow, since my SparseTensor is fairly large (it has 90k indices, and will grow significantly larger). It takes quite a few seconds for a single data point, before I even apply SparseTensor.mask on the filtered indices. Another flaw of this approach is that it doesn't actually remove the rows (although, in my case, a row of all zeros is just as fine).
Is there a better way to mask a SparseTensor by row, or is this the best approach?
You can do that like this:
import tensorflow as tf
def boolean_mask_sparse_1d(sparse_tensor, mask, axis=0):  # mask is assumed to be 1D
    mask = tf.convert_to_tensor(mask)
    # Gather the mask value for each sparse entry, by its index along `axis`
    ind = sparse_tensor.indices[:, axis]
    mask_sp = tf.gather(mask, ind)
    # The new size of `axis` is the number of True entries in the mask
    new_size = tf.math.count_nonzero(mask)
    new_shape = tf.concat([sparse_tensor.shape[:axis], [new_size],
                           sparse_tensor.shape[axis + 1:]], axis=0)
    new_shape = tf.dtypes.cast(new_shape, tf.int64)
    # Remap the kept indices to their positions after masked slices are removed
    mask_count = tf.cumsum(tf.dtypes.cast(mask, tf.int64), exclusive=True)
    masked_idx = tf.boolean_mask(sparse_tensor.indices, mask_sp)
    new_idx_axis = tf.gather(mask_count, masked_idx[:, axis])
    new_idx = tf.concat([masked_idx[:, :axis],
                         tf.expand_dims(new_idx_axis, 1),
                         masked_idx[:, axis + 1:]], axis=1)
    new_values = tf.boolean_mask(sparse_tensor.values, mask_sp)
    return tf.SparseTensor(new_idx, new_values, new_shape)
# Test
sp = tf.SparseTensor([[1], [3], [4], [6]], [1, 2, 3, 4], [7])
mask = tf.constant([True, False, True, True, False, False, True])
out = boolean_mask_sparse_1d(sp, mask)
print(out.indices.numpy())
# [[2]
# [3]]
print(out.values.numpy())
# [2 4]
print(out.shape)
# (4,)
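
As a sanity check, densifying the masked result should match tf.boolean_mask applied to the densified input (continuing the test above):
dense = tf.sparse.to_dense(sp)
tf.debugging.assert_equal(tf.sparse.to_dense(out),
                          tf.boolean_mask(dense, mask))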

Sum all diagonals in feature maps in parallel in PyTorch

Let's say I have a tensor of shape (1, 64, 128, 128) and I want to create a tensor of shape (1, 64, 255) holding the sums of all diagonals of every (128, 128) matrix (there is 1 main diagonal, 127 below and 127 above, so 255 in total). What I am currently doing is the following:
import torch

x = torch.rand(1, 64, 128, 128)
diag_sums = torch.zeros(1, 64, 255)
j = 0
for k in range(-127, 128):
    diag_sums[j, :, k + 127] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2)
This is obviously very slow, since it is using Python loops and is not done in parallel with respect to k.
I don't think this can be done using torch.diagonal since the function explicitly uses a single int for the offset parameter. If I could pass a list there, this would work, but I guess it would be complicated to implement (requiring changes in PyTorch itself).
I think it could be possible to implement this using torch.einsum, but I cannot think of a way to do it.
So this is my question: how do I get the tensor described above?
Have you considered using torch.nn.functional.conv2d?
You can sum the diagonals by sliding a diagonal filter across the tensor with appropriate zero padding.
import torch
import torch.nn.functional as nnf
# construct a diagonal filter using `eye` function, shape it appropriately
f = torch.eye(x.shape[2])[None, None,...].repeat(x.shape[1], 1, 1, 1)
# compute the diagonal sum with appropriate zero padding
conv_diag_sums = nnf.conv2d(x, f, padding=(x.shape[2]-1,0), groups=x.shape[1])[..., 0]
Note that the result is in a slightly different order than the one you computed in the loop:
diag_sums = torch.zeros(1, 64, 255)
j = 0
for k in range(-127, 128):
    diag_sums[j, :, 127 - k] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2)
# compare
(conv_diag_sums == diag_sums).all()
returns True - they are the same.
Shai's answer works, but it involves a lot of multiplications due to the large kernel size. I figured out a way to do this for my use case. It is based on this answer to a similar NumPy question: https://stackoverflow.com/a/35074207/6636290
I am doing the following:
import numpy as np
import torch

a = np.random.rand(128, 128)  # example input matrix
digitized = np.sum(np.indices(a.shape), axis=0).ravel()
digitized_tensor = torch.Tensor(digitized).int()
a_tensor = torch.Tensor(a)
torch.bincount(digitized_tensor, a_tensor.view(-1))
If I could figure out a way to do this entirely in PyTorch (without NumPy's indices function), that would be great, but this answers the question.
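
For what it's worth, the np.indices step can be reproduced with broadcasting in PyTorch, which removes the NumPy dependency; a sketch for a 2D tensor (note this bins elements by i + j, i.e. it sums anti-diagonals, exactly like the NumPy-based snippet above):
import torch

a = torch.rand(128, 128)  # example input
h, w = a.shape
digitized = (torch.arange(h).unsqueeze(1) + torch.arange(w)).reshape(-1)
antidiag_sums = torch.bincount(digitized, weights=a.reshape(-1))  # shape (255,)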
The previous answers work, but there is another, faster solution using strides (and only PyTorch).
First I'll explain with a matrix, as it is easier to understand.
Given a matrix M of size (n, n), you can change its strides so that the resulting matrix has M's diagonals as columns. Then you can just sum the columns to get your result.
import torch

def sum_all_diagonal_matrix(mat: torch.Tensor):
    n, _ = mat.shape
    zero_mat = torch.zeros((n, n))  # zero matrix used for padding
    mat_padded = torch.cat((zero_mat, mat, zero_mat), 1)  # pad the matrix on the left and right
    mat_strided = mat_padded.as_strided((n, 2 * n), (3 * n + 1, 1))  # change the strides: row i is shifted right by i
    sum_diags = torch.sum(mat_strided, 0)  # sum the resulting matrix's columns
    return sum_diags[1:]  # drop the first (empty) diagonal
X = torch.arange(9).reshape(3,3)
print(X)
# tensor([[0, 1, 2],
# [3, 4, 5],
# [6, 7, 8]])
print(sum_all_diagonal_matrix(X))
# tensor([ 6., 10., 12., 6., 2.])
You can do exactly the same with one more dimension:
def sum_all_diagonal(mat: torch.Tensor):
    k, n, _ = mat.shape
    zero_mat = torch.zeros((k, n, n))
    mat_padded = torch.cat((zero_mat, mat, zero_mat), 2)
    mat_strided = mat_padded.as_strided((k, n, 2 * n), (3 * n * n, 3 * n + 1, 1))
    sum_diags = torch.sum(mat_strided, 1)
    return sum_diags[:, 1:]
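
With the [:, 1:] slice, the output ordering runs from offset -(n-1) up to n-1, matching the loop in the question; a quick check (a sketch):
import torch

x = torch.rand(64, 128, 128)
loop = torch.stack([torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(-1)
                    for k in range(-127, 128)], dim=-1)
assert torch.allclose(sum_all_diagonal(x), loop)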

Get masked argmax with different mask for each row in TensorFlow

I have a tensor of shape Nx7, which looks something like this:
[[ 0.97863993  0.64479575 -0.202357    0.94678476  0.0080051   0.44507797  0.47864   ]
 [ 0.05914348 -0.72649432  0.193803    0.47295245  0.8381458   0.30449861  0.46783   ]]
I have another tensor of the same shape, which is a boolean mask:
[[ True False  True  True False  True False]
 [False  True False False  True False False]]
I want to get the argmax of each row in the first tensor, but only of those elements for which the mask is True, so basically the argmax of the following array:
[[ 0.97863993      X      -0.202357    0.94678476      X      0.44507797      X   ]
 [     X      -0.72649432      X           X       0.8381458       X          X   ]]
Which should thus become:
[0 4]
Is this possible in TensorFlow? I am trying to figure it out with tf.boolean_mask, but I don't see how to deal with different rows having differing numbers of True values in the mask.
Input code in TF:
mask = tf.placeholder(shape=[None, 7], dtype=tf.bool)
val = tf.placeholder(shape=[None, 7], dtype=tf.float32)
arg_max = ???
Note that I want negative values to be handled correctly as well (otherwise the method proposed by Ishant Mrinal would work).
Convert the boolean array into a float array
# mask = tf.placeholder(shape=[None, 7], dtype=tf.bool)
# mask = tf.cast(mask, dtype=tf.float32)
mask = tf.placeholder(shape=[None, 7], dtype=tf.float32)
val = tf.placeholder(shape=[None, 7], dtype=tf.float32)
argmax = tf.argmax(tf.multiply(val, mask), axis=1)
sess.run(argmax, {val: your_val_array, mask: 2*mask_bool_array.astype(float)-1 })
To emulate a masked argmax, you can set values outside of the mask to -inf, for example like this:
masked_val = tf.minimum(val, (2 * tf.to_float(mask) - 1) * np.inf)
masked_arg_max = tf.argmax(masked_val, axis=1)
Alternatively, to compute masked_val, you could use
masked_val = tf.where(mask, val, -tf.ones_like(val) * np.inf)
which is arguably clearer, but may waste memory.
For a masked argmin, you would do the opposite:
masked_val = tf.maximum(val, (1 - 2 * tf.to_float(mask)) * np.inf)
masked_arg_min = tf.argmin(masked_val, axis=1)
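
In TF 2.x (where placeholders and tf.to_float are gone), the tf.where variant can be checked directly against the example from the question; a minimal sketch:
import numpy as np
import tensorflow as tf

val = tf.constant([[0.97863993, 0.64479575, -0.202357, 0.94678476, 0.0080051, 0.44507797, 0.47864],
                   [0.05914348, -0.72649432, 0.193803, 0.47295245, 0.8381458, 0.30449861, 0.46783]])
mask = tf.constant([[True, False, True, True, False, True, False],
                    [False, True, False, False, True, False, False]])
masked_val = tf.where(mask, val, -np.inf * tf.ones_like(val))
print(tf.argmax(masked_val, axis=1).numpy())  # [0 4]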
