PyTorch tensor multi-dimensional selection - python

I have a question regarding efficient multi-dimensional selection on PyTorch tensors.
Assume I have a tensor a, with
# B=2, V=20000, d=64
a = torch.rand(B, V, d)
and a tensor b, with
# B=2, N=30000, k=10; the entries of b are indices in [0, V)
b = torch.randint(0, V, (B, N, k))
The goal is to construct a tensor of selected entries from a, namely
help_1 = a[:, None, :, :].repeat(1, N, 1, 1) # [B, N, V, d]
help_2 = b[:, :, :, None].expand(-1,-1,-1,d) # [B, N, k, d]
c = torch.gather(help_1, dim=2, index=help_2)
This operation indeed produces the desired result, but it is not very efficient, since it creates the very large helper tensor help_1 of size [2, 30000, 20000, 64]. Does anyone have an idea for doing this selection without creating such a large helper tensor? Thank you!

You could combine broadcasting with advanced indexing to save memory. Something like the following should work:
idx0 = torch.arange(B, device=b.device).reshape(-1, 1, 1, 1) # [B, 1, 1, 1]
idx1 = b[..., None] # [B, N, k, 1]
idx2 = torch.arange(d, device=b.device).reshape(1, 1, 1, -1) # [1, 1, 1, d]
c = a[idx0, idx1, idx2] # [B, N, k, d]
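A quick way to verify that this matches the gather-based version from the question (a minimal check at much smaller sizes, since help_1 would not fit in memory at the original ones):
import torch

B, V, d, N, k = 2, 50, 8, 30, 10  # much smaller sizes so help_1 fits in memory
a = torch.rand(B, V, d)
b = torch.randint(0, V, (B, N, k))

# gather-based version from the question
help_1 = a[:, None, :, :].repeat(1, N, 1, 1)     # [B, N, V, d]
help_2 = b[:, :, :, None].expand(-1, -1, -1, d)  # [B, N, k, d]
ref = torch.gather(help_1, dim=2, index=help_2)  # [B, N, k, d]

# broadcasted advanced indexing from this answer
idx0 = torch.arange(B, device=b.device).reshape(-1, 1, 1, 1)
idx2 = torch.arange(d, device=b.device).reshape(1, 1, 1, -1)
assert torch.equal(a[idx0, b[..., None], idx2], ref)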

Related

In numpy, multiply two structured matrices concisely

I have two matrices. The first has the following structure:
[[1, 0, a],
[0, 1, b],
[1, 0, c],
[0, 1, d]]
where 1, 0, a, b, c, and d are scalars. The matrix is 4 by 3.
The second is just a 2 by 3 matrix:
[[r1],
[r2]]
where r1 and r2 are the first and second rows respectively, each having 3 elements.
I would like the output to be:
[[r1, 0, a*r1],
[0, r1, b*r1],
[r2, 0, c*r2],
[0, r2, d*r2]]
which would be a 4 by 9 matrix.
This is similar to the Kronecker product, except separately for each row of the second matrix. Of course this could be done with cumbersome loops which I want to avoid.
How can I do this concisely?
You can do exactly what you said in the last line: do a separate Kronecker product for each row of the second matrix and then concatenate the results.
Let's assume that the two matrices are called x (4 by 3) and y (2 by 3). The first thing to do is to split x into two parts, because only half of the matrix participates in each part of the product.
x = x.reshape(2, 2, 3)
Then you can calculate the two products separately:
z0 = np.kron(x[0], y[0])
z1 = np.kron(x[1], y[1])
Finally, concatenate the two results along the first axis:
z = np.concatenate([z0, z1], axis=0)
Or if, like me, you enjoy big ugly one-liners you can do:
z = np.concatenate([np.kron(xr, yr) for xr, yr in zip(x.reshape(2, 2, 3), y)], axis=0)
In the general case you mentioned in the comments, it would become:
z = np.concatenate([np.kron(xr, yr) for xr, yr in zip(x.reshape(n // 2, 2, 3), y)], axis=0)
This gives the same results as the explicit loop, which I believe can be numba.jit-compiled:
def solve_explicit(x, y):
    # sanity checks
    assert x.shape[0] == 2 * y.shape[0]
    assert x.shape[1] == y.shape[1]
    n = x.shape[0]
    z = np.zeros((n, 9))
    for i in range(n):
        for j in range(3):
            for k in range(3):
                z[i, k + 3 * j] = x[i, j] * y[i // 2, k]
    return z
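For reference, a minimal sketch of that jit-compiled variant (assuming numba is installed; solve_explicit_jit is just a hypothetical name for this sketch):
import numba
import numpy as np

@numba.njit
def solve_explicit_jit(x, y):
    # same explicit triple loop, compiled with numba's nopython mode
    n = x.shape[0]
    z = np.zeros((n, 9))
    for i in range(n):
        for j in range(3):
            for k in range(3):
                z[i, k + 3 * j] = x[i, j] * y[i // 2, k]
    return z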
Using broadcasting, with x.shape (n, 3), and y.shape (n//2, 3):
out = (x.reshape(-1, 2, 3, 1) * y.reshape(-1, 1, 1, 3)).reshape(-1, 9)
I personally would use np.einsum in this situation because I think it's easier to understand than broadcasting.
import numpy as np
(a, b, c, d) = np.random.rand(4)
x = np.array([[1, 0, a], [0, 1, b], [1, 0, c], [0, 1, d]])
y = np.random.rand(2, 3)
z = np.einsum("ij,ik->ijk", x.reshape(-1, 6), y).reshape(-1, 9)
# timeit magic commands.
# %timeit -n 50000 np.einsum("ij,ik->ijk", x.reshape(-1, 6), y).reshape(-1, 9)
# %timeit -n 50000 (x.reshape(-1, 2, 3, 1) * y.reshape(-1, 1, 1, 3)).reshape(-1, 9)
Some good references on Einstein summation in NumPy: [2, 3, 4].

Indexing an array of unknown length in Python (Numpy, PyTorch)

I have an array of shape [m, 2, m, 2, ...]. By this, I mean that it has alternating dimensions of size m and 2 that repeat a number of times L. I would like a solution that works for any given L.
Example:
For L=1 the array would be of shape [m, 2]
For L=2 the array would be of shape [m, 2, m, 2]
For L=3 the array would be of shape [m, 2, m, 2, m, 2]
And so on...
I would like to index this array, in the dimensions of size m, with another array indices of shape [L, N], so as to eventually obtain an array of shape [N, 2, 2, ...].
For a given L (e.g. L=3), I would do the indexing as follows,
array[indices[0], :, indices[1], :, indices[2], :]
resulting in an array of shape [N, 2, 2, 2].
Is there a smart way to do the indexing for generic L?
(Hope to have made the question clear!)
Edit 1:
To give an idea of the intended behavior, here is an ugly solution:
def indexing(array, indices):
    L = indices.shape[0]
    if L == 1:
        array = array[indices[0]]
    elif L == 2:
        array = array[indices[0], :, indices[1], :]
    elif L == 3:
        array = array[indices[0], :, indices[1], :, indices[2], :]
    elif L == 4:
        array = array[indices[0], :, indices[1], :, indices[2], :, indices[3], :]
    # etc...
    return array
And a use example:
import torch
m = 5
N = 4
L = 3
array = torch.randn(m, 2, m, 2, m, 2)
indices = torch.randint(m, size=(L, N))
indexing(array, indices).shape # torch.Size([4, 2, 2, 2])
You can use len()! Pretty simple usage:
length = len(array)
for i in range(0, length):
    # do something
You can also access the last item of the array, whatever its length, by indexing with -1, like so:
array = [1, 1, 5, 2, 4, ..., 99]
print(array[-1]) # 99
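Coming back to the actual question: for generic L, you can build the index tuple programmatically (a minimal sketch; indexing_generic is a hypothetical helper relying on NumPy-style advanced indexing, which PyTorch follows: index tensors separated by slices move the broadcast dimension N to the front of the result):
import torch

def indexing_generic(array, indices):
    # interleave one index tensor and one full slice per (m, 2) axis pair,
    # reproducing array[indices[0], :, indices[1], :, ...]
    idx = tuple(x for i in indices for x in (i, slice(None)))
    return array[idx]

m, N, L = 5, 4, 3
array = torch.randn(m, 2, m, 2, m, 2)
indices = torch.randint(m, size=(L, N))
indexing_generic(array, indices).shape  # torch.Size([4, 2, 2, 2])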

Is there a function in TensorFlow that can do the following math?

I have two tensors, x and y, of shape [B, D]. I want to do something like the following code:
B, D = x.shape
x = tf.expand_dims(x, 1) # [B, 1, D]
y = tf.expand_dims(y, -1) # [B, D, 1]
z = x * y # [B, D, D]
z = tf.reshape(z, (B, D**2))
Is there a function in Tensorflow that already does this?
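One candidate (a sketch using tf.einsum, which expresses the same batched outer product in a single call; the final reshape is still needed):
import tensorflow as tf

B, D = 4, 3
x = tf.random.normal((B, D))
y = tf.random.normal((B, D))

# batched outer product: z[b, i, j] = y[b, i] * x[b, j], as in the expand_dims version
z = tf.einsum('bi,bj->bij', y, x)  # [B, D, D]
z = tf.reshape(z, (B, D * D))      # [B, D**2]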

Index PyTorch 4D tensor by values in 2D tensor

I have two pytorch tensors:
X with shape (A, B, C, D)
I with shape (A, B)
Values in I are integers in range [0, C).
What is the most efficient way to get tensor Y with shape (A, B, D), such that:
Y[i][j][k] = X[i][j][ I[i][j] ][k]
You probably want to use torch.gather for the indexing and expand to adjust I to the required size:
eI = I[..., None, None].expand(-1, -1, 1, X.size(3))  # [A, B, 1, D]: broadcast I along the last dimension
Y = torch.gather(X, dim=2, index=eI).squeeze(2)  # squeeze dim 2 explicitly so nothing else collapses if A or B is 1
Testing the code:
A = 3
B = 4
C = 5
D = 7
X = torch.rand(A, B, C, D)
I = torch.randint(0, C, (A, B), dtype=torch.long)
eI = I[..., None, None].expand(-1, -1, 1, X.size(3))
Y = torch.gather(X, dim=2, index=eI).squeeze(2)
# manually gather
refY = torch.empty(A, B, D)
for i in range(A):
    for j in range(B):
        refY[i, j, :] = X[i, j, I[i, j], :]
(refY == Y).all()
# Out[]: tensor(1, dtype=torch.uint8)
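For comparison, the same result can be obtained without gather via broadcasted advanced indexing (a sketch continuing the test above; ai and bi are hypothetical helper index tensors):
ai = torch.arange(A)[:, None]  # [A, 1]
bi = torch.arange(B)[None, :]  # [1, B]
Y2 = X[ai, bi, I]              # [A, B, D]; Y2[i, j] = X[i, j, I[i, j]]
(Y2 == Y).all()  # matches the gather result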

How to construct the following matrix elegantly in numpy?

Suppose I have a 5-dimensional array v and now I want a new array D fulfilling
D[a, b, n, m, d] = v[a, b, n, n, d]-v[a, b, m, m, d].
How do I elegantly do this in numpy?
How do you want to change the dimensionality? You can reshape it like this:
import numpy as np
a, b, n, d = 2, 3, 4, 5
v = np.zeros((a, b, n, n, d))
D = v.reshape((a, b, n*n, d))
I found that einsum can do this:
D = np.einsum('abiic->abic', v)[..., None, :] - np.einsum('abiic->abic', v)[:, :, None, ...]
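A quick self-contained check against the definition (a sketch with made-up sizes):
import numpy as np

A, B, n, d = 2, 3, 4, 5
v = np.random.rand(A, B, n, n, d)

diag = np.einsum('abiic->abic', v)              # the diagonal v[a, b, i, i, c], shape (A, B, n, d)
D = diag[..., None, :] - diag[:, :, None, ...]  # shape (A, B, n, n, d)

# spot-check one entry against D[a, b, n, m, d] = v[a, b, n, n, d] - v[a, b, m, m, d]
assert np.isclose(D[1, 2, 3, 0, 4], v[1, 2, 3, 3, 4] - v[1, 2, 0, 0, 4])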
