Vectors Concatenation - python

Suppose I have three vectors A, B, C
A vector size of 256
B vector size of 256
C vector size of 256
Now I want to do concatenation in the following way:
AB = vector of size 512
AC = vector of size 512
BC = vector of size 512
However, I need to restrict all the concatenated vectors to size 256, like:
AB = vector of size 256
AC = vector of size 256
BC = vector of size 256
One way is to take the element-wise mean of the two vectors: the mean of A's first value and B's first value, of A's second value and B's second value, and so on; similarly for the other pairs.
Here is how I implement this:
x  # torch.Size([32, 3, 256]): 32 is the batch size, 3 holds vectors A, B, C, and 256 is each vector's dimension

def my_fun(self, x):
    n = x.shape[0]  # number of vectors, here 3
    num_pairs = n * (n - 1) // 2  # 3 pairs for 3 vectors
    new_x = torch.zeros((num_pairs, x.shape[1]), dtype=torch.float32, device=torch.device('cuda'))
    counter = 0
    for i in range(n - 1):
        for j in range(i + 1, n):  # pair each vector with every later one
            mean = (x[i, :] + x[j, :]) / 2
            new_x[counter, :] = mean
            counter += 1
    final_T = torch.cat((x, new_x), dim=0)
    return final_T

ref = torch.zeros((x.shape[0], 6, x.shape[2]), dtype=torch.float32, device=torch.device('cuda'))  # 3 originals + 3 pair means
for i in range(x.shape[0]):
    ref[i, :, :] = self.my_fun(x[i, :, :])
But this implementation is computationally expensive; one reason is that I am iterating batch-wise. Is there a more efficient way to implement this task?

Torch has a built-in mean method, which can calculate means element-wise.
import torch
import numpy as np
import itertools as it

allvectors = torch.stack((a, b, c), dim=0)
values = it.combinations([0, 1, 2], 2)
for i, j in values:
    pairedvectors = torch.stack((allvectors[i], allvectors[j]), dim=0)
    mean = torch.mean(pairedvectors, dim=0)
    print(mean)
For three example vectors:
a = torch.from_numpy(np.zeros((5)))
b = torch.from_numpy(np.ones((5)))
c = torch.from_numpy(np.ones((5))*5)
It results in the following vectors:
tensor([0.5000, 0.5000, 0.5000, 0.5000, 0.5000], dtype=torch.float64)
tensor([2.5000, 2.5000, 2.5000, 2.5000, 2.5000], dtype=torch.float64)
tensor([3., 3., 3., 3., 3.], dtype=torch.float64)
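For the batched input in the original question, the same pairwise means can also be computed fully vectorized, with no Python loops at all. A minimal sketch, assuming x has shape (32, 3, 256) as above; torch.combinations builds the index pairs:
import torch

x = torch.randn(32, 3, 256)  # batch of 32; vectors A, B, C of dimension 256

# all index pairs (0,1), (0,2), (1,2), i.e. (A,B), (A,C), (B,C)
idx = torch.combinations(torch.arange(x.shape[1]), r=2)  # shape (3, 2)

# advanced indexing gathers both members of each pair: shape (32, 3, 2, 256);
# averaging over the pair dimension gives the element-wise means
pair_means = x[:, idx, :].mean(dim=2)  # (32, 3, 256)

final = torch.cat((x, pair_means), dim=1)  # originals plus pair means: (32, 6, 256)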

Related

How to understand the 1D tensor's direction in Pytorch?

I have three tensors a, b, c
a = torch.tensor([1,2])
b = torch.tensor([3,4])
c = b.view(2,1)
Now if I do a @ b == a @ c, it returns tensor([True]),
but when I check b.shape and c.shape, they are different.
My question is: what is the direction of a 1D tensor, is it vertical or horizontal?
Whether it is vertical or horizontal, a @ b should not work without transposing b.
How should I understand a 1D tensor's direction in PyTorch? Is shape (3) the same as shape (3,1)?
Or could shape (3) be either shape (3,1) or shape (1,3)?
In your example, a and b are 1D tensors, which means they are vectors. These vectors live in a two-dimensional space; vector a has x=1 and y=2.
a @ b is the dot product of vectors a and b:
a @ b = a[0]*b[0] + a[1]*b[1] = 1*3 + 2*4 = 11
where 11 is a scalar. But c is a matrix, and the product is:
a @ c = a[0]*c[0] + a[1]*c[1] = 1*[3] + 2*[4] = [11]
If you compare these results in torch:
a @ b == a @ c is equal to torch.tensor(11) == torch.tensor([11]), and the result is tensor([True]).
a and b are both vectors (1st-order tensors) of size 2. Vector multiplication is possible as long as both have the same size.
a = torch.tensor([1,2])
b = torch.tensor([3,4])
print(a.shape)
print(b.shape)
# Output:
# torch.Size([2])
# torch.Size([2])
c is b reshaped into a matrix (2nd-order tensor) of shape (2,1).
A dot product is possible between a vector and a matrix with compatible sizes.
c = b.view(2,1)
print(a.shape)  # torch.Size([2])
print(c.shape)  # torch.Size([2, 1])
print(a @ c)    # possible
print(c @ a)    # not possible, will throw a size-mismatch error
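To make the shape behaviour concrete, here is a small sketch showing why the comparison in the question still succeeds: a @ b is a 0-dimensional tensor, a @ c is a 1-dimensional tensor, and the equality comparison broadcasts between them.
import torch

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])
c = b.view(2, 1)

print((a @ b).shape)   # torch.Size([])  - a 0-dim scalar tensor
print((a @ c).shape)   # torch.Size([1]) - a 1-dim tensor with one element
print(a @ b == a @ c)  # tensor([True]) - the 0-dim tensor broadcasts against the 1-dim one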

How can you specify a different matrix permutation for each element in a batch?

I have a constant symmetric matrix A with shape (50,50) and inputs x with shape (batch_size, 50) where each entry is an integer in [0,49] - these correspond to indexes in A.
I wish to create a new tensor with shape (batch_size, 50, 50) where each element in the batch is the matrix A permuted according to the ordering given in the input x. Each input has a different ordering of the integers from 0 to 49.
The only way I've thought to do this does not work, and I fear it would be inefficient even if it didn't give an error:
#Given x and A
# Given x and A
x = np.zeros((b, 50), dtype=int)
for i in range(b):
    x[i, :] = np.random.permutation(50)
rand_mat = np.random.rand(50, 50)
A = np.matmul(rand_mat, np.transpose(rand_mat))  # a random symmetric matrix
# do permutation
batch_size = x.shape[0]  # infer batch size from inputs
permuted_matrices = np.zeros((batch_size, 50, 50))
for i in range(batch_size):
    permuted_matrices[i, :, :] = A[:, x[i, :]][x[i, :], :]  # permute both rows and columns according to x[i,:]
But when I call my layer, I get the error TypeError: 'Tensor' object cannot be interpreted as an integer (because of the for loop). If I use tf.shape(x)[0] instead of x.shape[0], I get TypeError: Expected int32, got None of type 'NoneType' instead (because of np.zeros). Is there a TensorFlow function that would make this easier?
Use gather() and gather_nd():
r = 50 # use 10 to check
batch_size = 10
x = tf.random.uniform((batch_size, r), 0, r, tf.int32)
A = tf.range(r * r)
A = tf.reshape(A, (r, r))
ind = x[..., tf.newaxis]
output = tf.gather_nd(A, ind) # permute rows
output = tf.transpose(output, (0, 2, 1))
output = tf.gather(output, x, axis=1, batch_dims=1) # permute columns
output = tf.transpose(output, (0, 2, 1))
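As a sanity check, the gathered output can be compared against the loop from the question (a hedged verification sketch, assuming TensorFlow 2.x eager execution; output, A, x, and batch_size are the values defined above):
import numpy as np

A_np, x_np, out_np = A.numpy(), x.numpy(), output.numpy()
for i in range(batch_size):
    expected = A_np[x_np[i], :][:, x_np[i]]  # permute rows, then columns
    assert np.array_equal(out_np[i], expected)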

How to divide a 2D matrix into patches and multiply each patch by its center element?

I need to divide a 2D matrix into a set of 2D patches with a certain stride, then multiply every patch by its center element and sum the elements of each patch.
It feels not unlike a convolution where a separate kernel is used for every element of the matrix.
Below is a visual illustration showing how each element of the result matrix is calculated and what the result should look like (the original figures are not reproduced here).
Here's a solution I came up with:
window_shape = (2, 2)
stride = 1
# Matrix
m = np.arange(1, 17).reshape((4, 4))
# Pad it once per axis to make sure the number of views
# equals the number of elements
m_padded = np.pad(m, (0, 1))
# This function divides the array into `windows`, from:
# https://stackoverflow.com/questions/45960192/using-numpy-as-strided-function-to-create-patches-tiles-rolling-or-sliding-w#45960193
w = window_nd(m_padded, window_shape, stride)
ww, wh, *_ = w.shape
w = w.reshape((ww * wh, 4))  # the first two dimensions multiplied give the number of rows
# Tile each center element for element-wise multiplication
m_tiled = np.tile(m.ravel(), (4, 1)).transpose()
result = (w * m_tiled).sum(axis=1).reshape(m.shape)
In my view it's not very efficient, as several arrays are allocated in the intermediate steps.
What is a better or more efficient way to accomplish this?
Try scipy.signal.convolve
from scipy.signal import convolve
window_shape = (2, 2)
stride = 1
# Matrix
m = np.arange(1, 17).reshape((4, 4))
# Pad it once per axis to make sure the number of views
# equals the number of elements
m_padded = np.pad(m, (0, 1))
output = convolve(m_padded, np.ones(window_shape), 'valid') * m
print(output)
Output:
array([[ 14.,  36.,  66.,  48.],
       [150., 204., 266., 160.],
       [414., 500., 594., 336.],
       [351., 406., 465., 256.]])
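This works because convolving with an all-ones 2x2 kernel computes the sum of every 2x2 window of the padded matrix, and the element-wise multiplication by m then scales each window sum by its corresponding center element. For example, the top-left entry is (1 + 2 + 5 + 6) * 1 = 14.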

How to create a convolve function for two matrices?

I want to create a convolver function, without using NumPy's convolve function, that takes 4 arguments:
convolver_1(j, matrices_list, filter_matrix, stride_size)
The function should return the output of a convolution with filter_matrix, with a stride of size stride_size, over the matrix at index j of matrices_list; filter_matrix has lower dimensions than the matrices in matrices_list.
Let's go through a simple implementation of np.convolve whose documentation can be found here.
import numpy as np

def convolve_1d(a, filter):
    N, M = len(a), len(filter)
    assert N >= M  # assumption in the question
    # in the full mode (default of np.convolve), the result length is N+M-1;
    # therefore, the pad should be M-1 on each side:
    # N-M+2p+1 == N+M-1 => p = M-1
    result_length = N + M - 1
    result = np.zeros(result_length)  # allocate memory for the result
    p = M - 1
    padded_a = np.pad(a, p)
    flipped_filter = np.flip(filter)
    for i in range(result_length):
        result[i] = np.sum(np.multiply(flipped_filter, padded_a[i:i+M]))
    return result

a = np.array([1, 2, 3, 4])
filter = np.array([1, -1, 3])
convolve_1d(a, filter)
results in
array([ 1., 1., 4., 7., 5., 12.])
which is the same as the result for np.convolve(a, filter).
So, it basically pads the input array with zeros, flips the filter and sums the element-wise multiplication of two arrays.
I am not sure about the index that you mentioned; the result is a 1d array and you can index its elements.
To add stride to this function, we need to modify the result_length and multiply the stride to the iterator:
def convolve_1d_strided(a, filter, stride):
    N, M = len(a), len(filter)
    assert N >= M  # assumption in the question
    # the full-mode output has N+M-1 samples; with a stride we keep every
    # stride-th one, i.e. ceil((N+M-1)/stride) samples
    result_length = (N + M - 2) // stride + 1
    result = np.zeros(result_length)  # allocate memory for the result
    p = M - 1
    padded_a = np.pad(a, p)
    flipped_filter = np.flip(filter)
    for i in range(result_length):
        result[i] = np.sum(np.multiply(flipped_filter, padded_a[i*stride:i*stride+M]))
    return result

a = np.array([1, 2, 3, 4])
filter = np.array([1, -1, 3])
convolve_1d_strided(a, filter, 2)
a = np.array([1,2,3,4])
filter = np.array([1,-1,3])
convolve_1d_strided(a, filter, 2)
array([1., 4., 5.])
Hope it helps; if that is what you were looking for, I am happy to expand it to two dimensions.
For 1D arrays:
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv1d(A, kernel, stride, reverse_kernel=True, mode='full'):
    if reverse_kernel:
        kernel = kernel[::-1]
    if mode == 'full':
        A = np.pad(A, kernel.shape[0] - 1)
    # else: convolution in 'valid' mode
    # Sliding-window view of A
    output_size = (A.shape[0] - kernel.shape[0]) // stride + 1
    A_w = as_strided(A, shape=(output_size, kernel.shape[0]),
                     strides=(stride * A.strides[0], A.strides[0]))
    # Return the convolution of A with the kernel
    return np.sum(A_w * kernel, axis=1)
Here A = matrices_list[j]. Note that in deep learning, the filters in a convolution are not reversed (strictly speaking, that operation is cross-correlation).
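For the two-dimensional case the question actually asks about, here is a minimal sketch of the requested signature, assuming 'valid' mode, no kernel flipping (as noted above for deep-learning-style convolutions), and that filter_matrix fits inside the selected matrix:
import numpy as np

def convolver_1(j, matrices_list, filter_matrix, stride_size):
    A = np.asarray(matrices_list[j])
    K = np.asarray(filter_matrix)
    out_h = (A.shape[0] - K.shape[0]) // stride_size + 1
    out_w = (A.shape[1] - K.shape[1]) // stride_size + 1
    result = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            top, left = r * stride_size, c * stride_size
            # sum of the element-wise product of the current patch and the filter
            result[r, c] = np.sum(A[top:top + K.shape[0], left:left + K.shape[1]] * K)
    return result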

Sum all diagonals in feature maps in parallel in PyTorch

Let's say I have a tensor of shape (1, 64, 128, 128) and I want to create a tensor of shape (1, 64, 255) holding the sums of all diagonals of every (128, 128) matrix (there is 1 main diagonal, 127 below and 127 above, so 255 in total). What I am currently doing is the following:
x = torch.rand(1, 64, 128, 128)
diag_sums = torch.zeros(1, 64, 255)
j = 0
for k in range(-127, 128):
    diag_sums[j, :, k + 127] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2)
This is obviously very slow, since it is using Python loops and is not done in parallel with respect to k.
I don't think this can be done using torch.diagonal since the function explicitly uses a single int for the offset parameter. If I could pass a list there, this would work, but I guess it would be complicated to implement (requiring changes in PyTorch itself).
I think it could be possible to implement this using torch.einsum, but I cannot think of a way to do it.
So this is my question: how do I get the tensor described above?
Have you considered using torch.nn.functional.conv2d?
You can sum the diagonals with a diagonal filter sliding across the tensor with appropriate zero padding.
import torch
import torch.nn.functional as nnf
# construct a diagonal filter using `eye` function, shape it appropriately
f = torch.eye(x.shape[2])[None, None,...].repeat(x.shape[1], 1, 1, 1)
# compute the diagonal sum with appropriate zero padding
conv_diag_sums = nnf.conv2d(x, f, padding=(x.shape[2]-1,0), groups=x.shape[1])[..., 0]
Note that the result has a slightly different order than the one you computed in the loop:
diag_sums = torch.zeros(1, 64, 255)
for k in range(-127, 128):
    diag_sums[j, :, 127-k] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2)
# compare
(conv_diag_sums == diag_sums).all()
results in True - they are the same.
Shai's answer works; however, it seems to involve a lot of multiplications due to the large size of the kernel. I figured out a way to do this for my use case. It is based on this answer to a similar question in Numpy: https://stackoverflow.com/a/35074207/6636290
I am doing the following:
digitized = np.sum(np.indices(a.shape), axis=0).ravel()  # i + j for every element, so entries on the same diagonal share an index
digitized_tensor = torch.Tensor(digitized).int()
a_tensor = torch.Tensor(a)
torch.bincount(digitized_tensor, a_tensor.view(-1))  # sums the elements that share a diagonal index
If I could figure out a way to do this entirely in PyTorch (without Numpy's indices function), this would be great, but this answers the question.
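One possible way to drop the NumPy dependency is to build the diagonal indices directly in PyTorch with arange broadcasting instead of np.indices (a hedged sketch for a single 2D tensor a_tensor, as defined above):
import torch

n, m = a_tensor.shape
# i + j for every element: entries on the same diagonal share an index
digitized_tensor = (torch.arange(n)[:, None] + torch.arange(m)[None, :]).reshape(-1)
sums = torch.bincount(digitized_tensor, a_tensor.reshape(-1))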
The previous answers work, but there is another faster solution using strides (and that only uses Pytorch).
First I'll explain with a matrix as it is easier to understand.
Given a matrix M of size (n, n), you can change the matrix strides so that the resulting matrix has M's diagonals as columns. Then you can just sum the columns to get your result.
import torch

def sum_all_diagonal_matrix(mat: torch.Tensor):
    n, _ = mat.shape
    zero_mat = torch.zeros((n, n))  # zero matrix used for padding
    mat_padded = torch.cat((zero_mat, mat, zero_mat), 1)  # pad the matrix on the left and right
    print(mat_padded)
    mat_strided = mat_padded.as_strided((n, 2 * n), (3 * n + 1, 1))  # change the strides
    print(mat_strided)
    sum_diags = torch.sum(mat_strided, 0)  # sum the resulting matrix's columns
    return sum_diags[1:]  # the first column sums only padding zeros
X = torch.arange(9).reshape(3,3)
print(X)
# tensor([[0, 1, 2],
# [3, 4, 5],
# [6, 7, 8]])
print(sum_all_diagonal_matrix(X))
# tensor([ 6., 10., 12., 6., 2.])
You can do exactly the same with one more dimension:
def sum_all_diagonal(mat: torch.Tensor):
    k, n, _ = mat.shape
    zero_mat = torch.zeros((k, n, n))
    mat_padded = torch.cat((zero_mat, mat, zero_mat), 2)
    mat_strided = mat_padded.as_strided((k, n, 2 * n), (3 * n * n, 3 * n + 1, 1))
    sum_diags = torch.sum(mat_strided, 1)
    return sum_diags[:, 1:]  # as in the 2D case, drop the all-zero first column to keep all 2n-1 diagonal sums
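For the tensor from the question, the channel dimension can play the role of the leading batch dimension k; a minimal usage sketch:
x = torch.rand(1, 64, 128, 128)
diag_sums = sum_all_diagonal(x[0]).unsqueeze(0)  # (64, 128, 128) -> (64, 255) -> (1, 64, 255)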
