I have a tensor input of dimensions (B, C, H, W) and I would like to compute a correlation matrix of the input. The code I am using is:
import torch

def corr(x):
    """
    x: [B, C, H, W]
    """
    # [B, C, H, W] -> [B, C, H * W]
    x = x.view((x.size(0), x.size(1), -1))
    # estimated covariance
    x = x - x.mean(dim=-1, keepdim=True)
    factor = 1 / (x.shape[-1] - 1)
    cov = factor * (x @ x.transpose(-1, -2))
    return torch.div(cov, torch.diagonal(cov, dim1=-2, dim2=-1))
I double-checked and I am getting good results for the cov variable inside the function, but when I try to normalize it to get the correlation, the result's range is very strange: there are values above 1 and below -1, so overall the solution does not seem to be right.
Any suggestions on how to solve the problem?
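For reference, a minimal sketch of the standard normalization (my assumption about the intended fix, not the poster's code): each covariance entry should be divided by the product of the two corresponding standard deviations, corr[i, j] = cov[i, j] / sqrt(cov[i, i] * cov[j, j]), rather than by the diagonal alone. The helper name corr_from_cov below is purely illustrative:

import torch

def corr_from_cov(cov):
    # cov: [B, C, C]; corr_ij = cov_ij / sqrt(cov_ii * cov_jj), so values land in [-1, 1]
    std = torch.sqrt(torch.diagonal(cov, dim1=-2, dim2=-1))      # [B, C]
    return cov / (std.unsqueeze(-1) * std.unsqueeze(-2))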
I have a batch of depth images, shape -> [B, 1, H, W]. For each pixel in each image of the batch I need to perform:
X = d * Kinverse @ [u, v, 1]  # therefore X is in R^3
where d is a float tensor in [0, 1] representing the depth at pixel (u, v); Kinverse is a constant 3x3 matrix and u, v refer to the pixel column and row respectively.
Is there some way I can vectorize the operation to obtain X(u+1, v), X(u, v) and X(u, v+1) for all the images in the batch?
I eventually need to take this cross product:
{X(u+1,v) - X(u,v)} x {X(u, v+1) - X(u,v)}
Thanks for the help!
You can use torch.meshgrid to produce the u and v tensors. Once you have them, you can use torch.einsum to do the batched matrix multiplication with Kinverse.
Finally, you can use torch.cross to compute the cross product:
u, v = torch.meshgrid(*[torch.arange(s_, dtype=d.dtype, device=d.device) for s_ in d.shape[2:]])
# make a single 1x3xHxW tensor holding [u, v, 1] per pixel:
uv = torch.cat((u[None, None, ...], v[None, None, ...], torch.ones_like(u)[None, None, ...]), dim=1)
# compute X
X = d * torch.einsum('ij,bjhw->bihw', Kinverse, uv)
# the cross product
out = torch.cross(X[..., 1:, :-1] - X[..., :-1, :-1], X[..., :-1, 1:] - X[..., :-1, :-1], dim=1)
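For anyone who wants to run it end-to-end, a minimal self-contained driver (the shapes, the identity Kinverse, and the explicit indexing='ij' argument are my own assumptions; indexing= requires a reasonably recent PyTorch):

import torch

B, H, W = 2, 4, 5
d = torch.rand(B, 1, H, W)                 # dummy depth maps in [0, 1]
Kinverse = torch.eye(3, dtype=d.dtype)     # placeholder intrinsics inverse

u, v = torch.meshgrid(torch.arange(H, dtype=d.dtype),
                      torch.arange(W, dtype=d.dtype), indexing='ij')
uv = torch.stack((u, v, torch.ones_like(u)))[None]       # (1, 3, H, W)
X = d * torch.einsum('ij,bjhw->bihw', Kinverse, uv)      # (B, 3, H, W)
out = torch.cross(X[..., 1:, :-1] - X[..., :-1, :-1],
                  X[..., :-1, 1:] - X[..., :-1, :-1], dim=1)
print(X.shape, out.shape)   # torch.Size([2, 3, 4, 5]) torch.Size([2, 3, 3, 4])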
I've spent a few hours googling this problem and it seems I can't find any information.
I tried coding a multivariate Gaussian pdf as:
import numpy as np

def multivariate_normal(X, M, S):
    # X has shape (D, N) where D is the number of dimensions and N the number of observations
    # M is the mean vector with shape (D, 1)
    # S is the covariance matrix with shape (D, D)
    D = S.shape[0]
    S_inv = np.linalg.inv(S)
    logdet = np.log(np.linalg.det(S))
    log2pi = np.log(2 * np.pi)
    devs = X - M
    a = np.array([-D / 2 * log2pi - (1 / 2) * logdet - (1 / 2) * dev.T @ S_inv @ dev for dev in devs.T])
    return np.exp(a)
I've only been successful in computing the pdf through a for loop, iterating N times. If I don't, I end up with an (N, N) matrix, which is unhelpful. I've found another post here, but the post is quite outdated and in MATLAB.
Is there any way to take advantage of numpy's vectorisation?
This is my first post on stackoverflow, let me know if anything is off!
I came across this problem in a similar manner and here's how I solved it:
Variables:
X = numpy.ndarray[numpy.ndarray[float]] - m x n
MU = numpy.ndarray[numpy.ndarray[float]] - k x n
SIGMA = numpy.ndarray[numpy.ndarray[numpy.ndarray[float]]] - k x n x n
k = int
Where X is my feature vector, MU is my means, SIGMA is my covariance matrix.
To vectorize, I rewrote the dot product per the definition of the dot-product:
import numpy as np

# for one component: mu has shape (n,), sigma has shape (n, n), X has shape (m, n)
sigma_det = np.linalg.det(sigma)
sigma_inv = np.linalg.inv(sigma)
const = 1 / ((2 * np.pi)**(n / 2) * sigma_det**(1 / 2))
p = const * np.exp((-1 / 2) * np.sum((X - mu).dot(sigma_inv) * (X - mu), axis=1))
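Packaged into a function over all k components (my own wrapping of the snippet above, using the variable names from the list; the function name is just illustrative):

import numpy as np

def pdf_all_components(X, MU, SIGMA):
    """X: (m, n), MU: (k, n), SIGMA: (k, n, n) -> probabilities of shape (k, m)."""
    m, n = X.shape
    k = MU.shape[0]
    P = np.empty((k, m))
    for j in range(k):
        mu, sigma = MU[j], SIGMA[j]
        sigma_det = np.linalg.det(sigma)
        sigma_inv = np.linalg.inv(sigma)
        const = 1 / ((2 * np.pi)**(n / 2) * sigma_det**(1 / 2))
        P[j] = const * np.exp((-1 / 2) * np.sum((X - mu).dot(sigma_inv) * (X - mu), axis=1))
    return P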
I have been working on this problem for the last few days and finally have come to a solution.
To do so I have added an extra dimension to the x vector, and then used the np.einsum() function for computing the squared Mahalanobis distance.
Example
For the following example we will use a (100 x 2) input array. That is, 100 samples of two random variables. That gives us a (1 x 2) mean vector and a (2 x 2) covariance matrix.
Generating some data:
import numpy as np

# instantiate a random number generator
rng = np.random.default_rng(100)

# define mu and sigma for the dummy sample
mu = np.array([0.5, 0.25])
covmat = np.array([[1, 0.5],
                   [0.5, 1]])

# generate multivariate normal random sample
x = rng.multivariate_normal(mu, covmat, size=100)
And defining the pdf function:
def pdf(x, mu, covmat):
    """
    Generates the probability of a given x vector based on the
    probability distribution function N(mu, covmat)
    Returns: the probability
    """
    x = x[:, np.newaxis]                 # add an axis so x has shape (N, 1, k)
    k = mu.shape[0]                      # number of dimensions
    diff = x - mu                        # deviation of x from the mean
    inv_covmat = np.linalg.inv(covmat)
    term1 = (2 * np.pi)**(-k / 2) * np.linalg.det(inv_covmat)**0.5
    term2 = np.exp(-np.einsum('ijk, kl, ijl->ij', diff, inv_covmat, diff) / 2)
    return term1 * term2
This returns an (n, 1) array, where n is the number of samples; in this case, the shape is (100, 1).
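A quick usage check with the sample generated above (my addition, just to show the shapes):

p = pdf(x, mu, covmat)
print(p.shape)   # (100, 1)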
Explanation
The easiest way to think about solving the problem is just writing down the dimensions, and trying to do the linear algebra.
We need to do some kind of manipulation of three tensors with the following shapes, to get the resulting tensor:
A, B, C -> D
(100 x 1 x 2), (2 x 2), (100 x 1 x 2) -> (100 x 1)
Let the first tensor, A, have the indices ijk.
Then we want some operation on A and B that gives us the shape (100 x 1 x 2).
Hence,
ijk, kl -> ijl
(100 x 1 x 2), (2 x 2) -> (100 x 1 x 2)
This leaves us with AB, C:
(100 x 1 x 2), (100 x 1 x 2)
We want D to have the shape (100 x 1)
Hence:
ijl, ijl->ij
(100 x 1 x 2), (100 x 1 x 2) -> (100 x 1)
Putting the two operations together, we get:
ijk, kl, ijl->ij
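As a sanity check (my own addition, not part of the original answer), the einsum contraction can be compared against an explicit per-sample loop:

import numpy as np

rng = np.random.default_rng(0)
diff = rng.normal(size=(100, 1, 2))
inv_covmat = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))

maha_einsum = np.einsum('ijk, kl, ijl->ij', diff, inv_covmat, diff)     # (100, 1)
maha_loop = np.array([[d[0] @ inv_covmat @ d[0]] for d in diff])        # (100, 1)
print(np.allclose(maha_einsum, maha_loop))   # True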
Firstly, I assume people are familiar with Python numpy.tensordot. Here I use a simple instance of that as follows (pseudocode):
A.shape = (1, x, y)
B.shape = (x, y, z, t)
C = numpy.tensordot(A, B)
C.shape = (1, z, t)
Now imagine A and C above are greyscale images (1 channel), and there is an image transformation that turns A into C. To be specific, assume people are familiar with OpenCV in Python and the functions cv2.warpAffine and cv2.warpPerspective; let's say (pseudocode):
C = cv2.warpSomething(A, **kwargs)
My question is: assuming the above equations hold, how can B be computed (efficiently enough) from the variables (pseudocode):
x, y, z, t, the_transformation (i.e. warpAffine or warpPerspective, M, flags, borderMode, borderValue)
I'm also satisfied if one can produce B from only (warp, x, y, z, t, M), fixing flags=INTER_LINEAR, borderMode=BORDER_CONSTANT and borderValue=0.
Thanks in advance!
If there are N pixels in both A and C, then the transformation tensor B has N**2 components. For N on the order of 1E+6, you really don't want to store the B tensor. If it's a very small dataset, you could try something like this:
import numpy as np

# assuming C and A are already initialized
B = np.zeros(A.shape + C.shape)
A1 = np.zeros_like(A)
for i in range(A1.shape[0]):
    for j in range(A1.shape[1]):
        A1[i, j] = 1
        B[i, j, :, :] = affine_something(A1)   # record the warp's response to a unit impulse at (i, j)
        A1[i, j] = 0
But this is still very slow and inefficient.
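For concreteness, here is a small self-contained sketch of the impulse-response idea above using cv2.warpAffine (the tiny image size and the matrix M are my own assumptions; the same approach applies to warpPerspective):

import cv2
import numpy as np

x, y = 8, 8                      # tiny source image, so the (x*y) x (z*t) tensor B stays small
z, t = 8, 8                      # output size (rows, cols)
M = np.float32([[1, 0, 1.5],     # example affine matrix: shift by (1.5, 0.5) pixels
                [0, 1, 0.5]])

B = np.zeros((x, y, z, t), dtype=np.float32)
A1 = np.zeros((x, y), dtype=np.float32)
for i in range(x):
    for j in range(y):
        A1[i, j] = 1.0
        B[i, j] = cv2.warpAffine(A1, M, (t, z), flags=cv2.INTER_LINEAR,
                                 borderMode=cv2.BORDER_CONSTANT, borderValue=0)
        A1[i, j] = 0.0

# since the warp is linear in pixel intensities, applying B via tensordot
# should reproduce warpAffine applied to an arbitrary image
A = np.random.rand(x, y).astype(np.float32)
C_tensor = np.tensordot(A, B, axes=2)
C_direct = cv2.warpAffine(A, M, (t, z), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT, borderValue=0)
print(np.allclose(C_tensor, C_direct, atol=1e-4))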
I am trying to implement PCA without any library for image dimension reduction. I tried the code in the O'Reilly Computer Vision book and implemented it on a sample Lenna picture:
from PIL import Image
import numpy as np
from skimage import color, io
import matplotlib.pyplot as plt

def pca(X):
    num_data, dim = X.shape

    mean_X = X.mean(axis=0)
    X = X - mean_X

    if dim > num_data:
        # PCA compact trick
        M = np.dot(X, X.T)           # covariance matrix
        e, U = np.linalg.eigh(M)     # calculate eigenvalues and eigenvectors
        tmp = np.dot(X.T, U).T
        V = tmp[::-1]                # reverse since the last eigenvectors are the ones we want
        S = np.sqrt(e)[::-1]         # reverse since the last eigenvalues are in increasing order
        for i in range(V.shape[1]):
            V[:, i] /= S
    else:
        # normal PCA, SVD method
        U, S, V = np.linalg.svd(X)
        V = V[:num_data]             # only makes sense to return the first num_data
    return V, S, mean_X

img = color.rgb2gray(io.imread(r'D:\lenna.png'))
x, y, z = pca(img)
plt.imshow(x)
but the image plot of the PCA output doesn't look like the original image at all.
As far as I know, PCA reduces the image dimension, but the result should still somehow resemble the original image, just in lower detail. What's wrong with the code?
Well, nothing is wrong per se in your code, but you're not displaying the right thing if I do understand what you actually want to do!
What I would write for your problem is the following:
def pca(X, number_of_pcs):
    num_data, dim = X.shape

    mean_X = X.mean(axis=0)
    X = X - mean_X

    if dim > num_data:
        # PCA compact trick
        M = np.dot(X, X.T)           # covariance matrix
        e, U = np.linalg.eigh(M)     # calculate eigenvalues and eigenvectors
        tmp = np.dot(X.T, U).T
        V = tmp[::-1]                # reverse since the last eigenvectors are the ones we want
        S = np.sqrt(e)[::-1]         # reverse since the last eigenvalues are in increasing order
        for i in range(V.shape[1]):
            V[:, i] /= S
        return V, S, mean_X

    else:
        # normal PCA, SVD method
        U, S, V = np.linalg.svd(X, full_matrices=False)

        # reconstruct the image using U, S and V
        # otherwise you're just outputting the eigenvectors of X*X^T
        V = V.T
        S = np.diag(S)
        X_hat = np.dot(U[:, :number_of_pcs], np.dot(S[:number_of_pcs, :number_of_pcs], V[:, :number_of_pcs].T))

        return X_hat, S, mean_X
The change here lies in the fact that we want to reconstruct the image using a given number of eigenvectors (determined by number_of_pcs).
The thing to remember is that in np.linalg.svd, the columns of U are the eigenvectors of X.X^T.
When doing that, we obtain the following results (displayed here using 1 and 10 principal components):
X_hat, S, mean_X = pca(img, 1)
plt.imshow(X_hat)
X_hat, S, mean_X = pca(img, 10)
plt.imshow(X_hat)
PS: note that the pictures aren't displayed in grayscale because of matplotlib.pyplot's default colormap, but this is a very minor issue here.
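If grayscale display is wanted, passing an explicit colormap fixes it (a one-line tweak, not part of the original answer):

plt.imshow(X_hat, cmap='gray')
plt.show()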
I am trying to implement local-p attention based on this paper: https://arxiv.org/pdf/1508.04025.pdf. Specifically, equation (9) derives an alignment position by taking the sigmoid of some non-linear functions and then multiplying the result by the number of timesteps. As sigmoid returns values between 0 and 1, this multiplication yields a valid index between 0 and the number of timesteps. I can soft-round this to infer the predicted position; however, I couldn't find a way to convert it to an integer for use within slicing/indexing operations, since tf.cast() is not differentiable. Another problem is that the derived positions have shape (B, 1), i.e. one aligned position for each example in the batch. See below to understand these operations:
"""B = batch size, S = sequence length (num. timesteps), V = vocabulary size, H = number of hidden dimensions"""
class LocalAttention(Layer):
def __init__(self, size, window_width=None, **kwargs):
super(LocalAttention, self).__init__(**kwargs)
self.size = size
self.window_width = window_width # 2*D
def build(self, input_shape):
self.W_p = Dense(units=input_shape[2], use_bias=False)
self.W_p.build(input_shape=(None, None, input_shape[2])) # (B, 1, H)
self._trainable_weights += self.W_p.trainable_weights
self.v_p = Dense(units=1, use_bias=False)
self.v_p.build(input_shape=(None, None, input_shape[2])) # (B, 1, H)
self._trainable_weights += self.v_p.trainable_weights
super(Attention, self).build(input_shape)
def call(self, inputs):
sequence_length = inputs.shape[1]
## Get h_t, the current (target) hidden state ##
target_hidden_state = Lambda(function=lambda x: x[:, -1, :])(inputs) # (B, H)
## Get h_s, source hidden states ##
aligned_position = self.W_p(target_hidden_state) # (B, H)
aligned_position = Activation('tanh')(aligned_position) # (B, H)
aligned_position = self.v_p(aligned_position) # (B, 1)
aligned_position = Activation('sigmoid')(aligned_position) # (B, 1)
aligned_position = aligned_position * sequence_length # (B, 1)
Let's say the aligned_position tensor has elements [24.2, 15.1, 12.3] for a batch size = B = 3 for simplification. Then, the source hidden states are derived from input hidden states (B=3, S, H) such that for the first example we take timesteps starting from 24, hence something along the lines of first_batch_states = Lambda(function=lambda x: x[:, 24:, :])(inputs) and so on. Note that the implementation of local-p attention is more complicated than this, but I simplified it here. Hence, the main challenge is converting 24.2 to 24 without losing differentiability, or using some sort of a mask operation to get the indexes through dot product. The mask operation is preferred, as we will have to do this for each example in batch, and having a loop inside a custom Keras layer is not neat. Do you have any ideas on how to accomplish this task? I will appreciate any answers and comments!
There are two ways I found to go about solving this problem.
Applying a Gaussian distribution based on the aligned position shown in the original question to the attention weights, making the process differentiable, as @Siddhant suggested:
gaussian_estimation = lambda s: tf.exp(-tf.square(s - aligned_position) /
                                       (2 * tf.square(self.window_width / 2)))
gaussian_factor = gaussian_estimation(0)
for i in range(1, sequence_length):
    gaussian_factor = Concatenate()([gaussian_factor, gaussian_estimation(i)])
# Adjust weights via gaussian_factor: (B, S*) to allow differentiability
attention_weights = attention_weights * gaussian_factor  # (B, S*)
It should be noted that there is no hard slicing operation involved here, only simple adjusting according to distance.
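For reference, and as far as I recall the Luong et al. paper, this is exactly its local-p weighting: each alignment weight is scaled by a Gaussian centred on the predicted position, a_t(s) = align(h_t, h_s) * exp(-(s - p_t)^2 / (2*sigma^2)) with sigma = D/2, which is what the gaussian_factor above applies.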
Keeping the top n values and zeroing out the rest, as suggested by @Vlad in How to implement a custom keras layer that only keeps the top n values and zeros out all the rest?:
aligned_position = self.W_p(inputs)                           # (B, S, H)
aligned_position = Activation('tanh')(aligned_position)       # (B, S, H)
aligned_position = self.v_p(aligned_position)                 # (B, S, 1)
aligned_position = Activation('sigmoid')(aligned_position)    # (B, S, 1)

## Only keep top D values out of the sigmoid activation, and zero-out the rest ##
aligned_position = tf.squeeze(aligned_position, axis=-1)      # (B, S)
top_probabilities = tf.nn.top_k(input=aligned_position,
                                k=self.window_width,
                                sorted=False)                  # (values:(B, D), indices:(B, D))
onehot_vector = tf.one_hot(indices=top_probabilities.indices,
                           depth=sequence_length)              # (B, D, S)
onehot_vector = tf.reduce_sum(onehot_vector, axis=1)           # (B, S)
aligned_position = Multiply()([aligned_position, onehot_vector])   # (B, S)
aligned_position = tf.expand_dims(aligned_position, axis=-1)   # (B, S, 1)
source_hidden_states = Multiply()([inputs, aligned_position])  # (B, S*=S(D), H)

## Scale back to approximately original hidden state values ##
aligned_position += 1                                           # (B, S, 1)
source_hidden_states /= aligned_position                        # (B, S*=S(D), H)
It should be noted that here we are instead applying the dense layers to all hidden source states to get a shape of (B,S,1) instead of (B,1) for aligned_position. I believe this is as close as we can get to what the paper suggests.
Anybody who is trying to implement attention mechanisms can check my repo https://github.com/uzaymacar/attention-mechanisms. Layers here are designed for many-to-one sequence tasks, but can be adapted to other forms with minor tweaks.