Laplacian positional encoding in a pytorch batch - python

I am trying to recode the Laplacian positional encoding for a graph model in PyTorch. A valid encoding in numpy can be found at https://docs.dgl.ai/en/0.9.x/_modules/dgl/transforms/functional.html#laplacian_pe .
I think I have managed to write a PyTorch equivalent of the numpy encoding, but for performance reasons I would like the function to work on batches of data.
That is, the following function works with parameters of the form adj[N, N], degrees[N, N] and topk as an integer, where N is the number of nodes in the network.
def _laplacian_positional_encoding_th(self, adj, degrees, topk):
    number_of_nodes = adj.shape[-1]
    # degrees = th.clip(degrees, 0, 1)  # not multigraph
    assert topk < number_of_nodes
    # Laplacian: L = I - D^-1/2 A D^-1/2
    D = th.diag(degrees ** -0.5)
    B = D @ adj @ D
    L = th.eye(number_of_nodes).to(B.device) - B
    # Eigenvectors
    EigVal, EigVec = th.linalg.eig(L)
    idx = th.argsort(th.real(EigVal))  # increasing order
    EigVal, EigVec = th.real(EigVal[idx]), th.real(EigVec[:, idx])
    # Only select [1, topk+1] eigenvectors as L is symmetric (spectral decomposition)
    out = EigVec[:, 1:topk + 1]
    return out
However, when I try to perform the same operations in batched form, I cannot get it to work. The idea is that the parameters can come in the form adj[B, N, N], degrees[B, N, N] and topk as an integer, B being the number of graphs in the batch.

How about:
def _laplacian_positional_encoding_th(self, adj, degrees, topk):
    number_of_nodes = adj.shape[-1]
    assert topk < number_of_nodes
    # degrees = th.clip(degrees, 0, 1)  # not multigraph
    D = th.diag_embed(degrees ** -0.5)  # batch of diagonal matrices
    B = D @ adj @ D
    L = th.eye(number_of_nodes).to(B.device)[None, ...] - B
    # Eigenvectors
    EigVal, EigVec = th.linalg.eig(L)
    idx = th.argsort(th.real(EigVal))  # increasing order
    out = th.real(th.gather(EigVec, dim=-1, index=idx[:, None, :].expand(-1, number_of_nodes, -1)))
    return out
See th.diag_embed for creating a batch of diagonal matrices, and th.gather for selecting the right columns of EigVec according to the sorted indices.
Update:
If you want to extract the topk vectors:
_, topk_idx = th.topk(EigVal.real, k=5)  # get the top 5
out = th.gather(EigVec.real, dim=-1, index=topk_idx[:, None, :].expand(-1, EigVec.shape[1], -1))
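For illustration, here is a minimal usage sketch. Note two assumptions on my part: degrees is passed as a per-graph degree vector of shape [B, N], which is what th.diag_embed expects (the question states [B, N, N]), and self is unused so None can stand in for it:
import torch as th

B, N, k = 4, 10, 3
adj = th.randint(0, 2, (B, N, N)).float()
adj = ((adj + adj.transpose(-1, -2)) > 0).float()  # symmetrize
adj = adj * (1 - th.eye(N))                        # drop self-loops
degrees = adj.sum(-1).clamp(min=1)                 # [B, N]; clamp avoids 0 ** -0.5

out = _laplacian_positional_encoding_th(None, adj, degrees, k)  # all sorted eigenvectors
pe = out[..., 1:k + 1]                             # [B, N, k] positional encoding
print(pe.shape)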

Related

Why does my own implementation of Fast Fourier Transform in N dimensions give different results than NumPy's?

I've written the following code for the N-dimensional Fast Fourier Transform but it doesn't give me the same result as numpy's function.
import numpy as np

def nffourier(f, direct):
    dim = f.ndim
    N = f.shape
    G = np.zeros(f.shape, dtype=complex)
    G = f
    for k in range(dim):
        for i in range(N[k]):
            aux = G[(slice(None),) * (k) + (i,)]
            trans = ffourier(aux, direct)
            G[(slice(None),) * (k) + (i,)] = trans
    return G
My code for calculating FFT in 1d is the following:
import math as m
import numpy as np

def ffourier(f, direct):
    N = len(f)
    M = int(m.log(N) / m.log(2))
    G = []
    order = []
    # bit-reversed ordering of the input indices
    for i in range(N):
        aux = bin(i)[2:]
        order.append(int(aux))
        digitos = len(aux)  # after the loop: number of bits of N-1, i.e. M
    for i in range(N):
        contenido_aux = str(int(order[i]))
        aux = len(str(order[i]))
        if aux < digitos:
            añadir = digitos - aux
            for k in range(añadir):
                contenido_aux = '0' + contenido_aux
        G.append(contenido_aux)
    for i in range(len(G)):
        G[i] = G[i][::-1]
    for i in range(len(G)):
        G[i] = int(G[i], 2)
    for i in range(len(G)):
        G[i] = f[G[i]]
    if direct == False:
        signo = 1
    else:
        signo = -1
    # iterative radix-2 butterflies
    kmax = 1
    for alfa in range(1, M + 1):
        w1 = np.exp(signo * 1j * 2 * m.pi / (2 ** alfa))
        kmax = int(2 * kmax)
        W = 1
        for k in range(0, int(kmax / 2)):
            for s in range(0, N, int(kmax)):
                T0 = G[s + k]
                T1 = G[s + k + int(kmax / 2)] * W
                G[s + k] = T0 + T1
                G[s + k + int(kmax / 2)] = T0 - T1
            W = W * w1
    # orthonormal scaling
    cte = 1 / m.sqrt(N)
    for i in range(N):
        G[i] = G[i] * cte
    return G
Its fundamentals are quite hard to explain; it's based on bit reversal. I've checked that it works properly, so the problem is with the N-dimensional code.
Your indexing G[(slice(None),) * (k) + (i,)] works in 2D but not in higher dimensions. Let’s see what it does:
Say G is 2D. Now when k=0, your indexing is the same as G[i], which is the same as G[i,:]. You are selecting rows. When k=1, then that indexing is G[:,i]. You are selecting columns.
But now say G is 3D. Now when k=0, you get G[i] again, which now is equivalent to G[i,:,:]. You are selecting a 2D subarray! What you need is a 1D subarray. You need to get G[i,j,:] for all i and all j. And then G[i,:,j], and then G[:,i,j].
Likewise, for a 5D array, you want G[i,j,k,l,:], etc. That is to say, you want to loop over all dimensions minus one.
To loop over all i and j, you could do a double loop, but then you have specific 3D code. It is possible to write a loop over an arbitrary number of dimensions, but it’s not pretty. So we’ll look for an alternative.
I think the simplest way to get this to work is to flatten those leading dimensions, turning an MxNxOxPxQ array into a 2D (M*N*O*P)xQ array. Now you can do a 1D loop over the first dimension.
You still need to loop over the dimensions, leaving a different dimension out each time. We can simplify this by "rolling" the dimensions: make a different dimension the last one each time, then apply the same flattening code. Now it's easy to write a loop (not tested):
import numpy as np

def nffourier(f, direct):
    dim = f.ndim
    G = f.astype(complex)
    for k in range(dim):
        G = np.moveaxis(G, 0, -1)    # shifts the dimensions by one to the left
        shape = G.shape
        m = shape[-1]
        G = np.reshape(G, (-1, m))   # flattens all but the last dimension
        m = G.shape[0]
        for i in range(m):           # loop over first dimension
            G[i, :] = ffourier(G[i, :], direct)  # apply over last dimension
        G = np.reshape(G, shape)     # return to original shape
    # After applying moveaxis dim times, G has the same dimension order it started with
    return G
(Note also, as we already discussed in the comments, that the G = f line causes the output array G to have the same dtype as f, likely not complex, which will also cause errors.)
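As a quick sanity check of the fixed version: since ffourier applies a 1/sqrt(N) factor per axis, my assumption is that direct=True corresponds to an orthonormalized forward transform, so the result should match numpy's fftn with norm='ortho':
import numpy as np

f = np.random.rand(8, 16, 4)  # power-of-2 sizes, as the radix-2 FFT requires
G = nffourier(f, True)
print(np.allclose(G, np.fft.fftn(f, norm='ortho')))  # expected: True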

What does the locality linear coding function do?

I got this code for spectral clustering.
https://github.com/BirdYin/scllc/blob/master/scllc.py
This is a landmark-based spectral clustering code.
What does the locality_linear_coding function do in this code?
import numpy as np
import scipy.linalg
from scipy import sparse
from scipy.sparse.linalg import svds
from sklearn.cluster import k_means
from sklearn.neighbors import NearestNeighbors

class Scllc:
    def __locality_linear_coding(self, data, neighbors):
        indicator = np.ones([neighbors.shape[0], 1])
        penalty = np.eye(self.n_neighbors)
        # Get the weights of every neighbor
        z = neighbors - indicator.dot(data.reshape(-1, 1).T)
        local_variance = z.dot(z.T)
        local_variance = local_variance + self.lambda_val * penalty
        weights = scipy.linalg.solve(local_variance, indicator)
        weights = weights / np.sum(weights)
        weights = weights / np.sum(np.abs(weights))
        weights = np.abs(weights)
        return weights.reshape(self.n_neighbors)

    def fit(self, X):
        [n_data, n_dim] = X.shape
        # Select landmarks
        if self.func_landmark == 'kmeans':
            landmarks, centers, unknown = k_means(X, self.n_landmarks, n_init=1, max_iter=100)
        nbrs = NearestNeighbors(metric='euclidean').fit(landmarks)
        # Create properties of the sparse matrix Z
        [dist, indy] = nbrs.kneighbors(X, n_neighbors=self.n_neighbors)
        indx = np.ones([n_data, self.n_neighbors]) * np.asarray(range(n_data))[:, None]
        valx = np.zeros([n_data, self.n_neighbors])
        self.delta = np.mean(valx)
        # Compute all the coded data
        for index in range(n_data):
            # Compute the weights of its neighbors
            localmarks = landmarks[indy[index, :], :]
            weights = self.__locality_linear_coding(X[index, :], localmarks)
            # Compute the coded data
            valx[index] = weights
        # Construct sparse matrix
        indx = indx.reshape(n_data * self.n_neighbors)
        indy = indy.reshape(n_data * self.n_neighbors)
        valx = valx.reshape(n_data * self.n_neighbors)
        Z = sparse.coo_matrix((valx, (indx, indy)), shape=(n_data, self.n_landmarks))
        Z = Z / np.sqrt(np.sum(Z, 0))
        # Get first k eigenvectors
        [U, Sigma, V] = svds(Z, k=self.n_clusters + 1)
        U = U[:, 0:self.n_clusters]
        embedded_data = U / np.sqrt(np.sum(U * U, 0))
You can look at the numpy documentation for dealing with n-dimensional arrays; for example, the dot method computes the product of two matrices. They also use the scipy module, whose documentation you can find online.
Note that __locality_linear_coding is a private helper method (hence the double underscore): fit calls it once per data point, and it computes and returns the weights that are stored in the coding matrix.
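Reading the helper itself: for each data point it solves a small regularized linear system. With z holding the offsets between the point and its n_neighbors nearest landmarks, it solves (z zᵀ + λI) w = 1 and normalizes w, so the point is coded as a weighted combination of its local landmarks. A standalone sketch of that computation on toy data (lambda_val and n_neighbors stand in for the class attributes):
import numpy as np
import scipy.linalg

n_neighbors, n_dim, lambda_val = 5, 3, 1e-3
data = np.random.rand(n_dim)                       # one data point
neighbors = np.random.rand(n_neighbors, n_dim)     # its nearest landmarks

ones = np.ones([n_neighbors, 1])
z = neighbors - ones.dot(data.reshape(-1, 1).T)    # offsets from the point
C = z.dot(z.T) + lambda_val * np.eye(n_neighbors)  # regularized local covariance
w = scipy.linalg.solve(C, ones)
w = w / np.sum(w)                                  # weights sum to 1
print(w.ravel())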

Translating from Matlab: New problems (complex Ginzburg-Landau equation)

Thank you for all of your constructive criticism on my last post. I have made some changes, but alas my code is still not working and I can't figure out why. When I run this version, I get a runtime warning about invalid values encountered in matmul.
My code is given as
from __future__ import division
import numpy as np
from scipy.linalg import eig
from scipy.linalg import toeplitz
def poldif(*arg):
    """
    Calculate differentiation matrices on arbitrary nodes.

    Returns the differentiation matrices D1, D2, .. DM corresponding to the
    M-th derivative of the function f at arbitrarily specified nodes. The
    differentiation matrices can be computed with unit weights or
    with specified weights.

    Parameters
    ----------
    x : ndarray
        vector of N distinct nodes
    M : int
        maximum order of the derivative, 0 < M <= N - 1

    OR (when computing with specified weights)

    x : ndarray
        vector of N distinct nodes
    alpha : ndarray
        vector of weight values alpha(x), evaluated at x = x_j.
    B : ndarray
        matrix of size M x N, where M is the highest derivative required.
        It should contain the quantities B[l,j] = beta_{l,j} =
        l-th derivative of log(alpha(x)), evaluated at x = x_j.

    Returns
    -------
    DM : ndarray
        M x N x N array of differentiation matrices

    Notes
    -----
    This function returns M differentiation matrices corresponding to the
    1st, 2nd, ... M-th derivatives on arbitrary nodes specified in the array
    x. The nodes must be distinct but are, otherwise, arbitrary. The
    matrices are constructed by differentiating the N-th order Lagrange
    interpolating polynomial that passes through the specified points.

    The M-th derivative of the grid function f is obtained by the matrix-
    vector multiplication

    .. math::

        f^{(m)}_i = D^{(m)}_{ij} f_j

    This function is based on code by Rex Fuzzle
    https://github.com/RexFuzzle/Python-Library

    References
    ----------
    ..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
    Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.
    ..[2] J. A. C. Weideman and S. C. Reddy, A MATLAB Differentiation Matrix
    Suite, ACM Transactions on Mathematical Software, 26, (2000): 465-519
    """
    if len(arg) > 3:
        raise Exception('number of arguments is either two OR three')
    if len(arg) == 2:
        # unit weight function : arguments are nodes and derivative order
        x, M = arg[0], arg[1]
        N = np.size(x)
        # assert M < N, "Derivative order cannot be larger or equal to number of points"
        if M >= N:
            raise Exception("Derivative order cannot be larger or equal to number of points")
        alpha = np.ones(N)
        B = np.zeros((M, N))
    elif len(arg) == 3:
        # specified weight function : arguments are nodes, weights and B matrix
        x, alpha, B = arg[0], arg[1], arg[2]
        N = np.size(x)
        M = B.shape[0]

    I = np.eye(N)                      # identity matrix
    L = np.logical_or(I, np.zeros(N))  # logical identity matrix
    XX = np.transpose(np.array([x, ] * N))
    DX = XX - np.transpose(XX)         # DX contains entries x(k)-x(j)
    DX[L] = np.ones(N)                 # put 1's on the main diagonal
    c = alpha * np.prod(DX, 1)         # quantities c(j)
    C = np.transpose(np.array([c, ] * N))
    C = C / np.transpose(C)            # matrix with entries c(k)/c(j)
    Z = 1 / DX                         # Z contains entries 1/(x(k)-x(j))
    Z[L] = 0                           # with zeros on the diagonal
    X = np.transpose(np.copy(Z))       # X is same as Z', but with ...
    Xnew = X
    for i in range(0, N):
        Xnew[i:N - 1, i] = X[i + 1:N, i]
    X = Xnew[0:N - 1, :]               # ... diagonal entries removed
    Y = np.ones([N - 1, N])            # initialize Y and D matrices
    D = np.eye(N)                      # Y is matrix of cumulative sums
    DM = np.empty((M, N, N))           # differentiation matrices

    for ell in range(1, M + 1):
        Y = np.cumsum(np.vstack((B[ell - 1, :], ell * (Y[0:N - 1, :]) * X)), 0)  # diagonals
        D = ell * Z * (C * np.transpose(np.tile(np.diag(D), (N, 1))) - D)        # off-diagonals
        D[L] = Y[N - 1, :]
        DM[ell - 1, :, :] = D

    return DM
def herdif(N, M, b=1):
    """
    Calculate differentiation matrices using Hermite collocation.

    Returns the differentiation matrices D1, D2, .. DM corresponding to the
    M-th derivative of the function f, at the N Hermite nodes scaled by b.

    Parameters
    ----------
    N : int
        number of grid points
    M : int
        maximum order of the derivative, 0 < M < N
    b : float, optional
        scale parameter, real and positive

    Returns
    -------
    x : ndarray
        N x 1 array of Hermite nodes which are zeros of the N-th degree
        Hermite polynomial, scaled by b
    DM : ndarray
        M x N x N array of differentiation matrices

    Notes
    -----
    This function returns M differentiation matrices corresponding to the
    1st, 2nd, ... M-th derivatives on a Hermite grid of N points. The
    matrices are constructed by differentiating N-th order Hermite
    interpolants.

    The M-th derivative of the grid function f is obtained by the matrix-
    vector multiplication

    .. math::

        f^{(m)}_i = D^{(m)}_{ij} f_j

    References
    ----------
    ..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
    Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.
    ..[2] J. A. C. Weideman and S. C. Reddy, A MATLAB Differentiation Matrix
    Suite, ACM Transactions on Mathematical Software, 26, (2000): 465-519
    ..[3] R. Baltensperger and M. R. Trummer, Spectral Differencing With A
    Twist, SIAM Journal on Scientific Computing 24, (2002): 1465-1487
    """
    if M >= N - 1:
        raise Exception('number of nodes must be greater than M + 1')
    if M <= 0:
        raise Exception('derivative order must be at least 1')

    x = herroots(N)             # compute Hermite nodes
    alpha = np.exp(-x * x / 2)  # compute Hermite weights

    # construct beta(l,j) = d^l/dx^l (alpha(x)/alpha'(x))|x=x_j recursively
    beta = np.zeros([M + 1, N])
    beta[0, :] = np.ones(N)
    beta[1, :] = -x
    for ell in range(2, M + 1):
        beta[ell, :] = -x * beta[ell - 1, :] - (ell - 1) * beta[ell - 2, :]
    # remove initialising row from beta
    beta = np.delete(beta, 0, 0)

    # compute differentiation matrix (b=1)
    DM = poldif(x, alpha, beta)
    # scale nodes by the factor b
    x = x / b
    # scale the matrix by the factor b
    for ell in range(M):
        DM[ell, :, :] = (b ** (ell + 1)) * DM[ell, :, :]

    return x, DM
def herroots(N):
    """
    Compute the roots of the Hermite polynomial of degree N.

    Parameters
    ----------
    N : int
        degree of the Hermite polynomial

    Returns
    -------
    x : ndarray
        N x 1 array of Hermite roots
    """
    # Jacobi matrix
    d = np.sqrt(np.arange(1, N))
    J = np.diag(d, 1) + np.diag(d, -1)
    # compute eigenvalues
    mu = eig(J)[0]
    # return sorted, normalised eigenvalues;
    # real part only since all roots must be real
    return np.real(np.sort(mu) / np.sqrt(2))
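As a side check, the roots returned by herroots should agree with numpy's Gauss-Hermite nodes, which are the zeros of the physicists' Hermite polynomial H_N (a quick sanity test, assuming herroots as defined above):
import numpy as np

x = herroots(10)
xg, _ = np.polynomial.hermite.hermgauss(10)  # nodes are the roots of H_10
print(np.allclose(np.sort(x), np.sort(xg)))  # expected: True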
a = 1 - 1j
b = 2 + 0.2j
c1 = 0.34
c2 = 0.005
alpha1 = (4 * c2 / a) ** 0.25
alpha2 = b / 2 * a

Nx = 220

# hermite differentiation matrices
[x, D] = herdif(Nx, 2, np.real(alpha1))
D1 = D[0, :]
D2 = D[1, :]

# integration weights (trapezoidal)
diff = np.diff(x)
p = np.concatenate([np.zeros(1), diff])
q = np.concatenate([diff, np.zeros(1)])
w = (p + q) / 2
Q = np.diag(w)

# discretised operator
const = c1 * np.diag(np.ones(len(x))) - c2 * (np.diag(x) * np.diag(x))
A = a * D2 - b * D1 + const

##### Timestepping
tmax = 200
tmin = 0
dt = 1
n = int((tmax - tmin) / dt)
tvec = np.linspace(0, tmax, n, endpoint=True)

q = np.zeros((Nx, len(tvec)), dtype=complex)
f = np.zeros((Nx, len(tvec)), dtype=complex)
q0 = np.ones(Nx) * 10 ** 4
q[:, 0] = q0

# qnew - qold = dt*A*qold + dt*N(qold, qold, qold)
# qnew - qold = dt*A*qnew - dt*N(qold, qold, qold)
# therefore qnew - qold = 0.5*dt*A*qold + 0.5*dt*A*qnew + dt*N(qold, qold, qold)
# rearranging gives qnew*(1 - 0.5*A*dt) = (1 + 0.5*A*dt)*qold + dt*N(qold, qold, qold)
from numpy.linalg import inv
inverted = inv(np.eye(Nx) - 0.5 * A * dt)
forqold = np.eye(Nx) + 0.5 * A * dt
firstterm = np.matmul(inverted, forqold)
for t in range(0, len(tvec) - 1):
    nl = abs(np.square(q[:, t])) * q[:, t]
    q[:, t + 1] = np.matmul(firstterm, q[:, t]) - dt * np.matmul(inverted, nl)
where the Hermite differentiation matrices can be found online and live in a different file. This code blows up after five iterations, which I cannot understand, as I don't see how it differs from the MATLAB code found here: https://www.bagherigroup.com/research/open-source-codes/
I would really appreciate any help.
Error in:
q[:,t+1] = inverted*forqold*np.array(q[:,t]) + inverted*dt*np.array(nl)
q[:, t+1] indexes a 2d array (probably not a np.matrix which is more MATLAB like). This indexing reduces the number of dimensions by 1, hence the (220,) shape in the error message.
The error message says the RHS is (220,220). That shape probably comes from inverted and forqold. np.array(q[:,t]) is 1d. Multiplying a (220,220) by a (220,) is OK, but you can't put that square array into a 1d slot.
Both uses of np.array in the error line are superfluous. Their arguments are already ndarray.
As for the loop, it may be necessary. It looks like q[:,t+1] is a function of q[:,t], a serial, rather than parallel, operation. Those are harder to render as "vectorized" (unless you can use cumsum-like operations).
Note that in numpy * is elementwise multiplication, the .* of MATLAB. np.dot and @ are used for matrix multiplication.
q[:,t+1] = inverted @ q[:,t]
would work
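To make the distinction concrete, here is a minimal numpy sketch of the difference:
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
v = np.array([10., 100.])

print(A * v)  # elementwise with broadcasting (MATLAB's .*): [[10. 200.] [30. 400.]]
print(A @ v)  # matrix-vector product (MATLAB's *): [210. 430.]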

Compute a least square only on n best values of an array

I have:
print(self.L.T.shape)
print(self.M.T.shape)
(8, 3)
(8, 9082318)
self.N = np.linalg.lstsq(self.L.T, self.M.T, rcond=None)[0].T
which works fine and returns
(9082318, 3)
But
I want to perform a kind of sort on M and compute the solution using only the best 8 - n values of M,
or ignore values of M below and/or above a certain threshold.
Any pointer on how to do that would be extremely appreciated.
Thank you.
I tried to copy this solution exactly, but it returns an error.
The original working function is shown below; basically, it's just one line.
M is a stack of 8 grayscale images, reshaped.
L is a stack of 8 light direction vectors.
M contains shadows, but not always at the same location in the image,
so I need to remove those pixels from the computation while L retains its dimensions.
def _solve_l2(self):
    """
    Lambertian photometric stereo based on least-squares
    Woodham 1980
    :return: None
    Compute surface normal : numpy array of surface normals (p \times 3)
    """
    self.N = np.linalg.lstsq(self.L.T, self.M.T, rcond=None)[0].T
    print(self.N.shape)
    self.N = normalize(self.N, axis=1)  # normalize to account for diffuse reflectance
Here is the code borrowed from the link to try to resolve this:
# L and M as previously used
Ma = self.M.copy()
thresh = 300
Ma[self.M <= thresh] = 0
Ma[self.M > thresh] = 1
Ma = Ma.T
self.M = self.M.T
self.L = self.L.T
print(self.L.shape)
print(self.M.shape)
print(Ma.shape)

A = self.L
B = self.M
M = Ma  # http://alexhwilliams.info/itsneuronalblog/2018/02/26/censored-lstsq/
# else solve via tensor representation
rhs = np.dot(A.T, M * B).T[:, :, None]                           # n x r x 1 tensor
T = np.matmul(A.T[None, :, :], M.T[:, :, None] * A[None, :, :])  # n x r x r tensor
self.N = np.squeeze(np.linalg.solve(T, rhs)).T                   # transpose to get r x n
return
numpy.linalg.LinAlgError: Singular matrix
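For what it's worth, a likely cause of that error: in this censored formulation, each per-pixel system T = Aᵀ diag(mask) A is singular whenever a pixel has fewer unmasked observations than unknowns (here, fewer than 3 images above the threshold), which thresholded shadow masks easily produce. Below is a minimal sketch of the same censored solve on toy data, with a small Tikhonov term added to keep every T invertible; the eps regularization is my addition, not part of the borrowed code:
import numpy as np

r, k, n = 3, 8, 100                        # unknowns, images, pixels
A = np.random.randn(k, r)                  # light directions, as L.T above
B = np.random.rand(k, n)                   # observations, as M.T above
M = (B > 0.2).astype(float)                # mask: 1 = lit, 0 = censored (shadow)

rhs = np.dot(A.T, M * B).T[:, :, None]                           # n x r x 1
T = np.matmul(A.T[None, :, :], M.T[:, :, None] * A[None, :, :])  # n x r x r
T = T + 1e-8 * np.eye(r)[None, :, :]       # tiny Tikhonov term avoids singular T
N = np.squeeze(np.linalg.solve(T, rhs)).T  # r x n solution
print(N.shape)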

Compute the pairwise distance between each pair of the two collections of inputs in TensorFlow

I have two collections. One consists of m1 points in k dimensions and the other of m2 points in k dimensions. I need to calculate the pairwise distance between each pair from the two collections.
Basically, having two matrices A of shape (m1, k) and B of shape (m2, k), I need to get a matrix C of shape (m1, m2).
I can easily do this in scipy by using distance.cdist and selecting one of many distance metrics, and I can also do this in TF in a loop, but I can't figure out how to do it with matrix manipulations, even for Euclidean distance.
After a few hours I finally figured out how to do this in TensorFlow. My solution works only for Euclidean distance and is pretty verbose. I also don't have a mathematical proof (just a lot of handwaving, which I hope to make more rigorous):
import tensorflow as tf
import numpy as np
from scipy.spatial.distance import cdist

M1, M2, K = 3, 4, 2

# Scipy calculation
a = np.random.rand(M1, K).astype(np.float32)
b = np.random.rand(M2, K).astype(np.float32)
print(cdist(a, b, 'euclidean'), '\n')

# TF calculation
A = tf.Variable(a)
B = tf.Variable(b)

p1 = tf.matmul(
    tf.expand_dims(tf.reduce_sum(tf.square(A), 1), 1),
    tf.ones(shape=(1, M2))
)
p2 = tf.transpose(tf.matmul(
    tf.reshape(tf.reduce_sum(tf.square(B), 1), shape=[-1, 1]),
    tf.ones(shape=(M1, 1)),
    transpose_b=True
))
res = tf.sqrt(tf.add(p1, p2) - 2 * tf.matmul(A, B, transpose_b=True))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(res))
This will do it for tensors of arbitrary dimensionality (i.e. containing (..., N, d) vectors). Note that it isn't between collections (i.e. not like scipy.spatial.distance.cdist); instead it works within a single batch of vectors (like scipy.spatial.distance.pdist).
import tensorflow as tf
import string

def pdist(arr):
    """Pairwise Euclidean distances between vectors contained at the back of tensors.

    Uses expansion: (x - y)^T (x - y) = x^T x - 2 x^T y + y^T y

    :param arr: (..., N, d) tensor
    :returns: (..., N, N) tensor of pairwise distances between vectors in the
        second-to-last dimension.
    :rtype: tf.Tensor
    """
    shape = tuple(arr.get_shape().as_list())
    rank_ = len(shape)
    N, d = shape[-2:]
    # Build a prefix from the array without the indices we'll use later.
    pref = string.ascii_lowercase[:rank_ - 2]
    # Outer product of points (..., N, N)
    xxT = tf.einsum('{0}ni,{0}mi->{0}nm'.format(pref), arr, arr)
    # Inner product of points. (..., N)
    xTx = tf.einsum('{0}ni,{0}ni->{0}n'.format(pref), arr, arr)
    # (..., N, N) inner products tiled.
    xTx_tile = tf.tile(xTx[..., None], (1,) * (rank_ - 1) + (N,))
    # Build the permuter. (sigh, no tf.swapaxes yet)
    permute = list(range(rank_))
    permute[-2], permute[-1] = permute[-1], permute[-2]
    # dists = (x^Tx - 2x^Ty + y^Ty)^(1/2). The axis swap is needed to 'pair' x^Tx and y^Ty
    return tf.sqrt(xTx_tile - 2 * xxT + tf.transpose(xTx_tile, permute))
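For completeness, the between-collections (cdist-style) version can also be written much more compactly with broadcasting, at the cost of materializing an (m1, m2, k) intermediate; a minimal sketch, using eager-mode TensorFlow:
import tensorflow as tf
import numpy as np
from scipy.spatial.distance import cdist

a = np.random.rand(3, 2).astype(np.float32)
b = np.random.rand(4, 2).astype(np.float32)

# (m1, 1, k) - (1, m2, k) broadcasts to (m1, m2, k); reduce over k
diff = a[:, None, :] - b[None, :, :]
dists = tf.sqrt(tf.reduce_sum(tf.square(diff), axis=-1))

print(np.allclose(dists.numpy(), cdist(a, b, 'euclidean'), atol=1e-5))  # expected: True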
