I'm trying to implement ZCA whitening and found some articles about it, but they are a bit confusing. Can someone shed some light on this for me?
Any tip or help is appreciated!
Here are the articles I read:
http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf
http://bbabenko.tumblr.com/post/86756017649/learning-low-level-vision-feautres-in-10-lines-of
I tried several things, but most of them I didn't understand, and I got stuck at some step.
Right now I have this as a base to start again:
dtype = np.float32
data = np.loadtxt("../inputData/train.csv", dtype=dtype, delimiter=',', skiprows=1)
img = ((data[1,1:]).reshape((28,28)).astype('uint8')*255)
Here is a python function for generating the ZCA whitening matrix:
import numpy as np

def zca_whitening_matrix(X):
    """
    Function to compute ZCA whitening matrix (aka Mahalanobis whitening).
    INPUT:  X: [M x N] matrix.
        Rows: Variables
        Columns: Observations
    OUTPUT: ZCAMatrix: [M x M] matrix
    """
    # Covariance matrix [column-wise variables]: Sigma = (X - mu)' * (X - mu) / N
    sigma = np.cov(X, rowvar=True)  # [M x M]
    # Singular Value Decomposition. X = U * np.diag(S) * V
    U, S, V = np.linalg.svd(sigma)
        # U: [M x M] eigenvectors of sigma.
        # S: [M x 1] eigenvalues of sigma.
        # V: [M x M] transpose of U
    # Whitening constant: prevents division by zero
    epsilon = 1e-5
    # ZCA Whitening matrix: U * Lambda * U'
    ZCAMatrix = np.dot(U, np.dot(np.diag(1.0 / np.sqrt(S + epsilon)), U.T))  # [M x M]
    return ZCAMatrix
And an example of the usage:
X = np.array([[0, 2, 2], [1, 1, 0], [2, 0, 1], [1, 3, 5], [10, 10, 10] ]) # Input: X [5 x 3] matrix
ZCAMatrix = zca_whitening_matrix(X) # get ZCAMatrix
ZCAMatrix # [5 x 5] matrix
xZCAMatrix = np.dot(ZCAMatrix, X) # project X onto the ZCAMatrix
xZCAMatrix # [5 x 3] matrix
Hope it helps!
Details on why Edgar Andrés Margffoy Tuay's answer is not correct: As pointed out in R.M's comment, Edgar Andrés Margffoy Tuay's ZCA whitening function contains a small but crucial mistake: the np.diag(S) should be removed. NumPy returns S as an m x 1 vector, not as an m x m matrix (as is common in other SVD implementations, e.g. MATLAB). Hence the ZCAMatrix variable becomes an m x 1 vector instead of the m x m matrix it should be (when the input is m x n). (Also, the covariance matrix in Andfoy's answer is only valid if X is pre-centered, i.e. has mean 0.)
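A quick shape check illustrating the point (my own snippet, not from either answer):
import numpy as np

X = np.random.randn(5, 3)            # m = 5 variables, n = 3 observations
sigma = np.cov(X, rowvar=True)       # [5 x 5]
U, S, V = np.linalg.svd(sigma)
print(S.shape)                       # (5,) -- a 1-D vector, not a 5 x 5 matrix
# so np.diag(1.0 / np.sqrt(S + 1e-5)) already builds the diagonal matrix;
# wrapping S in an extra np.diag collapses the final product to a vector.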
Other references for ZCA: You can see the full answer, in Python, to the Stanford UFLDL ZCA Whitening exercise here.
Is your data stored in an m x n matrix, where m is the dimension of the data and n is the total number of cases? If that's not the case, you should reshape your data. For instance, if your images are of size 28x28 and you have only one image, you should have a 1x784 vector. You could use this function:
import numpy as np
def flatten_matrix(matrix):
    vector = matrix.flatten(order='F')
    vector = vector.reshape(1, len(vector))
    return vector
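For example, just to illustrate the expected shape (my own snippet):
img = np.zeros((28, 28))
print(flatten_matrix(img).shape)   # (1, 784)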
Then you apply ZCA Whitening to your training set using:
def zca_whitening(inputs):
    sigma = np.dot(inputs, inputs.T)/inputs.shape[1]  # Correlation matrix
    U, S, V = np.linalg.svd(sigma)  # Singular Value Decomposition
    epsilon = 0.1  # Whitening constant, it prevents division by zero
    ZCAMatrix = np.dot(np.dot(U, np.diag(1.0/np.sqrt(np.diag(S) + epsilon))), U.T)  # ZCA Whitening matrix
    return np.dot(ZCAMatrix, inputs)  # Data whitening
It is important to save the ZCAMatrix; you should multiply your test cases by it if you want to make predictions after training the neural net.
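A minimal sketch of that idea (my own illustration, splitting the function above so the matrix is returned and reused, and using the corrected diagonal from the note above; the data here is just a placeholder):
import numpy as np

def fit_zca(inputs, epsilon=0.1):
    # inputs: [dim x n_cases]; returns the ZCA whitening matrix so it can be reused
    sigma = np.dot(inputs, inputs.T) / inputs.shape[1]
    U, S, V = np.linalg.svd(sigma)
    return np.dot(np.dot(U, np.diag(1.0 / np.sqrt(S + epsilon))), U.T)

# placeholder data: 784-dimensional training and test cases stored in columns
train_inputs = np.random.rand(784, 1000)
test_inputs = np.random.rand(784, 200)

ZCAMatrix = fit_zca(train_inputs)                 # fit on the training set only
train_white = np.dot(ZCAMatrix, train_inputs)
test_white = np.dot(ZCAMatrix, test_inputs)       # reuse the same matrix at test time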
Finally, I invite you to take the Stanford UFLDL tutorials at http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial or http://ufldl.stanford.edu/tutorial/ . They have pretty good explanations and also some programming exercises in MATLAB; however, almost all of the MATLAB functions used there exist in NumPy under the same name. I hope this gives some insight.
I may be a little late to the discussion, but I found this thread recently as I struggled to implement ZCA in TensorFlow, because my poor PC processor was too slow to process a large volume of data.
If anyone is interested, I have made a gist of my implementation of the ZCA in TensorFlow:
import tensorflow as tf
from keras.datasets import mnist
import numpy as np

tf.enable_eager_execution()
assert tf.executing_eagerly()


class ZCA(object):
    """
    Simple ZCA aka Mahalanobis transformation class made in TensorFlow.
    The code was largely ported from Keras ImageDataGenerator
    """

    def __init__(self, epsilon=1e-5, dtype='float64'):
        """epsilon is the normalization constant, dtype refers to the data type used in the computation.
        WARNING: the default precision is set to float64 because I have found that the mean computed by
        TensorFlow and by NumPy can differ by a substantial amount in float32.
        Usage: the fit method computes the principal components and should be called first,
        the compute method returns the actual transformed tensor.
        NOTE: the input to both methods must be a 4D tensor.
        """
        assert dtype in ('float32', 'float64'), "precision must be float32 or float64"
        self.epsilon = epsilon
        self.dtype = dtype
        self.princ_comp = None
        self.mean = None

    def _featurewise_center(self, images_tensor):
        if self.mean is None:
            self.mean, _ = tf.nn.moments(images_tensor, axes=(0, 1, 2))
            broadcast_shape = [1, 1, 1]
            broadcast_shape[2] = images_tensor.shape[3]
            self.mean = tf.reshape(self.mean, broadcast_shape)
        norm_images = tf.subtract(images_tensor, self.mean)
        return norm_images

    def fit(self, images_tensor):
        assert len(images_tensor.shape) == 4, "The input should be a 4D tensor"
        if images_tensor.dtype != self.dtype:  # numerical error for float32
            images_tensor = tf.cast(images_tensor, self.dtype)
        images_tensor = self._featurewise_center(images_tensor)
        flat = tf.reshape(images_tensor, (-1, np.prod(images_tensor.shape[1:].as_list())))
        sigma = tf.div(tf.matmul(tf.transpose(flat), flat), tf.cast(flat.shape[0], self.dtype))
        s, u, _ = tf.svd(sigma)
        s_inv = tf.div(tf.cast(1, self.dtype), (tf.sqrt(tf.add(s[tf.newaxis], self.epsilon))))
        self.princ_comp = tf.matmul(tf.multiply(u, s_inv), tf.transpose(u))

    def compute(self, images_tensor):
        assert len(images_tensor.shape) == 4, "The input should be a 4D tensor"
        assert self.princ_comp is not None, "Fit method should be called first"
        if images_tensor.dtype != self.dtype:
            images_tensor = tf.cast(images_tensor, self.dtype)
        images_tensors = self._featurewise_center(images_tensor)
        flatx = tf.cast(tf.reshape(images_tensors, (-1, np.prod(images_tensors.shape[1:]))), self.dtype)
        whitex = tf.matmul(flatx, self.princ_comp)
        x = tf.reshape(whitex, images_tensors.shape)
        return x


def main():
    import matplotlib.pyplot as plt
    train_set, test_set = mnist.load_data()
    x_train, y_train = train_set
    zca1 = ZCA(epsilon=1e-5, dtype='float64')
    # input should be a 4D tensor
    x_train = x_train.reshape(*x_train.shape, 1)
    zca1.fit(x_train)
    x_train_transf = zca1.compute(x_train)
    # reshape back to (N, 28, 28) for plotting
    x_train_transf = tf.reshape(x_train_transf, x_train_transf.shape[0:3])
    fig, axes = plt.subplots(3, 3)
    for i, ax in enumerate(axes.flat):
        # Plot image.
        ax.imshow(x_train_transf[i], cmap='binary')
        xlabel = "True: %d" % y_train[i]
        ax.set_xlabel(xlabel)
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()


if __name__ == '__main__':
    main()
I know this isn't a proper answer to the original question, but still it may be useful to anyone who is looking for a GPU implementation of ZCA but couldn't find one.
Although both answers refer to the UFLDL tutorial, neither of them seems to use the steps as described in it.
Therefore, I thought it might not be a bad idea to provide an answer that simply implements PCA/ZCA whitening according to the tutorial:
import numpy as np
# generate some random, 2D data
x = np.random.randn(1000, 2)
# and center it
x_c = x - np.mean(x, 0)
# compute the 2x2 covariance matrix
# (remember that covariance matrix is symmetric)
sigma = np.cov(x, rowvar=False)
# and extract eigenvalues and eigenvectors
# using the algorithm for symmetric matrices
l,u = np.linalg.eigh(sigma)
# NOTE that for symmetric matrices,
# eigenvalues and singular values are the same.
# u, l, _ = np.linalg.svd(sigma) would thus give equivalent results
# (up to the ordering of the eigenvalues and the signs of the eigenvectors)
# rotate the (centered) data to decorrelate it
x_rot = np.dot(x_c, u)
# check that the covariance is diagonal (indicating decorrelation)
np.allclose(np.cov(x_rot.T), np.diag(np.diag(np.cov(x_rot.T))))
# scale the data by eigenvalues to get unit variance
x_white = x_rot / np.sqrt(l)
# have the whitened data be closer to the original data
x_zca = np.dot(x_white, u.T)
I assume you can wrap this in a function by yourself...
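For convenience, here is a minimal wrapper of the steps above (my own sketch; pass a small epsilon if your covariance is rank-deficient):
def zca_whiten(x, epsilon=0.0):
    # x: (n_samples, n_features); returns the ZCA-whitened data
    x_c = x - np.mean(x, 0)
    l, u = np.linalg.eigh(np.cov(x_c, rowvar=False))
    x_rot = np.dot(x_c, u)                   # decorrelate
    x_white = x_rot / np.sqrt(l + epsilon)   # scale to unit variance
    return np.dot(x_white, u.T)              # rotate back (the ZCA step)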
For completeness, different implementation flavours and their runtime (evaluated on a centred version of CIFAR10):
x = np.random.randn(10_000, 3, 32, 32)
x_ = np.reshape(x, (len(x), -1))
x_c = x_ - np.mean(x_, axis=0)
def zca1(x):
    s, u = np.linalg.eigh(x.T @ x)
    scale = np.sqrt(len(x) / s)
    return (u * scale) @ u.T

def zca2(x):
    u, s, _ = np.linalg.svd(x.T @ x, hermitian=True)
    scale = np.sqrt(len(x) / s)
    return (u * scale) @ u.T

def zca3(x):
    _, s, v = np.linalg.svd(x, full_matrices=False)
    scale = np.sqrt(len(x)) / s
    return (v.T * scale) @ v
%timeit zca1(x_c)
# 4.57 s ± 14.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit zca2(x_c)
# 4.62 s ± 22.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit zca3(x_c)
# 20.2 s ± 1.2 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
For the mathematics behind this, I refer to this excellent answer from cross validated.
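As a usage note (my own addition, not part of the benchmark): each of these functions returns the whitening matrix, which is then applied to the centred data, e.g.:
W = zca1(x_c)             # (3072, 3072) whitening matrix
x_white = x_c @ W         # whitened samples
# the covariance of the whitened data should be essentially the identity;
# expect a deviation of roughly 1e-4 here, since np.cov divides by N - 1
# while the functions above use N
print(np.abs(np.cov(x_white, rowvar=False) - np.eye(x_white.shape[1])).max())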
This works with an array of 48x48:
def flatten_matrix(matrix):
    vector = matrix.flatten(order='F')
    vector = vector.reshape(1, len(vector))
    return vector

def zca_whitening(inputs):
    sigma = np.dot(inputs, inputs.T)/inputs.shape[1]  # Correlation matrix
    U, S, V = np.linalg.svd(sigma)  # Singular Value Decomposition
    epsilon = 0.1  # Whitening constant, it prevents division by zero
    ZCAMatrix = np.dot(np.dot(U, np.diag(1.0/np.sqrt(np.diag(S) + epsilon))), U.T)  # ZCA Whitening matrix
    return np.dot(ZCAMatrix, inputs)  # Data whitening
def global_contrast_normalize(X, scale=1., subtract_mean=True, use_std=True,
                              sqrt_bias=10, min_divisor=1e-8):
    """
    __author__ = "David Warde-Farley"
    __copyright__ = "Copyright 2012, Universite de Montreal"
    __credits__ = ["David Warde-Farley"]
    __license__ = "3-clause BSD"
    __email__ = "wardefar@iro"
    __maintainer__ = "David Warde-Farley"

    .. [1] A. Coates, H. Lee and A. Ng. "An Analysis of Single-Layer
       Networks in Unsupervised Feature Learning". AISTATS 14, 2011.
       http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf
    """
    assert X.ndim == 2, "X.ndim must be 2"
    scale = float(scale)
    assert scale >= min_divisor

    mean = X.mean(axis=1)
    if subtract_mean:
        X = X - mean[:, np.newaxis]
    else:
        X = X.copy()

    if use_std:
        ddof = 1
        if X.shape[1] == 1:
            ddof = 0
        normalizers = np.sqrt(sqrt_bias + X.var(axis=1, ddof=ddof)) / scale
    else:
        normalizers = np.sqrt(sqrt_bias + (X ** 2).sum(axis=1)) / scale

    normalizers[normalizers < min_divisor] = 1.
    X /= normalizers[:, np.newaxis]  # Does not make a copy.
    return X
def ZeroCenter(data):
    data = data - np.mean(data, axis=0)
    return data

def Zerocenter_ZCA_whitening_Global_Contrast_Normalize(data):
    numpy_data = np.array(data).reshape(48, 48)
    data2 = ZeroCenter(numpy_data)
    data3 = zca_whitening(flatten_matrix(data2)).reshape(48, 48)
    data4 = global_contrast_normalize(data3)
    data5 = np.rot90(data4, 3)
    return data5
For example, running this on an input image from the dataset returns a zero-centred, ZCA-whitened, contrast-normalized version of it (the example input and output images are omitted here).
Here is the code:
https://gist.github.com/m-alcu/45f4a083cb5e388d2ed26ace4392ed66; you need to put the fer2013.csv file in the same directory (https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data).
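A minimal sketch of how the pipeline above might be driven (my own snippet; it assumes fer2013.csv is in the working directory and has the usual layout, where the pixels column holds a space-separated string of 48*48 grey values):
import numpy as np
import pandas as pd

df = pd.read_csv('fer2013.csv')
pixels = np.array(df['pixels'][0].split(), dtype=float)   # one image as a flat 2304-vector
processed = Zerocenter_ZCA_whitening_Global_Contrast_Normalize(pixels)
print(processed.shape)                                    # (48, 48)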
I got this code for spectral clustering.
https://github.com/BirdYin/scllc/blob/master/scllc.py
This is a landmark-based spectral clustering code.
What does the locality_linear_coding function do in this code?
class Scllc:
    def __locality_linear_coding(self, data, neighbors):
        indicator = np.ones([neighbors.shape[0], 1])
        penalty = np.eye(self.n_neighbors)

        # Get the weights of every neighbors
        z = neighbors - indicator.dot(data.reshape(-1, 1).T)
        local_variance = z.dot(z.T)
        local_variance = local_variance + self.lambda_val * penalty
        weights = scipy.linalg.solve(local_variance, indicator)

        weights = weights / np.sum(weights)
        weights = weights / np.sum(np.abs(weights))
        weights = np.abs(weights)

        return weights.reshape(self.n_neighbors)

    def fit(self, X):
        [n_data, n_dim] = X.shape

        # Select landmarks
        if self.func_landmark == 'kmeans':
            landmarks, centers, unknown = k_means(X, self.n_landmarks, n_init=1, max_iter=100)
        nbrs = NearestNeighbors(metric='euclidean').fit(landmarks)

        # Create properties of the sparse matrix Z
        [dist, indy] = nbrs.kneighbors(X, n_neighbors=self.n_neighbors)
        indx = np.ones([n_data, self.n_neighbors]) * np.asarray(range(n_data))[:, None]
        valx = np.zeros([n_data, self.n_neighbors])
        self.delta = np.mean(valx)

        # Compute all the coded data
        for index in range(n_data):
            # Compute the weights of its neighbors
            localmarks = landmarks[indy[index, :], :]
            weights = self.__locality_linear_coding(X[index, :], localmarks)
            # Compute the coded data
            valx[index] = weights

        # Construct sparse matrix
        indx = indx.reshape(n_data * self.n_neighbors)
        indy = indy.reshape(n_data * self.n_neighbors)
        valx = valx.reshape(n_data * self.n_neighbors)

        Z = sparse.coo_matrix((valx, (indx, indy)), shape=(n_data, self.n_landmarks))
        Z = Z / np.sqrt(np.sum(Z, 0))

        # Get first k eigenvectors
        [U, Sigma, V] = svds(Z, k=self.n_clusters + 1)
        U = U[:, 0:self.n_clusters]
        embedded_data = U / np.sqrt(np.sum(U * U, 0))
You can look at the documentation of the numpy module for working with n-dimensional arrays; for example, the dot method computes the product of two matrices.
They also use the scipy module; its documentation is available online as well.
The first function of a class is usually an initializer: the user has to call it to make full use of the class, and it is where all the variables the user needs are defined and saved.
I am trying to implement PCA without any library for image dimension reduction. I tried the code from the O'Reilly Computer Vision book and ran it on a sample Lenna picture:
from PIL import Image
import numpy as np
from skimage import color, io
import matplotlib.pyplot as plt

def pca(X):
    num_data, dim = X.shape
    mean_X = X.mean(axis=0)
    X = X - mean_X

    if dim > num_data:
        # PCA compact trick
        M = np.dot(X, X.T)  # covariance matrix
        e, U = np.linalg.eigh(M)  # calculate eigenvalues and eigenvectors
        tmp = np.dot(X.T, U).T
        V = tmp[::-1]  # reverse since the last eigenvectors are the ones we want
        S = np.sqrt(e)[::-1]  # reverse since the last eigenvalues are in increasing order
        for i in range(V.shape[1]):
            V[:, i] /= S
    else:
        # normal PCA, SVD method
        U, S, V = np.linalg.svd(X)
        V = V[:num_data]  # only makes sense to return the first num_data

    return V, S, mean_X

img = color.rgb2gray(io.imread(r'D:\lenna.png'))
x, y, z = pca(img)
plt.imshow(x)
but the plot of the PCA output doesn't look like the original image at all.
As far as I know, PCA reduces the image dimensionality, but the result should still somewhat resemble the original image, only in lower detail. What's wrong with the code?
Well, nothing is wrong per se in your code, but you're not displaying the right thing, if I understand what you actually want to do!
What I would write for your problem is the following:
def pca(X, number_of_pcs):
    num_data, dim = X.shape
    mean_X = X.mean(axis=0)
    X = X - mean_X

    if dim > num_data:
        # PCA compact trick
        M = np.dot(X, X.T)  # covariance matrix
        e, U = np.linalg.eigh(M)  # calculate eigenvalues and eigenvectors
        tmp = np.dot(X.T, U).T
        V = tmp[::-1]  # reverse since the last eigenvectors are the ones we want
        S = np.sqrt(e)[::-1]  # reverse since the last eigenvalues are in increasing order
        for i in range(V.shape[1]):
            V[:, i] /= S
        return V, S, mean_X
    else:
        # normal PCA, SVD method
        U, S, V = np.linalg.svd(X, full_matrices=False)

        # reconstruct the image using U, S and V
        # otherwise you're just outputting the eigenvectors of X*X^T
        V = V.T
        S = np.diag(S)
        X_hat = np.dot(U[:, :number_of_pcs], np.dot(S[:number_of_pcs, :number_of_pcs], V[:, :number_of_pcs].T))
        return X_hat, S, mean_X
The change here lies in the fact that we want to reconstruct the image using a given number of eigenvectors (determined by number_of_pcs).
The thing to remember is that in np.linalg.svd, the columns of U are the eigenvectors of X.X^T.
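As a quick aside, that claim can be checked numerically (my own snippet; A is just an arbitrary random matrix):
import numpy as np

A = np.random.randn(50, 200)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
# U diagonalizes A.A^T with eigenvalues S**2, i.e. its columns are the eigenvectors
print(np.allclose(U @ np.diag(S**2) @ U.T, A @ A.T))   # True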
When doing that, we obtain the following results (displayed here using 1 and 10 principal components):
X_hat, S, mean_X = pca(img, 1)
plt.imshow(X_hat)
X_hat, S, mean_X = pca(img, 10)
plt.imshow(X_hat)
PS: note that the pictures aren't displayed in grayscale because of matplotlib.pyplot's default colormap, but this is a very minor issue here.
For this question, I am using the Wikipedia definition of matrix whitening: a whitening matrix W is applied as Y = W X so that the covariance matrix of Y is the identity (the definition image from the original question is omitted here).
From the definition, I expect the covariance matrix of Y to be the identity matrix. However, this is far from the truth!
Here is the reproduction:
import numpy as np
# random matrix
dim1 = 512 # dimensionality_of_features
dim2 = 100 # no_of_samples
X = np.random.rand(dim1, dim2)
# centering to have mean 0
X = X - np.mean(X, axis=1, keepdims=True)
# covariance of X
Xcov = np.dot(X, X.T) / (X.shape[1] - 1)
# SVD decomposition
# Eigenvecors and eigenvalues
Ec, wc, _ = np.linalg.svd(Xcov)
# get only the first positive ones (for numerical stability)
k_c = (wc > 1e-5).sum()
# Diagonal Matrix of eigenvalues
Dc = np.diag((wc[:k_c]+1e-6)**-0.5)
# E D ET should be the whitening matrix
W = Ec[:,:k_c].dot(Dc).dot(Ec[:,:k_c].T)
# SVD decomposition End
Y = W.dot(X)
# Now apply the same to the whitened X
Ycov = np.dot(Y, Y.T) / (Y.shape[1] - 1)
print(Ycov)
>> [[ 0.19935189 -0.00740203 -0.00152036 ... 0.00133161 -0.03035149
0.02638468] ...
It seems that it won't give me a unit diagonal matrix unless dim2 >> dim1.
If I take dim2 = 1, then I get a vector (although in the example I get an error due to division by zero), which by the Wiki definition would be incorrect?
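For reference, the shape dependence can be reproduced directly by rerunning the same steps with many more samples than dimensions (my own snippet, not part of the original question):
import numpy as np

for dim2 in (100, 10000):              # few vs. many samples, dim1 = 512
    X = np.random.rand(512, dim2)
    X = X - np.mean(X, axis=1, keepdims=True)
    Xcov = np.dot(X, X.T) / (X.shape[1] - 1)
    Ec, wc, _ = np.linalg.svd(Xcov)
    k_c = (wc > 1e-5).sum()
    Dc = np.diag((wc[:k_c] + 1e-6) ** -0.5)
    W = Ec[:, :k_c].dot(Dc).dot(Ec[:, :k_c].T)
    Y = W.dot(X)
    Ycov = np.dot(Y, Y.T) / (Y.shape[1] - 1)
    print(dim2, np.abs(Ycov - np.eye(512)).max())
# with dim2 = 100 the deviation from the identity is large (as observed above),
# with dim2 = 10000 it is tiny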
I am trying to compute the accelerations due to gravity for an n-body problem in 3-space (I'm using symplectic Euler).
I have position and velocity vectors for each time step, and am using the below (working) code to calculate accelerations and update velocity and position. Note that the accelerations are vectors in 3-space, not just magnitudes.
I would like to know if there's a more efficient way to compute this with numpy to avoid the loops.
import itertools
import numpy

def accelerations(positions, masses):
    '''Params:
    - positions: numpy array of size (n,3)
    - masses: numpy array of size (n,)
    Returns:
    - accelerations: numpy array of size (n,3), the acceleration vectors in 3-space
    '''
    n_bodies = len(masses)
    accelerations = numpy.zeros([n_bodies, 3])  # n_bodies * (x,y,z)

    # vectors from mass(i) to mass(j)
    D = numpy.zeros([n_bodies, n_bodies, 3])  # n_bodies * n_bodies * (x,y,z)
    for i, j in itertools.product(range(n_bodies), range(n_bodies)):
        D[i][j] = positions[j] - positions[i]

    # Acceleration due to gravitational force between each pair of bodies
    # (epsilon and gravitational_constant are assumed to be defined elsewhere)
    A = numpy.zeros((n_bodies, n_bodies, 3))
    for i, j in itertools.product(range(n_bodies), range(n_bodies)):
        if numpy.linalg.norm(D[i][j]) > epsilon:
            A[i][j] = gravitational_constant * masses[j] * D[i][j] \
                / numpy.linalg.norm(D[i][j])**3

    # Calculate net acceleration of each body (vectors in 3-space)
    accelerations = numpy.sum(A, axis=1)  # sum of accel vectors for each body, shape (n_bodies,3)
    return accelerations
Here is an optimized version using BLAS. BLAS has special routines for linear algebra on symmetric or Hermitian matrices. These use specialized, packed storage, keeping only the upper or lower triangle and leaving out the (redundant) mirrored entries. That way BLAS saves not only ~half the storage but also ~half the flops.
I've put in quite a few comments to make it readable.
import numpy as np
import itertools
from scipy.linalg.blas import zhpr, dspr2, zhpmv

def acc_vect(pos, mas):
    n = mas.size
    d2 = pos @ (-2 * pos.T)
    diag = -0.5 * np.einsum('ii->i', d2)
    d2 += diag + diag[:, None]
    np.einsum('ii->i', d2)[...] = 1
    return np.nansum((pos[:, None, :] - pos) * (mas[:, None] * d2**-1.5)[..., None], axis=0)

def acc_blas(pos, mas):
    n = mas.size
    # trick: use complex Hermitian to get the packed anti-symmetric
    # outer difference in the imaginary part of the zhpr answer
    # don't want to sum over dimensions yet, therefore must do them one-by-one
    trck = np.zeros((3, n * (n + 1) // 2), complex)
    for a, p in zip(trck, pos.T - 1j):
        zhpr(n, -2, p, a, 1, 0, 0, 1)
        # does a -> a + alpha x x^H
        # parameters: n -- matrix dimension
        #             alpha -- real scalar
        #             x -- complex vector
        #             ap -- packed Hermitian n x n matrix a
        #                   i.e. an n(n+1)/2 vector
        #             incx -- x stride
        #             offx -- x offset
        #             lower -- is storage of ap lower or upper
        #             overwrite_ap -- whether to change a inplace
    # as a by-product we get pos pos^T:
    ppT = trck.real.sum(0) + 6
    # now compute matrix of squared distances ...
    # ... using (A-B)^2 = A^2 + B^2 - 2AB
    # ... that and the outer sum X (+) X.T equals X ones^T + ones X^T
    dspr2(n, -0.5, ppT[np.r_[0, 2:n+1].cumsum()], np.ones((n,)), ppT,
          1, 0, 1, 0, 0, 1)
    # does a -> a + alpha x y^T + alpha y x^T in packed symmetric storage
    # scale anti-symmetric differences by distance^-3
    np.divide(trck.imag, ppT*np.sqrt(ppT), where=ppT.astype(bool),
              out=trck.imag)
    # it remains to scale by mass and sum
    # this can be done by matrix multiplication with the vector of masses ...
    # ... unfortunately because we need anti-symmetry we need to work
    # with Hermitian storage, i.e. complex numbers, even though the actual
    # computation is only real:
    out = np.zeros((3, n), complex)
    for a, o in zip(trck, out):
        zhpmv(n, 0.5, a, mas*-1j, 1, 0, 0, o, 1, 0, 0, 1)
        # multiplies packed Hermitian matrix by vector
    return out.real.T
def accelerations(positions, masses, epsilon=1e-6, gravitational_constant=1.0):
    '''Params:
    - positions: numpy array of size (n,3)
    - masses: numpy array of size (n,)
    '''
    n_bodies = len(masses)
    accelerations = np.zeros([n_bodies, 3])  # n_bodies * (x,y,z)

    # vectors from mass(i) to mass(j)
    D = np.zeros([n_bodies, n_bodies, 3])  # n_bodies * n_bodies * (x,y,z)
    for i, j in itertools.product(range(n_bodies), range(n_bodies)):
        D[i][j] = positions[j] - positions[i]

    # Acceleration due to gravitational force between each pair of bodies
    A = np.zeros((n_bodies, n_bodies, 3))
    for i, j in itertools.product(range(n_bodies), range(n_bodies)):
        if np.linalg.norm(D[i][j]) > epsilon:
            A[i][j] = gravitational_constant * masses[j] * D[i][j] \
                / np.linalg.norm(D[i][j])**3

    # Calculate net acceleration of each body
    accelerations = np.sum(A, axis=1)  # sum of accel vectors for each body
    return accelerations
from numpy.linalg import norm

def acc_pm(positions, masses, G=1):
    '''Params:
    - positions: numpy array of size (n,3)
    - masses: numpy array of size (n,)
    '''
    mass_matrix = masses.reshape((1, -1, 1)) * masses.reshape((-1, 1, 1))
    disps = positions.reshape((1, -1, 3)) - positions.reshape((-1, 1, 3))  # displacements
    dists = norm(disps, axis=2)
    dists[dists == 0] = 1  # Avoid divide by zero warnings
    forces = G * disps * mass_matrix / np.expand_dims(dists, 2)**3
    return forces.sum(axis=1) / masses.reshape(-1, 1)
n = 500
pos = np.random.random((n, 3))
mas = np.random.random((n,))
from timeit import timeit
print(f"loops: {timeit('accelerations(pos, mas)', globals=globals(), number=1)*1000:10.3f} ms")
print(f"pmende: {timeit('acc_pm(pos, mas)', globals=globals(), number=10)*100:10.3f} ms")
print(f"vectorized: {timeit('acc_vect(pos, mas)', globals=globals(), number=10)*100:10.3f} ms")
print(f"blas: {timeit('acc_blas(pos, mas)', globals=globals(), number=10)*100:10.3f} ms")
A = accelerations(pos, mas)
AV = acc_vect(pos, mas)
AB = acc_blas(pos, mas)
AP = acc_pm(pos, mas)
assert np.allclose(A, AV) and np.allclose(AB, AV) and np.allclose(AP, AV)
Sample run; comparing to OP, my pure numpy vectorization and @P Mende's.
loops: 3213.130 ms
pmende: 41.480 ms
vectorized: 43.860 ms
blas: 7.726 ms
We can see that:
1) P Mende is slightly better than I am at vectorizing;
2) blas is ~5 times as fast; please note that my blas is not very good, so with an optimized blas you may get even better results (numpy would also be expected to run faster on a better blas, though);
3) any of the answers is much faster than the loops.
A follow up to my comments on your original post:
from numpy.linalg import norm

def accelerations(positions, masses):
    '''Params:
    - positions: numpy array of size (n,3)
    - masses: numpy array of size (n,)
    '''
    # G (the gravitational constant) is assumed to be defined elsewhere
    mass_matrix = masses.reshape((1, -1, 1)) * masses.reshape((-1, 1, 1))
    disps = positions.reshape((1, -1, 3)) - positions.reshape((-1, 1, 3))  # displacements
    dists = norm(disps, axis=2)
    dists[dists == 0] = 1  # Avoid divide by zero warnings
    forces = G * disps * mass_matrix / np.expand_dims(dists, 2)**3
    return forces.sum(axis=1) / masses.reshape(-1, 1)
Some things to consider:
You only need half the distances; once you've calculated D[i][j], that's the same as -D[j][i].
You can do df2 = df.apply(lambda x: gravitational_constant/x**3).
You can generate a dataframe that records, for each pair of bodies, the product of their masses. You only have to do that once, and then you can pass it to accelerations every time you call it.
Then df.product(df2).product(mass_products).sum().div(masses) gives you the accelerations; a rough numpy sketch of the same precomputation idea follows below.
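For illustration, here is a rough plain-numpy sketch of that precomputation idea (my own sketch with hypothetical helper names, not the dataframe version described above):
import numpy as np

def make_mass_products(masses):
    # (n, n) matrix of m_i * m_j; this only needs to be computed once
    return masses[:, None] * masses[None, :]

def accelerations_cached(positions, masses, mass_products, G=1.0):
    disps = positions[None, :, :] - positions[:, None, :]   # disps[i, j] = r_j - r_i
    dists = np.linalg.norm(disps, axis=2)
    np.fill_diagonal(dists, 1.0)                             # avoid dividing by zero on the diagonal
    forces = G * mass_products[..., None] * disps / dists[..., None]**3
    return forces.sum(axis=1) / masses[:, None]              # a_i = (sum_j F_ij) / m_i

# example usage: precompute once, reuse at every time step
positions = np.random.random((100, 3))
masses = np.random.random(100)
mass_products = make_mass_products(masses)
acc = accelerations_cached(positions, masses, mass_products)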
I have a large sparse matrix (scipy's sparse.csr_matrix). The values are binary. For each row, I need to compute the Jaccard distance to every row in the same matrix. What's the most efficient way to do this? Even for a 10,000 x 10,000 matrix, my runtime takes minutes to finish.
Current solution:
def jaccard(a, b):
    intersection = float(len(set(a) & set(b)))
    union = float(len(set(a) | set(b)))
    return 1.0 - (intersection / union)

def regions(csr, p, epsilon):
    neighbors = []
    for index in range(len(csr.indptr) - 1):
        if jaccard(p, csr.indices[csr.indptr[index]:csr.indptr[index + 1]]) <= epsilon:
            neighbors.append(index)
    return neighbors

csr = scipy.sparse.csr_matrix("file")
regions(csr, 0.51)  # this is called for every row
Vectorization is relatively easy if you use matrix multiplication to calculate the set intersections and then the rule |union(a, b)| == |a| + |b| - |intersection(a, b)| to determine the unions:
# Not actually necessary for sparse matrices, but it is for
# dense matrices and ndarrays, if X.dtype is integer.
from __future__ import division

def pairwise_jaccard(X):
    """Computes the Jaccard distance between the rows of `X`.
    """
    X = X.astype(bool).astype(int)

    intrsct = X.dot(X.T)
    row_sums = intrsct.diagonal()
    unions = row_sums[:, None] + row_sums - intrsct
    dist = 1.0 - intrsct / unions
    return dist
Note the cast to bool and then int, because the dtype of X must be large enough to accumulate twice the maximum row sum and the entries of X must be either zero or one. The downside of this code is that it's heavy on RAM, because unions and dist are dense matrices.
If you're only interested in distances smaller than some cut-off epsilon, the code can be tuned for sparse matrices:
from scipy.sparse import csr_matrix

def pairwise_jaccard_sparse(csr, epsilon):
    """Computes the Jaccard distance between the rows of `csr`,
    smaller than the cut-off distance `epsilon`.
    """
    assert(0 < epsilon < 1)
    csr = csr_matrix(csr).astype(bool).astype(int)

    csr_rownnz = csr.getnnz(axis=1)
    intrsct = csr.dot(csr.T)

    nnz_i = np.repeat(csr_rownnz, intrsct.getnnz(axis=1))
    unions = nnz_i + csr_rownnz[intrsct.indices] - intrsct.data
    dists = 1.0 - intrsct.data / unions

    mask = (dists > 0) & (dists <= epsilon)
    data = dists[mask]
    indices = intrsct.indices[mask]

    rownnz = np.add.reduceat(mask, intrsct.indptr[:-1])
    indptr = np.r_[0, np.cumsum(rownnz)]

    out = csr_matrix((data, indices, indptr), intrsct.shape)
    return out
If this still takes too much RAM, you could try to vectorize over one dimension and Python-loop over the other.
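A quick sanity check of pairwise_jaccard against the set-based jaccard from the question (my own snippet; a tiny dense example just to illustrate):
import numpy as np

X = np.array([[1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0]])
D = pairwise_jaccard(X)
# compare one entry with the set-based version from the question
a = set(np.flatnonzero(X[0]))
b = set(np.flatnonzero(X[1]))
print(D[0, 1], 1.0 - len(a & b) / len(a | b))   # both should be 0.75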
To add to the accepted answer: I had use for a weighted version of the above method which is simply implemented as:
def pairwise_jaccard_sparse_weighted(csr, epsilon, weight):
    csr = scipy.sparse.csr_matrix(csr).astype(bool).astype(int)
    csr_w = csr * scipy.sparse.diags(weight)
    csr_rowsum = numpy.array(csr_w.sum(axis=1)).flatten()
    intrsct = csr.dot(csr_w.T)

    rowsum_i = numpy.repeat(csr_rowsum, intrsct.getnnz(axis=1))
    unions = rowsum_i + csr_rowsum[intrsct.indices] - intrsct.data
    dists = 1.0 - 1.0 * intrsct.data / unions

    mask = (dists > 0) & (dists <= epsilon)
    data = dists[mask]
    indices = intrsct.indices[mask]

    rownnz = numpy.add.reduceat(mask, intrsct.indptr[:-1])
    indptr = numpy.r_[0, numpy.cumsum(rownnz)]

    out = scipy.sparse.csr_matrix((data, indices, indptr), intrsct.shape)
    return out
I doubt this is the most efficient implementation, but it's a damn sight quicker than the dense implementation in scipy.spatial.distance.jaccard.
Here is a solution that has a scikit-learn-like API.
def pairwise_sparse_jaccard_distance(X, Y=None):
    """
    Computes the Jaccard distance between two sparse matrices or between all pairs in
    one sparse matrix.

    Args:
        X (scipy.sparse.csr_matrix): A sparse matrix.
        Y (scipy.sparse.csr_matrix, optional): A sparse matrix.

    Returns:
        numpy.ndarray: A distance matrix.
    """
    if Y is None:
        Y = X

    assert X.shape[1] == Y.shape[1]

    X = X.astype(bool).astype(int)
    Y = Y.astype(bool).astype(int)
    intersect = X.dot(Y.T)
    x_sum = X.sum(axis=1).A1
    y_sum = Y.sum(axis=1).A1
    xx, yy = np.meshgrid(x_sum, y_sum)
    union = ((xx + yy).T - intersect)
    return (1 - intersect / union).A
Here is some testing and benchmarking:
>>> import timeit
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.metrics import pairwise_distances
>>> X = csr_matrix(np.random.choice(a=[False, True], size=(10000, 1000), p=[0.9, 0.1]))
>>> Y = csr_matrix(np.random.choice(a=[False, True], size=(1000, 1000), p=[0.9, 0.1]))
Asserting that all results are approximately equivalent
>>> custom_jaccard_distance = pairwise_sparse_jaccard_distance(X, Y)
>>> sklearn_jaccard_distance = pairwise_distances(X.todense(), Y.todense(), "jaccard")
>>> np.allclose(custom_jaccard_distance, sklearn_jaccard_distance)
True
Benchmarking runtime (from a Jupyter Notebook)
>>> %timeit pairwise_sparse_jaccard_distance(X, Y)
795 ms ± 58.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit 1 - pairwise_distances(X.todense(), Y.todense(), "jaccard")
14.7 s ± 694 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)