I'm working in Anaconda with the following code to compute the correlation coefficient between two matrices.
The first matrix is built by reading 16 files, each containing the upper part of a matrix.
The sum is used to get the average, which I then want to compare with the result from another file.
```python
import numpy as np
import pandas as pd

for i in range(0, 16):
    i = i + 5                                  # file indices start at 5
    file = pd.read_csv(path, header=None)      # `path` is the path of the current file
    file = file.fillna(0)
    matrix = np.matrix(file)
    matrix = np.flip(matrix, 1)                # mirror the columns
    b = np.copy(matrix)
    b = np.swapaxes(b, 1, 0)                   # transpose
    np.fill_diagonal(b, 0)                     # don't count the diagonal twice
    c = matrix + b                             # full symmetric matrix
    sum = c.sum(0) / c.shape[0]                # column averages, shape (1, 48); note this shadows the built-in sum
    sum = pd.DataFrame(sum)

file2 = pd.read_csv(path, header=None)         # the second file, read once
file2 = file2.drop(file2.columns[48], axis=1)  # drop column 48 -> shape (16, 48)
```
How do I get the correlation coefficient between the two files, given that sum is a (1, 48) matrix and file2 is a (16, 48) matrix?
I did a bit of research and hopefully the material below can help:
numpy.corrcoef
numpy.corrcoef(x, y=None, rowvar=True, bias=<no value>, ddof=<no value>)
Return Pearson product-moment correlation coefficients.
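A minimal sketch of how np.corrcoef could be applied to the shapes in the question (the names avg/file2 and the random data below are placeholders, not the actual files):

```python
import numpy as np

avg = np.random.rand(1, 48)     # stand-in for `sum`, shape (1, 48)
file2 = np.random.rand(16, 48)  # stand-in for the second file, shape (16, 48)

# With rowvar=True (the default) each row is treated as one variable, so
# stacking avg on top of file2 yields a (17, 17) correlation matrix; the
# first row (after its leading 1.0) holds the correlation of avg with each
# of the 16 rows of file2.
corr = np.corrcoef(avg, file2)
print(corr[0, 1:])              # 16 correlation coefficients
```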
Computing the correlation coefficient between two multi-dimensional arrays
Correlation (default 'valid' case) between two 2D arrays:
You can simply use matrix-multiplication np.dot like so -
out = np.dot(arr_one,arr_two.T)
Correlation with the default "valid" case between each pairwise row combinations (row1,row2) of the two input arrays would correspond to multiplication result at each (row1,row2) position.
Please clarify your question in case I misunderstood.
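To illustrate the claim above, here is a small hedged check (made-up random arrays) that each entry of np.dot(arr_one, arr_two.T) equals the 'valid' 1-D correlation of the corresponding pair of rows:

```python
import numpy as np

arr_one = np.random.rand(3, 8)
arr_two = np.random.rand(4, 8)

out = np.dot(arr_one, arr_two.T)  # shape (3, 4)

# entry (1, 2) equals the 'valid' correlation of row 1 of arr_one with row 2 of arr_two
print(np.isclose(out[1, 2], np.correlate(arr_one[1], arr_two[2], 'valid')[0]))  # True
```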
Related
I've got a function f(x,y) that takes two 1-d arrays and returns a scalar.
If I have a 2d matrix of shape (M,N), how do I efficiently apply the function pairwise across the 0 axis to end up with a square symmetric result of shape (M, M)?
Edit:
I'm trying to calculate pairwise correlation of an array of 1d arrays:
import numpy as np

def f(x, y):
    sigma_x_y = np.nanstd(x) * np.nanstd(y)
    covariance = np.nanmean((x - np.nanmean(x)) * (y - np.nanmean(y)))
    return covariance / sigma_x_y
I think this is what you are looking for. The equations are similar to your function f(x, y):
x_m = x - np.nanmean(x, axis=1)[:, None]   # row-center x
y_m = y - np.nanmean(y, axis=1)[:, None]   # row-center y
X = np.nansum(x_m**2, axis=1)              # per-row sum of squares of x
Y = np.nansum(y_m**2, axis=1)              # per-row sum of squares of y
corr = np.dot(x_m, y_m.T) / np.sqrt(np.dot(X[:, None], Y[None]))
EDIT: If you wish to ignore NaN values when calculating the correlation of two rows, simply replace the last line with:
corr = np.dot(np.nan_to_num(x_m), np.nan_to_num(y_m).T) / np.sqrt(np.dot(X[:, None], Y[None]))
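As a usage sketch (with a made-up array a standing in for the (M, N) input), the same recipe applied to one array against itself yields the (M, M) symmetric pairwise-correlation matrix the question asks for:

```python
import numpy as np

a = np.random.rand(5, 100)   # hypothetical (M, N) array of 1-d signals
a[0, 3] = np.nan             # with the odd NaN

x_m = a - np.nanmean(a, axis=1)[:, None]
X = np.nansum(x_m**2, axis=1)
corr = np.dot(np.nan_to_num(x_m), np.nan_to_num(x_m).T) / np.sqrt(np.dot(X[:, None], X[None]))

print(corr.shape)                 # (5, 5)
print(np.allclose(corr, corr.T))  # True: symmetric
```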
I have a 1-D vector of dimension N in TensorFlow;
how do I construct the sum of the pairwise squared differences?
Example
Input Vector
[1,2,3]
Output
6
Computed As
(1-2)^2+(1-3)^2+(2-3)^2.
If I have an N-dimensional input vector l, the output should be sum_{i<j} (l_i - l_j)^2.
Added question: if I have a 2d matrix and want to perform the same process for each row of the matrix, and then average the results from all the rows, how can I do it? Many thanks!
For the pairwise differences, subtract the transpose of the input from the input and take only the upper triangular part, like:
pair_diff = tf.matrix_band_part(a[..., None] - tf.transpose(a[..., None]), 0, -1)
Then you can square and sum the differences.
Code:
import tensorflow as tf

a = tf.constant([1, 2, 3])
pair_diff = tf.matrix_band_part(a[..., None] - tf.transpose(a[..., None]), 0, -1)
output = tf.reduce_sum(tf.square(pair_diff))

with tf.Session() as sess:
    print(sess.run(output))  # 6
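For the added 2-D question, here is an untested sketch along the same lines, using the same TF 1.x API as the answer above; the matrix m is a made-up example. Each row gets its own pairwise sum, and the per-row results are then averaged:

```python
import tensorflow as tf

m = tf.constant([[1., 2., 3.],
                 [4., 6., 8.]])            # hypothetical (rows, N) matrix

# (rows, N, N) tensor of differences m[r, i] - m[r, j] for every row r
diffs = m[:, :, None] - m[:, None, :]
upper = tf.matrix_band_part(diffs, 0, -1)  # keep each row's upper triangle
per_row = tf.reduce_sum(tf.square(upper), axis=[1, 2])  # one sum per row
avg = tf.reduce_mean(per_row)              # average over all rows

with tf.Session() as sess:
    print(sess.run(per_row))  # [ 6. 24.]
    print(sess.run(avg))      # 15.0
```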
I have a (large) 4D array, consisting of the 5 coefficients in a given basis for a matrix field. Given the 5 basis matrices, I want to efficiently calculate the matrix field.
The coefficient field c[x,y,z,i] is the value of the i-th coefficient at position x,y,z,
the matrix field M[x,y,z,a,b] is the (3,3) matrix at position x,y,z,
and T_1,...,T_5 are the (3,3) basis matrices.
I could loop over each position in space:
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1]...T_5[:,:]*c[x,y,z,4]
But this is very inefficient. My attempts at using np.multiply,np.sum result in broadcasting errors due to the ambiguity of the desired product being a field of 3x3 matrices.
Keep in mind that to numpy, these 4 and 5d arrays are just that, not 3d arrays containing 2d matrices, etc.
Let's try to write your calculation in a way that clarifies dimensions:
M[x,y,z] = T_1*c[x,y,z,0] + T_2*c[x,y,z,1]...T_5*c[x,y,z,4]
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1]...T_5[:,:]*c[x,y,z,4]
c[x,y,z,i] is a coefficient, right? So M is a weighted sum of the T_n arrays?
One way of expressing this is:
T = np.stack([T_1, T_2, ...T_5], axis=0) # 3d (nab)
M = np.einsum('nab,xyzn->xyzab', T, c)
We could alternatively stack T_i on a new last axis
T = np.stack([T_1, T_2 ...T_5], axis=2) # (abn)
M = np.einsum('abn,xyzn->xyzab', T, c)
or as broadcasted multiplication plus sum:
M = (T[None,None,None,:,:,:] * c[:,:,:,None,None,:]).sum(axis=-1)
I'm writing this code without testing, so there may be errors, but I think the basic outline is right.
It could also be written as a dot, if I can put the n dimension last in one argument, and 2nd to the last in the other. Or with tensordot. But there's less control over broadcasting of the other dimensions.
For test calculations you could also reshape these arrays so that x,y,z are rolled into one axis, and a,b into another, e.g.
M[xyz,:] = T_n[ab]*c[xyz,n] # etc
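As a quick, hedged check (small made-up sizes, random data), the einsum form and the tensordot variant mentioned above can be compared against the explicit loop:

```python
import numpy as np

nx, ny, nz = 4, 3, 2
T = np.random.rand(5, 3, 3)        # the five (3, 3) basis matrices stacked on axis 0 (n, a, b)
c = np.random.rand(nx, ny, nz, 5)  # coefficient field (x, y, z, n)

M = np.einsum('nab,xyzn->xyzab', T, c)
M_td = np.tensordot(c, T, axes=([3], [0]))   # contract c's n axis with T's n axis

# brute-force loop for comparison
M_loop = np.zeros((nx, ny, nz, 3, 3))
for x in range(nx):
    for y in range(ny):
        for z in range(nz):
            for n in range(5):
                M_loop[x, y, z] += T[n] * c[x, y, z, n]

print(np.allclose(M, M_loop), np.allclose(M_td, M_loop))  # True True
```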
I am trying to understand this optimized code for finding the cosine similarity between the users of a ratings matrix.
import numpy as np

def fast_similarity(ratings, epsilon=1e-9):
    # epsilon -> small number for handling divide-by-zero errors
    sim = ratings.T.dot(ratings) + epsilon
    norms = np.array([np.sqrt(np.diagonal(sim))])
    return sim / norms / norms.T
If ratings is a users x items matrix, e.g.

ratings = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]

(rows = users, columns = items),
then norms will be equal to [1^2 + 5^2 + 9^2],
but why do we write sim / norms / norms.T to calculate cosine similarity?
Any help is appreciated.
Going through the code we have that:

sim = ratings^T . ratings + epsilon

and this means that on the diagonal of the sim matrix we have the result of multiplying each column with itself, i.e. the squared norm of each column.
You can give it a try if you want using a simple matrix, and you can easily check that this Gram matrix (that's how this matrix product is named) has this property.
Now the code defines norms, which is nothing but an array taking the diagonal of our Gram matrix and applying a sqrt to each of its elements.
This gives us an array containing the norm value of each column:

norms[i] = sqrt(sim[i, i]) = ||column_i||

So basically the norms vector contains the norm value of each column of the result matrix.
Once we have all this data we can evaluate the cosine similarity between those users; cosine similarity is evaluated like:

cos(u, v) = (u . v) / (||u|| * ||v||)

Note that:

sim[i, j] = column_i . column_j

So our similarity is going to be:

sim[i, j] / (norms[i] * norms[j])

Substituting the terms with our code variables, this explains why you have this line of code:
return sim / norms / norms.T
EDIT:
Since it seems that I was not clear: every time I talk about matrix multiplication in this answer, I am referring to the DOT PRODUCT of two matrices.
This means that where I write A*B, it is developed and solved as A.T * B (which is what the code does with ratings.T.dot(ratings)).
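To sanity-check the explanation, here is a small sketch comparing fast_similarity against a direct column-wise cosine computation on the example matrix (the column pair chosen is arbitrary):

```python
import numpy as np

def fast_similarity(ratings, epsilon=1e-9):
    sim = ratings.T.dot(ratings) + epsilon
    norms = np.array([np.sqrt(np.diagonal(sim))])
    return sim / norms / norms.T

ratings = np.array([[1., 2., 3.],
                    [4., 5., 6.],
                    [7., 8., 9.]])

# direct cosine similarity between columns 0 and 2
u, v = ratings[:, 0], ratings[:, 2]
direct = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(np.isclose(fast_similarity(ratings)[0, 2], direct))  # True
```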
My code:
from numpy import *

def pca(orig_data):
    data = array(orig_data)
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    u, s, v = linalg.svd(data)
    print(s)  # should be s**2 instead!
    print(v)

def load_iris(path):
    lines = []
    with open(path) as input_file:
        lines = input_file.readlines()
    data = []
    for line in lines:
        cur_line = line.rstrip().split(',')
        cur_line = cur_line[:-1]
        cur_line = [float(elem) for elem in cur_line]
        data.append(array(cur_line))
    return array(data)

if __name__ == '__main__':
    data = load_iris('iris.data')
    pca(data)
The iris dataset: http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
Output:
[ 20.89551896 11.75513248 4.7013819 1.75816839]
[[ 0.52237162 -0.26335492 0.58125401 0.56561105]
[-0.37231836 -0.92555649 -0.02109478 -0.06541577]
[ 0.72101681 -0.24203288 -0.14089226 -0.6338014 ]
[ 0.26199559 -0.12413481 -0.80115427 0.52354627]]
Desired Output:
Eigenvalues - [2.9108 0.9212 0.1474 0.0206]
Principal Components - Same as I got but transposed so okay I guess
Also, what's with the output of the linalg.eig function? According to the PCA description on Wikipedia, I'm supposed to do this:
cov_mat = cov(orig_data)
val, vec = linalg.eig(cov_mat)
print val
But it doesn't really match the output in the tutorials I found online. Plus, if I have 4 dimensions, I thought I should have 4 eigenvalues and not 150 like the eig gives me. Am I doing something wrong?
edit: I've noticed that the values differ by a factor of 150, which is the number of elements in the dataset. Also, the eigenvalues are supposed to add up to the number of dimensions, in this case 4. What I don't understand is why this difference is happening. If I simply divide the eigenvalues by len(data) I get the result I want, but I don't understand why. Either way the proportion of the eigenvalues isn't altered, but they are important to me, so I'd like to understand what's going on.
You decomposed the wrong matrix.
Principal Component Analysis requires manipulating the eigenvectors/eigenvalues
of the covariance matrix, not of the data itself. The covariance matrix, built from a data matrix of n observations on m variables, is an m x m matrix; its close relative, the correlation matrix (which corrcoef returns below), additionally has ones along the main diagonal.
You can indeed use the cov function, but you need further manipulation of your data. It's probably a little easier to use a similar function, corrcoef:
import numpy as NP
import numpy.linalg as LA

# a simulated data set with 8 data points, each point having five features
# (cast to float so the in-place mean-centering below works)
data = NP.random.randint(0, 10, 40).reshape(8, 5).astype(float)

# usually a good idea to mean-center your data first:
data -= NP.mean(data, axis=0)

# calculate the correlation matrix (corrcoef normalizes the covariance)
C = NP.corrcoef(data, rowvar=0)
# returns an m x m matrix, here a 5 x 5 matrix

# now get the eigenvalues/eigenvectors of C:
evals, evecs = LA.eig(C)
To get the eigenvectors/eigenvalues, I did not decompose the covariance matrix using SVD,
though you certainly can. My preference is to calculate them using eig in NumPy's (or SciPy's)
LA module--it is a little easier to work with than svd: the return values are the eigenvectors
and eigenvalues themselves, and nothing else. By contrast, as you know, svd doesn't return these directly.
Granted, the SVD function will decompose any matrix, not just square ones (to which the eig function is limited); however, when doing PCA you'll always have a square matrix to decompose,
regardless of the form your data is in. This is because the matrix you decompose in PCA is a covariance (or correlation) matrix, which by definition is always square: its rows and columns
are indexed by the same set of variables, and each cell is the covariance of a pair of those variables. The ones down the main diagonal of the correlation matrix reflect the fact that every
variable is perfectly correlated with itself (the covariance matrix instead carries each variable's variance there).
The left singular vectors returned by SVD(A) are the eigenvectors of AA^T.
The covariance matrix of a (mean-centered) dataset A with N observations is 1/(N-1) * AA^T.
Now, when you do PCA using the SVD, you have to divide the squared singular values by (N-1) (or by N, matching the normalization you used when standardizing) to get the eigenvalues of the covariance matrix with the correct scale.
In your case N = 150 and you haven't done this division, hence the discrepancy.
This is explained in detail here
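A hedged sketch of this scaling with stand-in data (random numbers in place of the iris measurements): the squared singular values of the standardized data, divided by N, match the eigenvalues of the covariance matrix computed with the same normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((150, 4))                   # stand-in for the 150 x 4 iris data
data = (data - data.mean(axis=0)) / data.std(axis=0)   # same standardization as the question (ddof=0)

u, s, v = np.linalg.svd(data, full_matrices=False)
evals = np.linalg.eigvalsh(np.cov(data, rowvar=False, ddof=0))

print(np.allclose(np.sort(s**2 / len(data)), np.sort(evals)))  # True
print((s**2 / len(data)).sum())                                # ~4.0, the number of dimensions
```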
(Can you ask one question, please? Or at least list your questions separately. Your post reads like a stream of consciousness because you are not asking one single question.)
You probably used cov incorrectly by not transposing the matrix first. If cov_mat is 4-by-4, then eig will produce four eigenvalues and four eigenvectors.
Note how SVD and PCA, while related, are not exactly the same. Let X be a 4-by-150 matrix of observations where each 4-element column is a single observation. Then, the following are equivalent:
a. the left singular vectors of X,
b. the principal components of X,
c. the eigenvectors of X X^T.
Also, the eigenvalues of X X^T are equal to the square of the singular values of X. To see all this, let X have the SVD X = QSV^T, where S is a diagonal matrix of singular values. Then consider the eigendecomposition D = Q^T X X^T Q, where D is a diagonal matrix of eigenvalues. Replace X with its SVD, and see what happens.
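A quick numerical check of these equivalences, with a made-up 4-by-150 observation matrix standing in for X:

```python
import numpy as np

X = np.random.rand(4, 150)             # 4 features, 150 observations in columns
X = X - X.mean(axis=1, keepdims=True)  # center each feature

Q, s, Vt = np.linalg.svd(X, full_matrices=False)
evals, evecs = np.linalg.eigh(X @ X.T)

# eigenvalues of X X^T equal the squared singular values of X
print(np.allclose(np.sort(evals), np.sort(s**2)))  # True

# each eigenvector of X X^T matches a left singular vector of X up to sign
# (random data gives distinct singular values, so the matching is one-to-one)
overlap = np.abs(evecs.T @ Q)
print(np.allclose(overlap.max(axis=1), 1.0))  # True
```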
Question already addressed: Principal component analysis in Python