Matlab-Python translation error - python

Matlab Code:
AP(queryIdx) = diff([0;recall]')*prec
My python code:
AP[queryIdx] = np.dot(np.diff(np.concatenate(([[0]], recall), axis=0).transpose()),prec)
Variables (checked, and I am quite sure they are equivalent in Python and in MATLAB):
recall: 1000x1 np array*
prec: 1000x1 np array
* prints out as [[.], ..., [.]]
Results:
Matlab: .1011
Python: 0.05263158
The only cause I can think of outside of the code is that Python uses more precision, but I doubt that would make such a large difference.
*Edit: There was a problem with my prec variable. The above code worked.

That code looks a bit messy. Try replacing it with this:
AP[queryIdx] = np.dot(np.diff(np.hstack([0, recall.ravel()])), prec.ravel())
In your post, you mentioned that you have a 1000 x 1 array for both recall and prec. I interpret this as a 2D array with a singleton second dimension, so you need to convert it back to a 1D array using ravel.
np.hstack horizontally stacks 1D arrays together, so this prepends a 0, then applies the diff operator, and then performs the dot product with prec.
One common gotcha that MATLAB coders have with numpy is the representation of 1D arrays. There is no such thing as the transpose of a 1D array: it is neither a row vector nor a column vector, and transposing it is a no-op. If you explicitly want a column vector, you need to include an additional singleton second dimension. Something like this:
r = v[:][None].T
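For instance, here is a minimal sketch (assuming a small example vector v) showing that transposing a 1D array does nothing, while adding the singleton dimension produces a proper column vector:
import numpy as np
v = np.array([1, 2, 3])       # 1D array, shape (3,)
print(v.T.shape)              # (3,)  -- transposing a 1D array is a no-op
print(v[:][None].T.shape)     # (3, 1) -- explicit column vector, as above
print(v[:, None].shape)       # (3, 1) -- equivalent, more common spelling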
In any case, let's verify the results:
MATLAB
>> recall = (1:1000).';
>> prec = (1000:-1:1).';
>> diff([0; recall].')*prec
ans =
500500
Python (IPython)
In [1]: import numpy as np
In [2]: recall = np.arange(1,1001)
In [3]: prec = np.arange(1000,0,-1)
In [4]: np.dot(np.diff(np.hstack([0, recall.ravel()])), prec.ravel())
Out[4]: 500500

Related

How to write a function that DIRECTLY outputs a 2D Numpy array from two 1D arrays?

I created two numpy 1D arrays
x1 = np.linspace(0, 1, 5)
x2 = np.linspace(0, 10, 5)
I wrote a function
def myfoo(x1, x2):
    return x1**2 + x1*x2 + x2**2
To get a 2D numpy array, I use the following code:
y = np.empty((x1.size, x2.size))
for a in range(0, x2.size):
    y[a] = myfoo(x1, x2[a])
I would like to know whether it is possible to write a function that outputs this 2D array DIRECTLY. I simply wonder if it is possible to write y = myfoo2(x1, x2) instead of the three lines of code above.
I know I can insert these lines into the function as suggested in the comments. But I wonder whether Numpy or Python has something (a function, an operator, ...) like the mathematical dyadic product of two vectors (i.e. from two 1D vectors of sizes m and n, this operation gives a matrix of size m x n).
Thanks for any answer.
Use broadcasting: myfoo(x1[:, None], x2). The added axis makes x1[:, None] * x2 (and every other operation inside myfoo) broadcast to a (5, 5) array; see the sketch below.
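A minimal sketch of that broadcasting approach, reusing myfoo and the arrays exactly as defined in the question:
import numpy as np

def myfoo(x1, x2):
    return x1**2 + x1*x2 + x2**2

x1 = np.linspace(0, 1, 5)
x2 = np.linspace(0, 10, 5)

y = myfoo(x1[:, None], x2)   # shape (5, 5), no explicit loop
print(y.shape)               # (5, 5)
Note the orientation: here rows correspond to x1 and columns to x2, which is the transpose of the loop version above (where each row corresponds to one value of x2).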

Why do multiplication functions of scipy sparse and numpy arrays give different results?

I have two matrices in Python 2.7: one dense matrix A_dense and another sparse matrix A_sparse. I am interested in computing an element-wise multiplication followed by a sum. There are two ways to do it: use numpy's multiplication or scipy's sparse multiplication. I expect them to give exactly the same result, differing only in execution time. But I find that they give different results for certain matrix sizes.
import numpy as np
from scipy import sparse
L=2000
np.random.seed(2)
rand_x=np.random.rand(L)
A_sparse_init=np.diag(rand_x, -1)+np.diag(rand_x, 1)
A_sparse=sparse.csr_matrix(A_sparse_init)
A_dense=np.random.rand(L+1,L+1)
print np.sum(A_sparse.multiply(A_dense))-np.sum(np.multiply(A_dense[A_sparse.nonzero()], A_sparse.data))
Output:
1.1368683772161603e-13
If I choose L=2001, then output is:
0.0
To check the size dependence of the difference between the two multiplication methods, I wrote:
L=100
np.random.seed(2)
N_loop=100
multiply_diff_arr=np.zeros(N_loop)
for i in xrange(N_loop):
    rand_x = np.random.rand(L)
    A_sparse_init = np.diag(rand_x, -1) + np.diag(rand_x, 1)
    A_sparse = sparse.csr_matrix(A_sparse_init)
    A_dense = np.random.rand(L+1, L+1)
    multiply_diff_arr[i] = np.sum(A_sparse.multiply(A_dense)) - np.sum(np.multiply(A_dense[A_sparse.nonzero()], A_sparse.data))
    L += 1
I got the following plot:
Can anyone help me understand what's happening? Don't we expect the difference between the two methods to be at most 1e-18 rather than 1e-13?
I don't have a full answer, but this might help find the answer:
Under the hood, scipy.sparse will convert to coo format and do this:
ret = self.tocoo()
if self.shape == other.shape:
    data = np.multiply(ret.data, other[ret.row, ret.col])
The question is then why these two operations give different results:
ret = A_sparse.tocoo()
c = np.multiply(ret.data, A_dense[ret.row, ret.col])
ret.data = c.view(type=np.ndarray)
c.sum() - ret.sum()
-1.1368683772161603e-13
Edit:
The difference stems from different defaults on which axis to add.reduce first.
E.g.:
A_sparse.multiply(A_dense).sum(axis=1).sum()
A_sparse.multiply(A_dense).sum(axis=0).sum()
Numpy defaults to axis 0 first.
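As a generic illustration (with arbitrary random data, not the matrices from the question) that the reduction order alone can produce differences like this, summing the same dense array along different axes first usually does not give bit-identical results:
import numpy as np
np.random.seed(0)
a = np.random.rand(2000, 2000)
# mathematically identical sums, different floating-point evaluation order
print(a.sum(axis=0).sum() - a.sum(axis=1).sum())   # usually a tiny nonzero value, purely from rounding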

How to retrieve the dimensions of a numpy.array

I am very new to learning Python and I am trying to scale an n x m matrix stored as a numpy array.
The question: if a matrix is given as input as an np.array and I don't know how big it is, how can I determine the size m? Are there certain features or tricks in Python that can be used for this?
import numpy as np

def scaleArray(arr: np.array):
    ...  # how can I get the number of columns of arr here?

arrayB = np.array([[1, 2, 4],
                   [3, 4, 5],
                   [2, 1, 0],
                   [0, 1, 0]])

scaleArray(arrayB)
This arrayB is just an example.
Expected output :
3
arr.shape is what you are looking for; it gives you the dimensions of the N-dimensional array.
In your case, you want arr.shape[1].
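A quick check using the arrayB from the question:
import numpy as np
arrayB = np.array([[1, 2, 4],
                   [3, 4, 5],
                   [2, 1, 0],
                   [0, 1, 0]])
print(arrayB.shape)      # (4, 3) -> (number of rows, number of columns)
print(arrayB.shape[1])   # 3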

Matlab to Python numpy indexing and multiplication issue

I have the following line of code in MATLAB which I am trying to convert to Python numpy:
pred = traindata(:,2:257)*beta;
In Python, I have:
pred = traindata[ : , 1:257]*beta
beta is a 256 x 1 array.
In MATLAB,
size(pred) = 1389 x 1
But in Python,
pred.shape = (1389L, 256L)
So I found out that multiplying by the beta array is what produces the difference between the two results.
How do I write the original Python line, so that the size of pred is 1389 x 1, like it is in MATLAB when I multiply by my beta array?
I suspect that beta is in fact a 1D numpy array. In numpy, 1D arrays are neither row nor column vectors, whereas MATLAB clearly makes this distinction; they are simply 1D arrays with no notion of orientation. If needed, you can manually introduce a new singleton dimension to the beta vector to facilitate the multiplication. On top of this, the * operator performs element-wise multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use numpy's dot function.
Therefore, you must do something like this:
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
beta[:,None] will create a 2D numpy array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on beta, then you don't need to introduce the new singleton dimension. Just use dot normally:
pred = np.dot(traindata[:, 1:257], beta)
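As a quick sanity check with random stand-in data of the shapes described in the question (the real traindata and beta are not available here, so the values are assumptions):
import numpy as np
traindata = np.random.rand(1389, 257)   # hypothetical stand-in for the real data
beta = np.random.rand(256)              # 1D array, as suspected above
pred = np.dot(traindata[:, 1:257], beta[:, None])
print(pred.shape)                       # (1389, 1), matching the MATLAB result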

How to generate a number of random vectors starting from a given one

I have an array of values and would like to create a matrix from that, where each row is my starting point vector multiplied by a sample from a (normal) distribution.
The number of rows of this matrix will then depend on the number of samples I want.
%pylab
my_vec = array([1,2,3])
my_rand_vec = my_vec*randn(100)
The last command does not work because the array shapes do not match.
I could think of using a for loop, but I am trying to leverage array operations.
Try this
my_rand_vec = my_vec[None,:]*randn(100)[:,None]
For small numbers I get for example
import numpy as np
my_vec = np.array([1,2,3])
my_rand_vec = my_vec[None,:]*np.random.randn(5)[:,None]
my_rand_vec
# array([[ 0.45422416, 0.90844831, 1.36267247],
# [-0.80639766, -1.61279531, -2.41919297],
# [ 0.34203295, 0.6840659 , 1.02609885],
# [-0.55246431, -1.10492863, -1.65739294],
# [-0.83023829, -1.66047658, -2.49071486]])
Your attempt my_vec*randn(100) does not work because * corresponds to element-wise multiplication, which only works if both arrays have identical shapes (or shapes that can be broadcast together).
What you have to do is add an additional dimension using [None,:] and [:,None] so that numpy's broadcasting works.
As a side note, I would recommend not using pylab. Instead, use explicit imports such as import numpy as np to include modules, as pointed out here.
This is the outer product of the two vectors:
my_rand_vec = numpy.outer(randn(100), my_vec)
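For reference, np.outer gives the same result as the broadcasting expression from the previous answer; a short sketch assuming the same my_vec:
import numpy as np
my_vec = np.array([1, 2, 3])
samples = np.random.randn(100)
via_outer = np.outer(samples, my_vec)
via_broadcast = my_vec[None, :] * samples[:, None]
print(np.allclose(via_outer, via_broadcast))   # True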
You can pass the dimensions of the array you require to numpy.random.randn:
my_rand_vec = my_vec*np.random.randn(100,3)
To multiply each vector by the same random number, you need to add an extra axis:
my_rand_vec = my_vec*np.random.randn(100)[:,np.newaxis]
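A small sketch contrasting the two variants in this answer; both produce a (100, 3) array, but only the second scales each whole row by a single random draw, which is what the question asks for:
import numpy as np
my_vec = np.array([1, 2, 3])
a = my_vec * np.random.randn(100, 3)               # three independent draws per row
b = my_vec * np.random.randn(100)[:, np.newaxis]   # one draw per row, scaling the whole vector
print(a.shape, b.shape)                            # (100, 3) (100, 3)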
