I have three numpy arrays:
X: a 3073 x 49000 matrix
W: a 10 x 3073 matrix
y: a 49000 x 1 vector
y contains values between 0 and 9, each value represents a row in W.
I would like to add the first column of X to the row in W given by the first element in y. I.e. if the first element in y is 3, add the first column of X to the fourth row of W. Then add the second column of X to the row in W given by the second element in y, and so on, until all columns of X have been added to the rows in W specified by y - a total of 49000 additions.
W[y] += X.T does not work for me, because this will not add more than one vector to a row in W.
Please note: I'm only looking for vectorized solutions. I.e. no for-loops.
EDIT: To clarify, I'll add an example with small matrix sizes, adapted from Salvador Dali's example below.
In [1]: import numpy as np
In [2]: a, b, c = 3, 4, 5
In [3]: np.random.seed(0)
In [4]: X = np.random.randint(10, size=(b,c))
In [5]: W = np.random.randint(10, size=(a,b))
In [6]: y = np.random.randint(a, size=(c,1))
In [7]: X
Out[7]:
array([[5, 0, 3, 3, 7],
[9, 3, 5, 2, 4],
[7, 6, 8, 8, 1],
[6, 7, 7, 8, 1]])
In [8]: W
Out[8]:
array([[5, 9, 8, 9],
[4, 3, 0, 3],
[5, 0, 2, 3]])
In [9]: y
Out[9]:
array([[0],
[1],
[1],
[2],
[0]])
In [10]: W[y.ravel()] + X.T
Out[10]:
array([[10, 18, 15, 15],
[ 4, 6, 6, 10],
[ 7, 8, 8, 10],
[ 8, 2, 10, 11],
[12, 13, 9, 10]])
In [11]: W[y.ravel()] = W[y.ravel()] + X.T
In [12]: W
Out[12]:
array([[12, 13, 9, 10],
[ 7, 8, 8, 10],
[ 8, 2, 10, 11]])
The problem is to get BOTH column 0 and column 4 of X added to row 0 of W, as well as both columns 1 and 2 of X added to row 1 of W.
The desired outcome is thus:
W = [[17, 22, 16, 16],
[ 7, 11, 14, 17],
[ 8, 2, 10, 11]]
First, the straightforward loop solution as a reference:
In [65]: for i,j in enumerate(y):
   ....:     W[j] += X[:,i]
   ....:
In [66]: W
Out[66]:
array([[17, 22, 16, 16],
[ 7, 11, 14, 17],
[ 8, 2, 10, 11]])
An add.at solution:
In [67]: W = W1.copy()    # W1: a saved copy of the original W, so add.at starts fresh
In [68]: np.add.at(W, y.ravel(), X.T)
In [69]: W
Out[69]:
array([[17, 22, 16, 16],
[ 7, 11, 14, 17],
[ 8, 2, 10, 11]])
add.at does an unbuffered calculation, getting around the buffering that prevents W[y.ravel()] += X.T from working. It is still iterative, but the loop has been moved to compiled code. It isn't true vectorization because the order of application matters. The addition for one row of X.T depends on the results from the previous rows.
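A minimal 1-d sketch of the buffering difference (an illustration, not the question's data):

import numpy as np

w = np.zeros(3)
idx = np.array([0, 1, 1, 2, 0])
vals = np.ones(5)

w[idx] += vals            # buffered: each repeated index only "sees" one addition
print(w)                  # [1. 1. 1.]

w = np.zeros(3)
np.add.at(w, idx, vals)   # unbuffered: repeated indices accumulate
print(w)                  # [2. 2. 1.]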
https://stackoverflow.com/a/20811014/901925 is the answer I gave a couple of years ago to a similar question (for 1d arrays).
But when dealing with your large arrays:
X: a 3073 x 49000 matrix
W: a 10 x 3073 matrix
y: a 49000 x 1 vector
this can run into speed issues. Note that W[y.ravel()] is the same size as X.T (why did you pick these sizes that require transpose?). And it's a copy, not a view. So there's already a time penalty.
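As a quick check of the copy behaviour (a small illustrative sketch, not tied to the question's data):

import numpy as np

W = np.arange(12).reshape(3, 4)
y = np.array([0, 1, 1, 2, 0])

print(np.shares_memory(W, W[y]))    # False: fancy indexing returns a copy
print(np.shares_memory(W, W[:2]))   # True: basic slicing returns a view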
bincount has been suggested in previous questions, and I think it is faster; see Making for loop with index arrays faster (which covers both bincount and add.at solutions).
Iterating over the small 3073 dimension could also have speed advantages, or better yet over the size-10 dimension, as Divakar demonstrates.
For the small test case, a,b,c = 3,4,5, the add.at solution is fastest, with Divakar's bincount and einsum approaches next. For a larger a,b,c = 10,1000,20000, add.at gets very slow, while bincount is the fastest.
Related SO answers
https://stackoverflow.com/a/28205888/901925 (notes that bincount requires complete coverage for y).
https://stackoverflow.com/a/30041823/901925 (where Divakar again shows that bincount rules!)
Vectorized approaches
Approach #1
Based on this answer, here's a vectorized solution using np.bincount -
N = y.max()+1
idx = y.ravel() + np.arange(X.shape[0])[:,None]*N
W[:N] += np.bincount(idx.ravel(), weights=X.ravel()).reshape(-1,N).T
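As a sanity check, here's a small self-contained sketch (using the sizes from the question's edit) comparing the bincount approach against the reference loop. Note that bincount returns floats, so the result is cast back to W's integer dtype before the in-place add:

import numpy as np

np.random.seed(0)
a, b, c = 3, 4, 5
X = np.random.randint(10, size=(b, c))
W = np.random.randint(10, size=(a, b))
y = np.random.randint(a, size=(c, 1))

W_ref = W.copy()                       # reference loop
for i, j in enumerate(y.ravel()):
    W_ref[j] += X[:, i]

W_bc = W.copy()                        # bincount approach
N = y.max() + 1
idx = y.ravel() + np.arange(X.shape[0])[:, None] * N
sums = np.bincount(idx.ravel(), weights=X.ravel()).reshape(-1, N).T
W_bc[:N] += sums.astype(W_bc.dtype)

print(np.array_equal(W_ref, W_bc))     # True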
Approach #2
You can make good usage of boolean indexing and np.einsum to get the job done in a concise vectorized manner -
N = y.max()+1
W[:N] += np.einsum('ijk,lk->il',(np.arange(N)[:,None,None] == y.ravel()),X)
Loopy approaches
Approach #3
Since you are selecting and summing a huge number of columns from X for each unique value of y, it might be better performance-wise to run a loop whose iteration count equals the number of unique y values. That is at most the number of rows in W, which in your case is just 10, so the loop has only 10 iterations - not bad! Here's the implementation -
for k in range(W.shape[0]):
    W[k] += X[:,(y==k).ravel()].sum(1)
Approach #4
You can bring in np.einsum to do the columnwise summations and have the final output like so -
for k in range(W.shape[0]):
    W[k] += np.einsum('ij->i',X[:,(y==k).ravel()])
This will achieve what you want: X + W[y.ravel()].T
To see that this really does the work, here is a reproducible example:
import numpy as np
np.random.seed(0)
a, b, c = 3, 5, 4 # you can use your 3073, 49000, 10 later
X = np.random.rand(a, b)
W = np.random.rand(c, a)
y = np.random.randint(c, size=(b, 1))
Now your matrices are:

W:
[[ 0.0871293   0.0202184   0.83261985]
 [ 0.77815675  0.87001215  0.97861834]
 [ 0.79915856  0.46147936  0.78052918]
 [ 0.11827443  0.63992102  0.14335329]]

y:
[[3]
 [0]
 [3]
 [2]
 [0]]

X:
[[ 0.5488135   0.71518937  0.60276338  0.54488318  0.4236548 ]
 [ 0.64589411  0.43758721  0.891773    0.96366276  0.38344152]
 [ 0.79172504  0.52889492  0.56804456  0.92559664  0.07103606]]
And W[y.ravel()] selects the rows of W named by y (its first row is "the row in W given by the first element in y", and so on). Transposing it gives a matrix ready to be added to X:
[[ 0.11827443 0.0871293 0.11827443 0.79915856 0.0871293 ]
[ 0.63992102 0.0202184 0.63992102 0.46147936 0.0202184 ]
[ 0.14335329 0.83261985 0.14335329 0.78052918 0.83261985]]
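For completeness, a small sketch assembling the expression with the variables defined above:

result = X + W[y.ravel()].T
print(result.shape)   # (3, 5): column i of X has row y[i] of W added to it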
While I can't say that this is very pythonic, it is a solution (I think):
for column in range(x.shape[1]):
    w[y[column]] += x[:,column]
Related
I have two vectors / one-dimensional numpy arrays and a function I want to apply:
arr1 = np.arange(1, 5)
arr2 = np.arange(2, 6)
func = lambda x, y: x * y
I now want to construct a n * m matrix (with n, m being the lengths of arr1, and arr2 respectively) containing the values of the function outputs. The naive approach using for loops would look like this:
np.array([[func(x, y) for x in arr1] for y in arr2])
I was wondering if there is a smarter vectorized approach using the arr1[:, None] syntax to apply my function - please note that my actual function is significantly more complicated and can't be broken down into simple numpy operations (so arr1[:, None] * arr2[None, :] won't work).
When you have numpy arrays, one approach can be numpy.einsum, because you want to compute arr1_i * arr2_j and insert it at arr_result_ji.
>>> np.einsum('i, j -> ji', arr1, arr2)
array([[ 2, 4, 6, 8],
[ 3, 6, 9, 12],
[ 4, 8, 12, 16],
[ 5, 10, 15, 20]])
Or you can use numpy.matmul or the @ operator.
>>> np.matmul(arr2[:,None], arr1[None,:])
# OR
>>> arr2[:,None] @ arr1[None,:]
# Or, thanks to @hpaulj, elementwise multiplication with broadcasting
>>> arr2[:,None] * arr1[None,:]
array([[ 2, 4, 6, 8],
[ 3, 6, 9, 12],
[ 4, 8, 12, 16],
[ 5, 10, 15, 20]])
Here is a timing comparison between your loop approach and @I'mahdi's approaches:
import time
import numpy as np

func = lambda x, y: x * y   # the function from the question

arr1 = np.arange(1, 10000)
arr2 = np.arange(2, 10001)
start = time.time()
np.array([[func(x, y) for x in arr1] for y in arr2])
print('loop: __time__', time.time()-start)
start = time.time()
(arr1[:, None]*arr2[None, :]).T
print('* __time__', time.time()-start)
start = time.time()
np.einsum('i, j -> ji', arr1, arr2)
print('einsum __time__', time.time()-start)
start = time.time()
np.matmul(arr2[:,None], arr1[None,:])
print('matmul __time__', time.time()-start)
Output:
loop: __time__ 70.3061535358429
* __time__ 0.43536829948425293
einsum __time__ 0.508014440536499
matmul __time__ 0.7149899005889893
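Note that einsum, matmul and broadcasting work here only because func happens to be a plain product. If the function really can't be expressed in numpy operations, one fallback (a sketch; np.vectorize still loops in Python internally, so it's for convenience, not speed) is to combine it with the arr1[:, None] broadcasting syntax:

import numpy as np

arr1 = np.arange(1, 5)
arr2 = np.arange(2, 6)

def func(x, y):
    return x * y          # stand-in for a more complicated scalar function

vfunc = np.vectorize(func)
print(vfunc(arr2[:, None], arr1[None, :]))
# [[ 2  4  6  8]
#  [ 3  6  9 12]
#  [ 4  8 12 16]
#  [ 5 10 15 20]]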
I want to create a matrix with M rows and N columns. The increment along a row is always 1, whilst the shift between rows is a constant value, c. For example, to create this matrix (the 4 x 2 array shown as idx_left below):
The number of rows is 4, the number of columns is 2, and the shift between rows is c = 8. One way to perform this could be:
# Indices of columns
coord_x = np.arange(0, 2)
# Indices of rows
coord_y = np.arange(1, 37, 9)
# Creates 2 matrices with the coordinates
x, y = np.meshgrid(coord_x, coord_y)
# To perform the shift between columns
idx_left = x + y
And the output is:
print(idx_left)
[[ 1 2]
[10 11]
[19 20]
[28 29]]
Can I do this without the final addition idx_left = x + y? I've looked at other functions, but I haven't found one that handles a shift along both the rows and the columns...
Stride tricks
You can use np.lib.stride_tricks for this purpose.
arr = np.arange(1,100)
shape = (4,2)
strides = (arr.strides[0]*9, arr.strides[0]*1)  # step 9 elements along axis=0 and 1 element along axis=1 (arr.strides[0] is 8 bytes for int64)
np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)
array([[ 1, 2],
[10, 11],
[19, 20],
[28, 29]])
Another example, with a shift of 2 on axis=0 and a shift of 3 on axis=1.
arr = np.arange(1,100)
shape = (4,2)
strides = (arr.strides[0]*2, arr.strides[0]*3)  # step 2 elements along axis=0, 3 elements along axis=1
np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)
array([[ 1, 4],
[ 3, 6],
[ 5, 8],
[ 7, 10]])
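One thing to keep in mind (a small sketch): as_strided returns a view into the original buffer, so no data is copied and writes to the result propagate back to arr.

import numpy as np

arr = np.arange(1, 100)
view = np.lib.stride_tricks.as_strided(arr, shape=(4, 2),
                                       strides=(arr.strides[0]*9, arr.strides[0]))

print(np.shares_memory(arr, view))   # True: no copy was made
view[0, 0] = 999
print(arr[0])                        # 999: the original array changed too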
Broadcasting
You could also do this much as you already are, but without meshgrid, using broadcasting alone -
#Your original example
coord_x = np.arange(0, 2, 1) #start, stop, step
coord_y = np.arange(1, 37, 9)
coord_x[None,:] + coord_y[:,None]
array([[ 1, 2],
[10, 11],
[19, 20],
[28, 29]])
Linspace
You could use linspace if you know the end points. In your case, you can create a 4-row array running from (1, 2) to (28, 29).
np.linspace((1,2),(28,29),4)
array([[ 1., 2.],
[10., 11.],
[19., 20.],
[28., 29.]])
Mgrid
mgrid is more convenient than meshgrid for your purpose. You can do -
np.mgrid[0:2:1, 1:37:9].sum(0).T
array([[ 1, 2],
[10, 11],
[19, 20],
[28, 29]])
import numpy
square = numpy.reshape(range(0,16),(4,4))
square
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
In the above array, how do I access the primary diagonal and secondary diagonal of any given element? For example 9.
by primary diagonal, I mean - [4,9,14],
by secondary diagonal, I mean - [3,6,9,12]
I can't use numpy.diag() directly because it operates on the entire array rather than on the diagonals passing through a given element.
Based on your description, you can do this with np.where, np.diagonal and np.fliplr:
import numpy as np
x,y=np.where(square==9)
np.diagonal(square, offset=-(x-y))
Out[382]: array([ 4, 9, 14])
x,y=np.where(np.fliplr(square)==9)
np.diagonal(np.fliplr(square), offset=-(x-y))
Out[396]: array([ 3, 6, 9, 12])
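A consolidated sketch of the same idea, converting the np.where result to plain ints before using it as the offset:

import numpy as np

square = np.arange(16).reshape(4, 4)
value = 9

rows, cols = np.where(square == value)
r, c = int(rows[0]), int(cols[0])

primary = np.diagonal(square, offset=c - r)                    # array([ 4,  9, 14])
secondary = np.diagonal(np.fliplr(square),
                        offset=(square.shape[1] - 1 - c) - r)  # array([ 3,  6,  9, 12])
print(primary, secondary)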
For the first diagonal, use the fact that both the x coordinate and the y coordinate increase by 1 each step:
def first_diagonal(x, y, length_array):
    if x < y:
        return zip(range(x, length_array), range(length_array - x))
    else:
        return zip(range(length_array - y), range(y, length_array))
For the secondary diagonal, use the fact that the x_coordinate + y_coordinate = constant.
def second_diagonal(x, y, length_array):
    tot = x + y
    return zip(range(tot+1), range(tot, -1, -1))
This gives you two lists you can use to access your matrix.
Of course, if you have a non square matrix these functions will have to be reshaped a bit.
To illustrate how to get the desired output:
a = np.reshape(range(0,16),(4,4))
first = first_diagonal(1, 2, len(a))
second = second_diagonal(1,2, len(a))
primary_diagonal = [a[i[0]][i[1]] for i in first]
secondary_diagonal = [a[i[0]][i[1]] for i in second]
print(primary_diagonal)
print(secondary_diagonal)
this outputs:
[4, 9, 14]
[3, 6, 9, 12]
x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
I want to grab the first 2 elements out of every block of 5 in array x; the result should be:
x[fancy_indexing] = [1,2, 6,7, 11,12]
It's easy enough to build up an index like that using a for loop.
Is there a one-liner slicing trick that will pull it off? Points for simplicity here.
Approach #1 Here's a vectorized one-liner using boolean-indexing -
x[np.mod(np.arange(x.size),M)<N]
Approach #2 If you are going for performance, here's another vectorized approach using NumPy strides -
n = x.strides[0]
shp = (x.size//M,N)
out = np.lib.stride_tricks.as_strided(x, shape=shp, strides=(M*n,n)).ravel()
Sample run -
In [61]: # Inputs
...: x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15])
...: N = 2
...: M = 5
...:
In [62]: # Approach 1
...: x[np.mod(np.arange(x.size),M)<N]
Out[62]: array([ 1, 2, 6, 7, 11, 12])
In [63]: # Approach 2
...: n = x.strides[0]
...: shp = (x.size//M,N)
...: out=np.lib.stride_tricks.as_strided(x,shape=shp,strides=(M*n,n)).ravel()
...:
In [64]: out
Out[64]: array([ 1, 2, 6, 7, 11, 12])
I first thought you needed this to work for 2d arrays, due to your phrasing of "first N rows of every block of M rows", so I'll leave my solution in that form.
You could work some magic by reshaping your array into 3d:
M = 5 # size of blocks
N = 2 # number of columns to keep from each block
x = np.arange(3*4*M).reshape(4,-1)                                  # (4, 3*M)-shaped dummy input
x = x.reshape(x.shape[0], -1, M)[:, :, :N].reshape(x.shape[0], -1)  # (4, 3*N)-shaped output
This will extract every column according to your preference. In order to use it for your 1d case you'd need to make your 1d array into a 2d one using x = x[None,:].
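A small sketch applying the same 3-D reshape to the 1-d array from the question via x[None, :]:

import numpy as np

M, N = 5, 2
x = np.arange(1, 16)                 # [1, 2, ..., 15]
x2d = x[None, :]                     # shape (1, 15)
out = x2d.reshape(x2d.shape[0], -1, M)[:, :, :N].reshape(x2d.shape[0], -1)
print(out.ravel())                   # [ 1  2  6  7 11 12]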
Reshape the array to multiple rows of five columns then take (slice) the first two columns of each row.
>>> x
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
>>> x.reshape(x.shape[0] // 5, 5)[:,:2]
array([[ 1, 2],
[ 6, 7],
[11, 12]])
Or
>>> x.reshape(x.shape[0] // 5, 5)[:,:2].flatten()
array([ 1, 2, 6, 7, 11, 12])
>>>
It only works with 1-d arrays that have a length that is a multiple of five.
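If the length isn't an exact multiple of five, one option (a small parameterized sketch, using M and N as in the other answers) is to trim the leftover tail before reshaping:

import numpy as np

M, N = 5, 2
x = np.arange(1, 18)                             # length 17, not a multiple of M
trimmed = x[:(x.size // M) * M]                  # drop the incomplete final block
print(trimmed.reshape(-1, M)[:, :N].ravel())     # [ 1  2  6  7 11 12]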
import numpy as np
x = np.array(range(1, 16))
y = np.vstack([x[0::5], x[1::5]]).T.ravel()
y
# => array([ 1, 2, 6, 7, 11, 12])
Taking the first N rows of every block of M rows in the array [1, 2, ..., K]:
import numpy as np
K = 30
M = 5
N = 2
x = np.array(range(1, K+1))
y = np.vstack([x[i::M] for i in range(N)]).T.ravel()
y
# => array([ 1, 2, 6, 7, 11, 12, 16, 17, 21, 22, 26, 27])
Notice that .T is a cheap operation: it doesn't copy any data, it just manipulates the dimensions and strides of the array (.ravel() will make a copy here, because the transposed data is no longer contiguous, but for arrays this size that's negligible).
If you insist on getting your slice using fancy indexing:
import numpy as np
K = 30
M = 5
N = 2
x = np.array(range(1, K+1))
fancy_indexing = [i*M+n for i in range(len(x)//M) for n in range(N)]
x[fancy_indexing]
# => array([ 1, 2, 6, 7, 11, 12, 16, 17, 21, 22, 26, 27])
In Python, given an n x p matrix, e.g. 4 x 4, how can I return a matrix that's 4 x 2 that simply averages the first two columns and the last two columns for all 4 rows of the matrix?
e.g. given:
a = array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16]])
return a matrix that has the average of a[:, 0] and a[:, 1] and the average of a[:, 2] and a[:, 3].
I want this to work for an arbitrary n x p matrix, assuming the number of columns being averaged together evenly divides p.
let me clarify: for each row, I want to take the average of the first two columns, then the average of the last two columns. So it would be:
(1 + 2) / 2, (3 + 4) / 2 <- row 1 of new matrix
(5 + 6) / 2, (7 + 8) / 2 <- row 2 of new matrix, etc.
which should yield a 4 by 2 matrix rather than 4 x 4.
thanks.
How about using some math? You can define a matrix M = [[0.5,0],[0.5,0],[0,0.5],[0,0.5]] so that A*M is what you want.
from numpy import array, matrix
A = array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16]])
M = matrix([[0.5,0],
[0.5,0],
[0,0.5],
[0,0.5]])
print(A * M)
Generating M is pretty simple too: the entries are either 1/n or zero.
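For instance, here's a sketch of one way to build such an averaging matrix for an arbitrary group size g (assuming the number of columns is divisible by g), using np.kron:

import numpy as np

def averaging_matrix(p, g):
    # (p, p//g) matrix whose columns average consecutive groups of g input columns
    return np.kron(np.eye(p // g), np.ones((g, 1)) / g)

A = np.array([[ 1,  2,  3,  4],
              [ 5,  6,  7,  8],
              [ 9, 10, 11, 12],
              [13, 14, 15, 16]])
print(A @ averaging_matrix(4, 2))
# [[ 1.5  3.5]
#  [ 5.5  7.5]
#  [ 9.5 11.5]
#  [13.5 15.5]]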
reshape - get mean - reshape
>>> a.reshape(-1, a.shape[1]//2).mean(1).reshape(a.shape[0],-1)
array([[ 1.5, 3.5],
[ 5.5, 7.5],
[ 9.5, 11.5],
[ 13.5, 15.5]])
This should work for any array size (it averages the first half and the second half of each row), and the reshapes don't make copies since the data stays contiguous.
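A variant of the same idea with an explicit group size g, via a 3-D reshape (a sketch, assuming the number of columns is divisible by g):

import numpy as np

a = np.arange(1, 17).reshape(4, 4)
g = 2                                           # columns per group
print(a.reshape(a.shape[0], -1, g).mean(axis=2))
# [[ 1.5  3.5]
#  [ 5.5  7.5]
#  [ 9.5 11.5]
#  [13.5 15.5]]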
It's a bit unclear what should happen for matrices with n > 4, but this code will do what you want:
import numpy as N

a = N.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]], dtype=float)
avg = N.vstack((N.average(a[:,0:2], axis=1), N.average(a[:,2:4], axis=1))).T
This yields avg =
array([[ 1.5, 3.5],
[ 5.5, 7.5],
[ 9.5, 11.5],
[ 13.5, 15.5]])
Here's a way to do it. You only need to change groupsize to make it work with other sizes like you said, though I'm not fully sure what you want.
groupsize = 2   # number of column groups to split into
out = np.hstack([x.mean(axis=1, keepdims=True) for x in np.hsplit(a, groupsize)])
yields
array([[ 1.5, 3.5],
[ 5.5, 7.5],
[ 9.5, 11.5],
[ 13.5, 15.5]])
for out. Hopefully it gives you some ideas on how to do exactly what it is that you want to do. You can make groupsize dependent on the dimensions of a for instance.