>>> c = array([[[1, 2],
                [3, 4]],
               [[2, 1],
                [4, 3]],
               [[3, 2],
                [1, 4]]])
>>> x
array([[0, 1, 2],
       [3, 4, 5]])
I want a matrix in which each column is the product of the corresponding matrix in c with the corresponding column of x under regular matrix multiplication. I'm trying to figure out a way to vectorize this, or at least avoid a for loop.
array([[ 6,  6, 16],
       [12, 16, 22]])
To extend this operation further, let's say that I have an array of matrices, say
>>> c
array([[[1, 2],
        [3, 4]],
       [[2, 1],
        [4, 3]],
       [[3, 2],
        [1, 4]]])
>>> x
array([[[1, 2, 3],
        [1, 2, 3]],
       [[1, 0, 2],
        [1, 0, 2]],
       [[2, 3, 1],
        [0, 1, 0]]])
def fun(c, x):
    for i in range(len(x)):
        np.einsum('ijk,ki->ji', c, x[i])
        # something
So basically, I want each matrix in x to be multiplied with all of c, returning a structure similar to c, without introducing this for loop.
The reason I'm doing this is that I've run into a problem where I'm trying to vectorize Xc (the operation follows normal matrix / column-vector multiplication). Here c is a 3D array like the c above -- a column vector in which each element is a matrix (in numpy it has the form shown above) -- and X is a matrix in which each element is a 1D array. The output of Xc should be a 1D array.
You can use np.einsum -
np.einsum('ijk,ki->ji',c,x)
Sample run -
In [155]: c
Out[155]:
array([[[1, 2],
        [3, 4]],
       [[2, 1],
        [4, 3]],
       [[3, 2],
        [1, 4]]])
In [156]: x
Out[156]:
array([[0, 1, 2],
       [3, 4, 5]])
In [157]: np.einsum('ijk,ki->ji',c,x)
Out[157]:
array([[ 6,  6, 16],
       [12, 16, 22]])
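If the einsum notation feels opaque, an equivalent formulation (just a sketch on the same data, not part of the original answer) uses the @ operator on a stack of column vectors:

# c is (3, 2, 2) and x is (2, 3); x.T[:, :, None] is (3, 2, 1), a stack of column vectors,
# so c @ ... batch-multiplies each matrix with its column, and the final transpose gives (2, 3).
out = (c @ x.T[:, :, None])[..., 0].T   # same result as np.einsum('ijk,ki->ji', c, x)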
For the 3D case of x, simply append the new dimension at the start of the string notation for x and correspondingly at the output string notation too, like so -
np.einsum('ijk,lki->lji',c,x)
Sample run -
In [151]: c
Out[151]:
array([[[1, 2],
        [3, 4]],
       [[2, 1],
        [4, 3]],
       [[3, 2],
        [1, 4]]])
In [152]: x
Out[152]:
array([[[1, 2, 3],
        [1, 2, 3]],
       [[1, 0, 2],
        [1, 0, 2]],
       [[2, 3, 1],
        [0, 1, 0]]])
In [153]: np.einsum('ijk,lki->lji',c,x)
Out[153]:
array([[[ 3,  6, 15],
        [ 7, 14, 15]],
       [[ 3,  0, 10],
        [ 7,  0, 10]],
       [[ 2,  7,  3],
        [ 6, 15,  1]]])
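As a quick sanity check (a small sketch, assuming the same c and x as above), the vectorized call matches the per-matrix loop from the question:

# Each slice x[l] goes through the 2D einsum; stacking the results reproduces the one-liner.
looped = np.stack([np.einsum('ijk,ki->ji', c, xl) for xl in x])
assert np.array_equal(np.einsum('ijk,lki->lji', c, x), looped)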
Suppose I have two matrices, and I want to take each row of the first matrix, transpose it, and apply @ with the corresponding row of the second matrix to obtain a matrix. Do that for each of the N rows. For example:
Matrix A = N x p
Matrix B = N x q
After the operation I have N (p x q) matrices.
An example to illustrate, for the first row:
>>> x
array([[2, 1, 2],
       [4, 3, 1],
       [1, 2, 3],
       [1, 2, 1]])
>>> g
array([[2, 3],
       [3, 3],
       [1, 2],
       [2, 5]])
After first operation:
>>> x[0,:,np.newaxis] @ g[np.newaxis,0,:]
array([[4, 6],
       [2, 3],
       [4, 6]])
After second operation:
>>> x[1,:,np.newaxis] @ g[np.newaxis,1,:]
array([[12, 12],
       [ 9,  9],
       [ 3,  3]])
And so on, N times, such that it returns N (p x q) matrices (here, 3 (3x2) matrices). How can this be done in NumPy with no loop?
In [17]: x = np.array([[2, 1, 2],
    ...:               [4, 3, 1],
    ...:               [1, 2, 3],
    ...:               [1, 2, 1]])
    ...: g = np.array([[2, 3],
    ...:               [3, 3],
    ...:               [1, 2],
    ...:               [2, 5]])
In [18]: x.shape
Out[18]: (4, 3)
In [19]: g.shape
Out[19]: (4, 2)
With broadcasting, multiply a (4,3,1) with a (4,1,2) to produce (4,3,2):
In [20]: x[:,:,None]*g[:,None,:]
Out[20]:
array([[[ 4,  6],
        [ 2,  3],
        [ 4,  6]],
       [[12, 12],
        [ 9,  9],
        [ 3,  3]],
       [[ 1,  2],
        [ 2,  4],
        [ 3,  6]],
       [[ 2,  5],
        [ 4, 10],
        [ 2,  5]]])
In [21]: _.shape
Out[21]: (4, 3, 2)
x[:,:,None] @ g[:,None,:] does the same thing, doing sum-of-products on the shared size-1 dimension.
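The same row-wise outer products can also be written with einsum if you prefer explicit index notation (a sketch equivalent to the broadcasting above):

# out[i, j, k] = x[i, j] * g[i, k] -- one (p x q) outer product per row i
out = np.einsum('ij,ik->ijk', x, g)   # shape (4, 3, 2)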
You can try this:
np.split(np.repeat(x.T, g.shape[1], axis=1) * g.ravel(), g.shape[0], axis=1)
It gives:
[array([[4, 6],
        [2, 3],
        [4, 6]]),
 array([[12, 12],
        [ 9,  9],
        [ 3,  3]]),
 array([[1, 2],
        [2, 4],
        [3, 6]]),
 array([[ 2,  5],
        [ 4, 10],
        [ 2,  5]])]
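For clarity, here's a sketch of the intermediate shapes in that expression (assuming x is 4x3 and g is 4x2, as above):

rep = np.repeat(x.T, g.shape[1], axis=1)      # (3, 8): each column of x.T repeated q=2 times
prod = rep * g.ravel()                        # (3, 8): columns scaled by the flattened g
blocks = np.split(prod, g.shape[0], axis=1)   # list of N=4 arrays, each (p, q) = (3, 2)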
I want to add dimensions to an array, but expand_dims always adds dimension of size 1.
Input:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
What expand_dims does:
[[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]]
What I want:
[[[1, 1], [1, 2], [1, 3]], [[1, 4], [1, 5], [1, 6]], [[1, 7], [1, 8], [1, 9]]]
Basically I want to replace each scalar in the matrix by a vector [1, x] where x is the original scalar.
Here's one way, adding a new axis and using the np.insert() function to insert the value 1 at index 0 along axis 2:
In [32]: a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
In [33]: np.insert(a[:,:,None], 0, 1, 2)
Out[33]:
array([[[1, 1],
        [1, 2],
        [1, 3]],
       [[1, 4],
        [1, 5],
        [1, 6]],
       [[1, 7],
        [1, 8],
        [1, 9]]])
There are lots of ways of constructing the new array.
You could initialize an array of the right shape, fill it with ones, and copy the values in:
In [402]: arr = np.arange(1,10).reshape(3,3)
In [403]: arr
Out[403]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
In [404]: res = np.ones((3,3,2),int)
In [405]: res[:,:,1] = arr
In [406]: res
Out[406]:
array([[[1, 1],
        [1, 2],
        [1, 3]],
       [[1, 4],
        [1, 5],
        [1, 6]],
       [[1, 7],
        [1, 8],
        [1, 9]]])
You could join the array with a like-sized array of ones. concatenate is the basic joining function:
In [407]: np.concatenate((np.ones((3,3,1),int), arr[:,:,None]), axis=2)
Out[407]:
array([[[1, 1],
        [1, 2],
        [1, 3]],
       [[1, 4],
        [1, 5],
        [1, 6]],
       [[1, 7],
        [1, 8],
        [1, 9]]])
np.stack((np.ones((3,3),int), arr), axis=2) does the same thing under the covers. np.dstack ('d' for depth) does it as well. The insert in the other answer also does this.
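Continuing the same session, the stack-based variants can be checked against res directly (a quick sketch, assuming arr and res from above):

# Both stack the ones and arr along a new last axis, giving the same (3, 3, 2) result.
res_stack = np.stack((np.ones((3, 3), int), arr), axis=2)
res_dstack = np.dstack((np.ones((3, 3), int), arr))
assert np.array_equal(res_stack, res) and np.array_equal(res_dstack, res)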
Similar to the premise in this question, I'd like to transpose each sub-array in the matrix. However, my sub-arrays are of different sizes. I've tried the following lines of code:
import numpy as np

test_array = np.array([
    np.array([[1, 1, 1, 1],
              [1, 1, 1, 1]]),
    np.array([[2, 2, 2, 2],
              [2, 2, 2, 2],
              [2, 2, 2, 2]]),
    np.array([[3, 3],
              [3, 3],
              [3, 3]])
])
new_test_array = np.apply_along_axis(test_array, 0, np.transpose)
*** numpy.AxisError: axis 0 is out of bounds for array of dimension 0
new_test_array = np.transpose(test_array, (0, 2, 1))
*** ValueError: axes don't match array
new_test_array = np.array(list(map(np.transpose, test_array)))
returns original array
My expected output is
new_test_array = np.array([
    np.array([[1, 1],
              [1, 1],
              [1, 1],
              [1, 1]]),
    np.array([[2, 2, 2],
              [2, 2, 2],
              [2, 2, 2],
              [2, 2, 2]]),
    np.array([[3, 3, 3],
              [3, 3, 3]])
])
To answer shortly, you can do this on your data to get what you want:
new_test_array = [np.transpose(x) for x in test_array]
But in your example you build an object array holding separate sub-arrays, rather than a single regular array (an array whose rows have varying sizes is impossible in numpy). That is also why your methods did not work.
So if you want to do it in a more correct way, first you have to use a list and then convert each list into a numpy array, which you can then transpose individually.
Here's an example code:
test_list = [[[1, 1, 1, 1],
              [1, 1, 1, 1]],
             [[2, 2, 2, 2],
              [2, 2, 2, 2],
              [2, 2, 2, 2]],
             [[3, 3],
              [3, 3],
              [3, 3]]]
list_of_arrays = [np.array(x) for x in test_list]
transposed_arrays = [np.transpose(x) for x in list_of_arrays]
Printing transposed_arrays will give you this:
[array([[1, 1],
        [1, 1],
        [1, 1],
        [1, 1]]),
 array([[2, 2, 2],
        [2, 2, 2],
        [2, 2, 2],
        [2, 2, 2]]),
 array([[3, 3, 3],
        [3, 3, 3]])]
I have 0s and 1s stored in a 3-dimensional numpy array:
g = np.array([[[0, 1], [0, 1], [1, 0]], [[0, 0], [1, 0], [1, 1]]])
# array([
# [[0, 1], [0, 1], [1, 0]],
# [[0, 0], [1, 0], [1, 1]]])
and I'd like to replace these values by those in another array using a row-wise replacement strategy. For example, replacing the values of g by x:
x = np.array([[2, 3], [4, 5]])
array([[2, 3],
       [4, 5]])
to obtain:
array([
[[2, 3], [2, 3], [3, 2]],
[[4, 4], [5, 4], [5, 5]]])
The idea here would be to have the first row of g replaced by the first elements of x (0 becomes 2 and 1 becomes 3), and the same for the other row (the first dimension - the number of "rows" - will always be the same for g and x).
I can't seem to use np.where because there's a ValueError: operands could not be broadcast together with shapes (2,3,2) (2,2) (2,2).
IIUC,
np.stack([x[i, g[i]] for i in range(x.shape[0])])
Output:
array([[[2, 3],
        [2, 3],
        [3, 2]],
       [[4, 4],
        [5, 4],
        [5, 5]]])
Vectorized approach with np.take_along_axis to index into the last axis of x with g using axis=-1 -
In [20]: np.take_along_axis(x[:,None],g,axis=-1)
Out[20]:
array([[[2, 3],
        [2, 3],
        [3, 2]],
       [[4, 4],
        [5, 4],
        [5, 5]]])
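The extra axis on x is what makes the shapes line up (a brief sketch of the shape bookkeeping, not part of the original answer):

# x is (2, 2) and g is (2, 3, 2); x[:, None] is (2, 1, 2), which broadcasts against g
# on the first two axes, while g supplies the indices into x's last axis.
print(x[:, None].shape, g.shape)   # (2, 1, 2) (2, 3, 2)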
Or with manual integer-based indexing -
In [27]: x[np.arange(len(g))[:,None,None],g]
Out[27]:
array([[[2, 3],
        [2, 3],
        [3, 2]],
       [[4, 4],
        [5, 4],
        [5, 5]]])
One solution is to simply use a comprehension directly here:
>>> np.array([[x[i][c] for c in r] for i, r in enumerate(g)])
array([[[2, 3],
        [2, 3],
        [3, 2]],
       [[4, 4],
        [5, 4],
        [5, 5]]])
From what I understand, g is an array of indexes (each index being 0 or 1) and x is the array whose values you use.
Something like this should work (tested quickly):
import numpy as np

def swap_indexes(index_array, array):
    out_array = []
    for i, row in enumerate(index_array):
        out_array.append([array[i, indexes] for indexes in row])
    return np.array(out_array)

index_array = np.array([[[0, 1], [0, 1], [1, 0]], [[0, 0], [1, 0], [1, 1]]])
x = np.array([[2, 3], [4, 5]])
print(swap_indexes(index_array, x))
[EDIT: fixed typo that created duplicates]
The title is probably confusing. I have a reasonably large 3D numpy array. I'd like to cut its size by 2^3 by binning blocks of size (2,2,2). Each element in the new 3D array should then contain the sum of the elements in its respective block in the original array.
As an example, consider a 4x4x4 array:
input = [[[1, 1, 2, 2],
          [1, 1, 2, 2],
          [3, 3, 4, 4],
          [3, 3, 4, 4]],
         [[1, 1, 2, 2],
          [1, 1, 2, 2],
          [3, 3, 4, 4],
          [3, 3, 4, 4]],
         ... ]]]
(I'm only representing half of it to save space). Notice that all the elements with the same value constitute a (2x2x2) block. The output should be a 2x2x2 array such that each element is the sum of a block:
output = [[[8, 16],
           [24, 32]],
          ... ]]]
So 8 is the sum of all 1's, 16 is the sum of the 2's, and so on.
There's a builtin to do those block-wise reductions - skimage.measure.block_reduce -
In [36]: a
Out[36]:
array([[[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]],
       [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]])
In [37]: from skimage.measure import block_reduce
In [39]: block_reduce(a, block_size=(2,2,2), func=np.sum)
Out[39]:
array([[[ 8, 16],
        [24, 32]]])
Use other reduction ufuncs, say max-reduction -
In [40]: block_reduce(a, block_size=(2,2,2), func=np.max)
Out[40]:
array([[[1, 2],
        [3, 4]]])
Implementing such a function isn't that difficult with NumPy tools and could be done like so -
def block_reduce_numpy(a, block_size, func):
    # Split every axis into a (number_of_blocks, block_length) pair, e.g. (4,) -> (2, 2)
    shp = a.shape
    new_shp = np.hstack([(i//j, j) for (i, j) in zip(shp, block_size)])
    # The odd axes of the reshaped array index within each block; reduce over them
    select_axes = tuple(np.arange(a.ndim)*2 + 1)
    return func(a.reshape(new_shp), axis=select_axes)
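As a quick check (a sketch, assuming a is the (2, 4, 4) array from the sample run above), this NumPy version reproduces the block_reduce results:

print(block_reduce_numpy(a, (2, 2, 2), np.sum))   # [[[ 8 16]
                                                  #   [24 32]]]
print(block_reduce_numpy(a, (2, 2, 2), np.max))   # [[[1 2]
                                                  #   [3 4]]]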