I have an array A (shape = (a, 1)) and a matrix B (shape = (b1, b2)). I want to multiply the latter by each element of the former to generate a three-dimensional array (shape = (a, b1, b2)).
Is there a vectorized way to do this?
import numpy as np
A = np.random.rand(3, 1)
B = np.random.rand(5, 4)
C = np.array([ a * B for a in A ])
There are several ways you can achieve this.
One is using np.dot; note that it is necessary to introduce a new axis into B so both ndarrays can be multiplied:
C = np.dot(A,B[:,None])
print(C.shape)
# (3, 5, 4)
Using np.multiply.outer, as @divakar suggests. Note that A must be flattened to a 1-D array first, otherwise the result has shape (3, 1, 5, 4):
C = np.multiply.outer(A.ravel(), B)
print(C.shape)
# (3, 5, 4)
Or you could also use np.einsum:
C = np.einsum('ij,kl->ikl', A, B)
print(C.shape)
# (3, 5, 4)
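All three agree with the loop-based result from the question; a quick sanity check (sketch, with the question's loop result renamed C_loop):
C_loop = np.array([ a * B for a in A ])
print(np.allclose(C_loop, np.dot(A, B[:,None])))
# True
print(np.allclose(C_loop, np.multiply.outer(A.ravel(), B)))
# True
print(np.allclose(C_loop, np.einsum('ij,kl->ikl', A, B)))
# True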
Related
I used to perform an outer subtraction on two one-dimensional arrays as follows, receiving a single two-dimensional array that contains all pairs of subtractions:
import numpy as np
a = np.arange(5)
b = np.arange(3)
result = np.subtract.outer(a, b)
assert result.shape == (5, 3)
assert np.all(result == np.array([[aa - bb for bb in b] for aa in a ])) # no rounding errors
Now the state space switches to two dimensions, and I would like to perform the same operation, but subtract elementwise along the last axis of the arrays A and B instead of forming all combinations there:
import numpy as np
A = np.arange(5 * 2).reshape(-1, 2)
B = np.arange(3 * 2).reshape(-1, 2)
result = np.subtract.outer(A, B)
# The following does not hold: np.subtract.outer computes all pairwise subtractions, giving shape (5, 2, 3, 2)
# I would like to exchange np.subtract.outer such that the following holds:
# assert result.shape == (5, 3, 2)
expected_result = np.array([[aa - bb for bb in B] for aa in A ])
assert expected_result.shape == (5, 3, 2)
# That's what I want to hold:
# assert np.all(result == expected_result) # no rounding errors
Is there a "numpy-only" solution to perform this operation?
You can expand/reshape A to (5, 1, 2) and B to (1, 3, 2) and let the broadcasting do the job:
A[:, None, :] - B[None, :, :]
A[:, None] - B[None, :] does it.
A = np.arange(5 * 2).reshape(-1, 2)
B = np.arange(3 * 2).reshape(-1, 2)
expected_result = np.array([[aa - bb for bb in B] for aa in A ])
C = A[:, None] - B[None, :]
np.allclose(expected_result, C)
#> True
The exact same syntax works for your first example too, because the requirement is the same: combine every first-axis element of A with every first-axis element of B.
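For instance, with the 1-D arrays from your first example (a quick check; integer inputs, so exact equality holds):
a = np.arange(5)
b = np.arange(3)
np.all(a[:, None] - b[None, :] == np.subtract.outer(a, b))
#> True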
I wish to compute the dot product between two 3D tensors along the first dimension. I tried the following einsum notation:
import numpy as np
a = np.random.randn(30).reshape(3, 5, 2)
b = np.random.randn(30).reshape(3, 2, 5)
# Expecting shape: (3, 5, 5)
np.einsum("ijk,ikj->ijj", a, b)
Sadly it returns this error:
ValueError: einstein sum subscripts string includes output subscript 'j' multiple times
I went with Einstein summation after failing at this with np.tensordot. Ideas and follow-up questions are highly welcome!
Your two dimensions of size 5 and 5 do not correspond to the same axes. As such you need to use two different subscripts to designate them. For example, you can do:
>>> res = np.einsum('ijk,ilm->ijm', a, b)
>>> res.shape
(3, 5, 5)
Notice you are also required to use different subscripts for the two axes of size 2 (k and l above). This is because with separate subscripts you are computing a batched outer product (the two axes are iterated independently of each other), not a dot product (the two axes share one subscript and are iterated in lockstep).
Outer product:
>>> np.einsum('ijk,ilm->ijm', a, b)
Dot product over subscript k, which is axis=2 of a and axis=1 of b:
>>> np.einsum('ijk,ikm->ijm', a, b)
which is equivalent to a@b.
"dot product ... along the first dimension" is a bit unclear. Is the first dimension a 'batch' dimension, with 3 dot products on the rest? Or something else?
In [103]: a = np.random.randn(30).reshape(3, 5, 2)
...: b = np.random.randn(30).reshape(3, 2, 5)
In [104]: (a@b).shape
Out[104]: (3, 5, 5)
In [105]: np.einsum('ijk,ikl->ijl',a,b).shape
Out[105]: (3, 5, 5)
@Ivan's answer is different:
In [106]: np.einsum('ijk,ilm->ijm', a, b).shape
Out[106]: (3, 5, 5)
In [107]: np.allclose(np.einsum('ijk,ilm->ijm', a, b), a@b)
Out[107]: False
In [108]: np.allclose(np.einsum('ijk,ikl->ijl', a, b), a@b)
Out[108]: True
@Ivan's 'ijk,ilm->ijm' sums the k dimension of one operand and the l dimension of the other, and then does a broadcasted elementwise multiply. That is not matrix multiplication:
In [109]: (a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)).shape
Out[109]: (3, 5, 5)
In [110]: np.allclose(a.sum(axis=-1,keepdims=True) * b.sum(axis=1,keepdims=True), np.einsum('ijk,ilm->ijm', a, b))
Out[110]: True
Another test of the batch processing:
In [112]: res=np.zeros((3,5,5))
...: for i in range(3):
...:     res[i] = a[i]@b[i]
...: np.allclose(res, a@b)
Out[112]: True
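Regarding the np.tensordot attempt mentioned in the question: tensordot has no notion of a batch axis; it contracts the chosen axes for every combination of the remaining leading indices. The batched matmul is the diagonal of that bigger result over the two batch axes (a minimal sketch, continuing the session):
In [113]: full = np.tensordot(a, b, axes=(2, 1))  # contract a's axis 2 with b's axis 1
     ...: full.shape
Out[113]: (3, 5, 3, 5)
In [114]: np.allclose(full[np.arange(3), :, np.arange(3), :], a@b)
Out[114]: True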
Very similar to https://math.stackexchange.com/q/3615927/419686, but different.
I have two arrays (A with shape (5,2,3) and B with shape (6,3,8)), and I want to perform some kind of multiplication in order to obtain a new array with shape (5,6,2,8).
Python code:
import numpy as np
np.random.seed(1)
A = np.random.randint(0, 10, size=(5,2,3))
B = np.random.randint(0, 10, size=(6,3,8))
C = np.zeros((5,6,2,8))
for i in range(A.shape[0]):
    for j in range(B.shape[0]):
        C[i,j] = A[i].dot(B[j])
Is it possible to do the above operation without using a loop?
In [52]: np.random.seed(1)
...: A = np.random.randint(0, 10, size=(5,2,3))
...: B = np.random.randint(0, 10, size=(6,3,8))
...:
...: C = np.zeros((5,6,2,8))
...: for i in range(A.shape[0]):
...:     for j in range(B.shape[0]):
...:         C[i,j] = A[i].dot(B[j])
...:
np.dot already handles the outer dimensions, pairing every leading index of A with every leading index of B:
In [53]: D=np.dot(A,B)
In [54]: C.shape
Out[54]: (5, 6, 2, 8)
In [55]: D.shape
Out[55]: (5, 2, 6, 8)
The axes order is different, but we can easily change that:
In [56]: np.allclose(C, D.transpose(0,2,1,3))
Out[56]: True
In [57]: np.allclose(C, np.swapaxes(D,1,2))
Out[57]: True
From the np.dot docs:
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
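The formula can be checked directly for a single set of indices (a quick sketch, continuing the session above):
In [58]: i, j, k, m = 2, 1, 4, 6
In [59]: np.allclose(D[i,j,k,m], np.sum(A[i,j,:] * B[k,:,m]))
Out[59]: True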
Use np.einsum which is very powerful:
C = np.einsum('aij, bjk -> abik', A, B)
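A quick check against the question's double loop (a sketch; the loop result is renamed C_loop here so the einsum output can keep the name C):
C_loop = np.zeros((5,6,2,8))
for i in range(A.shape[0]):
    for j in range(B.shape[0]):
        C_loop[i,j] = A[i].dot(B[j])
print(np.allclose(C_loop, np.einsum('aij, bjk -> abik', A, B)))
# True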
Can someone explain in steps how numpy broadcasting works in this case?
a = np.ones((2,3))
b = np.ones((2,1,3))
c = a-b
a.shape
(2, 3)
b.shape
(2, 1, 3)
c.shape
(2, 2, 3)
Referring to this page, it says that numpy prepends the shape of the lower-rank tensor with 1s, so in this case we have
a.shape = [1,2,3]
Tile a along axis 0 to get a.shape=[2,2,3]
Tile b along axis 1 to get b.shape=[2,2,3]
When the dimensions are the same, subtract
Prepend 1 to a.shape, so a.shape -> (1,2,3)
Stretch a along axis 0 to match b, so a.shape -> (2,2,3)
Stretch b along axis 1 to match a, so b.shape -> (2,2,3)
Subtract
Is that what you're looking for?
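These steps can also be checked programmatically (a minimal sketch; np.broadcast_shapes requires NumPy 1.20+):
import numpy as np
a = np.ones((2, 3))
b = np.ones((2, 1, 3))
# the final broadcast shape, computed without allocating anything
print(np.broadcast_shapes(a.shape, b.shape))  # (2, 2, 3)
# broadcast_arrays materializes the stretched views
aa, bb = np.broadcast_arrays(a, b)
print(aa.shape, bb.shape)  # (2, 2, 3) (2, 2, 3)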
I have a numpy array a of size 5x5x4x5x5. I have another matrix b of size 5x5. I want to get a[i,j,b[i,j]] for i from 0 to 4 and for j from 0 to 4. This will give me a 5x5x1x5x5 matrix. Is there any way to do this without just using 2 for loops?
Let's think of the array a as 100 (= 5 x 5 x 4) matrices of size (5, 5). So, if you can get a linear index for each triplet - (i, j, b[i, j]) - you are done. That's where np.ravel_multi_index comes in. Following is the code.
import numpy as np
import itertools
# create some matrices
a = np.random.randint(0, 10, (5, 5, 4, 5, 5))
b = np.random.randint(0, 4, (5, 5))
# creating all possible triplets - (ind1, ind2, ind3)
inds = list(itertools.product(range(5), range(5)))
(ind1, ind2), ind3 = zip(*inds), b.flatten()
allInds = np.array([ind1, ind2, ind3])
linearInds = np.ravel_multi_index(allInds, (5,5,4))
# reshaping the input array
a_reshaped = np.reshape(a, (100, 5, 5))
# selecting the appropriate indices
res1 = a_reshaped[linearInds, :, :]
# reshaping back into desired shape
res1 = np.reshape(res1, (5, 5, 1, 5, 5))
# verifying with the brute force method
res2 = np.empty((5, 5, 1, 5, 5))
for i in range(5):
    for j in range(5):
        res2[i, j, 0] = a[i, j, b[i, j], :, :]
print(np.all(res1 == res2))  # should print True
There's np.take_along_axis exactly for this purpose -
np.take_along_axis(a,b[:,:,None,None,None],axis=2)
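A quick check against an explicit double loop (a sketch, using a and b as defined above) confirms the shape and values:
out = np.take_along_axis(a, b[:,:,None,None,None], axis=2)
print(out.shape)
# (5, 5, 1, 5, 5)
ref = np.empty((5, 5, 1, 5, 5), dtype=a.dtype)
for i in range(5):
    for j in range(5):
        ref[i, j, 0] = a[i, j, b[i, j]]
print(np.all(out == ref))
# True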