multiplying "across" in two numpy arrays - python

Given two numpy arrays of shape (2, 25) and (2,), one can easily multiply them across:
import numpy as np
a = np.random.rand(2, 25)
b = np.random.rand(2)
(a.T * b).T # ok, shape (2, 25)
I have a similar situation where b is of shape (2, 4), and I'd like to get the same result as above for each of the 4 columns of b. The following works,
a = np.random.rand(25, 2)
b = np.random.rand(2, 4)
c = np.moveaxis([a * bb for bb in b.T], -1, 0) # shape (2, 4, 25)
but I have a hunch that this is possible without moveaxis.
Any ideas?

In [185]: a = np.random.rand(2, 25)
...: b = np.random.rand(2)
The multiplication is possible with broadcasting:
In [186]: a.shape
Out[186]: (2, 25)
In [187]: a.T.shape
Out[187]: (25, 2)
In [189]: (a.T*b).shape
Out[189]: (25, 2)
(25,2) * (2,) => (25,2) * (1,2) => (25,2). The final transpose is itself a moveaxis, changing the result to (2,25).
In your second case:
In [191]: c = np.moveaxis([a * bb for bb in b.T], -1, 0)
In [192]: c.shape
Out[192]: (2, 4, 25)
In [193]: np.array([a * bb for bb in b.T]).shape
Out[193]: (4, 25, 2)
b.T is (4,2), so each bb is (2,); multiplied with the (25,2) a, it produces a (25,2) result as above. The list comprehension then adds the (4,) iteration axis in front.
(25,1,2) * (1,4,2) => (25,4,2), which can be transposed to (2,4,25)
In [195]: (a[:,None]*b.T).shape
Out[195]: (25, 4, 2)
In [196]: np.allclose((a[:,None]*b.T).T,c)
Out[196]: True
(2,4,1) * (2,1,25) => (2,4,25)
In [197]: (b[:,:,None] * a.T[:,None]).shape
Out[197]: (2, 4, 25)
In [198]: np.allclose((b[:,:,None] * a.T[:,None]),c)
Out[198]: True
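As a sanity check on the shape arithmetic above, np.broadcast_shapes (available since NumPy 1.20; not part of the original answer) reproduces these rules without allocating any arrays:
import numpy as np
np.broadcast_shapes((25, 2), (2,))          # -> (25, 2); the (2,) is padded to (1, 2)
np.broadcast_shapes((25, 1, 2), (1, 4, 2))  # -> (25, 4, 2)
np.broadcast_shapes((2, 4, 1), (2, 1, 25))  # -> (2, 4, 25)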

An alternative with numpy.einsum:
np.einsum('ij,jk->jki', a, b)
Check results are the same:
(np.einsum('ij,jk->jki', a, b) == c).all()
True

Related

How to perform outer subtraction along an axis in numpy

I used to perform an outer subtraction on two one-dimensional arrays as follows, receiving a single two-dimensional array that contains all pairs of subtractions:
import numpy as np
a = np.arange(5)
b = np.arange(3)
result = np.subtract.outer(a, b)
assert result.shape == (5, 3)
assert np.all(result == np.array([[aa - bb for bb in b] for aa in a ])) # no rounding errors
Now the state space switches to two dimensions, and I would like to perform the same operation, but only perform each subtraction on the two values on the last axis of the arrays A and B:
import numpy as np
A = np.arange(5 * 2).reshape(-1, 2)
B = np.arange(3 * 2).reshape(-1, 2)
result = np.subtract.outer(A, B)
# This does not hold: np.subtract.outer produces all pairwise subtractions, hence the shape (5, 2, 3, 2).
# I would like to exchange np.subtract.outer such that the following holds:
# assert result.shape == (5, 3, 2)
expected_result = np.array([[aa - bb for bb in B] for aa in A ])
assert expected_result.shape == (5, 3, 2)
# That's what I want to hold:
# assert np.all(result == expected_result) # no rounding errors
Is there a "numpy-only" solution to perform this operation?
You can expand/reshape A to (5, 1, 2) and B to (1, 3, 2) and let the broadcasting do the job:
A[:, None, :] - B[None, :, :]
A[:, None] - B[None, :] does it.
A = np.arange(5 * 2).reshape(-1, 2)
B = np.arange(3 * 2).reshape(-1, 2)
expected_result = np.array([[aa - bb for bb in B] for aa in A ])
C = A[:, None] - B[None, :]
np.allclose(expected_result, C)
#> True
The exact same syntax works for your first example too, because in both cases you are combining every first-axis element of A with every first-axis element of B.
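For instance (a quick check, not part of the original answer), the one-dimensional case from the question reduces to the same pattern:
import numpy as np
a = np.arange(5)
b = np.arange(3)
# (5, 1) - (1, 3) broadcasts to (5, 3), matching np.subtract.outer exactly
assert np.all(a[:, None] - b[None, :] == np.subtract.outer(a, b))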

fill in numpy array without looping through all indices

I want to use a high-dimensional numpy array to store the norms of weighted sums of matrices.
For example:
mat1, mat2, mat3, mat4 = np.random.rand(3, 3), np.random.rand(3, 3), np.random.rand(3, 3), np.random.rand(3, 3)
res = np.empty((8, 7, 6, 5))
for i in range(8):
    for j in range(7):
        for p in range(6):
            for q in range(5):
                res[i, j, p, q] = np.linalg.norm(i * mat1 + j * mat2 + p * mat3 + q * mat4)
Is there a way to avoid this nested loop?
Solution
Here's one way you can do it, via adding axes with None (equivalent to np.newaxis):
def weighted_norms(mat1, mat2, mat3, mat4):
    P = mat1 * np.arange(8)[:, None, None]
    Q = mat2 * np.arange(7)[:, None, None]
    R = mat3 * np.arange(6)[:, None, None]
    S = mat4 * np.arange(5)[:, None, None]
    summation = S + R[:, None] + Q[:, None, None] + P[:, None, None, None]
    return np.linalg.norm(summation, axis=(4, 5))
Veracity and a simple benchmark
In [6]: output = weighted_norms(mat1, mat2, mat3, mat4)
In [7]: np.allclose(output, res)
Out[7]: True
In [8]: %timeit weighted_norms(mat1, mat2, mat3, mat4)
71.3 µs ± 446 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Explanation
By adding two new axes to the np.arange objects, you can force the broadcasting you want, producing 0 * mat1, 1 * mat1, 2 * mat1 ....
The real tricky bit is then constructing the (8, 7, 6, 5, 3, 3) array (which is the shape before evaluating the norm which collapses the last two dimensions).
Notice that the summation of all the weighted 3D arrays starts with the last array, S, and progressively adds more weighted 3D arrays. The way it does this is by adding a new axis to broadcast over at each step.
For example, the shape of S is (5, 3, 3) and in order to correctly add R you need to insert a new axis. So the shape of R goes from (6, 3, 3) to (6, 1, 3, 3). This second dimension of size 1 is what allows us to broadcast the sum of S over R such that each array in the 3D S is added to each array in R (that's one level of nested loop).
Then we need to add Q (for every array in Q, for every array in R, for every array in S), so we need to insert two new axes turning Q from (7, 3, 3) to (7, 1, 1, 3, 3).
Finally, P goes from (8, 3, 3) to (8, 1, 1, 1, 3, 3).
It may help to "visualize" this by overlaying the shapes:
           (5, 3, 3) <- S
            :
+       (6, 1, 3, 3) <- R[:, None]
---------------------
        (6, 5, 3, 3)
         :  :
+    (7, 1, 1, 3, 3) <- Q[:, None, None]
---------------------
     (7, 6, 5, 3, 3)
      :  :  :
+ (8, 1, 1, 1, 3, 3) <- P[:, None, None, None]
---------------------
  (8, 7, 6, 5, 3, 3)
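For comparison, here is a sketch (not part of the original answer) that builds the same (8, 7, 6, 5, 3, 3) array in one expression, using np.ix_ to create the four mutually broadcastable weight grids; mat1 through mat4 are the (3, 3) matrices from the question:
w1, w2, w3, w4 = np.ix_(np.arange(8), np.arange(7), np.arange(6), np.arange(5))
# each weight grid gets two trailing axes so it broadcasts against the (3, 3) matrices
summation = (w1[..., None, None] * mat1 + w2[..., None, None] * mat2
             + w3[..., None, None] * mat3 + w4[..., None, None] * mat4)
out = np.linalg.norm(summation, axis=(-2, -1))  # shape (8, 7, 6, 5)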
Generalizing
Here's a generalized version using a helper function for adding axes just to clean up the code a little:
from typing import Tuple
import numpy as np
def add_axes(x: np.ndarray, n: int) -> np.ndarray:
    """
    Inserts `n` new axes into `x` from axis 1 onward.
    e.g., for `x.shape == (3, 3)`, `add_axes(x, 2) -> (3, 1, 1, 3)`
    """
    return np.expand_dims(x, axis=(*range(1, n + 1),))

def weighted_norms(arrs: Tuple[np.ndarray], weights: Tuple[int]) -> np.ndarray:
    if len(arrs) != len(weights):
        raise ValueError("Number of arrays must match number of weights")
    # start from zeros so the first accumulation is exact (np.empty would add garbage values)
    summation = np.zeros((weights[-1], *arrs[-1].shape))
    for i, (x, w) in enumerate(zip(arrs[::-1], weights[::-1])):
        summation = summation + add_axes(x * add_axes(np.arange(w), 2), i)
    return np.linalg.norm(summation, axis=(-1, -2))
Usage:
In [10]: arrs = (mat1, mat2, mat3, mat4)
In [11]: weights = (8, 7, 6, 5)
In [12]: output = weighted_norms(arrs, weights)
In [13]: np.allclose(output, res)
Out[13]: True
In [14]: %timeit weighted_norms(arrs, weights)
109 µs ± 3.07 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Loopless 3D Array Multiplication

Very similar to https://math.stackexchange.com/q/3615927/419686, but different.
I have 2 arrays (A with shape (5,2,3) and B with shape (6,3,8)), and I want to perform some kind of multiplication in order to obtain a new array with shape (5,6,2,8).
Python code:
import numpy as np
np.random.seed(1)
A = np.random.randint(0, 10, size=(5,2,3))
B = np.random.randint(0, 10, size=(6,3,8))
C = np.zeros((5,6,2,8))
for i in range(A.shape[0]):
    for j in range(B.shape[0]):
        C[i,j] = A[i].dot(B[j])
Is it possible to do the above operation without using a loop?
In [52]: np.random.seed(1)
...: A = np.random.randint(0, 10, size=(5,2,3))
...: B = np.random.randint(0, 10, size=(6,3,8))
...:
...: C = np.zeros((5,6,2,8))
...: for i in range(A.shape[0]):
...:     for j in range(B.shape[0]):
...:         C[i,j] = A[i].dot(B[j])
...:
np.dot already handles the outer dimensions, combining every outer axis of the first argument with every outer axis of the second (an outer product over them):
In [53]: D=np.dot(A,B)
In [54]: C.shape
Out[54]: (5, 6, 2, 8)
In [55]: D.shape
Out[55]: (5, 2, 6, 8)
The axes order is different, but we can easily change that:
In [56]: np.allclose(C, D.transpose(0,2,1,3))
Out[56]: True
In [57]: np.allclose(C, np.swapaxes(D,1,2))
Out[57]: True
From the np.dot docs:
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
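That formula can be spot-checked directly against the D computed above (a quick verification, not part of the original answer; exact equality is fine here because the arrays are integers):
i, j, k, m = 1, 0, 2, 3
assert D[i, j, k, m] == np.sum(A[i, j, :] * B[k, :, m])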
Use np.einsum which is very powerful:
C = np.einsum('aij, bjk -> abik', A, B)
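np.matmul (the @ operator) gives the target axis order directly if A gets an extra batch axis, so that its (5,) and B's (6,) leading dimensions broadcast (a sketch, not part of the original answers):
C = A[:, None] @ B  # (5, 1, 2, 3) @ (6, 3, 8) -> (5, 6, 2, 8)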

numpy.dot as part of a vectorized operation

Say I have three numpy arrays and I want to perform a calculation over them:
a = np.array([[1,2,3,4,5,6,7],[1,2,3,4,5,6,7],[1,2,3,4,5,6,7],[1,2,3,4,5,6,7],
              [1,2,3,4,5,6,7]]) #shape is (5,7)
b = np.array([[11],[12],[11],[12],[11]]) #shape is (5,1)
c = np.array([[10],[20],[30],[40],[50],[60],[70]]) #shape is (7,1)
The calculation is: 10 + (b(rows) * (c . a(rows))),
where c . a is the dot product of c and a row of a.
By rows, I mean doing it as a vector operation, where I need my result to be (7,1) (one row for each column of a).
I'm trying to do something like:
result = 10 + (b[:][:] * (np.dot(c.T, a[:]) + b))
But this fails in the np.dot call with misaligned shapes. I'm trying to figure out how to perform the calculation above as a one-liner (no for loops) in a way that Python will treat as a vectorized operation, especially the np.dot part.
Any hints?
Thanks for your time
EDIT: this is a for loop that solves my problem. I'd like to replace that for loop with one Python line.
iBatchSize = a.shape[0]
iFeatureCount = a.shape[1]
result = np.zeros((iBatchSize,1))
for i in range(iBatchSize):
    for j in range(iFeatureCount):
        result[i] = 10 + (b[i][0] * (np.dot(c.T, a[i]) + b))
EDIT 2: Corrected array a with the correct array
EDIT 3: Corrected expected shape for result
In [31]: a = np.array([[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8]]) #shape is (5,7)
...: b = np.array([[11],[12],[11],[12],[11]]) #shape is (5,1)
...: c = np.array([[10],[20],[30],[40],[50],[60],[70]]) #shape is (7,1)
In [32]: a.shape, b.shape, c.shape
Out[32]: ((7, 2), (5, 1), (7, 1))
a.shape does not match the comment.
In [33]: iBatchSize = a.shape[0]
...: iFeatureCount = a.shape[1]
...:
...: result = np.zeros((iBatchSize,1))
...:
...: for i in range(iBatchSize):
...:     for j in range(iFeatureCount):
...:         result [i] = 10 + (b[i][0] * (np.dot(c.T, a[i]) + b))
...:
Traceback (most recent call last):
File "<ipython-input-33-717691add3dd>", line 8, in <module>
result [i] = 10 + (b[i][0] * (np.dot(c.T, a[i]) + b))
File "<__array_function__ internals>", line 6, in dot
ValueError: shapes (1,7) and (2,) not aligned: 7 (dim 1) != 2 (dim 0)
np.dot is raising that error. It expects the last dimension of the first argument to match the second-to-last (or only) dimension of the second argument:
In [34]: i
Out[34]: 0
In [35]: c.T.shape
Out[35]: (1, 7)
In [37]: a[i].shape
Out[37]: (2,)
This dot works:
In [38]: np.dot(c.T,a).shape # (1,7) with (7,2) => (1,2)
Out[38]: (1, 2)
====
With the correct a,
10 + (b[i][0] * (np.dot(c.T, a[i]) + b))
is a (5,1) array (because of the + b), which can't be assigned to result[i].
===
A simple dot of a and c produces a (5,1) result, which can be combined with b (with + or *, or both), giving a (5,1) array:
In [68]: np.dot(a,c).shape
Out[68]: (5, 1)
In [69]: b*(np.dot(a,c)+b)
Out[69]:
array([[15521],
[16944],
[15521],
[16944],
[15521]])
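Putting that together, a candidate one-liner (a sketch of what the loop appears to intend, assuming a per-row (5,1) result is the goal):
result = 10 + b * (np.dot(a, c) + b)  # shape (5, 1)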

Tri-dimensional array as multiplication of vector and matrix

I have an array A (shape = (a, 1)) and a matrix B (shape = (b1, b2)). I want to multiply the latter by each element of the former to generate a tridimensional array (shape = (a, b1, b2)).
Is there a vectorized way to do this?
import numpy as np
A = np.random.rand(3, 1)
B = np.random.rand(5, 4)
C = np.array([ a * B for a in A ])
There are several ways you can achieve this.
One is using np.dot; note that it is necessary to introduce a second axis in B so both ndarrays can be multiplied:
C = np.dot(A,B[:,None])
print(C.shape)
# (3, 5, 4)
Using np.multiply.outer, as @divakar suggests; A has to be flattened to (3,) first, because outer concatenates the full shapes of both inputs:
C = np.multiply.outer(A.ravel(), B)
print(C.shape)
# (3, 5, 4)
Or you could also use np.einsum:
C = np.einsum('ij,kl->ikl', A, B)
print(C.shape)
# (3, 5, 4)
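Plain broadcasting also works here (a quick sketch, not from the original answer): one extra axis lines A's (3, 1) shape up against B's (5, 4):
C = A[:, None] * B  # (3, 1, 1) * (5, 4) -> (3, 5, 4)
print(C.shape)
# (3, 5, 4)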
