Numpy normalize multi dim (>=3) array - python

I have a 5-dim array (it comes from binning operations) and would like to have it normalized (sum == 1 along the last dimension).
I thought I found the answer here but it says:
ValueError: Found array with dim 5. the normalize function expected <= 2.
I achieve the result with 5 nested loops, like:
for en in range(en_bin.nb):
    for zd in range(zd_bin.nb):
        for az in range(az_bin.nb):
            for oa in range(oa_bin.nb):
                # reduce fifth dimension (en reco) for normalization
                b = np.sum(a[en][zd][az][oa])
                for er in range(er_bin.nb):
                    a[en][zd][az][oa][er] /= b
but I want to vectorise operations.
For example:
In [18]: a.shape
Out[18]: (3, 1, 1, 2, 4)
In [20]: b.shape
Out[20]: (3, 1, 1, 2)
In [22]: a
Out[22]:
array([[[[[ 0.90290316,  0.00953237,  0.57925688,  0.65402645],
          [ 0.68826638,  0.04982717,  0.30458093,  0.0025204 ]]]],
       [[[[ 0.7973917 ,  0.93050739,  0.79963614,  0.75142376],
          [ 0.50401287,  0.81916812,  0.23491561,  0.77206141]]]],
       [[[[ 0.44507296,  0.06625994,  0.6196917 ,  0.6808444 ],
          [ 0.8199077 ,  0.02179789,  0.24627425,  0.43382448]]]]])
In [23]: b
Out[23]:
array([[[[ 2.14571886,  1.04519487]]],
       [[[ 3.27895899,  2.33015801]]],
       [[[ 1.81186899,  1.52180432]]]])

Sum along the last axis by specifying axis=-1 with numpy.sum, keeping dimensions, and then simply divide the array by it, thus bringing in NumPy broadcasting -
a/a.sum(axis=-1,keepdims=True)
This is applicable to ndarrays with a generic number of dimensions.
Alternatively, we could sum with axis-reduction and then add a new axis with None/np.newaxis to match up with the input array shape and then divide -
a/(a.sum(axis=-1)[...,None])
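As a quick check, here is a minimal sketch (with a small random array standing in for the binned data above) verifying that both one-liners agree and produce sums of 1 along the last axis:
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((3, 1, 1, 2, 4))

# keepdims=True keeps the summed axis as size 1, so the division broadcasts
normed = a / a.sum(axis=-1, keepdims=True)

# equivalent: reduce the axis, then reinsert it with None/np.newaxis
normed2 = a / (a.sum(axis=-1)[..., None])

print(np.allclose(normed, normed2))           # True
print(np.allclose(normed.sum(axis=-1), 1.0))  # True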

Delete all zeros slices from 4d numpy array

I intend to remove slices from the third dimension of a 4D numpy array if they contain only zeros.
I have a 4D numpy array of dimensions [256, 256, 336, 6] and I need to delete the slices in the third dimension that contain only zeros. So the result would have a shape like, e.g., [256, 256, 300, 6] if 36 slices are fully zeros. I have tried multiple approaches including for loops, np.delete and the all(), any() functions, without success.
You need to reduce on all axes but the one you are interested in.
An example using np.any() where there are all-zero subarrays along axis 1 (at positions 0 and 2):
import numpy as np
a = np.ones((2, 3, 2, 3))
a[:, 0, :, :] = a[:, 2, :, :] = 0
mask = np.any(a, axis=(0, 2, 3))
new_a = a[:, mask, :, :]
print(new_a.shape)
# (2, 1, 2, 3)
print(new_a)
# [[[[1. 1. 1.]
#    [1. 1. 1.]]]
#
#
#  [[[1. 1. 1.]
#    [1. 1. 1.]]]]
The same code parametrized and refactored as a function:
def remove_all_zeros(arr: np.ndarray, axis: int) -> np.ndarray:
    red_axes = tuple(i for i in range(arr.ndim) if i != axis)
    mask = np.any(arr, axis=red_axes)
    slicing = tuple(slice(None) if i != axis else mask for i in range(arr.ndim))
    return arr[slicing]
a = np.ones((2, 3, 2, 3))
a[:, 0, :, :] = a[:, 2, :, :] = 0
new_a = remove_all_zeros(a, 1)
print(new_a.shape)
# (2, 1, 2, 3)
print(new_a)
# [[[[1. 1. 1.]
#    [1. 1. 1.]]]
#
#
#  [[[1. 1. 1.]
#    [1. 1. 1.]]]]
I'm not an aficionado with numpy, but does this do what you want?
I take the following small example array with 4 dimensions, all full of 1s, and then I set some slices to zero:
import numpy as np
a=np.ones((4,4,5,2))
The shape of a is:
>>> a.shape
(4, 4, 5, 2)
I will artificially set some of the slices in dimension 3 to zero:
a[:,:,0,:]=0
a[:,:,3,:]=0
I can find the indices of the slices that are not all zeros by calculating sums (not very efficient, perhaps!):
indices = [i for i in range(a.shape[2]) if a[:,:,i,:].sum() != 0]
>>> indices
[1, 2, 4]
So, in your general case you could do this:
indices = [i for i in range(a.shape[2]) if a[:,:,i,:].sum() != 0]
a_new = a[:, :, indices, :].copy()
Then the shape of a_new is:
>>> a_new.shape
(4, 4, 3, 2)
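For the shape in the question, a minimal sketch of the same masking idea (axis 2, with a small stand-in array since the real variable name and data are not shown) would be:
import numpy as np

# small stand-in for the real [256, 256, 336, 6] data:
# same third-dimension layout, smaller leading dimensions
arr = np.ones((4, 4, 336, 6))
arr[:, :, 300:, :] = 0  # make the last 36 slices all zeros

# True where a slice along axis 2 has at least one non-zero value
mask = np.any(arr, axis=(0, 1, 3))
print(arr[:, :, mask, :].shape)  # (4, 4, 300, 6)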

How can I manipulate a numpy array without nested loops?

If I have an MxN numpy array denoted arr, I wish to index over all elements and adjust the values like so:
for m in range(arr.shape[0]):
    for n in range(arr.shape[1]):
        arr[m, n] += x**2 * np.cos(2 * np.pi * m) * np.sin(2 * np.pi * n)
Where x is a random float.
Is there a way to broadcast this over the entire array without needing to loop, thus speeding up the run time?
You are just adding zeros, because sin(2*pi*k) = 0 for integer k.
However, if you want to vectorize this, the function np.meshgrid could help you.
Check the following example, where I removed the 2*pi in the trigonometric functions so that something non-zero is added.
x = 2
arr = np.arange(12, dtype=float).reshape(4, 3)
n, m = np.meshgrid(np.arange(arr.shape[1]), np.arange(arr.shape[0]), sparse=True)
arr += x**2 * np.cos(m) * np.sin(n)
arr
Edit: use the sparse argument to reduce memory consumption.
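As a sanity check, a small sketch comparing the broadcasted version against the original nested loops (with the 2*pi removed in both, per the note above):
import numpy as np

x = 2
arr = np.arange(12, dtype=float).reshape(4, 3)

# reference result: the nested loops from the question
expected = arr.copy()
for i in range(expected.shape[0]):
    for j in range(expected.shape[1]):
        expected[i, j] += x**2 * np.cos(i) * np.sin(j)

# vectorized version via meshgrid
n, m = np.meshgrid(np.arange(arr.shape[1]), np.arange(arr.shape[0]), sparse=True)
arr += x**2 * np.cos(m) * np.sin(n)

print(np.allclose(arr, expected))  # True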
You can use a nested list comprehension to build the two-dimensional array:
import numpy as np
from random import random
x = random()
n, m = 10,20
arr = [[x**2 * np.cos(2*np.pi*j) * np.sin(2*np.pi*i) for j in range(m)] for i in range(n)]
In [156]: arr = np.ones((2, 3))
Replace the range with arange:
In [157]: m, n = np.arange(arr.shape[0]), np.arange(arr.shape[1])
And change the first array to (2,1) shape. A (2,1) array broadcasts with a (3,) to produce a (2,3) result.
In [158]: A = 0.23**2 * np.cos(m[:, None]) * np.sin(n)
In [159]: A
Out[159]:
array([[0. , 0.04451382, 0.04810183],
[0. , 0.02405092, 0.02598953]])
In [160]: arr + A
Out[160]:
array([[1. , 1.04451382, 1.04810183],
[1. , 1.02405092, 1.02598953]])
The meshgrid suggested in the accepted answer does the same thing:
In [161]: np.meshgrid(m, n, sparse=True, indexing="ij")
Out[161]:
[array([[0],
[1]]),
array([[0, 1, 2]])]
This broadcasting may be clearer with:
In [162]: m, n
Out[162]: (array([0, 1]), array([0, 1, 2]))
In [163]: m[:, None] * 10 + n
Out[163]:
array([[ 0, 1, 2],
[10, 11, 12]])

How to perform matrix multiplication between two 3D tensors along the first dimension?

I wish to compute the dot product between two 3D tensors along the first dimension. I tried the following einsum notation:
import numpy as np
a = np.random.randn(30).reshape(3, 5, 2)
b = np.random.randn(30).reshape(3, 2, 5)
# Expecting shape: (3, 5, 5)
np.einsum("ijk,ikj->ijj", a, b)
Sadly it returns this error:
ValueError: einstein sum subscripts string includes output subscript 'j' multiple times
I went with Einstein sum after I failed at it with np.tensordot. Ideas and follow up questions are highly welcome!
Your two dimensions of size 5 and 5 do not correspond to the same axes. As such you need to use two different subscripts to designate them. For example, you can do:
>>> res = np.einsum('ijk,ilm->ijm', a, b)
>>> res.shape
(3, 5, 5)
Notice you are also required to change the subscript for the two axes of size 2. This is because you are computing a batched outer product: with separate subscripts (k and l) the two axes are reduced independently, whereas a dot product iterates over both axes simultaneously via a shared subscript.
Outer product:
>>> np.einsum('ijk,ilm->ijm', a, b)
Dot product over subscript k, which is axis=2 of a and axis=1 of b:
>>> np.einsum('ijk,ikm->ijm', a, b)
which is equivalent to a@b.
dot product ... along the first dimension is a bit unclear. Is the first dimension a 'batch' dimension, with 3 dots on the rest? Or something else?
In [103]: a = np.random.randn(30).reshape(3, 5, 2)
...: b = np.random.randn(30).reshape(3, 2, 5)
In [104]: (a@b).shape
Out[104]: (3, 5, 5)
In [105]: np.einsum('ijk,ikl->ijl',a,b).shape
Out[105]: (3, 5, 5)
#Ivan's answer is different:
In [106]: np.einsum('ijk,ilm->ijm', a, b).shape
Out[106]: (3, 5, 5)
In [107]: np.allclose(np.einsum('ijk,ilm->ijm', a, b), a@b)
Out[107]: False
In [108]: np.allclose(np.einsum('ijk,ikl->ijl', a, b), a@b)
Out[108]: True
Ivan's sums the k dimension of one, and l of the other, and then does a broadcasted elementwise multiplication. That is not matrix multiplication:
In [109]: (a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)).shape
Out[109]: (3, 5, 5)
In [110]: np.allclose((a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)), np.einsum('ijk,ilm->ijm', a, b))
Out[110]: True
Another test of the batch processing:
In [112]: res = np.zeros((3,5,5))
     ...: for i in range(3):
     ...:     res[i] = a[i]@b[i]
     ...: np.allclose(res, a@b)
Out[112]: True
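As a side note on the np.tensordot attempt mentioned in the question: tensordot has no notion of a batch dimension, so contracting axis 2 of a with axis 1 of b pairs every batch of a with every batch of b. The batched matmul is the diagonal of that result over the two batch axes, as this sketch shows:
import numpy as np

a = np.random.randn(3, 5, 2)
b = np.random.randn(3, 2, 5)

# tensordot contracts k but crosses the batch axes: shape (3, 5, 3, 5)
full = np.tensordot(a, b, axes=([2], [1]))
print(full.shape)  # (3, 5, 3, 5)

# the diagonal over the two batch axes recovers the batched matmul
batched = np.einsum('ijil->ijl', full)
print(np.allclose(batched, a @ b))  # True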

Why does dim=1 return row indices in torch.argmax?

I am working with the argmax function of PyTorch, which is defined as:
torch.argmax(input, dim=None, keepdim=False)
Consider an example
a = torch.randn(4, 4)
print(a)
print(torch.argmax(a, dim=1))
Here, when I use dim=1, instead of searching along column vectors the function searches along row vectors, as shown below.
print(a) :
tensor([[-1.7739,  0.8073,  0.0472, -0.4084],
        [ 0.6378,  0.6575, -1.2970, -0.0625],
        [ 1.7970, -1.3463,  0.9011, -0.8704],
        [ 1.5639,  0.7123,  0.0385,  1.8410]])
print(torch.argmax(a, dim=1))
tensor([1, 1, 0, 3])
As far as my assumption goes, dim=0 represents rows and dim=1 represents columns.
It's time to correctly understand how the axis or dim argument works in PyTorch.
The following diagram and example should make it clear: dim-0 runs down the rows, and dim-1 runs across the columns.

         ------> dim-1 ------>
  |    [[-1.7739,  0.8073,  0.0472, -0.4084],
dim-0   [ 0.6378,  0.6575, -1.2970, -0.0625],
  |     [ 1.7970, -1.3463,  0.9011, -0.8704],
  v     [ 1.5639,  0.7123,  0.0385,  1.8410]]
# argmax (indices where max values are present) along dimension-1
In [215]: torch.argmax(a, dim=1)
Out[215]: tensor([1, 1, 0, 3])
Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.
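As a small illustration of that correspondence, a sketch with made-up values:
import numpy as np
import torch

a = np.array([[1, 5, 3],
              [9, 2, 4]])
t = torch.from_numpy(a)

print(np.argmax(a, axis=1))    # [1 0]
print(torch.argmax(t, dim=1))  # tensor([1, 0])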
Dimensions are defined as shown in the above excellent answer. The way I understand dimensions in Torch and NumPy (dim and axis respectively) follows below, and I hope it will be helpful to others.
Notice that only the specified dimension's index varies during the argmax operation, and that the specified dimension's index range reduces to a single index once the operation is completed. Let tensor A have M rows and N columns, and consider the sum operation for simplicity. The shape of A is (M, N). If dim=0 is specified, then the vectors A[0,:], A[1,:], ..., A[M-1,:] are summed elementwise, and the result is another tensor with 1 row and N columns. Notice that only the 0th dimension's indices vary, from 0 through M-1. Similarly, if dim=1 is specified, then the vectors A[:,0], A[:,1], ..., A[:,N-1] are summed elementwise, and the result is another tensor with M rows and 1 column.
An example is given below:
>>> A = torch.tensor([[1,2,3], [4,5,6]])
>>> A
tensor([[1, 2, 3],
[4, 5, 6]])
>>> S0 = torch.sum(A, dim = 0)
>>> S0
tensor([5, 7, 9])
>>> S1 = torch.sum(A, dim = 1)
>>> S1
tensor([ 6, 15])
In the above sample code, the first sum operation specifies dim=0, therefore A[0,:] and A[1,:], which are [1,2,3] and [4,5,6], are summed, resulting in [5, 7, 9]. When dim=1 is specified, the vectors A[:,0], A[:,1], and A[:,2], which are [1, 4], [2, 5], and [3, 6], are elementwise added to give [6, 15].
Note also that the specified dimension collapses. Again let A have the shape (M, N). If dim=0, then the result will have the shape (1, N), where dimension 0 is reduced from M to 1. Similarly, if dim=1, then the result will have the shape (M, 1), where N is reduced to 1. Note also that, because the reduced dimension is dropped by default (keepdim=False), the (1, N) and (M, 1) results are actually returned as one-dimensional tensors with N and M elements respectively.
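A minimal sketch of that collapse, together with the keepdim=True variant that preserves the reduced dimension as size 1:
import torch

A = torch.tensor([[1, 2, 3], [4, 5, 6]])

print(torch.sum(A, dim=0).shape)                # torch.Size([3])
print(torch.sum(A, dim=0, keepdim=True).shape)  # torch.Size([1, 3])
print(torch.sum(A, dim=1, keepdim=True))        # tensor([[ 6],
                                                #         [15]])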

Array size difference Python

My question is about Python array shape.
What is the difference between array shapes (2,) and (2, 1)?
I tried to add those two arrays together. However, I got an error as follows:
Non-broadcastable output operand with shape (2,) doesn't match the broadcast shape (2, 2)
There is no difference in the raw memory. But logically, one is a one-dimensional array of two values, the other is a 2D array (where one of the dimensions just happens to be size 1).
The logical distinction is important to numpy; when you try to add them, it wants to make a new 2x2 array where the top row is the sum of the (2, 1) array's top "row" with each value in the (2,) array. If you use += to do that though, you're indicating that you expect to be able to modify the (2,) array in place, which is not possible without resizing (which numpy won't do). If you change your code from:
arr1 += arr2
to:
arr1 = arr1 + arr2
it will happily create a new (2, 2) array. Or if the goal was that the 2x1 array should act like a flat 1D array, you can flatten it:
alreadyflatarray += twodarray.flatten()
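A concrete sketch of those three options (the values are made up, since the question didn't include the arrays):
import numpy as np

arr1 = np.array([1.0, 4.0])        # shape (2,)
arr2 = np.array([[10.0], [20.0]])  # shape (2, 1)

# rebinding is fine: broadcasting yields a new (2, 2) array
out = arr1 + arr2
print(out.shape)  # (2, 2)

# in-place fails: the (2,) output can't hold a (2, 2) result
try:
    arr1 += arr2
except ValueError as e:
    print(e)

# treating the (2, 1) array as flat makes in-place addition work again
arr1 += arr2.flatten()
print(arr1)  # [11. 24.]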
(2,) is a one-dimensional array, (2, 1) is a matrix with only one column.
You can easily see the difference by creating arrays full of zeros using np.zeros, passing the desired shape:
>>> np.zeros((2,))
array([0., 0.])
>>> np.zeros((2,1))
array([[0.],
       [0.]])
@yx131, you can have a look at the code below to get a clear picture of tuples and their use in defining the shape of numpy arrays.
Note: do not forget to look at the code below, as it explains the problems related to broadcasting in numpy.
Also check numpy's broadcasting rules at
https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html.
There's a difference between (2) and (2,). The first one is the literal value 2, whereas the second one is a tuple.
(2,) is a 1-item tuple and (2, 2) is a 2-item tuple. This is clear in the code example.
Note: in the case of numpy arrays, (2,) denotes the shape of a 1-dimensional array of 2 items and (2, 2) denotes the shape of a 2-dimensional array (matrix) with 2 rows and 2 columns. If you want to add 2 arrays, their shapes should be the same or at least broadcast-compatible.
v = (2)      # assignment of the value 2; parentheses alone don't make a tuple
t = (2,)     # the trailing comma defines a 1-item tuple (not needed for a list)
t2 = (2, 1)  # 2-item tuple
t3 = (3, 4)  # 2-item tuple
print(v, type(v))
print(t, type(t))
print(t2, type(t2))
print(t3, type(t3))
print(t + t2)
print(t2 + t3)
"""
2 <class 'int'>
(2,) <class 'tuple'>
(2, 1) <class 'tuple'>
(3, 4) <class 'tuple'>
(2, 2, 1)
(2, 1, 3, 4)
"""
Now, let's have a look at the code below to understand the error related to broadcasting. It's all about dimensions.
# Python 3.5.2
import numpy as np
arr1 = np.array([1, 4])
arr2 = np.array([7, 6, 3, 8])
arr3 = np.array([3, 6, 2, 1])
print(arr1, ':', arr1.shape)
print(arr2, ":", arr2.shape)
print(arr3, ":", arr3.shape)
print ("\n")
"""
[1 4] : (2,)
[7 6 3 8] : (4,)
[3 6 2 1] : (4,)
"""
# Changing shapes (dimensions)
arr1.shape = (2, 1)
arr2.shape = (2, 2)
arr3.shape = (2, 2)
print(arr1, ':', arr1.shape)
print(arr2, ":", arr2.shape)
print(arr3, ":", arr3.shape)
print("\n")
print(arr1 + arr2)
"""
[[1]
 [4]] : (2, 1)
[[7 6]
 [3 8]] : (2, 2)
[[3 6]
 [2 1]] : (2, 2)
[[ 8  7]
 [ 7 12]]
"""
arr1.shape = (2, )
print(arr1, arr1.shape)
print(arr1 + arr2)
"""
[1 4] (2,)
[[ 8 10]
 [ 4 12]]
"""
# Code with error(Broadcasting related)
arr2.shape = (4,)
print(arr1+arr2)
"""
Traceback (most recent call last):
File "source_file.py", line 53, in <module>
print(arr1+arr2)
ValueError: operands could not be broadcast together with shapes (2,) (4,)
"""
So in your case, the problem is the mismatched dimensions (according to numpy's broadcasting rules) of the arrays being added. Thanks.
Make an array that has shape (2,)
In [164]: a = np.array([3,6])
In [165]: a
Out[165]: array([3, 6])
In [166]: a.shape
Out[166]: (2,)
In [167]: a.reshape(2,1)
Out[167]:
array([[3],
       [6]])
In [168]: a.reshape(1,2)
Out[168]: array([[3, 6]])
The first displays like a simple list [3,6]. The second as a list with 2 nested lists. The third as a list with one nested list of 2 items. So there is a consistent relation between shape and list nesting.
In [169]: a + a
Out[169]: array([ 6, 12]) # shape (2,)
In [170]: a + a.reshape(1,2)
Out[170]: array([[ 6, 12]]) # shape (1,2)
In [171]: a + a.reshape(2,1)
Out[171]:
array([[ 6,  9],    # shape (2,2)
       [ 9, 12]])
Dimensions behave as:
(2,) + (2,) => (2,)
(2,) + (1,2) => (1,2) + (1,2) => (1,2)
(2,) + (2,1) => (1,2) + (2,1) => (2,2) + (2,2) => (2,2)
That is, a lower-dimensional array can be expanded to the matching number of dimensions by adding leading size-1 dimensions.
And size-1 dimensions can be stretched to match the corresponding dimension.
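These rules can be checked directly without building any arrays; np.broadcast_shapes (available in NumPy 1.20+) reports the broadcast result:
import numpy as np

print(np.broadcast_shapes((2,), (2,)))    # (2,)
print(np.broadcast_shapes((2,), (1, 2)))  # (1, 2)
print(np.broadcast_shapes((2,), (2, 1)))  # (2, 2)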
I suspect you got the error when doing a += ... (if so, you should have stated that clearly).
In [172]: a += a
In [173]: a += a.reshape(1,2)
....
ValueError: non-broadcastable output operand with shape (2,)
doesn't match the broadcast shape (1,2)
In [175]: a += a.reshape(2,1)
...
ValueError: non-broadcastable output operand with shape (2,)
doesn't match the broadcast shape (2,2)
With the a+=... addition, the result shape is fixed at (2,), the shape of a. But as noted above the two additions generate (1,2) and (2,2) results, which aren't compatible with (2,).
The same reasoning can explain these additions and errors:
In [176]: a1 = a.reshape(1,2)
In [177]: a1 += a
In [178]: a1
Out[178]: array([[12, 24]])
In [179]: a2 = a.reshape(2,1)
In [180]: a2 += a
...
ValueError: non-broadcastable output operand with shape (2,1)
doesn't match the broadcast shape (2,2)
In [182]: a1 += a2
...
ValueError: non-broadcastable output operand with shape (1,2)
doesn't match the broadcast shape (2,2)
