In numpy.sum() there is a parameter called keepdims. What does it do?
As you can see here in the documentation:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html
numpy.sum(a, axis=None, dtype=None, out=None, keepdims=False)
Sum of array elements over a given axis.
Parameters:
...
keepdims : bool, optional
If this is set to True, the axes which are reduced are left in the result as
dimensions with size one. With this option, the result will broadcast
correctly against the input array.
...
@Ney
@hpaulj is correct, you need to experiment, but I suspect you don't realize that summation for some arrays can occur along axes. Observe the following while reading the documentation:
>>> a
array([[0, 0, 0],
       [0, 1, 0],
       [0, 2, 0],
       [1, 0, 0],
       [1, 1, 0]])
>>> np.sum(a, keepdims=True)
array([[6]])
>>> np.sum(a, keepdims=False)
6
>>> np.sum(a, axis=1, keepdims=True)
array([[0],
       [1],
       [2],
       [1],
       [2]])
>>> np.sum(a, axis=1, keepdims=False)
array([0, 1, 2, 1, 2])
>>> np.sum(a, axis=0, keepdims=True)
array([[2, 4, 0]])
>>> np.sum(a, axis=0, keepdims=False)
array([2, 4, 0])
You will notice that if you don't specify an axis (the first two examples), the numerical result is the same, but keepdims=True returned a 2D array containing the number 6, whereas the second call returned a scalar.
Similarly, when summing along axis 1 (across rows), a 2D array is returned again when keepdims=True.
The last example, along axis 0 (down columns), shows the same characteristic: dimensions are kept when keepdims=True.
Studying axes and their properties is critical to a full understanding of the power of NumPy when dealing with multidimensional data.
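The "broadcast correctly against the input array" line from the docs is the practical payoff. A minimal sketch, reusing the array a from above: keeping the reduced axis as size 1 lets the sums line up with a under the broadcasting rules.
import numpy as np

a = np.array([[0, 0, 0],
              [0, 1, 0],
              [0, 2, 0],
              [1, 0, 0],
              [1, 1, 0]])

row_sums = a.sum(axis=1, keepdims=True)   # shape (5, 1)

# (5, 3) minus (5, 1) broadcasts cleanly across the columns:
centered = a - row_sums

# Without keepdims this would fail: (5, 3) minus (5,) tries to align
# the 3 columns against 5 sums and raises a ValueError.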
An example showing keepdims in action when working with higher dimensional arrays. Let's see how the shape of the array changes as we do different reductions:
import numpy as np
a = np.random.rand(2,3,4)
a.shape
# => (2, 3, 4)
# Note: axis=0 refers to the first dimension of size 2
# axis=1 refers to the second dimension of size 3
# axis=2 refers to the third dimension of size 4
a.sum(axis=0).shape
# => (3, 4)
# Simple sum over the first dimension, we "lose" that dimension
# because we did an aggregation (sum) over it
a.sum(axis=0, keepdims=True).shape
# => (1, 3, 4)
# Same sum over the first dimension, but instead of "losing" that
# dimension, it becomes 1.
a.sum(axis=(0,2)).shape
# => (3,)
# Here we "lose" two dimensions
a.sum(axis=(0,2), keepdims=True).shape
# => (1, 3, 1)
# Here the two dimensions become 1 respectively
I want to create a 2D tensor (or NumPy array, it doesn't really matter) where every row is a cyclically shifted version of the first row. I do it using a for loop:
import torch
import numpy as np
a = np.random.rand(33, 11)
miss_size = 64
lp_order = a.shape[1] - 1
inv_a = -np.flip(a, axis=1)
mtx_size = miss_size+lp_order # some constant
mtx_row = torch.cat((torch.from_numpy(inv_a), torch.zeros((a.shape[0], miss_size - 1 + a.shape[1]))), dim=1)
mtx_full = mtx_row.unsqueeze(1)
for i in range(mtx_size):
    mtx_row = torch.roll(mtx_row, 1, 1)
    mtx_full = torch.cat((mtx_full, mtx_row.unsqueeze(1)), dim=1)
The unsqueezing is needed because I stack 2D tensors into a 3D tensor.
Is there a more efficient way to do that? Maybe a linear algebra trick or a more Pythonic approach.
You can use scipy.linalg.circulant():
scipy.linalg.circulant([1, 2, 3])
# array([[1, 3, 2],
# [2, 1, 3],
# [3, 2, 1]])
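For the batched case in the question, one option (a sketch, with a small random stand-in for the padded rows; the sizes are arbitrary) is to build one circulant block per row and stack them:
import numpy as np
import torch
from scipy.linalg import circulant

rows = np.random.rand(4, 7)                        # (batch, n) stand-in

# One (n, n) circulant block per row; circulant() shifts along the
# columns, so transpose each block to make the *rows* the cyclic shifts.
blocks = np.stack([circulant(r).T for r in rows])  # (batch, n, n)
mtx_full = torch.from_numpy(blocks)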
I believe you can achieve this using torch.gather by constructing the appropriate index tensor. This approach works with batches too.
If we take this approach, the objective is to construct an index tensor where each value refers to an index in mtx_row (along the last dimension here dim=1). In this case, it would be shaped (3, 3):
tensor([[0, 1, 2],
        [2, 0, 1],
        [1, 2, 0]])
You can achieve this by broadcasting torch.arange against its own transpose and applying modulo to the resulting matrix (here n is the row length, i.e. n = 3):
>>> idx = (n - torch.arange(n)[None].T + torch.arange(n)[None]) % n
>>> idx
tensor([[0, 1, 2],
        [2, 0, 1],
        [1, 2, 0]])
Let mtx_row be shaped (2, 3):
>>> mtx_row
tensor([[0.3523, 0.0170, 0.1875],
        [0.2156, 0.7773, 0.4563]])
From there you need to expand idx and mtx_row so they have the same shapes:
>>> idx_ = idx[None].expand(len(mtx_row), -1, -1)
>>> val_ = mtx_row[:, None].expand(-1, n, -1)
Then we can apply torch.gather on the last dimension dim=2:
>>> val_.gather(-1, idx_)
tensor([[[0.3523, 0.0170, 0.1875],
         [0.1875, 0.3523, 0.0170],
         [0.0170, 0.1875, 0.3523]],

        [[0.2156, 0.7773, 0.4563],
         [0.4563, 0.2156, 0.7773],
         [0.7773, 0.4563, 0.2156]]])
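For what it's worth, the same index tensor also works with plain advanced indexing, which avoids the two expand() calls (a sketch, assuming n = mtx_row.shape[1]):
import torch

n = 3
mtx_row = torch.rand(2, n)
idx = (n - torch.arange(n)[None].T + torch.arange(n)[None]) % n

# Indexing the last dimension with the (n, n) index tensor broadcasts
# it over the batch, giving the same (batch, n, n) result as gather:
out = mtx_row[:, idx]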
I wish to compute the dot product between two 3D tensors along the first dimension. I tried the following einsum notation:
import numpy as np
a = np.random.randn(30).reshape(3, 5, 2)
b = np.random.randn(30).reshape(3, 2, 5)
# Expecting shape: (3, 5, 5)
np.einsum("ijk,ikj->ijj", a, b)
Sadly it returns this error:
ValueError: einstein sum subscripts string includes output subscript 'j' multiple times
I went with Einstein sum after I failed at it with np.tensordot. Ideas and follow up questions are highly welcome!
Your two dimensions of size 5 and 5 do not correspond to the same axes. As such you need to use two different subscripts to designate them. For example, you can do:
>>> res = np.einsum('ijk,ilm->ijm', a, b)
>>> res.shape
(3, 5, 5)
Notice you are also required to change the subscripts for the axes of size 2. This is because 'ijk,ilm->ijm' computes a batched outer product (the two summed axes, k and l, are iterated independently of each other), not a dot product (where a single shared axis is iterated simultaneously in both operands).
Outer product:
>>> np.einsum('ijk,ilm->ijm', a, b)
Dot product over subscript k, which is axis=2 of a and axis=1 of b:
>>> np.einsum('ijk,ikm->ijm', a, b)
which is equivalent to a@b.
"dot product ... along the first dimension" is a bit unclear. Is the first dimension a 'batch' dimension, with 3 dots on the rest? Or something else?
In [103]: a = np.random.randn(30).reshape(3, 5, 2)
...: b = np.random.randn(30).reshape(3, 2, 5)
In [104]: (a@b).shape
Out[104]: (3, 5, 5)
In [105]: np.einsum('ijk,ikl->ijl',a,b).shape
Out[105]: (3, 5, 5)
@Ivan's answer is different:
In [106]: np.einsum('ijk,ilm->ijm', a, b).shape
Out[106]: (3, 5, 5)
In [107]: np.allclose(np.einsum('ijk,ilm->ijm', a, b), a@b)
Out[107]: False
In [108]: np.allclose(np.einsum('ijk,ikl->ijl', a, b), a@b)
Out[108]: True
Ivan's version sums the k dimension of one and the l dimension of the other, then does a broadcasted elementwise multiply. That is not matrix multiplication:
In [109]: (a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)).shape
Out[109]: (3, 5, 5)
In [110]: np.allclose((a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)),np.einsum('ijk,ilm->ijm', a,
...: b))
Out[110]: True
Another test of the batch processing:
In [112]: res = np.zeros((3,5,5))
     ...: for i in range(3):
     ...:     res[i] = a[i]@b[i]
     ...: np.allclose(res, a@b)
Out[112]: True
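Since the question mentions failing with np.tensordot: tensordot contracts the given axes but does not pair the batch axes, it crosses them, which may be why it didn't work here. A quick sketch of the difference:
import numpy as np

a = np.random.randn(3, 5, 2)
b = np.random.randn(3, 2, 5)

# tensordot sums over k but crosses the batch axes: shape (3, 5, 3, 5)
t = np.tensordot(a, b, axes=([2], [1]))
print(t.shape)                # (3, 5, 3, 5)

# The batched dot pairs the batch axis, so use matmul or einsum:
print(np.allclose(np.einsum('ijk,ikl->ijl', a, b), a @ b))   # True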
I am working on argmax function of PyTorch which is defined as:
torch.argmax(input, dim=None, keepdim=False)
Consider an example
a = torch.randn(4, 4)
print(a)
print(torch.argmax(a, dim=1))
Here, when I use dim=1, instead of searching along column vectors the function searches along row vectors, as shown below.
print(a) :
tensor([[-1.7739,  0.8073,  0.0472, -0.4084],
        [ 0.6378,  0.6575, -1.2970, -0.0625],
        [ 1.7970, -1.3463,  0.9011, -0.8704],
        [ 1.5639,  0.7123,  0.0385,  1.8410]])
print(torch.argmax(a, dim=1))
tensor([1, 1, 0, 3])
As far as my assumption goes, dim=0 represents rows and dim=1 represents columns.
It's time to correctly understand how the axis or dim argument works in PyTorch:
The following picture should make it clear: dim-0 runs down the rows, dim-1 runs across the columns:

         dim-1 ----->
dim-0   [[-1.7739,  0.8073,  0.0472, -0.4084],
  |      [ 0.6378,  0.6575, -1.2970, -0.0625],
  |      [ 1.7970, -1.3463,  0.9011, -0.8704],
  v      [ 1.5639,  0.7123,  0.0385,  1.8410]]
# argmax (indices where max values are present) along dimension-1
In [215]: torch.argmax(a, dim=1)
Out[215]: tensor([1, 1, 0, 3])
Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.
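To complete the picture, here is the dim=0 counterpart on the same tensor, where the comparison runs down each column instead:
import torch

a = torch.tensor([[-1.7739,  0.8073,  0.0472, -0.4084],
                  [ 0.6378,  0.6575, -1.2970, -0.0625],
                  [ 1.7970, -1.3463,  0.9011, -0.8704],
                  [ 1.5639,  0.7123,  0.0385,  1.8410]])

# dim=0 compares entries down each column -> one index per column
print(torch.argmax(a, dim=0))   # tensor([2, 0, 2, 3])
# dim=1 compares entries across each row -> one index per row
print(torch.argmax(a, dim=1))   # tensor([1, 1, 0, 3])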
Dimensions are defined as shown in the excellent answer above. This is the way I understand dimensions in Torch and NumPy (dim and axis respectively), and I hope it will be helpful to others.
Notice that only the specified dimension's index varies during the argmax operation, and that dimension's index range reduces to a single index once the operation is completed. Let tensor A have M rows and N columns, and consider the sum operation for simplicity; the shape of A is (M, N). If dim=0 is specified, then the vectors A[0,:], A[1,:], ..., A[M-1,:] are summed elementwise, and the result is another tensor with 1 row and N columns. Notice that only the 0th dimension's indices vary, from 0 through M-1. Similarly, if dim=1 is specified, then the vectors A[:,0], A[:,1], ..., A[:,N-1] are summed elementwise, and the result is another tensor with M rows and 1 column.
An example is given below:
>>> A = torch.tensor([[1,2,3], [4,5,6]])
>>> A
tensor([[1, 2, 3],
        [4, 5, 6]])
>>> S0 = torch.sum(A, dim = 0)
>>> S0
tensor([5, 7, 9])
>>> S1 = torch.sum(A, dim = 1)
>>> S1
tensor([ 6, 15])
In the sample code above, the first sum operation specifies dim=0, so A[0,:] and A[1,:], which are [1,2,3] and [4,5,6], are summed, giving [5, 7, 9]. When dim=1 is specified, the vectors A[:,0], A[:,1], and A[:,2], which are [1, 4], [2, 5], and [3, 6], are added elementwise to give [6, 15].
Note also that the specified dimension collapses. Again let A have the shape (M, N). If dim=0, then the result will have the shape (1, N), where dimension 0 is reduced from M to 1. Similarly, if dim=1, then the result will have the shape (M, 1), where N is reduced to 1. Note also that, since keepdim defaults to False, the (1, N) and (M, 1) results are actually returned as single-dimensional tensors with N and M elements respectively.
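If you would rather keep the collapsed dimension as size 1 than get the squeezed single-dimensional tensor, torch.sum accepts a keepdim flag (the torch spelling of NumPy's keepdims), as a quick check shows:
import torch

A = torch.tensor([[1, 2, 3], [4, 5, 6]])          # shape (2, 3)
print(torch.sum(A, dim=0).shape)                  # torch.Size([3])
print(torch.sum(A, dim=0, keepdim=True).shape)    # torch.Size([1, 3])
print(torch.sum(A, dim=1, keepdim=True).shape)    # torch.Size([2, 1])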
I am trying to concatenate 4 arrays: one 1D array of shape (78427,) and three 2D arrays of shapes (78427, 375), (78427, 81) and (78427, 103). Basically these are 4 arrays with features for 78427 images, where the 1D array has only 1 value per image.
I tried concatenating the arrays as follows:
>>> print X_Cscores.shape
(78427, 375)
>>> print X_Mscores.shape
(78427, 81)
>>> print X_Tscores.shape
(78427, 103)
>>> print X_Yscores.shape
(78427,)
>>> np.concatenate((X_Cscores, X_Mscores, X_Tscores, X_Yscores), axis=1)
This results in the following error:
Traceback (most recent call last):
File "", line 1, in
ValueError: all the input arrays must have same number of dimensions
The problem seems to be the 1D array, but I can't really see why (it also has 78427 values). I tried to transpose the 1D array before concatenating it, but that also didn't work.
Any help on what's the right method to concatenate these arrays would be appreciated!
Try concatenating X_Yscores[:, None] (or X_Yscores[:, np.newaxis] as imaluengo suggests). This creates a 2D array out of a 1D array.
Example:
A = np.array([1, 2, 3])
print A.shape
print A[:, None].shape
Output:
(3,)
(3,1)
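Put together for the arrays in the question (a sketch with small stand-in shapes in place of 78427, 375, 81 and 103):
import numpy as np

X_Cscores = np.zeros((5, 3))
X_Mscores = np.zeros((5, 2))
X_Tscores = np.zeros((5, 4))
X_Yscores = np.zeros(5)        # the 1D array

out = np.concatenate((X_Cscores, X_Mscores, X_Tscores, X_Yscores[:, None]), axis=1)
print(out.shape)               # (5, 10)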
I am not sure if you want something like:
a = np.array([[1, 2], [3, 4]])
b = np.array([5, 6])
c = a.ravel()
con = np.concatenate((c, b))
# array([1, 2, 3, 4, 5, 6])
OR
np.column_stack((a, b))
# array([[1, 2, 5],
#        [3, 4, 6]])
np.row_stack((a, b))
# array([[1, 2],
#        [3, 4],
#        [5, 6]])
You can try this one-liner:
concat = numpy.hstack([a.reshape(dim, -1) for a in [Cscores, Mscores, Tscores, Yscores]])
The "secret" here is to reshape using the known, common dimension (dim = 78427 here) in one axis and -1 for the other, and NumPy automatically matches the size (creating a new axis if needed).
Given three numpy arrays: one multidimensional array x, one vector y with trailing singleton dimension, and one vector z without trailing singleton dimension,
x = np.zeros((M,N))
y = np.zeros((M,1))
z = np.zeros((M,))
the behaviour of broadcast operations changes depending on vector representation and context:
x[:,0] = y # error cannot broadcast from shape (M,1) into shape (M)
x[:,0] = z # OK
x[:,0] += y # error non-broadcastable output with shape (M) doesn't match
# broadcast shape(M,M)
x[:,0] += z # OK
x - y # OK
x - z # error cannot broadcast from shape (M,N) into shape (M)
I realize I can do the following:
x - z[:,None] # OK
but I don't understand what this explicit notation buys me. It certainly doesn't buy readability. I don't understand why the expression x - y is OK, but x - z is ambiguous.
Why does Numpy treat vectors with or without trailing singleton dimensions differently?
edit: The documentation states that "two dimensions are compatible when they are equal, or one of them is 1", but y and z are both functionally M x 1 vectors, since an M x 0 vector doesn't contain any elements.
The convention is that broadcasting will insert singleton dimensions at the beginning of an array's shape. This makes it convenient to perform operations over the last dimensions of an array, so (x.T - z).T should work.
If it were to automatically decide which axis of x was matched by z, an operation like x - z would result in an error if and only if N == M, making code harder to test. So the convention allows some convenience, while being robust to some error.
If you don't like the z[:, None] shorthand, perhaps you find z[:, np.newaxis] clearer.
For an assignment like x[:,0] = y to work, you can use x[:,0:1] = y instead.
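A quick sanity check of the rules above (a sketch with small M and N):
import numpy as np

M, N = 4, 3
x = np.zeros((M, N))
y = np.zeros((M, 1))
z = np.zeros(M)

x[:, 0] = z                    # OK: (M,) into (M,)
x[:, 0:1] = y                  # OK: (M, 1) into (M, 1)
print((x - y).shape)           # (M, N): (M, 1) broadcasts along columns
print((x - z[:, None]).shape)  # (M, N)
print((x.T - z).T.shape)       # (M, N): z lines up with the last axis of x.T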
Using the NumPy matrix interface, as opposed to the array interface, yields the desired broadcasting behaviours, because matrix objects are always two-dimensional, so reductions such as np.mean keep both dimensions:
x = np.asmatrix(np.zeros((M,N)))
y = np.asmatrix(np.zeros((M,1)))
x[:,0] = y # OK
x[:,0] = y[:,0] # OK
x[:,0] = y[:,0:1] # OK
x[:,0] += y # OK
x - y # OK
x - np.mean(x, axis=0) # OK
x - np.mean(x, axis=1) # OK
One benefit of treating (M, 1) and (M,) differently is that it lets you specify which dimensions to align and which dimensions to broadcast.
Say you have:
a = np.arange(4)
b = np.arange(16).reshape(4,4)
# i.e. a = array([0, 1, 2, 3])
# i.e. b = array([[ 0,  1,  2,  3],
#                 [ 4,  5,  6,  7],
#                 [ 8,  9, 10, 11],
#                 [12, 13, 14, 15]])
When you do c = a + b, a and b will be aligned in axis=1 and a will be broadcasted along axis=0:
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
But what if you want to align a and b on axis=0 and broadcast along axis=1?
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
The (M, 1) vs (M,) difference enables you to specify which dimension to align and which to broadcast. (That is, if (M, 1) and (M,) were treated the same, how would you tell NumPy that you want to broadcast on axis=1?)
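A minimal sketch of both alignments with the same a and b as above:
import numpy as np

a = np.arange(4)
b = np.arange(16).reshape(4, 4)

c1 = a + b            # a aligned with axis=1, broadcast along axis=0
c2 = a[:, None] + b   # a aligned with axis=0, broadcast along axis=1
print(c1[0])          # [0 2 4 6]
print(c2[0])          # [0 1 2 3]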