Numpy broadcasting sliced arrays and vectors - python

Given three numpy arrays: one multidimensional array x, one vector y with trailing singleton dimension, and one vector z without trailing singleton dimension,
x = np.zeros((M,N))
y = np.zeros((M,1))
z = np.zeros((M,))
the behaviour of broadcast operations changes depending on vector representation and context:
x[:,0] = y  # error: could not broadcast input array from shape (M,1) into shape (M,)
x[:,0] = z  # OK
x[:,0] += y # error: non-broadcastable output operand with shape (M,)
            # doesn't match the broadcast shape (M,M)
x[:,0] += z # OK
x - y       # OK
x - z       # error: operands could not be broadcast together with shapes (M,N) (M,)
I realize I can do the following:
x - z[:,None] # OK
but I don't understand what this explicit notation buys me. It certainly doesn't buy readability. I don't understand why the expression x - y is OK, but x - z is ambiguous.
Why does Numpy treat vectors with or without trailing singleton dimensions differently?
edit: The documentation states that two dimensions are compatible when they are equal, or one of them is 1. But y and z are both functionally M x 1 vectors, since an M x 0 vector wouldn't contain any elements.

The convention is that broadcasting inserts singleton dimensions at the beginning of an array's shape. This makes it convenient to perform operations over the last dimensions of an array, so (x.T - z).T works even though x - z does not.
If NumPy were to decide automatically which axis of x is matched by z, an operation like x - z would only fail when N == M, so whether your code worked would depend on the particular shapes involved, making it harder to test. The convention gives up a little convenience in exchange for robustness against that kind of error.
If you don't like the z[:, None] shorthand, perhaps you find z[:, np.newaxis] clearer.
For an assignment like x[:,0] = y to work, you can use x[:,0:1] = y instead.
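Here is a runnable sketch of the fixes above, with small concrete values for M and N assumed for illustration:
import numpy as np

M, N = 4, 3
x = np.zeros((M, N))
y = np.zeros((M, 1))
z = np.zeros((M,))

x[:, 0] = z        # OK: shapes (M,) and (M,) match exactly
x[:, 0:1] = y      # OK: the slice keeps a (M, 1) shape that matches y
x[:, 0] = y[:, 0]  # OK: indexing drops the trailing singleton dimension

print((x - y).shape)           # (M, N): (M, 1) broadcasts across the columns
print((x - z[:, None]).shape)  # (M, N): the explicit axis restores that behaviour
print(((x.T - z).T).shape)     # (M, N): the same result via the transpose trick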

Using the Numpy matrix interface as opposed to the array interface yields the desired broadcasting behaviours:
x = np.asmatrix(np.zeros((M,N)))
y = np.asmatrix(np.zeros((M,1)))
x[:,0] = y # OK
x[:,0] = y[:,0] # OK
x[:,0] = y[:,0:1] # OK
x[:,0] += y # OK
x - y # OK
x - np.mean(x, axis=0) # OK
x - np.mean(x, axis=1) # OK
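The matrix interface is generally discouraged in modern NumPy, though; with plain arrays you can get the same shape-preserving reductions by passing keepdims=True (a standard argument of np.mean and the other reduction functions). A minimal sketch:
import numpy as np

M, N = 4, 3
x = np.zeros((M, N))

col_means = np.mean(x, axis=0, keepdims=True)  # shape (1, N), like the matrix result
row_means = np.mean(x, axis=1, keepdims=True)  # shape (M, 1), broadcasts against x

print((x - col_means).shape)  # (M, N)
print((x - row_means).shape)  # (M, N)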

One benefit of treating (M,1) and (M,) differently is that it lets you specify which dimensions to align and which to broadcast.
Say you have:
a = np.arange(4)
b = np.arange(16).reshape(4,4)
# i.e. a = array([0, 1, 2, 3])
# i.e. b = array([[ 0,  1,  2,  3],
#                 [ 4,  5,  6,  7],
#                 [ 8,  9, 10, 11],
#                 [12, 13, 14, 15]])
When you do c = a + b, a and b are aligned along axis=1 and a is broadcast along axis=0, so a effectively becomes:
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
But what if you want to align a and b in axis=0 and broadcast in axis=1 ?
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
The (M,1) vs (M,) difference is what lets you specify which dimension to align and which to broadcast.
(If (M,1) and (M,) were treated the same, how would you tell numpy that you want to broadcast along axis=1?)
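A short sketch of that second alignment; giving a an explicit trailing axis tells numpy to align it with b along axis=0 and broadcast along axis=1:
import numpy as np

a = np.arange(4)
b = np.arange(16).reshape(4, 4)

c = a[:, None] + b  # a becomes shape (4, 1): aligned on axis=0, broadcast on axis=1
print(c)
# [[ 0  1  2  3]
#  [ 5  6  7  8]
#  [10 11 12 13]
#  [15 16 17 18]]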

Related

How to calculate x*x.T in python

I want to calculate x x.T, i.e. the outer product of the column vector x with itself, which should give a 2x2 matrix. I have no idea how to do this in python; I do not want to implement it manually but use a predefined function, something from numpy for example.
But numpy seems to ignore that x.T should be transposed.
Code:
import numpy as np
x = np.array([1, 5])
print(np.dot(x, x.T)) # = 26, This is not the matrix it should be!
While your vectors are defined as 1-d arrays (for which transposing is a no-op, so x.T is the same as x), you can use np.outer:
np.outer(x, x.T)
> array([[ 1,  5],
>        [ 5, 25]])
Alternatively, you could also define your vectors as matrices and use normal matrix multiplication:
x = np.array([[1], [5]])
x @ x.T
> array([[ 1,  5],
>        [ 5, 25]])
You can do:
x = np.array([[1], [5]])
print(np.dot(x, x.T))
Your original x is of shape (2,), while you need a shape of (2,1). Another way is reshaping your x:
x = np.array([1, 5]).reshape(-1,1)
print(np.dot(x, x.T))
.reshape(-1,1) reshapes your array to have 1 column and implicitly takes care of the number of rows.
output:
[[ 1  5]
 [ 5 25]]
np.matmul(x[:, np.newaxis], [x])  # (2,1) @ (1,2) -> the 2x2 outer product

Why does dim=1 return row indices in torch.argmax?

I am working with the argmax function of PyTorch, which is defined as:
torch.argmax(input, dim=None, keepdim=False)
Consider an example
a = torch.randn(4, 4)
print(a)
print(torch.argmax(a, dim=1))
Here, when I use dim=1, instead of searching column vectors the function searches row vectors, as shown below.
print(a):
tensor([[-1.7739,  0.8073,  0.0472, -0.4084],
        [ 0.6378,  0.6575, -1.2970, -0.0625],
        [ 1.7970, -1.3463,  0.9011, -0.8704],
        [ 1.5639,  0.7123,  0.0385,  1.8410]])
print(torch.argmax(a, dim=1))
tensor([1, 1, 0, 3])
As far as my assumption goes, dim=0 represents rows and dim=1 represents columns.
It's time to correctly understand how the axis or dim argument works in PyTorch:
The following example should make sense once you see how the dimensions run: dim-0 points down the rows and dim-1 points across the columns:

        -----> dim-1
dim-0  [[-1.7739,  0.8073,  0.0472, -0.4084],
  |     [ 0.6378,  0.6575, -1.2970, -0.0625],
  v     [ 1.7970, -1.3463,  0.9011, -0.8704],
        [ 1.5639,  0.7123,  0.0385,  1.8410]]
# argmax (indices where max values are present) along dimension-1
In [215]: torch.argmax(a, dim=1)
Out[215]: tensor([1, 1, 0, 3])
Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.
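For contrast, a quick sketch of the other direction (prompt numbering continued for illustration); dim=0 walks down each column, so you get one row index per column:
# argmax along dimension-0: one result per column, each index identifying a row
In [216]: torch.argmax(a, dim=0)
Out[216]: tensor([2, 0, 2, 3])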
Dimensions are defined as in the excellent answer above. What follows is the way I understand dimensions in Torch and NumPy (dim and axis respectively), and I hope it will be helpful to others.
Notice that only the specified dimension's index varies during the argmax operation, and that the specified dimension's index range reduces to a single index once the operation is completed.

Let tensor A have M rows and N columns, and consider the sum operation for simplicity. The shape of A is (M, N). If dim=0 is specified, then the vectors A[0,:], A[1,:], ..., A[M-1,:] are summed elementwise, and the result is another tensor with 1 row and N columns. Notice that only the 0th dimension's indices vary, from 0 through M-1. Similarly, if dim=1 is specified, then the vectors A[:,0], A[:,1], ..., A[:,N-1] are summed elementwise, and the result is another tensor with M rows and 1 column.
An example is given below:
>>> A = torch.tensor([[1,2,3], [4,5,6]])
>>> A
tensor([[1, 2, 3],
        [4, 5, 6]])
>>> S0 = torch.sum(A, dim = 0)
>>> S0
tensor([5, 7, 9])
>>> S1 = torch.sum(A, dim = 1)
>>> S1
tensor([ 6, 15])
In the above sample code, the first sum operation specifies dim=0, so A[0,:] and A[1,:], which are [1,2,3] and [4,5,6], are summed, resulting in [5, 7, 9]. When dim=1 is specified, the vectors A[:,0], A[:,1], and A[:,2], which are [1, 4], [2, 5], and [3, 6], are elementwise added to give [6, 15].
Note also that the specified dimension collapses. Again let A have the shape (M, N). If dim=0, then the result will have the shape (1, N), where dimension 0 is reduced from M to 1. Similarly, if dim=1, the result will have the shape (M, 1), where N is reduced to 1. Note also that with the default keepdim=False, the shapes (1, N) and (M, 1) are returned as one-dimensional tensors with N and M elements respectively.
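A short sketch showing the difference keepdim makes, continuing with the same A:
>>> torch.sum(A, dim=0).shape                 # collapsed to a 1-D tensor
torch.Size([3])
>>> torch.sum(A, dim=0, keepdim=True).shape   # kept as a 1 x N row
torch.Size([1, 3])
>>> torch.sum(A, dim=1, keepdim=True).shape   # kept as an M x 1 column
torch.Size([2, 1])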

Subtracting one dimensional array (list of scalars) from 3 dimensional arrays using broadcasting

I have a one dimensional array of scalar values:
Y = np.array([1, 2])
I also have a 3-dimensional array:
X = np.random.randint(0, 255, size=(2, 2, 3))
I am attempting to subtract each value of Y from X, so I should get back Z, which should be of shape (2, 2, 2, 3) or maybe (2, 2, 3, 2).
I can't seem to figure out how to do this via broadcasting.
I tried changing the shape of Y:
Y = np.array([[[1, 2]]])
but I'm not sure what the correct shape should be.
Broadcasting lines up dimensions on the right. So you're looking to operate on a (2, 1, 1, 1) array and a (2, 2, 3) array.
The simplest way I can think of is using reshape:
Y = Y.reshape(-1, 1, 1, 1)
More generally:
Y = Y.reshape(-1, *([1] * X.ndim))
At most one of the arguments to reshape can be -1; it stands for whatever size is not accounted for by the other dimensions.
To get Z of shape (2, 2, 2, 3):
Z = X - Y.reshape(-1, *([1] * X.ndim))
If you were OK with having Z of shape (2, 2, 3, 2), the operation would be much simpler:
Z = X[..., None] - Y
None or np.newaxis will insert a unit axis at the end of X's shape, making it broadcast properly with the 1-D Y.
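A quick check of the shapes both variants produce, using the arrays from the question:
import numpy as np

Y = np.array([1, 2])
X = np.random.randint(0, 255, size=(2, 2, 3))

Z1 = X - Y.reshape(-1, *([1] * X.ndim))  # Y's axis leads: shape (2, 2, 2, 3)
Z2 = X[..., None] - Y                    # Y's axis trails: shape (2, 2, 3, 2)
print(Z1.shape, Z2.shape)                # (2, 2, 2, 3) (2, 2, 3, 2)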
I am not entirely sure along which dimension you want the subtraction to take place, but X - Y will not raise an error if you define Y as Y = numpy.array([1, 2]).reshape(2, 1, 1) or Y = numpy.array([1, 2]).reshape(1, 2, 1).

In numpy.sum() there is a parameter called "keepdims". What does it do?

In numpy.sum() there is a parameter called keepdims. What does it do?
As you can see here in the documentation:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html
numpy.sum(a, axis=None, dtype=None, out=None, keepdims=False)
Sum of array elements over a given axis.
Parameters:
...
keepdims : bool, optional
If this is set to True, the axes which are reduced are left in the result as
dimensions with size one. With this option, the result will broadcast
correctly against the input array.
...
@Ney, @hpaulj is correct: you need to experiment, but I suspect you don't realize that summation for some arrays can occur along axes. Observe the following while reading the documentation:
>>> a
array([[0, 0, 0],
       [0, 1, 0],
       [0, 2, 0],
       [1, 0, 0],
       [1, 1, 0]])
>>> np.sum(a, keepdims=True)
array([[6]])
>>> np.sum(a, keepdims=False)
6
>>> np.sum(a, axis=1, keepdims=True)
array([[0],
       [1],
       [2],
       [1],
       [2]])
>>> np.sum(a, axis=1, keepdims=False)
array([0, 1, 2, 1, 2])
>>> np.sum(a, axis=0, keepdims=True)
array([[2, 4, 0]])
>>> np.sum(a, axis=0, keepdims=False)
array([2, 4, 0])
You will notice that if you don't specify an axis (the first two examples), the numerical result is the same, but keepdims=True returned a 2D array containing the number 6, whereas the second incarnation returned a scalar.
Similarly, when summing along axis 1 (across rows), a 2D array is returned again when keepdims = True.
The last example, along axis 0 (down columns), shows a similar characteristic... dimensions are kept when keepdims = True.
Studying axes and their properties is critical to a full understanding of the power of NumPy when dealing with multidimensional data.
An example showing keepdims in action when working with higher dimensional arrays. Let's see how the shape of the array changes as we do different reductions:
import numpy as np
a = np.random.rand(2,3,4)
a.shape
# => (2, 3, 4)
# Note: axis=0 refers to the first dimension of size 2
# axis=1 refers to the second dimension of size 3
# axis=2 refers to the third dimension of size 4
a.sum(axis=0).shape
# => (3, 4)
# Simple sum over the first dimension, we "lose" that dimension
# because we did an aggregation (sum) over it
a.sum(axis=0, keepdims=True).shape
# => (1, 3, 4)
# Same sum over the first dimension, but instead of "losing" that
# dimension, it becomes 1.
a.sum(axis=(0,2)).shape
# => (3,)
# Here we "lose" two dimensions
a.sum(axis=(0,2), keepdims=True).shape
# => (1, 3, 1)
# Here the two reduced dimensions each become 1
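The practical payoff of keepdims is the broadcasting behaviour the documentation mentions: the result lines up against the input array. A small sketch of row-normalisation, where the kept axis makes the division broadcast:
import numpy as np

a = np.random.rand(2, 3, 4)

# Sums over the last axis, kept as size 1: shape (2, 3, 1)
row_sums = a.sum(axis=2, keepdims=True)

# (2, 3, 4) / (2, 3, 1) broadcasts cleanly; without keepdims the
# shapes (2, 3, 4) and (2, 3) would not align
normalized = a / row_sums
print(normalized.sum(axis=2))  # every entry is ~1.0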

Euclidean distances between several images and one base image

I have a matrix X of dimensions (30x8100) and another one Y of dimensions (1x8100). I want to generate an array containing the difference between them (X[1]-Y, X[2]-Y,..., X[30]-Y)
Can anyone help?
All you need for that is
X - Y
Since several people have offered answers that seem to try to make the shapes match manually, I should explain:
Numpy will automatically expand Y's shape so that it matches the shape of X. This is called broadcasting, and it usually does a very good job of guessing what should be done. In ambiguous cases you can insert explicit singleton axes (with None or np.newaxis) to tell it in which direction to expand. Here, since Y has a dimension of length 1, that is the axis that is expanded to length 30 to match X's shape.
For example,
In [87]: import numpy as np
In [88]: n, m = 3, 5
In [89]: x = np.arange(n*m).reshape(n,m)
In [90]: y = np.arange(m)[None,...]
In [91]: x.shape
Out[91]: (3, 5)
In [92]: y.shape
Out[92]: (1, 5)
In [93]: (x-y).shape
Out[93]: (3, 5)
In [106]: x
Out[106]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
In [107]: y
Out[107]: array([[0, 1, 2, 3, 4]])
In [108]: x-y
Out[108]:
array([[ 0,  0,  0,  0,  0],
       [ 5,  5,  5,  5,  5],
       [10, 10, 10, 10, 10]])
But this is not really a euclidean distance, as your title seems to suggest you want:
df = np.asarray(x - y) # the difference between the images
dst = np.sqrt(np.sum(df**2, axis=1)) # their euclidean distances
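Equivalently, numpy has a built-in helper for the norm step; a one-liner sketch using np.linalg.norm with its axis argument, applied to the X and Y from the question:
import numpy as np

# Euclidean distance of each row of X from Y, as a length-30 vector
dst = np.linalg.norm(X - Y, axis=1)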
Use an array and numpy broadcasting to subtract Y.
Init the matrix:
>>> from numpy import *
>>> a = array([[1,2,3],[4,5,6]])
Accessing the second row in a:
>>> a[1]
array([4, 5, 6])
Subtract Y from the array:
>>> Y = array([3,9,0])
>>> a - Y
array([[-2, -7,  3],
       [ 1, -4,  6]])
Just iterate over the rows of your numpy array; you can simply subtract Y from each, and numpy will make a new array with the differences!
import numpy as np
final_array = []
# X is a numpy array that is 30x8100 and Y is a numpy array that is 1x8100
for row in X:
    output = row - Y
    final_array.append(output)
output will be your resulting array of X[0] - Y, X[1] - Y, etc. Your final_array will then be a list holding 30 arrays, each with the X - Y values you need! Simple as that. Just make sure you convert your matrices to numpy arrays first.
Edit: Since numpy broadcasting will do the iteration, all you need is one line once you have your two arrays:
final_array = X - Y
And then that is your array with the differences!
import numpy
a1 = numpy.array(X)  # make sure you have a numpy array like [[1,2,3],[4,5,6],...]
a2 = numpy.array(Y)  # make sure you have a 1d numpy array like [1,2,3,...]
a2 = numpy.array([a2] * len(a1))  # stack a2 so it has as many rows as a1 (same shape)
print(a1 - a2)  # idiomatic difference between a1 and a2 (or X and Y)
