How to index a 2D array with 3D array? - python

Today, I encountered the following problem:
Tensor A is a segmentation mask with the shape (1, 4, 4), and its values are either 0 or 1.
Tensor B is a diagonal matrix created by torch.eye(2).
My questions are: why can we index B (2D) with A (3D) in the form B[A], and why is the result a tensor with the shape (1, 4, 4, 2)?
The above is my test case; the source code comes from a Dice loss class:
y_true_dummy = torch.eye(num_classes)[y_true.squeeze(1)]
where the shape of y_true is (b, h, w) and num_classes equals c.
By the way, why do we need the .squeeze() call?
I would like an explanation of the indexing behaviour; links to videos would also be appreciated.

You can understand the problem if you work on a smaller example:
A = torch.randint(2, (4,))
B = torch.eye(2)
>>> A
# tensor([1, 0, 1, 1])
>>> B[A].shape
# (4, 2)
>>> B[A]
# tensor([[0., 1.],
# [1., 0.],
# [0., 1.],
# [0., 1.]])
[1, 0] and [0, 1] are the first and second rows of the 2x2 identity matrix B. So, using the 1D array A of shape (4,) as an index selects 4 "rows" of B, i.e. 4 elements of B along axis 0. Here B[A] is basically [B[1], B[0], B[1], B[1]].
So when A is a 3D array of shape (1, 4, 4), B[A] means selecting (1, 4, 4) rows of B. And because each row of B has 2 elements (2 columns), your output is (1, 4, 4, 2).
B is a 2x2 identity matrix with 2 rows. Think of it like this: you are picking 16 rows out of these 2 rows, getting a (16, 2) matrix, and then reshaping it to a (1, 4, 4, 2) tensor. In fact, you can check this easily:
A = torch.randint(2, (1, 4, 4))
A_flat = A.reshape(-1)
B = torch.eye(2)
>>> torch.allclose(B[A], B[A_flat].reshape(1, 4, 4, -1))
# True
This isn't a PyTorch-specific phenomenon either. You can observe the same indexing rules in NumPy, with which torch maintains close compatibility.
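As an illustration (my own minimal example, not from the original post), the same rule in NumPy:
import numpy as np
B = np.eye(2)                                # 2x2 identity, 2 rows
A = np.random.randint(0, 2, size=(1, 4, 4))  # integer index array of shape (1, 4, 4)
print(B[A].shape)                            # (1, 4, 4, 2): one row of B picked per entry of A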

Related

looping numpy error: all the input arrays must have same number of dimensions

I want to write the following code:
for i = 1:N
    for j = 1:N
        Ab(i,j) = (Ap(i)*Ap(j))^(0.5)*(1 - kij(i,j));
    end
end
However an error appears: "all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)"
ab = np.matrix((2, 2))
for i in range(0, nc):
    for j in range(0, nc):
        np.append(ab, ((Ap[i]*Ap[j])**(0.5)*(1 - kij[i][j])))
There is a bit of context missing, but if I am guessing correctly from the Matlab part, you can write something like this:
ab = np.zeros((2, 2))
for i in range(ab.shape[0]):  # no need to pass 0, and you can use the array's size to limit the iterations
    for j in range(ab.shape[1]):
        ab[i, j] = (Ap[i]*Ap[j])**(0.5)*(1 - kij[i][j])
My assumptions
The ab matrix is meant to be a 2x2 matrix, not a 1x2 matrix with the values [2, 2], which is what np.matrix confusingly does (at least those were my expectations coming from Matlab). np.zeros creates an array of all zeros of size 2x2. Arrays and matrices are slightly different things in NumPy, and matrix is being slowly deprecated (more here: https://numpy.org/doc/stable/reference/generated/numpy.matrix.html?highlight=matrix#numpy.matrix).
nc is the size of the ab matrix.
Why did you get the error?
np.matrix((2, 2)) creates a 1x2 matrix [[2, 2]] with the values 2 and 2.
(Ap[i]*Ap[j])**(0.5)*(1 - kij[i][j]) is a scalar value.
np.append(ab, scalar_value) tries to append a scalar to a matrix, but there is a dimension mismatch between ab and the scalar value, which is what the error states. Essentially, for this to work they need to be objects of compatible shape.
Examples
>>> np.zeros((2, 2))
array([[0., 0.],
[0., 0.]])
>>> np.matrix((2, 2))
matrix([[2, 2]])
>>> np.array((2, 2))
array([2, 2])
>>> np.append(np.matrix((2, 2)), [[3, 3]], axis=0)
matrix([[2, 2],
[3, 3]])
>>> np.append(np.zeros((2, 2)), [[3, 3]], axis=0)
array([[0., 0.],
[0., 0.],
[3., 3.]])
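As a side note (not part of the original answer), if Ap and kij are already NumPy arrays, the double loop can be replaced with broadcasting; a minimal sketch with made-up values:
import numpy as np
Ap = np.array([1.0, 4.0])                    # hypothetical values
kij = np.array([[0.0, 0.1],
                [0.1, 0.0]])
ab = np.sqrt(np.outer(Ap, Ap)) * (1 - kij)   # same result as the double loop above
print(ab)                                    # [[1.  1.8]
                                             #  [1.8 4. ]]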

How to perform matrix multiplication between two 3D tensors along the first dimension?

I wish to compute the dot product between two 3D tensors along the first dimension. I tried the following einsum notation:
import numpy as np
a = np.random.randn(30).reshape(3, 5, 2)
b = np.random.randn(30).reshape(3, 2, 5)
# Expecting shape: (3, 5, 5)
np.einsum("ijk,ikj->ijj", a, b)
Sadly it returns this error:
ValueError: einstein sum subscripts string includes output subscript 'j' multiple times
I went with Einstein sum after I failed at it with np.tensordot. Ideas and follow up questions are highly welcome!
Your two dimensions of size 5 and 5 do not correspond to the same axes. As such you need to use two different subscripts to designate them. For example, you can do:
>>> res = np.einsum('ijk,ilm->ijm', a, b)
>>> res.shape
(3, 5, 5)
Notice you are also required to change the subscript for the axes of size 2 and 2. This is because this expression computes a batched outer product (the two axes are iterated over independently), not a dot product (where the two axes are iterated over together and summed).
Outer product:
>>> np.einsum('ijk,ilm->ijm', a, b)
Dot product over subscript k, which is axis=2 of a and axis=1 of b:
>>> np.einsum('ijk,ikm->ijm', a, b)
which is equivalent to a @ b.
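A quick sanity check of that last equivalence (restating the arrays from the question):
import numpy as np
a = np.random.randn(3, 5, 2)
b = np.random.randn(3, 2, 5)
res = np.einsum('ijk,ikm->ijm', a, b)
assert res.shape == (3, 5, 5)
assert np.allclose(res, a @ b)   # batched matrix multiplication over the first axis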
A "dot product ... along the first dimension" is a bit unclear. Is the first dimension a 'batch' dimension, with 3 dot products over the rest? Or something else?
In [103]: a = np.random.randn(30).reshape(3, 5, 2)
...: b = np.random.randn(30).reshape(3, 2, 5)
In [104]: (a @ b).shape
Out[104]: (3, 5, 5)
In [105]: np.einsum('ijk,ikl->ijl',a,b).shape
Out[105]: (3, 5, 5)
@Ivan's answer is different:
In [106]: np.einsum('ijk,ilm->ijm', a, b).shape
Out[106]: (3, 5, 5)
In [107]: np.allclose(np.einsum('ijk,ilm->ijm', a, b), a @ b)
Out[107]: False
In [108]: np.allclose(np.einsum('ijk,ikl->ijl', a, b), a @ b)
Out[108]: True
@Ivan's version sums over the k dimension of one array and the l dimension of the other, and then does a broadcasted elementwise multiplication. That is not matrix multiplication:
In [109]: (a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)).shape
Out[109]: (3, 5, 5)
In [110]: np.allclose((a.sum(axis=-1,keepdims=True)* b.sum(axis=1,keepdims=True)), np.einsum('ijk,ilm->ijm', a, b))
Out[110]: True
Another test of the batch processing:
In [112]: res = np.zeros((3,5,5))
     ...: for i in range(3):
     ...:     res[i] = a[i] @ b[i]
     ...: np.allclose(res, a @ b)
Out[112]: True

Python code to create an array of arrays (8x8 with each being a 3x3)

I am attempting to create an array of arrays, structured as an 8x8 grid where each cell is a 3x3 array. What I have created works, but when I want to change a specific value, I need to access it differently than I would expect.
import numpy as np
a = np.zeros((3,3))
b = np.array([[0,1,0],[1,1,1],[0,1,0]])
d = np.array([[b,a,b,a,b,a,b,a]])
e = np.array([[a,b,a,b,a,b,a,b]])
g = np.array([[d],[e],[d],[e],[d],[e],[d],[e]])
#Needed to change a specific cell
#g[0][0][0][0][0][0] = x : [Row-x][0][0][Cell-x][row-x][cell-x]
#Not sure why I have to have the 2 0's between the Row-x and the Cell-x identifiers
After this, I will need to map each value to a 24x24 grid with 1's having a different color than 0's. If anyone could provide direction to achieve this, it would be appreciated. Not looking for the specific code, but a base to understand how it can be done.
Thanks
In [291]: a = np.zeros((3,3))
...: b = np.array([[0,1,0],[1,1,1],[0,1,0]])
...: d = np.array([[b,a,b,a,b,a,b,a]])
...: e = np.array([[a,b,a,b,a,b,a,b]])
...: g = np.array([[d],[e],[d],[e],[d],[e],[d],[e]])
In [292]: a.shape
Out[292]: (3, 3)
In [293]: b.shape
Out[293]: (3, 3)
d is 4d - count the brackets: [[....]]:
In [294]: d.shape
Out[294]: (1, 8, 3, 3)
In [295]: e.shape
Out[295]: (1, 8, 3, 3)
g is an (8, 1) arrangement of 4-dimensional elements, for a total of 6 dimensions. Again, count the brackets:
In [296]: g.shape
Out[296]: (8, 1, 1, 8, 3, 3)
Accessing a 2d subarray, in this case equal to b:
In [298]: g[0,0,0,0,:,:]
Out[298]:
array([[0., 1., 0.],
[1., 1., 1.],
[0., 1., 0.]])
Redo, without the excess brackets:
In [299]: a = np.zeros((3,3))
...: b = np.array([[0,1,0],[1,1,1],[0,1,0]])
...: d = np.array([b,a,b,a,b,a,b,a])
...: e = np.array([a,b,a,b,a,b,a,b])
...: g = np.array([d,e,d,e,d,e,d,e])
In [300]: d.shape
Out[300]: (8, 3, 3)
In [301]: g.shape
Out[301]: (8, 8, 3, 3)
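The question also asks about mapping the result onto a 24x24 grid. With the (8, 8, 3, 3) layout above, that is a transpose followed by a reshape; a sketch (not part of the original answer), after which the 0/1 grid could be colour-mapped with e.g. matplotlib's imshow:
import numpy as np
a = np.zeros((3, 3))
b = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
g = np.array([[b, a] * 4, [a, b] * 4] * 4)       # shape (8, 8, 3, 3), same layout as above
grid = g.transpose(0, 2, 1, 3).reshape(24, 24)   # 24x24 array of 0s and 1s
# grid[3*i:3*i+3, 3*j:3*j+3] is the 3x3 block g[i, j]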

What does .view() do in PyTorch?

What does .view() do to a tensor x? What do negative values mean?
x = x.view(-1, 16 * 5 * 5)
view() reshapes the tensor without copying memory, similar to numpy's reshape().
Given a tensor a with 16 elements:
import torch
a = torch.arange(1., 17.)  # 16 elements; torch.range is deprecated
To reshape this tensor to make it a 4 x 4 tensor, use:
a = a.view(4, 4)
Now a will be a 4 x 4 tensor. Note that after the reshape the total number of elements needs to remain the same. Reshaping the tensor a to a 3 x 5 tensor would not be appropriate.
What is the meaning of parameter -1?
If there is any situation where you don't know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (Note that you can extend this to tensors with more dimensions. Only one of the axis values can be -1.) This is a way of telling the library: "give me a tensor that has this many columns, and you compute the appropriate number of rows that is necessary to make this happen".
This can be seen in this model definition code: after the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you will have a 16-channel feature map. You have to flatten this to give it to the fully connected layer, so you tell PyTorch to reshape the tensor you obtained to have a specific number of columns, and to decide the number of rows by itself.
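For illustration, a minimal sketch of that flattening step (the batch size of 8 here is just an assumption; the 16 * 5 * 5 matches the feature-map size in the question):
import torch
x = torch.randn(8, 16, 5, 5)   # e.g. a batch of 8 feature maps coming out of the conv/pool layers
x = x.view(-1, 16 * 5 * 5)     # flatten everything except the batch dimension
print(x.shape)                 # torch.Size([8, 400])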
Let's do some examples, from simpler to more difficult.
The view method returns a tensor with the same data as the self tensor (which means that the returned tensor has the same number of elements), but with a different shape. For example:
a = torch.arange(1, 17) # a's shape is (16,)
a.view(4, 4) # output below
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
[torch.FloatTensor of size 4x4]
a.view(2, 2, 4) # output below
(0 ,.,.) =
1 2 3 4
5 6 7 8
(1 ,.,.) =
9 10 11 12
13 14 15 16
[torch.FloatTensor of size 2x2x4]
Assuming that -1 is not one of the parameters, when you multiply them together, the result must be equal to the number of elements in the tensor. If you do: a.view(3, 3), it will raise a RuntimeError because shape (3 x 3) is invalid for input with 16 elements. In other words: 3 x 3 does not equal 16 but 9.
You can use -1 as one of the parameters that you pass to the function, but only once. All that happens is that the method will do the math for you on how to fill that dimension. For example a.view(2, -1, 4) is equivalent to a.view(2, 2, 4). [16 / (2 x 4) = 2]
Notice that the returned tensor shares the same data. If you make a change in the "view" you are changing the original tensor's data:
b = a.view(4, 4)
b[0, 2] = 2
a[2] == 3.0
False
Now, for a more complex use case. The documentation says that each new view dimension must either be a subspace of an original dimension, or only span d, d + 1, ..., d + k that satisfy the following contiguity-like condition that for all i = 0, ..., k - 1, stride[i] = stride[i + 1] x size[i + 1]. Otherwise, contiguous() needs to be called before the tensor can be viewed. For example:
a = torch.rand(5, 4, 3, 2) # size (5, 4, 3, 2)
a_t = a.permute(0, 2, 3, 1) # size (5, 3, 2, 4)
# The commented line below will raise a RuntimeError, because one dimension
# spans across two contiguous subspaces
# a_t.view(-1, 4)
# instead do:
a_t.contiguous().view(-1, 4)
# To see why the first one does not work and the second does,
# compare a.stride() and a_t.stride()
a.stride() # (24, 6, 2, 1)
a_t.stride() # (24, 2, 1, 6)
Notice that for a_t, stride[0] != stride[1] x size[1] since 24 != 2 x 3
view() reshapes a tensor by 'stretching' or 'squeezing' its elements into the shape you specify:
How does view() work?
First let's look at what a tensor is under the hood:
[Figure: a tensor and its underlying storage]
e.g. the right-hand tensor (shape (3, 2)) can be computed from the left-hand one with t2 = t1.view(3, 2)
Here you see PyTorch makes a tensor by converting an underlying block of contiguous memory into a matrix-like object by adding a shape and stride attribute:
shape states how long each dimension is
stride states how many steps you need to take in memory until you reach the next element in each dimension
view(dim1,dim2,...) returns a view of the same underlying information, but reshaped to a tensor of shape dim1 x dim2 x ... (by modifying the shape and stride attributes).
Note this implicitly assumes that the new and old dimensions have the same product (i.e. the old and new tensor have the same volume).
PyTorch -1
-1 is a PyTorch alias for "infer this dimension given the others have all been specified" (i.e. the quotient of the original product by the new product). It is a convention taken from numpy.reshape().
Hence t1.view(3,2) in our example would be equivalent to t1.view(3,-1) or t1.view(-1,2).
torch.Tensor.view()
Simply put, torch.Tensor.view(), which is inspired by numpy.ndarray.reshape() or numpy.reshape(), creates a new view of the tensor, as long as the new shape is compatible with the shape of the original tensor.
Let's understand this in detail using a concrete example.
In [43]: t = torch.arange(18)
In [44]: t
Out[44]:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17])
With this tensor t of shape (18,), new views can only be created for the following shapes:
(1, 18) or equivalently (1, -1) or (-1, 18)
(2, 9) or equivalently (2, -1) or (-1, 9)
(3, 6) or equivalently (3, -1) or (-1, 6)
(6, 3) or equivalently (6, -1) or (-1, 3)
(9, 2) or equivalently (9, -1) or (-1, 2)
(18, 1) or equivalently (18, -1) or (-1, 1)
As we can already observe from the above shape tuples, the multiplication of the elements of the shape tuple (e.g. 2*9, 3*6 etc.) must always be equal to the total number of elements in the original tensor (18 in our example).
Another thing to observe is that we used a -1 in one of the places in each of the shape tuples. By using a -1, we are being lazy in doing the computation ourselves and rather delegate the task to PyTorch to do calculation of that value for the shape when it creates the new view. One important thing to note is that we can only use a single -1 in the shape tuple. The remaining values should be explicitly supplied by us. Else PyTorch will complain by throwing a RuntimeError:
RuntimeError: only one dimension can be inferred
So, with all of the above mentioned shapes, PyTorch will always return a new view of the original tensor t. This basically means that it just changes the stride information of the tensor for each of the new views that are requested.
Below are some examples illustrating how the strides of the tensors are changed with each new view.
# stride of our original tensor `t`
In [53]: t.stride()
Out[53]: (1,)
Now, we will see the strides for the new views:
# shape (1, 18)
In [54]: t1 = t.view(1, -1)
# stride tensor `t1` with shape (1, 18)
In [55]: t1.stride()
Out[55]: (18, 1)
# shape (2, 9)
In [56]: t2 = t.view(2, -1)
# stride of tensor `t2` with shape (2, 9)
In [57]: t2.stride()
Out[57]: (9, 1)
# shape (3, 6)
In [59]: t3 = t.view(3, -1)
# stride of tensor `t3` with shape (3, 6)
In [60]: t3.stride()
Out[60]: (6, 1)
# shape (6, 3)
In [62]: t4 = t.view(6,-1)
# stride of tensor `t4` with shape (6, 3)
In [63]: t4.stride()
Out[63]: (3, 1)
# shape (9, 2)
In [65]: t5 = t.view(9, -1)
# stride of tensor `t5` with shape (9, 2)
In [66]: t5.stride()
Out[66]: (2, 1)
# shape (18, 1)
In [68]: t6 = t.view(18, -1)
# stride of tensor `t6` with shape (18, 1)
In [69]: t6.stride()
Out[69]: (1, 1)
So that's the magic of the view() function. It just changes the strides of the (original) tensor for each of the new views, as long as the shape of the new view is compatible with the original shape.
Another interesting thing to observe in the stride tuples is that the value of the element in the 0th position of the stride is equal to the value of the element in the 1st position of the shape tuple:
In [74]: t3.shape
Out[74]: torch.Size([3, 6])
In [75]: t3.stride()
Out[75]: (6, 1)
This is because:
In [76]: t3
Out[76]:
tensor([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17]])
the stride (6, 1) says that to go from one element to the next element along the 0th dimension, we have to jump or take 6 steps. (i.e. to go from 0 to 6, one has to take 6 steps.) But to go from one element to the next element in the 1st dimension, we just need only one step (for e.g. to go from 2 to 3).
Thus, the strides information is at the heart of how the elements are accessed from memory for performing the computation.
torch.reshape()
This function would return a view and is exactly the same as using torch.Tensor.view() as long as the new shape is compatible with the shape of the original tensor. Otherwise, it will return a copy.
However, the notes of torch.reshape() warns that:
contiguous inputs and inputs with compatible strides can be reshaped without copying, but one should not depend on the copying vs. viewing behavior.
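A small sketch of that difference on a non-contiguous tensor (my own example, not from the answer above):
import torch
a = torch.arange(6).view(2, 3).t()   # transpose -> non-contiguous
# a.view(6)                          # would raise a RuntimeError because a is not contiguous
r = torch.reshape(a, (6,))           # works; here reshape has to return a copy
print(r)                             # tensor([0, 3, 1, 4, 2, 5])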
Let's try to understand view by the following examples:
a = torch.arange(1., 17.)  # torch.range is deprecated
print(a)
tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14.,
15., 16.])
print(a.view(-1,2))
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.],
[ 7., 8.],
[ 9., 10.],
[11., 12.],
[13., 14.],
[15., 16.]])
print(a.view(2,-1,4)) #3d tensor
tensor([[[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.]],
[[ 9., 10., 11., 12.],
[13., 14., 15., 16.]]])
print(a.view(2,-1,2))
tensor([[[ 1., 2.],
[ 3., 4.],
[ 5., 6.],
[ 7., 8.]],
[[ 9., 10.],
[11., 12.],
[13., 14.],
[15., 16.]]])
print(a.view(4,-1,2))
tensor([[[ 1., 2.],
[ 3., 4.]],
[[ 5., 6.],
[ 7., 8.]],
[[ 9., 10.],
[11., 12.]],
[[13., 14.],
[15., 16.]]])
Passing -1 as an argument is an easy way to have the library compute one dimension for you: for a 3D shape you only supply two of the three values and the remaining one is inferred, and for a 2D shape you only supply one value and the other is inferred.
I figured out that x.view(-1, 16 * 5 * 5) is equivalent to x.flatten(1), where the parameter 1 indicates that the flattening starts from the 1st dimension (so the 'sample' dimension is not flattened).
As you can see, the latter usage is semantically clearer and easier to use, so I prefer flatten().
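A quick check of that equivalence (the shapes are just an example):
import torch
x = torch.randn(8, 16, 5, 5)
assert torch.equal(x.view(-1, 16 * 5 * 5), x.flatten(1))   # same result, flatten(1) reads more clearly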
What is the meaning of parameter -1?
You can read -1 as a dynamic number of elements, or "anything". Because of that, there can be only one -1 parameter in view().
If you ask x.view(-1,1) this will output tensor shape [anything, 1] depending on the number of elements in x. For example:
import torch
x = torch.tensor([1, 2, 3, 4])
print(x,x.shape)
print("...")
print(x.view(-1,1), x.view(-1,1).shape)
print(x.view(1,-1), x.view(1,-1).shape)
Will output:
tensor([1, 2, 3, 4]) torch.Size([4])
...
tensor([[1],
[2],
[3],
[4]]) torch.Size([4, 1])
tensor([[1, 2, 3, 4]]) torch.Size([1, 4])
weights.reshape(a, b) will return a new tensor with the same data as weights with size (a, b); it may copy the data to another part of memory.
weights.resize_(a, b) returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.
weights.view(a, b) will return a new tensor with the same data as weights with size (a, b)
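A small sketch contrasting the three (my own example, not from the original answer):
import torch
w = torch.arange(6.)              # tensor([0., 1., 2., 3., 4., 5.])
a = w.view(2, 3)                  # always a view of w's data (errors if a view is impossible)
b = w.reshape(2, 3)               # a view when possible, otherwise a copy
c = w.clone().resize_(2, 2)       # in-place; here the last two elements are silently dropped
print(a.shape, b.shape, c.shape)  # torch.Size([2, 3]) torch.Size([2, 3]) torch.Size([2, 2])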
I really liked @Jadiel de Armas' examples.
I would like to add a small insight into how elements are ordered for .view(...):
For a tensor with shape (a, b, c), the order of its elements is determined by a numbering system in which the first digit has a possible values, the second digit has b possible values, and the third digit has c possible values. The mapping of the elements in the new tensor returned by .view(...) preserves this order from the original tensor.
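A small check of that ordering property (my own example):
import torch
t = torch.arange(24).view(2, 3, 4)
# any view of t enumerates the same elements in the same (row-major) order
assert torch.equal(t.view(6, 4).flatten(), t.flatten())
assert torch.equal(t.view(4, -1).flatten(), t.flatten())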

Reduce array over ranges

Say I have an array of numbers
np.array(([1, 4, 2, 1, 2, 5]))
And I want to compute the sum over a list of slices
((0, 3), (2, 4), (2, 6))
Giving
[(1 + 4 + 2), (2 + 1), (2 + 1 + 2 + 5)]
Is there a nice way to do this in numpy?
Looking for something equivalent to
def reduce(a, ranges):
    return np.array(list(np.sum(a[low:high]) for (low, high) in ranges))
Seems like there is probably some fancy numpy way to do this though. Anyone know?
One way is to use np.add.reduceat. If a is the array of values [1, 4, 2, 1, 2, 5]:
>>> np.add.reduceat(a, [0,3, 2,4, 2])[::2]
array([ 7, 3, 10], dtype=int32)
Here the slice boundaries are passed as a flat list of indices, and reduceat returns [ 7, 1, 3, 2, 10], i.e. the sums of a[0:3], a[3] on its own, a[2:4], a[4] on its own, and a[2:] (when an index is followed by a smaller one, reduceat returns just that single element). We only want every other element from this array.
Longer alternative approach...
The fact that the slices are of different lengths makes this slightly trickier to vectorise in NumPy, but here is one way you can approach the problem.
Given an array of values and an array of slices to make...
a = np.array(([1, 4, 2, 1, 2, 5]))
slices = np.array([(0, 3), (2, 4), (2, 6)])
...create a mask-like array z that, for each slice, will be used to "zero-out" the values from a we don't want to sum:
z = np.zeros((3, 6))
s1 = np.arange(6) >= slices[:, 0][:, None]
s2 = np.arange(6) < slices[:, 1][:, None]
z[s1 & s2] = 1
Then you can do:
>>> (z * a).sum(axis=1)
array([ 7., 3., 10.])
A quick %timeit shows this is slightly faster than the list comprehension, even though we had to construct z and z * a. If slices is made to be of length 3000, this method is around 40 times quicker.
However, note that the array z will be of shape (len(slices), len(a)), which may not be practical if a or slices is very long; an iterative approach might be preferred to avoid large temporary arrays in memory.
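If memory is a concern, one common alternative (not from the answers above) is a prefix sum, which handles slices of any length without building a large mask:
import numpy as np
a = np.array([1, 4, 2, 1, 2, 5])
slices = np.array([(0, 3), (2, 4), (2, 6)])
cs = np.concatenate(([0], np.cumsum(a)))     # prefix sums, length len(a) + 1
sums = cs[slices[:, 1]] - cs[slices[:, 0]]   # sum of a[low:high] for each (low, high)
print(sums)                                  # [ 7  3 10]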
