numpy: multiplying a 2D array by a 1D array - python

Let us say one has an array of 2D vectors:
v = np.array([ [1, 1], [1, 1], [1, 1], [1, 1]])
v.shape = (4, 2)
And an array of scalars:
s = np.array( [2, 2, 2, 2] )
s.shape = (4,)
I would like the result:
f(v, s) = np.array([ [2, 2], [2, 2], [2, 2], [2, 2]])
Now, executing v*s is an error. Then, what is the most efficient way to go about implementing f?

Add a new singular dimension to the vector:
v*s[:,None]
This is equivalent to reshaping the vector as (len(s), 1). Then, the shapes of the multiplied objects will be (4,2) and (4,1), which are compatible due to NumPy broadcasting rules (corresponding dimensions are either equal to each other or equal to 1).
Note that when two operands have unequal numbers of dimensions, NumPy will insert extra singular dimensions "in front" of the operand with fewer dimensions. This would make your vector (1,4) which is incompatible with (4,2). Therefore, we explicitly specify where the extra dimensions are added, in order to make the shapes compatible.
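For reference, a minimal runnable version of the above (v and s as defined in the question; both spellings are equivalent):
import numpy as np

v = np.array([[1, 1], [1, 1], [1, 1], [1, 1]])  # shape (4, 2)
s = np.array([2, 2, 2, 2])                      # shape (4,)

out1 = v * s[:, None]        # (4, 2) * (4, 1) -> (4, 2)
out2 = v * s.reshape(-1, 1)  # the same, spelled with reshape
assert (out1 == out2).all()
print(out1)
# [[2 2]
#  [2 2]
#  [2 2]
#  [2 2]]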

Related

2D times 2D equals a 3d pytorch tensor

Given two 2-D pytorch tensors:
A = torch.FloatTensor([[1,2],[3,4]])
B = torch.FloatTensor([[0,0],[1,1],[2,2]])
Is there an efficient way to calculate a tensor of shape (6, 2, 2) where each entry is a column of A times each row of B?
For example, with A and B above, the 3D tensor should have the following matrices:
[[[0, 0],
[0, 0]],
[[1, 1],
[3, 3]],
[[2, 2],
[6, 6]],
[[0, 0],
[0, 0]],
[[2, 2],
[4, 4]],
[[4, 4],
[8, 8]]]
I know how to do it via a for-loop, but I am wondering if there is an efficient, vectorized way to achieve it.
PyTorch tensors implement NumPy-style broadcasting semantics, which work for this problem.
It's not clear from the question if you want to perform matrix multiplication or element-wise multiplication. In the length 2 case that you showed the two are equivalent, but this is certainly not true for higher dimensionality! Thankfully the code is almost the same so I'll just give both options.
A = torch.FloatTensor([[1, 2], [3, 4]])
B = torch.FloatTensor([[0, 0], [1, 1], [2, 2]])
# matrix multiplication
C_mm = (A.T[:, None, :, None] @ B[None, :, None, :]).flatten(0, 1)
# element-wise multiplication
C_ew = (A.T[:, None, :, None] * B[None, :, None, :]).flatten(0, 1)
Code description. A.T transposes A, and indexing with None inserts unitary dimensions, so A.T[:, None, :, None] has shape (2, 1, 2, 1) and B[None, :, None, :] has shape (1, 3, 1, 2). Since @ (matrix multiplication) operates on the last two dimensions of the tensors and broadcasts the other dimensions, the result is the matrix product of each column of A with each row of B. In the element-wise case the broadcasting is performed on every dimension. Either way the result is a (2, 3, 2, 2) tensor; to turn it into a (6, 2, 2) tensor we just flatten the first two dimensions using Tensor.flatten.
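As a quick sanity check (a sketch, not part of the original answer), you can compare the broadcasted result against a naive loop built from outer products:
import torch

A = torch.FloatTensor([[1, 2], [3, 4]])
B = torch.FloatTensor([[0, 0], [1, 1], [2, 2]])

C_mm = (A.T[:, None, :, None] @ B[None, :, None, :]).flatten(0, 1)

# reference: outer product of every column of A with every row of B
ref = torch.stack([torch.outer(A[:, i], B[j])
                   for i in range(A.shape[1])
                   for j in range(B.shape[0])])
assert torch.equal(C_mm, ref)  # shape (6, 2, 2)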

Does 1-dimensional numpy array always behave like a row vector?

I am trying to get a good understanding of the broadcasting rules in numpy, but I have noticed that I first need a good understanding of what a 1-dimensional numpy array is. I found multiple sources saying that a 1-dimensional numpy array is neither a horizontal nor a vertical vector. From that I'd expect it to behave differently depending on the operation and on the other operand. But I can't really find a case where a 1-dimensional array would behave like a column vector. For example:
a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
a + b
array([[0, 1, 2],
[1, 2, 3],
[2, 3, 4]])
which indicates that a behaves like a horizontal vector. On the other hand, if we add it to horizontal vector b:
a = np.arange(3)
b = np.arange(3)[np.newaxis, :]
a + b
array([[0, 2, 4]])
a still behaves like a horizontal vector. At the same time, a is indifferent to transposition with .T. So my question is: do 1-dimensional numpy arrays always mimic horizontal-vector behaviour? If not, in which cases do they behave like a standard vertical vector?
What you just came across is the right-alignment rule of NumPy broadcasting. When you have a vector of shape (n,) and some other array of shape (a, b, c, d, ..., z), NumPy will always broadcast the vector to shape (1, 1, ..., n) and then check whether n is compatible with z (that is, n equals z, or one of them is 1).
If you don't want that behaviour, you have to tell NumPy explicitly how you want the vector broadcast against the other operand, by adding an axis to the vector with np.newaxis. You can also use the function np.broadcast_arrays to get the broadcasted arrays.
For example,
import numpy as np
a = np.array([1, 2, 3])
b = np.eye(3)
# broadcasts a to shape (1, 3) first
# adds the vector a to rows of b
# [[1, 0, 0] [[1, 2, 3]
# [0, 1, 0] + [1, 2, 3]
# [0, 0, 1]] [1, 2, 3]]
print(a + b)
# Tell numpy explicitly, how you want
# your vector to be broadcasted
# Now, a is first broadcasted to shape (3, 1)
# and the vector a is added to the columns of b
# [[1, 0, 0] [[1, 1, 1]
# [0, 1, 0] + [2, 2, 2]
# [0, 0, 1]] [3, 3, 3]]
print(b + a[:, np.newaxis])
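As a sketch of the np.broadcast_arrays route mentioned above, applied to the same a and b:
# both operands expanded to their common broadcast shape
a2, b2 = np.broadcast_arrays(a, b)
print(a2.shape, b2.shape)  # (3, 3) (3, 3)
print(a2)
# [[1 2 3]
#  [1 2 3]
#  [1 2 3]]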

What's the difference between shape(150,) and shape (150,1)?

What's the difference between shape(150,) and shape (150,1)?
I think they are the same, I mean they both represent a column vector.
Both hold the same values, but one is a 1-D vector and the other is a 2-D matrix containing that vector. Here's an example:
import numpy as np
x = np.array([1, 2, 3, 4, 5])
y = np.array([[1], [2], [3], [4], [5]])
print(x.shape)
print(y.shape)
And the output is:
(5,)
(5, 1)
Although they both occupy the same amount of memory and hold the same values:
I think they are the same, I mean they both represent a column vector.
No, they are not, and certainly not according to NumPy (ndarrays).
The main difference is that the
shape (150,) => is a 1D array, whereas
shape (150,1) => is a 2D array
Questions like this seem to come from two misconceptions:
not realizing that (5,) is a 1-element tuple;
expecting MATLAB-like matrices.
Make an array with the handy arange function:
In [424]: x = np.arange(5)
In [425]: x.shape
Out[425]: (5,) # 1 element tuple
In [426]: x.ndim
Out[426]: 1
numpy does not automatically make matrices, 2d arrays. It does not follow MATLAB in that regard.
We can reshape that array, adding a 2nd dimension. The result is a view (sooner or later you need to learn what that means):
In [427]: y = x.reshape(5,1)
In [428]: y.shape
Out[428]: (5, 1)
In [429]: y.ndim
Out[429]: 2
The display of these 2 arrays is very different. Same numbers, but the layout and number of brackets is very different, reflecting the respective shapes:
In [430]: x
Out[430]: array([0, 1, 2, 3, 4])
In [431]: y
Out[431]:
array([[0],
[1],
[2],
[3],
[4]])
The shape difference may seem academic - until you try to do math with the arrays:
In [432]: x+x
Out[432]: array([0, 2, 4, 6, 8]) # element wise sum
In [433]: x+y
Out[433]:
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
How did that end up producing a (5,5) array? Broadcasting a (5,) array with a (5,1) array!
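If you want to predict the broadcast shape without doing the math by hand, here is a quick check (assuming NumPy >= 1.20, where np.broadcast_shapes was added):
In [434]: np.broadcast_shapes((5,), (5, 1))
Out[434]: (5, 5)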

Check shape of numpy array

I want to write a function that takes a numpy array and I want to check if it meets the requirements. One thing that confuses me is that:
np.array([1,2,3]).shape == np.array([[1,2,3],[2,3],[2,43,32]]).shape == (3,)
[1,2,3] should be allowed, while [[1,2,3],[2,3],[2,43,32]] shouldn't.
Allowed shapes:
[0, 1, 2, 3, 4]
[0, 1, 2]
[[1],[2]]
[[1, 2], [2, 3], [3, 4]]
Not Allowed:
[] (empty array is not allowed)
[[0], [1, 2]] (inner dimensions must have same size 1!=2)
[[[4,5,6],[4,3,2]],[[2,3,2],[2,3,4]]] (more than 2 dimensions)
You should start with defining what you want in terms of shape. I tried to understand it from the question, please add more details if it is not correct.
So here we have (1) an empty array is not allowed and (2) no more than two dimensions are allowed. That translates as follows:
def is_allowed(arr):
    return arr.shape != (0,) and len(arr.shape) <= 2
The first condition just compares your array's shape with the shape of an empty array. The second condition checks that the array has no more than two dimensions.
With the inner dimensions there is a problem: some of the lists you provided as examples are not really numpy arrays. If you cast np.array([[1,2,3],[2,3],[2,43,32]]), you just get an array where each element is a Python list. It is not a "real" numpy array with direct access to all the elements. See example:
>>> np.array([[1,2,3],[2,3],[2,43,32]])
array([list([1, 2, 3]), list([2, 3]), list([2, 43, 32])], dtype=object)
>>> np.array([[1,2,3],[2,3, None],[2,43,32]])
array([[1, 2, 3],
[2, 3, None],
[2, 43, 32]], dtype=object)
So I would recommend (if you are operating on plain lists) checking, without numpy, that all the inner lists have the same length.
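For illustration, a sketch of such a check that handles both NumPy behaviours (older versions build an object array from ragged lists, newer ones raise a ValueError):
import numpy as np

def is_allowed(lst):
    try:
        arr = np.asarray(lst)
    except ValueError:   # newer NumPy rejects ragged input outright
        return False
    # object dtype means the input was ragged on older NumPy versions
    return arr.dtype != object and arr.size > 0 and arr.ndim <= 2

print(is_allowed([0, 1, 2]))      # True
print(is_allowed([[1], [2]]))     # True
print(is_allowed([]))             # False (empty)
print(is_allowed([[0], [1, 2]]))  # False (ragged)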

How does the axis parameter from NumPy work?

Can someone explain exactly what the axis parameter in NumPy does?
I am terribly confused.
I'm trying to use the function myArray.sum(axis=num)
At first I thought that if the array itself has 3 dimensions, axis=0 will return three elements, consisting of the sum of all nested items in the same position. If each dimension contained five elements, I expected axis=1 to return a result of five items, and so on.
However this is not the case, and the documentation does not do a good job helping me out (they use a 3x3x3 array so it's hard to tell what's happening)
Here's what I did:
>>> e
array([[[1, 0],
[0, 0]],
[[1, 1],
[1, 0]],
[[1, 0],
[0, 1]]])
>>> e.sum(axis = 0)
array([[3, 1],
[1, 1]])
>>> e.sum(axis=1)
array([[1, 0],
[2, 1],
[1, 1]])
>>> e.sum(axis=2)
array([[1, 0],
[2, 1],
[1, 1]])
>>>
Clearly the result is not intuitive.
Clearly,
e.shape == (3, 2, 2)
Sum over an axis is a reduction operation so the specified axis disappears. Hence,
e.sum(axis=0).shape == (2, 2)
e.sum(axis=1).shape == (3, 2)
e.sum(axis=2).shape == (3, 2)
Intuitively, we are "squashing" the array along the chosen axis, and summing the numbers that get squashed together.
To understand axes intuitively, consider a (boolean) array of shape (8, 3) (the original answer illustrates this with a picture from the Physics Dept., Cornell University). ndarray.shape returns a tuple whose entries are the lengths of the corresponding dimensions: here 8 is the length of axis 0 and 3 is the length of axis 1.
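As a quick check (using the e array from the question), summing over axis 0 is the same as adding up the three (2, 2) slices by hand:
(e.sum(axis=0) == e[0] + e[1] + e[2]).all()  # True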
There are good answers for visualization; however, it might also help to think about this purely from an analytical perspective.
You can create an array of arbitrary dimension with numpy.
For example, here's a 5-dimensional array:
>>> a = np.random.rand(2, 3, 4, 5, 6)
>>> a.shape
(2, 3, 4, 5, 6)
You can access any element of this array by specifying indices. For example, here's the first element of this array:
>>> a[0, 0, 0, 0, 0]
0.0038908603263844155
Now if you replace one of the indices with a slice, you get all the elements along that dimension:
>>> a[0, 0, :, 0, 0]
array([0.00389086, 0.27394775, 0.26565889, 0.62125279])
When you apply a function like sum with the axis parameter, that dimension gets eliminated and an array with one fewer dimension than the original is created. For each cell in the new array, the operation receives the list of elements along the eliminated axis and reduces them to a scalar.
>>> np.sum(a, axis=2).shape
(2, 3, 5, 6)
Now you can check that the first element of this array is the sum of the elements above:
>>> np.sum(a, axis=2)[0, 0, 0, 0]
1.1647502999560164
>>> a[0, 0, :, 0, 0].sum()
1.1647502999560164
axis=None has the special meaning of flattening the array and applying the function to all of its numbers.
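For example, continuing with the same a (a quick check):
>>> np.isclose(np.sum(a, axis=None), a.ravel().sum())
True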
Now you can think about more complex cases where axis is not just a number but a tuple:
>>> np.sum(a, axis=(2,3)).shape
(2, 3, 6)
Note that we can use the same technique to check how this reduction was done:
>>> np.sum(a, axis=(2,3))[0,0,0]
7.889432081931909
>>> a[0, 0, :, :, 0].sum()
7.88943208193191
You can also use the same reasoning for adding a dimension to an array instead of reducing one:
>>> x = np.random.rand(3, 4)
>>> y = np.random.rand(3, 4)
# New dimension is created on specified axis
>>> np.stack([x, y], axis=2).shape
(3, 4, 2)
>>> np.stack([x, y], axis=0).shape
(2, 3, 4)
# To retrieve item i from the stack, index position i along that axis
Hope this gives you a generic and full understanding of this important parameter.
Some answers are too specific or do not address the main source of confusion. This answer attempts to provide a more general but simple explanation of the concept, with a simple example.
The main source of confusion is related to expressions such as "Axis along which the means are computed", which is the documentation for the axis argument of the numpy.mean function. What does "along which" even mean here? "Along which" essentially means that you will sum the rows (and divide by the number of rows, given that we are computing the mean) if the axis is 0, and the columns if the axis is 1. When axis is 0 (or 1), the rows (or columns) being combined can themselves be scalars, vectors, or even other multi-dimensional arrays.
In [1]: import numpy as np
In [2]: a=np.array([[1, 2], [3, 4]])
In [3]: a
Out[3]:
array([[1, 2],
[3, 4]])
In [4]: np.mean(a, axis=0)
Out[4]: array([2., 3.])
In [5]: np.mean(a, axis=1)
Out[5]: array([1.5, 3.5])
So, in the example above, np.mean(a, axis=0) returns array([2., 3.]) because (1 + 3)/2 = 2 and (2 + 4)/2 = 3: it returns an array of two numbers because it takes the mean over the rows for each of the two columns. Likewise, np.mean(a, axis=1) returns array([1.5, 3.5]) because (1 + 2)/2 = 1.5 and (3 + 4)/2 = 3.5: the mean over the columns for each of the two rows.
Both the 1st and 2nd replies are great for understanding the ndarray concept in numpy. I am giving a simple example, following this image by debaonline4u:
https://i.stack.imgur.com/O5hBF.jpg
Suppose you have a 2D array:
[1, 2, 3]
[4, 5, 6]
In numpy format it will be:
c = np.array([[1, 2, 3],
[4, 5, 6]])
Now,
c.ndim == 2 (two axes: axis 0 and axis 1)
c.shape == (2, 3) (axis0, axis1)
c.sum(axis=0) == [1+4, 2+5, 3+6] == [5, 7, 9] (sums the corresponding elements of each row, i.e. down the columns, along axis 0)
c.sum(axis=1) == [1+2+3, 4+5+6] == [6, 15] (sums the elements within each row, along axis 1)
So for your 3D array, the same reasoning applies axis by axis, as sketched below.
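Applied to the 3-D array e from the axis question above (a sketch; the original answer pointed to an image for this step):
e = np.array([[[1, 0], [0, 0]],
              [[1, 1], [1, 0]],
              [[1, 0], [0, 1]]])  # shape (3, 2, 2)

print(e.sum(axis=0))  # collapses the 3 blocks          -> shape (2, 2)
print(e.sum(axis=1))  # collapses rows in each block    -> shape (3, 2)
print(e.sum(axis=2))  # collapses columns in each block -> shape (3, 2)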
