Getting started with TensorFlow - Python

I am trying to learn TensorFlow. In the given example, how are rank and shape defined? In other words, how do I find the rank and shape of a tensor?
3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]

Rank is the number of dimensions in the tensor. Refer to:
https://en.wikipedia.org/wiki/Tensor
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree or rank of the tensor.
Shape describes the number of elements in each dimension of the tensor.
In the given example,
[1., 2., 3.]
is a set of numbers with only one dimension. This is called a vector, and geometrically it can be thought of as a single line.
[[1., 2., 3.], [4., 5., 6.]]
is a set of numbers with two dimensions. This is called a matrix, and geometrically it represents a set of such lines (each line is described by the elements of one inner bracket).
This can be generalized to more than two dimensions.
More generally, all these sets of numbers are known as tensors, and TensorFlow uses them as its core data structure.
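If you just want TensorFlow to report the rank and shape for you, tf.rank and tf.shape do so at runtime. A minimal sketch, assuming TensorFlow 2.x with eager execution:
import tensorflow as tf

t = tf.constant([[[1., 2., 3.]], [[7., 8., 9.]]])  # the rank 3 example above

print(tf.rank(t).numpy())   # 3
print(tf.shape(t).numpy())  # [2 1 3]
print(t.shape)              # (2, 1, 3), the statically known shape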

Related

PyTorch Tensor broadcasting

I'm trying to figure out how to do the following broadcast:
I have two tensors, of sizes (n1,N) and (n2,N)
What I want to do is multiply each row of the first tensor with each row of the second tensor elementwise, and then sum over each multiplied row, so that my final tensor has shape (n1, n2).
I tried this:
x1*torch.reshape(x2,(x2.size(dim=0),x2.size(dim=1),1))
But obviously this doesn't work. I can't figure out how to do this.
What you are looking for is tensordot, available in both PyTorch and NumPy.
Since you want to compute the dot product along N, which is dimension 1 of x1 and dimension 1 of x2, you need to perform a contraction along axis 1 of both tensors by supplying ([1], [1]) to the dims argument of tensordot. This means Torch will sum products of x1 and x2 elements over the specified x1 axis 1 and x2 axis 1, respectively. The dims argument can be confusing; there is a useful thread that helps explain how to use tensordot.
x1 = torch.arange(6.).reshape(2, 3)
>>> tensor([[0., 1., 2.],
            [3., 4., 5.]])
# x1 is a tensor of shape (2, 3)
x2 = torch.arange(9.).reshape(3, 3)
>>> tensor([[0., 1., 2.],
            [3., 4., 5.],
            [6., 7., 8.]])
# x2 is a tensor of shape (3, 3)
x = torch.tensordot(x1, x2, dims=([1], [1]))
>>> tensor([[ 5., 14., 23.],
            [14., 50., 86.]])
# x is a tensor of shape (2, 3)
What you describe seems to be effectively the same as performing a matrix multiplication between the first tensor and the transpose of the second tensor. This can be done as:
torch.matmul(x1, x2.T)
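As a quick sanity check (a sketch reusing the x1 and x2 defined above), the two approaches agree:
import torch

x1 = torch.arange(6.).reshape(2, 3)
x2 = torch.arange(9.).reshape(3, 3)

via_tensordot = torch.tensordot(x1, x2, dims=([1], [1]))
via_matmul = torch.matmul(x1, x2.T)

print(torch.allclose(via_tensordot, via_matmul))  # True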

Dot Product of Tensor row matrices

I have a tensor, and I need to take the matrix product of all the matrices in each of its rows, in a vectorized way:
a = np.zeros((3,4,2,2))+1
which is a 3x4 array whose elements are 2x2 matrices. I need to take the matrix product of the 2x2 matrices in each row.
The result should have shape 3x1, with each element being a 2x2 matrix filled with 8s.
I tried
a = np.zeros((3,4,2,2))+1
np.prod(a, axis= 1)
but it only gives the element-wise product:
array([[[1., 1.],
        [1., 1.]],

       [[1., 1.],
        [1., 1.]],

       [[1., 1.],
        [1., 1.]]])
I need a vectorized function, not a for-loop.
I'd appreciate it if someone has a solution using NumPy or Scipy as TensorFlow is a huge dependency to include.
How about:
import functools
import numpy as np

def np_multi_matmul(tensor: np.ndarray, axis: int) -> np.ndarray:
    # split along the reduction axis, then matrix-multiply the pieces two at a time
    arrays = np.split(tensor, tensor.shape[axis], axis=axis)
    return functools.reduce(lambda x, y: np.matmul(y, x), arrays)
Edit: first, split the array along the axis you want to reduce, then compute the matmul of two matrices at a time. matmul treats all but the last two dimensions as batch dimensions and computes the matrix product of the last two, as long as the other dimensions of the arrays match.
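A short usage sketch on the example from the question, calling the np_multi_matmul helper defined above:
import numpy as np

a = np.ones((3, 4, 2, 2))            # same as np.zeros((3, 4, 2, 2)) + 1

result = np_multi_matmul(a, axis=1)  # multiply the four 2x2 matrices in each row
print(result.shape)                  # (3, 1, 2, 2)
print(result[0, 0])                  # [[8. 8.]
                                     #  [8. 8.]]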

Pytorch softmax: What dimension to use?

The function torch.nn.functional.softmax takes two parameters: input and dim. According to its documentation, the softmax operation is applied to all slices of input along the specified dim, and will rescale them so that the elements lie in the range (0, 1) and sum to 1.
Let input be:
input = torch.randn((3, 4, 5, 6))
Suppose I compute the following sum, and I want every entry in that result to be 1:
sum = torch.sum(input, dim = 3) # sum's size is (3, 4, 5)
How should I apply softmax?
softmax(input, dim = 0) # Way Number 0
softmax(input, dim = 1) # Way Number 1
softmax(input, dim = 2) # Way Number 2
softmax(input, dim = 3) # Way Number 3
My intuition tells me it is the last one, but I am not sure. English is not my first language, and the use of the word "along" seemed confusing to me because of that.
I am not very clear on what "along" means, so I will use an example that could clarify things: suppose we have a tensor of size (s1, s2, s3, s4), and I want the entries along the last dimension to sum to 1.
Steven's answer is not correct; it is actually the reverse way. See the snippet below (originally posted as an image, transcribed here as code):
>>> x = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)
>>> F.softmax(x, dim=0)
tensor([[0.1192, 0.1192],
        [0.8808, 0.8808]])
>>> F.softmax(x, dim=1)
tensor([[0.2689, 0.7311],
        [0.2689, 0.7311]])
The easiest way I can think of to explain it: say you are given a tensor of shape (s1, s2, s3, s4) and, as you mentioned, you want the sum of all the entries along the last axis to be 1.
sum = torch.sum(input, dim = 3) # input is of shape (s1, s2, s3, s4)
Then you should call the softmax as:
softmax(input, dim = 3)
To see this more easily, you can consider a 4d tensor of shape (s1, s2, s3, s4) as a 2d tensor or matrix of shape (s1*s2*s3, s4). Now, if you want the matrix to contain values in each row (axis 0) or each column (axis 1) that sum to 1, then you can simply call the softmax function on the 2d tensor as follows:
softmax(input, dim = 0) # normalizes values along axis 0
softmax(input, dim = 1) # normalizes values along axis 1
You can see the example that Steven mentioned in his answer.
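As a small check (a sketch using the shape from the question), softmax over dim=3 makes every slice along the last dimension sum to 1:
import torch
import torch.nn.functional as F

inp = torch.randn(3, 4, 5, 6)
out = F.softmax(inp, dim=3)

print(out.sum(dim=3).shape)                                 # torch.Size([3, 4, 5])
print(torch.allclose(out.sum(dim=3), torch.ones(3, 4, 5)))  # True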
Let's consider the example in two dimensions
x = [[1, 2],
     [3, 4]]
do you want your final result to be
y = [[0.27, 0.73],
     [0.27, 0.73]]
or
y = [[0.12, 0.12],
     [0.88, 0.88]]
If it's the first option then you want dim = 1. If it's the second option you want dim = 0.
Notice that in the second example it is the columns, i.e. the zeroth dimension, whose entries sum to 1; hence the values are normalized along the zeroth dimension.
Updated 2018-07-10 to reflect that the zeroth dimension refers to the columns in PyTorch.
I am not 100% sure what your question means, but I think your confusion is simply that you don't understand what the dim parameter means. So I will explain it and provide examples.
If we have:
m0 = nn.Softmax(dim=0)
what that means is that m0 will normalize elements along the zeroth coordinate of the tensor it receives. Formally, given a tensor b of size (d0, d1) and b0 = m0(b), the following will hold:
\sum_{i_0=0}^{d_0 - 1} b0[i_0, i_1] = 1, \quad \forall i_1 \in \{0, \ldots, d_1 - 1\}
you can easily check this with a Pytorch example:
>>> b = torch.arange(0, 4, 1.0).view(-1, 2)
>>> b
tensor([[0., 1.],
        [2., 3.]])
>>> m0 = nn.Softmax(dim=0)
>>> b0 = m0(b)
>>> b0
tensor([[0.1192, 0.1192],
        [0.8808, 0.8808]])
Now, since dim=0 means going through i0 ∈ {0, 1} (i.e. going through the rows), if we choose any column i1 and sum its elements (i.e. sum over the rows), we should get 1. Check it:
>>> b0[:,0].sum()
tensor(1.0000)
>>> b0[:,1].sum()
tensor(1.0000)
as expected.
Note that we can confirm each column sums to 1 by "summing out the rows" with torch.sum(b0, dim=0); check it out:
>>> torch.sum(b0,0)
tensor([1.0000, 1.0000])
We can create a more complicated example to make sure it's really clear.
>>> a = torch.arange(0, 24, 1.0).view(-1, 3, 4)
>>> a
tensor([[[ 0.,  1.,  2.,  3.],
         [ 4.,  5.,  6.,  7.],
         [ 8.,  9., 10., 11.]],

        [[12., 13., 14., 15.],
         [16., 17., 18., 19.],
         [20., 21., 22., 23.]]])
>>> a0 = m0(a)
>>> a0[:,0,0].sum()
tensor(1.0000)
>>> a0[:,1,0].sum()
tensor(1.0000)
>>> a0[:,2,0].sum()
tensor(1.0000)
>>> a0[:,1,0].sum()
tensor(1.0000)
>>> a0[:,1,1].sum()
tensor(1.0000)
>>> a0[:,2,3].sum()
tensor(1.0000)
So, as we expected, if we sum all the elements along the zeroth coordinate, from the first value to the last value, we get 1. Everything is normalized along the zeroth dimension (the coordinate i0).
>>> torch.sum(a0,0)
tensor([[1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000],
[1.0000, 1.0000, 1.0000, 1.0000]])
Also, "along dimension 0" means that you vary the coordinate of that dimension and consider each element. It is a bit like a for loop going through the values the zeroth coordinate can take, i.e.
for i0 in range(0, d0):
    a[i0, b, c, d]
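To make that picture concrete, here is a small sketch that computes the dim=0 softmax by hand (exponentiate, then divide by the sum taken over dimension 0) and compares it with nn.Softmax(dim=0):
import torch
import torch.nn as nn

b = torch.arange(0, 4, 1.0).view(-1, 2)

manual = torch.exp(b) / torch.exp(b).sum(dim=0, keepdim=True)  # normalize down each column
m0 = nn.Softmax(dim=0)

print(torch.allclose(manual, m0(b)))  # True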
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[1, 2], [3, 4]], dtype=torch.float)
>>> s1 = F.softmax(x, dim=0)
>>> s1
tensor([[0.1192, 0.1192],
        [0.8808, 0.8808]])
>>> s2 = F.softmax(x, dim=1)
>>> s2
tensor([[0.2689, 0.7311],
        [0.2689, 0.7311]])
>>> torch.sum(s1, dim=0)
tensor([1., 1.])
>>> torch.sum(s2, dim=1)
tensor([1., 1.])
Think of what softmax is trying to achieve. It outputs the probability of one outcome against the others. Say you are trying to predict between two outcomes: is it A or is it B? If p(A) is greater than p(B), then the next step is to convert the outcome into a decision (i.e. the outcome is A if p(A) > 50%, or B if p(B) > 50%). Since we are dealing with probabilities, they should add up to 1.
Therefore, what you want is for the probabilities of EACH ROW to sum to 1, so you specify dim=1, the row sum.
On the other hand, if your model is designed to predict more than two classes, the output tensor will look something like [p(a), p(b), p(c), ..., p(i)]. What matters here is that p(a) + p(b) + p(c) + ... + p(i) = 1; for such a 1-D output you would use dim=0.
It all depends on how you define your output layer.
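For example (a hypothetical sketch with made-up sizes): a classifier that outputs a (batch, num_classes) tensor of logits needs each sample's class probabilities to sum to 1, so softmax goes over dim=1; a single 1-D vector of logits uses dim=0.
import torch
import torch.nn.functional as F

batch_logits = torch.randn(8, 5)              # 8 samples, 5 classes
probs = F.softmax(batch_logits, dim=1)
print(probs.sum(dim=1))                       # eight values, all 1.0

single_logits = torch.randn(5)                # one sample, 5 classes
print(F.softmax(single_logits, dim=0).sum())  # tensor(1.)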

How to calculate shape of a tensor in tensorflow

In order to understand tensors in TensorFlow clearly, I need to have a clear understanding of how the shape of a tensor is defined.
These are some examples from the tensorflow document:
3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
Is the below understanding of mine correct:
In order to find the shape of the tensor, we start from the outermost list and count the number of elements (or lists) inside. This count makes the first dimension. We then repeat this procedure for the inner lists and find the next dimensions of the tensor.
Please correct me if I am wrong.
Yes, your understanding is correct. If you have a valid tensor, your algorithm will return the correct dimensions of the tensor. You can write it in Python in the following way:
def get_shape(arr):
    res = []
    while isinstance(arr, list):
        res.append(len(arr))
        arr = arr[0]
    return res
Notice that for an arbitrary value of arr you also need to make sure that the dimensions match ([[1, 2, 3], [4, 5]] is not a valid tensor).
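For example, calling get_shape on the tensors from the question reproduces the documented shapes:
print(get_shape(3))                                 # []        -> rank 0
print(get_shape([1., 2., 3.]))                      # [3]       -> rank 1
print(get_shape([[1., 2., 3.], [4., 5., 6.]]))      # [2, 3]    -> rank 2
print(get_shape([[[1., 2., 3.]], [[7., 8., 9.]]]))  # [2, 1, 3] -> rank 3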

Python Median Filter for 1D numpy array

I have a numpy.array of length dim_array. I'm looking to obtain a median filter like scipy.signal.medfilt(data, window_len).
This in fact doesn't work with my numpy.array, maybe because its shape is (dim_array, 1) and not (dim_array,).
How do I obtain such a filter?
Next, another question: how can I obtain other filters, i.e. min, max, and mean?
Based on this post, we could create sliding windows to get a 2D array with those windows as its rows. The windows would merely be views into the data array, so there is no extra memory consumption, which makes this pretty efficient. Then we simply apply the desired ufunc along each row, i.e. axis=1.
Thus, for example, a sliding median could be computed like so:
np.median(strided_app(data, window_len, 1), axis=1)
For the other ufuncs, just use the respective names there: np.min, np.max, and np.mean. Please note this is meant as a generic solution covering any ufunc-supported functionality.
For the best performance, one should still look into the specific functions that are built for these purposes. For the four requested filters, we have the built-ins:
Median: scipy.signal.medfilt
Max: scipy.ndimage.filters.maximum_filter1d
Min: scipy.ndimage.filters.minimum_filter1d
Mean: scipy.ndimage.filters.uniform_filter1d
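If you would rather avoid the strided_app helper from the linked post, here is a self-contained sketch of the same idea using numpy.lib.stride_tricks.sliding_window_view (available in NumPy >= 1.20); data and window_len are placeholders:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

data = np.random.rand(100)      # 1-D input; use data.ravel() first if it is shaped (n, 1)
window_len = 5

windows = sliding_window_view(data, window_len)  # shape (96, 5), views only, no copies

med  = np.median(windows, axis=1)
mins = np.min(windows, axis=1)
maxs = np.max(windows, axis=1)
mean = np.mean(windows, axis=1)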
The fact that applying a median filter with window size 1 does not change the array gives us the freedom to apply the median filter row-wise or column-wise.
For example, this code
from scipy.ndimage import median_filter
import numpy as np
arr = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
median_filter(arr, size=3, cval=0, mode='constant')
# with cval=0, mode='constant', the input array is padded with zeros where the
# window overlaps the edges, just for visibility and ease of calculation
outputs the expected array, filtered with a (3, 3) window:
array([[0., 2., 0.],
       [2., 5., 3.],
       [0., 5., 0.]])
because median_filter automatically extends the scalar size to all dimensions, so we can get the same effect with:
median_filter(arr, size=(3, 3), cval=0, mode='constant')
Now we can also apply median_filter row-wise by setting the first element of size to 1:
median_filter(arr, size=(1, 3), cval=0, mode='constant')
Output:
array([[1., 2., 2.],
       [4., 5., 5.],
       [7., 8., 8.]])
And column-wise, with the same logic:
median_filter(arr, size=(3, 1), cval=0, mode='constant')
Output:
array([[1., 2., 3.],
       [4., 5., 6.],
       [4., 5., 6.]])
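Coming back to the original 1-D question: scipy.signal.medfilt expects a 1-D array, so an input of shape (dim_array, 1) can simply be flattened first. A minimal sketch:
import numpy as np
from scipy.signal import medfilt

data = np.random.rand(50, 1)                    # column-shaped input as in the question
filtered = medfilt(data.ravel(), kernel_size=5)
print(filtered.shape)                           # (50,)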
