I am having trouble understanding the meaning and usage of TensorFlow Tensors and Sparse Tensors.
According to the documentation
Tensor
Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels].
Sparse Tensor
TensorFlow represents a sparse tensor as three separate dense tensors: indices, values, and shape. In Python, the three tensors are collected into a SparseTensor class for ease of use. If you have separate indices, values, and shape tensors, wrap them in a SparseTensor object before passing to the ops below.
My understanding is that Tensors are used for operations, input, and output, and that a Sparse Tensor is just another representation of a (dense?) Tensor. I hope someone can further explain the differences and the use cases for them.
Matthew did a great job, but I would love to give an example to shed more light on sparse tensors.
If a tensor has lots of values that are zero, it can be called sparse.
Let's consider a sparse 1-D tensor:
[0, 7, 0, 0, 8, 0, 0, 0, 0]
A sparse representation of the same tensor will focus only on the non-zero values
values = [7,8]
We also have to remember where those values occur, by their indices:
indices = [1,4]
The one-dimensional form of indices will work with some methods for this one-dimensional example, but in general indices have multiple dimensions, so it is more consistent (and works everywhere) to represent them like this:
indices = [[1], [4]]
With values and indices alone, we don't have quite enough information yet: how many zeros are there? For that we record the dense shape of the tensor:
dense_shape = [9]
These three things together, values, indices, and dense_shape, are a sparse representation of the tensor.
In TensorFlow 2.0 it can be constructed as:
x = tf.SparseTensor(values=[7,8],indices=[[1],[4]],dense_shape=[9])
x
#o/p: <tensorflow.python.framework.sparse_tensor.SparseTensor at 0x7ff04a58c4a8>
print(x.values)
print(x.dense_shape)
print(x.indices)
#o/p:
tf.Tensor([7 8], shape=(2,), dtype=int32)
tf.Tensor([9], shape=(1,), dtype=int64)
tf.Tensor(
[[1]
[4]], shape=(2, 1), dtype=int64)
EDITED to correct indices as pointed out in the comments.
The difference involves computational speed. If a large tensor has many, many zeroes, it's faster to perform computation by iterating through the non-zero elements. Therefore, you should store the data in a SparseTensor and use the special operations for SparseTensors.
The relationship is similar for matrices and sparse matrices. Sparse matrices are common in dynamic systems, and mathematicians have developed many special methods for operating on them.
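For illustration, here is a minimal sketch (assuming TensorFlow 2.x) of using sparse-specific operations such as tf.sparse.reduce_sum on the 1-D example above; the names st and total are just illustrative:

import tensorflow as tf

# Reusing the 1-D example from above, with float values
st = tf.SparseTensor(indices=[[1], [4]], values=[7.0, 8.0], dense_shape=[9])

# Sparse-aware ops only touch the stored non-zero entries
total = tf.sparse.reduce_sum(st)
print(total)                 # tf.Tensor(15.0, shape=(), dtype=float32)

# Convert back to an ordinary dense tensor when a dense op is needed
print(tf.sparse.to_dense(st))  # [0. 7. 0. 0. 8. 0. 0. 0. 0.]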
Related
For quick debugging purposes, I'm trying to print out the SparseTensor I've just initialized.
The built-in print function just says it's a SparseTensor object, and tf.Print() gives an error. The error message does print the contents of the object, but not in a way that shows the actual entries (unless it's telling me it's empty; there are some :0s whose significance I don't know).
rows = tf.Print(rows, [rows])
TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("SparseTensor/indices:0", shape=(6, 2), dtype=int64), values=Tensor("SparseTensor/values:0", shape=(6,), dtype=float32), dense_shape=Tensor("SparseTensor/dense_shape:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.
Way 0: Run the SparseTensor and print the result
Running the graph (in this case just the SparseTensor object) returns a SparseTensorValue object which prints in the same format as the call used to initialize the SparseTensor, which is ultimately what I wanted.
with tf.Session() as sess:
    rows = sess.run(rows)
    print(rows)
Way 1: Use Print after conversion to dense matrix
To use the Print function, I could convert to a dense matrix in my case. But Print only executes when you run the graph:
rows = tf.sparse_tensor_to_dense(rows)
rows = tf.Print(rows, [rows], summarize=100)
with tf.Session() as sess:
    sess.run(rows)
Note the summarize argument: with the default setting this just printed out zeroes, since it shows only the first few entries of a sparse matrix represented in dense form!
Way 2: Use tf.test.TestCase
I found out that the TestCase.evaluate method gives me the kind of nice format I want, the same as Way 0 above:
print(str(self.evaluate(rows)))
Outputs e.g.:
SparseTensorValue(indices=array([[1, 2],
[1, 7],
[1, 8],
[2, 2],
[3, 4],
[3, 5]]), values=array([1., 1., 1., 1., 1., 1.], dtype=float32), dense_shape=array([4, 9]))
You're seeing this error because a SparseTensor is not really a Tensor: it's a meta-object that wraps 3 dense tensors.
Try using print() on your SparseTensor and you'll see the internal details:
SparseTensor(indices=Tensor(…), values=Tensor(…), dense_shape=Tensor(…))
You can print any of these "internal" tensors using tf.Print. For example, tf.Print(my_sparse_tensor.values, [my_sparse_tensor.values]) will succeed.
The SparseTensor documentation describes the internal data structure:
https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor
TensorFlow represents a sparse tensor as three separate dense tensors: indices, values, and dense_shape. In Python, the three tensors are collected into a SparseTensor class for ease of use. If you have separate indices, values, and dense_shape tensors, wrap them in a SparseTensor object before passing to the ops below.
Concretely, the sparse tensor SparseTensor(indices, values, dense_shape) comprises the following components, where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively:
indices: A 2-D int64 tensor of dense_shape [N, ndims], which specifies the indices of the elements in the sparse tensor that contain nonzero values (elements are zero-indexed). For example, indices=[[1,3], [2,4]] specifies that the elements with indexes of [1,3] and [2,4] have nonzero values.
values: A 1-D tensor of any type and dense_shape [N], which supplies the values for each element in indices. For example, given indices=[[1,3], [2,4]], the parameter values=[18, 3.6] specifies that element [1,3] of the sparse tensor has a value of 18, and element [2,4] of the tensor has a value of 3.6.
dense_shape: A 1-D int64 tensor of dense_shape [ndims], which specifies the dense_shape of the sparse tensor. Takes a list indicating the number of elements in each dimension. For example, dense_shape=[3,6] specifies a two-dimensional 3x6 tensor, dense_shape=[2,3,4] specifies a three-dimensional 2x3x4 tensor, and dense_shape=[9] specifies a one-dimensional tensor with 9 elements.
The corresponding dense tensor satisfies:
dense.shape = dense_shape
dense[tuple(indices[i])] = values[i]
By convention, indices should be sorted in row-major order (or equivalently lexicographic order on the tuples indices[i]). This is not enforced when SparseTensor objects are constructed, but most ops assume correct ordering. If the ordering of sparse tensor st is wrong, a fixed version can be obtained by calling tf.sparse_reorder(st).
Example: The sparse tensor
SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
represents the dense tensor:
[[1, 0, 0, 0]
 [0, 0, 2, 0]
 [0, 0, 0, 0]]
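If you want to verify that example yourself, here is a quick check (assuming TensorFlow 2.x with eager execution; in TF 1.x you would evaluate it in a session):

import tensorflow as tf

st = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
print(tf.sparse.to_dense(st).numpy())
# [[1 0 0 0]
#  [0 0 2 0]
#  [0 0 0 0]]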
import numpy as np

A = np.array([
    [1, 2],
    [3, 4]
])
B = np.ones(2)
A is clearly of shape 2x2.
How does numpy allow me to compute a dot product np.dot(A, B)?
[1,2] (dot) [1,1]
[3,4]
B has to have dimensions of 2x1 for a dot product, or rather this:
[1,2]  (dot)  [1]
[3,4]         [1]
This is a very silly question, but I am not able to figure out where I am going wrong here.
Earlier I thought that np.ones(2) would give me this:
[1]
[1]
But it gives me this:
[1,1]
I'm copying part of an answer I wrote earlier today:
You should resist the urge to think of numpy arrays as having rows
and columns, but instead consider them as having dimensions and
shape. This is an important point which differentiates np.array and np.matrix:
x = np.array([1, 2, 3])
print(x.ndim, x.shape) # 1 (3,)
y = np.matrix([1, 2, 3])
print(y.ndim, y.shape) # 2 (1, 3)
An n-D array can only use n integer(s) to represent its shape.
Therefore, a 1-D array only uses 1 integer to specify its shape.
In practice, combining calculations between 1-D and 2-D arrays is not
a problem for numpy, and it has been syntactically clean since the @ matrix-multiplication
operator was introduced in Python 3.5. Therefore, there is rarely a
need to resort to np.matrix in order to satisfy the urge to see the
expected row and column counts.
This behavior is by design. The NumPy docs state:
If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
Most of the rules for vector and matrix shapes relating to the dot product exist mainly so that there is a coherent method that scales up to higher tensor orders. But they aren't very important when dealing with 1st-order (vector) and 2nd-order (matrix) tensors, and those orders are what the vast majority of numpy users need.
As a result, @ and np.dot are optimized (both mathematically and in input parsing) for those orders, always summing over the last axis of the first argument and the second-to-last axis (if applicable) of the second. The "if applicable" is a sort of idiot-proofing to assure the output is what is expected in the vast majority of cases, even if the shapes don't technically fit.
Those of us who use higher-order tensors, meanwhile, are relegated to np.tensordot or np.einsum, which come complete with all the niggling little rules about dimension matching.
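To make the quoted rule concrete, here is a quick sketch using the A and B from the question (plain numpy, nothing else assumed):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])   # shape (2, 2)
B = np.ones(2)           # shape (2,): a 1-D array, not a (2, 1) column

# np.dot sums over the last axis of A and the only axis of B
print(np.dot(A, B))      # [3. 7.]
print((A @ B).shape)     # (2,) -- the result is again 1-D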
I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's Neural Networks and Deep Learning, where he says "It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector."
At first I thought (n,) referred to the orientation of the array - so it might refer to a one-column vector as opposed to a vector with only one row. But then I don't see why we need (n,) and (n, 1) both - they seem to say the same thing. I know I'm misunderstanding something but am unsure.
For reference a refers to a vector of activations that will be input to a given layer of a neural network, before being transformed by the weights and biases to produce the output vector of activations for the next layer.
EDIT: This question equivocates between a "one-column vector" (there's no such thing) and a "one-column matrix" (does actually exist). Same for "one-row vector" and "one-row matrix".
A vector is only a list of numbers, or (equivalently) a list of scalar transformations on the basis vectors of a vector space. A vector might look like a matrix when we write it out, if it only has one row (or one column). Confusingly, we will sometimes refer to a "vector of activations" but actually mean "a single-row matrix of activation values transposed so that it is a single-column."
Be aware that in neither case are we discussing a one-dimensional vector, which would be a vector defined by only one number (unless, trivially, n==1, in which case the concept of a "column" or "row" distinction would be meaningless).
In numpy an array can have a number of different dimensions, 0, 1, 2 etc.
The typical 2d array has shape (n,m) (this is a Python tuple). We tend to describe this as having n rows and m columns. So a (n,1) array has just 1 column, and a (1,m) has 1 row.
But because an array may have just 1 dimension, it is possible to have a shape (n,) (Python notation for a 1 element tuple: see here for more).
For many purposes (n,), (1,n), (n,1) arrays are equivalent (also (1,n,1,1) (4d)). They all have n terms, and can be reshaped to each other.
But sometimes that extra 1 dimension matters. A (1,m) array can multiply a (n,1) array to produce a (n,m) array. A (n,1) array can be indexed like a (n,m), with 2 indices, e.g. x[:,0], whereas a (n,) only accepts x[0].
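A quick sketch of those differences (plain numpy; the arrays are chosen arbitrarily):

import numpy as np

a = np.ones((3,))      # 1-D: no rows or columns, just 3 elements
b = np.ones((3, 1))    # 2-D: one column
c = np.ones((1, 3))    # 2-D: one row

print((b * c).shape)   # (3, 3) -- the (3,1) broadcasts against the (1,3)
print(b[:, 0])         # [1. 1. 1.] -- a (n,1) array takes 2 indices
print(a[0])            # 1.0       -- a (n,) array takes just 1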
MATLAB matrices are always 2d (or higher), so people transferring ideas from MATLAB tend to expect 2 dimensions. There is an np.matrix subclass that is supposed to imitate that.
For numpy programmers the distinctions between vector, row vector, column vector, matrix are loose and relatively unimportant. Or the use is derived from the application rather than from numpy itself. I think that's what's happening with this network book - the notation and expectations come from outside of numpy.
See also this answer for how to interpret the shapes with respect to the data stored in ndarrays. It also provides insight into how to use .reshape: https://stackoverflow.com/a/22074424/3277902
(n,) is a tuple of length 1, whose only element is n. (The syntax isn't (n) because that's just n instead of making a tuple.)
If an array has shape (n,), that means it's a 1-dimensional array with a length of n along its only dimension. It's not a row vector or a column vector; it doesn't have rows or columns. It's just a vector.
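A tiny sketch of that point (nothing beyond plain Python and numpy):

import numpy as np

x = np.zeros(5)
print(x.shape)          # (5,) -- a tuple of length 1
print(x.shape == (5,))  # True
print((5) == (5,))      # False: (5) is just the integer 5, not a tuple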
I have a tensor in the shape (n_samples, n_steps, n_features). I want to decompose this into a tensor of shape (n_samples, n_components).
I need a method of decomposition that has a .fit(...) so that I can apply the same decomposition to a new batch of samples. I have been looking at Tucker decomposition and PARAFAC decomposition, but neither has the crucial .fit(...) and .transform(...) functionality. (Or at least I think they don't?)
I could use PCA and train it on a representative sample and then call .transform(...) on the remaining samples, but I would rather have some sort of tensor decomposition that can handle all of the samples at once, so as to get a better idea of the differences between each sample.
This is what I mean by "tensor":
In fact tensors are merely a generalisation of scalars and vectors; a scalar is a zero rank tensor, and a vector is a first rank tensor. The rank (or order) of a tensor is defined by the number of directions (and hence the dimensionality of the array) required to describe it.
If you have any questions, please ask, I'll try to clarify my problem if needed.
EDIT: The best solution would be some type of kernel, but I have yet to find one that can deal with n-rank tensors and not just 2D data.
You can do this using the development (master) version of TensorLy. Specifically, you can use the new partial_tucker function (it is not yet updated in the documentation...).
Note that the following solution preserves the structure of the tensor, i.e. a tensor of shape (n_samples, n_steps, n_features) is decomposed into a (smaller) tensor of shape (n_samples, n_components_1, n_components_2).
Code
Short answer: this is a very basic class that does what you want (and it would work on tensors of arbitrary order).
import tensorly as tl
from tensorly.decomposition._tucker import partial_tucker

class TensorPCA:
    def __init__(self, ranks, modes):
        self.ranks = ranks
        self.modes = modes

    def fit(self, tensor):
        self.core, self.factors = partial_tucker(tensor, modes=self.modes, ranks=self.ranks)
        return self

    def transform(self, tensor):
        return tl.tenalg.multi_mode_dot(tensor, self.factors, modes=self.modes, transpose=True)
Usage
Given an input tensor, you can use the previous class by first instantiating it with the desired ranks (size of the core tensor) and modes on which to perform the decomposition (in your 3D case, 1 and 2 since indexing starts at zero):
tpca = TensorPCA(ranks=[4, 5], modes=[1, 2])
tpca.fit(tensor)
Given a new tensor originally called new_tensor, you can project it using the transform method:
tpca.transform(new_tensor)
Explanation
Let's go through the code with an example: first let's import the necessary bits:
import numpy as np
import tensorly as tl
from tensorly.decomposition._tucker import partial_tucker
We then generate a random tensor:
tensor = np.random.random((10, 11, 12))
The next step is to decompose it along its second and third dimensions, or modes (as the first dimension corresponds to the samples):
core, factors = partial_tucker(tensor, modes=[1, 2], ranks=[4, 5])
The core corresponds to the transformed input tensor while factors is a list of two projection matrices, one for the second mode and one for the third mode. Given a new tensor, you can project it to the same subspace (the transform method) by projecting each of its last two dimensions:
tl.tenalg.multi_mode_dot(tensor, factors, modes=[1, 2], transpose=True)
The transposition here is equivalent to an inverse since the factors are orthogonal.
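If you want to convince yourself of that, here is a small check (assuming the numpy backend, so the factors from the call above are plain arrays with orthonormal columns):

import numpy as np

for f in factors:
    # f has shape (original_dim, rank); orthonormal columns mean f.T @ f is the identity
    print(np.allclose(f.T @ f, np.eye(f.shape[1])))   # True for each factor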
Finally, a note on terminology: in general, even though it is sometimes done, it is probably best not to use the order and the rank of a tensor interchangeably. The order of a tensor is simply its number of dimensions, while the rank of a tensor is usually a much more complicated notion, which you could think of as a generalization of the notion of matrix rank.
How can I transform the following numpy array A of the form
[[1,2]
[3,4]]
into B of the form
[[[1,1,1],[2,2,2]]
[[3,3,3],[4,4,4]]]
such that I can do an element-wise multiplication with C
[[[ 5, 6, 7],[ 8, 9,10]]
[[11,12,13],[13,15,16]]]
?
The original problem is to multiply a scalar with a vector, e.g. 4 * [13,15,16]. But instead of a scalar I have a matrix of scalars (A) and instead of a vector I have a matrix of vectors (C). If there is a way without actually replicating the values like in B I would prefer that (the obvious for-loop will be too slow I guess).
You've already mentioned the answer in the comments:
A[:,:,None] * C
The reason this works is that numpy interprets None in an index as a newaxis. From the docs:
Each newaxis object in the selection tuple serves to expand the dimensions of the resulting selection by one unit-length dimension. The added dimension is the position of the newaxis object in the selection tuple.
So that slice is equivalent to doing this:
A.reshape(len(A), -1, 1)*C
And since I assume these are numpy arrays, multiplication is of course elementwise by default.
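A short check with the arrays from the question (values copied from the post):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
C = np.array([[[ 5,  6,  7], [ 8,  9, 10]],
              [[11, 12, 13], [13, 15, 16]]])

# Broadcasting: (2, 2, 1) * (2, 2, 3) -> (2, 2, 3), no explicit replication needed
result = A[:, :, None] * C
print(result)
# [[[ 5  6  7]
#   [16 18 20]]
#  [[33 36 39]
#   [52 60 64]]]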