I am using PyTorch 0.3.0. I'm trying to selectively copy a neuron and its weights within the same layer, then replace the original neuron with another set of weights. Here's my attempt at that:
reshaped_data2 = data2.unsqueeze(0)
new_layer_data = torch.cat([new_layer.data, reshaped_data2], dim=0)
new_layer_data[i] = data1
new_layer.data.copy_(new_layer_data)
First I unsqueeze data2 to make it a 1×X tensor instead of a 1-D tensor of size X.
Then I concatenate my layer's tensor with the reshaped data2 along dimension 0.
I then replace the original data2 located at index i with data1.
Finally, I copy all of that into my layer.
The error I get is:
RuntimeError: inconsistent tensor size, expected tensor [10 x 128] and src [11 x 128] to have the same number of elements, but got 1280 and 1408 elements respectively at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorCopy.c:86
If I do a simple assignment instead of copy_, I get:
RuntimeError: The expanded size of the tensor (11) must match the existing size (10) at non-singleton dimension 1. at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:309
I understand the error, but what is the right way to go about this?
You're trying to replace a 10x128 tensor with an 11x128 tensor, which the model doesn't allow. Is new_layer initialised with the size (11, 128)?
If not, try creating your new layer with your desired size (11, 128) and then copy/assign your new_layer_data.
The solution here is to create a new model with the correct size and pass in weights as default values. No dynamic expansion solution was found.
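The row bookkeeping involved (copy neuron i to a new row, then overwrite row i) can be sketched with NumPy; the sizes and the index i below are made up for illustration, and in PyTorch the resulting array would become the initial weights of a freshly created, larger layer:

```python
import numpy as np

# Hypothetical sizes standing in for the layer in the question.
num_neurons, num_inputs = 10, 128
i = 3  # index of the neuron to copy

weights = np.random.randn(num_neurons, num_inputs).astype(np.float32)
data1 = np.random.randn(num_inputs).astype(np.float32)  # replacement weights
data2 = weights[i].copy()                               # neuron to duplicate

# Append the copied neuron as a new row, then overwrite the original row.
new_weights = np.concatenate([weights, data2[None, :]], axis=0)
new_weights[i] = data1

assert new_weights.shape == (num_neurons + 1, num_inputs)
assert np.array_equal(new_weights[-1], weights[i])  # the copy survives
assert np.array_equal(new_weights[i], data1)        # the original is replaced
```

The key point from the answer stands: the (11, 128) array cannot be copied into a (10, 128) parameter in place; it has to initialise a layer that was created with the larger size.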
Related
I am trying to reshape an array of size (14, 14, 3) to (None, 14, 14, 3). I have seen that the output of each layer in a convolutional neural network has a shape in the format (None, n, n, m).
Consider that the name of my array is arr
I tried arr[None,:,:], but that gives it a shape of (1, 14, 14, 3).
How should I do it?
https://www.tensorflow.org/api_docs/python/tf/TensorShape
A TensorShape represents a possibly-partial shape specification for a Tensor. It may be one of the following:
Partially-known shape: has a known number of dimensions, and an unknown size for one or more dimension. e.g. TensorShape([None, 256])
That is not possible in numpy. All dimensions of an ndarray are known.
The arr[None,:,:] notation adds a new size-1 dimension, giving (1, 14, 14, 3). Under broadcasting rules, such a dimension may be stretched to match a dimension of another array. In that sense we often treat the None as a flexible dimension.
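A minimal NumPy sketch of the size-1 axis and of how broadcasting stretches it (the (4, ...) batch array is made up for illustration):

```python
import numpy as np

arr = np.zeros((14, 14, 3))
batched = arr[None, ...]        # same as arr[np.newaxis]; adds a size-1 axis
assert batched.shape == (1, 14, 14, 3)

# Under broadcasting, the size-1 axis stretches to match the other operand:
batch = np.ones((4, 14, 14, 3))
assert (batch + batched).shape == (4, 14, 14, 3)
```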
I haven't worked with tensorflow, though I see a lot of questions with both tags. tensorflow should have mechanisms for transferring values to and from tensors. It knows about numpy, but numpy does not 'know' anything about tensorflow.
An ndarray is an object with known values, and its shape is used to access those values in a multidimensional way. In contrast, a tensor does not have values:
https://www.tensorflow.org/api_docs/python/tf/Tensor
It does not hold the values of that operation's output, but instead provides a means of computing those values
Looks like you can create a TensorProto from an array (and get an array back from one as well):
https://www.tensorflow.org/api_docs/python/tf/make_tensor_proto
and to make a Tensor from an array:
https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor
The shape (None, 14, 14, 3) represents (batch_size, imgH, imgW, imgChannel); imgH and imgW can be used interchangeably depending on the network and the problem.
But the batch size is given as "None" because we don't want to restrict our batch size to some specific value, since the batch size we can use depends on a lot of factors, like the memory available for our model to run.
So let's say you have 4 images of size 14x14x3. You can append each image to an array, say L1, and L1 will then have the shape 4x14x14x3, i.e. you made a batch of 4 images that you can feed to your neural network.
NOTE: here None is replaced by 4, and it stays 4 for the whole training process. Similarly, when you feed your network only one image, it assumes a batch size of 1 and sets None to 1, giving you the shape (1, 14, 14, 3).
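The batching step described above can be sketched with NumPy (the four images here are random stand-ins):

```python
import numpy as np

images = [np.random.rand(14, 14, 3) for _ in range(4)]
L1 = np.stack(images)           # shape (4, 14, 14, 3): a batch of 4 images
assert L1.shape == (4, 14, 14, 3)

single = images[0][None, ...]   # a "batch" containing one image
assert single.shape == (1, 14, 14, 3)
```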
In my model, a layer has a shape of [None, None, 40, 64]. I want to reshape this into [None, None, 40*64]. However, if I simply do the following:
reshaped_layer = Reshape((None, None, 40*64))(my_layer)
It throws an error complaining that None values not supported.
(Just to be clear, this is not tf.keras, this is just Keras).
First of all, the argument you pass to the Reshape layer is the desired shape of one sample in the batch, not of the whole batch of samples. So since each sample in the batch is a 3D tensor, the argument must describe only that 3D tensor (i.e. it excludes the batch axis).
Second, you can use -1 as the shape of at most one axis. This tells the Reshape layer to infer that axis's size automatically from the shapes of the other axes you provide. Considering these two points, it would be:
reshaped_out = Reshape((-1, 40*64))(layer_out)
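The inference that -1 performs can be checked with plain NumPy on a concrete batch (the batch size of 2 and sequence length of 5 are made up here):

```python
import numpy as np

x = np.zeros((2, 5, 40, 64))            # stands in for (batch, time, 40, 64)
# -1 tells reshape to infer that axis from the remaining sizes:
y = x.reshape(x.shape[0], -1, 40 * 64)
assert y.shape == (2, 5, 2560)
```

Keras's Reshape does the same inference per sample, with the batch axis left out of the argument.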
My Keras RNN code is as follows:
def RNN():
    inputs = Input(shape=(None, word_vector_size))
    layer = LSTM(64)(inputs)
    layer = Dense(256, name='FC1')(layer)
    layer = Dropout(0.5)(layer)
    layer = Dense(num_classes, name='out_layer')(layer)
    layer = Activation('softmax')(layer)
    model = Model(inputs=inputs, outputs=layer)
    return model
I'm getting an error when I call model.fit():
model.fit(np.array(word_vector_matrix), np.array(Y_binary), batch_size=128, epochs=10, validation_split=0.2, callbacks=[EarlyStopping(monitor='val_loss',min_delta=0.0001)])
word_vector_matrix is a 3-dim numpy array.
I have printed the following :
print(type(word_vector_matrix), type(word_vector_matrix[0]), type(word_vector_matrix[0][0]), type(word_vector_matrix[0][0][0]))
and the answer is :
<class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.float32'>
Its shape is 1745 x sentence length x word vector size.
The sentence length is variable and I'm trying to pass this entire word vector matrix to the RNN, but I get the error above.
The shape is printed like:
print(word_vector_matrix.shape)
The answer is (1745,)
The shape of the nested arrays are printed like:
print(word_vector_matrix[10].shape)
The answer is (7, 300)
The first number 7 denotes the sentence length, which is variable and changes for each sentence, and the second number is 300, which is fixed for all words and is the word vector size.
I have converted everything with np.array() as suggested by other posts, but I still get the same error. Can someone please help me? I'm using Python 3, by the way; the same thing works for me in Python 2, but not in Python 3. Thanks!
word_vector_matrix is not a 3-D ndarray. It's a 1-D ndarray of 2-D arrays, due to the variable sentence length.
Numpy allows ndarray to be list-like structures that may contain a complex element (another ndarray). In Keras however, the ndarray must be converted into a Tensor (which has to be a "mathematical" matrix of some dimension - this is required for the sake of efficient computation).
Therefore, each batch must have fixed size sentences (and not the entire data).
Here are a few alternatives:
Use batch size of 1 - simplest approach, but impedes your network's convergence. I would suggest to only use it as a temporary sanity check.
If sequence length variability is low, pad all your batches to be of the same length.
If sequence length variability is high, pad each batch with the max length within that batch. This would require you to use a custom data generator.
Note: After you padded your data, you need to use Masking, so that the padded part will be ignored during training.
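A minimal sketch of the third alternative, padding each batch to its own max length, in pure NumPy (keras.preprocessing.sequence.pad_sequences offers essentially this); the two sentences below are made-up stand-ins:

```python
import numpy as np

def pad_batch(sentences, vector_size):
    """Pad a list of (length_i, vector_size) arrays to the batch's max length."""
    max_len = max(s.shape[0] for s in sentences)
    out = np.zeros((len(sentences), max_len, vector_size), dtype=np.float32)
    for k, s in enumerate(sentences):
        out[k, :s.shape[0]] = s       # copy the sentence; the tail stays zero
    return out

batch = [np.ones((7, 300)), np.ones((3, 300))]   # variable sentence lengths
padded = pad_batch(batch, 300)
assert padded.shape == (2, 7, 300)
assert padded[1, 3:].sum() == 0.0    # the padded tail is all zeros
```

The zero tail is exactly what a Masking layer would then tell the network to ignore.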
I have the following batch shape:
[?,227,227]
And the following weight variable:
weight_tensor = tf.truncated_normal([227,227],**{'stddev':0.1,'mean':0.0})
weight_var = tf.Variable(weight_tensor)
But when I do tf.batch_matmul:
matrix = tf.batch_matmul(prev_net_2d,weight_var)
I fail with the following error:
ValueError: Shapes (?,) and () must have the same rank
So my question becomes: How do I do this?
How do I just have a weight variable in 2D that gets multiplied by each individual picture (227x227) so that I get a (227x227) output? The flat version of this operation completely exhausts the resources... plus the gradient won't update the weights correctly in the flat form...
Alternatively: how do I split the incoming tensor along the batch dimension (?,) so that I can run the tf.matmul function on each of the split tensors with my weight_variable?
You could tile the weights along the first dimension:
weight_tensor = tf.truncated_normal([227,227],**{'stddev':0.1,'mean':0.0})
weight_var = tf.Variable(weight_tensor)
weight_var_batch = tf.tile(tf.expand_dims(weight_var, axis=0), [batch_size, 1, 1])
matrix = tf.matmul(prev_net_2d,weight_var_batch)
(Note that tf.batch_matmul doesn't exist anymore; tf.matmul now handles batched inputs.)
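The effect of the tiling can be checked with NumPy, whose matmul applies the same per-example multiplication (small 4x4 "images" stand in for 227x227 here):

```python
import numpy as np

batch_size, n = 3, 4
prev = np.random.rand(batch_size, n, n)   # stands in for the (?, 227, 227) batch
w = np.random.rand(n, n)                  # the shared 2-D weight matrix

# Tile the 2-D weights into shape (batch_size, n, n)...
w_batch = np.tile(w[None, :, :], (batch_size, 1, 1))
out = np.matmul(prev, w_batch)            # batched matmul

# ...which is the same as multiplying each image by w individually:
expected = np.stack([prev[k] @ w for k in range(batch_size)])
assert out.shape == (batch_size, n, n)
assert np.allclose(out, expected)
```

Because the same w is tiled into every batch slot, gradients with respect to w accumulate over all images, which is the behaviour the question asks for.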
I'm playing around with tensorflow and ran into a problem with the following code:
def _init_parameters(self, input_data, labels):
    # the input shape is (batch_size, input_size)
    input_size = tf.shape(input_data)[1]
    # labels in one-hot format have shape (batch_size, num_classes)
    num_classes = tf.shape(labels)[1]
    stddev = 1.0 / tf.cast(input_size, tf.float32)
    w_shape = tf.pack([input_size, num_classes], 'w-shape')
    normal_dist = tf.truncated_normal(w_shape, stddev=stddev, name='normaldist')
    self.w = tf.Variable(normal_dist, name='weights')
(I'm using tf.pack as suggested in this question, since I was getting the same error)
When I run it (from a larger script that invokes this one), I get this error:
ValueError: initial_value must have a shape specified: Tensor("normaldist:0", shape=TensorShape([Dimension(None), Dimension(None)]), dtype=float32)
I tried to replicate the process in the interactive shell. Indeed, the dimensions of normal_dist are unspecified, although the supplied values do exist:
In [70]: input_size.eval()
Out[70]: 4
In [71]: num_classes.eval()
Out[71]: 3
In [72]: w_shape.eval()
Out[72]: array([4, 3], dtype=int32)
In [73]: normal_dist.eval()
Out[73]:
array([[-0.27035281, -0.223277 , 0.14694688],
[-0.16527176, 0.02180306, 0.00807841],
[ 0.22624688, 0.36425814, -0.03099642],
[ 0.25575709, -0.02765726, -0.26169327]], dtype=float32)
In [78]: normal_dist.get_shape()
Out[78]: TensorShape([Dimension(None), Dimension(None)])
This is weird. TensorFlow generates the values but can't say their shape. Am I doing something wrong?
As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using Tensor.get_shape(), and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor or by passing it to an operator like tf.shape. In many cases the static and dynamic shapes are the same, but they can differ: the static shape can be partially defined, in order to allow the dynamic shape to vary from one step to the next.
In your code, normal_dist has a partially-defined static shape because w_shape is a computed value. (TensorFlow sometimes attempts to evaluate these computed values at graph construction time, but it gets stuck at tf.pack.) It infers the shape TensorShape([Dimension(None), Dimension(None)]), which means "a matrix with an unknown number of rows and columns", because it knows that w_shape is a vector of length 2, so the resulting normal_dist must be 2-dimensional.
You have two options to deal with this. You can set the static shape as Ishamael suggests, but this requires you to know the shape at graph construction time. For example, the following may work:
normal_dist.set_shape([input_data.get_shape()[1], labels.get_shape()[1]])
Alternatively, you can pass validate_shape=False to the tf.Variable constructor. This allows you to create a variable with a partially-defined shape, but it limits the amount of static shape information that can be inferred later on in the graph.
Similar question is nicely explained in TF FAQ:
In TensorFlow, a tensor has both a static (inferred) shape and a
dynamic (true) shape. The static shape can be read using the
tf.Tensor.get_shape method: this shape is inferred from the operations
that were used to create the tensor, and may be partially complete. If
the static shape is not fully defined, the dynamic shape of a Tensor t
can be determined by evaluating tf.shape(t).
So tf.shape() returns a tensor, which always has shape (N,) and can be evaluated in a session:
a = tf.Variable(tf.zeros(shape=(2, 3, 4)))
with tf.Session() as sess:
    print(sess.run(tf.shape(a)))
On the other hand, you can extract the static shape by using x.get_shape().as_list(), and this can be computed anywhere, with no session needed.
The variable can have a dynamic shape. get_shape() returns the static shape.
In your case you have a tensor that has a dynamic shape and currently happens to hold a value that is 4x3 (but at some other time it can hold a value with a different shape, because the shape is dynamic). To set the static shape, use set_shape(w_shape). After that, the shape you set will be enforced, and the tensor will be a valid initial_value.