I am working on a capsule network implementation in TensorFlow 2 (GPU). When I reshape the output of the convolution layer (a tensor), it raises an "attempt to convert a value" error. My code and the error are below.
conv1_params = {"filters": 256,"kernel_size": 9,"strides": 1,"padding":
"valid","activation":tf.nn.relu,}
conv2_params = {"filters": caps1_n_maps * caps1_n_dims,"kernel_size":9,"strides": 2,"padding":
"valid","activation": tf.nn.relu}
conv1 = tf.keras.layers.Conv2D(input_shape=(None,28,28,1), name="conv1", **conv1_params)
conv2 = tf.keras.layers.Conv2D(name="conv2", **conv2_params)
#output shape of conv1=TensorShape([None, 20, 20, 256])
#output shape of conv2=TensorShape([None, 6, 6, 256])
caps1_raw=tf.keras.backend.reshape(conv2,shape=[-1,caps1_n_caps,caps1_n_dims])
The error:
ValueError: Attempt to convert a value (<Conv2D object>) with an unsupported type (<class 'Conv2D'>) to a Tensor.
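For context on what the message means: conv2 here is a Keras Layer object, not a tensor; the layers are constructed but never called on an input, so the reshape receives the layer itself. Below is a minimal sketch of the wiring that would hand the reshape an actual output tensor (my illustration, assuming the usual CapsNet sizes caps1_n_caps = 32 * 6 * 6 and caps1_n_dims = 8, so that 6 * 6 * 256 = caps1_n_caps * caps1_n_dims):
inputs = tf.keras.Input(shape=(28, 28, 1))
x = conv1(inputs)   # TensorShape([None, 20, 20, 256])
x = conv2(x)        # TensorShape([None, 6, 6, 256])
# reshape the *output tensor* of conv2, not the layer object itself
caps1_raw = tf.keras.layers.Reshape((caps1_n_caps, caps1_n_dims))(x)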
I want to translate some code from PyTorch, which uses torch.nn.functional.unfold, to TensorFlow 2.
I saw in How to replicate PyTorch's nn.functional.unfold function in Tensorflow? and Pytorch "Unfold" equivalent in Tensorflow that I need to use the tf.image.extract_patches() function.
I have:
import numpy as np
import tensorflow as tf
import torch

image = np.random.rand(2, 3, 32, 32)
torch_image = torch.tensor(image)
torch_x = torch.nn.functional.unfold(torch_image, (3, 3), dilation=1, padding=0, stride=1)
print(torch_x.shape)

tf_image = tf.convert_to_tensor(image)
tf_image = tf.transpose(tf_image, [0, 2, 3, 1])  # NCHW -> NHWC
tf_x = tf.image.extract_patches(tf_image, sizes=[1, 3, 3, 1], strides=[1, 1, 1, 1], rates=[1, 1, 1, 1], padding="VALID")
print(tf_x.shape)
This code gives me an output torch_x with a shape of (2,27,900) and an output tf_x with a shape of (2,30,30,27).
I ran a small test:
a = sorted(list(torch_x.numpy().flatten()))
b = sorted(list(tf_x.numpy().flatten()))
print(set([i-j for i,j in zip(a,b)]))
This shows that all the values of tf_x are present in torch_x. But I don't know how to reshape tf_x so that it equals torch_x. I tried:
final_tf_x = tf.transpose(tf_x, [0, 3, 1, 2])
final_tf_x = tf.reshape(final_tf_x, [final_tf_x.shape[0], final_tf_x.shape[1], -1])
print(final_tf_x.shape)
print(np.abs(torch_x.numpy()-final_tf_x.numpy())<1e-8)
It gives me a tensor of the same shape as torch_x, but the two tensors are not equal elementwise. Can someone explain to me how to do this last step?
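For reference, here is a sketch of that last step (my own reasoning, not from the original post): unfold flattens each patch channel-first, as (C, kh, kw), while extract_patches flattens it as (kh, kw, C), so the patch axis has to be unpacked and reordered before the final reshape:
# unpack the 27-long patch axis into (kh, kw, C), reorder to channel-first,
# then flatten to unfold's (N, C*kh*kw, L) layout
patches = tf.reshape(tf_x, [2, 30, 30, 3, 3, 3])     # (N, H', W', kh, kw, C)
patches = tf.transpose(patches, [0, 5, 3, 4, 1, 2])  # (N, C, kh, kw, H', W')
final_tf_x = tf.reshape(patches, [2, 27, 900])       # same layout as torch_x
print(np.abs(torch_x.numpy() - final_tf_x.numpy()).max())  # ~0.0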
I am working with an LSTM in TensorFlow and I am getting an error when I run the code. Everything runs fine until I call tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float64), which raises a ValueError.
import numpy as np
import tensorflow as tf

wordsList = np.load('urduwords.npy')
wordVectors = np.load('urduwordsMatrix.npy')

batchSize = 24
lstmUnits = 64
numClasses = 2
iterations = 10000

tf.reset_default_graph()

labels = tf.placeholder(tf.float32, [batchSize, numClasses])
input_data = tf.placeholder(tf.int32, [batchSize, maxSeqLength])
print(labels)

data = tf.Variable(tf.zeros([batchSize, maxSeqLength, numDimensions]), dtype=tf.float32)
print(data)
data = tf.nn.embedding_lookup(wordVectors, input_data)
print(data)

lstmCell = tf.contrib.rnn.BasicLSTMCell(lstmUnits)
lstmCell = tf.contrib.rnn.DropoutWrapper(cell=lstmCell, output_keep_prob=0.1)
value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float64)
How can I resolve this error in TensorFlow?
ValueError: Input 0 of layer basic_lstm_cell_1 is incompatible with the layer: expected ndim=2, found ndim=3. Full shape received: [24, 1, 2]
The shape of input_data is
(24, 30, 1, 2)
and the shape of wordVectors is
(24053, 1, 2)
The input ends up 4-dimensional because you feed the wrong type of data to TensorFlow: wordVectors has shape (24053, 1, 2), so tf.nn.embedding_lookup(wordVectors, input_data) returns a tensor of shape (24, 30, 1, 2), while dynamic_rnn expects a 3-D input of shape [batchSize, maxSeqLength, numDimensions]. Please try to use a NumPy array or list of flat word vectors.
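A minimal sketch of that fix (my illustration, assuming each word vector is genuinely 2-dimensional and the middle axis is a stray singleton):
# drop the singleton axis so each word maps to a flat vector
wordVectors = np.squeeze(wordVectors, axis=1)           # (24053, 2)
data = tf.nn.embedding_lookup(wordVectors, input_data)  # (24, 30, 2)
value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float64)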
conv1d_transpose is not yet in the stable version of TensorFlow, but an implementation is available on GitHub.
I would like to create a 1D deconvolution network. The shape of the input is [-1, 256, 16] and the output should be [-1, 1024, 8]. The kernel size is 5 and the stride is 4.
I tried to build a 1D convolutional layer with this function:
(output_depth, input_depth) = (8, 16)
kernel_width = 7
f_shape = [kernel_width, output_depth, input_depth]
layer_1_filter = tf.Variable(tf.random_normal(f_shape))
layer_1 = tf_exp.conv1d_transpose(
    x,
    layer_1_filter,
    [-1, 1024, 8],
    stride=4,
    padding="VALID"
)
The shape of layer_1 is TensorShape([Dimension(None), Dimension(None), Dimension(None)]), but it should be [-1, 1024, 8].
What am I doing wrong? How is it possible to implement 1D deconvolution in Tensorflow?
The pull request is open as of this moment, so the API and behavior can and probably will change. Some features that one might expect from conv1d_transpose aren't supported:
output_shape requires batch size to be known statically, can't pass -1;
on the other hand, output shape is dynamic (this explains None dimension).
Also, kernel_width=7 expects in_width=255, not 256: with padding="VALID" the op checks that a forward convolution would map the requested output width back to the input width, i.e. floor((out_width - kernel_width) / stride) + 1 = in_width, so with out_width=1024 and stride=4 the kernel_width must be at most 4 to match in_width=256. The result is this demo code:
import numpy as np
import tensorflow as tf
# conv1d_transpose is the implementation from the GitHub pull request

x = tf.placeholder(shape=[None, 256, 16], dtype=tf.float32)
filter = tf.Variable(tf.random_normal([3, 8, 16]))  # [kernel_width, output_depth, input_depth]
out = conv1d_transpose(x, filter, output_shape=[100, 1024, 8], stride=4, padding="VALID")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(out, feed_dict={x: np.zeros([100, 256, 16])})
    print(result.shape)  # prints (100, 1024, 8)
The new tf.contrib.nn.conv1d_transpose has now been added to the TensorFlow API in r1.8.
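Called through that API, the demo above would look roughly like this (my sketch; I am assuming the contrib signature matches the pull-request implementation used above):
out = tf.contrib.nn.conv1d_transpose(
    x, filter, output_shape=[100, 1024, 8], stride=4, padding="VALID")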
I am trying to adapt Tensorflow r0.12 code (from https://github.com/ZZUTK/Face-Aging-CAAE) to version 1.2.1 and I am having issues with optimizer.minimize().
In this case I am using GradientDescent, but the following error message is only slightly different (in terms of the shapes reported) when I try other optimizers:
ValueError: Shape must be rank 0 but is rank 1 for 'GradientDescent/update_E_conv0/w/ApplyGradientDescent' (op: 'ApplyGradientDescent') with input shapes: [5,5,1,64], [1], [5,5,1,64].
Where [5,5] is my kernel size, 1 is the number of initial channels and 64 is the number of filters in the first convolution. This is the convolutional encoder network it is referring to:
E_conv0: (100, 128, 128, 64)
E_conv1: (100, 64, 64, 128)
E_conv2: (100, 32, 32, 256)
E_conv3: (100, 16, 16, 512)
E_conv4: (100, 8, 8, 1024)
E_conv5: (100, 4, 4, 2048)
...
This is code that's triggering the error:
self.EG_optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=EG_learning_rate,
    beta1=beta1
).minimize(
    loss=self.loss_EG,
    global_step=self.EG_global_step,
    var_list=self.E_variables + self.G_variables
)
Where:
EG_learning_rate = tf.train.exponential_decay(
    learning_rate=learning_rate,
    global_step=self.EG_global_step,
    decay_steps=size_data / self.size_batch * 2,
    decay_rate=decay_rate,
    staircase=True
)

self.EG_global_step = tf.get_variable(name='global_step', shape=1, initializer=tf.constant_initializer(0), trainable=False)
And
self.E_variables = [var for var in trainable_variables if 'E_' in var.name]
self.G_variables = [var for var in trainable_variables if 'G_' in var.name]
self.loss_EG = tf.reduce_mean(tf.abs(self.input_image - self.G))
After some debugging I now believe the problem comes from the minimize() method. The error seems to be attributed to the last parameter (var_list), but when I comment out the second or third parameter the error remains the same, just attributed to the first parameter (loss).
I have changed the code with respect to the one currently on GitHub to adapt it to the new version, so I worked a lot on tf.variable_scope(tf.get_variable_scope(), reuse=True). Could this be the cause?
Thank you so much in advance!
It's tricky to decode, since it comes from an internal op, but this error message points to the cause:
ValueError: Shape must be rank 0 but is rank 1 for 'GradientDescent/update_E_conv0/w/ApplyGradientDescent' (op: 'ApplyGradientDescent') with input shapes: [5,5,1,64], [1], [5,5,1,64].
One of the inputs to the ApplyGradientDescent op is a rank 1 tensor (i.e. a vector) when it should be a rank 0 tensor (i.e. a scalar). Looking at the definition of the ApplyGradientDescent op, the only scalar input is alpha, or the learning rate.
Therefore, it appears that the EG_learning_rate tensor is a vector when it should be a scalar. A simple fix would be to "slice" a scalar from the EG_learning_rate tensor when you construct the tf.train.GradientDescentOptimizer:
scalar_learning_rate = EG_learning_rate[0]

self.EG_optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=scalar_learning_rate,
    beta1=beta1
).minimize(
    loss=self.loss_EG,
    global_step=self.EG_global_step,
    var_list=self.E_variables + self.G_variables
)
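A note beyond the original answer (my own observation): the learning rate is likely a vector in the first place because EG_global_step is created with shape=1, which makes tf.train.exponential_decay return a 1-element vector. Defining the step as a scalar fixes it at the source:
# create global_step as a scalar (shape []) so exponential_decay
# yields a scalar learning rate
self.EG_global_step = tf.get_variable(
    name='global_step', shape=[], dtype=tf.int32,
    initializer=tf.constant_initializer(0), trainable=False)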
I've been searching for a way to visualize parameters in Caffe after training the network, and I found this link. It passes a transpose of the parameters with:
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
I don't understand why it transposes the data. And in vis_square it uses this code:
# tile the n*n filters into an n x n grid: interleave the two grid axes
# with the two spatial axes so each filter occupies one cell
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
# collapse the grid into one big image of shape (n * height, n * width, channels)
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
That is too compressed for me to understand; any explanation would be appreciated. Then, when I changed the code to get conv2 instead of conv1:
filters = net.params['conv2'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
I get
TypeError: Invalid dimensions for image data
Is there any difference between conv1 and conv2 that causes this error? How can we change the code to fix it so that it works for all layers?
Some debugging data:
net.params['conv1'][0].data.shape : (96, 3, 11, 11)
net.params['conv1'][1].data.shape : (96,)
net.params['conv2'][0].data.shape : (256, 48, 5, 5)
net.params['conv2'][1].data.shape : (256,)
net.params['conv3'][0].data.shape : (384, 256, 3, 3)
net.params['conv3'][1].data.shape : (384,)
for conv2:
data.shape[0] : 256
np.sqrt(data.shape[0]) : 16.0
np.ceil(np.sqrt(data.shape[0])) : 16.0
data.shape[0] : 256
data.shape[0:] : (256, 6, 6, 48)
data.shape[1] : 6
data.shape[1:] : (6, 6, 48)
data.ndim : 4
range(4, data.ndim + 1) : [4]
tuple(range(4, data.ndim + 1)) : (4,)
And after:
data = np.pad(data, padding, mode='constant', constant_values=1)
for conv2:
data.shape : (10, 12, 10, 12, 3)
and after
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data becomes:
data.shape : (120, 120, 3)
The code you inspected is written to visualize (i.e., convert to RGB image) convolutional filters.
The shape of conv1 filters (in your example) is (96, 3, 11, 11) which means
- 96 : you have 96 filters in conv1 of your net (i.e., num_output: 96), therefore you would wish to view 96 different filters.
- 3 : the input dimension of each filter is 3, because the input to conv1 in your net is an RGB image with three channels.
- 11, 11: the spatial size of each kernel/filter in your case is 11x11 (i.e., kernel_size: 11).
Therefore, the 96 filters can be visualized as 96 thumbnails of size 11x11x3, i.e., as small RGB images; that is also why the code transposes each filter to channels-last order before plotting.
However, when trying to visualize conv2 (or any other deeper layer) you have a problem: there is no longer an RGB meaning to the filter dimensions. The filters of conv2 work on the output features of conv1 (which in your case is a 96-dim space). To date, AFAIK, there is no straightforward way to convert 96-dim data to a simple 3D RGB representation.
So, you cannot use the same code to visualize conv2 filters; you must use some other method for visualization.
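One common workaround (a sketch on my part, not from the original answer) is to drop the RGB interpretation and render every input channel of every conv2 filter as its own grayscale tile; the tutorial's vis_square handles single-channel data:
# flatten (256, 48, 5, 5) into 256*48 single-channel 5x5 tiles
filters = net.params['conv2'][0].data
tiles = filters.reshape(-1, 5, 5)   # (12288, 5, 5)
vis_square(tiles[:1024])            # grayscale grid of the first 1024 tiles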