I am working with an LSTM in TensorFlow. The code runs fine, but when I call tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float64) it raises a ValueError.
import numpy as np
import tensorflow as tf

# maxSeqLength and numDimensions are defined elsewhere in the full script
wordsList = np.load('urduwords.npy')
wordVectors = np.load('urduwordsMatrix.npy')
batchSize = 24
lstmUnits = 64
numClasses = 2
iterations = 10000
tf.reset_default_graph()
labels = tf.placeholder(tf.float32, [batchSize, numClasses])
input_data = tf.placeholder(tf.int32, [batchSize, maxSeqLength])
print(labels)
data = tf.Variable(tf.zeros([batchSize, maxSeqLength, numDimensions]),dtype=tf.float32)
print(data)
data = tf.nn.embedding_lookup(wordVectors,input_data)
print(data)
lstmCell = tf.contrib.rnn.BasicLSTMCell(lstmUnits)
lstmCell = tf.contrib.rnn.DropoutWrapper(cell=lstmCell, output_keep_prob=0.1)
value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float64)
How can I resolve this error in TensorFlow?
ValueError: Input 0 of layer basic_lstm_cell_1 is incompatible with the layer: expected ndim=2, found ndim=3. Full shape received: [24, 1, 2]
The shape of input_data is
(24, 30, 1, 2)
and the shape of wordVectors is
(24053, 1, 2)
The shape is 4-dimensional because you are feeding the wrong type of data to TensorFlow; try using a NumPy array or a list.
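Concretely (a hedged sketch, not part of the original answer): embedding_lookup on a (24053, 1, 2) matrix with (24, 30) integer indices produces a rank-4 tensor of shape (24, 30, 1, 2), while dynamic_rnn expects rank 3. If each word really carries one 2-dimensional vector, flattening the lookup table to (24053, 2) restores the expected rank:

import numpy as np
import tensorflow as tf

# Flatten the per-word axes so each row is one word vector: (24053, 1, 2) -> (24053, 2)
wordVectors = np.load('urduwordsMatrix.npy').reshape(24053, 2).astype(np.float32)

input_data = tf.placeholder(tf.int32, [24, 30])             # [batchSize, maxSeqLength]
data = tf.nn.embedding_lookup(wordVectors, input_data)      # rank 3: (24, 30, 2)

lstmCell = tf.contrib.rnn.BasicLSTMCell(64)
value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float32)  # dtype must match the embeddings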
I am trying to build an LSTM model, and I cannot get the input_shape parameter to work properly.
My data is set up so every row is a timestep and each column is an input_dim.
It is always the wrong shape: either the timestep dimension is missing, or an extra dimension sneaks into the parameter:
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 5592, 9), found shape=(None, 9)
Or
ValueError: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 14, 5592, 9)
Here is the relevant snippet of code:
import pandas as pd
import tensorflow as tf

tdf = pd.read_csv(train_csv)
tdf2 = pd.read_csv(train_csv2)
df = pd.read_csv(test_csv)
tdf_list = [tdf, tdf2]  # not shown in the original snippet; presumably the training DataFrames

# Split the data into training and testing sets
train_x = []
train_y = []
for i in range(len(tdf_list)):
    train_x.append(tdf_list[i])
    train_y.append(tdf_list[i]["Close"].shift(-1).dropna())
    train_x[i] = tdf_list[i].drop(index=tdf_list[i].index[-1]).drop(columns=["Close"])

test_x = df
test_y = df["Close"].shift(-1).dropna()
test_x = df.drop(index=df.index[-1]).drop(columns=["Close"])
test_x = test_x.values.reshape(-1, test_x.shape[1], 1)
print(train_x[0].shape)
def run_model(g):
    # Define the LSTM model
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.LSTM(neurons, return_sequences=True,
                                   input_shape=(train_x[g].shape[0], train_x[g].shape[1])))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.Dropout(rate=0.2))
    model.add(tf.keras.layers.LSTM(neurons, return_sequences=False))
    model.add(tf.keras.layers.Dense(32))
    model.add(tf.keras.layers.Activation('tanh'))
    model.add(tf.keras.layers.Dense(32))
    model.add(tf.keras.layers.Dense(1))
I have tried manually entering the input shape as integers, and even hardcoded values do not work. I have also tried every permutation of reshaping train_x[0] to make it fit. The only way I can get the code to execute is if I set
input_shape=(train_x[g].shape[1], 1)
But then it is using the data columns as timesteps...
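For reference, a minimal sketch (with stand-in data; the shapes are taken from the error messages above) of the usual Keras convention: input_shape=(timesteps, features) leaves the sample axis out, and the array passed to the model must be 3-D, (samples, timesteps, features).

import numpy as np
import tensorflow as tf

timesteps, features = 5592, 9            # from the error messages above

# Stand-in for one reshaped DataFrame: a single sample plus its batch axis
x = np.random.rand(1, timesteps, features).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(timesteps, features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
print(model.predict(x).shape)            # -> (1, 1)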
I am trying to apply a convolutional layer to a picture of shape [256, 256, 3].
I get an error when I use the tensor of the image directly:
conv1 = conv2d(input,W_conv1) +b_conv1 #<=== error
error message:
ValueError: Shape must be rank 4 but is rank 3 for 'Conv2D' (op: 'Conv2D')
with input shapes: [256,256,3], [3,3,3,1].
but when I reshape the input, conv2d works normally:
x_image = tf.reshape(input,[-1,256,256,3])
conv1 = conv2d(x_image,W_conv1) +b_conv1
If I must reshape the tensor, what is the best shape to reshape to in my case, and why?
import tensorflow as tf
import numpy as np
from PIL import Image

def img_to_tensor(img):
    return tf.convert_to_tensor(img, np.float32)

def weight_generater(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_generater(shape):
    return tf.Variable(tf.constant(.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, [1, 1, 1, 1], 'SAME')

def pool_max_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')

# read the image
img = Image.open("img.tif")

sess = tf.InteractiveSession()

# convert the image to a tensor
input = img_to_tensor(img).eval()
#print(input)

# get the image dimensions
img_dimension = tf.shape(input).eval()
print(img_dimension)
height, width, channel = img_dimension

filter_size = 3
feature_map = 32

x = tf.placeholder(tf.float32, shape=[height*width*channel])
y = tf.placeholder(tf.float32, shape=21)

# generate weights: [kernel size, kernel size, channels, number of filters]
W_conv1 = weight_generater([filter_size, filter_size, channel, 1])

# each filter W has its own bias
b_conv1 = bias_generater([feature_map])

""" I must reshape the picture
x_image = tf.reshape(input, [-1, 256, 256, 3])
"""
conv1 = conv2d(input, W_conv1) + b_conv1  # <=== error
h_conv1 = tf.nn.relu(conv1)
h_pool1 = pool_max_2x2(h_conv1)

layer1_dimension = tf.shape(h_pool1).eval()
print(layer1_dimension)
The first dimension is the batch size. If you are feeding one image at a time, you can simply make the first dimension 1; this doesn't change your data at all, it just changes the indexing to 4D:
x_image = tf.reshape(input, [1, 256, 256, 3])
If you reshape it with a -1 in the first dimension, you are saying that you will feed in a 4D batch of images (shaped [batch_size, height, width, color_channels]), and you are allowing the batch size to be dynamic (which is common).
You could also use
im = tf.expand_dims(input, axis=0)
to insert a dimension of 1 into the tensor's shape. im will be a rank 4 tensor. This way you do not have to specify the dimensions of the image.
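As an illustrative check (not part of the original answer), both approaches produce the same rank-4 shape:

import tensorflow as tf

input = tf.zeros([256, 256, 3])               # stand-in for the image tensor

x_image = tf.reshape(input, [1, 256, 256, 3])
im = tf.expand_dims(input, axis=0)

print(x_image.shape)                          # (1, 256, 256, 3)
print(im.shape)                               # (1, 256, 256, 3)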
I am trying to implement an RNN in TensorFlow. I am writing my own functions instead of using RNN cells, for practice.
The problem is sequence tagging. The input size is [32, 48, 900], where 32 is the batch size, 48 is the number of time steps, and 900 is the vocabulary size (one-hot encoded). The output is [32, 48, 145], where the first two dimensions match the input and the last dimension is the output vocabulary size (also one-hot). Basically, this is an NLP tagging problem.
I am getting the following error:
InvalidArgumentError (see above for traceback): logits and labels must
be same size: logits_size=[48,145] labels_size=[1536,145]
The actual labels_size is [32, 48, 145], but the first two dimensions get merged outside my control (FYI, 32*48 = 1536).
If I run my RNN with batch size 1, it works as expected. I could not figure out how to solve the issue; the error occurs in the last line of the code.
Here is the relevant part of the code:
inputs = tf.placeholder(shape=[None, self.seq_length, self.vocab_size], dtype=tf.float32, name="inputs")
targets = tf.placeholder(shape=[None, self.seq_length, self.output_vocab_size], dtype=tf.float32, name="targets")
init_state = tf.placeholder(shape=[1, self.hidden_size], dtype=tf.float32, name="state")
initializer = tf.random_normal_initializer(stddev=0.1)

with tf.variable_scope("RNN") as scope:
    hs_t = init_state
    ys = []
    for t, xs_t in enumerate(tf.split(inputs[0], self.seq_length, axis=0)):
        if t > 0: scope.reuse_variables()
        Wxh = tf.get_variable("Wxh", [self.vocab_size, self.hidden_size], initializer=initializer)
        Whh = tf.get_variable("Whh", [self.hidden_size, self.hidden_size], initializer=initializer)
        Why = tf.get_variable("Why", [self.hidden_size, self.output_vocab_size], initializer=initializer)
        bh = tf.get_variable("bh", [self.hidden_size], initializer=initializer)
        by = tf.get_variable("by", [self.output_vocab_size], initializer=initializer)

        hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
        ys_t = tf.matmul(hs_t, Why) + by
        ys.append(ys_t)

hprev = hs_t
output_softmax = tf.nn.softmax(ys)  # get softmax for sampling
#outputs = tf.concat(ys, axis=0)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=ys))
The problem lies in the size of ys: ys should have size [32, 48, 145], but your ys only has size [48, 145] because only inputs[0] is processed. With a batch size of 1, the target size is [1, 48, 145], which matches [48, 145] once the leading dimension is squeezed out.
To solve the problem, add a loop over the batch dimension instead of using only inputs[0], for example:
for i in range(inputs.get_shape()[0]):
    for t, xs_t in enumerate(tf.split(inputs[i], self.seq_length, axis=0)):
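A hedged, self-contained sketch of that fix (batch, sequence, and vocabulary sizes come from the question; hidden_size is an assumed stand-in): run the unrolled RNN once per example, then stack the per-example outputs into a [batch, seq, output] logits tensor before computing the loss.

import tensorflow as tf

batch_size, seq_length, vocab_size, hidden_size, out_vocab = 32, 48, 900, 100, 145

inputs = tf.placeholder(tf.float32, [batch_size, seq_length, vocab_size])
targets = tf.placeholder(tf.float32, [batch_size, seq_length, out_vocab])
init_state = tf.placeholder(tf.float32, [1, hidden_size])
initializer = tf.random_normal_initializer(stddev=0.1)

with tf.variable_scope("RNN") as scope:
    ys_all = []
    for i in range(batch_size):                    # loop over the batch dimension
        hs_t = init_state
        ys = []
        for t, xs_t in enumerate(tf.split(inputs[i], seq_length, axis=0)):
            if i > 0 or t > 0:
                scope.reuse_variables()            # share weights across steps and examples
            Wxh = tf.get_variable("Wxh", [vocab_size, hidden_size], initializer=initializer)
            Whh = tf.get_variable("Whh", [hidden_size, hidden_size], initializer=initializer)
            Why = tf.get_variable("Why", [hidden_size, out_vocab], initializer=initializer)
            bh = tf.get_variable("bh", [hidden_size], initializer=initializer)
            by = tf.get_variable("by", [out_vocab], initializer=initializer)
            hs_t = tf.tanh(tf.matmul(xs_t, Wxh) + tf.matmul(hs_t, Whh) + bh)
            ys.append(tf.matmul(hs_t, Why) + by)
        ys_all.append(tf.concat(ys, axis=0))       # [seq_length, out_vocab]

logits = tf.stack(ys_all, axis=0)                  # [batch, seq_length, out_vocab]
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=logits))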
I am trying to run the LSTM code and, as part of it, to feed word2vec word embeddings as the input, but I am getting an error in the embedding lookup.
Following is the code:
batchSize = 24
lstmUnits = 64
numClasses = 2
iterations = 100000
maxSeqLength = 250
numDimensions = 128

import tensorflow as tf

tf.reset_default_graph()

labels = tf.placeholder(tf.float32, [batchSize, numClasses])
input_data = tf.placeholder(tf.int32, [batchSize, maxSeqLength])

data = tf.Variable(tf.zeros([batchSize, maxSeqLength, numDimensions]), dtype=tf.float32)
# wordVectors shape = (13277, 128)
data = tf.nn.embedding_lookup(wordVectors, input_data)

saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())

try:
    for i in range(iterations):
        # nextBatch shape is (24, 250)
        nextBatch, nextBatchLabels = getTrainBatch()
        sess.run(optimizer, feed_dict={input_data: nextBatch, labels: nextBatchLabels})
except Exception as ex:
    print(ex)
There might be a small step I'm missing; what can it be?
When I run the code, I get an exception:
I have simplified the code you posted in order to show how to use word embeddings in your case. Also, you haven't specified everything (see the optimizer variable), so it is not possible to reproduce your code completely.
I report here a simple snippet that will allow you to get word embeddings from an input matrix of shape (batchSize, maxSeqLength).
batchSize = 24
lstmUnits = 64
numClasses = 2
iterations = 100000
maxSeqLength = 250
numDimensions = 128
numTokens = 50
import tensorflow as tf
import numpy as np
session = tf.InteractiveSession()
input_data = tf.placeholder(tf.int32, [batchSize, maxSeqLength])
# you should NOT use tf.Variable() but tf.get_variable() instead
embeddings_weights = tf.get_variable("embeddings_weights", initializer=tf.random_normal_initializer(0, 1), shape=(numTokens, numDimensions))
input_embeddings = tf.nn.embedding_lookup(embeddings_weights, input_data)
result = session.run(input_embeddings, feed_dict={input_data: np.random.randint(0, numTokens, (batchSize, maxSeqLength))})
print(result.shape)
# should print (24, 250, 128)
If you are trying to understand why you received that error, you should debug your code and check whether the training data contains indices that are not valid. In my snippet, by using np.random.randint() I forced the input elements to lie in the range [0, numTokens) in order to avoid the error you got. The error happens because TensorFlow cannot complete the lookup operation for an ID that is out of range!
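As an illustrative, hypothetical check (the names nextBatch and wordVectors come from the question; the values here are stand-ins), you can validate a batch's indices against the embedding matrix before feeding it:

import numpy as np

wordVectors = np.random.rand(13277, 128).astype(np.float32)   # stand-in embedding matrix
nextBatch = np.random.randint(0, 13277, (24, 250))            # stand-in batch of word IDs

# Any ID outside [0, vocab_size) makes tf.nn.embedding_lookup fail
assert nextBatch.min() >= 0 and nextBatch.max() < wordVectors.shape[0], \
    "batch contains word IDs outside the embedding matrix"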
I have a two-layer LSTM network (config.n_input is 3, config.n_steps is 5).
I think this may be related to the shape of my inputs, but I'm not sure how to fix it. I tried changing the projection of the LSTM so that the sizes would match, but that didn't work.
self.input_data = tf.placeholder(tf.float32, [None, config.n_steps, config.n_input], name='input')
# Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
self.initial_state = tf.placeholder(tf.float32, [None, 2*config.n_hidden], name='state')
self.targets = tf.placeholder(tf.float32, [None, config.n_classes], name='target')
_X = tf.transpose(self.input_data, [1, 0, 2]) # permute n_steps and batch_size
_X = tf.reshape(_X, [-1, config.n_input]) # (n_steps*batch_size, n_input)
input_cell = rnn_cell.LSTMCell(num_units=config.n_hidden, input_size=3, num_proj=300, forget_bias=1.0)
print(input_cell.output_size)
inner_cell = rnn_cell.LSTMCell(num_units=config.n_hidden, input_size=300)
cells = [input_cell, inner_cell]
cell = rnn.rnn_cell.MultiRNNCell(cells)
It returns the following error when I attempt to run it:
tensorflow.python.pywrap_tensorflow.StatusNotOK: Invalid argument: Expected size[1] in [0, 0], but got 600
[[Node: RNN/MultiRNNCell/Cell1/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_state_0/_3, RNN/MultiRNNCell/Cell1/Slice/begin, RNN/MultiRNNCell/Cell1/Slice/size)]]
Are there any better explanations of this error message, or any ways to easily fix it?
Add num_proj to your initial state. With num_proj set, the cell's output h has size num_proj rather than n_hidden, so the projected cell's concatenated (cell, hidden) state is larger and the state placeholder must grow accordingly:
# Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
self.initial_state = tf.placeholder(tf.float32, [None, 2*config.n_hidden + 300], name='state')
This is quite an opaque error, and it might be a good idea for you to raise it on the TF GitHub issues page!
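A hedged alternative sketch (written against the TF 1.x API, which postdates the question; n_hidden is an assumed value since config.n_hidden is not given): rather than computing the state width by hand, let the cells report their own state size. With num_proj set, a cell's concatenated state holds num_units + num_proj values instead of 2*num_units.

import tensorflow as tf
from tensorflow.contrib import rnn

n_hidden = 300   # assumed; config.n_hidden is not given in the question

input_cell = rnn.LSTMCell(num_units=n_hidden, num_proj=300, forget_bias=1.0,
                          state_is_tuple=False)
inner_cell = rnn.LSTMCell(num_units=n_hidden, state_is_tuple=False)
cell = rnn.MultiRNNCell([input_cell, inner_cell], state_is_tuple=False)

# state_size = (n_hidden + 300) for the projected cell + 2*n_hidden for the inner cell
initial_state = tf.placeholder(tf.float32, [None, cell.state_size], name='state')
print(cell.state_size)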