TensorFlow not recognising feed_dict input - python

I am running a simple neural network for linear regression. However, TensorFlow complains that my feed_dict placeholder(s) are not an element of the graph, even though my placeholders and my model are all defined within my graph, as can be seen below:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense

with tf.Graph().as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(None, 4))
    y = tf.placeholder(dtype=tf.float32, shape=(None, 4))
    model = tf.keras.Sequential([
        Dense(units=4, activation=tf.nn.relu)
    ])
    y = model(x)
    loss = tf.reduce_mean(tf.square(y - x))
    train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(train_op, feed_dict={x: np.ones(dtype='float32', shape=(4)),
                                  y: 5 * np.ones(dtype='float32', shape=(4,))})
This gives an error:
TypeError: Cannot interpret feed_dict key as Tensor: Tensor
Tensor("Placeholder:0", shape=(?, 4), dtype=float32) is not an element of this graph.
____________UPDATE________________
Following the advice from @Silgon and @Mcangus, I have modified the code:
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(None, 4))
    model = tf.keras.Sequential([
        Dense(units=4, activation=tf.nn.relu)
    ])
    y = model(x)
    loss = tf.reduce_mean(tf.square(y - x))
    train_op = tf.train.AdamOptimizer().minimize(loss)
    init_op = tf.group(tf.global_variables_initializer(),
                       tf.local_variables_initializer())

with tf.Session(graph=g) as sess:
    sess.run(init_op)
    for i in range(5):
        _, answer = sess.run([train_op, loss],
                             feed_dict={x: np.ones(dtype='float32', shape=(1, 4)),
                                        y: 5 * np.ones(dtype='float32', shape=(1, 4))})
        print(answer)
However, the model doesn't appear to be learning:
16.0
16.0
16.0
16.0
16.0

The error tells you that the variable is not an element of the graph. It might be because it's not in the same scope. One way to solve it is to have a structure like the following.
# define a graph
graph = tf.Graph()
with graph.as_default():
    # placeholder
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create_model(x, w, b)

with tf.Session(graph=graph) as sess:
    # initialize all the variables
    sess.run(init)
Also, as @Mcangus points out, be careful with the definition of your variables.

I believe your issue is this line:
y = model(x)
You overwrite y with the output of your model so it's no longer a placeholder.
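Combining the two answers, a minimal sketch of a working version might look like the following (the target placeholder is given a new name, y_true, which is an assumption and not from the original post). Note that in the updated code the fed value for y overrides the model output, so the printed loss is always mean((5-1)^2) = 16 regardless of what the weights do; keeping the target in its own placeholder avoids that.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(None, 4))
    y_true = tf.placeholder(dtype=tf.float32, shape=(None, 4))   # target, never overwritten
    model = tf.keras.Sequential([
        Dense(units=4, activation=tf.nn.relu)
    ])
    y_pred = model(x)                                             # model output, not a placeholder
    loss = tf.reduce_mean(tf.square(y_pred - y_true))             # compare prediction with target
    train_op = tf.train.AdamOptimizer().minimize(loss)
    init_op = tf.group(tf.global_variables_initializer(),
                       tf.local_variables_initializer())

with tf.Session(graph=g) as sess:
    sess.run(init_op)
    for i in range(5):
        _, answer = sess.run([train_op, loss],
                             feed_dict={x: np.ones((1, 4), dtype='float32'),
                                        y_true: 5 * np.ones((1, 4), dtype='float32')})
        print(answer)                                             # loss should now decrease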

Related

How to fix TensorFlow Linear Regression no change in MSE?

I'm working on a simple linear regression model to predict the next step in a series. I'm giving it x/y coordinate data and I want the regressor to predict where the next point on the plot will lie.
I'm using dense layers with AdamOptimizer and have my loss function set to:
tf.reduce_mean(tf.square(layer_out - y))
I'm trying to create linear regression models from scratch (I don't want to utilize the TF estimator package here).
I've seen ways to do it by manually specifying weights and biases, but nothing that goes into deep regression.
X = tf.placeholder(tf.float32, [None, self.data_class.batch_size, self.inputs])
y = tf.placeholder(tf.float32, [None, self.data_class.batch_size, self.outputs])

layer_input = tf.layers.dense(inputs=X, units=10, activation=tf.nn.relu)
layer_hidden = tf.layers.dense(inputs=layer_input, units=10, activation=tf.nn.relu)
layer_out = tf.layers.dense(inputs=layer_hidden, units=1, activation=tf.nn.relu)

cost = tf.reduce_mean(tf.square(layer_out - y))
optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
training_op = optimizer.minimize(cost)

init = tf.initialize_all_variables()
iterations = 10000

with tf.Session() as sess:
    init.run()
    for iteration in range(iterations):
        X_batch, y_batch = self.data_class.get_data_batch()
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = cost.eval(feed_dict={X: X_batch, y: y_batch})
            print(mse)

    array = []
    for i in range(len(self.data_class.dates), (len(self.data_class.dates) + self.data_class.batch_size)):
        array.append(i)
    x_pred = np.array(array).reshape(1, self.data_class.batch_size, 1)
    y_pred = sess.run(layer_out, feed_dict={X: x_pred})
    print(y_pred)

    predicted = np.array(y_pred).reshape(self.data_class.batch_size)
    predicted = np.insert(predicted, 0, self.data_class.prices[0], axis=0)

    plt.plot(self.data_class.dates, self.data_class.prices)
    array = [self.data_class.dates[0]]
    for i in range(len(self.data_class.dates), (len(self.data_class.dates) + self.data_class.batch_size)):
        array.append(i)
    plt.plot(array, predicted)
    plt.show()
When I run training I get the same loss value over and over again; it is not decreasing as it should. Why?
The issue is that I was applying a ReLU activation to the output layer, which restricts the values the output can take and prevents the regression from fitting the targets.
By specifying activation=None on the last layer, the deep regression works as intended.
Here is the updated architecture:
layer_input = tf.layers.dense(inputs=X, units=150, activation=tf.nn.relu)
layer_hidden = tf.layers.dense(inputs=layer_input, units=100, activation=tf.nn.relu)
layer_out = tf.layers.dense(inputs=layer_hidden, units=1, activation=None)

Implementation of a neural model on TensorFlow

I am trying to implement a neural network model in TensorFlow but seem to be having problems with the shapes of the placeholders. I'm new to TF, so it could just be a simple misunderstanding. Here's my code and a data sample:
_data=[[0.4,0.5,0.6,1],[0.7,0.8,0.9,0],....]
The data consists of arrays with 4 columns; the last column of each array is the label. I want to classify each array as label 0, label 1 or label 2.
import tensorflow as tf
import numpy as np

_data = datamatrix
X = tf.placeholder(tf.float32, [None, 3])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([3]))
init = tf.global_variables_initializer()

Y = tf.nn.softmax(tf.matmul(X, W) + b)

# placeholder for correct labels
Y_ = tf.placeholder(tf.float32, [None, 1])

# loss function
import time
start = time.time()
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)

sess = tf.Session()
sess.run(init)

for i in range(1000):
    # load batch of images and correct answers
    batch_X, batch_Y = [x[:3] for x in _data[:2000]], [x[-1] for x in _data[:2000]]
    train_data = {X: batch_X, Y_: batch_Y}

    # train
    sess.run(train_step, feed_dict=train_data)

    # success ?
    a, c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I got the following error message after running my code:
ValueError: Cannot feed value of shape (2000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
My desired output is the performance of the model in terms of cross-entropy and accuracy, i.e. the values from the code line below:
a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I would also appreciate any suggestions on how to improve the model, or a model that is more suitable for my data.
The shapes of the placeholder Y_ (Placeholder_1:0) and the input data batch_Y are mismatched, exactly as the error message says. Notice the 1-D vs. 2-D array.
So you should either define a 1-D placeholder:
Y_ = tf.placeholder(tf.float32, [None])
or prepare 2-D data:
batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1:] for x in _data[:2000]]
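For illustration (a small standalone example, not from the question), the only difference between the two list comprehensions is the shape of each element:

import numpy as np

_data = [[0.4, 0.5, 0.6, 1], [0.7, 0.8, 0.9, 0]]
batch_1d = np.array([x[-1] for x in _data])    # x[-1] is a scalar          -> shape (2,)
batch_2d = np.array([x[-1:] for x in _data])   # x[-1:] is a 1-element list -> shape (2, 1)
print(batch_1d.shape, batch_2d.shape)          # (2,) (2, 1); only the second matches [None, 1]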

tensorflow ValueError: Dimension 0 in both shapes must be equal

I am currently studying TensorFlow. I am trying to create a NN which can accurately assess a prediction model and assign it a score. My plan right now is to combine scores from already existing programs and run them through an MLP while comparing them to true values. I have played around with the MNIST data and I am trying to apply what I have learnt to my project. Unfortunately I have a problem:
def multilayer_perceptron(x, w1):
    # Hidden layer with RELU activation
    layer_1 = tf.matmul(x, w1)
    layer_1 = tf.nn.relu(layer_1)
    # Output layer with linear activation
    #out_layer = tf.matmul(layer_1, w2)
    return layer_1

def my_mlp(trainer, trainer_awn, learning_rate, training_epochs, n_hidden, n_input, n_output):
    trX, trY = trainer, trainer_awn
    # create placeholders
    x = tf.placeholder(tf.float32, shape=[9517, 5])
    y_ = tf.placeholder(tf.float32, shape=[9517, ])
    # create initial weights
    w1 = tf.Variable(tf.zeros([5, 1]))
    # predicted class and loss function
    y = multilayer_perceptron(x, w1)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
    # training
    train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.Session() as sess:
        # you need to initialize all variables
        sess.run(tf.initialize_all_variables())
        print("1")
        for i in range(training_epochs + 1):
            sess.run([train_step], feed_dict={x: [trX['V7'], trX['V8'], trX['V9'], trX['V10'], trX['V12']], y_: trY})
    return
The code gives me this error
ValueError: Dimension 0 in both shapes must be equal, but are 9517 and 1
This error occurs when running the line for cross_entropy. I don't understand why this is happening; if you need any more information, I would be happy to give it to you.
In your case, y has shape [9517, 1] while y_ has shape [9517]. They are not compatible. Please try to reshape y_ using tf.reshape(y_, [-1, 1]).
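As a sketch only (keeping the positional argument order used in the posted code), the reshape would go right before the loss:

y_reshaped = tf.reshape(y_, [-1, 1])   # [9517] -> [9517, 1], same shape as y
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(y, y_reshaped))

This only resolves the shape error; whether softmax cross-entropy over a single output unit is the right loss for the scoring task is a separate question.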

How can I complete the following GRU-based RNN written in TensorFlow?

So far I have written the following code:
import pickle
import numpy as np
import pandas as pd
import tensorflow as tf
# load pickled objects (x and y)
x_input, y_actual = pickle.load(open('sample_input.pickle', 'rb'))
x_input = np.reshape(x_input, (50, 1))
y_actual = np.reshape(y_actual, (50, 1))
# parameters
batch_size = 50
hidden_size = 100
# create network graph
input_data = tf.placeholder(tf.float32, [batch_size, 1])
output_data = tf.placeholder(tf.float32, [batch_size, 1])
cell = tf.nn.rnn_cell.GRUCell(hidden_size)
initial_state = cell.zero_state(batch_size, tf.float32)
hidden_state = initial_state
output_of_cell, hidden_state = cell(inputs=input_data, state=hidden_state)
init_op = tf.initialize_all_variables()
softmax_w = tf.get_variable("softmax_w", [hidden_size, 1], )
softmax_b = tf.get_variable("softmax_b", [1])
logits = tf.matmul(output_of_cell, softmax_w) + softmax_b
probabilities = tf.nn.softmax(logits)
sess = tf.Session()
sess.run(init_op)
something = sess.run([probabilities, hidden_state], feed_dict={input_data:x_input, output_data:y_actual})
#cost = tf.nn.sigmoid_cross_entropy_with_logits(logits, output_data)
#sess.close()
But I am getting an error that softmax_w/softmax_b are uninitialized variables.
I also don't see how I should use these W and b and carry out the training operation.
Something like the following:
## some cost function
## training operation minimizing cost function using gradient descent optimizer
tf.initialize_all_variables() gets the "current" set of variables from the graph. Since you are creating softmax_w and softmax_b after your call to tf.initialize_all_variables(), they are not in the list that tf.initialize_all_variables() consults, and hence not initialized when you run sess.run(init_op). The following should work :
softmax_w = tf.get_variable("softmax_w", [hidden_size, 1], )
softmax_b = tf.get_variable("softmax_b", [1])
init_op = tf.initialize_all_variables()
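For the missing cost and training step sketched by the ## comments in the question, one possibility, continuing from the reordered snippet above, is the following (it follows the sigmoid cross-entropy already commented out in the question; the learning rate and the plain gradient-descent optimizer are assumptions):

cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits, output_data))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

# create init_op only after every variable and the optimizer have been defined
init_op = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init_op)
_, loss_value = sess.run([train_op, cost],
                         feed_dict={input_data: x_input, output_data: y_actual})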

How to test a model in TensorFlow?

I'm following this tutorial:
https://www.tensorflow.org/versions/r0.9/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners
What I want to be able to do is pass in a test image x, as a numpy array, and see the resulting softmax classification values, perhaps as another numpy array. Everything I can find online about testing TensorFlow models works by passing in test values and test labels and outputting the accuracy. In my case, I want to output the model's labels based on the test values alone.
This is what I'm trying:
import tensorflow as tf
import numpy as np
from skimage import color, io
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

#so now its trained successfully, and W and b should be the stored "model"
#now to load in a test image
greyscale_test = color.rgb2gray(io.imread('4.jpeg'))
greyscale_expanded = np.expand_dims(greyscale_test, axis=0)  #now shape (1,28,28)
x = np.reshape(greyscale_expanded, (1, 784))  #now same dimensions as mnist.train.images

#initialize the variable
init_op = tf.initialize_all_variables()

#run the graph
with tf.Session() as sess:
    sess.run(init_op)  #execute init_op
    print(sess.run(feed_dict={x: x}))  #this is pretty much just a shot in the dark. What would go here?
Right now it results in this:
TypeError Traceback (most recent call last)
<ipython-input-116-f232a17507fb> in <module>()
36 sess.run(init_op) #execute init_op
---> 37 print (sess.run(feed_dict={x:x})) #this is pretty much just a shot in the dark. What would go here?
TypeError: unhashable type: 'numpy.ndarray'
So when training, sess.run is passed a train_step and a feed_dict. When I am trying to evaluate a tensor x, would it go in the feed_dict? Would I even use sess.run? (It seems I have to.) But what would the train_step be? Is there a "test_step" or "evaluate_step"?
You're getting the TypeError because you are using a (mutable) numpy.ndarray as a key for your dictionary but the key should be a tf.placeholder and the value a numpy array.
The following adjustment fixes this problem:
x_placeholder = tf.placeholder(tf.float32, [None, 784])
# ...
x = np.reshape(greyscale_expanded,(1,784))
# ...
print(sess.run([inference_step], feed_dict={x_placeholder:x}))
If you just want to perform inference on your model, this will print a numpy array with the predictions.
If you want to evaluate your model (for example compute the accuracy) you also need to feed in the corresponding ground truth labels y as in:
accuracy = sess.run([accuracy_op], feed_dict={x_placeholder: x, y_placeholder: y})
In your case, the accuracy_op could be defined as follows:
correct_predictions = tf.equal(tf.argmax(predictions, 1), tf.cast(labels, tf.int64))
accuracy_op = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))
Here, predictions is the output tensor of your model.
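If all you want is the softmax output for one test image, a rough sketch using the names above could look like this. It reuses y (the softmax output from the question) and the renamed x_placeholder, and assumes it runs in the same session that just finished the training loop, since re-running the initializer in a fresh session would reset W and b:

softmax_values = sess.run(y, feed_dict={x_placeholder: x})   # x is the (1, 784) numpy array
print(softmax_values)                                        # 10 class probabilities
print(np.argmax(softmax_values, axis=1))                     # index of the most likely digit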
Your tf.Session.run call needs a fetches argument:
tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)
https://www.tensorflow.org/versions/r0.9/api_docs/python/client.html#session-management
print (sess.run(train_step,feed_dict={x:x})) #but it also needs a feed_dict for y_
What do you mean by "print the random values that we sample"?
