I implemented a relatively straightforward logistic regression function. I save all the necessary variables such as weights, bias, x, y, etc. and then I run the training algorithm...
# launch the graph
with tf.Session() as sess:
    sess.run(init)
    # training cycle
    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/FLAGS.batch_size)
        # loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(FLAGS.batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # compute average loss
            avg_cost += c / total_batch
        # display logs per epoch step
        if (epoch + 1) % FLAGS.display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    save_path = saver.save(sess, "/tmp/model.ckpt")
The model is saved, and the prediction and accuracy of the trained model are displayed...
# list of booleans to determine the correct predictions
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
print(correct_prediction.eval({x:mnist.test.images, y:mnist.test.labels}))
# calculate total accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
This is all fine and dandy. However, now I want to be able to predict any given image using the trained model. For example, I want to feed it a picture of, say, a 7 and see what it predicts it to be.
I have another module that restores the model. First we load the variables...
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
# initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")
This is good. Now I want to run a single image through the model and get a prediction. In this example, I take the first image from the test dataset, mnist.test.images[0], and attempt to classify it with the model.
classification = sess.run(tf.argmax(pred, 1), feed_dict={x: mnist.test.images[0]})
print(classification)
I know this will not work. I get the error...
ValueError: Cannot feed value of shape (784,) for Tensor 'Placeholder:0', which has shape '(?, 784)'
I am at a loss for ideas. This question is rather long; if a straightforward answer is not possible, some guidance as to the steps I might take would be appreciated.
Your input placeholder must be of size (?, 784), the question mark meaning a variable size, which is typically the batch size. You are feeding an input of size (784,), which does not work, as the error message states.
In your case, during prediction time, the batch size is just 1, so the following should work:
import numpy as np
...
x_in = np.expand_dims(mnist.test.images[0], axis=0)
classification = sess.run(tf.argmax(pred, 1), feed_dict={x:x_in})
This assumes the input image is available as a numpy array. If it is already a tensor, the corresponding function is tf.expand_dims(..).
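For the tensor case, a minimal sketch (the names image_tensor and image_batch are hypothetical):
# hypothetical rank-1 tensor holding the 784 pixel values
image_tensor = tf.convert_to_tensor(mnist.test.images[0])
# add a leading batch dimension so the shape becomes (1, 784)
image_batch = tf.expand_dims(image_tensor, axis=0)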
Problem
I'm trying to classify some 64x64 images as a black-box exercise. The NN I have written doesn't update its weights. This is my first time writing something like this; the same code works just fine on MNIST input, but on this data it does not train like it should:
import tensorflow as tf
import numpy as np
path = ""
# x is a holder for the 64x64 image
x = tf.placeholder(tf.float32, shape=[None, 4096])
# y_ is a 1 element vector, containing the predicted probability of the label
y_ = tf.placeholder(tf.float32, [None, 1])
# define weights and balances
W = tf.Variable(tf.zeros([4096, 1]))
b = tf.Variable(tf.zeros([1]))
# define our model
y = tf.nn.softmax(tf.matmul(x, W) + b)
# loss is cross entropy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# each training step in gradient descent we want to minimize cross entropy
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
train_labels = np.reshape(np.genfromtxt(path + "train_labels.csv", delimiter=',', skip_header=1), (14999, 1))
train_data = np.genfromtxt(path + "train_samples.csv", delimiter=',', skip_header=1)
# perform 150 training steps, each taking 100 training samples
for i in range(0, 15000, 100):
    sess.run(train_step, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]})
    if i % 500 == 0:
        print(sess.run(cross_entropy, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]}))
print(sess.run(b), sess.run(W))
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.close()
How do I solve this problem?
The key to the problem is that the class number of your outputs y_ and y is 1. You should adopt one-hot encoding when you use tf.nn.softmax_cross_entropy_with_logits for classification problems in tensorflow. tf.nn.softmax_cross_entropy_with_logits first computes tf.nn.softmax on the logits, and when your class number is 1, the results are all the same. For example:
import tensorflow as tf
y = tf.constant([[1],[0],[1]],dtype=tf.float32)
y_ = tf.constant([[1],[2],[3]],dtype=tf.float32)
softmax_var = tf.nn.softmax(logits=y_)
cross_entropy = tf.multiply(y, tf.log(softmax_var))
errors = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
with tf.Session() as sess:
    print(sess.run(softmax_var))
    print(sess.run(cross_entropy))
    print(sess.run(errors))
[[1.]
[1.]
[1.]]
[[0.]
[0.]
[0.]]
[0. 0. 0.]
This means that no matter what your outputs are, your loss will be zero, so your weights and bias are never updated.
The solution is to modify the class number of y_ and y.
Suppose your class number is n.
First approach: convert the data to one-hot before feeding it, then use the following code.
y_ = tf.placeholder(tf.float32, [None, n])
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
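The conversion itself can be done in numpy before feeding. A minimal sketch, assuming train_labels holds integer class indices and n is your class number as above:
import numpy as np

# integer labels of shape (num_samples,) -> one-hot matrix of shape (num_samples, n)
labels_int = train_labels.astype(int).reshape(-1)
train_labels_one_hot = np.eye(n)[labels_int]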
Second approach: convert the data to one-hot inside the graph, after feeding it.
y_ = tf.placeholder(tf.int32, [None, 1])
y_ = tf.one_hot(y_, n) # the dtype of y_ needs to be tf.int32
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
All your initial weights are zeros. When you initialize them that way, the NN doesn't learn well. You need to initialize the weights with small random values.
See Why should weights of Neural Networks be initialized to random numbers?
"Why Not Set Weights to Zero?
We can use the same set of weights each time we train the network; for example, you could use the values of 0.0 for all weights.
In this case, the equations of the learning algorithm would fail to make any changes to the network weights, and the model will be stuck. It is important to note that the bias weight in each neuron is set to zero by default, not a small random value.
"
See https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights/
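As a rough sketch against the code in the question (keeping the original [4096, 1] shapes; the stddev value is just an illustrative choice), random initialization could look like:
# small random initial weights instead of zeros
W = tf.Variable(tf.truncated_normal([4096, 1], stddev=0.1))
# the bias can stay at zero
b = tf.Variable(tf.zeros([1]))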
I am trying to implement dropout in TensorFlow for a simple 3-layer neural network for classification and am running into problems. More specifically, I am trying to apply different dropout parameter pkeep values when I train vs. test.
I am taking the approach as follows:
1)
def create_placeholders(n_x, n_y):
    X = tf.placeholder("float", [n_x, None])
    Y = tf.placeholder("float", [n_y, None])
    pkeep = tf.placeholder(tf.float32)
    return X, Y, pkeep
2) In the function forward_propagation(X, parameters, pkeep), I perform the following:
Z1 = tf.add(tf.matmul(W1, X), b1)
A1 = tf.nn.relu(Z1)
A1d = tf.nn.dropout(A1, pkeep)
Z2 = tf.add(tf.matmul(W2, A1d),b2)
A2 = tf.nn.relu(Z2)
A2d = tf.nn.dropout(A2, pkeep)
Z3 = tf.add(tf.matmul(W3, A2d),b3)
return Z3
3) Later, when the TensorFlow session is run (in-between code lines omitted for clarity):
X, Y, pkeep = create_placeholders(n_x, n_y)
Z3 = forward_propagation(X, parameters, pkeep)
sess.run([optimizer,cost], feed_dict={X:minibatch_X, Y:minibatch_Y, pkeep: 0.75})
The above runs without giving any errors. However, I think it sets the pkeep value to 0.75 for both the training and testing runs. Minibatching is done only on the train data set, but I am not setting the pkeep value anywhere else.
I would like to set pkeep = 0.75 for training and pkeep = 1.0 for testing.
4) It does give an error when I do something like this:
x_train_eval = Z3.eval(feed_dict={X: X_train, Y: Y_train, pkeep: 0.75})
x_test_eval = Z3.eval(feed_dict={X: X_test, Y: Y_test, pkeep: 1.0})
The error message I am getting is:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float
[[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
What is the best way to pass different pkeep values for training and testing? Your advice would be much appreciated.
Assuming you have some operation for testing defined as test_op (for instance, something that evaluates accuracy on input data and labels), you can do something like this:
for i in range(num_iters):
    # Run your training process.
    _, loss = sess.run([optimizer, cost],
                       feed_dict={X: minibatch_X, Y: minibatch_Y, pkeep: 0.75})

# Test the model after training
test_accuracy = sess.run(test_op,
                         feed_dict={X: test_X, Y: test_Y, pkeep: 1.0})
Basically you aren't testing the model when you are training it, so you can just call your test_op when you are ready to test, feeding it different hyperparameters like pkeep. The same goes for validating the model periodically during training on a held-out dataset: every so often run your evaluation op on the held-out dataset, passing different hyperparameters than those used during training, and then you can save off configurations or stop early based on your validation accuracy.
Your test_op could return something like the prediction accuracy for a classifier:
correct = tf.equal(tf.argmax(y, 1), tf.argmax(predict, 1))
test_op = tf.reduce_mean(tf.cast(correct, tf.float32))
where y are your target labels and predict is the name of your prediction op.
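If you also want to validate periodically during training rather than only at the end, a rough sketch (the names val_X, val_Y, and eval_every are assumptions, not from the original code) could look like:
for i in range(num_iters):
    _, loss = sess.run([optimizer, cost],
                       feed_dict={X: minibatch_X, Y: minibatch_Y, pkeep: 0.75})
    # every eval_every steps, evaluate on held-out data with dropout disabled
    if i % eval_every == 0:
        val_accuracy = sess.run(test_op,
                                feed_dict={X: val_X, Y: val_Y, pkeep: 1.0})
        print("step", i, "validation accuracy", val_accuracy)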
In neural networks, you forward propagate to compute the logits (the scores for each label) and compare them with the real values to get the error.
Training involves minimizing the error by propagating it backwards, i.e. finding the derivative of the error with respect to each weight and then subtracting this value from the weight.
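As an illustration of that update rule, here is a minimal sketch of what a plain gradient-descent optimizer does under the hood (assuming variables W and b, a loss tensor cost, and a learning_rate are already defined; this is not the answer's own code):
# compute dE/dW and dE/db, then subtract a scaled gradient from each variable
grad_W, grad_b = tf.gradients(cost, [W, b])
update_W = tf.assign_sub(W, learning_rate * grad_W)
update_b = tf.assign_sub(b, learning_rate * grad_b)
train_step = tf.group(update_W, update_b)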
Applying different dropout parameters is easier when you separate the operations needed for training and evaluating your model.
Training is just minimizing the loss:
def loss(logits, labels):
    '''
    Calculate cross entropy loss
    '''
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    return tf.reduce_mean(cross_entropy, name='loss_op')

def train(loss, learning_rate):
    '''
    Train model by optimizing gradient descent
    '''
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.minimize(loss, name='train_op')
    return train_op
Evaluating is just calculating the accuracy:
def accuracy(logits, labels):
    '''
    Calculate accuracy of logits at predicting labels
    '''
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32),
                                 name='accuracy_op')
    return accuracy_op
Then in your Tensorflow session, after generating placeholders and adding necessary operations to the Graph etc. just feed different keep_prob_pl values into the run methods:
# Train model
sess.run(train_op, feed_dict={x_pl: x_train, y_pl: y_train, keep_prob_pl: 0.75})
# Evaluate test data
batch_size = 100
epoch = data.num_examples // batch_size
acc = 0.0
for i in range(epoch):
    batch_x, batch_y = data.next_batch(batch_size)
    feed_dict = {x_pl: batch_x, y_pl: batch_y, keep_prob_pl: 1.0}
    acc += sess.run(accuracy_op, feed_dict=feed_dict)
print(("Epoch Accuracy: {:.4f}").format(acc / epoch))
I am working through some code to understand how to save and restore checkpoints in tensorflow. To do so, I implemented a simple neural network that works with the MNIST digits and saved the .ckpt file like so:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
learning_rate = 0.001
n_input = 784 # MNIST data input (img shape = 28*28)
n_classes = 10 # MNIST total classes 0-9
#import MNIST data
mnist = input_data.read_data_sets('.', one_hot = True)
#Features and Labels
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
#Weights and biases
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
#logits = xW + b
logits = tf.add(tf.matmul(features, weights), bias)
#Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
# Calculate accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
import math
save_file = './train_model.ckpt'
batch_size = 128
n_epochs = 100
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(n_epochs):
        total_batch = math.ceil(mnist.train.num_examples / batch_size)

        # Loop over all batches
        for i in range(total_batch):
            batch_features, batch_labels = mnist.train.next_batch(batch_size)
            sess.run(
                optimizer,
                feed_dict={features: batch_features, labels: batch_labels})

        # Print status for every 10 epochs
        if epoch % 10 == 0:
            valid_accuracy = sess.run(
                accuracy,
                feed_dict={
                    features: mnist.validation.images,
                    labels: mnist.validation.labels})
            print('Epoch {:<3} - Validation Accuracy: {}'.format(
                epoch,
                valid_accuracy))

    # Save the model
    saver.save(sess, save_file)
    print('Trained Model Saved.')
This part works well, and I get the .ckpt file saved in the correct directory. The problem comes in when I try to restore the model in an attempt to work on it again. I use the following code to restore the model:
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'train_model.ckpt.meta')
    print('model restored')
and end up with the error: ValueError: No variables to save
Not too sure what the mistake here is. Any help is appreciated. Thanks in advance.
A Graph is different from a Session. A graph is the set of operations joining tensors, each of which is a symbolic representation of a set of values. A Session assigns specific values to the Variable tensors and allows you to run operations in that graph.
The checkpoint file saves the variable values - i.e. the weights and biases - but not the graph itself.
The solution is simple: re-run the graph construction (everything before the Session), then start your session and load the values from the checkpoint file.
Alternatively, you can check out this guide for exporting and importing MetaGraphs.
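For example, a rough sketch of the MetaGraph route, assuming the checkpoint files written by the training script above are in the current directory:
import tensorflow as tf

with tf.Session() as sess:
    # import_meta_graph rebuilds the graph structure from the .meta file
    saver = tf.train.import_meta_graph('./train_model.ckpt.meta')
    # restore() then loads the saved variable values; pass the checkpoint prefix, not the .meta file
    saver.restore(sess, './train_model.ckpt')
    print('model restored')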
You should tell the Saver which Variables to restore; by default, the Saver collects all the Variables from the default graph.
In your case, you should add the graph-construction code before saver = tf.train.Saver().
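In other words, the restore script could look something like this sketch, re-using the definitions from the training script (features, labels, weights, bias, save_file):
# re-build the same graph first, so the Saver has variables to restore
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))

saver = tf.train.Saver()
with tf.Session() as sess:
    # note: restore from the checkpoint prefix, not the .meta file
    saver.restore(sess, save_file)
    print('model restored')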
I am still trying to grasp how to restore a saved tensorflow graph from disk and feed dictionaries through to the model. I have looked at multiple sources but cannot troubleshoot this. The generic MLP code below (first snippet) saves files to disk, but after restoring (second snippet), my accuracy returns a value of None. Any ideas what the reason for this may be?
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
with tf.name_scope('placeholders'):
    # tf Graph input
    x = tf.placeholder("float", [None, n_input], name='x')
    y = tf.placeholder("float", [None, n_classes], name='y')
with tf.name_scope('Layer-1'):
    NN_weights_1 = tf.Variable(tf.random_normal([n_input, n_hidden_1], seed=1), name='NN_weights_1')
    NN_biases_1 = tf.Variable(tf.constant(0.0, shape=[n_hidden_1], name='Const'), name='NN_biases_1')
    func = tf.add(tf.matmul(x, NN_weights_1, name='matmul'), NN_biases_1, name='Addition')
    func_2 = tf.nn.relu(func)
with tf.name_scope('Layer-2'):
    NN_weights_2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], seed=2), name='NN_weights_2')
    NN_biases_2 = tf.Variable(tf.constant(0.0, shape=[n_hidden_2], name='Const'), name='NN_biases_2')
    func_3 = tf.add(tf.matmul(func_2, NN_weights_2, name='matmul'), NN_biases_2, name='Addition')
    func_4 = tf.nn.relu(func_3)
with tf.name_scope('Output'):
    NN_weights_3 = tf.Variable(tf.random_normal([n_hidden_2, n_classes], seed=3), name='NN_weights_3')
    NN_biases_3 = tf.Variable(tf.constant(0.0, shape=[n_classes], name='Const'), name='NN_biases_3')
    func_3 = tf.add(tf.matmul(func_4, NN_weights_3, name='matmul'), NN_biases_3, name='Addition')
    func_4 = tf.nn.sigmoid(func_3)
# Define loss and optimizer
with tf.name_scope('Operations_'):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=func_4, labels=y), name='cost')
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # Test model
    correct_prediction = tf.equal(tf.argmax(func_4, 1), tf.argmax(y, 1), name='correct_prediction')
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"), name='accuracy')
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.Saver()

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    saver.save(sess, 'my_test_model', global_step=1000)
Restoring Model and passing dictionary for accuracy:
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess,"my_test_model-1000")
graph = tf.get_default_graph()
accuracy=graph.get_operation_by_name("Operations_/accuracy")
# Access saved Variables directly
print(sess.run('Layer-1/NN_weights_1:0'))
# This prints the saved values of NN_weights_1
print ("Accuracy:", sess.run([accuracy],feed_dict={'placeholders/x:0': mnist.test.images, 'placeholders/y:0': mnist.test.labels}))
Change it to:
accuracy=graph.get_operation_by_name("Operations_/accuracy").outputs[0]
When you run an Operation object with Session.run, TensorFlow discards its output and returns None; running the Operation's output Tensor returns the computed value. See here for a detailed explanation: TensorFlow: eval restored graph
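Equivalently, you could fetch the output tensor by name instead of the Operation; a sketch using the names from the question:
# the ':0' suffix selects the first output tensor of the op
accuracy = graph.get_tensor_by_name("Operations_/accuracy:0")
print("Accuracy:", sess.run(accuracy,
                            feed_dict={'placeholders/x:0': mnist.test.images,
                                       'placeholders/y:0': mnist.test.labels}))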
I am new to neural networks. I have gone through the TensorFlow MNIST ML Beginners tutorial (the basic mnist tutorial) and am trying to get a prediction using an external image.
I have updated the mnist example provided by tensorflow.
On top of that I have added a few things:
1. Saving trained models locally.
2. Loading the saved models.
3. Preprocessing the image into 28 * 28.
I have attached the image for reference.
1. While training the models, save them locally, so I can reuse them at any point of time.
2. Once training is done, load the models.
3. Create an external image via gimp which contains any one value ranging from [0 - 9].
4. Use opencv to convert the image into a 28 * 28 image and reverse the bits as well.
5. Then try to predict.
I am able to train the models and save them properly.
The predictions I am getting are not right.
Find my code below.
1. TrainSimple.py
# Load MNIST Data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
from random import randint
from scipy import misc
# Start TensorFlow InteractiveSession
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
saver = tf.train.Saver() # defaults to saving all variables
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Train the Model
for i in range(40000):
    if (i + 1) == 40000:
        saver.save(sess, "/Users/xxxx/Desktop/TensorFlow/"+"/model.ckpt", global_step=i)
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
# Evaluate the Model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
2. loadImageAndPredict.py
from random import randint
from scipy import misc
import numpy as np
import cv2
def preProcess(invert_file):
    print "preprocessing the image " + invert_file
    image = cv2.imread(invert_file, 0)
    ret, image_thresh = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
    l, b = image.shape
    fr = 0
    lr = 0
    fc = 0
    lc = 0
    # find the first and last rows that contain the digit
    i = 0
    while len(set(image_thresh[i,])) == 1:
        i += 1
    fr = i
    i = 0
    while len(set(image_thresh[-1+i,])) == 1:
        i -= 1
    lr = i + l
    # find the first and last columns that contain the digit
    j = 0
    while len(set(image_thresh[0:, j])) == 1:
        j += 1
    fc = j
    j = 0
    while len(set(image_thresh[0:, -1+j])) == 1:
        j -= 1
    lc = j + b
    # crop to the digit, pad, resize to 28x28 and invert
    image_crop = image_thresh[fr:lr, fc:lc]
    image_padded = cv2.copyMakeBorder(image_crop, 5, 5, 5, 5, cv2.BORDER_CONSTANT, value=255)
    image_resized = cv2.resize(image_padded, (28, 28))
    image_resized = (255 - image_resized)
    cv2.imwrite(invert_file, image_resized)
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# # Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
saver = tf.train.Saver() # defaults to saving all variables - in this case w and b
# Train the Model
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
flag_1 = 0
# create an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
preProcess("4_white.png")
gray = cv2.imread("4_white.png", 0)
flatten = gray.flatten() / 255.0
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
# print images[0]
print len(images[0])
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state("/Users/xxxx/Desktop/TensorFlow")
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
print 'Neural Network predicted', my_classification[0], "for your digit"
I am not sure what mistake I have made.
Thinking that the simple model might not work, I also used this convolutional model to predict:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py
Even that does not predict properly :(
Some things to check:
Does your training loss go down?
Do you get high accuracy on training dataset?
Do you get high accuracy on the validation dataset (a part of the training set set aside)?
Do you get high accuracy on your target dataset?
If your training loss does not go down (1.), you are not learning, and need to try different hyper-parameters, such as learning rates.
If (2.) is high and (3.) is low, you are overfitting, and need to train for less long or use a higher regularization penalty. If (3.) is high and (4.) is low, your training set is not representative of your actual data; you need to make your training set more representative, or at least harder, for instance by adding distortions.
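As one illustration of the last point, here is a crude numpy sketch that distorts MNIST-style images by shifting each 28x28 image a couple of pixels (the helper name and shift range are made up for illustration):
import numpy as np

def random_shift(images, max_shift=2):
    # shift each flattened 28x28 image by a random pixel offset along both axes
    shifted = []
    for img in images.reshape(-1, 28, 28):
        dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
        shifted.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return np.array(shifted).reshape(-1, 784)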