I am new to neural networks. I have gone through the TensorFlow MNIST For ML Beginners tutorial
(the basic TensorFlow MNIST tutorial) and I am trying to get a prediction for an external image.
I have updated the MNIST example provided by TensorFlow.
On top of that I have added a few things:
1. Saving trained models locally
2. Loading the saved models
3. Preprocessing the image into 28 * 28
I have attached the image for reference. The overall flow is:
1. While training the model, save it locally so I can reuse it at any point of time.
2. After training, load the saved model.
3. Create an external image via GIMP which contains any one digit from [0 - 9].
4. Use OpenCV to convert the image into a 28 * 28 image and invert the bits as well.
5. Then try to predict.
I am able to train the model and save it properly, but the predictions I get are not right.
Find my code below.
1. TrainSimple.py
# Load MNIST Data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
from random import randint
from scipy import misc
# Start TensorFlow InteractiveSession
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
saver = tf.train.Saver() # defaults to saving all variables
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Train the Model
for i in range(40000):
    if (i + 1) == 40000:
        saver.save(sess, "/Users/xxxx/Desktop/TensorFlow/"+"/model.ckpt", global_step=i)
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
# Evaluate the Model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
2. loadImageAndPredict.py
from random import randint
from scipy import misc
import numpy as np
import cv2
def preProcess(invert_file):
    print "preprocessing the image " + invert_file
    # Read the image as grayscale and binarize it
    image=cv2.imread(invert_file,0)
    ret,image_thresh = cv2.threshold(image,127,255,cv2.THRESH_BINARY)
    l,b=image.shape
    fr=0
    lr=0
    fc=0
    lc=0
    # Find the first/last non-blank row and column (bounding box of the digit)
    i=0
    while len(set(image_thresh[i,]))==1:
        i+=1
    fr=i
    i=0
    while len(set(image_thresh[-1+i,]))==1:
        i-=1
    lr=i+l
    j=0
    while len(set(image_thresh[0:,j]))==1:
        j+=1
    fc=j
    j=0
    while len(set(image_thresh[0:,-1+j]))==1:
        j-=1
    lc=j+b
    # Crop to the bounding box, pad with a white border, resize to 28 * 28
    # and invert so the digit is white on black, as in MNIST
    image_crop=image_thresh[fr:lr,fc:lc]
    image_padded= cv2.copyMakeBorder(image_crop,5,5,5,5,cv2.BORDER_CONSTANT,value=255)
    image_resized = cv2.resize(image_padded, (28, 28))
    image_resized = (255-image_resized)
    cv2.imwrite(invert_file, image_resized)
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# # Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
saver = tf.train.Saver() # defaults to saving all variables - in this case w and b
# Train the Model
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
flag_1 = 0
# create an an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
preProcess("4_white.png")
gray = cv2.imread("4_white.png", 0)
flatten = gray.flatten() / 255.0
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
# print images[0]
print len(images[0])
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state("/Users/xxxx/Desktop/TensorFlow")
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
print 'Neural Network predicted', my_classification[0], "for your digit"
I am not sure what mistake I have made.
Thinking that the simple model might not work, I have used this convolutional code to predict:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py
Even that does not predict properly :(
Some things to check:
1. Does your training loss go down?
2. Do you get high accuracy on the training dataset?
3. Do you get high accuracy on the validation dataset (part of the training set set aside)?
4. Do you get high accuracy on your target dataset?
If your training loss does not go down (1.), then you are not learning, and need to try different hyper-parameters, such as learning rates.
If you have high (2.) and low (3.), you are overfitting, and need to train for less long or use a higher regularization penalty. If you have high (3.) and low (4.), your training set is not representative of your actual set; you need to make your training set more representative, or at least harder, for instance by adding distortions.
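A minimal sketch (not from the original post) of logging these quantities for the question's model, assuming the x, y_, cross_entropy, accuracy and train_step ops from TrainSimple.py above are already built (accuracy would need to be defined before the loop runs):
for i in range(40000):
    batch = mnist.train.next_batch(50)
    if i % 1000 == 0:
        # check 1: does the training loss go down?
        train_loss = cross_entropy.eval(feed_dict={x: batch[0], y_: batch[1]})
        # check 3: accuracy on held-out validation data
        val_acc = accuracy.eval(feed_dict={x: mnist.validation.images,
                                           y_: mnist.validation.labels})
        print("step %d, train loss %.4f, validation accuracy %.4f" % (i, train_loss, val_acc))
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})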
Related
Problem
I'm trying to classify some 64x64 images as a black-box exercise. The NN I have written doesn't change my weights. It is my first time writing something like this; the same code works just fine on MNIST input, but on this data it does not train like it should:
import tensorflow as tf
import numpy as np
path = ""
# x is a holder for the 64x64 image
x = tf.placeholder(tf.float32, shape=[None, 4096])
# y_ is a 1 element vector, containing the predicted probability of the label
y_ = tf.placeholder(tf.float32, [None, 1])
# define weights and balances
W = tf.Variable(tf.zeros([4096, 1]))
b = tf.Variable(tf.zeros([1]))
# define our model
y = tf.nn.softmax(tf.matmul(x, W) + b)
# loss is cross entropy
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# each training step in gradient decent we want to minimize cross entropy
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
train_labels = np.reshape(np.genfromtxt(path + "train_labels.csv", delimiter=',', skip_header=1), (14999, 1))
train_data = np.genfromtxt(path + "train_samples.csv", delimiter=',', skip_header=1)
# perform 150 training steps with each taking 100 train data
for i in range(0, 15000, 100):
    sess.run(train_step, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]})
    if i % 500 == 0:
        print(sess.run(cross_entropy, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]}))
        print(sess.run(b), sess.run(W))
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.close()
How do I solve this problem?
The key to the problem is that the class dimension of your output y_ and y is 1. You should use one-hot labels when you use tf.nn.softmax_cross_entropy_with_logits on classification problems in TensorFlow. tf.nn.softmax_cross_entropy_with_logits first computes tf.nn.softmax, and with only one class the softmax output is always 1, so the results are all the same. For example:
import tensorflow as tf
y = tf.constant([[1],[0],[1]],dtype=tf.float32)
y_ = tf.constant([[1],[2],[3]],dtype=tf.float32)
softmax_var = tf.nn.softmax(logits=y_)
cross_entropy = tf.multiply(y, tf.log(softmax_var))
errors = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
with tf.Session() as sess:
    print(sess.run(softmax_var))
    print(sess.run(cross_entropy))
    print(sess.run(errors))
[[1.]
[1.]
[1.]]
[[0.]
[0.]
[0.]]
[0. 0. 0.]
This means that no matter what your output y_ is, your loss will be zero, so your weights and bias never get updated.
The solution is to change the class dimension of y_ and y.
Suppose your number of classes is n.
First approach: convert the labels to one-hot before feeding the data, then use the following code.
y_ = tf.placeholder(tf.float32, [None, n])
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
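For the first approach, a minimal sketch of the label conversion on the data side (assuming integer class ids and the train_labels.csv loading from the question; path, n and the variable names are just illustrative):
import numpy as np

# integer class ids in [0, n), flattened to shape (num_samples,)
ids = np.genfromtxt(path + "train_labels.csv", delimiter=',', skip_header=1).astype(int).reshape(-1)
one_hot_labels = np.eye(n)[ids]    # shape (num_samples, n)
# feed one_hot_labels as y_ in the feed_dict instead of the raw (num_samples, 1) column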
Second approach: convert the labels to one-hot inside the graph, after feeding the data.
y_ = tf.placeholder(tf.int32, [None, 1])
y_ = tf.one_hot(y_,n) # your dtype of y_ need to be tf.int32
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
All your initial weights are zeros. When they are initialized that way, the NN doesn't learn well. You need to initialize the weights with random values.
See Why should weights of Neural Networks be initialized to random numbers?
"Why Not Set Weights to Zero?
We can use the same set of weights each time we train the network; for example, you could use the values of 0.0 for all weights.
In this case, the equations of the learning algorithm would fail to make any changes to the network weights, and the model will be stuck. It is important to note that the bias weight in each neuron is set to zero by default, not a small random value.
"
See
https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights/
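As a minimal sketch of that suggestion (shapes follow the 64x64 example above; n is the class count from the previous answer, and stddev=0.1 is just an illustrative choice):
import tensorflow as tf

# small random initial weights instead of zeros; a zero bias is fine
W = tf.Variable(tf.truncated_normal([4096, n], stddev=0.1))
b = tf.Variable(tf.zeros([n]))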
I am working through some code to understand how to save and restore checkpoints in TensorFlow. To do so, I implemented a simple neural network that works with MNIST digits and saved the .ckpt file like so:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
learning_rate = 0.001
n_input = 784 # MNIST data input (img shape = 28*28)
n_classes = 10 # MNIST total classes 0-9
#import MNIST data
mnist = input_data.read_data_sets('.', one_hot = True)
#Features and Labels
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
#Weights and biases
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
#logits = xW + b
logits = tf.add(tf.matmul(features, weights), bias)
#Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
# Calculate accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
import math
save_file = './train_model.ckpt'
batch_size = 128
n_epochs = 100
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(n_epochs):
        total_batch = math.ceil(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_features, batch_labels = mnist.train.next_batch(batch_size)
            sess.run(
                optimizer,
                feed_dict={features: batch_features, labels: batch_labels})
        # Print status for every 10 epochs
        if epoch % 10 == 0:
            valid_accuracy = sess.run(
                accuracy,
                feed_dict={
                    features: mnist.validation.images,
                    labels: mnist.validation.labels})
            print('Epoch {:<3} - Validation Accuracy: {}'.format(
                epoch,
                valid_accuracy))
    # Save the model
    saver.save(sess, save_file)
    print('Trained Model Saved.')
This part works well, and I get the .ckpt file saved in the correct directory. The problem comes in when I try to restore the model in an attempt to work on it again. I use the following code to restore the model:
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'train_model.ckpt.meta')
    print('model restored')
and end up with the error: ValueError: No variables to save
I am not sure what the mistake here is. Any help is appreciated. Thanks in advance.
A Graph is different to the Session. A graph is the set of operations joining tensors, each of which is a symbolic representation of a set of values. A Session assigns specific values to the Variable tensors, and allows you to run operations in that graph.
The ckpt file saves variable values - i.e. the values stored in the weights and biases - but not the graph itself.
The solution is simple: re-run the graph construction (everything before the Session), then start your session and load the values from the ckpt file.
Alternatively, you can check out this guide for exporting and importing MetaGraphs.
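If you go the MetaGraph route, a minimal sketch (assuming the train_model.ckpt files written by the code above sit in the current directory) would be:
import tensorflow as tf

with tf.Session() as sess:
    # import_meta_graph rebuilds the saved graph and returns a Saver for it
    saver = tf.train.import_meta_graph('./train_model.ckpt.meta')
    saver.restore(sess, './train_model.ckpt')
    print('model restored')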
You should tell the Saver which Variables to restore; by default, the Saver gets all the Variables from the default graph.
In your case, you should add the graph-construction code before saver = tf.train.Saver().
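A minimal sketch of that, reusing the graph-construction lines from the question before creating the Saver (only the restore call at the end is new, and restore takes the checkpoint prefix, not the .meta file):
import tensorflow as tf

n_input = 784
n_classes = 10
save_file = './train_model.ckpt'

# Re-run the graph construction so the Saver has variables to restore
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
logits = tf.add(tf.matmul(features, weights), bias)

saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, save_file)
    print('model restored')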
I wish to run the training phase of my TensorFlow code on my GPU, and then, once I finish and store the results, load the saved model and run its test phase on the CPU.
I have created this code (I have included only a part of it for reference, because otherwise it is huge; I know the rules are to include fully functional code, and I apologise for that).
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn.python.ops import rnn_cell, rnn
# Import MNIST data http://yann.lecun.com/exdb/mnist/
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x_train = mnist.train.images
# Check that the dataset contains 55,000 rows and 784 columns
N,D = x_train.shape
tf.reset_default_graph()
sess = tf.InteractiveSession()
x = tf.placeholder("float", [None, n_steps,n_input])
y_true = tf.placeholder("float", [None, n_classes])
keep_prob = tf.placeholder(tf.float32,shape=[])
learning_rate = tf.placeholder(tf.float32,shape=[])
#[............Build the RNN graph model.............]
sess.run(tf.global_variables_initializer())
# Because I am using my GPU for the training, I avoid allocating the whole
# mnist.validation set because of a memory error, so I fragment it into
# small batches (100)
x_validation_bin, y_validation_bin = mnist.validation.next_batch(batch_size)
x_validation_bin = binarize(x_validation_bin, threshold=0.1)
x_validation_bin = x_validation_bin.reshape((-1,n_steps,n_input))
for k in range(epochs):
    steps = 0
    for i in range(training_iters):
        # Stochastic descent
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        batch_x = binarize(batch_x, threshold=0.1)
        batch_x = batch_x.reshape((-1,n_steps,n_input))
        sess.run(train_step, feed_dict={x: batch_x, y_true: batch_y, keep_prob: keep_prob, eta: learning_rate})
        if do_report_err == 1:
            if steps % display_step == 0:
                # Calculate batch accuracy
                acc = sess.run(accuracy, feed_dict={x: batch_x, y_true: batch_y, keep_prob: 1.0})
                # Calculate batch loss
                loss = sess.run(total_loss, feed_dict={x: batch_x, y_true: batch_y, keep_prob: 1.0})
                print("Iter " + str(i) + ", Minibatch Loss= " + "{:.6f}".format(loss) + ", Training Accuracy = " + "{:.5f}".format(acc))
        steps += 1
    # Validation Accuracy and Cost
    validation_accuracy = sess.run(accuracy, feed_dict={x: x_validation_bin, y_true: y_validation_bin, keep_prob: 1.0})
    validation_cost = sess.run(total_loss, feed_dict={x: x_validation_bin, y_true: y_validation_bin, keep_prob: 1.0})
    validation_loss_array.append(final_validation_cost)
    validation_accuracy_array.append(final_validation_accuracy)
    saver.save(sess, savefilename)
    total_epochs = total_epochs + 1
np.savez(datasavefilename, epochs_saved=total_epochs, learning_rate_saved=learning_rate, keep_prob_saved=best_keep_prob, validation_loss_array_saved=validation_loss_array, validation_accuracy_array_saved=validation_accuracy_array, modelsavefilename=savefilename)
After that, my model has been trained successfully and saved the relevant data, so I wish to load the file and do a final train and test part in the model but using my CPU this time. The reason is the GPU can't handle the whole dataset of mnist.train.images and mnist.train.labels.
So, manually I select this part and I run it:
with tf.device('/cpu:0'):
    # Initialise variables
    sess.run(tf.global_variables_initializer())
    # Accuracy and Cost
    saver.restore(sess, savefilename)
    x_train_bin = binarize(mnist.train.images, threshold=0.1)
    x_train_bin = x_train_bin.reshape((-1,n_steps,n_input))
    final_train_accuracy = sess.run(accuracy, feed_dict={x: x_train_bin, y_true: mnist.train.labels, keep_prob: 1.0})
    final_train_cost = sess.run(total_loss, feed_dict={x: x_train_bin, y_true: mnist.train.labels, keep_prob: 1.0})
    x_test_bin = binarize(mnist.test.images, threshold=0.1)
    x_test_bin = x_test_bin.reshape((-1,n_steps,n_input))
    final_test_accuracy = sess.run(accuracy, feed_dict={x: x_test_bin, y_true: mnist.test.labels, keep_prob: 1.0})
    final_test_cost = sess.run(total_loss, feed_dict={x: x_test_bin, y_true: mnist.test.labels, keep_prob: 1.0})
But I get an OOM GPU memory error, which doesn't make sense to me since I think I have forced the program to rely on the CPU. I did not put a sess.close() command in the first (batch-training) code, but I am not sure if that is really the reason behind it. I actually followed this post for the CPU part.
Any suggestions on how to run the last part on the CPU only?
with tf.device() statements only apply to graph building, not to execution, so doing sess.run inside a device block is equivalent to not having the device at all.
To do what you want to do you need to build separate training and test graphs, which share variables.
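A self-contained sketch of that idea (this is not the asker's RNN; the tiny stand-in model, the /tmp checkpoint path and the allow_soft_placement option are illustrative assumptions): build one graph for training, build a second graph for evaluation under tf.device('/cpu:0'), and share the variable values between them through a checkpoint.
import tensorflow as tf

def build_model():
    # stand-in for the real graph-construction code
    x = tf.placeholder(tf.float32, [None, 4])
    w = tf.Variable(tf.zeros([4, 2]), name="w")
    y = tf.matmul(x, w)
    return x, y

# Training graph: the device is chosen here, at construction time
train_graph = tf.Graph()
with train_graph.as_default():
    with tf.device('/gpu:0'):
        x_tr, y_tr = build_model()
    init_op = tf.global_variables_initializer()
    train_saver = tf.train.Saver()

with tf.Session(graph=train_graph,
                config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(init_op)
    # ... run the training feed_dicts here ...
    train_saver.save(sess, "/tmp/shared_model.ckpt")

# Evaluation graph, constructed on the CPU; it shares the trained
# variable values with the training graph through the checkpoint
test_graph = tf.Graph()
with test_graph.as_default():
    with tf.device('/cpu:0'):
        x_te, y_te = build_model()
    test_saver = tf.train.Saver()

with tf.Session(graph=test_graph) as sess:
    test_saver.restore(sess, "/tmp/shared_model.ckpt")
    # ... run the large train/test evaluation feed_dicts here, on the CPU ...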
I implemented a relatively straightforward logistic regression function. I save all the necessary variables such as weights, bias, x, y, etc. and then I run the training algorithm...
# launch the graph
with tf.Session() as sess:
    sess.run(init)
    # training cycle
    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/FLAGS.batch_size)
        # loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(FLAGS.batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # compute average loss
            avg_cost += c / total_batch
        # display logs per epoch step
        if (epoch + 1) % FLAGS.display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    save_path = saver.save(sess, "/tmp/model.ckpt")
The model is saved and the prediction and accuracy of the trained model is displayed...
# list of booleans to determine the correct predictions
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
print(correct_prediction.eval({x:mnist.test.images, y:mnist.test.labels}))
# calculate total accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
This is all fine and dandy. However, now I want to be able to predict any given image using the trained model. For example, I want to feed it a picture of, say, a 7 and see what it predicts.
I have another module that restores the model. First we load the variables...
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
# initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")
This is good. Now I want to compare one image to the model and get a prediction. In this example, I take the first image from the test dataset mnist.test.images[0] and I attempt to compare it to the model.
classification = sess.run(tf.argmax(pred, 1), feed_dict={x: mnist.test.images[0]})
print(classification)
I know this will not work. I get the error...
ValueError: Cannot feed value of shape (784,) for Tensor 'Placeholder:0', which has shape '(?, 784)'
I am at a loss for ideas. This question is rather long; if a straightforward answer is not possible, some guidance on the steps I could take would be appreciated.
Your input placeholder must be of size (?, 784), the question mark meaning variable size which is probably the batch size. You are feeding an input of size (784,) which does not work as the error message states.
In your case, during prediction time, the batch size is just 1, so the following should work:
import numpy as np
...
x_in = np.expand_dims(mnist.test.images[0], axis=0)
classification = sess.run(tf.argmax(pred, 1), feed_dict={x:x_in})
This assumes the input image is available as a numpy array; if it is already a tensor, the corresponding function is tf.expand_dims(..).
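As a quick usage note, an equivalent way to get the required (1, 784) shape with plain numpy is a reshape:
x_in = mnist.test.images[0].reshape(1, 784)
classification = sess.run(tf.argmax(pred, 1), feed_dict={x: x_in})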