Tensorflow: Run training phase on GPU and test phase on CPU - python

I want to run the training phase of my TensorFlow code on my GPU, and then, after storing the results, load the saved model and run its test phase on the CPU.
I have created this code (only part of it is included here for reference, because otherwise it would be huge; I know the rules ask for a fully functional example, and I apologise for that).
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn.python.ops import rnn_cell, rnn
# Import MNIST data http://yann.lecun.com/exdb/mnist/
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x_train = mnist.train.images
# Check that the dataset contains 55,000 rows and 784 columns
N,D = x_train.shape
tf.reset_default_graph()
sess = tf.InteractiveSession()
x = tf.placeholder("float", [None, n_steps,n_input])
y_true = tf.placeholder("float", [None, n_classes])
keep_prob = tf.placeholder(tf.float32,shape=[])
learning_rate = tf.placeholder(tf.float32,shape=[])
#[............Build the RNN graph model.............]
sess.run(tf.global_variables_initializer())
# Because I am using my GPU for the training, I avoid allocating the whole
# mnist.validation set at once (memory errors), so I fragment it into
# small batches (100)
x_validation_bin, y_validation_bin = mnist.validation.next_batch(batch_size)
x_validation_bin = binarize(x_validation_bin, threshold=0.1)
x_validation_bin = x_validation_bin.reshape((-1,n_steps,n_input))
for k in range(epochs):
    steps = 0
    for i in range(training_iters):
        # Stochastic descent
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        batch_x = binarize(batch_x, threshold=0.1)
        batch_x = batch_x.reshape((-1, n_steps, n_input))
        # Feed Python floats for the dropout and learning-rate placeholders
        # (value names assumed; the original snippet fed the placeholders themselves)
        sess.run(train_step, feed_dict={x: batch_x, y_true: batch_y,
                                        keep_prob: keep_prob_value,
                                        learning_rate: lr_value})
        if do_report_err == 1:
            if steps % display_step == 0:
                # Calculate batch accuracy
                acc = sess.run(accuracy, feed_dict={x: batch_x, y_true: batch_y, keep_prob: 1.0})
                # Calculate batch loss
                loss = sess.run(total_loss, feed_dict={x: batch_x, y_true: batch_y, keep_prob: 1.0})
                print("Iter " + str(i) + ", Minibatch Loss= " + "{:.6f}".format(loss) +
                      ", Training Accuracy = " + "{:.5f}".format(acc))
        steps += 1
    # Validation accuracy and cost
    validation_accuracy = sess.run(accuracy, feed_dict={x: x_validation_bin, y_true: y_validation_bin, keep_prob: 1.0})
    validation_cost = sess.run(total_loss, feed_dict={x: x_validation_bin, y_true: y_validation_bin, keep_prob: 1.0})
    validation_loss_array.append(validation_cost)
    validation_accuracy_array.append(validation_accuracy)
    saver.save(sess, savefilename)
    total_epochs = total_epochs + 1

np.savez(datasavefilename, epochs_saved=total_epochs, learning_rate_saved=lr_value,
         keep_prob_saved=best_keep_prob, validation_loss_array_saved=validation_loss_array,
         validation_accuracy_array_saved=validation_accuracy_array, modelsavefilename=savefilename)
After that, my model trains successfully and the relevant data are saved, so I want to load the file and run a final train and test pass through the model, but using my CPU this time. The reason is that the GPU can't handle the whole of mnist.train.images and mnist.train.labels at once.
So I manually select this part and run it:
with tf.device('/cpu:0'):
    # Initialise variables
    sess.run(tf.global_variables_initializer())
    # Accuracy and cost
    saver.restore(sess, savefilename)
    x_train_bin = binarize(mnist.train.images, threshold=0.1)
    x_train_bin = x_train_bin.reshape((-1, n_steps, n_input))
    final_train_accuracy = sess.run(accuracy, feed_dict={x: x_train_bin, y_true: mnist.train.labels, keep_prob: 1.0})
    final_train_cost = sess.run(total_loss, feed_dict={x: x_train_bin, y_true: mnist.train.labels, keep_prob: 1.0})
    x_test_bin = binarize(mnist.test.images, threshold=0.1)
    x_test_bin = x_test_bin.reshape((-1, n_steps, n_input))
    final_test_accuracy = sess.run(accuracy, feed_dict={x: x_test_bin, y_true: mnist.test.labels, keep_prob: 1.0})
    final_test_cost = sess.run(total_loss, feed_dict={x: x_test_bin, y_true: mnist.test.labels, keep_prob: 1.0})
But I get a GPU OOM (out-of-memory) error, which doesn't make sense to me since I think I have forced the program to rely on the CPU. I did not call sess.close() in the first (batched training) code, but I am not sure whether that is really the reason. I followed this post for the CPU part.
Any suggestions on how to run the last part on the CPU only?

with tf.device() statements apply only to graph building, not to execution, so wrapping sess.run calls in a device block is equivalent to having no device block at all.
To do what you want, you need to build separate training and test graphs that share variables.
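For instance, one minimal sketch of a workaround (not the only option; you could also set CUDA_VISIBLE_DEVICES before launching the process) is to run the evaluation in a fresh session that is configured to see no GPUs at all, rebuild the graph, and restore the checkpoint there. savefilename, binarize, n_steps and n_input are the names from the question:
import tensorflow as tf

# Hide all GPUs from this session; everything below then runs on the CPU
config = tf.ConfigProto(device_count={'GPU': 0})
with tf.Session(config=config) as sess:
    # ... rebuild the same graph here (x, y_true, keep_prob, accuracy,
    # total_loss, saver), exactly as in the training script ...
    saver.restore(sess, savefilename)
    x_test_bin = binarize(mnist.test.images, threshold=0.1).reshape((-1, n_steps, n_input))
    print(sess.run(accuracy, feed_dict={x: x_test_bin,
                                        y_true: mnist.test.labels,
                                        keep_prob: 1.0}))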

Related

How to plot different summary metrics on the same plot with Tensorboard?

I would like to be able to plot the training loss per batch and the average validation loss for the validation set on the same plot in Tensorboard. I ran into this issue when my validation set was too large to fit into memory, so it required batching and the use of the tf.metrics update ops.
This question could apply to any Tensorflow metrics you want to appear on the same graph in Tensorboard.
I am able to:
- plot these two graphs separately (see here)
- plot the validation-loss-per-validation-batch on the same graph as the training-loss-per-training-batch (this was OK when the validation set could be a single batch and I could reuse the training summary op train_summ below)
In the example code below, my issue stems from the fact that my validation summary tf.summary.scalar with name=loss gets renamed to loss_1 and is thus moved to a separate graph in Tensorboard. From what I can work out, Tensorboard plots summaries with the same name on the same graph, regardless of which folder they are written to. This is frustrating, as train_summ (name=loss) is only ever written to the train folder and valid_summ (name=loss) is only ever written to the valid folder, but it still gets renamed to loss_1.
The example code:
# View graphs with (Linux): $ tensorboard --logdir=/tmp/my_tf_model
import tensorflow as tf
import numpy as np
import os
import tempfile

def train_data_gen():
    yield np.random.normal(size=[3]), np.array([0.5, 0.5, 0.5])

def valid_data_gen():
    yield np.random.normal(size=[3]), np.array([0.8, 0.8, 0.8])

batch_size = 25
n_training_batches = 4
n_valid_batches = 2
n_epochs = 5
summary_loc = os.path.join(tempfile.gettempdir(), 'my_tf_model')
print("Summaries written to " + summary_loc)

# Dummy data
train_data = tf.data.Dataset.from_generator(train_data_gen, (tf.float32, tf.float32)).repeat().batch(batch_size)
valid_data = tf.data.Dataset.from_generator(valid_data_gen, (tf.float32, tf.float32)).repeat().batch(batch_size)
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle,
                                               train_data.output_types, train_data.output_shapes)
batch_x, batch_y = iterator.get_next()
train_iter = train_data.make_initializable_iterator()
valid_iter = valid_data.make_initializable_iterator()

# Some ops on the data
loss = tf.losses.mean_squared_error(batch_x, batch_y)
valid_loss, valid_loss_update = tf.metrics.mean(loss)

# Write to summaries
train_summ = tf.summary.scalar('loss', loss)
valid_summ = tf.summary.scalar('loss', valid_loss)  # <- will be renamed to "loss_1"

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_handle, valid_handle = sess.run([train_iter.string_handle(), valid_iter.string_handle()])
    sess.run([train_iter.initializer, valid_iter.initializer])
    # Summary writers
    writer_train = tf.summary.FileWriter(os.path.join(summary_loc, 'train'), sess.graph)
    writer_valid = tf.summary.FileWriter(os.path.join(summary_loc, 'valid'), sess.graph)
    global_step = 0  # implicit as no actual training
    for i in range(n_epochs):
        # "Training"
        for j in range(n_training_batches):
            global_step += 1
            summ = sess.run(train_summ, feed_dict={handle: train_handle})
            writer_train.add_summary(summary=summ, global_step=global_step)
        # "Validation"
        sess.run(tf.local_variables_initializer())
        for j in range(n_valid_batches):
            _, batch_summ = sess.run([valid_loss_update, train_summ], feed_dict={handle: valid_handle})
            # The following will plot the batch loss for the validation set on the loss plot with the training data:
            # writer_valid.add_summary(summary=batch_summ, global_step=global_step + j + 1)
        summ = sess.run(valid_summ)
        writer_valid.add_summary(summary=summ, global_step=global_step)  # <- I want this on the training loss graph
What I have tried
- Separate tf.summary.FileWriter objects (one for training, one for validation), as recommended by this issue and this question (I think what I'm after is alluded to in the comment of that question).
- The use of tf.summary.merge to merge all my training and validation/test metrics into overall summary ops; it does useful book-keeping but doesn't plot what I want on the same graph.
- Use of the tf.summary.scalar family attribute (loss still gets renamed to loss_1).
- (Complete hack solution) Using valid_loss, valid_loss_update = tf.metrics.mean(loss) on the training data and then running tf.local_variables_initializer() every training batch. This does give you the same summary op and thus puts things on the same graph, but is surely not how you're meant to do this? It also doesn't generalise to other metrics.
Context
Tensorflow 1.9.0
Tensorboard 1.9.0
Python 3.5.2
The Tensorboard custom_scalar plugin is the way to solve this problem.
Here's the same example again with a custom_scalar to plot the two losses (per training batch + averaged over all validation batches) on the same plot:
# View graphs with (Linux): $ tensorboard --logdir=/tmp/my_tf_model
import os
import tempfile
import tensorflow as tf
import numpy as np
from tensorboard import summary as summary_lib
from tensorboard.plugins.custom_scalar import layout_pb2

def train_data_gen():
    yield np.random.normal(size=[3]), np.array([0.5, 0.5, 0.5])

def valid_data_gen():
    yield np.random.normal(size=[3]), np.array([0.8, 0.8, 0.8])

batch_size = 25
n_training_batches = 4
n_valid_batches = 2
n_epochs = 5
summary_loc = os.path.join(tempfile.gettempdir(), 'my_tf_model')
print("Summaries written to " + summary_loc)

# Dummy data
train_data = tf.data.Dataset.from_generator(
    train_data_gen, (tf.float32, tf.float32)).repeat().batch(batch_size)
valid_data = tf.data.Dataset.from_generator(
    valid_data_gen, (tf.float32, tf.float32)).repeat().batch(batch_size)
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle, train_data.output_types,
                                               train_data.output_shapes)
batch_x, batch_y = iterator.get_next()
train_iter = train_data.make_initializable_iterator()
valid_iter = valid_data.make_initializable_iterator()

# Some ops on the data
loss = tf.losses.mean_squared_error(batch_x, batch_y)
valid_loss, valid_loss_update = tf.metrics.mean(loss)

with tf.name_scope('loss'):
    train_summ = summary_lib.scalar('training', loss)
    valid_summ = summary_lib.scalar('valid', valid_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_handle, valid_handle = sess.run([train_iter.string_handle(), valid_iter.string_handle()])
    sess.run([train_iter.initializer, valid_iter.initializer])
    writer_train = tf.summary.FileWriter(os.path.join(summary_loc, 'train'), sess.graph)
    writer_valid = tf.summary.FileWriter(os.path.join(summary_loc, 'valid'), sess.graph)

    layout_summary = summary_lib.custom_scalar_pb(
        layout_pb2.Layout(category=[
            layout_pb2.Category(
                title='losses',
                chart=[
                    layout_pb2.Chart(
                        title='losses',
                        multiline=layout_pb2.MultilineChartContent(tag=[
                            'loss/training', 'loss/valid'
                        ]))
                ])
        ]))
    writer_train.add_summary(layout_summary)

    global_step = 0
    for i in range(n_epochs):
        for j in range(n_training_batches):  # "Training"
            global_step += 1
            summ = sess.run(train_summ, feed_dict={handle: train_handle})
            writer_train.add_summary(summary=summ, global_step=global_step)
        sess.run(tf.local_variables_initializer())
        for j in range(n_valid_batches):  # "Validation"
            _, batch_summ = sess.run([valid_loss_update, train_summ], feed_dict={handle: valid_handle})
        summ = sess.run(valid_summ)
        writer_valid.add_summary(summary=summ, global_step=global_step)
Here's the resulting output in Tensorboard.

TensorFlow restore throwing "No Variable to save" error

I am working through some code to understand how to save and restore checkpoints in tensorflow. To do so, I implemented a simple neural network that works with MNIST digits and saved the .ckpt file like so:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf  # (this import was missing from the snippet)
import numpy as np
import math

learning_rate = 0.001
n_input = 784   # MNIST data input (img shape = 28*28)
n_classes = 10  # MNIST total classes 0-9

# Import MNIST data
mnist = input_data.read_data_sets('.', one_hot=True)

# Features and labels
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])

# Weights and biases
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))

# logits = xW + b
logits = tf.add(tf.matmul(features, weights), bias)

# Define loss and optimizer
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) \
    .minimize(cost)

# Calculate accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

save_file = './train_model.ckpt'
batch_size = 128
n_epochs = 100

saver = tf.train.Saver()

# Launch the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(n_epochs):
        total_batch = math.ceil(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_features, batch_labels = mnist.train.next_batch(batch_size)
            sess.run(
                optimizer,
                feed_dict={features: batch_features, labels: batch_labels})
        # Print status for every 10 epochs
        if epoch % 10 == 0:
            valid_accuracy = sess.run(
                accuracy,
                feed_dict={
                    features: mnist.validation.images,
                    labels: mnist.validation.labels})
            print('Epoch {:<3} - Validation Accuracy: {}'.format(
                epoch,
                valid_accuracy))
    # Save the model
    saver.save(sess, save_file)
    print('Trained Model Saved.')
This part works well, and I get the .ckpt file saved in the correct directory. The problem comes in when I try to restore the model in an attempt to work on it again. I use the following code to restore the model:
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'train_model.ckpt.meta')
    print('model restored')
and end up with the error: ValueError: No variables to save
I'm not too sure what the mistake here is. Any help is appreciated. Thanks in advance.
A Graph is different from a Session. A graph is the set of operations joining tensors, each of which is a symbolic representation of a set of values. A Session assigns specific values to the Variable tensors, and allows you to run operations in that graph.
The ckpt file saves variable values (i.e. those stored in the weights and biases) but not the graph itself.
The solution is simple: re-run the graph construction (everything before the Session), then start your session and load values from the ckpt file.
Alternatively, you can check out this guide for exporting and importing MetaGraphs.
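For reference, here is a minimal sketch of that MetaGraph route, assuming the .meta file written by the training script above sits next to the checkpoint; import_meta_graph rebuilds the graph for you and returns a Saver, so no construction code is needed:
import tensorflow as tf

# Rebuild the graph from the .meta file and get a Saver for it
saver = tf.train.import_meta_graph('./train_model.ckpt.meta')
with tf.Session() as sess:
    # restore() takes the checkpoint prefix, not the .meta file
    saver.restore(sess, './train_model.ckpt')
    print('model restored')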
You need to tell the Saver which Variables to restore; by default, a Saver collects all the Variables from the default graph.
So in your case, you should add the graph-construction code before saver = tf.train.Saver().
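A minimal sketch of that fix, reusing the variable definitions from the question so the Saver has something to restore into:
import tensorflow as tf

n_input, n_classes = 784, 10
# Re-run the graph construction first ...
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
logits = tf.add(tf.matmul(features, weights), bias)

# ... so the default graph now contains Variables for the Saver to collect
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './train_model.ckpt')  # checkpoint prefix, not the .meta file
    print('model restored')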

Restoring Tensorflow Model but results are incorrect

I am still trying to grasp how to restore a saved tensorflow graph from disk and feed dictionaries through to the model. I have looked at multiple sources but cannot troubleshoot this. The generic MLP code below (first snippet) saves files to disk; however, after restoring (second snippet), my accuracy returns a value of None. Any ideas what the reason for this may be?
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1

# Network parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

with tf.name_scope('placeholders'):
    # tf Graph input
    x = tf.placeholder("float", [None, n_input], name='x')
    y = tf.placeholder("float", [None, n_classes], name='y')

with tf.name_scope('Layer-1'):
    NN_weights_1 = tf.Variable(tf.random_normal([n_input, n_hidden_1], seed=1), name='NN_weights_1')
    NN_biases_1 = tf.Variable(tf.constant(0.0, shape=[n_hidden_1], name='Const'), name='NN_biases_1')
    func = tf.add(tf.matmul(x, NN_weights_1, name='matmul'), NN_biases_1, name='Addition')
    func_2 = tf.nn.relu(func)

with tf.name_scope('Layer-2'):
    NN_weights_2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], seed=2), name='NN_weights_2')
    NN_biases_2 = tf.Variable(tf.constant(0.0, shape=[n_hidden_2], name='Const'), name='NN_biases_2')
    func_3 = tf.add(tf.matmul(func_2, NN_weights_2, name='matmul'), NN_biases_2, name='Addition')
    func_4 = tf.nn.relu(func_3)

with tf.name_scope('Output'):
    NN_weights_3 = tf.Variable(tf.random_normal([n_hidden_2, n_classes], seed=3), name='NN_weights_3')
    NN_biases_3 = tf.Variable(tf.constant(0.0, shape=[n_classes], name='Const'), name='NN_biases_3')
    func_3 = tf.add(tf.matmul(func_4, NN_weights_3, name='matmul'), NN_biases_3, name='Addition')
    func_4 = tf.nn.sigmoid(func_3)

# Define loss and optimizer
with tf.name_scope('Operations_'):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=func_4, labels=y), name='cost')
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # Test model
    correct_prediction = tf.equal(tf.argmax(func_4, 1), tf.argmax(y, 1), name='correct_prediction')
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"), name='accuracy')

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.Saver()
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
    saver.save(sess, 'my_test_model', global_step=1000)
Restoring Model and passing dictionary for accuracy:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # (needed below; missing from the snippet)

sess = tf.Session()

# First let's load the meta graph and restore the weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess, "my_test_model-1000")
graph = tf.get_default_graph()
accuracy = graph.get_operation_by_name("Operations_/accuracy")

# Access saved Variables directly (prints the restored Layer-1 weights)
print(sess.run('Layer-1/NN_weights_1:0'))
print("Accuracy:", sess.run([accuracy], feed_dict={'placeholders/x:0': mnist.test.images,
                                                   'placeholders/y:0': mnist.test.labels}))
Change it to:
accuracy = graph.get_operation_by_name("Operations_/accuracy").outputs[0]
When you pass an Operation (rather than one of its output Tensors) to Session.run, Tensorflow runs it only for its side effects and returns None. See here for a detailed explanation: TensorFlow: eval restored graph
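An equivalent alternative (a sketch using the same restored graph and session as the question): fetch the output tensor by name directly. A tensor name is the producing operation's name plus an output index suffix such as ":0".
# Fetch the Tensor, not the Operation, so sess.run returns its value
accuracy = graph.get_tensor_by_name("Operations_/accuracy:0")
print("Accuracy:", sess.run(accuracy,
                            feed_dict={'placeholders/x:0': mnist.test.images,
                                       'placeholders/y:0': mnist.test.labels}))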

Does TensorFlow have cross validation implemented for its users?

I was thinking of trying to choose hyperparameters (like regularization, for example) using cross validation, or maybe training multiple initializations of a model and then choosing the one with the highest cross-validation accuracy. Implementing k-fold or CV is simple but tedious/annoying (especially if I am trying to train different models on different CPUs, GPUs or even different computers, etc.). I would expect a library like TensorFlow to have something like this implemented for its users so that we don't have to code the same thing 100 times. Thus, does TensorFlow have a library or something that can help me do cross validation?
As an update, it seems one could use scikit-learn or something else to do this. If this is the case, then if anyone can provide a simple example of NN training and cross validation with scikit-learn, it would be awesome! Not sure if this scales to multiple CPUs, GPUs, clusters etc. though.
As already discussed, tensorflow doesn't provide its own way to cross-validate a model. The recommended way is to use KFold. It's a bit tedious, but doable. Here's a complete example of cross-validating an MNIST model with tensorflow and KFold:
from sklearn.model_selection import KFold
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Parameters
learning_rate = 0.01
batch_size = 500

# TF graph
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
pred = tf.nn.softmax(tf.matmul(x, W) + b)
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()

mnist = input_data.read_data_sets("data/mnist-tf", one_hot=True)
train_x_all = mnist.train.images
train_y_all = mnist.train.labels
test_x = mnist.test.images
test_y = mnist.test.labels

def run_train(session, train_x, train_y):
    print("\nStart training")
    session.run(init)
    for epoch in range(10):
        total_batch = int(train_x.shape[0] / batch_size)
        for i in range(total_batch):
            batch_x = train_x[i * batch_size:(i + 1) * batch_size]
            batch_y = train_y[i * batch_size:(i + 1) * batch_size]
            _, c = session.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
            if i % 50 == 0:
                print("Epoch #%d step=%d cost=%f" % (epoch, i, c))

def cross_validate(session, split_size=5):
    results = []
    kf = KFold(n_splits=split_size)
    for train_idx, val_idx in kf.split(train_x_all, train_y_all):
        train_x = train_x_all[train_idx]
        train_y = train_y_all[train_idx]
        val_x = train_x_all[val_idx]
        val_y = train_y_all[val_idx]
        run_train(session, train_x, train_y)
        results.append(session.run(accuracy, feed_dict={x: val_x, y: val_y}))
    return results

with tf.Session() as session:
    result = cross_validate(session)
    print("Cross-validation result: %s" % result)
    print("Test accuracy: %f" % session.run(accuracy, feed_dict={x: test_x, y: test_y}))
As the dataset gets larger, cross validation gets more expensive. In deep learning we usually use large datasets, so you should be fine with simple training. Tensorflow doesn't have a built-in mechanism for CV because it is not usually used with neural networks. In neural networks, the performance of the network relies mainly on the dataset, the number of epochs and the learning rate.
I have used CV in sklearn. You can check the link:
https://github.com/hackmaster0110/Udacity-Data-Analyst-Nano-Degree-Projects/
In that repo, go to poi_id.py in "Identify fraud from enron data" (in the Project folder).
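For completeness, here is a minimal scikit-learn-only sketch of the same idea (a hypothetical example, not taken from that repo): cross_val_score runs the whole k-fold loop for you.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation of a simple classifier on the sklearn digits data
digits = load_digits()
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, digits.data, digits.target, cv=5)
print("CV accuracy: %0.3f (+/- %0.3f)" % (scores.mean(), scores.std() * 2))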

TensorFlow MNIST external image prediction not working

I am new to neural networks. I have gone through the TensorFlow MNIST ML Beginners tutorial (the basic tensorflow mnist tutorial) and am trying to get a prediction from an external image.
I have updated the mnist example provided by tensorflow. On top of that, I have added a few things:
1. Saving trained models locally.
2. Loading the saved models.
3. Preprocessing the image into 28 * 28.
I have attached the image for reference. The steps are:
1. While training the models, save them locally, so I can reuse them at any point of time.
2. Once training is done, load the models.
3. Create an external image via GIMP which contains any one value ranging from [0 - 9].
4. Use OpenCV to convert the image into a 28 * 28 image and reverse the bits as well.
5. Then try to predict.
I am able to train the models and save them properly, but I am getting predictions which are not right.
Find my code below:
1. TrainSimple.py
# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
from random import randint
from scipy import misc

# Start TensorFlow InteractiveSession
import tensorflow as tf
sess = tf.InteractiveSession()

# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# Variables
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())

# Predicted class and cost function
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
saver = tf.train.Saver()  # defaults to saving all variables

# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# Train the model
for i in range(40000):
    if (i + 1) == 40000:
        saver.save(sess, "/Users/xxxx/Desktop/TensorFlow/" + "/model.ckpt", global_step=i)
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

# Evaluate the model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
2. loadImageAndPredict.py
from random import randint
from scipy import misc
import numpy as np
import cv2

def preProcess(invert_file):
    print("preprocessing the image " + invert_file)
    image = cv2.imread(invert_file, 0)
    ret, image_thresh = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
    l, b = image.shape
    fr = lr = fc = lc = 0
    # Find the first/last rows and columns that contain any dark pixel,
    # so the digit can be cropped out of the surrounding background
    i = 0
    while len(set(image_thresh[i, :])) == 1:
        i += 1
    fr = i
    i = 0
    while len(set(image_thresh[-1 + i, :])) == 1:
        i -= 1
    lr = i + l
    j = 0
    while len(set(image_thresh[:, j])) == 1:
        j += 1
    fc = j
    j = 0
    while len(set(image_thresh[:, -1 + j])) == 1:
        j -= 1
    lc = j + b
    image_crop = image_thresh[fr:lr, fc:lc]
    image_padded = cv2.copyMakeBorder(image_crop, 5, 5, 5, 5, cv2.BORDER_CONSTANT, value=255)
    image_resized = cv2.resize(image_padded, (28, 28))
    image_resized = (255 - image_resized)  # invert: white digit on black background
    cv2.imwrite(invert_file, image_resized)

import tensorflow as tf
sess = tf.InteractiveSession()

# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# Variables
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Predicted class and cost function
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
saver = tf.train.Saver()  # defaults to saving all variables - in this case W and b

# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
flag_1 = 0

# create an array where we can store 1 picture
images = np.zeros((1, 784))
# and the correct values
correct_vals = np.zeros((1, 10))

preProcess("4_white.png")
gray = cv2.imread("4_white.png", 0)
flatten = gray.flatten() / 255.0
"""
we need to store the flattened image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
print(len(images[0]))

sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state("/Users/xxxx/Desktop/TensorFlow")
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
print('Neural Network predicted', my_classification[0], "for your digit")
I am not sure what mistake I have made.
Thinking that the simple model might not work, I also used this convolutional model to predict:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py
Even that does not predict properly :(
Some things to check:
1. Does your training loss go down?
2. Do you get high accuracy on the training dataset?
3. Do you get high accuracy on the validation dataset (a part of the training set set aside)?
4. Do you get high accuracy on your target dataset?
If your training loss does not go down (1.), then you are not learning, and need to try different hyper-parameters, such as learning rates.
If you have high (2.) and low (3.), you are overfitting, and need to train for a shorter time, or with a higher regularization penalty. If you have high (3.) and low (4.), your training set is not representative of your actual data. You need to make your training set more representative, or at least harder, for instance by adding distortions. A quick sketch of checks (2.) and (3.) is below.
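Here is one quick way to run checks (2.) and (3.) against the graph from the question (a sketch; accuracy, x, y_ and mnist are the names defined in TrainSimple.py above, and the InteractiveSession is assumed to still be open):
# Compare training vs. validation accuracy to spot overfitting
train_acc = accuracy.eval(feed_dict={x: mnist.train.images, y_: mnist.train.labels})
valid_acc = accuracy.eval(feed_dict={x: mnist.validation.images, y_: mnist.validation.labels})
print("train accuracy = %.4f, validation accuracy = %.4f" % (train_acc, valid_acc))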
