Neural Network with Tensorflow doesn't update weights/bias - python

Problem
I'm trying to classify some 64x64 images as a black-box exercise. The NN I have written doesn't change my weights. This is my first time writing something like this; the same code works just fine on MNIST letters input, but on this data it does not train like it should:
import tensorflow as tf
import numpy as np
path = ""
# x is a holder for the 64x64 image
x = tf.placeholder(tf.float32, shape=[None, 4096])
# y_ is a 1 element vector, containing the predicted probability of the label
y_ = tf.placeholder(tf.float32, [None, 1])
# define weights and balances
W = tf.Variable(tf.zeros([4096, 1]))
b = tf.Variable(tf.zeros([1]))
# define our model
y = tf.nn.softmax(tf.matmul(x, W) + b)
# loss is cross entropy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# each training step in gradient descent we want to minimize cross entropy
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
train_labels = np.reshape(np.genfromtxt(path + "train_labels.csv", delimiter=',', skip_header=1), (14999, 1))
train_data = np.genfromtxt(path + "train_samples.csv", delimiter=',', skip_header=1)
# perform 150 training steps, each using 100 training samples
for i in range(0, 15000, 100):
    sess.run(train_step, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]})
    if i % 500 == 0:
        print(sess.run(cross_entropy, feed_dict={x: train_data[i:i+100], y_: train_labels[i:i+100]}))
        print(sess.run(b), sess.run(W))
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.close()
How do I solve this problem?

The key to the problem is that the number of classes in your outputs y_ and y is 1. You should use one-hot labels when you use tf.nn.softmax_cross_entropy_with_logits for classification problems in TensorFlow. tf.nn.softmax_cross_entropy_with_logits first computes tf.nn.softmax, and when there is only one class the results are all the same. For example:
import tensorflow as tf
y = tf.constant([[1],[0],[1]],dtype=tf.float32)
y_ = tf.constant([[1],[2],[3]],dtype=tf.float32)
softmax_var = tf.nn.softmax(logits=y_)
cross_entropy = tf.multiply(y, tf.log(softmax_var))
errors = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
with tf.Session() as sess:
    print(sess.run(softmax_var))
    print(sess.run(cross_entropy))
    print(sess.run(errors))
[[1.]
[1.]
[1.]]
[[0.]
[0.]
[0.]]
[0. 0. 0.]
This means that no matter what your outputs are, your loss will be zero, so your weights and bias are never updated.
The solution is to change the number of classes for y_ and y.
Suppose your number of classes is n.
First approach: convert your labels to one-hot before feeding them, then use the following definitions (see the NumPy sketch after this code):
y_ = tf.placeholder(tf.float32, [None, n])
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
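A minimal NumPy sketch of that conversion (to_one_hot is a hypothetical helper; it assumes your labels are integer class indices in 0..n-1, like the train_labels array from the question):
import numpy as np
def to_one_hot(labels, n):
    # turn integer class indices into one-hot rows of length n
    labels = labels.astype(int).reshape(-1)              # shape (N,)
    one_hot = np.zeros((labels.size, n), dtype=np.float32)
    one_hot[np.arange(labels.size), labels] = 1.0        # set the class column to 1
    return one_hot
train_labels_one_hot = to_one_hot(train_labels, n)       # feed this as y_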
Second approach: convert the labels to one-hot inside the graph, after feeding them:
y_ = tf.placeholder(tf.int32, [None, 1])
y_ = tf.one_hot(tf.reshape(y_, [-1]), n)  # feed integer labels (tf.int32); reshaping to [None] makes tf.one_hot produce shape [None, n]
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
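Putting this together for the original question, a minimal sketch of the corrected graph (assuming n classes and one-hot labels; note that tf.nn.softmax_cross_entropy_with_logits expects the raw logits, not the softmax output):
x = tf.placeholder(tf.float32, shape=[None, 4096])
y_ = tf.placeholder(tf.float32, [None, n])
W = tf.Variable(tf.zeros([4096, n]))
b = tf.Variable(tf.zeros([n]))
logits = tf.matmul(x, W) + b
y = tf.nn.softmax(logits)  # predicted class probabilities
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)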

All your initial weights are zeros. When you initialize them that way, the NN doesn't learn well. You need to initialize the weights with random values.
See Why should weights of Neural Networks be initialized to random numbers?
"Why Not Set Weights to Zero?
We can use the same set of weights each time we train the network; for example, you could use the values of 0.0 for all weights.
In this case, the equations of the learning algorithm would fail to make any changes to the network weights, and the model will be stuck. It is important to note that the bias weight in each neuron is set to zero by default, not a small random value.
"
See
https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights/
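For example, a minimal sketch of replacing the zero initializers with small random values (shapes taken from the question; the stddev of 0.1 is an arbitrary choice):
W = tf.Variable(tf.truncated_normal([4096, 1], stddev=0.1))  # small random weights
b = tf.Variable(tf.constant(0.1, shape=[1]))                 # small constant bias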

Related

Implementation of a neural model on Tensor-flow

I am trying to implement a neural network model on TensorFlow but seem to be having problems with the shape of the placeholders. I'm new to TF, hence it could just be a simple misunderstanding. Here's my code and data sample:
_data=[[0.4,0.5,0.6,1],[0.7,0.8,0.9,0],....]
The data comprises arrays of 4 columns; the last column of each array is the label. I want to classify each array as label 0, label 1 or label 2.
import tensorflow as tf
import numpy as np
_data = datamatrix
X = tf.placeholder(tf.float32, [None, 3])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([3]))
init = tf.global_variables_initializer()
Y = tf.nn.softmax(tf.matmul(X, W) + b)
# placeholder for correct labels
Y_ = tf.placeholder(tf.float32, [None, 1])
# loss function
import time
start=time.time()
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))
# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)
sess = tf.Session()
sess.run(init)
for i in range(1000):
    # load batch of images and correct answers
    batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1] for x in _data[:2000]]
    train_data={X: batch_X, Y_: batch_Y}
    # train
    sess.run(train_step, feed_dict=train_data)
    # success ?
    a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I got the following error message after running my code:
ValueError: Cannot feed value of shape (2000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
My desired output is the performance of the model using cross-entropy and the accuracy value from the code line below:
a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I would also appreciate any suggestions on how to improve the model, or a model that is more suitable for my data.
The shapes of the placeholder Y_ (Placeholder_1:0) and the input data batch_Y are mismatched, as the error message states. Notice the 1-D vs 2-D array.
So you should either define 1-D place holder:
Y_ = tf.placeholder(tf.float32, [None])
or prepare 2-D data:
batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1:] for x in _data[:2000]]
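A quick NumPy check of the difference between the two label shapes (toy rows borrowed from the question):
import numpy as np
_data = [[0.4, 0.5, 0.6, 1], [0.7, 0.8, 0.9, 0]]
batch_Y_1d = np.array([x[-1] for x in _data])    # shape (2,), fits a [None] placeholder
batch_Y_2d = np.array([x[-1:] for x in _data])   # shape (2, 1), fits a [None, 1] placeholder
print(batch_Y_1d.shape, batch_Y_2d.shape)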

Training in batches but testing individual data item in Tensorflow?

I have trained a convolution neural network with batch size of 10.
However, when testing I want to predict the classification for each data item separately, not in batches, which gives this error:
Assign requires shapes of both tensors to match. lhs shape= [1,3] rhs shape= [10,3]
I understand 10 refers to batch_size and 3 is the number of classes that I am classifying into.
Can we not train using batches and test individually?
Update:
Training Phase:
batch_size=10
classes=3
#vlimit is some constant : same for training and testing phase
X = tf.placeholder(tf.float32, [batch_size,vlimit ], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size,classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
Testing Phase:
batch_size=1
classes=3
X = tf.placeholder(tf.float32, [batch_size,vlimit ], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size,classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
When you define your placeholder, use:
X = tf.placeholder(tf.float32, [None, vlimit ], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
...
instead for both your training and testing phase (actually, you shouldn't need to re-define these for the testing phase). Also define your bias as:
b = tf.Variable(tf.ones([classes]), name="bias")
Otherwise you are training a separate bias for each sample in your batch, which is not what you want.
TensorFlow should automatically unroll along the first dimension of your input and recognize that as the batch size, so for training you can feed it batches of 10, and for testing you can feed it individual samples (or batches of 100 or whatever).
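To see why a single per-class bias works for any batch size, here is a small NumPy sketch of the broadcasting (classes = 3 is assumed for illustration):
import numpy as np
logits_wo_bias = np.zeros((10, 3))   # a batch of 10 samples, 3 classes
b = np.ones(3)                       # one bias per class
print((logits_wo_bias + b).shape)    # (10, 3): the same bias is added to every row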
Absolutely. Placeholders are 'buckets' that get fed data from your inputs. The only thing they do is direct data into your model. They can act like 'infinite buckets' using the None trick - you can chuck as much (or as little) data into them as you want (depending on available resources obviously).
In training, try replacing batch_size with None for the Training placeholders:
X = tf.placeholder(tf.float32, [None, vlimit ], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
Then define everything else you have as before.
Then do some training ops, for example:
_, Tr_loss, Tr_acc = sess.run([optimizer, loss, accuracy], feed_dict={X: btc_x, Y: btc_y})
For testing, re-use these same placeholders (X,Y) and don't bother redefining the other variables.
All Tensorflow variables are static for a single Tensorflow graph definition. If you're restoring the model, then the placeholders still exist from when it was trained. As will the other variables e.g. w,b,logits,entropy & optimizer.
Then do some testing op, for example:
Ts_loss, Ts_acc = sess.run([loss, accuracy], feed_dict={X: test_x, Y: test_y})
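A minimal sketch of the None trick in isolation (vlimit = 5 is an arbitrary value for illustration): the same placeholder accepts a batch of ten rows and a single row.
import numpy as np
import tensorflow as tf
vlimit = 5
X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
doubled = X * 2.0
with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={X: np.ones((10, vlimit))}).shape)  # (10, 5)
    print(sess.run(doubled, feed_dict={X: np.ones((1, vlimit))}).shape)   # (1, 5)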

Simple Tensorflow Multilayer Neural Network Not Learning

I am trying to write a two layer neural network to train a class labeler. The input to the network is a 150-feature list of about 1000 examples; all features on all examples have been L2 normalized.
I only have two outputs, and they should be disjoint--I am just attempting to predict whether the example is a one or a zero.
My code is relatively simple; I am feeding the input data into the hidden layer, and then the hidden layer into the output. As I really just want to see this working in action, I am training on the entire data set with each step.
My code is below. Based on the other NN implementations I have referred to, I believe that the performance of this network should be improving over time. However, regardless of the number of epochs I set, I am getting back an accuracy of about ~20%. The accuracy does not change when the number of steps is changed, so I don't believe that my weights and biases are being updated.
Is there something obvious I am missing with my model? Thanks!
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
# generate data
np.random.seed(10)
inputs = np.random.normal(size=[1000,150]).astype('float32')*1.5
label = np.round(np.random.uniform(low=0,high=1,size=[1000,1])*0.8)
reverse_label = 1-label
labels = np.append(label,reverse_label,1)
# parameters
learn_rate = 0.01
epochs = 200
n_input = 150
n_hidden = 75
n_output = 2
# set weights/biases
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
b0 = tf.Variable(tf.truncated_normal([n_hidden]))
b1 = tf.Variable(tf.truncated_normal([n_output]))
w0 = tf.Variable(tf.truncated_normal([n_input,n_hidden]))
w1 = tf.Variable(tf.truncated_normal([n_hidden,n_output]))
# step function
def returnPred(x,w0,w1,b0,b1):
    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)
    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)
    return h #return the first response vector from the
y_ = returnPred(x,w0,w1,b0,b1) # predict operation
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y) # calculate loss between prediction and actual
model = tf.train.GradientDescentOptimizer(learning_rate=learn_rate).minimize(loss) # apply gradient descent based on loss
init = tf.global_variables_initializer()
tf.Session = sess
sess.run(init) #initialize graph
for step in range(0,epochs):
    sess.run(model,feed_dict={x: inputs, y: labels }) #train model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
I changed your optimizer to AdamOptimizer (in many cases it performs better than GradientDescentOptimizer).
I also played a bit with the parameters. In particular, I used a smaller std for your variable initialization, decreased the learning rate (as your loss was unstable and "jumped around"), and increased the number of epochs (as I noticed that your loss continued to decrease).
I also reduced the size of the hidden layer. It is harder to train networks with a large hidden layer when you don't have that much data.
Regarding your loss, it is better to apply tf.reduce_mean on it so that the loss is a scalar. In addition, following the answer of ml4294, I used softmax instead of sigmoid, so the loss looks like:
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_,labels=y))
The code below achieves accuracy of around 99.9% on the training data:
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
# generate data
np.random.seed(10)
inputs = np.random.normal(size=[1000,150]).astype('float32')*1.5
label = np.round(np.random.uniform(low=0,high=1,size=[1000,1])*0.8)
reverse_label = 1-label
labels = np.append(label,reverse_label,1)
# parameters
learn_rate = 0.002
epochs = 400
n_input = 150
n_hidden = 60
n_output = 2
# set weights/biases
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
b0 = tf.Variable(tf.truncated_normal([n_hidden],stddev=0.2,seed=0))
b1 = tf.Variable(tf.truncated_normal([n_output],stddev=0.2,seed=0))
w0 = tf.Variable(tf.truncated_normal([n_input,n_hidden],stddev=0.2,seed=0))
w1 = tf.Variable(tf.truncated_normal([n_hidden,n_output],stddev=0.2,seed=0))
# step function
def returnPred(x,w0,w1,b0,b1):
    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)
    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)
    return h #return the first response vector from the
y_ = returnPred(x,w0,w1,b0,b1) # predict operation
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_,labels=y)) # calculate loss between prediction and actual
model = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(loss) # apply gradient descent based on loss
init = tf.global_variables_initializer()
tf.Session = sess
sess.run(init) #initialize graph
for step in range(0,epochs):
    sess.run([model,loss],feed_dict={x: inputs, y: labels }) #train model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
Just a suggestion in addition to the answer provided by Miriam Farber:
You use a multi-dimensional output label ([0., 1.]) for the classification. I suggest using the softmax cross entropy tf.nn.softmax_cross_entropy_with_logits() instead of the sigmoid cross entropy, since you assume the outputs to be disjoint (see softmax on Wikipedia). I achieved much faster convergence with this small modification.
This should also improve your performance once you decide to increase your output dimensionality from 2 to a higher number.
I guess you have some problem here:
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y) # calculate loss between prediction and actual
It should look something like this:
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y))
I didn't look at your code much, so if this doesn't work out you can check the Udacity deep learning course or forum; they have good samples of what you are trying to do.
GL

How to use a trained model on different inputs

I implemented a relatively straightforward logistic regression function. I save all the necessary variables such as weights, bias, x, y, etc. and then I run the training algorithm...
# launch the graph
with tf.Session() as sess:
    sess.run(init)
    # training cycle
    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/FLAGS.batch_size)
        # loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(FLAGS.batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # compute average loss
            avg_cost += c / total_batch
        # display logs per epoch step
        if (epoch + 1) % FLAGS.display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    save_path = saver.save(sess, "/tmp/model.ckpt")
The model is saved, and the prediction and accuracy of the trained model are displayed...
# list of booleans to determine the correct predictions
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
print(correct_prediction.eval({x:mnist.test.images, y:mnist.test.labels}))
# calculate total accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
This is all fine and dandy. However, now I want to be able to predict any given image using the trained model. For example, I want to feed it a picture of, say, a 7 and see what it predicts.
I have another module that restores the model. First we load the variables...
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
# initializing the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")
This is good. Now I want to compare one image to the model and get a prediction. In this example, I take the first image from the test dataset mnist.test.images[0] and I attempt to compare it to the model.
classification = sess.run(tf.argmax(pred, 1), feed_dict={x: mnist.test.images[0]})
print(classification)
I know this will not work. I get the error...
ValueError: Cannot feed value of shape (784,) for Tensor 'Placeholder:0', which has shape '(?, 784)'
I am at a loss for ideas. This question is rather long; if a straightforward answer is not possible, some guidance as to the steps I may take would be appreciated.
Your input placeholder must be of size (?, 784), the question mark meaning variable size which is probably the batch size. You are feeding an input of size (784,) which does not work as the error message states.
In your case, during prediction time, the batch size is just 1, so the following should work:
import numpy as np
...
x_in = np.expand_dims(mnist.test.images[0], axis=0)
classification = sess.run(tf.argmax(pred, 1), feed_dict={x:x_in})
This assumes the input image is available as a NumPy array. If it is already a tensor, the corresponding function is tf.expand_dims(..).
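An equivalent alternative, shown here as a sketch, is to slice with a range so that the leading batch dimension is preserved:
# mnist.test.images[0:1] has shape (1, 784), so it matches the (?, 784) placeholder
classification = sess.run(tf.argmax(pred, 1), feed_dict={x: mnist.test.images[0:1]})
print(classification)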

Tensor Flow MNIST external image prediction not working

I am new to neural networks. I have gone through the TensorFlow MNIST ML Beginners guide
and used the basic TensorFlow MNIST tutorial,
and I am trying to get a prediction using an external image.
I have updated the MNIST example provided by TensorFlow.
On top of that I have added a few things:
1. Saving the trained models locally.
2. Loading the saved models.
3. Preprocessing the image into 28 * 28.
I have attached the image for reference. The steps are:
1. While training the models, save them locally, so I can reuse them at any point of time.
2. Once training is done, load the models.
3. Create an external image via GIMP which contains any one value ranging from [0 - 9].
4. Use OpenCV to convert the image into a 28 * 28 image and reverse the bits as well.
5. Then try to predict.
I am able to train the models and save them properly.
The predictions I am getting are not right.
Find my code below.
1. TrainSimple.py
# Load MNIST Data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
from random import randint
from scipy import misc
# Start TensorFlow InteractiveSession
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
saver = tf.train.Saver() # defaults to saving all variables
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Train the Model
for i in range(40000):
    if (i + 1) == 40000:
        saver.save(sess, "/Users/xxxx/Desktop/TensorFlow/"+"/model.ckpt", global_step=i)
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
# Evaluate the Model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
2. loadImageAndPredict.py
from random import randint
from scipy import misc
import numpy as np
import cv2
def preProcess(invert_file):
print "preprocessing the images" + invert_file
image=cv2.imread(invert_file,0)
ret,image_thresh = cv2.threshold(image,127,255,cv2.THRESH_BINARY)
l,b=image.shape
fr=0
lr=0
fc=0
lc=0
i=0
while len(set(image_thresh[i,]))==1:
i+=1
fr=i
i=0
while len(set(image_thresh[-1+i,]))==1:
i-=1
lr=i+l
j=0
while len(set(image_thresh[0:,j]))==1:
j+=1
fc=j
j=0
while len(set(image_thresh[0:,-1+j]))==1:
j-=1
lc=j+b
image_crop=image_thresh[fr:lr,fc:lc]
image_padded= cv2.copyMakeBorder(image_crop,5,5,5,5,cv2.BORDER_CONSTANT,value=255)
image_resized = cv2.resize(image_padded, (28, 28))
image_resized = (255-image_resized)
cv2.imwrite(invert_file, image_resized)
import tensorflow as tf
sess = tf.InteractiveSession()
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# # Variables
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
# Predicted Class and Cost Function
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
saver = tf.train.Saver() # defaults to saving all variables - in this case w and b
# Train the Model
# GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
flag_1 = 0
# create an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
preProcess("4_white.png")
gray = cv2.imread("4_white.png", 0)
flatten = gray.flatten() / 255.0
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
# print images[0]
print len(images[0])
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state("/Users/xxxx/Desktop/TensorFlow")
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
print 'Neural Network predicted', my_classification[0], "for your digit"
I am not sure what mistake I have made.
Thinking that the simple model might not work, I have used this convolutional model code to predict:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py
Even that does not predict properly :(
Some things to check:
Does your training loss go down?
Do you get high accuracy on the training dataset?
Do you get high accuracy on the validation dataset (part of the training set set aside)?
Do you get high accuracy on your target dataset?
If your training loss does not go down (1.), you are not learning, and need to try different hyper-parameters, such as learning rates.
If you have high (2.) and low (3.), you are overfitting, and need to train for less long or use a higher regularization penalty. If you have high (3.) and low (4.), your training set is not representative of your actual set; you need to make your training set more representative, or at least harder, for instance by adding distortions.
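A minimal sketch of checking items 2 and 3, assuming the accuracy op and the MNIST splits from the question's training script are available in the running session:
train_acc = accuracy.eval(feed_dict={x: mnist.train.images, y_: mnist.train.labels})
valid_acc = accuracy.eval(feed_dict={x: mnist.validation.images, y_: mnist.validation.labels})
print("train accuracy:", train_acc, "validation accuracy:", valid_acc)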
