Loading a TensorFlow model in a different session - python

I'm a bit new to all this, so could you please help me? I tried to find the answer to this question but found nothing.
I'm trying to load a TensorFlow model in Python in a separate function, so I can use the model in a loop without having to load it in every iteration of the for loop.
This is my code now:
def load_network():
    prediction = neural_network_model(x)
    return prediction

def use_neural_network(data, prediction):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver = tf.train.import_meta_graph(model_name + '.meta')
        saver.restore(sess, model_name)
        pred = sess.run(prediction, feed_dict={x: data})
        pred = np.asarray(pred)
        return pred

if __name__ == '__main__':
    result = []
    Load = start_network()
    for i in data:
        result.append(use_neural_network(i, Load))
And I would like to get something like this:
def load_network():
    prediction = neural_network_model(x)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver = tf.train.import_meta_graph(model_name + '.meta')
        saver.restore(sess, model_name)
    return prediction

def use_neural_network(data, prediction):
    with tf.Session() as sess:
        pred = sess.run(prediction, feed_dict={x: data})
        pred = np.asarray(pred)
        return pred

if __name__ == '__main__':
    result = []
    Load = start_network()
    for i in data:
        result.append(use_neural_network(i, Load))

Generally, what you're trying to achieve is easily doable, and you're on the right track. In the main block you have start_network() instead of load_network(), as defined in your first line. I'd also recommend against using Load as a variable name, but that should not be a problem. Also, the TensorFlow session (sess in your code) should either be a global variable, or you should create it in the main block or in the load_network() function and then pass it on to the use_neural_network() function. The way it's currently written, the two sess variables in the two functions are local and therefore refer to different sessions.
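A minimal sketch of that restructuring, keeping the names from your question. It assumes neural_network_model(x) rebuilds exactly the graph stored in the checkpoint, so a plain tf.train.Saver() can restore into it and import_meta_graph is not needed:
def load_network():
    prediction = neural_network_model(x)
    sess = tf.Session()
    saver = tf.train.Saver()
    saver.restore(sess, model_name)  # restore replaces variable initialization
    return sess, prediction

def use_neural_network(sess, prediction, data):
    return np.asarray(sess.run(prediction, feed_dict={x: data}))

if __name__ == '__main__':
    sess, prediction = load_network()
    result = [use_neural_network(sess, prediction, i) for i in data]
    sess.close()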
If you want to avoid having to call the neural_network_model(x) function, that is, building the model in code at startup, you might want to freeze the model and load it that way, with the architecture embedded as well. It's easiest to follow a guide on that, like this one.
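As a rough sketch of what loading a frozen graph looks like in TF 1.x ('frozen_model.pb' and the tensor names 'x:0' / 'prediction:0' are assumptions here; use the names from your own export):
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    # import the frozen graph; name='' keeps the original tensor names
    tf.import_graph_def(graph_def, name='')

x = graph.get_tensor_by_name('x:0')
prediction = graph.get_tensor_by_name('prediction:0')
with tf.Session(graph=graph) as sess:
    pred = sess.run(prediction, feed_dict={x: data})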

Related

How to plot validation and training loss in the same figure in TensorBoard

I'm working with a TensorFlow object detection model, with a config file similar to this tensorflow/research/object_detection/samples/configs/ file. However, when I plot the results in TensorBoard, I only see one graph for the loss/precision etc. I want to display both the training and validation loss in order to better evaluate the results, but I am relatively new to working with TensorFlow and thus need some guidance as to where/how this has to be written.
I've looked into tensorboard/tensorboard/plugins/custom_scalar/custom_scalar_demo.py as well as this stackoverflow post; however, I am still a bit confused as to where this functionality should be written.
Please refer to the sample code below to plot both validation and training loss. The trick is to write the same scalar summary tag with two FileWriter instances pointing at different subdirectories; TensorBoard then overlays both curves in one chart:
import os
import tqdm
import tensorflow as tf

def tb_test():
    sess = tf.Session()
    x = tf.placeholder(dtype=tf.float32)
    summary = tf.summary.scalar('Values', x)
    merged = tf.summary.merge_all()
    sess.run(tf.global_variables_initializer())
    writer_1 = tf.summary.FileWriter(os.path.join('tb_summary', 'train_loss'))
    writer_2 = tf.summary.FileWriter(os.path.join('tb_summary', 'validation_loss'))
    for i in tqdm.tqdm(range(200)):
        # train
        summary_1 = sess.run(merged, feed_dict={x: i - 10})
        writer_1.add_summary(summary_1, i)
        # eval
        summary_2 = sess.run(merged, feed_dict={x: i + 10})
        writer_2.add_summary(summary_2, i)
    writer_1.close()
    writer_2.close()

if __name__ == '__main__':
    tb_test()
%load_ext tensorboard
%tensorboard --logdir=tb_summary/

How to replace the input of a saved graph, e.g. a placeholder, with a Dataset iterator?

I have a saved TensorFlow graph that consumes input through a placeholder with a feed_dict param.
sess.run(my_tensor, feed_dict={input_image: image})
Because feeding data with a Dataset iterator is more efficient, I want to load the saved graph, replace the input_image placeholder with an iterator, and run it. How can I do that? Is there a better way to do it? An answer with a code example would be highly appreciated.
You can achieve that by serializing your graph and re-importing it using tf.import_graph_def, which has an input_map argument used to plug in inputs at the desired places.
To do that you need to know at least the names of the inputs you replace and of the outputs you wish to execute (x and y respectively in my examples).
import tensorflow as tf

# restore graph (built from scratch here for the example)
x = tf.placeholder(tf.int64, shape=(), name='x')
y = tf.square(x, name='y')

# just for display -- you don't need to create a Session for serialization
with tf.Session() as sess:
    print("with placeholder:")
    for i in range(10):
        print(sess.run(y, {x: i}))

# serialize the graph
graph_def = tf.get_default_graph().as_graph_def()
tf.reset_default_graph()

# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()

# plug in new pipeline
[y] = tf.import_graph_def(graph_def, input_map={'x:0': batch}, return_elements=['y:0'])

# enjoy Dataset inputs!
with tf.Session() as sess:
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass
Note that the placeholder node is still there, as I did not bother to parse graph_def to remove it -- you could remove it as an improvement, although I think it is also OK to leave it.
Depending on how you restore your graph, the input replacement may already be built into the loader, which makes things simpler (no need to go back to a GraphDef). For example, if you load your graph from a .meta file, you can use tf.train.import_meta_graph, which accepts the same input_map argument.
import tensorflow as tf

# build new pipeline
batch = tf.data.Dataset.range(10).make_one_shot_iterator().get_next()

# load your net and plug in new pipeline
# you need to know the name of the tensor where to plug in your input
restorer = tf.train.import_meta_graph(graph_filepath, input_map={'x:0': batch})
y = tf.get_default_graph().get_tensor_by_name('y:0')

# enjoy Dataset inputs!
with tf.Session() as sess:
    # not needed here, but in practice you would also need to restore weights
    # restorer.restore(sess, weights_filepath)
    print('with Dataset:')
    try:
        while True:
            print(sess.run(y))
    except tf.errors.OutOfRangeError:
        pass

TensorFlow: saver.restore not restoring

When I'm trying to restore a learned model, I have a problem:
The first time my program runs, it doesn't seem to load the variables; the second time I run it, the variables are loaded; the third time, I get a huge error on the saver.restore(sess, 'model.ckpt') line starting with "NotFoundError: Key beta2_power_2 not found in checkpoint".
Here is the beginning of my code:
with tf.Session() as sess:
    myModel = SoundCNN(8)  # classes
    tf.global_variables_initializer().run()
    saver = tf.train.Saver(tf.global_variables())
    saver.restore(sess, 'model.ckpt')
You can see the SoundCNN class here, in the model.py file of the GitHub project.
I'm new to TensorFlow and ML, and wanted to use awjuliani's project to learn to use TF for sound-oriented ML.
edit: here is the full code:
print("start")
bpm = 240
samplingRate = 44100
mypath = "instruments/drums/"
iterations = 1000
batchSize = 240

with tf.Session() as sess:
    myModel = SoundCNN(8)  # classes
    tf.global_variables_initializer().run()
    saver = tf.train.Saver(tf.global_variables())
    print("loading session ...")
    saver.restore(sess, 'model.ckpt')
    print("session loaded")
    print("processing audio ...")
    classes, trainX, trainYa, valX, valY, testX, testY = util.processAudio(bpm, samplingRate, mypath)
    print("audio processed")
    fullTrain = np.concatenate((trainX, trainYa), axis=1)
    quitFlag = False
    inputsize = fullTrain.shape[0] - 1  # 6607
    print("entering loop...")
    while not quitFlag:
        indexstr = input("Type the index (0< _ <" + str(inputsize) + ") of the sample to test then press enter.\nYou can press enter without text for random index.\nType q to quit.\n")
        if indexstr == "q" or indexstr == "Q":
            quitFlag = True
        else:
            if indexstr == "":
                index = randint(0, inputsize)
                print("Index : " + str(index))
            else:
                index = int(indexstr)
            tensors, labels_ = np.hsplit(fullTrain, [-1])
            labels = util.oneHotIt(labels_)
            tensor, label = tensors[index, :], labels[index]
            tensor = tensor.reshape(1, 1024)
            result = myModel.prediction.eval(session=sess, feed_dict={myModel.x: tensor, myModel.keep_prob: 1.0})
            print("Model found sound: n°" + str(result) + ".\nActual sound: n°" + str(np.argmax(label)) + ".\n")
Thanks!
edit 2: Okay, I tried with this code:
print("start")
bpm = 240
samplingRate = 44100
mypath = "instruments/drums/"
iterations = 1000
batchSize = 240

tf.reset_default_graph()
myModel = SoundCNN(8)
saver = tf.train.Saver()
with tf.Session() as sess:
    print("loading session ...")
    saver.restore(sess, 'model.ckpt')
    print("session loaded")
And the variables aren't loaded (bad predictions), but the strange thing is that I can make the code work by adding:
    myModel = SoundCNN(8)
    saver = tf.train.Saver()
    print("loading session ...")
    saver.restore(sess, 'model.ckpt')
    print("session loaded")
after the first saver.restore(sess, 'model.ckpt')
So I made the code work but it's a nasty ...
OK, so first of all, separate the training and testing of the model.
Run a conditional if statement using tf.train.checkpoint_exists and tf.train.latest_checkpoint, something like:
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(".")):
    test()
else:
    trainNetConv(iterations)
    test()
You might as well use only latest_checkpoint, as it returns None or a path depending on whether a checkpoint was found.
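For example (reusing test() and trainNetConv() from the snippet above):
ckpt = tf.train.latest_checkpoint(".")
if ckpt:
    test()
else:
    trainNetConv(iterations)
    test()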
Run tf.reset_default_graph() whenever you know you'll be loading a model, to clear any existing graphs. From what I experienced, it otherwise stacks copies of the graphs, which slows the runtime, and I guess it might lead to other problems, especially if you plan to do this multiple times during runtime.
Assuming you already have a trained model, you must first create it like you normally would, by calling SoundCNN with the same number of classes as the model that you wish to load. Make sure you create the EXACT same model, i.e. the same number of classes. In the code you provided, you create the model with 8 classes, but the number of classes of the model that is created in trainNetConv is determined by util.processAudio. It's worth checking that the number of classes is indeed 8 for any given directory of sound files the model is trained on.
The key difference when you load a model is that you don't initialize the variables, i.e. you do not construct the saver object with the global variables or run the global variables initializer.
All you have to do is the following (a minimal sketch follows this list):
Make sure to run tf.reset_default_graph().
Create the model by calling SoundCNN.
Create a saver object with no arguments.
Create a session like you do.
Call the restore function of the saver object with the path to the latest checkpoint, using tf.train.latest_checkpoint with the base directory of the model.
And you're done.
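Put together, the load-only path sketched above would look roughly like this (SoundCNN and the checkpoint directory "." come from the question):
tf.reset_default_graph()
myModel = SoundCNN(8)     # must match the architecture of the saved model
saver = tf.train.Saver()  # no arguments, and no variable initializer
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("."))
    # ... run predictions with myModel here ...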
Check my GitHub for complete examples of the training and testing phases. Make sure to start with the mnist example, since it is only one file and the simplest there.
Assuming you wish to define additional variables for your own use, say some variable Counter and an operator that increments Counter when a prediction is correct: these need to be created after you load the model using restore, and then you initialize only those additional variables. Again, I think my examples might help in this case.
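A hedged sketch of that, where counter and its increment op are hypothetical additions on top of the restored model (saver is the no-argument Saver from the steps above; creating counter after the Saver keeps it out of the checkpoint lookup):
counter = tf.Variable(0, name='counter')       # created after the Saver on purpose
increment_counter = tf.assign_add(counter, 1)  # e.g. run when a prediction is correct
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("."))
    # initialize only the new variables; the restored ones must not be re-initialized
    sess.run(tf.variables_initializer([counter]))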
If you have any more questions please ask, I'll try to help.

TensorFlow: how to change the dataset

I have a Dataset API doohickey which is part of my tensorflow graph. How do I swap it out when I want to use different data?
dataset = tf.data.Dataset.range(3)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

variable = tf.Variable(3, dtype=tf.int64)
model = variable * next_element

# pretend like this is me training my model, or something
with tf.Session() as sess:
    sess.run(variable.initializer)
    try:
        while True:
            print(sess.run(model))  # (0, 3, 6)
    except:
        pass

dataset = tf.data.Dataset.range(2)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

### HOW TO DO THIS THING?
with tf.Session() as sess:
    sess.run(variable.initializer)  # this would be a saver restore operation, normally...
    try:
        while True:
            print(sess.run(model))  # (0, 3)... hopefully
    except:
        pass
I do not believe this is possible. You are asking to change the computation graph itself, which is not allowed in TensorFlow. Rather than explain that myself, I find the accepted answer in this post to be particularly clear in explaining that point: Is it possible to modify an existing TensorFlow computation graph?
Now, that said, I think there is a fairly simple/clean way to accomplish what you seek. Essentially, you want to reset the graph and rebuild the Dataset part. Of course, you want to reuse the model part of the code, so just put that model in a class or function to allow reuse. A simple example built on your code:
# the part of the graph you want to reuse
def get_model(next_element):
    variable = tf.Variable(3, dtype=tf.int64)
    return variable * next_element

# the first graph you want to build
tf.reset_default_graph()

# the part of the graph you don't want to reuse
dataset = tf.data.Dataset.range(3)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

# reusable part
model = get_model(next_element)

# pretend like this is me training my model, or something
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    try:
        while True:
            print(sess.run(model))  # (0, 3, 6)
    except:
        pass

# now the second graph
tf.reset_default_graph()

# the part of the graph you don't want to reuse
dataset = tf.data.Dataset.range(2)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

# reusable part
model = get_model(next_element)

### HOW TO DO THIS THING?
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    try:
        while True:
            print(sess.run(model))  # (0, 3)... hopefully
    except:
        pass
Final note: you will also see some references here and there to tf.contrib.graph_editor (docs here). The docs specifically say that you can't accomplish exactly what you want with the graph_editor (see in that link: "Here is an example of what you cannot do"; but you can get pretty close). Even so, it's not good practice; they had good reason to make the graph append-only, and I think the method I suggest above is the cleaner way to accomplish what you seek.
One approach I would suggest, though it will make things slower, is to use a placeholder as the source of a tf.data.Dataset. You will then have something like the following:
train_data = tf.placeholder(dtype=tf.float32, shape=[None, None, 1])  # just an example
# then build the tf.data.Dataset from the placeholder
dataset = tf.data.Dataset.from_tensor_slices(train_data).shuffle(10000).batch(batch_size)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
Now, when running the graph within a session, you feed the data through the placeholder when you initialize the iterator, so you can feed whatever you like... (see the sketch below)
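A minimal usage sketch under those assumptions (my_train_array is a hypothetical NumPy array matching the placeholder's shape):
with tf.Session() as sess:
    # the data is bound when the iterator is initialized, not on every run
    sess.run(iterator.initializer, feed_dict={train_data: my_train_array})
    print(sess.run(next_element))  # first batch of the freshly fed data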
Hope this helps!!

TensorFlow: How to use a trained model in an application?

I have trained a TensorFlow model, and now I want to export the "function" to use it in my Python program. Is that possible, and if yes, how? Any help would be nice; I could not find much in the documentation. (I don't want to save a session!)
I have now stored the session as you suggested. I am loading it now like this:
f = open('batches/batch_9.pkl', 'rb')
input = pickle.load(f)
f.close()
sess = tf.Session()
saver = tf.train.Saver()
saver.restore(sess, 'trained_network.ckpt')
y_pred = []
sess.run(y_pred, feed_dict={x: input})
print(y_pred)
However, I get the error "no Variables to save" when I try to initialize the saver.
What I want to do is this: I am writing a bot for a board game, and the input is the situation on the board formatted into a tensor. Now I want to return a tensor which gives me the best position to play next, i.e. a tensor with 0 everywhere and a 1 at one position.
I don't know if there is any other way to do it, but you can use your model in another Python program by saving your session:
Your training code:
# build your model
sess = tf.Session()
# train your model
saver = tf.train.Saver()
saver.save(sess, 'model/model.ckpt')
In your application:
# build your model (same as training)
sess = tf.Session()
saver = tf.train.Saver()
saver.restore(sess, 'model/model.ckpt')
You can then evaluate any tensor in your model using a feed_dict. This obviously depends on your model. For example:
#evaluate tensor
sess.run(y_pred, feed_dict={x: input_data})
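If you would rather not rebuild the model in code (and to avoid the "no Variables to save" error from creating a Saver on an empty graph), one alternative sketch is to import the graph structure from the .meta file first. The tensor names 'x:0' and 'y_pred:0' and the 'trained_network.ckpt.meta' path are assumptions, so check the names in your own graph:
sess = tf.Session()
# import_meta_graph rebuilds the graph (including variables) from the .meta file
saver = tf.train.import_meta_graph('trained_network.ckpt.meta')
saver.restore(sess, 'trained_network.ckpt')
graph = tf.get_default_graph()
x = graph.get_tensor_by_name('x:0')
y_pred = graph.get_tensor_by_name('y_pred:0')
print(sess.run(y_pred, feed_dict={x: input_data}))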
