I have seen and tried two methods but could not understand what difference it makes. Here are the two methods I used:
Method 1:
saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + " Session Loaded for Testing")
Method 2:
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + " Session Loaded for Testing")
What I want to know is:
What is the difference between the above two methods?
Which is the best method to load the model?
Please let me know your suggestions on this.
I will try to be as concise as possible, so here are my 2 cents on the matter. I will comment on the important lines of your code to point out what I think.
# Importing the meta graph is the same as building the same graph from scratch:
# creating the same variables, the same placeholders, etc.
# Basically you are only importing the graph definition.
saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
sess = tf.Session()
# Absolutely no need to initialize the variables here. They will be initialized
# when you restore the learned variables.
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + " Session Loaded for Testing")
As for the second method:
# You can't create a saver object like this; you will get the error "No variables to save",
# which is true: you haven't created any variables yet. The workaround is:
# saver = tf.train.Saver(defer_build=True), and then after building the graph:
# ....Graph building code goes here....
# saver.build()
saver = tf.train.Saver()
sess = tf.Session()
# Absolutely no need to initialize the variables here. They will be initialized
# when you restore the learned variables.
sess.run(tf.global_variables_initializer())
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + " Session Loaded for Testing")
So there is nothing wrong with the first approach, but the second one is flat-out incorrect. Don't get me wrong, but I don't like either of them; that is just personal taste. What I do instead is the following:
# Have a class that creates the model and instantiate an object of that class
my_trained_model = MyModel()
# This is basically the same as what you are doing with
# saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
# Then, once I have the graph built, I create a saver object
saver = tf.train.Saver()
# Then I create a session
with tf.Session() as sess:
    # Restore the trained variables here
    saver.restore(sess, model_checkpoint_path)
    # Now I can do whatever I want with the my_trained_model object
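For concreteness, here is a minimal sketch of what such a MyModel class might look like; the layer sizes, names, and placeholders below are made up for illustration and are not from the question:

import tensorflow as tf

class MyModel:
    # Hypothetical model class: constructing it rebuilds the training graph.
    def __init__(self, input_dim=784, num_classes=10):
        self.inputs = tf.placeholder(tf.float32, [None, input_dim], name="inputs")
        hidden = tf.layers.dense(self.inputs, 128, activation=tf.nn.relu, name="hidden")
        self.logits = tf.layers.dense(hidden, num_classes, name="logits")
        self.predictions = tf.argmax(self.logits, axis=1, name="predictions")

my_trained_model = MyModel()
saver = tf.train.Saver()  # the graph now contains variables, so this works
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint(model_path))  # model_path as in the question
    # e.g. sess.run(my_trained_model.predictions,
    #               feed_dict={my_trained_model.inputs: test_batch})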
I hope that this will be helpful for you.
Related
My understanding of Sessions in TensorFlow still seems to be flawed even after reading the official documentation and this tutorial.
In particular, does tf.global_variables_initializer() initialize global variables with regard to a particular session, or for all the sessions in the program? Are there ways to "uninitialize" a variable in / during a session?
Can a tf.Variable be used in multiple sessions? The answer seems to be yes (e.g. the following code), but then are there good cases where we want multiple sessions in a program, instead of a single one?
#!/usr/bin/env python
import tensorflow as tf

def main():
    x = tf.constant(0.)
    with tf.Session() as sess:
        print(sess.run(x))
    with tf.Session() as sess:
        print(sess.run(x))

if __name__ == '__main__':
    main()
In particular, does tf.global_variables_initializer() initialize global variables with regard to a particular session, or for all the sessions in the program?
With regard to a particular session. Check this out:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
x = tf.Variable(tf.random.normal([1, 5]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    first_sess_out = sess.run(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    second_sess_out = sess.run(x)
np.testing.assert_array_equal(first_sess_out, second_sess_out)
The assertion fails, so it is per session.
Are there ways to "uninitialize" a variable in / during a session?
tf.reset_default_graph()
x = tf.Variable(tf.random.normal([1, 5]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    first_init_out = sess.run(x)
    sess.run(tf.global_variables_initializer())
    second_init_out = sess.run(x)
np.testing.assert_array_equal(first_init_out, second_init_out)
Apparently there is: after running tf.global_variables_initializer() again, the variables get reinitialized. Thus, the assertion fails.
Can a tf.Variable be used in multiple sessions? The answer seems to be yes (e.g. the following code), but then are there good cases where we want multiple sessions in a program, instead of a single one?
Yes, it can, as you can see in the examples above. Good cases for multiple sessions are when you work with more than one graph in the same program, or when you want to run the same graph several times with independent variable states, as sketched below.
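For example, a hedged sketch of one such case: two separate graphs, each evaluated in its own session (the constants here are arbitrary):

import tensorflow as tf

g1, g2 = tf.Graph(), tf.Graph()
with g1.as_default():
    a = tf.constant(1.0)
with g2.as_default():
    b = tf.constant(2.0)

# Each session is bound to exactly one graph, so two sessions are needed here.
with tf.Session(graph=g1) as sess:
    print(sess.run(a))
with tf.Session(graph=g2) as sess:
    print(sess.run(b))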
I have a module called neural.py
I define the placeholders in the module body:
import tensorflow as tf
tf_x = tf.placeholder(tf.float32, [None, length])
tf_y = tf.placeholder(tf.float32, [None, num_classes])
...
I save the checkpoint in a function train() after training:
def train():
    ...
    pred = tf.layers.dense(dropout, num_classes, tf.identity)
    ...
    cross_entropy = tf.losses.softmax_cross_entropy(tf_y, pred)
    ...
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        saver = tf.train.Saver(tf.trainable_variables())
        for ep in range(epochs):
            ... (training steps)...
    saver.save(sess, "checkpoints/cnn")
I want to also restore and run the network after training in the run() function of this module:
def run():
    # I have tried adding tf.reset_default_graph() here
    # I have also tried with tf.Graph().as_default() as g: and adding (graph=g) in tf.Session()
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "checkpoints/cnn")
        ... (run network etc)
It just doesn't work. It gives me either NotFoundError (see above for traceback): Key beta2_power not found in checkpoint, or ValueError: No variables to save if I add tf.reset_default_graph() under run(), as commented above.
However, if I put the exact same code for run() in a new module without train() and with tf.reset_default_graph() at the top, it works perfectly. How do I make it work in the same module?
Final snippet:
if __name__ == '__main__':
    print("Start training")
    train()
    print("Finished training. Generate prediction")
    run()
This might be a typo, but saver.save(sess, "checkpoints/cnn") should definitely be within the with tf.Session() as sess block; otherwise you're saving a closed session.
NotFoundError (see above for traceback): Key beta2_power not found in checkpoint
I think the problem is that part of your graph is defined only inside train(). The beta1_power and beta2_power are internal variables of the AdamOptimizer which, along with pred and softmax_cross_entropy, are not in the graph if train() is not invoked (e.g. commented out?). So one solution would be to make the whole graph accessible in both train and run, as sketched below.
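A rough sketch of that first option, assuming the placeholders from the question; build_graph() and the sizes used here are hypothetical, not from the original module:

import tensorflow as tf

length, num_classes = 784, 10  # hypothetical sizes, for illustration only

def build_graph():
    # Build everything once, including the optimizer (which owns beta1_power/beta2_power).
    tf_x = tf.placeholder(tf.float32, [None, length], name='tf_x')
    tf_y = tf.placeholder(tf.float32, [None, num_classes], name='tf_y')
    pred = tf.layers.dense(tf_x, num_classes, name='pred')
    loss = tf.losses.softmax_cross_entropy(tf_y, pred)
    train_op = tf.train.AdamOptimizer().minimize(loss)
    return tf_x, tf_y, pred, train_op

def train(graph_tensors):
    tf_x, tf_y, pred, train_op = graph_tensors
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training steps using sess.run(train_op, feed_dict=...) ...
        saver.save(sess, "checkpoints/cnn")

def run(graph_tensors):
    tf_x, tf_y, pred, train_op = graph_tensors
    # The graph already contains every variable in the checkpoint, so this Saver matches it.
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "checkpoints/cnn")
        # ... run the network, e.g. sess.run(pred, feed_dict={tf_x: test_batch}) ...

if __name__ == '__main__':
    tensors = build_graph()  # the graph is built once and shared by train() and run()
    train(tensors)
    run(tensors)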
Another solution is to separate them and use the restored graph in run, instead of the default one. Like this:
tf.reset_default_graph()
saver = tf.train.import_meta_graph('checkpoints/cnn.meta')
with tf.Session() as sess:
    saver.restore(sess, "checkpoints/cnn")
    print("Model restored.")
    tf_x = sess.graph.get_tensor_by_name('tf_x:0')
    ...
But you'll need to give names to all of your tensors (a good idea anyway) and then find them in the restored graph; you can't use the previously defined Python variables here. This method ensures that run() works with the saved model version and can easily be extracted into a separate script.
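For example, a hedged sketch of such a run() built purely on the restored graph; the explicit tf.identity naming and the 'pred' / 'tf_x' names are suggestions, not what the original code does:

import tensorflow as tf

# --- when building the graph (in train()), give deterministic names to what you need later:
# tf_x   = tf.placeholder(tf.float32, [None, length], name='tf_x')
# logits = tf.layers.dense(dropout, num_classes)
# pred   = tf.identity(logits, name='pred')   # explicit, stable name for the output

def run():
    tf.reset_default_graph()
    saver = tf.train.import_meta_graph('checkpoints/cnn.meta')
    with tf.Session() as sess:
        saver.restore(sess, "checkpoints/cnn")
        tf_x = sess.graph.get_tensor_by_name('tf_x:0')
        pred = sess.graph.get_tensor_by_name('pred:0')
        # output = sess.run(pred, feed_dict={tf_x: test_batch})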
I am trying to restore the saved model and do the testing.
However, I get the error Attempting to use uninitialized value. I've read some posts before; it seems I should not run the global initialization. But the error seems interesting.
My code is:
new_saver = tf.train.import_meta_graph("trained_model_epoch-1.meta")
sess = tf.Session()
new_saver.restore(sess, './trained_model_epoch-1')
print('Test')
run_test_model(sess, y_out, ...... split='Test', N=Ntest)
Have you tried using tf.train.Saver()?
building_graph_method()
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, save_path)
Of course, you would need to have saved your model first, using the saver:
saver.save(sess, save_path)
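Putting those two pieces together, a minimal hedged sketch; building_graph_method and save_path are placeholders, and the variable inside is made up:

import tensorflow as tf

def building_graph_method():
    # Hypothetical stand-in for the real graph-building code
    return tf.Variable(tf.zeros([2, 2]), name="w")

save_path = "/tmp/model.ckpt"  # made-up path

# --- training script: build the graph, train, then save with the same session ---
w = building_graph_method()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps ...
    saver.save(sess, save_path)

# --- test script: rebuild the same graph, then restore (no initializer needed) ---
tf.reset_default_graph()
w = building_graph_method()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, save_path)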
I believe you are accessing your tensors/operations directly (if they are defined in the same script), rather than pulling them from the restored graph:
sess = tf.Session()
new_saver.restore(sess, './trained_model_epoch-1')
graph = sess.graph
w1 = graph.get_tensor_by_name("w1:0") # this tensor is initialized
w2 = graph.get_tensor_by_name("w2:0") # this tensor is initialized too
I want to ask whether the syntax for saving and loading a file in Python and TensorFlow is the same or different.
How can I reload results like this?
np.save("Result/"+FLAGS.result_file,W)
If you are loading numpy files you can use np.load() to get the results back into a numpy array.
x = np.load("Result/"+FLAGS.result_file)
If you want to save a TensorFlow graph, you need to create a saver object after you create your tensors.
x = tf.Variable(..., name="x_saved")
init_op = tf.global_variables_initializer()
...
saver = tf.train.Saver()
Then use the saver object to save the graph to file.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    # ...
    # Save the variables to disk.
    save_path = saver.save(sess, "Result/" + FLAGS.result_file)
When you want to load the model, you need to create the same-sized tensors and create a saver object. If you load all your tensors from file, you don't need to call the initializer.
saver = tf.train.Saver()
and restore the session using that saver.
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "Result/" + FLAGS.result_file)
This will load the tensors with the values you saved earlier. If you want to save and load specific tensors only, you can initialize the saver object with a dictionary mapping checkpoint names to those tensors.
x_loaded = tf.Variable(..., name="x")
saver = tf.train.Saver({"x_saved": x_loaded})  # restores the checkpoint entry "x_saved" into x_loaded
Bear in mind, if you load some tensors and not the whole graph, you still need to initialize all the other variables, for example as sketched below.
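A hedged sketch of such a partial restore, with made-up shapes and path: only x_loaded comes from the checkpoint, while y must still be initialized explicitly:

import tensorflow as tf

save_path = "Result/model"  # made-up path; in the question this is "Result/" + FLAGS.result_file

x_loaded = tf.Variable(tf.zeros([3]), name="x")
y = tf.Variable(tf.zeros([3]), name="y")          # not stored in the checkpoint

restorer = tf.train.Saver({"x_saved": x_loaded})  # restore only the "x_saved" entry
with tf.Session() as sess:
    restorer.restore(sess, save_path)
    sess.run(tf.variables_initializer([y]))       # initialize everything that was not restored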
I have finished running a big model in TensorFlow (Python), but I have not saved it inside the session. Now that the training is over, I want to save the variables. I am doing the following:
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
    save_path = saver.save(sess, "86_model.ckpt")
    print("Model saved in file: %s" % save_path)
This returns: ValueError: No variables to save. According to their website, what is missing is initialize_all_variables(). The documentation says little about what exactly that does. The word "initialize" scares me; I do not want to reset all my trained values. Is there any way to save my model without re-running it?
From the TensorFlow documentation, it seems the "session" is the thing that holds the values of the trained model. So presumably somewhere you called sess.run() to train your model; what you want to do is call saver.save(sess, ...) using THAT session, not a new one you create afterwards.
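In other words, something like this hedged sketch, where the variable and the training loop are stand-ins:

import tensorflow as tf

w = tf.Variable(tf.zeros([2]), name="w")  # stand-in for the real model variables
saver = tf.train.Saver()                  # create the saver after the variables exist
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... the sess.run(...) training steps go here ...
    save_path = saver.save(sess, "86_model.ckpt")  # same session that did the training
    print("Model saved in file: %s" % save_path)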
I believe it's because you are not initializing all of your variables in the saver. This should work:
import os

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    saver = tf.train.Saver(tf.all_variables())
    # ------- everything your session does -------------
    checkpoint_path = os.path.join(save_dir, 'model.ckpt')
    saver.save(sess, checkpoint_path, global_step=your_global_step)
How about using skflow? With skflow (now integrated into TensorFlow) you can specify the parameter model_dir on your constructor, and that will automatically save your model while training (it saves checkpoints, so if something goes wrong during training you can restart from the last checkpoint).
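A minimal, hedged sketch of the same idea using the tf.estimator API (which absorbed skflow); the feature name, sizes, and model_dir below are made up:

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3,
    model_dir="/tmp/my_model")  # checkpoints are written here automatically

# classifier.train(input_fn=my_input_fn, steps=1000)  # resumes from the latest checkpoint if one exists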