In my TensorFlow model, the output of one network is a tensor. I need to feed this value as input to another, pretrained network. I'm loading the pretrained network as follows:
input_b_ph = tf.placeholder(shape=(), dtype=tf.float32, name='input_b_ph')
sess1 = tf.Session()
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta', input_map={'input/Identity:0': input_b_ph})
graph = tf.get_default_graph()
saver.restore(sess1, model_path.as_posix())
output_b = graph.get_tensor_by_name('output/Identity:0')
I need to feed a tensor to feature_input. How can I achieve this?
Edit 1: Adding end-to-end details:
I have a network A defined in TensorFlow which takes input input_a and produces output output_a. I need to feed this to a pretrained ResNet50 model. For this I used ResNet50 from tf.keras:
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
resnet_model = ResNet50(include_top=False, pooling='avg')
preprocessed_input = preprocess_input(tf.cast(output_a, tf.float32))
output_resnet = resnet_model([preprocessed_input])
The output of ResNet is output_resnet. I need to feed this to another pretrained network, say network B. B is actually written in Keras; I modified it to use tf.keras. I then save the trained model as below:
import tensorflow as tf
from tensorflow import keras

curr_sess = keras.backend.get_session()
with tf.name_scope('input'):
    _ = tf.identity(quality_net.model.input)
with tf.name_scope('output'):
    __ = tf.identity(quality_net.model.output)
saver = tf.train.Saver()
saver.save(curr_sess, output_filepath.as_posix())
I have access to this network B and tried to save the model in h5 format, but it gave an error that the model cannot be pickled because of a thread lock. Searching the internet, I learned this error occurs when there are Lambda layers in the network. So I resorted to saving the model in TensorFlow format, as three files: meta, weights and index. (Any solution using the h5 format is also acceptable.)
There is a caveat here: the structure of network B can keep changing, and it comes from a different project, so I can't hardcode the architecture of B; I have to load it from the saved model. My problem is how to restore this pretrained model and pass output_resnet as input to network B. The output of network B, i.e. output_b, is the loss function used to train my original network A. Currently I'm able to restore network B as follows:
input_b_ph = tf.placeholder(shape=(), dtype=tf.float32, name='input_b_ph')
sess1 = tf.Session()
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta', input_map={'input/Identity:0': input_b_ph})
graph = tf.get_default_graph()
saver.restore(sess1, model_path.as_posix())
output_b = graph.get_tensor_by_name('output/Identity:0')
I have the output from ResNet as output_resnet, which is a tensor. I need a way to feed this into input_b_ph. How can I achieve that? Any alternate solutions are also acceptable.
Mentioning the answer in this (Answer) section (although it is present in the comments section), for the benefit of the community.
A placeholder is not required in this case. Just passing output_resnet to input_map resolves the issue.
Replacing the code,
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta',
input_map={'input/Identity:0': input_b_ph})
with
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta',
input_map={'input/Identity:0': output_resnet})
has resolved the issue.
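For completeness, here is a minimal sketch of the resulting flow. The scalar reduction and the network_a_vars list are hypothetical placeholders; everything else follows the question's code:

import tensorflow as tf

# output_b is now a tensor computed from output_resnet, so gradients can
# flow back through ResNet50 into network A when building the loss.
saver = tf.train.import_meta_graph(model_path.as_posix() + '.meta',
                                   input_map={'input/Identity:0': output_resnet})
graph = tf.get_default_graph()
sess1 = tf.Session()
saver.restore(sess1, model_path.as_posix())
output_b = graph.get_tensor_by_name('output/Identity:0')

loss = tf.reduce_mean(output_b)  # hypothetical reduction to a scalar loss
train_op = tf.train.AdamOptimizer(1e-4).minimize(
    loss, var_list=network_a_vars)  # network_a_vars: A's variables (hypothetical)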
Model.summary() gives me this output:
Now how can I inspect the layers of sequential_1 and sequential_3? I want the whole model summary, but it shows two Sequential entries, which means two models are combined; how can I get the summary of both models? I only have the model.h5 file, nothing else.
Models saved in .h5 format include everything about the model. To inspect the layer summaries inside a Model nested in another Model, as in your case, you can extract the layers and then call the summary() method on each of them, i.e.:
layer_summary = [layer.summary() for layer in loaded_model.layers]
Here is the complete code I used to reproduce your scenario:
import tensorflow as tf
print('Running Tensorflow version {}'.format(tf.__version__)) # Tensorflow 2.1.0
model_path = '/content/keras_model.h5'
loaded_model = tf.keras.models.load_model(model_path)
loaded_model.summary()
inp = loaded_model.input
layer_summary = [layer.summary() for layer in loaded_model.layers]
I've also used the model.h5 file you uploaded.
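One caveat worth hedging: summary() is only defined on Model instances (Sequential is a Model subclass), so if a loaded model also contains plain layers, filtering by type avoids AttributeErrors. A minimal sketch, reusing the keras_model.h5 path from above:

import tensorflow as tf

# Only nested Model instances (Sequential subclasses Model) define
# summary(); plain layers do not, so filter by type to stay safe.
loaded_model = tf.keras.models.load_model('/content/keras_model.h5')
for layer in loaded_model.layers:
    if isinstance(layer, tf.keras.Model):
        layer.summary()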
I am fine-tuning an Inception model via TensorFlow with the setup below, and am feeding batches via the tf.data Dataset API. However, every time I attempt to train this model (before successfully retrieving any batches), I get an OutOfRangeError claiming that the iterator is exhausted:
Caught OutOfRangeError. Stopping Training. End of sequence
[[node IteratorGetNext (defined at <ipython-input-8-c768436e70d8>:13) = IteratorGetNext[output_shapes=[[?,224,224,3], [?,1]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
I created a function to feed in hard-coded batches as the result of get_batch, and this runs and converges without any issues, leading me to believe that the graph and session code is working properly. I also tested the get_batch function iterating in a session, and this causes no errors. The behavior I would expect is that restarting training (especially after resetting the notebook, etc.) would produce a fresh iterator over the dataset.
Code to train model:
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)
    images, labels = get_batch(filenames=tf_train_record_path + train_file)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v1_arg_scope()):
        logits, ax = inception.inception_v1(images, num_classes=1, is_training=True)

    # Specify the loss function:
    tf.losses.mean_squared_error(labels, logits)
    total_loss = tf.losses.get_total_loss()
    tf.summary.scalar('losses/Total_Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=1)
Code to get batches using the Dataset API:
def get_batch(filenames):
    dataset = tf.data.TFRecordDataset(filenames=filenames)
    dataset = dataset.map(parse)
    dataset = dataset.batch(2)
    iterator = dataset.make_one_shot_iterator()
    data_X, data_y = iterator.get_next()
    return data_X, data_y
This previously asked question resembles the issue I am experiencing; however, I am not using a batch_join call. I am not sure if this is an issue with slim.learning.train, restoring from a checkpoint, or scope. Any help would be appreciated!
Your input pipeline looks OK. The problem might be a damaged TFRecords file. You can try your code with random data, or use your images as numpy arrays with tf.data.Dataset.from_tensor_slices().
Also, your parse function may cause problems. Try to print your image/label with sess.run.
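A minimal sketch of both checks; the shapes are assumptions read off the output_shapes in the error message ([?, 224, 224, 3] and [?, 1]):

import numpy as np
import tensorflow as tf

# Random arrays in place of the TFRecords (shapes are assumptions).
images = np.random.uniform(size=(16, 224, 224, 3)).astype(np.float32)
labels = np.random.uniform(size=(16, 1)).astype(np.float32)

def get_batch_from_arrays():
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    # repeat() keeps the one-shot iterator from being exhausted after one pass
    dataset = dataset.repeat().batch(2)
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()

# Print one batch with sess.run to sanity-check the pipeline
data_X, data_y = get_batch_from_arrays()
with tf.Session() as sess:
    img, lbl = sess.run([data_X, data_y])
    print(img.shape, lbl.shape)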
And I'd advise using the Estimator API instead of the slim train_op. It is much more convenient, and slim will be deprecated soon.
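A hedged sketch of that Estimator route, assuming the question's parse, inception, tf_train_record_path, train_file and train_dir names exist as above:

import tensorflow as tf

def input_fn():
    # Reuse the Dataset pipeline from the question; repeat() avoids exhaustion
    dataset = tf.data.TFRecordDataset(filenames=tf_train_record_path + train_file)
    dataset = dataset.map(parse).batch(2).repeat()
    return dataset

def model_fn(features, labels, mode):
    logits, _ = inception.inception_v1(
        features, num_classes=1, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
    loss = tf.losses.mean_squared_error(labels, logits)
    train_op = tf.train.AdamOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=train_dir)
estimator.train(input_fn=input_fn, steps=1)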
I am trying to use a frozen, pretrained, DeepLabv3 model in a larger tf.keras training pipeline, but have been having trouble figuring out how to use it as a tf.keras Model. I am trying to use tf.keras as I feel there would be a slowdown using a feed_dict (the only way I know of to use a frozen graph) in the middle of multiple forward passes. The deeplab model referenced in the code below is built in regular keras (as opposed to tf.contrib.keras)
from keras import backend as K

# Create, compile and train model...
frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in deeplab.outputs])
tf.train.write_graph(frozen_graph, "./", "my_model.pb", as_text=False)
graph = load_graph("my_model.pb")

# We can verify that we can access the list of operations in the graph
for op in graph.get_operations():
    print(op.name)
# prefix/Placeholder/inputs_placeholder
# ...
# prefix/Accuracy/predictions

# We access the input and output nodes
x = graph.get_tensor_by_name("prefix/input_1:0")
y = graph.get_tensor_by_name("prefix/bilinear_upsampling_2/ResizeBilinear:0")

# We launch a Session
with tf.Session(graph=graph) as sess:
    print(graph)
    model2 = models.Model(inputs=x, outputs=y)
    model2.summary()
and I get an error:
ValueError: Input tensors to a Model must come from `tf.layers.Input`. Received: Tensor("prefix/input_1:0", shape=(?, 512, 512, 3), dtype=float32) (missing previous layer metadata).
I feel like I've seen others replace the input tensor with an Input Layer to trick tf.keras into building the graph, but after a few hours I am feeling stuck. Any help would be appreciated!
You can recreate the model object from its config. See the from_config method here https://keras.io/models/about-keras-models/.
The config is stored and loaded back by the save_model/load_model functions. I am not familiar with freeze_session.
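A minimal sketch of that config round trip, assuming the original deeplab Model object is still available as a live Keras model (a frozen .pb alone does not carry the layer metadata Keras needs; custom layers would additionally require the custom_objects argument):

from keras.models import Model

# Rebuild the architecture from its config, then copy the trained weights.
config = deeplab.get_config()
model2 = Model.from_config(config)         # rebuilds the architecture
model2.set_weights(deeplab.get_weights())  # copies the trained weights
model2.summary()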
I am an absolute beginner to TensorFlow.
If I have a picture (or set of pictures) that I would like to attempt to classify using the code from the Cifar10 TensorFlow tutorial, how would I do so?
I have absolutely no idea where to start.
1. Train the model using the base CIFAR10 dataset exactly as per the tutorial.
2. Create a new graph with your own inputs - probably easiest to just use a tf.placeholder and feed the data as per below, but there are lots of other ways.
3. Start a session and load the previously saved weights.
4. Run the session (with a feed_dict if you're using a placeholder as above).
import numpy as np
import tensorflow as tf
import cifar10  # the cifar10 module from the TensorFlow tutorial code

train_dir = '/tmp/cifar10_train'  # or use FLAGS as in the train example
batch_size = 8
height = 32
width = 32

image = tf.placeholder(shape=(batch_size, height, width, 3), dtype=tf.uint8)
std_img = tf.image.per_image_standardization(image)
logits = cifar10.inference(std_img)
predictions = tf.argmax(logits, axis=-1)

def get_image_data_batches():
    n_batches = 100
    for i in range(n_batches):
        yield (np.random.uniform(size=(batch_size, height, width, 3)) * 255).astype(np.uint8)

def do_stuff_with(logit_vals, prediction_vals):
    pass

with tf.Session() as sess:
    # restore variables
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint(train_dir))
    # run inference
    for batch_data in get_image_data_batches():
        logit_vals, prediction_vals = sess.run(
            [logits, predictions], feed_dict={image: batch_data})
        do_stuff_with(logit_vals, prediction_vals)
There are better ways of getting data into the graph (see tf.data.Dataset, sketched below), but I believe tf.placeholder is the easiest way to learn and to get something up and running initially.
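For reference, a small sketch of that tf.data route, with a hypothetical in-memory uint8 image array standing in for real data:

import numpy as np
import tensorflow as tf

# Hypothetical in-memory images; any (N, 32, 32, 3) uint8 array works here.
image_data = (np.random.uniform(size=(100, 32, 32, 3)) * 255).astype(np.uint8)

dataset = tf.data.Dataset.from_tensor_slices(image_data)
dataset = dataset.batch(8)
image_batch = dataset.make_one_shot_iterator().get_next()
# image_batch can now replace the placeholder as the network input,
# with no feed_dict needed in sess.run.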
Also check out tf.estimator.Estimator for a cleaner way of managing sessions. It's very different from the way it's done in this tutorial and slightly less flexible, but for standard networks it saves you writing a lot of boilerplate code.
I've trained a CNN model in TensorFlow eager mode. Now I'm trying to restore the trained model from a checkpoint file but haven't had any success.
All the examples I've found (as shown below) are talking about restoring a checkpoint to a Session. But what I need is to restore the model in eager mode, i.e. without creating a session.
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "/tmp/model.ckpt")
Basically what I need is something like:
tfe.enable_eager_execution()
model = tfe.restore('model.ckpt')
model.predict(...)
and then I can use the model to make predictions.
Can someone please help?
Update
The example code can be found at: mnist eager mode demo
I've tried to follow the steps from #Jay Shah's answer and it almost worked, but the restored model doesn't have any variables in it.
tfe.save_network_checkpoint(model,'./test/my_model.ckpt')
Out[58]:
'./test/my_model.ckpt-1720'
model2 = MNISTModel()
tfe.restore_network_checkpoint(model2,'./test/my_model.ckpt-1720')
model2.variables
Out[72]:
[]
The original model has lots of variables in it:
model.variables
[<tf.Variable 'mnist_model_1/conv2d/kernel:0' shape=(5, 5, 1, 32) dtype=float32, numpy=
array([[[[ -8.25184360e-02, 6.77833706e-03, 6.97569922e-02,...
Eager execution is still a new feature in TensorFlow and is not included in the latest release version, so not all features are supported, but fortunately loading a model from a saved checkpoint is.
You'll need to use the tfe.Saver class (which is a thin wrapper over the tf.train.Saver class), and your code should look something like this:
saver = tfe.Saver([x, y])
saver.restore('/tmp/ckpt')
Here [x, y] represents the list of variables and/or models you wish to restore. It should precisely match the list of variables passed when the saver that created the checkpoint was constructed.
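For example, a hedged sketch of the matching save side, with the same hypothetical [x, y] list:

# The same [x, y] list must be passed when constructing the saver
# that writes the checkpoint restored above.
saver = tfe.Saver([x, y])
saver.save('/tmp/ckpt')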
More details, including sample code, can be found here, and the API details of the saver can be found here.
Ok, after spending a few hours running the code in line-by-line mode, I've figured out a way to restore a checkpoint to a new TensorFlow Eager Mode model.
Using the examples from TF Eager Mode MNIST
Steps:
1. After your model has been trained, find the latest checkpoint (or the checkpoint you want) index file in the checkpoint folder created during training, such as 'ckpt-25800.index'. Use only the filename 'ckpt-25800' when restoring in step 5.
2. Start a new Python terminal and enable TensorFlow eager mode by running:
tfe.enable_eager_execution()
3. Create a new instance of the MNISTModel:
model_new = MNISTModel()
4. Initialise the variables for model_new by running a dummy train process once. (This step is important: without initialising the variables first, they can't be restored by the following step. However, I couldn't find another way to initialise variables in eager mode other than what I did below.)
model_new(tfe.Variable(np.zeros((1, 784), dtype=np.float32)), training=True)
5. Restore the variables to model_new using the checkpoint identified in step 1:
tfe.Saver(model_new.variables).restore('./tf_checkpoints/ckpt-25800')
If the restore process is successful, you should see something like:
INFO:tensorflow:Restoring parameters from ./tf_checkpoints/ckpt-25800
Now the checkpoint has been successfully restored to model_new and you can use it to make predictions on new data.
I'd like to share the TFLearn library, a deep learning library featuring a higher-level API for TensorFlow. With the help of this library you can easily save and restore a model.
Saving a model
model = tflearn.DNN(net)  # Here 'net' is your designed network model.
# This is a sample example for training the model
model.fit(train_x, train_y, n_epoch=10, validation_set=(test_x, test_y), batch_size=10, show_metric=True)
model.save("model_name.ckpt")
Restore a model
model = tflearn.DNN(net)
model.load("model_name.ckpt")
For more TFLearn examples, you can check sites like:
My first CNN in TFLearn.
Github Link
First you save your model in a checkpoint by doing the following:
saver.save(sess, './my_model.ckpt')
In the above line you are saving your session in the "my_model.ckpt" checkpoint.
The following code restores the model:
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './my_model.ckpt')
When you restore the session, you restore your model from the checkpoint.
For eager mode to save :
tf.contrib.eager.save_network_checkpoint(sess,'./my_model.ckpt')
For eager mode to restore :
tf.contrib.eager.restore_network_checkpoint(sess,'./my_model.ckpt')
sess is an object of class Network. Any object of class Network can be saved and restored. A quick explanation of Network objects:
class TwoLayerNetwork(tfe.Network):
    def __init__(self, name):
        super(TwoLayerNetwork, self).__init__(name=name)
        self.layer_one = self.track_layer(tf.layers.Dense(16, input_shape=(8,)))
        self.layer_two = self.track_layer(tf.layers.Dense(1, input_shape=(16,)))
    def call(self, inputs):
        return self.layer_two(self.layer_one(inputs))
After constructing an object and calling the Network, a list of the variables created by the tracked layers is available via Network.variables:
sess = TwoLayerNetwork(name="net")  # sess is an object of Network
output = sess(tf.ones([1, 8]))
print([v.name for v in sess.variables])
This example prints the variable names, one kernel and one bias per `tf.layers.Dense` layer:
['net/dense/kernel:0',
 'net/dense/bias:0',
 'net/dense_1/kernel:0',
 'net/dense_1/bias:0']
These variables can be passed to a `Saver` (`tf.train.Saver`, or `tf.contrib.eager.Saver` when executing eagerly) to save or restore the `Network`:
tfe.save_network_checkpoint(sess, './my_model.ckpt')     # saving the model
tfe.restore_network_checkpoint(sess, './my_model.ckpt')  # restoring
Saving variables with tfe.Saver().save() :
for epoch in range(epochs):
    train_and_optimize()

all_variables = model.variables + optimizer.variables()
# save the variables
tfe.Saver(all_variables).save(checkpoint_prefix)
And then reload the saved variables with tfe.Saver().restore():
tfe.Saver((model.variables + optimizer.variables())).restore(checkpoint_prefix)
Then the model is loaded with the saved variables, and there is no need to create a new one as in #Stefan Falk's answer.