Set up tensorboard on Matterport - Mask RCNN - python

I am following this tutorial for image detection using the Matterport repo.
I also tried following this guide on TensorBoard and edited the code accordingly.
How can I edit the following code to visualize training in TensorBoard?
import tensorflow as tf
import datetime
%load_ext tensorboard
sess = tf.Session()
file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
And then, in the model area:
# prepare config
config = KangarooConfig()
config.display()
# define the model
model = MaskRCNN(mode='training', model_dir='./', config=config)
model.keras_model.metrics_tensors = []
# Tensorflow board
logdir = os.path.join(
    "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
# load weights (mscoco) and exclude the output layers
model.load_weights('mask_rcnn_coco.h5',
                   by_name=True,
                   exclude=[
                       "mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox",
                       "mrcnn_mask"
                   ])
# train weights (output layers or 'heads')
model.train(train_set,
            test_set,
            learning_rate=config.LEARNING_RATE,
            epochs=5,
            layers='heads')
I am not sure where to pass callbacks=[tensorboard_callback]?

In your model.train, if you look closely at the source code documentation, there is a parameter called custom_callbacks, which defaults to None.
That is where you need to pass your callback, so to train with a custom callback you will need to change the call to:
model.train(train_set,
            test_set,
            learning_rate=config.LEARNING_RATE,
            custom_callbacks=[tensorboard_callback],
            epochs=5,
            layers='heads')

You only have to open the Anaconda Prompt and run tensorboard --logdir=yourlogdirectory, where yourlogdirectory is the directory containing the model checkpoints.
It should look something like this: logs\xxxxxx20200528T1755, where xxxxxx stands for the name you gave your configuration.
This command will print a web address; copy it into your browser of preference.
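Alternatively, if you work in a notebook (as the %load_ext line in the question suggests), you can display TensorBoard inline. A minimal sketch, assuming a recent tensorboard package and the logs directory created above:
%load_ext tensorboard
%tensorboard --logdir logs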

Related

Problem building tensorflow model from huggingface weights

I need to work with the pretrained BERT model ('dbmdz/bert-base-italian-xxl-cased') from Huggingface with Tensorflow (at this link).
After reading this on the website,
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
I raised the issue and was promptly given a download link to an archive containing the following files:
$ ls bert-base-italian-xxl-cased/
config.json model.ckpt.index vocab.txt
model.ckpt.data-00000-of-00001 model.ckpt.meta
I'm now trying to load the model and work with it, but everything I tried failed.
I tried following this suggestion from a Huggingface discussion site:
bert_folder = str(Config.MODELS_CONFIG.BERT_CHECKPOINT_DIR) # folder in which I have the files extracted from the archive
from transformers import BertConfig, TFBertModel
config = BertConfig.from_pretrained(bert_folder) # this gets loaded correctly
After this point I tried several combinations to load the model, but always unsuccessfully. For example:
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config)
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config, from_pt=True)
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config, from_pt=True)
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased", config=config, local_files_only=True)
Always results in this error:
404 Client Error: Not Found for url: https://huggingface.co/models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index/resolve/main/tf_model.h5
...
...
OSError: Can't load weights for '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index'. Make sure that:
- '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index' is a correct model identifier listed on 'https://huggingface.co/models'
- or '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
So my question is: How can I load this pre-trained BERT model from those files and use it in tensorflow?
You can try the following snippet to load dbmdz/bert-base-italian-xxl-cased in tensorflow.
from transformers import AutoTokenizer, TFBertModel
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFBertModel.from_pretrained(model_name)
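Once it loads, a quick forward pass confirms the weights work. A minimal sketch (the sample sentence is arbitrary; depending on your transformers version the output is a tuple or a model-output object, and index 0 works for both):
inputs = tokenizer("Buongiorno, come stai?", return_tensors="tf")
outputs = model(inputs)
print(outputs[0].shape)  # last hidden states: (1, sequence_length, hidden_size)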
If you want to load from the given TensorFlow checkpoint, you could try something like this:
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config, from_tf=True)

NameError: name 'create_model' is not defined. I have tried importing the model from Keras but it hasn't solved it. How do I solve this?

I tried creating a model using TensorFlow. When I try executing it, it shows me the NameError above.
The other files are in this link: github.com/llSourcell/tensorflow_chatbot
def train():
    enc_train, dec_train = data_utils.prepare_custom_data(
        gConfig['working_directory'])
    train_set = read_data(enc_train, dec_train)

    def seq2seq_f(encoder_inputs, decoder_inputs, do_decode):
        return tf.nn.seq2seq.embedding_attention_seq2seq(
            encoder_inputs, decoder_inputs, cell,
            num_encoder_symbols=source_vocab_size,
            num_decoder_symbols=target_vocab_size,
            embedding_size=size,
            output_projection=output_projection,
            feed_previous=do_decode)

    with tf.Session(config=config) as sess:
        model = create_model(sess, False)
        while True:
            sess.run(model)
            checkpoint_path = os.path.join(gConfig['working_directory'], 'seq2seq.ckpt')
            model.saver.save(sess, checkpoint_path, global_step=model.global_step)
Other than this, the remaining Python files I've used are in the GitHub link above.
This is the code defining create_model in the execute.py file:
def create_model(session, forward_only):
    """Create model and initialize or load parameters"""
    model = seq2seq_model.Seq2SeqModel(
        gConfig['enc_vocab_size'], gConfig['dec_vocab_size'], _buckets,
        gConfig['layer_size'], gConfig['num_layers'],
        gConfig['max_gradient_norm'], gConfig['batch_size'],
        gConfig['learning_rate'], gConfig['learning_rate_decay_factor'],
        forward_only=forward_only)
    if 'pretrained_model' in gConfig:
        model.saver.restore(session, gConfig['pretrained_model'])
        return model
    ckpt = tf.train.get_checkpoint_state(gConfig['working_directory'])
    # the checkpoint filename has changed in recent versions of tensorflow
    checkpoint_suffix = ""
    if tf.__version__ > "0.12":
        checkpoint_suffix = ".index"
    if ckpt and tf.gfile.Exists(ckpt.model_checkpoint_path + checkpoint_suffix):
        print("Reading model parameters from %s" % ckpt.model_checkpoint_path)
        model.saver.restore(session, ckpt.model_checkpoint_path)
    else:
        print("Created model with fresh parameters.")
        session.run(tf.initialize_all_variables())
    return model
Okay, it seems like you have copied the code but did not structure it. If create_model() is defined in another file, then you have to import it, i.e. from file_with_methods import create_model (see the sketch below). Have you done that? You should consider editing your post and adding more of your code if you want us to help.
Alternative: you could also clone the GitHub repository (that you shared in your comment) and just change whatever you want to change in the execute.py file. This way you keep the "hierarchy" that the owner uses and can add your own code where needed.
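A minimal sketch of that import, assuming create_model is defined in execute.py next to the training script (adjust the module name to wherever the function actually lives):
from execute import create_model

with tf.Session(config=config) as sess:
    model = create_model(sess, False)  # resolves instead of raising NameError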

Converting Keras model as tensorflow model gives me error

Hi, I am trying to save my saved models (.h5 files) in the TensorFlow SavedModel format. This is the code I used:
import tensorflow as tf

def tensor_function(i):
    tf.keras.backend.set_learning_phase(0)  # Ignore dropout at inference
    model = tf.keras.models.load_model('/home/ram/Downloads/AutoEncoderModels_ch2/19_hour/autoencoder_models_ram/auto_encoder_model_pos_' + str(i) + '.h5')
    export_path = '/home/ram/Desktop/tensor/' + str(i)
    #sess = tf.Session()

    # Fetch the Keras session and save the model
    # The signature definition is defined by the input and output tensors
    # And stored with the default serving key
    with tf.keras.backend.get_session() as sess:
        tf.saved_model.simple_save(
            sess,
            export_path,
            inputs={'input_image': model.input},
            outputs={t.name: t for t in model.outputs})
        sess.close()

for i in range(4954):
    tensor_function(i)
I tried opening the session manually with sess = tf.Session() (and also removed the with block), but in vain.
I got an error when I used Jupyter Notebook; when I ran the same code in a Linux terminal I get the following error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_73/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_73/bias)
[[{{node dense_73/bias/Read/ReadVariableOp}} = ReadVariableOp[_class=["loc:#dense_73/bias"], dtype=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dense_73/bias)]]
When I tried to save just one saved-model file it ran successfully. Problems happen only when I try to run it in a loop (probably some session problem).
I tried this answer on SO but it didn't help much.
For me the following two options work:
Option 1: Add tf.keras.backend.clear_session() at the beginning of your tensor_function and use a 'with' block:
def tensor_function(i):
    tf.keras.backend.clear_session()
    tf.keras.backend.set_learning_phase(0)  # Ignore dropout at inference
    model = ...
    export_path = 'so-test/' + str(i)
    with tf.keras.backend.get_session() as sess:
        tf.saved_model.simple_save(
            sess,
            export_path,
            inputs={'input_image': model.input},
            outputs={t.name: t for t in model.outputs})
        sess.close()
Option 2: Use tf.Session() instead of the 'with' block but add the line sess.run(tf.global_variables_initializer()):
def tensor_function(i):
    tf.keras.backend.set_learning_phase(0)  # Ignore dropout at inference
    model = ...
    export_path = 'so-test/' + str(i)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs})
    sess.close()
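As a side note, if upgrading is an option, TensorFlow 2.x drops the session bookkeeping entirely. A minimal sketch under that assumption (in TF 2.x, passing a plain directory path to model.save exports the SavedModel format directly):
import tensorflow as tf

def tensor_function(i):
    model = tf.keras.models.load_model('auto_encoder_model_pos_' + str(i) + '.h5')
    # a directory path (no .h5 suffix) exports a SavedModel in TF 2.x
    model.save('/home/ram/Desktop/tensor/' + str(i))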

Keras: Save Model In Suitable Format for Google Cloud ML Engine (missing functions)

I'm trying to deploy a recently trained Keras model on Google Cloud ML Engine. I googled around to see what format the saved model needs to be for ML Engine and found this:
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)

# disable loading of learning nodes
K.set_learning_phase(0)

# load model
model = load_model('model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)

# export saved model
export_path = 'YOUR_EXPORT_PATH' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                  outputs={'NAME_YOUR_OUTPUT': new_Model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={
                                             signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
However, in Keras 2.1.3, keras.backend no longer seems to have clear_session(), set_session(), or get_session(). What's the modern way of handling this issue? Do those functions live elsewhere now?
Thanks!

Tensor is not an element of this graph; deploying Keras model

I'm deploying a Keras model and sending the test data to it via a Flask API. I have two files:
First: My Flask App:
# Let's startup the Flask application
app = Flask(__name__)

# Model reload from jSON:
print('Load model...')
json_file = open('models/model_temp.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
keras_model_loaded = model_from_json(loaded_model_json)
print('Model loaded...')

# Weights reloaded from .h5 inside the model
print('Load weights...')
keras_model_loaded.load_weights("models/Model_temp.h5")
print('Weights loaded...')

# URL that we'll use to make predictions using get and post
@app.route('/predict', methods=['GET', 'POST'])
def predict():
    data = request.get_json(force=True)
    predict_request = [data["month"], data["day"], data["hour"]]
    predict_request = np.array(predict_request)
    predict_request = predict_request.reshape(1, -1)
    y_hat = keras_model_loaded.predict(predict_request, batch_size=1, verbose=1)
    return jsonify({'prediction': str(y_hat)})

if __name__ == "__main__":
    # Choose the port
    port = int(os.environ.get('PORT', 9000))
    # Run locally
    app.run(host='127.0.0.1', port=port)
Second: the file I'm using to send the JSON data to the API endpoint:
response = rq.get('api url has been removed')
data = response.json()

currentDT = datetime.datetime.now()
month = currentDT.month
day = currentDT.day
hour = currentDT.hour

url = "http://127.0.0.1:9000/predict"
post_data = json.dumps({'month': month, 'day': day, 'hour': hour})
r = rq.post(url, post_data)
I'm getting this response from Flask regarding TensorFlow:
ValueError: Tensor Tensor("dense_6/BiasAdd:0", shape=(?, 1), dtype=float32) is not an element of this graph.
My keras model is a simple 6 dense layer model and trains with no errors.
Any ideas?
Flask uses multiple threads. The problem you are running into is that the TensorFlow model is not loaded and used in the same thread. One workaround is to force TensorFlow to use the global default graph.
Add this after you load your model
global graph
graph = tf.get_default_graph()
And inside your predict
with graph.as_default():
    y_hat = keras_model_loaded.predict(predict_request, batch_size=1, verbose=1)
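Put together in the Flask app from the question, the two fragments sit like this (a sketch showing only the relevant lines):
import tensorflow as tf

keras_model_loaded.load_weights("models/Model_temp.h5")
# capture the default graph right after the model is loaded
graph = tf.get_default_graph()

@app.route('/predict', methods=['GET', 'POST'])
def predict():
    data = request.get_json(force=True)
    predict_request = np.array([data["month"], data["day"], data["hour"]]).reshape(1, -1)
    # run the prediction inside the captured graph
    with graph.as_default():
        y_hat = keras_model_loaded.predict(predict_request, batch_size=1, verbose=1)
    return jsonify({'prediction': str(y_hat)})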
It's much simpler to wrap your Keras model in a class, and that class can keep track of its own graph and session. This prevents the problems that having multiple threads/processes/models can cause, which is almost certainly the cause of your issue. While other solutions will work, this is by far the most general and scalable catch-all. Use this one:
import os
from keras.models import model_from_json
from keras import backend as K
import tensorflow as tf
import logging

logger = logging.getLogger('root')


class NeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.graph = tf.get_default_graph()
        # the folder in which the model and weights are stored
        self.model_folder = os.path.join(os.path.abspath("src"), "static")
        self.model = None
        # for some reason in a flask app the graph/session needs to be used in the init else it hangs on other threads
        with self.graph.as_default():
            with self.session.as_default():
                logging.info("neural network initialised")

    def load(self, file_name=None):
        """
        :param file_name: [model_file_name, weights_file_name]
        :return:
        """
        with self.graph.as_default():
            with self.session.as_default():
                try:
                    model_name = file_name[0]
                    weights_name = file_name[1]

                    if model_name is not None:
                        # load the model
                        json_file_path = os.path.join(self.model_folder, model_name)
                        json_file = open(json_file_path, 'r')
                        loaded_model_json = json_file.read()
                        json_file.close()
                        self.model = model_from_json(loaded_model_json)

                    if weights_name is not None:
                        # load the weights
                        weights_path = os.path.join(self.model_folder, weights_name)
                        self.model.load_weights(weights_path)

                    logging.info("Neural Network loaded: ")
                    logging.info('\t' + "Neural Network model: " + model_name)
                    logging.info('\t' + "Neural Network weights: " + weights_name)
                    return True
                except Exception as e:
                    logging.exception(e)
                    return False

    def predict(self, x):
        with self.graph.as_default():
            with self.session.as_default():
                y = self.model.predict(x)
        return y
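A hypothetical usage sketch in the Flask app from the question (the file names are the questioner's, and they would need to live in the class's model_folder):
nn = NeuralNetwork()
nn.load(["model_temp.json", "Model_temp.h5"])

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json(force=True)
    x = np.array([data["month"], data["day"], data["hour"]]).reshape(1, -1)
    return jsonify({'prediction': str(nn.predict(x))})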
Just after loading the model, add model._make_predict_function():
# Model reload from jSON:
print('Load model...')
json_file = open('models/model_temp.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
keras_model_loaded = model_from_json(loaded_model_json)
print('Model loaded...')
# Weights reloaded from .h5 inside the model
print('Load weights...')
keras_model_loaded.load_weights("models/Model_temp.h5")
print('Weights loaded...')
keras_model_loaded._make_predict_function()
It turns out this approach does not need a clear_session call, and it is at the same time configuration-friendly: take the graph object from the configured session (session = tf.Session(config=_config); self.graph = session.graph) and predict with the created graph as default (with self.graph.as_default():) for a clean approach:
from keras.backend.tensorflow_backend import set_session
...

def __init__(self):
    config = self.keras_resource()
    self.init_model(config)

def init_model(self, _config, *args):
    session = tf.Session(config=_config)
    self.graph = session.graph
    # set configured session
    set_session(session)
    self.model = load_model(file_path)

def keras_resource(self):
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return config

def predict_target(self, to_predict):
    with self.graph.as_default():
        predict = self.model.predict(to_predict)
    return predict
I had the same problem. It was resolved by changing from TensorFlow 1 to TensorFlow 2: just uninstall version 1 and install version 2.
Yes, there is a bug when you predict from a model with Keras: Keras may not be able to build the graph in this situation. Try predicting images from the model with the help of TensorFlow's default graph. Just replace this line of code:
Keras code:
features = model_places.predict( img )
tensorflow code:
import tensorflow as tf
graph = tf.get_default_graph()
Import this in your code and replace the prediction call with:
with graph.as_default():
    features = model_places.predict(img).tolist()
If the problem is still not solved, try refreshing the graph. As your code is fine, running in a clean environment should solve it:
Clear the Keras cache at ~/.keras/.
Run in a new environment with the right packages (this can be done easily with Anaconda).
Make sure you are in a fresh session; keras.backend.clear_session() should remove all existing TF graphs.
Keras Code:
keras.backend.clear_session()
features = model_places.predict( img )
TensorFlow Code:
import tensorflow as tf
with tf.Session() as sess:
    tf.reset_default_graph()
The simplest solution is to use TensorFlow 2.0: run your code in a TensorFlow 2.0 environment and it will work.
I was facing the same issue while exposing a pre-trained model via a REST server. I was loading the model at server startup and later using it to make predictions via POST/GET requests. While predicting, it raised an error because the session was not preserved between predict calls (though loading the model anew for every prediction worked fine).
To avoid this session issue I simply ran the code in a TF 2.0 environment, and it ran fine.
