Is it possible to apply GradCam to a TF Lite model - python

I have been studying Grad-CAM, and I noticed that most examples apply it to a Keras/TensorFlow model. However, I have a TensorFlow Lite model that has been compiled to the .tflite format. I am not sure whether it is even possible to access my CNN layers after compilation. I tried using the Keras library to load the model, but it only accepts specific file types, not .tflite, and it threw an error:
from tensorflow.keras.models import load_model
model = load_model("/content/drive/My Drive/tensorflow_lite_model.tflite")
It gives the error:
OSError: SavedModel file does not exist
What I was trying to do was print the .tflite model with model.summary(), as a way to confirm whether I could perform any operation on the model's layers. If I can't, then I don't think it is possible to use Grad-CAM with a TensorFlow Lite model.
Therefore, I would like to know whether that is true, or whether I just tried to validate it the wrong way.

A TFLite model file uses a different serialization format from the TensorFlow model formats (Keras and SavedModel).
Since you already have a TFLite model, you need to use the TensorFlow Lite Interpreter API, instead of using the TensorFlow API.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
Please refer to this link for the details.
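For completeness, here is a minimal end-to-end inference sketch with the Interpreter API (the model path and the all-zeros input are placeholders):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# look up input/output tensor metadata
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# feed a dummy input of the right shape/dtype, then run inference
dummy_input = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy_input)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]['index'])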
A TF Grad-CAM model can be converted into a TFLite model; it is technically possible to convert any TF model to a corresponding TFLite model. If you have any issues with the conversion, please file a bug at the TensorFlow GitHub.
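As a sketch of that direction, assuming you still have the original tf.keras model (called model here), the standard conversion path looks like this:
import tensorflow as tf

# "model" is assumed to be the tf.keras.Model you built Grad-CAM on
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)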

Related

Can you use .tflite model like you would a normal tensorflow model?

I've used TensorFlow Lite to make my own custom android.tflite model, as described in this tflite Colab demo.
I was going to test my object detection model using tflite_support, but this module is not Windows compatible.
Can I use my .tflite model with "regular" TensorFlow? If so, how do I load the .tflite model? Following this example, we use tf.saved_model.load(model_dir), which I believe will not load our android.tflite file.
Is there a function for loading a .tflite file? Or, while I'm making my model with tensorflow_model_maker, do I need to specify a different format for the model?
Additionally, I see that tflite has a way to convert normal TensorFlow models to tflite models, but I don't see any means of doing the reverse:
tf.lite.TFLiteConverter(
    funcs, trackable_obj=None
)
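For what it's worth, the converter only goes one way (TF model to .tflite); to run the .tflite file itself with plain TensorFlow you would go through tf.lite.Interpreter rather than tf.saved_model.load, roughly like this (the file name is a placeholder):
import tensorflow as tf

# a .tflite file is not a SavedModel, so load it with the Interpreter
interpreter = tf.lite.Interpreter(model_path="android.tflite")
interpreter.allocate_tensors()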

How to load a tensorflow keras model saved with saved_model to use the predict function?

I have a Keras sequential model, which I saved using the command:
tf.keras.models.save_model(model, 'model')
It now has the standard SavedModel folder structure: a saved_model.pb file plus variables/ and assets/ directories.
Now I am loading the model using
model = tf.saved_model.load('model')
I also tried with
model = tf.keras.models.load_model('model')
then I am trying to predict using
model.predict(padded_seq, verbose=0)
and it is giving me the error:
AttributeError: '_UserObject' object has no attribute 'predict'
How do I use predict on the loaded model? I have tried with an .h5 model and it worked fine, but my main use is with this kind of model, which is throwing the error.
You are using the incorrect function to load your model (tf.saved_model.load); it does not return a Keras object. From the docs:
The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods).
You should be using tf.keras.models.load_model to load a Keras model.
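Concretely, for the folder saved above, this should work:
import tensorflow as tf

model = tf.keras.models.load_model('model')  # returns a full Keras model
model.predict(padded_seq, verbose=0)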
I have encountered the same problem with SavedModel models downloaded from TF Hub (example: InceptionV3). Even loading one with tf.keras.models.load_model() returns a plain model (a sort of basic generic model kept for backward compatibility) that does not have the Keras API (predict, fit, summary, build, etc.) on top of it. The object type is: <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x14a42ac2bcf8>
If you want to use just the inference call (predict), you can call your model directly on data (the __call__ method is defined), as follows:
model(padded_seq) # or model.__call__(padded_seq)
One workaround I have found to get the Keras API back is wrapping the model inside a KerasLayer in a Sequential model, as follows:
import tensorflow as tf
import tensorflow_hub as hub
model = tf.keras.Sequential([
    hub.KerasLayer("saved/model/path")
])
model.build(<input_shape>)
Now the model supports all of the Keras API (predict, summary, etc.), and this should now work:
model.predict(padded_seq, verbose=0)

How to convert a Theano-backend channels-last model, first to a TensorFlow-backend model, then to tflite?

Problem
I have a trained model (that is not mine) on which I can make inferences with the Theano backend. I need to run it on Android, so I am trying to convert this model to TensorFlow Lite (.tflite).
Before converting it to .tflite, I am trying to make the model work with the TensorFlow backend, but I can't do it properly (Python with Keras).
What works
This is what I do with the Theano model (Theano backend, channels-last ordering), and it works fine:
from keras.models import model_from_json

with open('Model/definition.json', 'r') as f:
    model = model_from_json(f.read())
model.load_weights('Model/weights.h5')
p = model.predict_proba(preprocessed_data)
print_results(p)
The model has only two outputs (detected or not detected), and it works fine.
What does not work
When I just switch the backend to TensorFlow and run the same code, the model does not detect anything anymore.
What I have tried already
I first thought it was a dim-ordering problem, as described on pages like this one: Converting Theano-based Keras model definition to TensorFlow.
A Theano model should use channels-first dimensions, while a TensorFlow model should use channels-last dimensions.
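If it really were a data-format mismatch, one thing worth checking is Keras's global image data format, which can be forced explicitly before building the model (a sketch; the same setting also lives in ~/.keras/keras.json as "image_data_format"):
from keras import backend as K

# force channels-last interpretation before (re)building the model
K.set_image_data_format('channels_last')
print(K.image_data_format())  # sanity check: should print 'channels_last'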
There is also the script from this thread that I tried : https://github.com/keras-team/keras/issues/5374
It does not work for me because I think my weights are already in channels-last ordering (this is what I gathered from Netron; see the picture on Imgur).
The last thing I tried was convert_all_kernels_in_model(), but I got this error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value conv2d_1/kernel
[[{{node _retval_conv2d_1/kernel_0_0}}]]
Question
What do you think, guys? What do I need to do to my model to make it run with the TensorFlow backend (in order to convert it to tflite)?

Keras and Tensorflow: Saving non-sequential model weights

(I'm using Tensorflow 1.8.0 ...)
The Keras documentation on how to save a model mentions no difference between saving a sequential model and one created with the functional API. But all of the following blocks of code fail:
import tensorflow as tf
net = tf.keras.models.Model()
net.save('file')
or
import tensorflow as tf
net = tf.keras.models.Model()
print(net.to_json())
or
import tensorflow as tf
net = tf.keras.models.Model()
print(net.to_yaml())
or
import tensorflow as tf
net = tf.keras.models.Model()
print(net.get_config())
They raise a NotImplementedError. In the Keras module, the relevant lines are
if not self._is_graph_network:
    raise NotImplementedError
which show up in .save and get_config (the latter is also called by to_json and to_yaml).
The only thing that DOES work is the following
import tensorflow as tf
net = tf.keras.models.Model()
net.save_weights('file')
in which case the weights are saved successfully and can be successfully loaded with net.load_weights.
However, replacing the second line of the above blocks of code, net = tf.keras.models.Model(), with net = tf.keras.models.Sequential(), making net a sequential model, allows everything above to work.
Is it really not possible to save the structure of a Keras model made with the functional API (using Model rather than Sequential)? Right now, can we only save weights?
Of course it's possible to save a Model; all your examples have an empty Model, which makes no sense to save. Keras's authors simply did not implement that case.
If you test with a non-empty Model, you will see that saving works perfectly. We use it every day.
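For example, a minimal non-empty functional model saves without complaint (a sketch; under TF 1.x, net.save writes HDF5, so h5py needs to be installed):
import tensorflow as tf

# a tiny functional-API model: one Dense layer
inputs = tf.keras.layers.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
net = tf.keras.models.Model(inputs, outputs)

net.save('file.h5')    # works: the network graph is defined
print(net.to_json())   # works too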

Export tensorflow weights to hdf5 file and model to keras model.json

I recently found this project, which runs inference on Keras models in the browser with GPU support using WebGL. I have a few TensorFlow projects that I would like to run inference on in a browser. Is there a way to export TensorFlow models to an hdf5 file so they can be run using keras-js?
If you are using Keras, you can do something like this:
model.save_weights('my_model.hdf5')
The only way I can see this working is if you use a Keras model as an interface to your TensorFlow workflow. If you do that, you can save the model and its weights like this:
# save model
with open(model_save_filename, "w") as model_save_file:
    model_json = model.to_json()
    model_save_file.write(model_json)
# save model weights
model.save_weights(model_weights_save_filename)
More information on using Keras as an interface to Tensorflow workflows here: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#using-keras-models-with-tensorflow
