Loading a pre-trained ResNet model into the CleverHans model format - python

I am trying to load a pre-trained ResNet model from the MadryLab CIFAR-10 challenge into CleverHans to compute transfer attacks.
However, restoring the saved models into the model_zoo.madry_lab_challenges.cifar10_model.ResNet object does not work. The restore appears to succeed initially, but when I actually try to use the model, I get an error such as:
Attempting to use uninitialized value
ResNet/unit_3_1/residual_only_activation/BatchNorm/moving_mean
The easiest way to reproduce this error is to run the attack_model.py example provided with CleverHans here:
https://github.com/tensorflow/cleverhans/blob/master/examples/madry_lab_challenges/cifar10/attack_model.py
It encounters the same error after loading the model when it tries to use it, on both adv_trained and naturally_trained models.
Is there a workaround to this problem?
It seems the other option is to use the cleverhans.model.CallableModelWrapper instead, but I haven't been able to find an example of how to use that.
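Lacking an official example, here is a minimal sketch of how CallableModelWrapper can be used (the toy get_logits function stands in for the restored ResNet, and the FGSM attack and eps value are illustrative, not from the Madry challenge code):

import tensorflow as tf
from cleverhans.attacks import FastGradientMethod
from cleverhans.model import CallableModelWrapper

# Stand-in for the restored Madry-challenge ResNet: any callable that
# maps an input tensor to a logits tensor will do.
def get_logits(x):
    return tf.layers.dense(tf.layers.flatten(x), 10)

sess = tf.Session()
x = tf.placeholder(tf.float32, [None, 32, 32, 3])

# 'logits' tells CleverHans how to interpret the callable's output.
model = CallableModelWrapper(get_logits, 'logits')

# The wrapper can then be passed to any CleverHans attack, e.g. FGSM:
fgsm = FastGradientMethod(model, sess=sess)
x_adv = fgsm.generate(x, eps=8.0 / 255.0)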

Related

torch.load(ml_model) in new class I receive an AttributeError: Can't get attribute 'ResNet1D' on <module '__main__'>

I have successfully trained a convolutional neural network using Google Colab in a file named model_prep.py. The model achieves 92% accuracy. Now that I'm happy with the model, I have used PyTorch to save it.
torch.save(model, '/content/drive/MyDrive/myModel.pt')
My understanding is that once the model has been fully trained, I can use PyTorch to save it and then load it into future projects for predictions on new data. Therefore I created a separate test.py file where I loaded the trained model like so:
model = torch.load('/content/drive/MyDrive/myModel.pt')
model.eval()
But within the new test.py file, I receive an error message:
AttributeError: Can't get attribute 'ResNet1D' on <module '__main__'>
However, this error does not occur when loading the model in the same notebook where it was trained (model_prep.py). It only occurs when loading the model into a separate notebook that does not contain the model architecture. How do I get around this problem? I would like to load the trained model into a new, separate file to run predictions on new data. Can someone suggest a solution?
In the future, I would like to create a GUI using tkinter and deploy the trained model to check predictions using new data within the tkinter file. Is this possible?
I was facing the same error too. What this is trying to say is: make the model class available (create an instance of it by calling the class) before you do torch.load().
If you go to the PyTorch guide on Saving and Loading Models, in the load section you can clearly see this line: # Model class must be defined somewhere.
Hence I would recommend that in your test.py file you define the model class exactly as you did in model_prep.py (the file where you created your model), and then load as shown below.
model = ModelClass()  # the class definition must be in scope before loading
model = torch.load(PATH, map_location=torch.device('cpu'))  # <-- map_location needed if the current device is CPU
model.eval()  # <-- puts layers such as dropout and batch norm into evaluation mode
As stated in the PyTorch guide (here): "Saving a model in this way will save the entire module using Python's pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is that pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors."
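For completeness, here is a hedged sketch of the state_dict approach that same guide recommends over whole-model pickling (the ResNet1D constructor call and the Drive paths are placeholders based on the question):

import torch

# In model_prep.py: save only the learned parameters, not the pickled class.
torch.save(model.state_dict(), '/content/drive/MyDrive/myModel_state.pt')

# In test.py: the class definition must still be importable,
# but the saved file no longer depends on the original module path.
model = ResNet1D()  # placeholder constructor; pass your real arguments
model.load_state_dict(torch.load('/content/drive/MyDrive/myModel_state.pt',
                                 map_location=torch.device('cpu')))
model.eval()  # switch to evaluation mode before inference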
I fixed this with the TorchScript approach. We save the model with
model_scripted = torch.jit.script(model)  # export to TorchScript
model_scripted.save('model_scripted.pt')  # save
and to load it
model = torch.jit.load('model_scripted.pt')
model.eval()
More details here

Load a TensorFlow SavedModel trained in Google Cloud ML into RStudio

I trained a model in Google Cloud ML and saved it in the SavedModel format. I've attached the directory for the saved model below.
https://drive.google.com/drive/folders/18ivhz3dqdkvSQY-dZ32TRWGGW5JIjJJ1?usp=sharing
I am trying to load the model into R using the following code, but it returns <tensorflow.python.training.tracking.tracking.AutoTrackable> with an object size of 552 bytes, which is definitely not correct. If anyone can properly load the model, I would love to know how you did it. I assume it should also be loadable in Python, so that could work too. The model was trained on a GPU; I am not sure which TensorFlow version. Thank you very much!
library(keras)
list.files("/path/to/inceptdual400OG")
og400 <- load_model_tf("/path/to/inceptdual400OG")
Since the shared model is not available anymore (it says it is in the trash folder) and the framework is not specified in the question, I can't tell how the model was saved in the first place. I would suggest trying the Keras load function or the TensorFlow load function, depending on which type of saved model file you have.
Bear in mind to set the argument compile = FALSE if you have the model already compiled.
Remember to use up-to-date libraries if you trained your model with tf >= 2.0, because of dependency incompatibilities between TensorFlow and Keras; the output of rsconnect::appDependencies() is also worth checking.
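Since the asker mentions that loading in Python would also work, here is a minimal Python sketch (the directory path comes from the question; the signature key is an assumption):

import tensorflow as tf

# If the directory is a Keras-flavored SavedModel:
model = tf.keras.models.load_model("/path/to/inceptdual400OG", compile=False)

# If it was exported without Keras metadata, fall back to the generic loader,
# which returns the same AutoTrackable object the R session showed:
loaded = tf.saved_model.load("/path/to/inceptdual400OG")
infer = loaded.signatures["serving_default"]  # assumed default signature key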

Tensorflow: How to load a pre-trained ResNet model

I want to use a pre-trained ResNet model which Tensorflow provides here.
First I downloaded the code (resnet_v1.py) to reconstruct the model's graph here. The model's weights (resnet_v1_50.ckpt) can be found on the same page here.
The model can be tested using the following script (resnet_v1_test.py) from here. However, I have trouble extracting the right information from resnet_v1_test.py. I don't understand many things that happen in this script. What are the essential functions for passing a random image through the network? How can I access the weights and activations for further work?
What are the next steps from here? I would appreciate any help!
TL;DR: How can I use the resnet_v1_test.py script to perform classification and access weights and activations?
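For reference, a minimal inference sketch with the TF-Slim ResNet (TF 1.x era; the checkpoint path is assumed to point at the downloaded resnet_v1_50.ckpt, and exact module paths can vary between releases):

import numpy as np
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import resnet_v1

# Build the inference graph.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(images, num_classes=1000,
                                                is_training=False)

saver = tf.train.Saver()  # will restore the pre-trained weights
with tf.Session() as sess:
    saver.restore(sess, 'resnet_v1_50.ckpt')
    # end_points maps layer names to activation tensors, and
    # tf.trainable_variables() lists the weight variables.
    preds = sess.run(end_points['predictions'],
                     feed_dict={images: np.random.rand(1, 224, 224, 3)})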

Tensorflow: How can I restore a model for training? (Python)

I want to train a CNN for 20000 steps. At the 100th step I want to save all variables, and after that I want to re-run my code, restoring the model and continuing from the 100th step. I am trying to make it work with the tensorflow documentation: https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html but I can't. Any help?
I'm stuck on something similar, but maybe this link can help you. I'm new to TensorFlow, but I think you can't restore and continue fitting without training your model again.
This functionality is still unstable and the documentation is outdated, so it is confusing. What worked for me (this was a suggestion from people at Google who work directly on TensorFlow) was to use the model_dir parameter in the constructor of my models before training; this tells TensorFlow where to store your model. After training, you just instantiate a model again using the same model_dir and it will restore the model from the generated files and checkpoints.
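A minimal sketch of that model_dir pattern with tf.estimator (my_model_fn, train_input_fn, and the directory are placeholders):

import tensorflow as tf

# First run: train for 100 steps; checkpoints land in model_dir.
est = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/cnn_ckpts')
est.train(input_fn=train_input_fn, steps=100)

# Later run (even in a fresh process): the same model_dir restores the
# latest checkpoint automatically and training resumes at step 100.
est = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/cnn_ckpts')
est.train(input_fn=train_input_fn, steps=19900)  # continue to step 20000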

TensorFlow: How do I release a model without source code?

I am using Tensorflow + Python.
I am curious if I can release a saved Tensorflow model (architecture + trained variables) without detailed source code. I'm aware of tf.train.Saver(), but it appears to save only variables; in order to restore/run them, a user needs to "define" the same architecture.
For the testing/running purpose only, is there a way to release a saved {architecture+trained variables} without source code, so that a user can just cast a query and get a result?
The TensorFlow Serving project is intended to make this use case straightforward (assuming that the end user is only using the model for inference, not training). TensorFlow Serving includes an Exporter class that takes your tf.train.Saver, the tf.GraphDef that defines your overall model, and a "signature" that describes the inputs to and output from your model.
The basics tutorial has a good introduction to exporting your model.
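A hedged sketch of that export flow, based on the (since-deprecated) session_bundle Exporter API from the basics tutorial; x and y stand for the input placeholder and output tensor of an already-trained model living in the session sess:

import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=x, scores_tensor=y)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export('/tmp/export', tf.constant(1), sess)  # export version 1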
You can build a Saver from the MetaGraphDef (saved with checkpoints by default: those .meta files) and then use that Saver to restore your model, so users don't have to re-define your graph in their code. But they still need to figure out the model signature (the input and output variables). I solve this using TensorFlow collections (tf.add_to_collection / tf.get_collection), but I am interested in finding better ways to do it as well.
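A minimal sketch of that restore path (the checkpoint filenames, collection keys, and batch variable are illustrative):

import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph from the MetaGraphDef; this returns a Saver for it.
    saver = tf.train.import_meta_graph('model.ckpt-100.meta')
    saver.restore(sess, 'model.ckpt-100')  # load the trained variables

    # The exporting code stashed the signature tensors in collections:
    inputs = tf.get_collection('inputs')[0]
    outputs = tf.get_collection('outputs')[0]
    result = sess.run(outputs, feed_dict={inputs: batch})  # batch: user data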
You can take a look at my example implementation (eval.py evaluates a model without re-defining it):
reconstruct saver from meta graph https://github.com/falcondai/cifar10/blob/master/eval.py#L18
get input variables from collections https://github.com/falcondai/cifar10/blob/master/eval.py#L58
how to define your model https://github.com/falcondai/cifar10/blob/master/models/cp2f3d.py
