I want to use a pre-trained ResNet model which TensorFlow provides here.
First I downloaded the code (resnet_v1.py) to reconstruct the model's graph here. The model's weights (resnet_v1_50.ckpt) can be found on the same page here.
The model can be tested using the following script (resnet_v1_test.py) from here. However, I have trouble extracting the right information from resnet_v1_test.py; much of what happens in this script is unclear to me. Which functions are essential for passing a random image through the network? How can I access the weights and activations for further work?
What are the next steps from here? I would appreciate any help!
TL;DR: How can I use the resnet_v1_test.py script to perform classification and access weights and activations?
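For reference, this is a minimal sketch of what I believe the essential steps look like with the slim API, rather than anything taken from resnet_v1_test.py. It assumes resnet_v1.py (plus its resnet_utils.py dependency) is importable and that resnet_v1_50.ckpt sits in the working directory:

```python
import numpy as np
import tensorflow as tf
from tensorflow.contrib import slim

import resnet_v1  # the downloaded model definition (resnet_utils.py must be next to it)

# A random RGB image, batch of one, 224x224.
images = tf.placeholder(tf.float32, shape=[1, 224, 224, 3])

with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    # logits: the class scores; end_points: a dict of intermediate activations.
    # Depending on the slim version, is_training=False goes either here or
    # into resnet_arg_scope().
    logits, end_points = resnet_v1.resnet_v1_50(images, num_classes=1000)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'resnet_v1_50.ckpt')  # the downloaded weights
    img = np.random.rand(1, 224, 224, 3).astype(np.float32)
    scores, activations = sess.run([logits, end_points], feed_dict={images: img})
    # The weights themselves are ordinary variables and can be read the same way:
    first_kernel = sess.run(tf.trainable_variables()[0])
```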
I have a pre-trained image classification model saved in Caffe; the model expects grayscale (one-channel) images. I want to use this model in a tool that only provides RGB (three-channel) input to the model. It is not possible to change the way this tool provides images, so I thought of adding a layer before the input layer that transforms the input to a single channel. Is that possible in Caffe, and how?
I'm looking for a solution that doesn't require defining new layers in Caffe, if possible.
Note that I have the ".prototxt" and the ".weights" files of the model.
I previously did a similar thing in TensorFlow, but I don't know whether this is possible in Caffe, and I didn't find much material online.
You can add a Python layer to do it for you.
What is a Python layer?
An example of such a layer can be found here.
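As a rough illustration (not taken from the linked example), a Python layer that collapses the three input channels into one could look like the sketch below; the class name, module name, and luminance weights are made up for this answer:

```python
import caffe
import numpy as np

class RGBToGrayLayer(caffe.Layer):
    """Converts a 3-channel (RGB) bottom blob into a 1-channel (grayscale) top blob."""

    def setup(self, bottom, top):
        if len(bottom) != 1:
            raise Exception("Expected exactly one bottom blob (the RGB image).")

    def reshape(self, bottom, top):
        n, c, h, w = bottom[0].data.shape
        if c != 3:
            raise Exception("Expected a 3-channel input.")
        top[0].reshape(n, 1, h, w)

    def forward(self, bottom, top):
        r = bottom[0].data[:, 0]
        g = bottom[0].data[:, 1]
        b = bottom[0].data[:, 2]
        # Standard luminance weights; use a plain average instead if that is
        # what the original grayscale training data looked like.
        top[0].data[:, 0] = 0.299 * r + 0.587 * g + 0.114 * b

    def backward(self, top, propagate_down, bottom):
        pass  # nothing to learn, and gradients are not needed for inference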
I want to learn how to train a convolutional neural network in TensorFlow. Could you post a link with some code, for example a GitHub project? After I train the convolutional neural network on my own dataset, how do I save it as .pb or .pbtxt? Could you describe the procedure for doing that?
Well, I'm not sure whether you want code you can understand or just a model. If you want just the model, check out TensorFlow's retraining code. For full understanding, this link might be useful. The .pb file you're referring to is also called a saved model; see this link.
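For the .pb part specifically, here is a rough sketch of freezing a trained graph into a single file. The tiny stand-in network, the output node name, and the export directory are placeholders, not something prescribed by TensorFlow:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Stand-in network: a single dense layer; swap in your own trained model.
x = tf.placeholder(tf.float32, [None, 784], name='input')
logits = tf.layers.dense(x, 10, name='logits')
output = tf.nn.softmax(logits, name='softmax')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training happens here, or restore a checkpoint with tf.train.Saver ...

    # Convert the variables to constants so the graph file is self-contained.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['softmax'])

    # Binary .pb, or a human-readable .pbtxt with as_text=True.
    tf.train.write_graph(frozen, './export', 'frozen_model.pb', as_text=False)
    tf.train.write_graph(frozen, './export', 'model.pbtxt', as_text=True)
```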
I am writing a neural network in TensorFlow and I want to be able to export my final trained network and import it into another program to play a game. I have found multiple forum posts like:
Tensorflow: How to use a trained model in a application?
Tensorflow: how to save/restore a model?
I also saw in the TF documentation that they use Estimators to save the model, but I am not sure whether that is what I'm looking for, or how to apply it.
But those posts talk about exporting the entire session, importing it into the application, and using Session.run; as I understand it, that requires feeding the expected output and will run another training step on my network. I don't want to continue training my network - it's finished - I now only want to evaluate a specific state given to me by the game.
Thanks in advance for any help available.
As far as I know, there are two ways of doing it:
checkpoint files (MetaGraph)
SavedModel
SavedModel is very convenient, but its learning curve is steeper than that of checkpoint files. You can check this tutorial.
Also, importing a model does not continue the training; it basically restores all the variables you learned.
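For example, a minimal sketch of the checkpoint/MetaGraph route used purely for evaluation; the checkpoint path, tensor names, and input shape are assumptions about how the graph was built during training:

```python
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and load the learned weights.
    saver = tf.train.import_meta_graph('my_model.ckpt.meta')
    saver.restore(sess, 'my_model.ckpt')

    graph = tf.get_default_graph()
    state_in = graph.get_tensor_by_name('inputs:0')        # placeholder name used during training
    prediction = graph.get_tensor_by_name('prediction:0')  # output tensor name used during training

    # Stand-in for the state handed over by the game.
    game_state = np.zeros((1, 84), dtype=np.float32)

    # Fetching only the prediction tensor evaluates the network; as long as
    # the training op is never fetched, no weights are updated.
    result = sess.run(prediction, feed_dict={state_in: game_state})
```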
I want to train a CNN for 20,000 steps. At the 100th step I want to save all variables, and after that I want to re-run my code, restoring the model and starting from the 100th step. I am trying to make it work with the TensorFlow documentation: https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html but I can't. Any help?
I'm stuck on something similar, but maybe this link can help you. I'm new to TensorFlow, but I think you can't restore and fit without needing to train your model again.
This functionality is still unstable, and the documentation is outdated, so it is confusing. What worked for me (this was a suggestion from people at Google who work directly on TensorFlow) was to use the model_dir parameter in the constructor of my models before training; it tells TensorFlow where to store your model. After training, you just instantiate a model again using the same model_dir, and it will restore the model from the files and checkpoints that were generated.
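A rough sketch of that model_dir pattern with the Estimator API; the model_fn, input_fn, and directory name below are stand-ins, not your actual network:

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    # Placeholder model: a single dense layer over a feature named 'x'.
    logits = tf.layers.dense(features['x'], 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Checkpoints are written to model_dir automatically during training.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='./my_model_dir')
# estimator.train(input_fn=my_input_fn, steps=100)

# Later, even in a fresh run of the script, the same model_dir picks up the
# latest checkpoint and training resumes from that step.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='./my_model_dir')
# estimator.train(input_fn=my_input_fn, steps=19900)
```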
How can I load a .pb protobuf model and then tweak the network as needed (especially the outer layers) in order to train a new model for completely different classes? Effectively doing transfer learning?
I want to do things like this (i.e., train the outer layers with a larger learning rate than the inner layers), among other things, so I need a way not only to load the graph with its variables, but also to alter the network's structure and hyperparameters.
If anyone has an example to follow with the inception model, it would be amazing!
My question is very similar to this one.
I've searched all over the internet (TF docs, GitHub, StackOverflow, Google...) but I can't seem to find something useful for a novice.
Thanks a lot!
This is the updated tutorial from the official TensorFlow website: https://www.tensorflow.org/hub/tutorials/image_retraining
They use the pre-trained Inception V3 model and everything works fine. You can change the dataset folder to your own dataset.
tf.import_graph_def() is the function for loading a GraphDef:
https://www.tensorflow.org/versions/0.6.0/api_docs/python/framework.html#import_graph_def
Hopefully once imported, you can make the modifications to the graph you need. It would be easier, though, to modify the Python code that generated the graph in the first place, if you have access to that.
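For instance, a minimal sketch of loading a frozen .pb and stacking new, trainable layers on an intermediate tensor; the file name, the bottleneck tensor name, and the number of new classes are assumptions, so adjust them to your graph:

```python
import tensorflow as tf

# Read the serialized GraphDef from the .pb file.
with tf.gfile.GFile('classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    # name='' keeps the original node names instead of prefixing them with 'import/'.
    tf.import_graph_def(graph_def, name='')

    # Grab an intermediate tensor and add new layers on top of it for the new
    # classes; the imported nodes are constants, so only these new variables
    # would be trained.
    bottleneck = graph.get_tensor_by_name('pool_3/_reshape:0')
    new_logits = tf.layers.dense(bottleneck, 5, name='new_class_logits')
```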