Trying to create a CNN using Theano and Tensorflow - python

I am trying to create a CNN using Theano and TensorFlow. The layers (MLP, convolvemaxpool) are defined in separate classes, and I now want to assemble a CNN from these classes depending on the input provided by the user: the CNN should call the Theano or TensorFlow classes based on the user's choice. Let's say both the Theano and TensorFlow backends have classes MLP and convolvemaxpool implementing the MLP and convolution layers respectively. My problem is how to call the right classes depending on the input. I don't want to use if/else, since adding more libraries would mean another if/else branch for each class, which I don't think is the right solution. Any help will be appreciated. Thanks.
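One common way to avoid per-backend if/else chains is a registry (factory) that maps a backend name to a module; adding a new library then means adding one dictionary entry. A minimal sketch, where the module paths and class names are assumptions based on the question:

```python
import importlib

# Map a user-supplied backend name to a module path. Each backend module
# (e.g. backends/theano_backend.py, backends/tensorflow_backend.py) is
# assumed to expose classes with the same names: MLP, ConvolveMaxpool.
BACKENDS = {
    "theano": "backends.theano_backend",
    "tensorflow": "backends.tensorflow_backend",
}

def get_backend(name):
    """Import and return the backend module for the given name."""
    try:
        return importlib.import_module(BACKENDS[name])
    except KeyError:
        raise ValueError(f"Unknown backend: {name!r}")

# Usage: no if/else per class; both backends expose the same interface.
backend = get_backend("tensorflow")
mlp = backend.MLP(...)              # hypothetical constructor arguments
conv = backend.ConvolveMaxpool(...)
```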

Related

Project organization with Tensorflow.keras. Should one subclass tf.keras.Model?

I'm using Tensorflow 1.14 and the tf.keras API to build a number (>10) of different neural networks. (I'm also interested in answers to this question for Tensorflow 2.) I'm wondering how I should organize my project.
I convert the Keras models into estimators using tf.keras.estimator.model_to_estimator and use Tensorboard for visualization. I also sometimes use model.summary(). Each of my models has a number (>20) of hyperparameters and takes one of three types of input data. I sometimes use hyperparameter optimization, so I often manually delete models and call tf.keras.backend.clear_session() before trying the next set of hyperparameters.
Currently I'm using functions that take hyperparameters as arguments and return the respective compiled keras model to be turned into an estimator. I use three different "Main_Datatype.py" scripts to train models for the three different input data types. All data is loaded from .tfrecord files and there is an input function for each data type, which is used by all estimators taking that type of data as input. I switch between models (i.e. functions returning a model) in the Main scripts. I also have some building blocks that are part of more than one model, for which I use helper functions returning them, piecing together the final result using the Keras functional API.
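For concreteness, a minimal sketch of that pattern (the hyperparameter names and architecture here are made up for illustration):

```python
import tensorflow as tf

def build_model(learning_rate, num_units, dropout_rate):
    """Hypothetical model-building function: takes hyperparameters,
    returns a compiled tf.keras model built with the functional API."""
    inputs = tf.keras.Input(shape=(64,))
    x = tf.keras.layers.Dense(num_units, activation="relu")(inputs)
    x = tf.keras.layers.Dropout(dropout_rate)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate),
        loss="binary_crossentropy",
    )
    return model

model = build_model(learning_rate=1e-3, num_units=32, dropout_rate=0.5)
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```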
The slight incompatibilities of the different models are beginning to confuse me and I've decided to organize the project using classes. I'm planning to make a class for each model that keeps track of hyperparameters and the correct naming of each model and its model directory. However, I'm wondering if there are established or recommended ways to do this in Tensorflow.
Question: Should I be subclassing tf.keras.Model instead of using functions to build models, or Python classes that encapsulate them? Would subclassing keras.Model break (or require much work to enable) any of the functionality that I use with Keras estimators and Tensorboard? I've seen many issues people have with custom Model classes and am somewhat reluctant to put in the work only to find that it doesn't work for me. Do you have other suggestions for how to better organize my project?
Thank you very much in advance.
Subclass only if you absolutely need to. I personally prefer the following order of implementation; if the complexity of the model you are designing cannot be achieved with the first two options, then of course subclassing is the only option left (see the sketch after this list).
tf.keras Sequential API
tf.keras Functional API
Subclass tf.keras.Model
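For illustration, here is the same tiny model in all three styles (a minimal sketch; the layer sizes are arbitrary):

```python
import tensorflow as tf

# 1. Sequential API: a linear stack of layers.
sequential = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])

# 2. Functional API: an explicit graph of layers; supports multiple
#    inputs/outputs and shared layers.
inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.Dense(32, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1)(x)
functional = tf.keras.Model(inputs, outputs)

# 3. Subclassing: full control over the forward pass in call().
class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(32, activation="relu")
        self.dense2 = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

subclassed = MyModel()
```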
Seems like a reasonable thing to do. See the custom layers and models guide (https://www.tensorflow.org/guide/keras/custom_layers_and_models) and the tf.keras.Model API docs (https://www.tensorflow.org/api_docs/python/tf/keras/Model).

Tensorflow: How to load a pre-trained ResNet model

I want to use a pre-trained ResNet model which Tensorflow provides here.
First I downloaded the code (resnet_v1.py) to reconstruct the model's graph here. The model's weights (resnet_v1_50.ckpt) can be found on the same page here.
The model can be tested using the following script (resnet_v1_test.py) from here. However, I am having trouble extracting the right information from resnet_v1_test.py; there is a lot happening in this script that I don't understand. Which are the essential functions for passing a random image through the network? How can I access the weights and activations for further work?
What are the next steps from here? I would appreciate any help!
TL;DR: How can I use the resnet_v1_test.py script to perform classification and access weights and activations?
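As a starting point, the usual pattern with the slim ResNet code looks roughly like the sketch below (untested; based on the tf.contrib.slim API, and the end_points key used here is an assumption):

```python
import numpy as np
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import resnet_v1

# Build the ResNet-50 graph in inference mode.
images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
with slim.arg_scope(resnet_v1.resnet_arg_scope()):
    logits, end_points = resnet_v1.resnet_v1_50(
        images, num_classes=1000, is_training=False)

# Restore the pre-trained weights and run a random image through.
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'resnet_v1_50.ckpt')
    batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
    # end_points maps layer names to activation tensors, e.g.
    # 'resnet_v1_50/block4'; the weights are in tf.global_variables().
    preds, acts = sess.run(
        [logits, end_points['resnet_v1_50/block4']],
        feed_dict={images: batch})
```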

Custom NN architecture using Pytorch

I am trying to make a custom CNN architecture using Pytorch. I want about the same level of control as I would get if I built the architecture using NumPy only.
I am new to Pytorch and would like to see some code samples of CNNs implemented without the nn.Module class, if possible.
You have to implement the backward() function in your custom class. However, from your question it is not clear whether:
you just need a new series of CNN blocks, in which case you are better off using nn.Module and something like nn.Sequential(nn.Conv2d(...)), or
you just need gradient descent with the backward computation under your own control, as in https://github.com/jcjohnson/pytorch-examples#pytorch-autograd (see the sketch below).
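In the second case, a minimal sketch of a convolutional forward pass without nn.Module, using raw tensors and autograd (the shapes and learning rate are arbitrary):

```python
import torch
import torch.nn.functional as F

# Parameters as plain tensors; autograd tracks them via requires_grad.
w1 = torch.randn(8, 1, 3, 3, requires_grad=True)        # conv filters
w2 = torch.randn(10, 8 * 13 * 13, requires_grad=True)   # linear weights

x = torch.randn(4, 1, 28, 28)        # dummy batch of images
target = torch.randint(0, 10, (4,))  # dummy labels
lr = 1e-2

for step in range(100):
    # Forward pass built from functional ops; no nn.Module involved.
    h = F.relu(F.conv2d(x, w1))         # -> (4, 8, 26, 26)
    h = F.max_pool2d(h, 2)              # -> (4, 8, 13, 13)
    logits = h.reshape(4, -1) @ w2.t()  # -> (4, 10)
    loss = F.cross_entropy(logits, target)

    loss.backward()           # autograd computes all gradients
    with torch.no_grad():     # manual SGD update
        w1 -= lr * w1.grad
        w2 -= lr * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
```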

Tensorflow LSTM model parameter learning inside parameter

I'm trying to train my LSTM model in TensorFlow, and my model has to calculate a parameter inside another parameter. I want to train both parameters together.
More details are in the picture below.
As far as I understand, the TensorFlow LSTM module's input must be a complete sequence, fed in through something like tf.placeholder.
How can I do this in TensorFlow? Or can you recommend another framework better suited to this task?
Sorry for my poor English.
First of all, your usage of the word "parameter" is quite confusing. Normally, parameters refer to the trainable parameters, i.e. every variable that is trained by the optimizer. There are also so-called hyper-parameters, which have to be set by hand, e.g. the model topology.
TensorFlow works with tensors, which are representations of data used to build the workflow; they are filled with data at run time via placeholders, which act as entry points for the data.
If you have trouble building your model in TensorFlow, there is also Keras. Keras can run with TensorFlow as its backend, but model building is much easier, and it is available directly in the TensorFlow API as tf.keras. In Keras, one or more LSTMs are simplified to a layer that can be added to your model.
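For illustration, a minimal sketch of an LSTM in tf.keras (the sequence length, feature count, and units are arbitrary):

```python
import tensorflow as tf

# A minimal sequence model: 20 timesteps, 8 features per step.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 8)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```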
If you would like a more specific answer to your question, please provide code describing your problem.

Tensorflow op in Keras model

I'm trying to use a TensorFlow op inside a Keras model. I previously tried to wrap it with a Lambda layer, but I believe this disables that layer's backpropagation.
More specifically, I'm trying to use the layers from here in a Keras model, without porting them to Keras layers (I hope to deploy to TensorFlow later on). I can compile these layers into a shared library and load them into Python. This gives me TensorFlow ops, and I don't know how to combine these into a Keras model.
A simple example of a Keras MNIST model, where for example one Conv2D layer is replaced by a tf.nn.conv2d op, would be exactly what I'm looking for.
I've seen this tutorial but it appears to do the opposite of what I am looking for. It seems to insert Keras layers into a tensorflow graph. I'm looking to do the exact opposite.
Best regards,
Hans
Roughly two weeks have passed and it seems I am able to answer my own question now.
It seems like TensorFlow can look up gradients if you register them using this decorator. As of writing, this functionality is not (yet) available in C++, which is what I was looking for. A workaround is to define a normal op in C++ and wrap it in a Python method using the mentioned decorator. If these functions are registered with TensorFlow together with their corresponding gradients, backpropagation will happen 'automagically'.
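For illustration, a sketch of that workaround (the op name, shared-library path, and gradient formula here are all hypothetical):

```python
import tensorflow as tf
from tensorflow.python.framework import ops

# Load the custom op compiled as a shared library (hypothetical path).
custom_module = tf.load_op_library('./my_custom_op.so')

# Register a Python gradient for the (hypothetical) C++ op "MyCustomOp".
@ops.RegisterGradient("MyCustomOp")
def _my_custom_op_grad(op, grad):
    # op.inputs holds the forward-pass inputs; return one gradient per
    # input. Here we pretend the op behaves like identity.
    return [grad]

def my_custom_layer(x):
    # Wrap the raw op so it can be used e.g. inside a Keras Lambda layer.
    return custom_module.my_custom_op(x)
```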
