I'm trying to use a TensorFlow op inside a Keras model. I previously tried to wrap it in a Lambda layer, but I believe that disables the layer's backpropagation.
More specifically, I'm trying to use the layers from here in a Keras model, without porting them to Keras layers (I hope to deploy to TensorFlow later on). I can compile these layers into a shared library and load them into Python. That gives me TensorFlow ops, and I don't know how to combine those with a Keras model.
A simple Keras MNIST model in which, say, one Conv2D layer is replaced by a tf.nn.conv2d op would be exactly what I'm looking for.
I've seen this tutorial, but it appears to do the opposite of what I'm looking for: it inserts Keras layers into a TensorFlow graph. I'm looking to do the exact opposite.
Best regards,
Hans
Roughly two weeks have passed and it seems I am able to answer my own question now.
It turns out TensorFlow can look up gradients if you register them using this decorator. As of writing, this functionality is not (yet) available in C++, which is what I was looking for. A workaround is to define a normal op in C++ and wrap it in a Python function using the mentioned decorator. Once these functions and their corresponding gradients are registered with TensorFlow, backpropagation happens 'automagically'.
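For illustration, here is a minimal sketch of that workaround. I'm assuming the decorator in question is tf.RegisterGradient; the op name "MySquare" and the shared library path are hypothetical placeholders:

import tensorflow as tf
from tensorflow.python.framework import ops

# Hypothetical: load the op compiled from C++ into a shared library.
# my_module = tf.load_op_library('./my_square.so')

# Register a Python gradient for the C++ op. The string must match the
# op name registered on the C++ side (here hypothetically "MySquare").
@ops.RegisterGradient("MySquare")
def _my_square_grad(op, grad):
    x = op.inputs[0]
    return grad * 2.0 * x  # d/dx x^2 = 2x

# With the gradient registered, the op can be used inside a Keras model,
# e.g. via a Lambda layer, and backpropagation works automatically:
# model.add(tf.keras.layers.Lambda(lambda x: my_module.my_square(x)))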
I want to better understand how to make custom Keras layers, but I can't seem to find the following online: how to reproduce the basic (feed-forward) Keras layer using the custom-layer construction.
Where can I find this online, or how do I do it?
I ask because all the online resources I came across only use simpler layers as examples.
You need to subclass the tf.keras.layers.Layer class. The SimpleDense layer is given as an example here.
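For reference, a minimal sketch along the lines of that SimpleDense example, reproducing a basic feed-forward (Dense-like) layer:

import tensorflow as tf

class SimpleDense(tf.keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Create the weights lazily, once the input shape is known.
        self.w = self.add_weight(name='w',
                                 shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(name='b',
                                 shape=(self.units,),
                                 initializer='zeros',
                                 trainable=True)

    def call(self, inputs):
        # The feed-forward computation: outputs = inputs @ W + b
        return tf.matmul(inputs, self.w) + self.b

# Behaves like tf.keras.layers.Dense(units) without an activation:
layer = SimpleDense(4)
y = layer(tf.ones((2, 3)))  # builds the weights, output shape (2, 4)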
I want to rewrite some TensorFlow code in Keras. I wonder whether, for this purpose, you can simply replace tf.layers with tf.keras.layers?
Like
tf.layers.max_pooling2d()
to:
tf.keras.layers.max_pooling2d()
Can I rewrite TensorFlow code as Keras in this way?
Does this define a proper Keras model where you can use the model.fit method?
First of all, I think you mean tf.keras.layers.MaxPool2D, which is a class, not a function. If I get your point correctly, it shouldn't be an issue. There are some minor differences in syntax, but nothing serious. Besides, tf.keras.layers is a direct substitute for tf.layers. As per the official docs, tf.layers are wrappers around tf.keras.layers. For example, the convolutional layers in the Layers API inherit from tf.keras.layers:
@tf_export('layers.Conv1D')
class Conv1D(keras_layers.Conv1D, base.Layer):
  """1D convolution layer (e.g. temporal convolution).
Even more so, the Layers API is deprecated and will be removed in TF 2.0.
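To illustrate the substitution (a minimal sketch; the MNIST-shaped input and the training data are hypothetical), your tf.layers.max_pooling2d call becomes a tf.keras.layers.MaxPool2D layer inside a regular Keras model, and model.fit works as usual:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(pool_size=2, strides=2),  # replaces tf.layers.max_pooling2d
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=1)  # x_train: (n, 28, 28, 1), y_train: (n,)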
What is the difference between tf.keras.layers and tf.layers?
E.g. both of them have a Conv2D class; do they produce different outputs?
Are there any benefits to mixing them (something like a tf.keras.layers.Conv2D in one hidden layer and tf.layers.max_pooling2d in the next)?
Since TensorFlow 1.12, tf.layers are merely wrappers around tf.keras.layers.
A few examples:
Convolutional tf.layers just inherit from the convolutional tf.keras.layers, see source code here:
@tf_export('layers.Conv2D')
class Conv2D(keras_layers.Conv2D, base.Layer):
The same is true for all core tf.layers, e.g.:
@tf_export('layers.Dense')
class Dense(keras_layers.Dense, base.Layer):
With the integration of Keras into TensorFlow, it would make little sense to maintain several different layer implementations. tf.keras is becoming the de facto high-level API for TensorFlow; therefore, tf.layers are now just wrappers around tf.keras.layers.
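You can verify this relationship directly (a quick check, assuming TF 1.12 or later in the 1.x line):

import tensorflow as tf

# The Layers API classes are subclasses of the corresponding Keras
# classes, so both build the same underlying layer objects.
print(issubclass(tf.layers.Conv2D, tf.keras.layers.Conv2D))  # True
print(issubclass(tf.layers.Dense, tf.keras.layers.Dense))    # True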
tf.keras.layers.Conv2D is a TensorFlow-Keras layer, while tf.layers.max_pooling2d is a TensorFlow 'native layer'.
You cannot use a native layer directly within a Keras model, as it will be missing certain attributes required by the Keras API.
However, it is possible to use a native layer if it is wrapped in a TensorFlow-Keras Lambda layer. A link to the documentation for this is below.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda
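For example, a minimal sketch of such a wrapper (TF 1.x style):

import tensorflow as tf

# A native TF op wrapped in a Lambda layer:
pool = tf.keras.layers.Lambda(
    lambda x: tf.layers.max_pooling2d(x, pool_size=2, strides=2))
# `pool` can now be used like any other Keras layer inside a model.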
The tf.layers module is TensorFlow's attempt at creating a Keras-like API, whereas tf.keras.layers is a compatibility wrapper. In fact, most of the implementation refers back to tf.layers; for example, tf.keras.layers.Dense inherits the core implementation:
@tf_export('keras.layers.Dense')
class Dense(tf_core_layers.Dense, Layer):
  # ...
Because the tf.keras compatibility module is checked into the TensorFlow repo separately, it might lag behind what Keras actually offers. I would use Keras directly, or tf.layers, but not necessarily mix them.
I'm trying to train my LSTM model in TensorFlow, and my model has to calculate a parameter inside another parameter. I want to train both parameters together.
More details are in the picture below.
I think the TensorFlow LSTM module's input must be a complete sequence, with parameters fed in as something like tf.placeholder.
How can I do this in TensorFlow? Or can you recommend another framework better suited to this task?
Sorry for my poor English.
First of all, your usage of the word 'parameter' is quite confusing. Normally, 'parameters' refers to trainable parameters, i.e. every variable that is trained by the optimizer. There are also so-called hyperparameters, which have to be set by hand, e.g. the model topology.
TensorFlow works with tensors, which are representations of the data used to build the workflow; they are filled with data at run time via placeholders, which act as entry points for the data.
Also, if you have trouble building your model in TensorFlow, there is Keras. Keras can run with TensorFlow as its backend, but model building is much easier. Keras is also available in the TensorFlow API as tf.keras. In Keras, one or multiple LSTMs are simplified as a layer that can be added to your model.
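For instance, a minimal sketch (the input shape of 20 timesteps with 8 features is hypothetical):

import tensorflow as tf

model = tf.keras.Sequential([
    # One LSTM layer; all of its internal weights are trained together.
    tf.keras.layers.LSTM(64, input_shape=(20, 8)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(x, y)  # x: (batch, 20, 8), y: (batch, 1)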
If you like a more specific answer to your question, please provide code to describe your problem.
I am trying to create a CNN using Theano and TensorFlow. The layers (MLP, convolvemaxpool) are defined in separate classes, and now I am trying to build a CNN from these classes depending on the input provided by the user. The CNN should call the Theano or TensorFlow classes depending on the user input. Let's say both Theano and TensorFlow have classes MLP and convolvemaxpool implementing MLP and convolution respectively. My problem is how to call the classes depending on the input. I don't want to use if/else, since adding more libraries would mean more if/else statements for each class, which I don't think is the right solution for now. Any help will be appreciated. Thanks.
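One common way to avoid the if/else chain (a hypothetical sketch, not from the thread; the module paths are placeholders) is a registry that maps backend names to implementation modules:

import importlib

BACKENDS = {}  # backend name -> module providing MLP, convolvemaxpool

def register_backend(name, module_path):
    # Supporting a new library is one registration call, not a new if branch.
    BACKENDS[name] = importlib.import_module(module_path)

def get_layer_class(backend, class_name):
    return getattr(BACKENDS[backend], class_name)

# register_backend('theano', 'layers.theano_impl')      # hypothetical modules
# register_backend('tensorflow', 'layers.tf_impl')
# MLP = get_layer_class(user_choice, 'MLP')              # user_choice: e.g. 'theano'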