I want to better understand how to make custom Keras layers, but I can't seem to find the following online: I'm looking to learn how to reproduce the basic (feed-forward) Keras layer using the custom layers construction.
Where can I find this online or how do I do it?
I ask because all of the online resources I came across only use simpler layers as examples.
You need to subclass the tf.keras.layers.Layer class. The SimpleDense layer is given as an example here.
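For reference, here is a minimal sketch of such a layer, essentially the SimpleDense example from the Keras subclassing guide (the optional activation argument is added here for illustration):

import tensorflow as tf

class SimpleDense(tf.keras.layers.Layer):
    """A from-scratch reproduction of a basic feed-forward (Dense) layer."""

    def __init__(self, units=32, activation=None):
        super().__init__()
        self.units = units
        # activations.get(None) falls back to the identity (linear) activation.
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # Weights are created lazily, once the input shape is known.
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer='random_normal',
            trainable=True)
        self.b = self.add_weight(
            shape=(self.units,),
            initializer='zeros',
            trainable=True)

    def call(self, inputs):
        # The classic feed-forward computation: activation(x @ W + b).
        return self.activation(tf.matmul(inputs, self.w) + self.b)

# Used like any built-in layer, e.g. tf.keras.Sequential([SimpleDense(64, activation='relu'), ...])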
I have a model in Keras and I want to use it as a feature extractor in an autoencoder configuration. The first test I want to do is to mirror the model, using the original version as the encoder and the mirrored version as the decoder.
e.g. if the blocks in the original model go
input->A->B->C->output
I want to get
input->A->B->C->(latent space)->C->B->A->reconstructed_input
However, the original model is not small and I am quite new to Keras. The full model is built from a number of defined blocks. What I am confused about is the order I have to follow when I reverse things. There are several convolutional, reshaping, and dense layers, as well as skip connections, and I am not sure about the strategy I should follow in each individual case. I know for a fact that I should replace Conv2D layers with Conv2DTranspose to upscale the feature maps back to the original image size, but that's about it. I am baffled by the rest.
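To make this concrete, here is a rough sketch of the kind of mirroring I have in mind (the shapes and layer sizes are just placeholders, not the real model):

import tensorflow as tf
from tensorflow.keras import layers

# Encoder: the original blocks A -> B -> C
inputs = tf.keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inputs)  # A
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)       # B
x = layers.Flatten()(x)
latent = layers.Dense(64, activation='relu')(x)                                  # C -> latent space

# Decoder: the same blocks, mirrored in reverse order (C -> B -> A)
x = layers.Dense(16 * 16 * 32, activation='relu')(latent)                        # mirrors C
x = layers.Reshape((16, 16, 32))(x)                                              # undoes Flatten
x = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)          # mirrors B
outputs = layers.Conv2DTranspose(1, 3, strides=2, padding='same', activation='sigmoid')(x)  # mirrors A

autoencoder = tf.keras.Model(inputs, outputs)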
Does anyone have any recommendations or resources I could follow to make my life easier?
Cheers!
I'm learning to use PyTorch. If anyone is familiar with PyTorch, could they tell me whether all models can be written as nn.Sequential?
I'm asking because some framework features only accept a model defined as nn.Sequential as input.
Yes, but only in the sense that you could wrap a highly complex model as a single step in nn.Sequential. If you want an example of a model that breaks sequential behavior, look up ResNet and its ilk. These require data to skip over groups of layers; however, even those can be implemented using nn.Sequential by creating special nn.Module classes to handle the residual functionality, and stacking those blocks together in nn.Sequential.
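A minimal sketch of that approach (the block layout is illustrative, not an actual ResNet):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Wraps a small stack of layers and adds the skip connection itself."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # The residual addition is what a plain nn.Sequential cannot express on its own.
        return torch.relu(x + self.body(x))

# Because each block is just an nn.Module, the overall model can still be nn.Sequential.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    ResidualBlock(16),
    ResidualBlock(16),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)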
Hi, I'm kind of new to the world of machine learning, but I do know that with sufficient training one can extract a layer whose learned values are optimized for a task.
What do the dense, pooling, batch normalization, etc. entries in the model summary mean?
I've worked a little bit with Keras Sequential, and there are some layer names I need explained in simple terms. There are layers like dense, pooling, etc., and I don't know what they mean.
Ultimately, I want to find out which layer type contains those learned values, and maybe create an embedding by fully connecting a layer (I'm not sure how this works).
Furthermore, where in the model should I extract this layer from? I assume it is towards the last parts...
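To show roughly what I imagine (I'm not sure this is the right way to do it):

import tensorflow as tf

# A small made-up model, just to have something concrete.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),   # is this where the "embedding" would live?
    tf.keras.layers.Dense(10, activation='softmax'),
])

# After training, take the output of the second-to-last (dense) layer as the features.
feature_extractor = tf.keras.Model(inputs=model.inputs, outputs=model.layers[-2].output)
# embeddings = feature_extractor(some_images)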
Thanks in advance.
I want to rewrite TensorFlow code into Keras. I just wonder whether you can use tf.keras.layers for this purpose, to simply replace tf.layers?
Like
tf.layers.max_pooling2d()
to:
tf.keras.layers.max_pooling2d()
Can I re-write TensorFlow to Keras in this way?
Does this define a proper Keras model where you can use the model.fit method?
First of all, I think you meant tf.keras.layers.MaxPool2D, which is a class, not a function. If I got your point, it shouldn't be an issue. There are some minor differences in syntax, but nothing serious. Besides, tf.keras.layers is a direct substitute for tf.layers. As per the official docs, tf.layers are wrappers around tf.keras.layers. For example, convolutional layers in the Layers API inherit from tf.keras.layers.
@tf_export('layers.Conv1D')
class Conv1D(keras_layers.Conv1D, base.Layer):
"""1D convolution layer (e.g. temporal convolution).
Even more so, the Layers API is deprecated and will be removed in TF 2.0.
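As a rough sketch of the substitution (the layer arguments and model here are illustrative, assuming TF 1.x code being ported):

import tensorflow as tf

# tf.layers style (deprecated), a function applied directly to a tensor:
#   pooled = tf.layers.max_pooling2d(inputs, pool_size=2, strides=2)
# tf.keras.layers style: instantiate the layer class, then call it on a tensor,
# or place it directly in a model definition.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(x_train, y_train) works once you have data, so yes, this is a proper Keras model.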
I'm trying to use a TensorFlow op inside a Keras model. I previously tried to wrap it with a Lambda layer, but I believe this disables that layer's backpropagation.
More specifically, I'm trying to use the layers from here in a Keras model, without porting them to Keras layers (I hope to deploy to TensorFlow later on). I can compile these layers as a shared library and load it into Python. This gives me TensorFlow ops, and I don't know how to combine them in a Keras model.
A simple example of a Keras MNIST model, where for example one Conv2D layer is replaced by a tf.nn.conv2d op, would be exactly what I'm looking for.
I've seen this tutorial but it appears to do the opposite of what I am looking for. It seems to insert Keras layers into a tensorflow graph. I'm looking to do the exact opposite.
Best regards,
Hans
Roughly two weeks have passed and it seems I am able to answer my own question now.
It seems like TensorFlow can look up gradients if you register them using this decorator. As of writing, this functionality is not (yet) available in C++, which is what I was looking for. A workaround is to define a normal op in C++ and wrap it in a Python method using the mentioned decorator. If these functions with corresponding gradients are registered with TensorFlow, backpropagation will happen 'automagically'.
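For what it's worth, here is a rough sketch of that pattern, assuming the decorator in question is tf.RegisterGradient and using a hypothetical op name and library path:

import tensorflow as tf

# Hypothetical: the custom op compiled into a shared library and loaded into Python.
custom_ops = tf.load_op_library('./my_custom_op.so')  # placeholder path

# Register a Python-side gradient for the op's registered type name
# ("MyCustomOp" is an assumed name). Once this is in place, TensorFlow can
# backpropagate through the op automatically.
@tf.RegisterGradient("MyCustomOp")
def _my_custom_op_grad(op, grad):
    # Must return one gradient tensor per input of the op;
    # the pass-through here is only a placeholder.
    return [grad]

# The op can then be used inside a Keras model, e.g. via a Lambda layer.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Lambda(lambda t: custom_ops.my_custom_op(t))(inputs)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)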