I'm trying to implement this paper in Keras, with a TensorFlow backend.
In my understanding, they progressively grow a GAN, fading in additional blocks of layers as the model is trained. New layers are faded in linearly over iterations.
I'm not sure how to introduce the "fading in" they used.
To do this in Keras, I'm thinking I'll probably need a Lambda layer -- but that's about as much as I know.
Any suggestions?
Thanks!
I think you should use the Keras Functional API. That way you can rearrange the inputs and outputs of each layer and interconnect them the way you want, while the layers keep the weights they have learned. You will have to use a few nested if-statements, but I think it should work.
Or you could have all the models prepared in advance (functions that return the model architecture and can set layer weights) and just populate the layers of the new model with the weights from the corresponding layers of the old one.
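For the fade-in itself, one option (my own sketch, not code from the paper) is a small merge layer that blends the output of the old path and the new path with a weight alpha that you ramp linearly from 0 to 1 during training, rather than a plain Lambda:

from keras.layers import Add
from keras import backend as K

class WeightedSum(Add):
    # blends two tensors of identical shape: (1 - alpha) * old + alpha * new
    def __init__(self, alpha=0.0, **kwargs):
        super(WeightedSum, self).__init__(**kwargs)
        self.alpha = K.variable(alpha, name='ws_alpha')

    def _merge_function(self, inputs):
        old, new = inputs
        return (1.0 - self.alpha) * old + self.alpha * new

# while training, ramp alpha linearly over the fade-in iterations, e.g.:
# K.set_value(fade_layer.alpha, current_step / float(total_fade_steps))

You would build the grown model with the Functional API, feed both the old block's output and the new block's output into this layer, and update alpha every iteration.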
I have a model in keras and I want to use it as a feature extractor in an autoencoder configuration. The first test I want to do is to mirror the model and use the original version as the encoder, and the second one as the decoder.
e.g. if the blocks of the original model go
input->A->B->C->output
I want to get
input->A->B->C->(latent space)->C->B->A->reconstructed_input
However, the original model is not small, and I am quite new to Keras. It has a number of blocks from which the total model is built. What I am confused about is the order I have to follow when I reverse things. There are several convolutional, reshaping, and dense layers, as well as skip connections, and I am not sure about the strategy I should follow for each individual case. I know for a fact that I should replace Conv2D layers with Conv2DTranspose to upscale the feature maps back to the original image, but that's about it. I am baffled by the rest.
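To make it concrete, here is a toy version of the kind of mirroring I mean (the real model is much bigger and has skip connections; the layer sizes here are just placeholders I made up):

from tensorflow.keras import layers, Model

inp = layers.Input(shape=(32, 32, 3))
# encoder: the original model's blocks A -> B -> C
x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inp)       # A
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)         # B
latent = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)    # C -> latent space
# decoder: the same blocks mirrored, with Conv2D replaced by Conv2DTranspose
y = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(latent)  # C'
y = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(y)       # B'
out = layers.Conv2DTranspose(3, 3, strides=2, padding='same', activation='sigmoid')(y)   # A' -> reconstructed input
autoencoder = Model(inp, out)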
Does anyone have any recommendations or resources I could follow to make my life easier?
Cheers!
Hi, I'm kind of new to the world of machine learning, but I do know that with sufficient training one can extract a layer whose learned values are optimized for a task.
What do all the dense, pooling, batch normalization, etc. entries mean in the model summary?
I've worked a little bit with Keras Sequential, and there are some layer names I need simplified. There are layers like Dense, pooling, etc., and I don't know what they mean.
Ultimately, I want to find out which layer type contains those learned values, and maybe create an embedding by adding a fully connected layer (I'm not sure how this works).
Furthermore, where in the model should I extract this layer from? I assume it is towards the last part...
Thanks in advance.
I'm currently studying TensorFlow 2.0 and Keras. I know that activation functions are used to calculate the output of each layer of a neural network, based on mathematical functions. However, when searching about layers, I can't find concise and easy-to-read information for a beginner in deep learning.
There's the Keras documentation, but I would like to know, concisely:
What are the most common layers used to create a model (Dense, Flatten, MaxPooling2D, Dropout, ...)?
In which cases should each of them be used (classification, regression, other)?
What is the appropriate way to use each layer in each case?
Depending on the problem you want to solve, there are different activation functions and loss functions that you can use.
Regression problem: You want to predict the price of a building, and you have N features. Since the price of the building is a real number, a natural setup is mean_squared_error as the loss function and a linear activation for your last node. In this case, you can have a couple of Dense() layers with relu activation, while your last layer is Dense(1, activation='linear').
In between the Dense() layers, you can add Dropout() to mitigate overfitting (if present). A minimal sketch of such a regression model follows (my own toy example; the layer sizes and the number of features are arbitrary).
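from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

n_features = 10  # the N features describing the building
model = Sequential([
    Dense(64, activation='relu', input_shape=(n_features,)),
    Dropout(0.2),                   # mitigates overfitting, if present
    Dense(32, activation='relu'),
    Dense(1, activation='linear'),  # one real-valued output: the price
])
model.compile(optimizer='adam', loss='mean_squared_error')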
Classification problem: You want to detect whether or not someone is diabetic, taking several factors/features into account. In this case, you can again use stacked Dense() layers, but your last layer will be Dense(1, activation='sigmoid'), since you want to predict whether a patient is diabetic or not. The loss function in this case is 'binary_crossentropy'. In between the Dense() layers, you can add Dropout() to mitigate overfitting (if present). The corresponding sketch for this binary classification case (again, sizes are arbitrary):
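from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

n_features = 8  # patient features
model = Sequential([
    Dense(32, activation='relu', input_shape=(n_features,)),
    Dropout(0.2),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),  # probability that the patient is diabetic
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])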
Image processing problems: Here you will typically have stacks of [Conv2D(), MaxPooling2D(), Dropout()]. MaxPooling2D is an operation that is typical for image processing and also for some natural language processing (not going to expand on that here). Sometimes, in convolutional neural network architectures, a Flatten() layer is used. Its purpose is to collapse the feature maps into a 1D vector whose length equals the total number of elements across the whole feature-map volume. For example, if you had a matrix of shape [28, 28], flattening it would reduce it to (1, 784), where 784 = 28*28. A typical toy stack along those lines follows (assuming 28x28 grayscale inputs and 10 classes).
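from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),                        # feature maps -> 1D vector
    Dense(10, activation='softmax'),  # class probabilities
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])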
Although the question is quite broad and maybe some people will vote to close it, I tried to give you a short overview of what you asked. I recommend that you start by learning the basics behind neural networks and then delve deeper into using a framework such as TensorFlow or PyTorch.
I would like to design a neural network for a multi-task deep learning task. Within the Keras API we can use either the "Sequential" or the "Functional" approach. Below is the code I used to build a network with two outputs using both approaches:
Sequential
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seq_model = Sequential()
seq_model.add(LSTM(32, input_shape=(10,2)))
seq_model.add(Dense(8))
seq_model.add(Dense(2))
seq_model.summary()
Functional
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

input1 = Input(shape=(10,2))
lay1 = LSTM(32, input_shape=(10,2))(input1)
lay2 = Dense(8)(lay1)
out1 = Dense(1)(lay2)  # first output head
out2 = Dense(1)(lay2)  # second output head, sharing lay2
func_model = Model(inputs=input1, outputs=[out1, out2])
func_model.summary()
When I look at the summary outputs for both models, each contains an identical number of trainable params:
Up until now, this looks fine - however, I start doubting myself when I plot both models (using keras.utils.plot_model), which results in the following graphs:
Personally, I do not know how to interpret these. When using a multi-task learning approach, I want all neurons (in my case 8) of the layer before the output layer to connect to both output neurons. This is clear to me in the Functional API version (where I have two Dense(1) instances), but it is not so clear from the Sequential API version. Nevertheless, the number of trainable params is identical, suggesting that in the Sequential API, too, the last layer is fully connected to both neurons of the Dense output layer.
Could anybody explain to me the differences between those two examples, or are those fully identical and result in the same neural network architecture? Also, which one would be preferred in this case?
Thank you a lot in advance.
The difference between the Sequential and Functional Keras APIs:
The Sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.
The Functional API allows you to create models with a lot more flexibility, as you can easily define models where layers connect to more than just the previous and next layers. In fact, you can connect layers to (literally) any other layer. As a result, creating complex networks such as siamese networks and residual networks becomes possible.
To answer your question:
No, these APIs are not the same, but it is normal that the number of trainable parameters is the same.
Which one to use? It depends on what you want to do with this network. What are you training it for? What do you want the output to be?
I recommend this link to get the most out of the concept.
Sequential Models & Functional Models
I hope I helped you understand better.
Both models are (in theory) equivalent, as the two output nodes do not have any interaction between them.
It is just that the required outputs have a different shape, [(batch_size, 2)] vs [(batch_size,), (batch_size,)], and thus the loss will be different.
The total loss is averaged for the sequential model in this example, whereas it is summed up for the functional model with two outputs (at least with a default loss such as MSE).
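For example (my own sketch, reusing the models from the question), you can make the two-output functional model reproduce the averaging behaviour of the single Dense(2) output by setting loss_weights when compiling:

# two-output functional model: total loss = 0.5 * mse(out1) + 0.5 * mse(out2)
func_model.compile(optimizer='adam', loss='mse', loss_weights=[0.5, 0.5])

# single-output sequential model: mse is already averaged over the 2 output units
seq_model.compile(optimizer='adam', loss='mse')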
Of course, you can also adapt the functional model to be exactly equivalent to the sequential model:
out1 = Dense(2)(lay2)
#out2 = Dense(1)(lay2)
func_model = Model(inputs=input1, outputs=out1)
Maybe you will also need some activations after the Dense layers.
Both networks are functionally equivalent. Dense layers are fully connected by definition; that is the most basic design assumed for "normal" neural networks when nothing else is specified. The exact learned parameters and behaviour may vary slightly between implementations. The graph is ambiguous only because it does not draw the individual neuron connections (which may number in the millions); instead it gives a symbolic representation of the connectivity via the layer name (Dense), which in this case indicates a fully connected layer.
I expect that the sequential model (or the equivalent functional model using one Dense output layer with two neurons) would be faster because it can use a simplified optimization path, but I have not tested this, and I have no knowledge of the compile-time optimizations performed by TensorFlow.
TensorFlow's DropoutWrapper allows you to apply dropout to either the cell's inputs, outputs, or states. However, I haven't seen an option to do the same thing for the recurrent weights of the cell (4 out of the 8 different matrices used in the original LSTM formulation). I just wanted to check that this is the case before implementing a wrapper of my own.
EDIT:
Apparently this functionality has been added in newer versions (my original comment referred to v1.4): https://github.com/tensorflow/tensorflow/issues/13103
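For anyone who lands here, a minimal sketch of the relevant arguments in the TF 1.x API (the keep probabilities are arbitrary; variational_recurrent=True needs input_size and dtype so the same mask can be reused at every time step):

import tensorflow as tf

cell = tf.nn.rnn_cell.LSTMCell(num_units=128)
cell = tf.nn.rnn_cell.DropoutWrapper(
    cell,
    input_keep_prob=0.8,
    output_keep_prob=0.8,
    state_keep_prob=0.8,          # dropout on the recurrent state as well
    variational_recurrent=True,   # reuse the same dropout mask at every time step
    input_size=64,                # dimensionality of the inputs fed to the cell
    dtype=tf.float32)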
It's because the original dropout scheme for LSTMs only applies dropout to the input and output connections (i.e., only to the non-recurrent connections). This paper is considered the "textbook" description of LSTM with dropout: https://arxiv.org/pdf/1409.2329.pdf
Recently, some people have tried applying dropout in the recurrent layers as well. If you want to look at the implementation and the math behind it, search for "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks" by Yarin Gal. I'm not sure whether TensorFlow or Keras has already implemented this approach, though.
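For what it's worth, Keras recurrent layers do expose a recurrent_dropout argument, which as far as I can tell applies dropout to the recurrent connections in the spirit of Gal's approach (double-check the docs for your version):

from tensorflow.keras.layers import LSTM

layer = LSTM(64,
             dropout=0.2,            # dropout on the input connections
             recurrent_dropout=0.2)  # dropout on the recurrent connections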