I am trying to implement a Siamese neural network that outputs not only the similarity metric but is also able to classify the labels of each input in the pair. The inputs are semantic audio embeddings.
Actually I have two problems:
1: In a Siamese neural network, are the labels the "labels of the pair"? Is it possible to preserve the label of each single input too? In other words, is it possible to compute a loss function that combines the classifier loss with the similarity metric?
2: Do you think I should split the problem into two networks? That is, train the Siamese network first, then take its output embedding and feed it into a feed-forward network (saving the similarity metric and using it in the loss function of the second network)?
I hope I have explained the problem well, and that someone has a solution.
Mike
For the first question, you can have a look at https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses
There they give an example of how to write a custom loss function.
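As a rough sketch of how a combined loss could look for your case (the layer names, sizes, and loss weights below are assumptions, not something from your post): give the Siamese model three outputs, the pair distance plus one class prediction per branch, and let Keras combine the per-output losses with weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sizes for the semantic audio embeddings and the label set.
EMBED_DIM, NUM_CLASSES = 128, 10

# Shared encoder applied to both inputs of the pair.
encoder = tf.keras.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
])

in_a = layers.Input(shape=(EMBED_DIM,), name="input_a")
in_b = layers.Input(shape=(EMBED_DIM,), name="input_b")
feat_a, feat_b = encoder(in_a), encoder(in_b)

# Similarity head: squared distance between the two branch embeddings.
distance = layers.Lambda(
    lambda t: tf.reduce_sum(tf.square(t[0] - t[1]), axis=-1, keepdims=True),
    name="distance")([feat_a, feat_b])

# Classification heads: one label per input, sharing the same classifier.
classifier = layers.Dense(NUM_CLASSES, activation="softmax")
class_a = layers.Lambda(lambda x: x, name="class_a")(classifier(feat_a))
class_b = layers.Lambda(lambda x: x, name="class_b")(classifier(feat_b))

model = Model([in_a, in_b], [distance, class_a, class_b])

# Keras sums the weighted per-output losses, so the similarity metric and
# the two classifiers are trained jointly; the weights need tuning.
model.compile(
    optimizer="adam",
    loss={"distance": "mse",  # or a contrastive loss for the pair label
          "class_a": "sparse_categorical_crossentropy",
          "class_b": "sparse_categorical_crossentropy"},
    loss_weights={"distance": 1.0, "class_a": 0.5, "class_b": 0.5})
```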
I am implementing a feed-forward neural network for a specific clustering problem.
I'm not sure if it is possible or even makes sense, but the network consists of multiple layers followed by a clustering layer (say, k-means) used to calculate the clustering loss.
The NN layers act as a feature extractor, while the last layer is only used to calculate the loss (for example, by calculating the similarity score among different data points).
Actually, this network architecture is part of a bigger auto-encoder similar to what is discussed in this paper.
The question is: can I define a custom loss function in TensorFlow/Keras that receives the output of the NN and computes the clustering loss? And how?
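For what it's worth, a minimal sketch of what such a custom loss might look like, assuming the last NN layer outputs an embedding and the cluster centres live in a separate variable (all names and sizes here are hypothetical, not from the paper):

```python
import tensorflow as tf

# Hypothetical setup: the feature-extractor layers output an embedding of
# size EMBED_DIM, and the cluster centres are kept in a separate variable
# (e.g. re-estimated every few epochs by running k-means on the embeddings).
EMBED_DIM, N_CLUSTERS = 32, 10
centroids = tf.Variable(tf.random.normal([N_CLUSTERS, EMBED_DIM]),
                        trainable=False)

def clustering_loss(_y_true, y_pred):
    # y_pred: (batch, EMBED_DIM) embeddings from the last NN layer.
    # The y_true argument is ignored; dummy targets can be passed to fit().
    # Squared distance of every embedding to every centroid.
    dists = tf.reduce_sum(
        tf.square(tf.expand_dims(y_pred, 1) - centroids), axis=-1)
    # Penalise the distance to the nearest centroid (a k-means-style loss).
    return tf.reduce_mean(tf.reduce_min(dists, axis=-1))

# model.compile(optimizer="adam", loss=clustering_loss)
```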
When training neural networks for classification in TensorFlow/Keras or PyTorch, is it possible to put constraints on the weights in the output layer such that they are chosen from a specific finite feasible set?
For example, let's say W is the weight in the output layer. Is it possible to constrain W such that the optimal W is selected from the set S = {W_1, W_2, ..., W_n}, where each W_i is a given feasible value for W? That is, I will give the values W_1, ..., W_n to the model.
If this is not possible in TensorFlow or Pytorch, is there any other ways to achieve this?
Thanks!
The answer you might be looking for is something like setting the weights of a layer using set_weights in TensorFlow.
If that doesn't help your cause, then you can check TensorFlow's Custom Layers.
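As an illustration of the custom route, here is a hedged sketch using tf.keras.constraints.Constraint that snaps each weight entry to the nearest value in a fixed candidate set after every optimizer update. Note it treats the candidates as scalar values; if your W_i are whole matrices, you would instead pick the candidate matrix closest to the current W. The candidate values below are made up for illustration.

```python
import tensorflow as tf

class FiniteSetConstraint(tf.keras.constraints.Constraint):
    """After each update, snap every weight entry to the nearest value in a
    fixed set. The optimizer still takes continuous gradient steps; this
    constraint projects the result back onto the candidate values."""

    def __init__(self, candidates):
        self.candidates = tf.constant(candidates, dtype=tf.float32)  # shape (n,)

    def __call__(self, w):
        flat = tf.reshape(w, [-1, 1])            # (num_weights, 1)
        dists = tf.abs(flat - self.candidates)   # (num_weights, n)
        nearest = tf.gather(self.candidates, tf.argmin(dists, axis=-1))
        return tf.reshape(nearest, tf.shape(w))

# Usage (candidate values here are purely illustrative):
layer = tf.keras.layers.Dense(
    10, kernel_constraint=FiniteSetConstraint([-1.0, 0.0, 1.0]))
```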
I have a trained Siamese network Model in Keras, like this. This Model expects two inputs, calculates the distance between them, and generates a similarity score. But now I only want to extract the features and use some other techniques to find similar images. Since a Siamese network is basically a single (shared) CNN through which both images are passed, and I no longer have to calculate the similarity between two images, can I pass a single image at a time and get the features from the aforementioned trained CNN?
I tried intermediate_layer_model = Model(inputs=model.input[0], outputs=model.get_layer(layer_name).output) as mentioned here, but it throws a ValueError because the graph expects 2 inputs.
Adding screenshot of my model.layers
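One common way around this is to pull the shared branch out of the outer two-input model and use it on its own. A sketch under the assumption that the shared CNN sits inside your model as a nested Model (model and single_image_batch below stand for your trained Siamese model and your own data):

```python
import tensorflow as tf
from tensorflow.keras.models import Model

# In most Siamese setups the shared CNN is a nested Model inside the outer
# two-input model; check model.summary() / model.layers to find it.
base_cnn = next(layer for layer in model.layers if isinstance(layer, Model))

# base_cnn takes a single image and returns the feature embedding, so the
# "graph expects 2 inputs" problem goes away.
features = base_cnn.predict(single_image_batch)  # (batch, embed_dim)

# If the shared layers are instead defined directly in the outer model, you
# can rebuild a single-input feature extractor from a fresh Input and reuse
# the trained layers by name (layer names here are hypothetical):
# inp = tf.keras.Input(shape=(224, 224, 3))
# feat = model.get_layer("shared_conv_block")(inp)
# feature_model = Model(inp, feat)
```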
I'm trying to predict a time-series signal using a number of other time-series signals. For that purpose I'm using an LSTM network. The input signals are normalized, and so are the output signals. I'm using MSE loss and implementing it in TensorFlow.
The network gives fairly good predictions, but the output is very noisy. I want to make it smoother, as if a low-pass filter (LPF) were applied to the LSTM output.
The ideal solution for me would be some parameter I can change that filters more/fewer frequencies from the LSTM output.
How can I do that? I was thinking about trying to constrain the loss function somehow.
Thanks
I tried adding fully connected layers after the LSTM, batch normalization, and both single- and multi-layer LSTM networks.
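One thing you could try (an assumption on my part, not something from the original post) is a custom loss that adds a smoothness penalty on consecutive predicted samples to the MSE, with a single weight acting as the "how much filtering" knob:

```python
import tensorflow as tf

def make_smooth_mse(smooth_weight=0.1):
    # smooth_weight is the tunable knob: larger values push the network
    # towards smoother (lower-frequency) outputs at the cost of accuracy.
    def loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        # Penalise large sample-to-sample jumps along the time axis
        # (assumes y_pred has shape (batch, time_steps, features)).
        diffs = y_pred[:, 1:, :] - y_pred[:, :-1, :]
        smoothness = tf.reduce_mean(tf.square(diffs))
        return mse + smooth_weight * smoothness
    return loss

# model.compile(optimizer="adam", loss=make_smooth_mse(smooth_weight=0.1))
```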
Say we train a multilayer NN in TensorFlow for a regression task (i.e. the multi-input, multi-output case). Then we have new instances, we apply the trained model, and of course we get the corresponding outputs. Is there a way to backpropagate the outputs and reconstruct the inputs in TensorFlow in an easy/efficient manner? What I am thinking is to then use the difference between the original and the reconstructed inputs of the new instances as a QC measure, i.e. if the reconstructed inputs are not close enough to the originals, then we have a problem, etc. I hope I am making myself clear.
No, unfortunately you cannot take a trained model and recover the corresponding input. The reason for this is that there are infinitely many possible inputs for each output.
Furthermore, backpropagation does not pass an output backwards through the network. It is a way of determining to what extent each parameter in the model contributes to the loss function. It will not give you the inputs to the hidden layers, only the extent to which the weights affected your decision.