How to evaluate a Gaussian Process Latent Variable Model - python

I am following a tutorial on the Gaussian Process Latent Variable Model; here is the link: https://pyro.ai/examples/gplvm.html
It is a dimension-reduction method.
Now I want to evaluate the model and compute an accuracy score and a confusion matrix. Is it possible to do so?

I think I have found my answer: I have to train a separate model that takes the transformed (dimension-reduced) data as input. After training that model, I can evaluate it with the usual classification metrics, as sketched below.
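A minimal sketch of that idea, assuming X_latent holds the latent coordinates learned by the GPLVM (e.g. gplvm.X_loc.detach().numpy() after training, as in the tutorial) and y holds the matching class labels; the logistic regression is just an illustrative downstream model:

    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    # Split the dimension-reduced data, train a classifier on it, and
    # evaluate that classifier with standard classification metrics.
    X_train, X_test, y_train, y_test = train_test_split(
        X_latent, y, test_size=0.25, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    print("accuracy:", accuracy_score(y_test, y_pred))
    print("confusion matrix:\n", confusion_matrix(y_test, y_pred))

Any classifier could stand in for the logistic regression; the point is only that accuracy and the confusion matrix are computed on the model trained in the reduced space, not on the GPLVM itself.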

Related

Best Practice for Transforming y_pred in Tensorflow's Metric

In a previous project, I needed to frame an image classification task as a regression problem. I implemented the regression model in Tensorflow as a standard Sequential model whose last layer is a single-node Dense layer with no activation function. To measure performance, I need standard classification metrics such as accuracy and Cohen's kappa.
However, I can't use those metrics directly because my model is a regression model, so I need to clip and round the output before feeding it to the metrics. I currently use a workaround of defining my own metric, but that workaround is not practical. Therefore, I'm thinking about contributing to Tensorflow by implementing a custom transformation_function that transforms y_pred with a Tensor lambda function before it is stored in the update_state method. After reading the source code, I have doubts about this idea. So I'm asking you, fellow Tensorflow users/contributors: what is the best practice for transforming y_pred before feeding it to a metric? Is this functionality already implemented in the newest version?
Thank you!
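Not an authoritative answer, but a minimal sketch of the kind of workaround described above: subclass a standard metric so the regression output is clipped and rounded before the comparison. The class name and the num_classes parameter are illustrative:

    import tensorflow as tf

    class RoundedAccuracy(tf.keras.metrics.Accuracy):
        def __init__(self, num_classes, name="rounded_accuracy", **kwargs):
            super().__init__(name=name, **kwargs)
            self.num_classes = num_classes

        def update_state(self, y_true, y_pred, sample_weight=None):
            # Clip and round the continuous prediction to a class index
            # before the parent metric compares it with the integer labels.
            y_pred = tf.clip_by_value(tf.round(y_pred), 0.0, self.num_classes - 1.0)
            return super().update_state(y_true, y_pred, sample_weight)

It can then be passed like any built-in metric, e.g. model.compile(..., metrics=[RoundedAccuracy(num_classes=5)]).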

How to write a scikit-learn estimator with different predict results

I'm trying to wrap a new method called "GEMSEC: Embedding with Self Clustering" in a model class that conforms to scikit-learn's conventions.
I read about the predict function here, and it seems that predict must return an array of shape [n_samples,] or [n_samples, n_outputs].
The model I'm implementing does two different things, learning embeddings (representations) and clustering, and I don't know what my predict function should return to fit the predict contract defined by scikit-learn.
Thanks in advance
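A hedged sketch of one way to reconcile the two roles with scikit-learn's conventions: let predict return the per-sample cluster assignment (shape [n_samples]) and expose the embeddings through transform and a fitted attribute, similar to how sklearn's KMeans combines clustering and transformation. GemsecCore and its methods below are hypothetical stand-ins for the actual GEMSEC implementation:

    from sklearn.base import BaseEstimator, ClusterMixin, TransformerMixin

    class GemsecEstimator(BaseEstimator, ClusterMixin, TransformerMixin):
        def __init__(self, dimensions=16, clusters=10):
            self.dimensions = dimensions
            self.clusters = clusters

        def fit(self, X, y=None):
            # Run the real GEMSEC training here; fitted state is stored with
            # a trailing underscore, as scikit-learn conventions require.
            core = GemsecCore(self.dimensions, self.clusters)  # hypothetical
            self.embedding_ = core.learn_embeddings(X)         # hypothetical
            self.labels_ = core.cluster_assignments(X)         # hypothetical
            return self

        def predict(self, X):
            # Cluster assignments satisfy the [n_samples,] predict contract.
            return self.labels_

        def transform(self, X):
            # The embeddings are better exposed here than through predict.
            return self.embedding_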

reconstruct inputs from outputs in regression neural networks in tensorflow

Say we train a multilayer NN in tensorflow for a regression task (i.e. the multi-input, multi-output case). Then we get new instances, apply the trained model, and obtain the corresponding outputs. Is there a way to backpropagate the outputs and reconstruct the inputs in tensorflow in an easy/efficient manner? What I am thinking is to use the difference between the original and the reconstructed inputs of the new instances as a QC measure, i.e. if the reconstructed inputs are not close enough to the originals then we have a problem, etc. I hope I am making myself clear.
No, unfortunately you cannot take a trained model and recover the corresponding input, because there are infinitely many possible inputs for each output.
Furthermore, backpropagation is not passing an output backwards through the network. It is the process of determining to what extent each parameter in the model contributes to the loss function. It will not give you the inputs to the hidden layers, only the extent to which the weights affected the decision.
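Not part of the answer above, but the nearest workaround is to hold the trained weights fixed and run gradient descent on a candidate input until the model's output matches the target. As noted above, any input found this way is only one of infinitely many candidates, so treat the result with caution. A hedged sketch for TF2, with illustrative names:

    import tensorflow as tf

    def reconstruct_input(model, y_target, input_dim, steps=500, lr=0.01):
        # Start from a random guess and adjust it so that model(x) ~ y_target,
        # keeping the trained weights of the model fixed throughout.
        x = tf.Variable(tf.random.normal([1, input_dim]))
        opt = tf.keras.optimizers.Adam(learning_rate=lr)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean(tf.square(model(x) - y_target))
            opt.apply_gradients([(tape.gradient(loss, x), x)])
        return x.numpy()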

neural nets - How can I associate a confidence to my loss function?

I am trying to do OCC (one-class classification) using an autoencoder-based neural network.
To make a long story short, I train my neural network on 200 matrices, each containing 128 data elements. These are then compressed (see autoencoder).
Once training is done, I pass a new matrix (test data) to my neural net, and based on the loss value I decide whether the data belongs to the target class or not.
I would like to know how I can compute a classification confidence, in percent, from the loss I obtain when passing test data.
Thanks
In case it helps, I am using Tensorflow.
Well, normally you try to minimize your cost function (or, in the case of a single training observation, your loss function). The probability of the class you want to predict is usually not obtained from the loss function but from, for example, a sigmoid output layer: you need a function that ranges from 0 to 1 and behaves like a probability. Where did you get the idea of using the loss function to evaluate your probability? I am not an expert in one-class classification (or outlier detection), but I guess what you actually want is the probability that your observation does not belong to your class, right?
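Not from the answer above, but one common heuristic for mapping a reconstruction loss to a score between 0 and 1 is to calibrate against losses measured on held-out target-class data (an empirical tail fraction, not a true probability). Everything below, including the variable names, is an assumption about the setup described in the question:

    import numpy as np

    def confidence_from_loss(test_loss, val_losses):
        # Fraction of held-out in-class samples whose reconstruction loss is
        # at least as large as the test loss; values near 1.0 mean the test
        # matrix reconstructs as well as typical target-class data does.
        return float(np.mean(np.asarray(val_losses) >= test_loss))

    # val_losses: reconstruction losses of held-out target-class matrices
    # test_loss:  loss of the new matrix passed through the autoencoder
    # confidence_pct = 100.0 * confidence_from_loss(test_loss, val_losses)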

How to examine the feature weights of a Tensorflow LinearClassifier?

I am trying to understand the Large-scale Linear Models with TensorFlow documentation. The docs motivate these models as follows:
Linear models can be interpreted and debugged more easily than neural nets. You can examine the weights assigned to each feature to figure out what's having the biggest impact on a prediction.
So I ran the extended code example from the accompanying TensorFlow Linear Model Tutorial. In particular, I ran the example code from GitHub with the model-type flag set to wide. This ran correctly and produced accuracy: 0.833733, similar to the accuracy: 0.83557522 on the Tensorflow web page.
The example uses a tf.estimator.LinearClassifier to train the weights. However, in contrast to the quoted motivation of being able to examine the weights, I can't find any function to actually extract the trained weights in the LinearClassifier documentation.
Question: how do I access the trained weights for the various feature columns in a tf.estimator.LinearClassifier? I'd prefer to be able to extract all the weights in a NumPy array.
Note: I am coming from an R environment, where linear regression/classification models have a coef() function to extract learned weights. I want to be able to compare linear models in both R and TensorFlow on the same datasets.
After training the model with an Estimator, you can use tf.train.load_variable to retrieve the weights from the checkpoint; tf.train.list_variables will give you the names of the model weights.
There are plans to add this support to Estimator directly as well.
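For example, a minimal sketch based on those two functions; model_dir is assumed to be the directory the Estimator wrote its checkpoints to, and the "linear_model" filter reflects how LinearClassifier typically names its weight variables:

    import tensorflow as tf

    model_dir = "/tmp/wide_model"  # wherever the Estimator saved checkpoints

    # Each entry is a (variable_name, shape) pair; feature-column weights
    # typically live under names containing "linear_model".
    for name, shape in tf.train.list_variables(model_dir):
        print(name, shape)

    # Load the selected weight tensors into NumPy arrays, keyed by name.
    weights = {
        name: tf.train.load_variable(model_dir, name)
        for name, _ in tf.train.list_variables(model_dir)
        if "linear_model" in name
    }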
