I wish to use classification metrics like matthews_corrcoef as a metric for a neural network built with CNTK. The only way I have found so far is to evaluate the metric by passing in the predictions and labels, as shown:
matthews_corrcoef(cntk.argmax(y_true, axis=-1).eval(), cntk.argmax(y_pred, axis=-1).eval())
Ideally I'd like to pass the metric to the trainer object while building my network.
One option would be to create my own custom metric and pass that to the trainer object. Although possible, it would be better to reuse the metrics that already exist in other libraries.
Unless this metric is already implemented in CNTK, implement your own custom "metric" function in whatever format CNTK requires, and have it pass the inputs on to scikit-learn's metric function.
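A minimal sketch of such a wrapper, using NumPy to decode one-hot outputs before delegating to scikit-learn (the CNTK plumbing is omitted; hooking this into a Trainer would still require wrapping it in whatever custom-function format CNTK expects, which is not shown here):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_metric(y_true, y_pred):
    """Decode one-hot / probability arrays to class indices,
    then delegate to scikit-learn's implementation."""
    true_labels = np.argmax(np.asarray(y_true), axis=-1)
    pred_labels = np.argmax(np.asarray(y_pred), axis=-1)
    return matthews_corrcoef(true_labels, pred_labels)

# Example: perfect predictions give an MCC of 1.0
y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
print(mcc_metric(y_true, y_pred))  # → 1.0
```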
I need a scikit-learn composite estimator for vector targets, but I need to define different hyperparameters for each target.
My first instinct was to define a MultiOutputRegressor of dummy estimators, then overwrite the estimators_ attribute with the desired regressors, but this does not work as only the base estimator is defined on construction; it is then copied on fit.
Do I need to write my own meta-estimator class, or is there a better solution I'm not thinking of?
This was answered off-site by Dr. Lemaitre: no prepackaged solution exists for combining multiple different regressors into a single multi-output regressor, but a decent workaround is to use one of the -CV family of regressors, such as ElasticNetCV, as the base estimator. This allows different hyperparameters for each output, assuming the parameters can be decently tuned on each call to fit.
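A short sketch of that workaround (the toy data is illustrative): MultiOutputRegressor clones one ElasticNetCV per target, and each clone tunes its own alpha on its slice of Y during fit.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
# Two targets with different underlying coefficients and noise
Y = np.column_stack([X @ rng.randn(5),
                     X @ rng.randn(5) + rng.randn(100)])

# One ElasticNetCV is cloned per target on fit; each clone
# selects its own regularization strength independently.
model = MultiOutputRegressor(ElasticNetCV(cv=3)).fit(X, Y)

alphas = [est.alpha_ for est in model.estimators_]
print(len(model.estimators_))  # one fitted estimator per output
print(alphas)                  # per-target tuned alphas
```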
In a previous project, I needed to frame an image classification task as a regression problem. I implemented the regression model using TensorFlow, as a standard Sequential model whose last layer is a single-node Dense layer with no activation function. To measure performance, I need to use standard classification metrics, such as accuracy and Cohen's kappa.
However, I can't use those metrics directly because my model is a regression model, so I need to clip and round the output before feeding it to the metrics. I currently work around this by defining my own metric, but that workaround is not practical. I'm therefore thinking about contributing to TensorFlow by implementing a custom transformation_function that transforms y_pred with a Tensor lambda function before it is stored in the __update_state method. After reading the source code, I have doubts about this idea. So I'm asking you, fellow TensorFlow users/contributors: what is the best practice for transforming y_pred before feeding it to a metric? Is this functionality already implemented in the newest version?
Thank you!
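The clip-and-round transformation itself is simple; here is a framework-agnostic sketch using NumPy and scikit-learn (the function name and label bounds are illustrative, not TensorFlow API):

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def to_class_labels(y_pred, low=0, high=4):
    """Clip regression outputs to the valid label range,
    then round to the nearest integer class."""
    return np.rint(np.clip(y_pred, low, high)).astype(int)

y_true = np.array([0, 2, 4, 3])
y_pred = np.array([-0.3, 2.4, 5.1, 2.6])  # raw regression outputs

labels = to_class_labels(y_pred)
print(labels)                          # [0 2 4 3]
print(accuracy_score(y_true, labels))  # 1.0
print(cohen_kappa_score(y_true, labels))
```

The same transformation could be applied inside a custom Keras metric before delegating to the built-in metric logic.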
Using Python and any machine learning library, I'm trying to have two target labels and a custom loss function. From my understanding, the only way to achieve this is by using Keras. Is this correct?
Here is a list of other things I have tried, have I missed something?
LightGBM
This article is the first that pops up when searching for custom loss functions. Unfortunately, LightGBM does not support more than one target label, and it doesn't seem like that's going to change anytime soon.
XGBoost
Has the same problem as LightGBM: you cannot have multiple target labels, only multiple target classes (achieved by duplicating rows), as discussed here.
SciKit-Learn: GridSearchCV and make_scorer
This initially looked promising, as you can have several target labels. However, make_scorer only scores the output of an already-fitted model; it is not the loss function the model itself minimizes during training.
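To illustrate that limitation: in the sketch below the custom scorer only decides which hyperparameters GridSearchCV selects, while the underlying model still minimizes its own built-in loss during fit (the toy data, metric, and parameter grid are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def max_abs_error(y_true, y_pred):
    # Custom metric used only for ranking candidates
    return np.max(np.abs(y_true - y_pred))

scorer = make_scorer(max_abs_error, greater_is_better=False)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(50)

# Ridge still fits by minimizing squared error internally;
# the scorer only picks among the candidate alphas afterwards.
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                      scoring=scorer, cv=3)
search.fit(X, y)
print(search.best_params_)
```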
I am trying to make a custom CNN architecture using PyTorch. I want about the same level of control as I would get if I built the architecture using NumPy only.
I am new to Pytorch and would like to see some code samples of CNNs implemented without the nn.module class, if possible.
You have to implement the backward() function in your custom class.
However, from your question it is not clear whether:
- you just need a new series of CNN blocks, in which case you are better off using nn.Module and something like nn.Sequential(nn.Conv2d(...)), or
- you just need gradient descent (see https://github.com/jcjohnson/pytorch-examples#pytorch-autograd), i.e., computing the backward pass on your own.
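For the "NumPy-level control" part of the question, the forward pass of a single convolution can be written out explicitly. A minimal NumPy sketch (no padding, stride 1; PyTorch's conv layers perform this same cross-correlation, plus autograd):

```python
import numpy as np

def conv2d_forward(x, w):
    """Naive 2D convolution (cross-correlation, as in PyTorch).
    x: (H, W) input, w: (kH, kW) kernel. No padding, stride 1."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with each sliding window
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple difference kernel
print(conv2d_forward(x, w))  # 3x3 output, every entry -5.0
```

Writing the matching backward pass by hand is exactly the work that torch.autograd does for you, which is why the gradient-descent route above is usually preferable.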
I'm trying to implement Google's Facenet paper:
First of all, is it possible to implement this paper using the Sequential API of Keras or should I go for the Graph API?
In either case, could you please tell me how to pass the custom loss function tripletLoss to model.compile, and how to receive the anchor embedding, positive embedding, and negative embedding as parameters to calculate the loss?
Also, what should be the second parameter Y in model.fit(), I do not have any in this case...
This issue explains how to create a custom objective (loss) in Keras:
def dummy_objective(y_true, y_pred):
    return 0.5  # your implementation of tripletLoss here

model.compile(loss=dummy_objective, optimizer='adadelta')
Regarding the y parameter of .fit(): since you are the one handling it in the end (the y_true parameter of the objective function is taken from it), I would say you can pass whatever you need that fits through Keras's plumbing, perhaps a dummy vector to pass the dimension checks if you really don't need any supervision.
Finally, as to how to implement this particular paper: searching for triplet or facenet in the Keras docs didn't return anything, so you'll probably have to either implement it yourself or find someone who has.
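For reference, the per-triplet loss from the FaceNet paper is max(0, ||a − p||² − ||a − n||² + α). A NumPy sketch of that computation (the embeddings and margin below are illustrative; wiring it into a Keras objective would still require the dummy_objective plumbing above):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, ||a - p||^2 - ||a - n||^2 + margin), per FaceNet."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, pos_dist - neg_dist + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([1.0, 0.0])   # far from the anchor
print(triplet_loss(a, p, n))  # 0.0: the negative is already far enough
```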