I have evolved a neural network with neuralfit, using the mse (mean squared error) loss in the process. How can I evaluate the model with a different metric after evolution? I want to know the mae (mean absolute error). A minimal working example similar to my own code is:
import neuralfit
import numpy as np
# Define dataset y = x^2
x = np.arange(10).reshape(-1,1)
y = x**2
# Evolve model
model = neuralfit.Model(1,1)
model.compile(loss='mse')
model.evolve(x,y)
# This evaluates using mse, not mae
print(model.evaluate(x,y))
How can I evaluate the mean absolute error of the model?
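One way to do this outside of neuralfit, as a minimal sketch: take the model's predictions and compute the mae with numpy. This assumes model.predict(x) returns the predicted outputs as an array of the same shape as y (check the neuralfit docs for the exact signature).
# predictions of the evolved model on the training inputs
y_pred = np.asarray(model.predict(x)).reshape(y.shape)
# mean absolute error computed directly from predictions and targets
mae = np.mean(np.abs(y - y_pred))
print(mae)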
Related
I am training a VGG-16 model for a multi-class classification task with TensorFlow 2.4 and Keras 2.4.0. The y-true labels are one-hot encoded. I use a couple of custom loss functions, individually, to train the model. First, I used a custom Cauchy-Schwarz divergence loss function as shown below:
from math import sqrt
from math import log
from scipy.stats import gaussian_kde
from scipy import special
def cs_divergence(p1, p2):
    """p1 (numpy array): first pdfs,
    p2 (numpy array): second pdfs,
    Returns:
        float: CS divergence"""
    r = range(0, p1.shape[0])
    p1_kernel = gaussian_kde(p1)
    p2_kernel = gaussian_kde(p2)
    p1_computed = p1_kernel(r)
    p2_computed = p2_kernel(r)
    numerator = sum(p1_computed * p2_computed)
    denominator = sqrt(sum(p1_computed ** 2) * sum(p2_computed ** 2))
    return -log(numerator / denominator)
Then, I used a negative log likelihood custom loss function as shown below:
def nll(y_true, y_pred):
    loss = -special.xlogy(y_true, y_pred) - special.xlogy(1 - y_true, 1 - y_pred)
    return loss
I compiled the model as shown below while training it with each of these losses individually:
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model_vgg16.compile(optimizer=sgd,
                    loss=[cs_divergence],
                    metrics=['accuracy'])
and
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model_vgg16.compile(optimizer=sgd,
                    loss=[nll],
                    metrics=['accuracy'])
I got the following errors when training the model with these loss functions:
With cs_divergence, I got the following error:
TypeError: 'NoneType' object cannot be interpreted as an integer
With nll custom loss, I got the following error:
NotImplementedError: Cannot convert a symbolic Tensor (IteratorGetNext:1) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
I downgraded the Numpy version to 1.19.5 as discussed in NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array but it didn't help.
Try maybe:
loss=cs_divergence
i.e. without the brackets.
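If that alone does not help, one thing to check, as an assumption rather than a confirmed diagnosis: both custom losses call scipy/numpy functions (gaussian_kde, special.xlogy) on symbolic Keras tensors, which is exactly what the NotImplementedError about converting a symbolic Tensor to a numpy array points to. A hedged sketch of nll written purely with TensorFlow ops, so it stays inside the graph:
import tensorflow as tf

def nll(y_true, y_pred):
    # clip to avoid log(0); tf.math.xlogy mirrors scipy.special.xlogy
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1 - 1e-7)
    loss = -tf.math.xlogy(y_true, y_pred) - tf.math.xlogy(1 - y_true, 1 - y_pred)
    return tf.reduce_mean(loss, axis=-1)
cs_divergence would need a similar rewrite, but gaussian_kde has no direct TensorFlow equivalent, so it is left untouched here.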
Is there a way to measure the accuracy of an ARMA-GARCH model in Python using a prediction interval (alpha=0.05)? I fitted an ARMA-GARCH model on log returns and used some classical metrics such as RMSE, MSE (out-of-sample), AIC (in-sample), check on residuals and so on. I would like to add a prediction interval as another measurement of accuracy based on my ARMA-GARCH model predictions. I used the armagarch library (https://github.com/iankhr/armagarch).
I have already looked into how prediction intervals are used in general, but I am not sure how to apply one to ARMA-GARCH.
I found this formula while searching online: Estimator ± 1.96 (for 95%) × Standard Error.
So far I follow that, but my model output contains several standard errors, one for each parameter in the ARMA and GARCH parts. Which one do I have to use? Is there one standard error for the whole model itself?
I would be really happy if anyone could help.
ARMA-GARCH model output
So far I created an ARMA(2,2)-GARCH(1,1) model:
#final test of function
import armagarch as ag
import pandas as pd
import numpy as np

#definitions framework
data = pd.DataFrame(data)
meanMdl = ag.ARMA(order = {'AR':2,'MA':2})
volMdl = ag.garch(order = {'p':1,'q':1})
distMdl = ag.normalDist()
model = ag.empModel(data, meanMdl, volMdl, distMdl)
model_fit = model.fit()
After the model fit, I defined the prediction length and received two arrays as output (mean and variance), which I reshaped to the correct length:
#first array is mean, second is variance
pred = model.predict(nsteps=len(df_test))
#correct the shapes!
df_pred_mean = pd.DataFrame(np.reshape(pred[0], (len(df_test), 1)))
df_pred_variance = pd.DataFrame(np.reshape(pred[1], (len(df_test), 1)))
So far so good; now I would like to implement a prediction interval.
I understand that one has to use the ARMA part ± 1.96 (for 95%) × the GARCH prediction for each forecast. I implemented it for the upper bound; the lower bound is the same formula but with * (-1.96) at the end.
#upper bound
df_all["upper bound"] = df_all["pred_Mean"] + df_all["pred_Variance"] * 1.96
Plotting this against the actual log returns I trained the model on shows that it is completely wrong. Now I am unsure whether my general approach is wrong or whether the problem lies with the model, i.e. the package.
prediction interval vs. actual log return
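A minimal sketch of the interval construction, under the assumption that pred_Variance is the predicted conditional variance from the GARCH part: the forecast standard error is then its square root, so the 95% bounds use ±1.96 times that square root rather than the variance itself.
#standard error of the forecast = square root of the conditional variance
df_all["std_error"] = np.sqrt(df_all["pred_Variance"])
df_all["upper bound"] = df_all["pred_Mean"] + 1.96 * df_all["std_error"]
df_all["lower bound"] = df_all["pred_Mean"] - 1.96 * df_all["std_error"]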
I have already been given custom metric code on which my model is going to be evaluated, but it uses sklearn's metrics. I know that if I have a metric I can use it in callbacks like
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy', custom_metric])

ModelCheckpoint(monitor='val_custom_metric',
                save_best_only=True,
                save_weights_only=True,
                mode='max',
                verbose=1)
It is a multi-output problem with 3 labels,
Submissions are evaluated using a hierarchical macro-averaged recall. First, a standard macro-averaged recall is calculated for each component (label_1,label_2 or label_3). The final score is the weighted average of those three scores, with the label_1 given double weight. You can replicate the metric with the following python snippet:
and I am unable to figure out how to implement the code given below in Keras:
import numpy as np
import sklearn.metrics

scores = []
for component in ['label_1', 'label_2', 'label_3']:
    y_true_subset = solution[solution[component] == component]['target'].values
    y_pred_subset = submission[submission[component] == component]['target'].values
    scores.append(sklearn.metrics.recall_score(
        y_true_subset, y_pred_subset, average='macro'))
final_score = np.average(scores, weights=[2,1,1])
How can I convert it into a form I can use as a metric? Or, more precisely, how can I use keras.backend to implement this code?
You can only implement the metric itself; the rest of the snippet is specific to the competition's dataframes and will not be part of Keras.
from keras import backend as K

threshold = 0.5 #you can work this threshold for better results

#considering y_true is made of 0 and 1 only
#considering output shape is (batch, 3)
def custom_metric(y_true, y_pred):
    weights = K.constant([2,1,1])                              #shape (3,)
    y_pred = K.cast(K.greater(y_pred, threshold), K.floatx())  #shape (batch, 3)
    true_positives = K.sum(y_pred * y_true, axis=0)            #shape (3,)
    false_negatives = K.sum((1-y_pred) * y_true, axis=0)       #shape (3,)
    recall = true_positives / (true_positives + false_negatives + K.epsilon())
    #weighted average of the per-label recalls, matching np.average(..., weights=[2,1,1])
    return K.sum(recall * weights) / K.sum(weights)
Notice that this will be calculated batchwise, and since the denominators differ depending on the results, the batchwise value will differ from what you get when applying the metric to the entire dataset.
You may need big batch sizes to avoid metric instability. It might also be interesting to apply the metric to the entire dataset with a callback to get the exact result, as sketched below.
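A sketch of such a callback, under the assumption that x_val and y_val are held-out numpy arrays following the same (batch, 3) 0/1 convention as above; it writes the exact full-dataset score into the logs so that monitor='val_custom_metric' can pick it up (place it before the ModelCheckpoint in the callbacks list).
import numpy as np
import keras

class FullDataRecall(keras.callbacks.Callback):
    def __init__(self, x_val, y_val, threshold=0.5):
        super(FullDataRecall, self).__init__()
        self.x_val = x_val
        self.y_val = y_val
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        #same recall as the batchwise metric, but over the whole validation set
        y_pred = (self.model.predict(self.x_val) > self.threshold).astype('float32')
        true_positives = (y_pred * self.y_val).sum(axis=0)
        false_negatives = ((1 - y_pred) * self.y_val).sum(axis=0)
        recall = true_positives / (true_positives + false_negatives + 1e-7)
        score = float(np.average(recall, weights=[2, 1, 1]))
        print('val_custom_metric: {:.4f}'.format(score))
        if logs is not None:
            logs['val_custom_metric'] = score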
For my problem, I want to predict customer review scores ranging from 1 to 5.
I thought it would be good to implement this as a regression problem, because a prediction of 1 when the true value is 5 should be a "worse" prediction than a prediction of 4.
I also want the model to perform roughly equally well across all review score classes.
Because my dataset is highly imbalanced, I want to create a metric/loss that captures this (much like F1 does for classification).
Therefore I created the following metric (for now just mse is relevant):
def custom_metric(y_true, y_pred):
    df = pd.DataFrame(np.column_stack([y_pred, y_true]), columns=["Predicted", "Truth"])
    class_mse = 0
    #class_mae = 0
    print("MSE for Classes:")
    for i in df.Truth.unique():
        temp = df[df["Truth"]==i]
        mse = mean_squared_error(temp.Truth, temp.Predicted)
        #mae = mean_absolute_error(temp.Truth, temp.Predicted)
        print("Class {}: {}".format(i, mse))
        class_mse += mse
        #class_mae += mae
    print()
    print("AVG MSE over Classes {}".format(class_mse/len(df.Truth.unique())))
    #print("AVG MAE over Classes {}".format(class_mae/len(df.Truth.unique())))
Now an example prediction:
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error, mean_absolute_error
# sample predictions: "model" messed up at class 2 and 3
y_true = np.array((1,1,1,2,2,2,3,3,3,4,4,4,5,5,5))
y_pred = np.array((1,1,1,2,2,3,5,4,3,4,4,4,5,5,5))
custom_metric(y_true, y_pred)
Now my question: is it possible to create a custom tensorflow loss function that behaves in a similar way? I also worked on this implementation, which is not yet TensorFlow-ready but is perhaps closer:
def custom_metric(y_true, y_pred):
    mse_class = 0
    num_classes = len(np.unique(y_true))
    stacked = np.vstack((y_true, y_pred))
    for i in np.unique(stacked[0]):
        y_true_temp = stacked[0][np.where(stacked[0]==i)]
        y_pred_temp = stacked[1][np.where(stacked[0]==i)]
        mse = np.mean(np.square(y_pred_temp - y_true_temp))
        mse_class += mse
    return mse_class/num_classes
But still, I am not sure how to work around the for loop for a tensorflow like definition.
Thanks in advance for any help!
The for loop should be replaced entirely by vectorized numpy/tensorflow operations on the whole tensor.
A custom metric example would be:
from keras import backend as K

def custom_mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
where y_true is the ground-truth label and y_pred are your predictions. You can see there are no explicit for-loops.
The motivation for not using for loops is that vectorized operations (available in both numpy and tensorflow) take advantage of modern CPU architectures, turning many iterative operations into matrix ones. Consider that a dot product implemented in numpy takes roughly 30 times less time than an equivalent regular for-loop in Python.
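Applied to the per-class MSE above, here is a hedged sketch of a fully vectorized version, assuming the review scores are exactly the integers 1..5 and that y_true and y_pred can be flattened to shape (batch,); the loop over classes becomes a (batch, 5) membership mask:
import tensorflow as tf

def class_averaged_mse(y_true, y_pred):
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    sq_err = tf.square(y_pred - y_true)                            #shape (batch,)
    classes = tf.constant([1., 2., 3., 4., 5.])                    #shape (5,)
    #mask[i, c] == 1 where sample i belongs to class c
    mask = tf.cast(tf.equal(y_true[:, None], classes[None, :]), tf.float32)
    per_class_sum = tf.reduce_sum(mask * sq_err[:, None], axis=0)  #shape (5,)
    per_class_count = tf.reduce_sum(mask, axis=0)                  #shape (5,)
    per_class_mse = per_class_sum / tf.maximum(per_class_count, 1.0)
    #average only over the classes that actually appear in the batch
    present = tf.cast(per_class_count > 0, tf.float32)
    return tf.reduce_sum(per_class_mse * present) / tf.maximum(tf.reduce_sum(present), 1.0)
Used inside model.compile this is still evaluated batchwise, so it only approximates the dataset-level number the numpy version prints, in the same way as the recall metric discussed earlier.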
I'm currently working on a variation of Variational Autoencoder in a sequential setting, where the task is to fit/recover a sequence of real-valued observation data (hence it is a regression problem).
I have built my model using tf.keras with eager execution enabled, and tensorflow_probability (tfp). Following the VAE concept, the generative net emits the distribution parameters of the observation data, which I model as multivariate normal. Therefore the outputs are the mean and logvar of the predicted distribution.
Regarding training process, the first component of the loss is reconstruction error. That is the log likelihood of the true observation, given the predicted (parameters) distribution from the generative net. Here, I use tfp.distributions, since it is fast and handy.
However, after training is done, marked by a considerably low loss value, it turns out that my model does not seem to learn anything. The predicted values from the model are almost flat across the time dimension (recall that the problem is sequential).
Nevertheless, as a sanity check, when I replace the log likelihood with an MSE loss (which is not justifiable when working with a VAE), it yields a very good fit to the data. So I conclude that there must be something wrong with this log likelihood term. Does anyone have a clue and/or solution for this?
I have considered replacing the log likelihood with a cross-entropy loss, but I think that is not applicable in my case, since my problem is regression and the data can't be normalized into the [0,1] range.
I have also tried an annealed KL term (i.e. weighting the KL term with a constant < 1) when using the log likelihood as the reconstruction loss, but it didn't work either.
Here is my code snippet of the original (using log likelihood as reconstruction error) loss function:
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import tensorflow_probability as tfp
tfd = tfp.distributions

def loss(model, inputs):
    outputs, _ = SSM_model(model, inputs)

    #allocate the corresponding output components
    infer_mean = outputs[:,:,:latent_dim]                       #mean of latent variable from inference net
    infer_logvar = outputs[:,:,latent_dim : (2 * latent_dim)]
    trans_mean = outputs[:,:,(2 * latent_dim):(3 * latent_dim)] #mean of latent variable from transition net
    trans_logvar = outputs[:,:, (3 * latent_dim):(4 * latent_dim)]
    obs_mean = outputs[:,:,(4 * latent_dim):((4 * latent_dim) + output_obs_dim)] #mean of observation from generative net
    obs_logvar = outputs[:,:,((4 * latent_dim) + output_obs_dim):]
    target = inputs[:,:,2:4]

    #transform logvar to std
    infer_std = tf.sqrt(tf.exp(infer_logvar))
    trans_std = tf.sqrt(tf.exp(trans_logvar))
    obs_std = tf.sqrt(tf.exp(obs_logvar))

    #computing loss at each time step
    time_step_loss = []
    for i in range(tf.shape(outputs)[0].numpy()):
        #distribution of each module
        infer_dist = tfd.MultivariateNormalDiag(infer_mean[i], infer_std[i])
        trans_dist = tfd.MultivariateNormalDiag(trans_mean[i], trans_std[i])
        obs_dist = tfd.MultivariateNormalDiag(obs_mean[i], obs_std[i])

        #log likelihood of observation
        likelihood = obs_dist.prob(target[i]) #shape = 1D = batch_size
        likelihood = tf.clip_by_value(likelihood, 1e-37, 1)
        log_likelihood = tf.log(likelihood)

        #KL of (q|p)
        kl = tfd.kl_divergence(infer_dist, trans_dist) #shape = batch_size

        #the loss
        loss = - log_likelihood + kl
        time_step_loss.append(loss)

    time_step_loss = tf.convert_to_tensor(time_step_loss)
    overall_loss = tf.reduce_sum(time_step_loss)
    overall_loss = tf.cast(overall_loss, dtype='float32')
    return overall_loss
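One numerical detail worth checking, offered as an assumption rather than a confirmed diagnosis: computing obs_dist.prob(target[i]), clipping it to [1e-37, 1] and then taking the log caps every per-step log likelihood at about log(1e-37) ≈ -85 and underflows easily for multivariate normals, which can wash out the reconstruction signal. tfp distributions expose log_prob, which evaluates the density in log space directly; a minimal sketch of that substitution inside the loop:
#sketch: evaluate the Gaussian density in log space instead of prob -> clip -> log
log_likelihood = obs_dist.log_prob(target[i])   #shape = 1D = batch_size
loss = - log_likelihood + kl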