h2o: F1 score and other binary classification metrics missing - python

I am able to run the following example code and get an F1 score:
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o.init()
# import the airlines dataset:
# This dataset is used to classify whether a flight will be delayed 'YES' or not "NO"
# original data can be found at http://www.transtats.bts.gov/
airlines= h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
# convert columns to factors
airlines["Year"]= airlines["Year"].asfactor()
airlines["Month"]= airlines["Month"].asfactor()
airlines["DayOfWeek"] = airlines["DayOfWeek"].asfactor()
airlines["Cancelled"] = airlines["Cancelled"].asfactor()
airlines['FlightNum'] = airlines['FlightNum'].asfactor()
# set the predictor names and the response column name
predictors = ["Origin", "Dest", "Year", "UniqueCarrier",
"DayOfWeek", "Month", "Distance", "FlightNum"]
response = "IsDepDelayed"
# split into train and validation sets
train, valid = airlines.split_frame(ratios = [.8], seed = 1234)
# train your model
airlines_gbm = H2OGradientBoostingEstimator(sample_rate = .7, seed = 1234)
airlines_gbm.train(x = predictors,
                   y = response,
                   training_frame = train,
                   validation_frame = valid)
# retrieve the model performance
perf = airlines_gbm.model_performance(valid)
perf
With output like so:
ModelMetricsBinomial: gbm
** Reported on test data. **
MSE: 0.20546330299964743
RMSE: 0.4532806007316521
LogLoss: 0.5967028742962095
Mean Per-Class Error: 0.31720065289432364
AUC: 0.7414970113257631
AUCPR: 0.7616331690362552
Gini: 0.48299402265152613
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.35417599264806404:
         NO       YES      Error    Rate
0 NO     1641.0   2480.0   0.6018   (2480.0/4121.0)
1 YES    595.0    4011.0   0.1292   (595.0/4606.0)
2 Total  2236.0   6491.0   0.3524   (3075.0/8727.0)
...
However, my dataset doesn't behave in a similar manner, despite appearing to be of the same form. My dataset's target variable also has a binary label. Some information about my dataset:
y_test.nunique()
failure 2
dtype: int64
Yet my performance (perf) metrics are a much smaller subset of those in the example code:
perf = gbm.model_performance(hf_test)
perf
ModelMetricsRegression: gbm
** Reported on test data. **
MSE: 0.02363221438767555
RMSE: 0.1537277281028883
MAE: 0.07460874699751764
RMSLE: 0.12362377397478382
Mean Residual Deviance: 0.02363221438767555
It is difficult to share my data due to its sensitive nature. Any ideas on what to check?

You're training a regression model and that's why you're missing the binary classification metrics. The way that H2O knows whether to train a regression vs classification model is by looking at the data type of the response column.
We explain it here in the H2O User Guide, but this is a frequent question we get since it's different from how scikit-learn works, which uses different methods for regression vs classification and doesn't require you to think about column types.
y_test.nunique()
failure 2
dtype: int64
On the response column in your training data, you can do something like this:
train["response"] = train["response"].asfactor()
Alternatively, when you read the file in from disk, you can parse the response column as "enum" type so you don't have to convert it after the fact. There are some examples of how to do that in Python here. If the response is stored as integers, H2O just assumes it's a numeric column when it reads the data from disk, but if the response is stored as strings, it will correctly parse it as a categorical (a.k.a. "enum") column and you won't need to specify or convert it.
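For instance, a minimal sketch of that approach (assuming your response column is named "failure", as in the y_test output above, and using a placeholder file path) could look like this:
import h2o
h2o.init()
# Force the response column to be parsed as categorical ("enum") at import time,
# so an integer-coded 0/1 label is not treated as numeric (which triggers regression).
df = h2o.import_file("my_data.csv", col_types={"failure": "enum"})
# Or, if the frame is already loaded, convert and verify the column type:
df["failure"] = df["failure"].asfactor()
print(df["failure"].isfactor())  # should print [True] before training a classifier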

Related

SVM classifier n_samples, n_splits problem sklearn Python

I'm trying to predict volatility one step ahead with an SVM model, based on an O'Reilly book example (Machine Learning for Financial Risk Management with Python). When I copy the example exactly (with S&P 500 data) it works well, but now I'm having trouble with this chunk of code on a particular fund's returns data:
# returns
r = np.array([nan, 0.0013933, 0.00118874, 0.00076462, 0.00168565,
              -0.00018507, -0.00390753, 0.00307275, -0.00351472])
# horizon
t = 252
# mean of returns
mu = r.mean()
# critical value
z = norm.ppf(0.95)
# realized volatility
vol = r.rolling(5).std()
vol = pd.DataFrame(vol)
vol.reset_index(drop=True, inplace=True)
# SVM GARCH
r_svm = r ** 2
r_svm = r_svm.reset_index()
# inputs X (returns and realized volatility)
X = pd.concat([vol, r_svm], axis=1, ignore_index=True)
X = X.dropna().copy()
X = X.reset_index()
X.drop([1, 'index'], axis=1, inplace=True)
# labels y realized volatility shifted 1 period onward
vol = vol.dropna().reset_index()
vol.drop('index', axis=1, inplace=True)
# linear kernel
svr_lin = SVR(kernel='linear')
# hyperparameters grid
para_grid = {'gamma': sp_rand(),
             'C': sp_rand(),
             'epsilon': sp_rand()}
# svm classifier (regression?)
clf = RandomizedSearchCV(svr_lin, para_grid)
clf.fit(X[:-1].dropna().values,
        vol[1:].values.reshape(-1,))
# prediction
n_vol = clf.predict(X.iloc[-1:])
The raised error is:
ValueError: Cannot have number of splits n_splits=5 greater than the number of samples: n_samples=3.
The code works with longer return series, so I assume the problem is the length of the array, but I can't figure out how to solve it. Can someone help me with that?
This error is raised because you use RandomizedSearchCV with its default cv parameter.
By default, RandomizedSearchCV runs 5-fold cross-validation to find the best hyperparameters for the model.
5-fold cross-validation means splitting your training data into 5 subsets and training 5 different models based on these splits.
It looks like you have fewer than 5 objects in your training set, so splitting your data into 5 folds isn't possible.
To fix the issue you should either add more data or decrease the number of folds for RandomizedSearchCV by passing the cv parameter:
clf = RandomizedSearchCV(svr_lin, para_grid, cv=2)
I'd recommend collecting more data, since 4 data points most likely won't be enough to make the model accurate or predictive.
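To illustrate (a toy sketch with made-up numbers, not your fund's returns, and assuming sp_rand is scipy.stats.uniform as in the book example), reducing cv avoids the error when only a handful of samples are available:
import numpy as np
from scipy.stats import uniform as sp_rand
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
# Tiny toy dataset: 4 samples, 2 features.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.3, 0.4], [0.4, 0.3]])
y = np.array([0.15, 0.12, 0.35, 0.33])
para_grid = {'gamma': sp_rand(), 'C': sp_rand(), 'epsilon': sp_rand()}
# cv=2 keeps each fold at 2 samples; the default cv=5 would raise the
# "n_splits=5 greater than the number of samples" error here.
clf = RandomizedSearchCV(SVR(kernel='linear'), para_grid, cv=2, n_iter=5, random_state=0)
clf.fit(X, y)
print(clf.best_params_)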

How to forecast time series using AutoReg in python

I'm trying to build an old-school model using only the autoregression algorithm. I found out that there's an implementation of it in the statsmodels package. I've read the documentation, and as I understand it, it should work like ARIMA. So, here's my code:
import statsmodels.api as sm
model = sm.tsa.AutoReg(df_train.beer, 12).fit()
And when I want to predict new values, I'm trying to follow the documentation:
y_pred = model.predict(start=df_test.index.min(), end=df_test.index.max())
# or
y_pred = model.predict(start=100, end=1000)
Both return a list of NaNs.
Also, when I type model.predict(0, df_train.size - 1) it predicts real values, but model.predict(0, df_train.size) returns a list of NaNs.
Am I doing something wrong?
P.S. I know there are ARIMA, ARMA, and SARIMAX algorithms that can be used as basic autoregression, but I need exactly AutoReg.
We can do the forecasting in a couple of ways:
by directly using the predict() function, and
by using the definition of the AR(p) process and the parameters learnt with AutoReg(); this will be helpful for short-term predictions, as we shall see.
Let's start with a sample dataset from statsmodels; the data looks like the following:
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
data = sm.datasets.sunspots.load_pandas().data['SUNACTIVITY']
plt.plot(range(len(data)), data)
Let's fit an AR(p) process to model the time series and use the partial autocorrelation (PACF) plot to find the order p, as shown below.
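The original answer showed the PACF plot as an image; a minimal sketch to reproduce it (reusing the sunspots data loaded above) could be:
from statsmodels.graphics.tsaplots import plot_pacf
# Significant spikes in the PACF suggest a suitable AR order p.
plot_pacf(data, lags=30)
plt.show()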
As seen from the PACF plot, the first several lags remain significant, so let's use p=10 for the AR(p) model.
Let's divide the data into training and validation (test) datasets and fit an auto-regressive model of order 10 using the training data:
from statsmodels.tsa.ar_model import AutoReg
n = len(data)
ntrain = int(n*0.9)
ntest = n - ntrain
lag = 10
res = AutoReg(data[:ntrain], lags = lag).fit()
Now, use the predict() function for forecasting all values corresponding to the held-out dataset:
preds = res.model.predict(res.params, start=n-ntest, end=n-1)
Notice that we can get exactly the same predictions by applying the parameters from the trained model manually, as shown below:
x = data[ntrain-lag:ntrain].values
preds1 = []
for t in range(ntrain, n):
    pred = res.params[0] + np.sum(res.params[1:]*x[::-1])
    x[:lag-1], x[lag-1] = x[-(lag-1):], pred
    preds1.append(pred)
Note that the forecast values generated this way are the same as the ones obtained using the predict() function above.
np.allclose(preds.values, np.array(preds1))
# True
Now, let's plot the forecast values for the test data:
As can be seen, for long-term prediction the quality of forecasting is not that good, since the forecasted values are themselves fed back in as inputs for the subsequent predictions.
Let's instead go for short-term predictions now and use the last lag observed points from the dataset to forecast the next value, as shown in the next code snippet.
preds = []
for t in range(ntrain, n):
    pred = res.params[0] + np.sum(res.params[1:]*data[t-lag:t].values[::-1])
    preds.append(pred)
As can be seen from the next plot, short-term forecasting works much better:
You can use this code for forecasting:
import statsmodels.api as sm
model = sm.tsa.AutoReg(df_train.beer, 12).fit()
y_pred = model.model.predict(model.params, start=df_test.index.min(), end=df_test.index.max())
from statsmodels.tsa.ar_model import AutoReg
model = AutoReg(dataset[''], lags=1)
ARFit = model.fit()
forecasted = ARFit.predict(start=len(dataset), end=len(dataset)+12)
# visualization
dataset[''].plot(figsize=(12,8), legend=True)
forecasted.plot(legend=True)

LightFM train_interactions shared among train and test sets: This will cause incorrect evaluation, check your data split

tl;dr: I'm working with the Yelp dataset to make a recommendation system, but running into the error "Test interactions matrix and train interactions matrix share 68 interactions. This will cause incorrect evaluation, check your data split." when running the following LightFM code.
test_auc = auc_score(model,
                     test,
                     #train_interactions=train, # Unable to run with this line uncommented
                     item_features=sparse_features_matrix,
                     num_threads=NUM_THREADS).mean()
print('Hybrid test set AUC: %s' % test_auc)
Full story: I'm working with the Yelp dataset to build a recommendation system.
I'm going off the code provided in the example documentation (https://making.lyst.com/lightfm/docs/examples/hybrid_crossvalidated.html) for hybrid collaborative filtering.
I ran my code the following way:
from sklearn.model_selection import train_test_split
from lightfm import LightFM
from scipy import sparse
from lightfm.evaluation import auc_score
train, test = train_test_split(sparse_Rating_Matrix, test_size=0.25,random_state=4)
# Set the number of threads; you can increase this
# if you have more physical cores available.
NUM_THREADS = 2
NUM_COMPONENTS = 100
NUM_EPOCHS = 3
ITEM_ALPHA = 1e-6
# Define a new model instance
model = LightFM(loss='warp',
                item_alpha=ITEM_ALPHA,
                no_components=NUM_COMPONENTS)
# Fit the hybrid model. Note that this time, we pass
# in the item features matrix.
model = model.fit(train,
                  item_features=sparse_features_matrix,
                  epochs=NUM_EPOCHS,
                  num_threads=NUM_THREADS)
# Don't forget the pass in the item features again!
train_auc = auc_score(model,
                      train,
                      item_features=sparse_features_matrix,
                      num_threads=NUM_THREADS).mean()
print('Hybrid training set AUC: %s' % train_auc)
test_auc = auc_score(model,
                     test,
                     #train_interactions=train, # Unable to run with this line uncommented
                     item_features=sparse_features_matrix,
                     num_threads=NUM_THREADS).mean()
print('Hybrid test set AUC: %s' % test_auc)
I had 2 problems:
1) Running the line in question uncommented (train_interactions=train) originally yielded an "Inconsistent Shape" error,
which was resolved by the following:
The "test" data set was modified by the following block of code to append a block of zeros below it until its dimensions matched those of my train data set (per this recommendation: https://github.com/lyst/lightfm/issues/369):
import numpy as np

#Add X users to Test so that the number of rows in Train match Test
N = train.shape[0]             #Rows in Train set
n, m = test.shape              #Rows & columns in Test set
z = np.zeros([(N-n), m])       #Create the necessary rows of zeros with m columns
test = test.todense()          #Temporarily convert Test into a numpy array
test = np.vstack((test, z))    #Vertically stack Test on top of the blank users
test = sparse.csr_matrix(test) #Convert back to sparse
2) After the shape issue was resolved, I tried to implement train_interactions=train,
but ran into: Test interactions matrix and train interactions matrix share 68 interactions. This will cause incorrect evaluation, check your data split.
And I'm not sure how to resolve this second issue. Any ideas?
Details:
-"sparse_features_matrix" is a sparse matrix of {items x categories} where if an item was "Italian" and "Pizza" then the category of "Italian" and "Pizza" would have a value "1" for that item's row ... "0" elsewhere.
-"sparse_Rating_Matrix" is a sparse matrix of {users x items} containing values of the user's ratings to the restaurant (item).
04/08/2020 Update:
LightFM has a whole Dataset() class that you should use to prep your data set prior to model evaluation. I found a great GitHub issue (https://github.com/lyst/lightfm/issues/494) where user Med-ELOMARI provides an amazing walkthrough on a small test data set.
When I prepped my data through this method, I was able to add in the user_features that I wanted to model (e.g.: User_1592 likes "Thai", "Mexican", and "Sushi" cuisines).
Per Turbo's comment, I used LightFM's random_train_test_split method (I had originally split my data via sklearn's train_test_split) and ran auc_score with the new train/test sets AND the (as far as I'm aware) correctly prepared model, but I still run into the same error:
Input:
%%time
(train, test) = random_train_test_split(lightfm_interactions, test_percentage=0.25) #LightFM's method to split
# Don't forget the pass in the item features again!
train_auc = auc_score(model_users,
                      train,
                      user_features=lightfm_user_features_list,
                      num_threads=NUM_THREADS).mean()
print('User_feature training set AUC: %s' % train_auc)
test_auc = auc_score(model_users,
                     test,
                     #train_interactions=train, #Still can't get this to function
                     user_features=lightfm_user_features_list,
                     num_threads=NUM_THREADS).mean()
print('User_feature test set AUC: %s' % test_auc)
Output if "train_interactions=train" is used:
ValueError: Test interactions matrix and train interactions matrix share 435 interactions. This will cause incorrect evaluation, check your data split.
The good news, however, is that by switching from sklearn's train_test_split to LightFM's random_train_test_split, my model's training AUC score went from 0.49 to 0.96. So I guess it's important to stick with LightFM's methods when available!
LightFM provides a way of splitting your dataset; did you look at it?
With it, it might work.
https://making.lyst.com/lightfm/docs/cross_validation.html
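For reference, a minimal sketch of that split (reusing sparse_Rating_Matrix from the question; the seed is arbitrary) could look like:
import numpy as np
from lightfm.cross_validation import random_train_test_split
# Split the interactions so that no single interaction appears in both sets,
# which is what the "share N interactions" check complains about.
train, test = random_train_test_split(sparse_Rating_Matrix,
                                      test_percentage=0.25,
                                      random_state=np.random.RandomState(4))
Both returned matrices keep the full {users x items} shape, so the manual zero-padding of the test set should not be needed with this split.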

Multivariate time series forecasting with 3 months dataset

I have 3 months of data (each row corresponding to one day) and I want to perform a multivariate time series analysis on it.
The columns that are available are:
Date, Capacity_booked, Total_Bookings, Total_Searches, %Variation
Each date has one entry in the dataset, there are 3 months of data in total, and I want to fit a multivariate time series model to forecast the other variables as well.
So far, this is my attempt, which I put together by reading articles:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
data = df.drop(['Date'], axis=1)
data.index = df.Date
from statsmodels.tsa.vector_ar.vecm import coint_johansen
johan_test_temp = data
coint_johansen(johan_test_temp,-1,1).eig
#creating the train and validation set
train = data[:int(0.8*(len(data)))]
valid = data[int(0.8*(len(data))):]
freq=train.index.inferred_freq
from statsmodels.tsa.vector_ar.var_model import VAR
model = VAR(endog=train,freq=train.index.inferred_freq)
model_fit = model.fit()
# make prediction on validation
prediction = model_fit.forecast(model_fit.data, steps=len(valid))
cols = data.columns
pred = pd.DataFrame(index=range(0, len(prediction)), columns=[cols])
for j in range(0, 4):
    for i in range(0, len(prediction)):
        pred.iloc[i][j] = prediction[i][j]
I have a validation set and a prediction set. However, the predictions are much worse than expected.
The plots of the dataset (% Variation, Capacity_Booked, Total bookings and searches) and the resulting prediction and validation dataframes were attached as images.
As you can see, the predictions are way off from what is expected. Can anyone advise a way to improve the accuracy? Also, if I fit the model on the whole data and then print the forecasts, it doesn't take into account that a new month has started when predicting. How can that be incorporated here? Any help is appreciated.
EDIT
Link to the dataset - Dataset
Thanks
One way to improve your accuracy is to look at the autocorrelation of each variable, as suggested in the VAR documentation page:
https://www.statsmodels.org/dev/vector_ar.html
The bigger the autocorrelation value is for a specific lag, the more useful that lag will be to the process.
Another good idea is to look at the AIC and BIC criteria to assess your model (the same link above has an example of usage). Smaller values indicate a higher probability that you have found the true estimator.
This way, you can vary the order of your autoregressive model and pick the one that provides the lowest AIC and BIC, analyzed together. If AIC indicates the best model has a lag of 3 and BIC indicates the best model has a lag of 5, you should analyze lags 3, 4, and 5 to see which one gives the best results.
The best scenario would be to have more data (as 3 months is not much), but you can try these approaches to see if it helps.

Scaling TEST data which is not true representative of train data

I've built a model which I would like to test on unseen data. I feed the data in daily, and it can have a different range every day. For example, if I use MinMaxScaler(), I scale the training data to the [0, 1] interval.
Now, the maximum value in the training set is 100, which will be transformed to 1.
When my test data comes in daily, it could actually turn out that maximum value was actually 10, which would also be transformed to 1.
# min_max_scaler = preprocessing.MinMaxScaler()
# df_scaled = min_max_scaler.fit_transform(df.values)
I tried using normalisation instead, e.g. df_norm = (df - df.mean()) / (df.max() - df.min()), and then using these values on the test data:
test_norm = (test_df - df.mean()) / (df.max() - df.min())
But my data is not normally distributed. It is probably exponentially distributed, with a high number of 0s and a smaller number of large values.
No, your maximum test value (i.e. 10) will not be scaled to 1 but to 0.1, provided the scaler is used properly against the max and min learned from the training data.
That is achieved by calling only min_max_scaler.transform() on the test data; fit() or fit_transform() is to be used on the training data only.
So for training data the code is same:
df_train_scaled = min_max_scaler.fit_transform(df_train.values)
But for testing data, it becomes:
df_test_scaled = min_max_scaler.transform(df_test.values)
This way, the MinMaxScaler stores the max and min values seen during fit() on the training data and then uses them on the test data to scale it properly.
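A quick self-contained sketch (made-up numbers mirroring the 100 vs. 10 example above) shows the behaviour:
import numpy as np
from sklearn import preprocessing
train = np.array([[0.0], [50.0], [100.0]])  # training data, max value is 100
test = np.array([[10.0]])                   # test data, max value is only 10
min_max_scaler = preprocessing.MinMaxScaler()
train_scaled = min_max_scaler.fit_transform(train)  # learns min=0, max=100
test_scaled = min_max_scaler.transform(test)        # reuses the learned min/max
print(test_scaled)  # [[0.1]], not [[1.0]]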
