I'm trying to predict volatility one step ahead with an SVM model, based on an example from the O'Reilly book Machine Learning for Financial Risk Management with Python. When I copy the example exactly (with S&P 500 data) it works well, but now I'm having trouble with this chunk of code when I use a particular fund's returns data:
import numpy as np
import pandas as pd
from scipy.stats import norm
from scipy.stats import uniform as sp_rand
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV

# returns
r = pd.Series([np.nan, 0.0013933, 0.00118874, 0.00076462, 0.00168565,
               -0.00018507, -0.00390753, 0.00307275, -0.00351472])
# horizon
t = 252
# mean of returns
mu = r.mean()
# critical value
z = norm.ppf(0.95)
# realized volatility
vol = r.rolling(5).std()
vol = pd.DataFrame(vol)
vol.reset_index(drop=True, inplace=True)
# SVM GARCH
r_svm = r ** 2
r_svm = r_svm.reset_index()
# inputs X (returns and realized volatility)
X = pd.concat([vol, r_svm], axis=1, ignore_index=True)
X = X.dropna().copy()
X = X.reset_index()
X.drop([1, 'index'], axis=1, inplace=True)
# labels y realized volatility shifted 1 period onward
vol = vol.dropna().reset_index()
vol.drop('index', axis=1, inplace=True)
# linear kernel
svr_lin = SVR(kernel='linear')
# hyperparameters grid
para_grid = {'gamma': sp_rand(),
             'C': sp_rand(),
             'epsilon': sp_rand()}
# svm classifier (regression?)
clf = RandomizedSearchCV(svr_lin, para_grid)
clf.fit(X[:-1].dropna().values,
vol[1:].values.reshape(-1,))
# prediction
n_vol = clf.predict(X.iloc[-1:])
The raised error is:
ValueError: Cannot have number of splits n_splits=5 greater than the number of samples: n_samples=3.
The code works with longer return series, so I assume the problem is the length of the array, but I can't figure out how to solve it. Can someone help me with that?
This error is raised because you are using RandomizedSearchCV with its default cv parameter.
By default, RandomizedSearchCV runs 5-fold cross-validation to find the best hyperparameters for the model.
5-fold cross-validation means splitting your training data into 5 subsets and training 5 different models on those splits.
It looks like you have fewer than 5 samples in your training set, so splitting the data into 5 folds isn't possible.
To fix the issue, either add more data or decrease the number of folds used by RandomizedSearchCV by passing the cv parameter:
clf = RandomizedSearchCV(svr_lin, para_grid, cv=2)
I'd recommend collecting more data, since 4 data points will most likely not be enough to make the model accurate or predictive.
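If you want to keep the snippet runnable on such a short series anyway, here is a minimal sketch (reusing the X and vol frames built above): lower cv and switch to a scoring function that is defined even when a test fold contains a single sample (R², the default score for SVR, is not):
clf = RandomizedSearchCV(svr_lin, para_grid, cv=2, n_iter=10,
                         scoring='neg_mean_absolute_error')
clf.fit(X[:-1].dropna().values,
        vol[1:].values.reshape(-1,))
n_vol = clf.predict(X.iloc[-1:])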
I'm trying to predict the future values of a share with scikit-learn regressors (but it could be the next value of anything; I've tried the same function on COVID case data with the same results), and it doesn't work.
I've written a function that takes my training dataset, the target variable, the test Xs and the features to take into account, and returns the prediction:
from sklearn.ensemble import GradientBoostingRegressor

def predict_share_valuesGRDBST(data, target_variable, X_test, features=None):
    # Split data into features (X) and target (y)
    if features:
        X = data[features]
    else:
        X = data.drop(target_variable, axis=1)
    y = data[target_variable]
    # Fit Gradient Boosting model to training data
    model = GradientBoostingRegressor(n_estimators=200, random_state=20)
    model.fit(X, y)
    # Use model to make predictions on the test rows
    next_values = model.predict(X_test[features])
    return next_values
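A call presumably looks like this (a sketch only; future_df is a placeholder name for the future-dated test frame described below):
features = ['OpenValue', 'TradeVolume', 'Date']
preds = predict_share_valuesGRDBST(data, 'CloseValue', future_df, features=features)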
The data variable looks like this:
Date        CloseValue  OpenValue  TradeVolume
...         ...         ...        ...
2023-01-19  100         90         1000
2023-01-20  110         101        1100
Target_variable is 'CloseValue'
X_test is like data but with values in future dates
features variable is like ['OpenValue', 'TradeVolume', 'Date']
But the returned values don't fit at all. I've tried other regressors (AdaBoost, RandomForest) but they all give me the same wrong results.
That's why I think I am doing something wrong and it's not just a problem of correlation between variables; it seems they're working on the wrong data, but I cannot figure out how to fix it. Any ideas?
I have built a multi-step, multivariate LSTM model to predict the target variable 5 days into the future with 5 days of look-back. The model runs smoothly (even though it needs further improvement), but I cannot correctly invert the transformation applied to the data once I get my predictions.
I have seen on the web that there are many ways to pre-process and transform data. I decided to follow these steps:
Data fetching and cleaning
import yfinance

df = yfinance.download(['^GSPC', '^GDAXI', 'CL=F', 'AAPL'], period='5y', interval='1d')['Adj Close']
df.dropna(axis=0, inplace=True)
df.describe()
Data set table
Split the data set into train and test
size = int(len(df) * 0.80)
df_train = df.iloc[:size]
df_test = df.iloc[size:]
Scale the train and test sets with MinMaxScaler() (fitted on the training set only)
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0,1))
df_train_sc = scaler.fit_transform(df_train)
df_test_sc = scaler.transform(df_test)
Creation of 3D X and y time-series compatible with the LSTM model
I borrowed the following function from this article
def create_X_Y(ts: np.array, lag=1, n_ahead=1, target_index=0) -> tuple:
    """
    A method to create X and Y matrix from a time series array for the training of
    deep learning models
    """
    # Extracting the number of features that are passed from the array
    n_features = ts.shape[1]
    # Creating placeholder lists
    X, Y = [], []
    if len(ts) - lag <= 0:
        X.append(ts)
    else:
        for i in range(len(ts) - lag - n_ahead):
            Y.append(ts[(i + lag):(i + lag + n_ahead), target_index])
            X.append(ts[i:(i + lag)])
    X, Y = np.array(X), np.array(Y)
    # Reshaping the X array to an RNN input shape
    X = np.reshape(X, (X.shape[0], lag, n_features))
    return X, Y
# In this example let's assume that the first column (AAPL) is the target variable.
trainX, trainY = create_X_Y(df_train_sc, lag=5, n_ahead=5, target_index=0)
testX, testY = create_X_Y(df_test_sc, lag=5, n_ahead=5, target_index=0)
Model creation
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

def build_model(optimizer):
    grid_model = Sequential()
    grid_model.add(LSTM(64, activation='tanh', return_sequences=True,
                        input_shape=(trainX.shape[1], trainX.shape[2])))
    grid_model.add(LSTM(64, activation='tanh', return_sequences=True))
    grid_model.add(LSTM(64, activation='tanh'))
    grid_model.add(Dropout(0.2))
    grid_model.add(Dense(trainY.shape[1]))
    grid_model.compile(loss='mse', optimizer=optimizer)
    return grid_model
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV

grid_model = KerasRegressor(build_fn=build_model, verbose=1, validation_data=(testX, testY))

parameters = {'batch_size': [12, 24],
              'epochs': [8, 30],
              'optimizer': ['adam', 'Adadelta']}

grid_search = GridSearchCV(estimator=grid_model,
                           param_grid=parameters,
                           cv=3)
grid_search = grid_search.fit(trainX,trainY)
grid_search.best_params_
my_model = grid_search.best_estimator_.model
Get predictions
yhat = my_model.predict(testX)
Invert transformation of predictions and actual values
Here my problems begin, because I am not sure which way to go. I have read many tutorials, but it seems that those authors prefer to apply MinMaxScaler() to the entire dataset before splitting it into train and test. I do not agree with this because, otherwise, the training data would be scaled using information we should not have access to (i.e. the test set). So I followed my own approach, but I am stuck here.
I found this possible solution on another post, but it's not working for me:
# invert scaling for forecast
pred_scaler = MinMaxScaler(feature_range=(0, 1)).fit(df_test.values[:,0].reshape(-1, 1))
inv_yhat = pred_scaler.inverse_transform(yhat)
# invert scaling for actual
inv_y = pred_scaler.inverse_transform(testY)
In fact, when I double-check the last values of the target in my original data set, they don't match the inverse-scaled version of testY.
Can someone please help me on this? Many thanks in advance for your support!
A few things can be mentioned here. First, you cannot inverse-transform something the scaler did not see. This happens because you use two different scalers: the network predicts values in the range of scaler 1 (fitted on training data), and there is no guarantee those values lie within the range of scaler 2 (fitted on test data). Second, best practice is to fit your scaler on the training set and use that same scaler (transform only) on the test data as well; then you can reverse-transform your test results. Third, if the scaling goes off because the test set has completely different values, e.g. with live streaming data, it is up to you to deal with it; the min-max scaler will, for example, produce values > 1.0.
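A minimal sketch of that approach, using the objects from the question (the target is assumed to be column 0 of df_train, i.e. AAPL):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# One scaler for the features, fitted on the training set only
scaler = MinMaxScaler(feature_range=(0, 1))
df_train_sc = scaler.fit_transform(df_train)
df_test_sc = scaler.transform(df_test)   # transform only, no refit

# A second scaler for the target column alone, also fitted on the training set,
# so that the multi-step predictions can be mapped back to original prices
target_scaler = MinMaxScaler(feature_range=(0, 1))
target_scaler.fit(df_train.values[:, 0].reshape(-1, 1))

# yhat and testY have shape (n_samples, n_ahead); invert column by column
inv_yhat = np.hstack([target_scaler.inverse_transform(yhat[:, [j]])
                      for j in range(yhat.shape[1])])
inv_y = np.hstack([target_scaler.inverse_transform(testY[:, [j]])
                   for j in range(testY.shape[1])])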
The utility of Shapley Additive Explanations (SHAP values) is to understand how each feature contributes to a model's prediction. For some objectives, such as regression with RMSE as an objective function, SHAP values are in the native units of the label values. For example, SHAP values could be expressed as USD if estimating housing costs. As you will see below, this is not the case for all objective functions. In particular, Tweedie regression objectives do not yield SHAP values in native units. This is a problem for interpretation, as we would want to know how housing costs are impacted by features in terms of +/- dollars.
Given this information, my question is: How do we transform the SHAP values of each individual feature into the data space of the target labels when explaining models with a Tweedie regression objective?
I'm not aware of any packages that currently implement such a transformation. This remains unresolved in the package put out by the shap authors themselves.
I illustrate the finer points of this question with the R implementation of lightgbm in the following:
library(tweedie)
library(lightgbm)
set.seed(123)
tweedie_variance_power <- 1.2
labels <- rtweedie(1000, mu = 1, phi = 1, power = tweedie_variance_power)
hist(labels)
feat1 <- labels + rnorm(1000) #good signal for label with some noise
feat2 <-rnorm(1000) #garbage feature
feat3 <-rnorm(1000) #garbage feature
features <- cbind(feat1, feat2, feat3)
dTrain <- lgb.Dataset(data = features,
label = labels)
params <- c(objective = 'tweedie',
tweedie_variance_power = tweedie_variance_power)
mod <- lgb.train(data = dTrain,
params = params,
nrounds = 100)
#Predictions in the native units of the labels
predsNative <- predict(mod, features, rawscore = FALSE)
#Predictions in the raw format
predsRaw <- predict(mod, features, rawscore = TRUE)
#We do not expect these values to be equal
all.equal(predsNative, predsRaw)
"Mean relative difference: 1.503072"
#We expect values to be equal if raw scores are exponentiated
all.equal(predsNative, exp(predsRaw))
"TRUE" #... our expectations are correct
#SHAP values
shapNative <- predict(mod, features, rawscore = FALSE, predcontrib = TRUE)
shapRaw <- predict(mod, features, rawscore = TRUE, predcontrib = TRUE )
#Are there differences between shap values when rawscore is TRUE or FALSE?
all.equal(shapNative, shapRaw)
"TRUE" #outputs are identical, that is surprising!
#So are the shap values in raw or native format?
#To answer this question we can sum them,
#testing the raw case first
all.equal(rowSums(shapRaw), predsRaw)
"TRUE"
#from this we can conclude that shap values are not in native units,
#regardless of whether rawscore is TRUE or FALSE
#Test native scores just to prove point
all.equal(rowSums(shapNative), predsNative)
"Mean relative difference: 1.636892" # reaffirms that shap values are not in native units
#However, we can perform this operation on the raw shap scores
#to get the prediction in the native value
all.equal(exp(rowSums(shapRaw)), predsNative)
"TRUE"
#reversing the operations does not yield the same result
all.equal(rowSums(exp(shapRaw)), predsNative)
"Mean relative difference: 0.7662481"
#The last line is relevant because it implies that the relationship between
#native predictions and exponentiated shap values is not linear.
#So, given that the point of SHAP is to understand how each feature impacts
#the prediction in its native units, the raw shap values are not as useful
#as they could be.
#Thus, how would we convert each of these four raw shap value elements to
#native units, so that we understand their contributions to the prediction
#in the currency of native units?
shapRaw[1,]
-0.15429227 0.04858757 -0.27715359 -0.48454457
ORIGINAL POST AND EDIT
My understanding of SHAP values is that they are in the native units of the labels/response when conducting regression, and that the sum of the SHAP values approximates the model's prediction.
I am trying to extract SHAP values in LightGBM package, with a Tweedie regression objective, but find that the SHAP values are not in the native units of the labels and that they do not sum to predicted values.
It appears that they must be exponentiated; is this correct?
Side note: I understand that the final column of the SHAP values matrix represents the base prediction, and must be added.
Reproducible example:
library(tweedie)
library(caret)
library(lightgbm)
set.seed(123)
tweedie_variance_power <- 1.2
labels <- rtweedie(1000, mu = 1, phi = 1, power = tweedie_variance_power)
hist(labels)
feat1 <- labels + rnorm(1000) #good signal for label with some noise
feat2 <-rnorm(1000) #garbage feature
feat3 <-rnorm(1000) #garbage feature
features <- cbind(feat1, feat2, feat3)
dTrain <- lgb.Dataset(data = features,
label = labels)
params <- c(objective = 'tweedie',
tweedie_variance_power = tweedie_variance_power)
mod <- lgb.train(data = dTrain,
params = params,
nrounds = 100)
preds <- predict(mod, features)
plot(preds, labels,
main = paste('RMSE =',
RMSE(pred = preds, obs = labels)))
#shap values are summing to negative values?
shap_vals <- predict(mod, features, predcontrib = TRUE, rawscore = FALSE)
shaps_sum <- rowSums(shap_vals)
plot(shaps_sum, labels,
main = paste('RMSE =',
RMSE(pred = shaps_sum, obs = labels)))
#maybe we need to exponentiate?
shap_vals_exp <- exp(shap_vals)
shap_vals_exp_sum <- rowSums(shap_vals_exp)
#still looks a little weird, overpredicting
plot(shap_vals_exp_sum, labels,
main = paste('RMSE =',
RMSE(pred = shap_vals_exp_sum, obs = labels)))
EDIT
The order of operations is to sum the SHAP values first and then exponentiate, which gives you the predictions in native units. However, I am still unclear on how to transform the feature-level values into the native response units.
shap_vals_sum_exp <- exp(shaps_sum)
plot(shap_vals_sum_exp, labels,
main = paste('RMSE =',
RMSE(pred = shap_vals_sum_exp, obs = labels)))
I will show how to reconcile SHAP values and model predictions in Python, both in raw scores and in original units. Hopefully this will help you map what is happening back to R.
Step 1. Generate dataset
# pip install tweedie
import numpy as np
import tweedie

y = tweedie.tweedie(1.2, 1, 1).rvs(size=1000)
X = np.random.randn(1000, 3)
Step 2. Fit model
from lightgbm.sklearn import LGBMRegressor
lgb = LGBMRegressor(objective = 'tweedie')
lgb.fit(X,y)
Step 3. Understand what shap values are.
Shap values for 0th data point
shap_values = lgb.predict(X, pred_contrib=True)
shap_values[0]
array([ 0.36841812, -0.15985678, 0.28910617, -0.27317984])
The first 3 are the per-feature contributions relative to the baseline, i.e. the SHAP values themselves:
shap_values[0,:3].sum()
0.4976675073764354
The 4th is baseline in raw scores:
shap_values[0,3]
-0.2731798364061747
Together they add up to the model prediction in raw scores:
shap_values[0,:3].sum() + shap_values[0,3]
0.22448767097026068
Let's check against raw model predictions:
preds = lgb.predict(X, raw_score=True)
preds[0]
0.2244876709702609
EDIT. Conversion between raw scores and original units
To convert between raw scores and original units for the Tweedie (and also Poisson and Gamma) objectives, you need to be aware of 2 facts:
The original value is the exp of the raw score
The exp of a sum is the product of exps
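In symbols, with phi_i the raw SHAP value of feature i and phi_0 the raw base value:
pred_native = exp(phi_0 + sum_i phi_i) = exp(phi_0) * prod_i exp(phi_i)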
Demo:
0th prediction in original units:
lgb.predict([X[0,:]])
array([0.39394102])
Shap values for 0th row in raw score space:
shap_values = lgb.predict(X, pred_contrib=True, raw_score=True)
shap_values[0]
array([-0.77194274, -0.08343294, 0.22740536, -0.30358374])
Conversion of shap values to original units (product of exponents):
np.prod(np.exp(shap_values[0]))
0.3939410249402226
Looks similar to me again.
I'm trying to build an old-school model using only an autoregression algorithm. I found out that there's an implementation of it in the statsmodels package. I've read the documentation, and as I understand it, it should work like ARIMA. So, here's my code:
import statsmodels.api as sm
model = sm.tsa.AutoReg(df_train.beer, 12).fit()
And when I want to predict new values, I'm trying to follow the documentation:
y_pred = model.predict(start=df_test.index.min(), end=df_test.index.max())
# or
y_pred = model.predict(start=100, end=1000)
Both return a list of NaNs.
Also, model.predict(0, df_train.size - 1) predicts real values, but model.predict(0, df_train.size) returns a list of NaNs.
Am I doing something wrong?
P.S. I know there are ARIMA, ARMA and SARIMAX algorithms that can be used as basic autoregression, but I need AutoReg specifically.
We can do the forecasting in a couple of ways:
by directly using the predict() function, and
by using the definition of the AR(p) process and the parameters learnt with AutoReg(); this will be helpful for short-term predictions, as we shall see.
Let's start with a sample dataset from statsmodels; the data looks like the following:
import statsmodels.api as sm
import matplotlib.pyplot as plt

data = sm.datasets.sunspots.load_pandas().data['SUNACTIVITY']
plt.plot(range(len(data)), data)
Let's fit an AR(p) process to model the time series and use the partial autocorrelation (PACF) plot to find the order p, as shown below.
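A sketch of how that PACF plot can be produced (with statsmodels' plot_pacf):
from statsmodels.graphics.tsaplots import plot_pacf
plot_pacf(data, lags=30)
plt.show()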
As seen above, the first few PACF values remain significant, so let's use p=10 for the AR(p) model.
Let's divide the data into training and validation (test) datasets and fit auto-regressive model of order 10 using the training data:
from statsmodels.tsa.ar_model import AutoReg
n = len(data)
ntrain = int(n*0.9)
ntest = n - ntrain
lag = 10
res = AutoReg(data[:ntrain], lags = lag).fit()
Now, use the predict() function for forecasting all values corresponding to the held-out dataset:
preds = res.model.predict(res.params, start=n-ntest, end=n-1)  # end index is inclusive
Notice that we can get the exactly same predictions using the parameters from the trained model, as shown below:
x = data[ntrain-lag:ntrain].values
preds1 = []
for t in range(ntrain, n):
    pred = res.params[0] + np.sum(res.params[1:]*x[::-1])
    x[:lag-1], x[lag-1] = x[-(lag-1):], pred
    preds1.append(pred)
Note that the forecast values generated this way are the same as the ones obtained using the predict() function above.
np.allclose(preds.values, np.array(preds1))
# True
Now, let's plot the forecast values for the test data:
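A sketch of that plot (using preds from the predict() call above):
plt.plot(range(ntrain, n), data[ntrain:].values, label='actual')
plt.plot(range(ntrain, n), preds.values, label='long-term forecast')
plt.legend()
plt.show()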
As can be seen, for long-term prediction the quality of forecasting is not that good, since the forecasted values are themselves fed back in as inputs for subsequent forecasts.
Let's instead go for short-term predictions now and use the last lag points from the dataset to forecast the next value, as shown in the next code snippet.
preds = []
for t in range(ntrain, n):
    pred = res.params[0] + np.sum(res.params[1:]*data[t-lag:t].values[::-1])
    preds.append(pred)
As can be seen from the next plot, short-term forecasting works much better:
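A sketch of that comparison plot (preds here is the list of one-step-ahead forecasts from the loop above):
plt.plot(range(ntrain, n), data[ntrain:].values, label='actual')
plt.plot(range(ntrain, n), preds, label='1-step-ahead forecast')
plt.legend()
plt.show()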
You can use this code for forecasting:
import statsmodels.api as sm

model = sm.tsa.AutoReg(df_train.beer, 12).fit()
y_pred = model.model.predict(model.params, start=df_test.index.min(), end=df_test.index.max())
from statsmodels.tsa.ar_model import AutoReg

model = AutoReg(dataset[''], lags=1)
ARFit = model.fit()
forecasted = ARFit.predict(start=len(dataset), end=len(dataset)+12)

# visualization
dataset[''].plot(figsize=(12,8), legend=True)
forecasted.plot(legend=True)
I use Python's scikit-learn module to predict some values from a CSV file, using the Random Forest Regressor. As an example, I have 8 train values and 3 values to predict. Which of the following codes should I use? Do I have to give all target values at once (A) or separately (B)?
Variant A:
#Reading CSV file
from numpy import genfromtxt
from sklearn.ensemble import RandomForestRegressor

dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:]
#Target values to predict
target = [x[8:11] for x in dataset]
#Train values to train on
train = [x[0:8] for x in dataset]
#Starting training
rf = RandomForestRegressor(n_estimators=300)
rf.fit(train, target)
Variant B:
#Reading CSV file
dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:]
#Target values to predict
target1 = [x[8] for x in dataset]
target2 = [x[9] for x in dataset]
target3 = [x[10] for x in dataset]
#Train values to train on
train = [x[0:8] for x in dataset]
#Starting trainings
rf1 = RandomForestRegressor(n_estimators=300)
rf1.fit(train, target1)
rf2 = RandomForestRegressor(n_estimators=300)
rf2.fit(train, target2)
rf3 = RandomForestRegressor(n_estimators=300)
rf3.fit(train, target3)
Which version is correct?
Thanks in advance!
Both are possible, but they do different things.
The first learns a joint model for all entries of y; the second learns independent models for the different entries of y. If there are meaningful relations between the entries of y that can be learned, the joint model should be more accurate.
As you are training on very little data and don't regularize, I imagine you are simply overfitting in the joint case. I am not entirely sure about the splitting criterion in the regression case, but it takes much longer for a leaf to be "pure" if the label space is three-dimensional than if it is just one-dimensional. So you will learn more complex models that are not warranted by the little data you have.
"8 train values and 3 values" is probably best expressed as "8 features and 3 target variables" in usual machine learning parlance.
Both variants should work and yield the similar predictions as RandomForestRegressor has been made to support multi output regression.
The predictions won't be exactly the same as RandomForestRegressor is a non deterministic algorithm though. But on average the predictive quality of both approaches should be the same.
Edit: see Andreas' answer instead.
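For reference, a minimal sketch of both variants on synthetic data with a current scikit-learn API (all names here are illustrative):
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, Y = make_regression(n_samples=200, n_features=8, n_targets=3, random_state=0)

# Variant A: one joint model predicting all 3 targets at once
rf_joint = RandomForestRegressor(n_estimators=300, random_state=0)
rf_joint.fit(X, Y)
pred_joint = rf_joint.predict(X[:5])              # shape (5, 3)

# Variant B: three independent single-output models
preds = []
for j in range(Y.shape[1]):
    rf_j = RandomForestRegressor(n_estimators=300, random_state=0)
    rf_j.fit(X, Y[:, j])
    preds.append(rf_j.predict(X[:5]))
pred_independent = np.column_stack(preds)         # shape (5, 3)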