LightGBM predicts the same value - Python

I have a problem with LightGBM. When I call
lgb.train(.......)
it finishes in less than a millisecond (for a dataset of shape (10000, 25)), and when I call predict, all the output values are the same.
import pandas as pd
import lightgbm as lgb
from sklearn.preprocessing import StandardScaler

train = pd.read_csv('data/train.csv', dtype = dtypes)
test = pd.read_csv('data/test.csv')
test.head()
X = train.iloc[:10000, 3:-1].values
y = train.iloc[:10000, -1].values
sc = StandardScaler()
X = sc.fit_transform(X)
#pca = PCA(0.95)
#X = pca.fit_transform(X)
d_train = lgb.Dataset(X, label=y)
params = {}
params['learning_rate'] = 0.003
params['boosting_type'] = 'gbdt'
params['objective'] = 'binary'
params['metric'] = 'binary_logloss'
params['sub_feature'] = 0.5
params['num_leaves'] = 10
params['min_data'] = 50
params['max_depth'] = 10
num_round = 10
clf = lgb.train(params, d_train, num_round, verbose_eval=1000)
X_test = sc.transform(test.iloc[:100,3:].values)
pred = clf.predict(X_test, num_iteration = clf.best_iteration)
When I print pred, all the values are about 0.49.
It's my first time using the lightgbm module. Do I have an error in the code, or should I look for mismatches in the dataset?

Your num_round is too small; the model barely starts to learn before training stops. Also make verbose_eval smaller so you can see the results during training. My suggestion is to try the lgb.train call below:
clf = lgb.train(params, d_train, num_boost_round=5000, verbose_eval=10, early_stopping_rounds = 3500)
Always use early_stopping_rounds, so training stops when there is no evident improvement or the model starts to overfit.
Do not hesitate to ask more. Have fun.
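Note that early stopping needs a validation set to monitor; without one it has no effect. A minimal sketch of that setup, assuming the same params as above and an illustrative 80/20 split (recent LightGBM releases expect early stopping and logging via callbacks instead of these arguments):
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)  # hold out a validation split
d_train = lgb.Dataset(X_tr, label=y_tr)
d_val = lgb.Dataset(X_val, label=y_val, reference=d_train)
clf = lgb.train(params, d_train,
                num_boost_round=5000,
                valid_sets=[d_val],            # monitored set for early stopping
                early_stopping_rounds=100,     # stop when validation logloss stalls
                verbose_eval=10)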

Related

Create train and test with lags of multiple features

I have a classification problem for which I want to create train and test dataframes with 21 lags of multiple features (X variables). I already have an easy way to do this with a single feature, but I don't know how to adjust the code to use more variables (e.g. df['ETHLogReturn']).
The code I have for one variable is:
Ntest = 252
train = df.iloc[:-Ntest]
test = df.iloc[-Ntest:]
# Create data ready for the machine learning algorithm
series = df['BTCLogReturn'].to_numpy()[1:] # first change is NaN
# Did the price go up or down?
target = (series > 0) * 1
T = 21 # 21 Lags
X = []
Y = []
for t in range(len(series)-T):
    x = series[t:t+T]
    X.append(x)
    y = target[t+T]
    Y.append(y)
X = np.array(X).reshape(-1,T)
Y = np.array(Y)
N = len(X)
print("X.shape", X.shape, "Y.shape", Y.shape)
#output --> X.shape (8492, 21) Y.shape (8492,)
Then I create my train and test datasets like this:
Xtrain, Ytrain = X[:-Ntest], Y[:-Ntest]
Xtest, Ytest = X[-Ntest:], Y[-Ntest:]
# example of model:
lr = LogisticRegression()
lr.fit(Xtrain, Ytrain)
print(lr.score(Xtrain, Ytrain))
print(lr.score(Xtest, Ytest))
Does anyone have a suggestion for how to adjust this code for a model with lagged variables of multiple columns? Like:
df[['BTCLogReturn','ETHLogReturn']]
Many thanks for your help!
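One way to generalize the loop above is to lag every column at once and flatten each 21-step window into a single row. A sketch under that assumption, keeping the BTC direction as the target and reusing df, np and Ntest from the question:
cols = ['BTCLogReturn', 'ETHLogReturn']   # columns to lag
series = df[cols].to_numpy()[1:]          # shape (N, n_cols); first change is NaN
target = (df['BTCLogReturn'].to_numpy()[1:] > 0) * 1
T = 21
X, Y = [], []
for t in range(len(series) - T):
    X.append(series[t:t+T].flatten())     # 21 lags of every column in one row
    Y.append(target[t+T])
X = np.array(X).reshape(-1, T * len(cols))
Y = np.array(Y)
Xtrain, Ytrain = X[:-Ntest], Y[:-Ntest]
Xtest, Ytest = X[-Ntest:], Y[-Ntest:]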

What causes overfitting in the algorithm

import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
#reproducible random seed
seed = 1
np.random.seed(seed)
#Import and normalize the data
df = pd.read_csv('creditcard.csv')
#Exploring the data
# print df.head()
# print df.describe()
# print df.isnull().sum()
# count_class = pd.value_counts(df['Class'])
# count_class.plot(kind = 'bar')
# plt.title('Fraud class histogram')
# plt.xlabel('class')
# plt.ylabel('Frequency')
# plt.show()
# print('Clearly the data is totally unbalanced!')
#to normalize the amount column
# data['normAmount'] = StandardScaler().fit_transform(data['Amount'].reshape(-1, 1))
df['normAmount'] = StandardScaler().fit_transform(df['Amount'].values.reshape(-1, 1))
df = df.drop(['Time','V28','V27','V26','V25','V24','V23','V22','V20','V15','V13','V8','Amount'], axis =1)
X = df.iloc[:,df.columns!='Class']
Y = df.iloc[:,df.columns=='Class']
# number of records in the minority class
number_record_fraud = len(df[df.Class==1])
fraud_indices = np.array(df[df.Class==1].index)
#picking normal class
normal_indices = np.array(df[df.Class==0].index)
#select random x(number_record_fraud) numbers from normal_indices
random_normal_indices = np.random.choice(normal_indices,number_record_fraud,replace=False)
random_normal_indices = np.array(random_normal_indices)
#under sample data
under_sample_indices = np.concatenate([fraud_indices,random_normal_indices])
under_sample_data = df.iloc[under_sample_indices,:]
X_undersample = under_sample_data.iloc[:,under_sample_data.columns!='Class']
Y_undersample = under_sample_data.iloc[:,under_sample_data.columns=='Class']
# split data into train and test dataset
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size = 0.3)
X_train_undersample,X_test_undersample,Y_train_undersample,Y_test_undersample = train_test_split(X_undersample,Y_undersample,test_size=0.3)
#parameters
learning_rate = 0.05
training_epoch = 10
batch_size = 43
display_step = 1
#tf graph input
x = tf.placeholder(tf.float32,[None,18])
y = tf.placeholder(tf.float32,[None,1])
#set model weights
w = tf.Variable(tf.zeros([18,1]))
b = tf.Variable(tf.zeros([1]))
#construct model
pred = tf.nn.softmax(tf.matmul(x,w) + b) #softmax activation
#minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred),reduction_indices=1))
#Gradient descent
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
#initializing variables
init = tf.global_variables_initializer()
#launch the graph
with tf.Session() as sess:
    sess.run(init)
    #training cycle
    for epoch in range(training_epoch):
        total_batch = len(X_train_undersample)/batch_size
        avg_cost = 0
        #loop over all the batches
        for batch in range(total_batch):
            batch_xs = X_train.iloc[(batch)*batch_size:(batch+1)*batch_size]
            batch_ys = Y_train.iloc[(batch)*batch_size:(batch+1)*batch_size]
            # run optimizer and cost operation
            _, c = sess.run([optimizer,cost],feed_dict={x:batch_xs,y:batch_ys})
            avg_cost += c/total_batch
        correct_prediction = tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
        #display log per epoch step
        if (epoch+1) % display_step == 0:
            train_accuracy, newCost = sess.run([accuracy, cost], feed_dict={x: X_test,y: Y_test})
            print "test_set_accuracy:",accuracy.eval({x:X_test_undersample,y:Y_test_undersample})*100
            print "whole_set_accuracy:",accuracy.eval({x:X,y:Y})*100
            # print train_accuracy
            # print "cost",newCost
            print
    print 'optimization finished.'
Things I've tried to figure out what's causing it:
Tried changing the train dataset length.
Dropped some unneeded fields.
Tried adding validation blocks.
Dataset: link
There can be multiple reasons why a model overfits, and likewise multiple ways to debug and fix it. It's hard to tell from the code alone, because it also depends on the data, but here are some common reasons as well as fixes:
Too small a dataset: adding more data is a common overfitting fix.
Too complex a model: if you have many features, or complex polynomial features, try reducing complexity using feature selection.
Add regularization: I don't see regularization in your code, so try adding it (see the sketch below).
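For the last point, a minimal sketch of adding an L2 penalty to the cost defined above; it reuses w, pred, y and learning_rate from the question's TF 1.x graph, and l2_lambda is a hypothetical hyperparameter to tune on a validation split:
l2_lambda = 0.01  # hypothetical regularization strength
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred),reduction_indices=1)) \
       + l2_lambda * tf.nn.l2_loss(w)  # L2 penalty on the weights
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)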

Can't reproduce Xgb.cv cross-validation results

I am using Python 3.5 and the Python implementation of XGBoost, version 0.6.
I built a forward feature selection routine in Python, which iteratively builds the optimal set of features (i.e. the set leading to the best score; here the metric is binary classification error).
On my data set, using the xgb.cv routine, I can get down to an error rate of around 0.21 by increasing max_depth (of trees) up to 40...
But if I then do a custom cross-validation, using the same XGBoost parameters, same folds, same metric and same data set, the best score I reach is 0.70 with a max_depth of 4... and if I use the optimal max_depth obtained by my xgb.cv routine, my score drops to 0.65... I just don't understand what is happening...
My best guess is that xgb.cv uses different folds (i.e. shuffles the data before partitioning), but I also submit the folds as an input to xgb.cv (with the option shuffle=False)... so it might be something completely different...
Here is the code of the forward_feature_selection (using xgb.cv):
def Forward_Feature_Selection(train, y_train, params, num_round=30, threshold=0, initial_score=0.5, to_exclude = [], nfold = 5):
    k_fold = KFold(n_splits=13)
    selected_features = []
    gain = threshold + 1
    previous_best_score = initial_score
    train = train.drop(train.columns[to_exclude], axis=1) # df.columns is zero-based pd.Index
    features = train.columns.values
    selected = np.zeros(len(features))
    scores = np.zeros(len(features))
    while (gain > threshold): # we start an add-a-feature loop
        for i in range(0, len(features)):
            if (selected[i] == 0): # take only features not yet selected
                selected_features.append(features[i])
                new_train = train.iloc[:][selected_features]
                selected_features.remove(features[i])
                dtrain = xgb.DMatrix(new_train, y_train, missing = None)
                # dtrain = xgb.DMatrix(pd.DataFrame(new_train), y_train, missing = None)
                if (i % 10 == 0):
                    print("Launching XGBoost for feature " + str(i))
                xgb_cv = xgb.cv(params, dtrain, num_round, nfold=13, folds=k_fold, shuffle=False)
                if params['objective'] == 'binary:logistic':
                    scores[i] = xgb_cv.tail(1)["test-error-mean"] # classification
                else:
                    scores[i] = xgb_cv.tail(1)["test-rmse-mean"] # regression
            else:
                scores[i] = initial_score # discard already selected variables from candidates
        best = np.argmin(scores)
        gain = previous_best_score - scores[best]
        if (gain > 0):
            previous_best_score = scores[best]
            selected_features.append(features[best])
            selected[best] = 1
            print("Adding feature: " + features[best] + " increases score by " + str(gain) + ". Final score is now: " + str(previous_best_score))
    return (selected_features, previous_best_score)
and here is my "custom" cross validation:
mean_error_rate = 0
for train, test in k_fold.split(ds):
    dtrain = xgb.DMatrix(pd.DataFrame(ds.iloc[train]), dc.iloc[train]["bin_spread"], missing = None)
    gbm = xgb.train(params, dtrain, 30)
    dtest = xgb.DMatrix(pd.DataFrame(ds.iloc[test]), dc.iloc[test]["bin_spread"], missing = None)
    res.ix[test,"pred"] = gbm.predict(dtest)
    cv_reg = reg.fit(pd.DataFrame(ds.iloc[train]), dc.iloc[train]["bin_spread"])
    res.ix[test,"lasso"] = cv_reg.predict(pd.DataFrame(ds.iloc[test]))
    res.ix[test,"y_xgb"] = res.loc[test,"pred"] > 0.5
    res.ix[test, "xgb_right"] = (res.loc[test,"y_xgb"]==res.loc[test,"bin_spread"])
    print (str(100*np.sum(res.loc[test, "xgb_right"])/(N/13)))
    mean_error_rate += 100*(np.sum(res.loc[test, "xgb_right"])/(N/13))
print("mean_error_rate is : " + str(mean_error_rate/13))
using the following parameters:
params = {"objective": "binary:logistic",
          "booster": "gbtree",
          "max_depth": 4,
          "eval_metric": "error",
          "eta": 0.15}
res = pd.DataFrame(dc["bin_spread"])
k_fold = KFold(n_splits=13)
N = dc.shape[0]
num_trees = 30
And finally the call to my forward feature selection:
selfeat = Forward_Feature_Selection(dc,
                                    dc["bin_spread"],
                                    params,
                                    num_round = num_trees,
                                    threshold = 0,
                                    initial_score=999,
                                    to_exclude = [0,1,5,30,31],
                                    nfold = 13)
Any help in understanding what is happening will be greatly appreciated! Thanks in advance for any tips!
This is normal; I have experienced the same. Firstly, KFold can split differently each time. You have specified the folds in XGBoost, but if KFold is not splitting consistently the results will differ, which is normal.
Next, the initial state of the model is different each time.
There are also inner random states within XGBoost which can cause this; try changing the eval metric to see if the variance reduces. If a particular metric suits your needs, try to average the best parameters and use that as your optimal parameters.
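If the goal is to make the two runs comparable, one option (a sketch, not a guaranteed fix) is to pin every source of randomness: build one KFold with a fixed random_state, pass that same object to both routines, and set a seed in the XGBoost parameters:
from sklearn.model_selection import KFold

k_fold = KFold(n_splits=13, shuffle=True, random_state=42)  # identical folds every run
params = {"objective": "binary:logistic",
          "booster": "gbtree",
          "max_depth": 4,
          "eval_metric": "error",
          "eta": 0.15,
          "seed": 42}                                       # pins XGBoost's internal RNG
xgb_cv = xgb.cv(params, dtrain, num_round, folds=k_fold, seed=42)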

Pipeline giving different answer in sklearn python

I have written two programs which are supposed to follow the same logic, but they give different answers.
First-
train_data = train_features[:1710][:]
train_label = label_features[:1710][:].ravel()
test_data = train_features[1710:][:]
test_label = label_features[1710:][:].ravel()
def getAccuracy(ans):
    d = 0
    for i in range(np.size(ans,0)):
        if(ans[i] == test_label[i]):
            d += 1
    return (d*100)/float(np.size(ans,0))
estimators = [('pps', pps.RobustScaler()), ('clf', LogisticRegression())]
pipe = Pipeline(estimators)
pipe = pipe.fit(train_data,train_label)
ans = pipe.predict(test_data)
getAccuracy(ans)
Second-
train_data = train_features[:1710][:]
train_label = label_features[:1710][:].ravel()
test_data = train_features[1710:][:]
test_label = label_features[1710:][:].ravel()
def getAccuracy(ans):
    d = 0
    for i in range(np.size(ans,0)):
        if(ans[i] == test_label[i]):
            d += 1
    return (d*100)/float(np.size(ans,0))
def preprocess(features):
    return pps.RobustScaler().fit_transform(features)
train_data = preprocess(train_data)
clf = LogisticRegression().fit(train_data,train_label)
test_data = preprocess(test_data)
ans = clf.predict(test_data)
getAccuracy(ans)
The first one gives 80.81 and the second one gives 84.92. Why are they different?
Your second snippet is invalid, since your preprocess function fits the scaler to the test set as well, which should not happen. The Pipeline, on the other hand, fits the RobustScaler only on your train data and then calls transform on the test data.
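To make the second version behave like the Pipeline, fit the scaler on the training data only and reuse it to transform the test data; a minimal sketch with the same variable names:
scaler = pps.RobustScaler()
train_data = scaler.fit_transform(train_data)   # fit on train only
test_data = scaler.transform(test_data)         # reuse the train statistics
clf = LogisticRegression().fit(train_data, train_label)
ans = clf.predict(test_data)
getAccuracy(ans)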

Number of features of the model must match the input

For some reason the features of this dataset are being interpreted as rows: "Model n_features is 16 and input n_features is 18189", where 18189 is the number of rows and 16 is the correct number of features.
The suspect code is here:
for var in cat_cols:
    num = LabelEncoder()
    train[var] = num.fit_transform(train[var].astype('str'))
train['output'] = num.fit_transform(train['output'].astype('str'))
for var in cat_cols:
    num = LabelEncoder()
    test[var] = num.fit_transform(test[var].astype('str'))
test['output'] = num.fit_transform(test['output'].astype('str'))
clf = RandomForestClassifier(n_estimators = 10)
xTrain = train[list(features)].values
yTrain = train["output"].values
xTest = test[list(features)].values
xTest = test["output"].values
clf.fit(xTrain,yTrain)
clfProbs = clf.predict(xTest)  # Error happens here.
Anyone got any ideas?
Sample training data csv
tr4,42,"JobCat4","divorced","tertiary","yes",2,"yes","no","unknown",5,"may",0,1,-1,0,"unknown","TypeA"
Sample test data csv
tst2,47,"JobCat3","married","unknown","no",1506,"yes","no","unknown",5,"may",0,1,-1,0,"unknown",?
You have a small typo - you created the variable xTest and then immediately overwrote it with something incorrect. Change the offending lines to:
xTest = test[list(features)].values
yTest = test["output"].values
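With that change the test matrix has the same 16 features the model was trained on, so the predict call should work; a sketch of the corrected tail of the script, reusing the question's variable names:
xTest = test[list(features)].values
yTest = test["output"].values
clf.fit(xTrain, yTrain)
clfProbs = clf.predict(xTest)  # n_features now matches the model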
