Scikit-learn machine learning script for two datasets - Python

Not a lot of wisdom here... but I have a script that will fit and test the algorithm two times with the for i in range loop to see if there is any variation in root mean squared error.
Is it possible to modify the code so the loop tests two different datasets? I.e., df would run first and compute its RMSE, then df2 would run and compute its RMSE, and then I could compare/print the two RMSEs. Both datasets have the same ['Demand'] column as the response variable.
#Test random Forest
import numpy as np
from sklearn import preprocessing, neighbors
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import joblib  # sklearn.externals.joblib is deprecated; joblib is not used in this snippet anyway
import math

rmses = []
for i in range(2):
    X = np.array(df2.drop(['Demand'], axis=1))
    y = np.array(df2['Demand'])
    offset = int(X.shape[0] * 0.7)
    X_train, y_train = X[:offset], y[:offset]
    X_test, y_test = X[offset:], y[offset:]
    clf = RandomForestRegressor(n_estimators=60, min_samples_split=6)
    clf.fit(X_train, y_train)
    mse = mean_squared_error(y_test, clf.predict(X_test))
    rmse = math.sqrt(mse)
    print("rmse: %.4f" % rmse)
    rmses.append(rmse)
print(sum(rmses)/len(rmses))

You can create a list of dfs and iterate over that:
rmses = []
df_lst = [df1, df2]
for df in df_lst:
    X = np.array(df.drop(['Demand'], axis=1))
    y = np.array(df['Demand'])
    offset = int(X.shape[0] * 0.7)
    X_train, y_train = X[:offset], y[:offset]
    X_test, y_test = X[offset:], y[offset:]
    clf = RandomForestRegressor(n_estimators=60, min_samples_split=6)
    clf.fit(X_train, y_train)
    mse = mean_squared_error(y_test, clf.predict(X_test))
    rmse = math.sqrt(mse)
    print("rmse: %.4f" % rmse)
    rmses.append(rmse)
print(sum(rmses)/len(rmses))
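If you also want to compare the two results directly rather than only printing the average, a small follow-up could look like this (the dataset labels are just for illustration):
for name, r in zip(['df1', 'df2'], rmses):
    print("%s rmse: %.4f" % (name, r))
print("difference: %.4f" % abs(rmses[0] - rmses[1]))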

You could use an auxiliary df and assign the dataframe you want to fit on each iteration by using a condition:
for i in range(2):
    if i == 0:
        aux_df = df
    else:
        aux_df = df2
    .
    .
    .
That way you can use the first df in the first iteration and df2 in the second iteration.
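For illustration, a minimal sketch of what that could look like once the rest of the loop body is filled in (assuming df and df2 both contain the 'Demand' column, as in the question):
rmses = []
for i in range(2):
    aux_df = df if i == 0 else df2
    X = np.array(aux_df.drop(['Demand'], axis=1))
    y = np.array(aux_df['Demand'])
    offset = int(X.shape[0] * 0.7)
    X_train, y_train = X[:offset], y[:offset]
    X_test, y_test = X[offset:], y[offset:]
    clf = RandomForestRegressor(n_estimators=60, min_samples_split=6)
    clf.fit(X_train, y_train)
    rmse = math.sqrt(mean_squared_error(y_test, clf.predict(X_test)))
    print("dataset %d rmse: %.4f" % (i + 1, rmse))
    rmses.append(rmse)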

How can we include a prediction column in the initial dataset/dataframe after performing K-Fold cross validation?

I would like to run a K-fold cross validation on my data using a classifier. I want to include the prediction (or predicted probability) columns for each sample directly into the initial dataset/dataframe. Any ideas?
from sklearn.metrics import accuracy_score, roc_auc_score
import pandas as pd
from sklearn.model_selection import KFold

k = 5
kf = KFold(n_splits=k, random_state=None)
acc_score = []
auroc_score = []
# X, y and the model are assumed to be defined earlier
for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
    y_train, y_test = y[train_index], y[test_index]
    model.fit(X_train, y_train)
    pred_values = model.predict(X_test)
    predict_prob = model.predict_proba(X_test.values)[:, 1]
    auroc = roc_auc_score(y_test, predict_prob)
    acc = accuracy_score(pred_values, y_test)
    auroc_score.append(auroc)
    acc_score.append(acc)
avg_acc_score = sum(acc_score)/k
print('accuracy of each fold - {}'.format(acc_score))
print('Avg accuracy : {}'.format(avg_acc_score))
print('AUROC of each fold - {}'.format(auroc_score))
print('Avg AUROC : {}'.format(sum(auroc_score)/k))
Given this code, how could I go about adding a prediction column, or even better, the predicted probability columns for each sample, to the initial dataset?
In 10-fold cross-validation, each example (sample) will be used exactly once in a test set and 9 times in a training set. So, after 10-fold cross-validation, the result should be a dataframe where I would have the predicted class for ALL examples in the dataset. Each example will be assigned its initial features, its labelled class, and the class predicted computed in the cross-validation fold where that example was used in the test set.
You can use cross_val_predict (see the help page); it basically returns the cross-validated estimates:
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import accuracy_score
from sklearn import datasets, linear_model
from sklearn.linear_model import LogisticRegression
import pandas as pd
X,y = make_classification()
df = pd.DataFrame(X,columns = ["feature{:02d}".format(i) for i in range(X.shape[1])])
df['label'] = y
df['pred'] = cross_val_predict(LogisticRegression(), X, y, cv=KFold(5))
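Since the question also mentions predicted probabilities: cross_val_predict accepts a method argument, so the out-of-fold probabilities can be stored the same way (the column names here are just for illustration):
proba = cross_val_predict(LogisticRegression(), X, y, cv=KFold(5), method='predict_proba')
df['proba_class0'] = proba[:, 0]
df['proba_class1'] = proba[:, 1]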
You can use the .loc method to accomplish this. This question has a nice answer that shows how to use it: df.loc[index_position, "column_name"] = some_value
So, here is an edited version of the code you posted (I needed data, and removed roc_auc since we aren't using probabilities, per your edit):
from sklearn.metrics import accuracy_score, roc_auc_score
import pandas as pd
from sklearn.model_selection import KFold
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = MLPClassifier()
k = 5
kf = KFold(n_splits=k, random_state=None)
acc_score = []
# Create columns
X['Prediction'] = 1
# Define what values to use for the model
model_columns = [x for x in X.columns if x != 'Prediction']
for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
    y_train, y_test = y[train_index], y[test_index]
    model.fit(X_train[model_columns], y_train)
    pred_values = model.predict(X_test[model_columns])
    acc = accuracy_score(pred_values, y_test)
    acc_score.append(acc)
    # Add values to the dataframe
    X.loc[test_index, 'Prediction'] = pred_values
avg_acc_score = sum(acc_score)/k
print('accuracy of each fold - {}'.format(acc_score))
print('Avg accuracy : {}'.format(avg_acc_score))
# Add label back per question
X['Label'] = y
# Print first 5 rows to show that it works
print(X.head(n=5))
Yields
accuracy of each fold - [0.9210526315789473, 0.9122807017543859, 0.9736842105263158, 0.9649122807017544, 0.8672566371681416]
Avg accuracy : 0.927837292345909
mean radius mean texture ... Prediction Label
0 17.99 10.38 ... 0 0
1 20.57 17.77 ... 0 0
2 19.69 21.25 ... 0 0
3 11.42 20.38 ... 1 0
4 20.29 14.34 ... 0 0
[5 rows x 32 columns]
(Obviously the model/values etc are all arbitrary)
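If you do also want the probability columns asked about in the original question, the same .loc pattern works; a sketch, reusing kf, model and model_columns from the snippet above and assuming a classifier that implements predict_proba:
X['Prob_class1'] = 0.0
for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
    y_train, y_test = y[train_index], y[test_index]
    model.fit(X_train[model_columns], y_train)
    # probability of the positive class for the held-out fold
    X.loc[test_index, 'Prob_class1'] = model.predict_proba(X_test[model_columns])[:, 1]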

Training loop for XGBoost in different dataset

I have developed several different datasets and I want to write a for loop that trains on each of them and, at the end, gives me the RMSE for each dataset. I tried a for loop, but it does not work: it returns the same value for every dataset, while I know the values should differ. The code I have written is below:
for i in NEW_middle_index:
    DF = df1.iloc[i-100:i+100, :]
    # Append an empty sublist inside the list
    FINAL_DF.append(DF)
    y = DF.iloc[:, 3]
    X = DF.drop(columns='Target')
    index_train = int(0.7 * len(X))
    X_train = X[:index_train]
    y_train = y[:index_train]
    X_test = X[index_train:]
    y_test = y[index_train:]
    scaler_x = MinMaxScaler().fit(X_train)
    X_train = scaler_x.transform(X_train)
    X_test = scaler_x.transform(X_test)

xgb_r = xg.XGBRegressor(objective='reg:linear',
                        n_estimators=20, seed=123)

for i in range(len(NEW_middle_index)):
    # print(i)
    # Fitting the model
    xgb_r.fit(X_train, y_train)
    # Predict the model
    pred = xgb_r.predict(X_test)
    # RMSE Computation
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    # print(rmse)
    RMSE.append(rmse)
Not sure if you indented it correctly. You are overwriting X_train and X_test, so when you fit your model it's always on the same dataset, hence you get the same results.
One option is to fit the model as soon as you create the train/test dataframes. Otherwise, if you want to keep the train/test sets, maybe something like below, storing them in a list of dictionaries, without changing too much of your code:
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import xgboost as xg

df1 = pd.DataFrame(np.random.normal(0, 1, (600, 3)))
df1['Target'] = np.random.uniform(0, 1, 600)
NEW_middle_index = [100, 300, 500]
NEWDF = []
for i in NEW_middle_index:
    y = df1.iloc[i-100:i+100, 3]
    X = df1.iloc[i-100:i+100, :].drop(columns='Target')
    index_train = int(0.7 * len(X))
    scaler_x = MinMaxScaler().fit(X)
    X_train = scaler_x.transform(X[:index_train])
    y_train = y[:index_train]
    X_test = scaler_x.transform(X[index_train:])
    y_test = y[index_train:]
    NEWDF.append({'X_train': X_train, 'y_train': y_train, 'X_test': X_test, 'y_test': y_test})
Then we fit and calculate RMSE:
RMSE = []
xgb_r = xg.XGBRegressor(objective='reg:linear', n_estimators=20, seed=123)
for i in range(len(NEW_middle_index)):
    xgb_r.fit(NEWDF[i]['X_train'], NEWDF[i]['y_train'])
    pred = xgb_r.predict(NEWDF[i]['X_test'])
    rmse = np.sqrt(mean_squared_error(NEWDF[i]['y_test'], pred))
    RMSE.append(rmse)
RMSE
[0.3524827559800294, 0.3098101362502435, 0.3843173269966071]

KNN model, accuracy(clf.score) returns 0

I am working on a simple KNN model with 3 nearest neighbours to predict a weight.
However, the accuracy is 0.0 and I don't know why.
The code does give me a weight prediction of 58 / 59.
This is the reproducible code:
import numpy as np
from sklearn import preprocessing, neighbors
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.metrics import accuracy_score

#Create df
data = {"ID": [i for i in range(1, 11)],
        "Height": [5, 5.11, 5.6, 5.9, 4.8, 5.8, 5.3, 5.8, 5.5, 5.6],
        "Age": [45, 26, 30, 34, 40, 36, 19, 28, 23, 32],
        "Weight": [77, 47, 55, 59, 72, 60, 40, 60, 45, 58]
        }
df = pd.DataFrame(data, columns=[x for x in data.keys()])
print("This is the original df:")
print(df)

#Feature Engineering
df.drop(["ID"], axis=1, inplace=True)
X = np.array(df.drop(["Weight"], axis=1))
y = np.array(df["Weight"])

#define training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

#Build clf with n = 3
clf = neighbors.KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

#accuracy
accuracy = clf.score(X_test, y_test)
print("\n accuracy = ", accuracy)

#Prediction on 11th
ans = np.array([5.5, 38])
ans = ans.reshape(1, -1)
prediction = clf.predict(ans)
print("\nThis is the ans: ", prediction)
You are classifying Weight, which is a continuous (not a discrete) variable. This should be a regression rather than a classification. Try KNeighborsRegressor.
To evaluate your result, use metrics for regression such as R2 score.
If your score is low, that can mean different things: training set too small, test set too different from training set, regression model not adequate...
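A minimal sketch of that change, reusing the train/test split from the question (the regressor and R2 scoring are just one reasonable choice):
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X_train, y_train)
# R2 on the held-out data; with only 2 test rows this will be very noisy
print("R2 =", r2_score(y_test, reg.predict(X_test)))
print("Predicted weight for [5.5, 38]:", reg.predict(np.array([[5.5, 38]])))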

Improve precision of my predictive technique in Python

I am using the following Python code to make output predictions depending on some values using decision trees based on entropy/gini index. My input data is contained in the file: https://drive.google.com/file/d/1C8GZ2wiqFUW3WuYxyc0G3axgkM1Uwsb6/view?usp=sharing
The first column in the file, "gold", contains the output I am trying to predict (either T or N). The remaining columns contain 0/1 features that I can use to predict the first column. I use a 30% test set and a 70% training set. I get the same precision/recall whether I use entropy or the gini index: a precision of 0.80 for T and a recall of 0.54 for T. I would like to increase the precision of T, and I am okay if the recall for T goes down; I am willing to accept this tradeoff. I do not care about the precision/recall of the N predictions; improving the precision of T is all I care about. I guess increasing precision means abstaining from making predictions in situations where we are not certain. How can I do that?
# Run this program on your local python
# interpreter, provided you have installed
# the required libraries.

# Importing the required packages
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from io import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
from sklearn import tree
import collections
import pydotplus
# Function importing Dataset
column_count = 0
def importdata():
    balance_data = pd.read_csv('data1extended.txt', sep=',')
    row_count, column_count = balance_data.shape

    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)
    print("Number of columns ", column_count)

    # Printing the dataset observations
    print("Dataset: ", balance_data.head())
    return balance_data, column_count

def columns(balance_data):
    row_count, column_count = balance_data.shape
    return column_count

# Function to split the dataset
def splitdataset(balance_data, column_count):
    # Separating the target variable
    X = balance_data.values[:, 1:column_count]
    Y = balance_data.values[:, 0]

    # Splitting the dataset into train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, Y, test_size=0.3, random_state=100)
    return X, Y, X_train, X_test, y_train, y_test

# Function to perform training with gini index.
def train_using_gini(X_train, X_test, y_train):
    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini",
                                      random_state=100, max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini
# Function to perform training with entropy.
def train_using_entropy(X_train, X_test, y_train):
    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(
        criterion="entropy", random_state=100,
        max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Function to make predictions
def prediction(X_test, clf_object):
    # Prediction on the test set
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred

# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: ",
          confusion_matrix(y_test, y_pred))
    print("Accuracy : ",
          accuracy_score(y_test, y_pred)*100)
    print("Report : ",
          classification_report(y_test, y_pred))
# Univariate selection
def selection(column_count, data):
    # data = pd.read_csv("data1extended.txt")
    X = data.iloc[:, 1:column_count]  # independent columns
    y = data.iloc[:, 0]  # target column
    # apply SelectKBest class to extract the top best features
    bestfeatures = SelectKBest(score_func=chi2, k=5)
    fit = bestfeatures.fit(X, y)
    dfscores = pd.DataFrame(fit.scores_)
    dfcolumns = pd.DataFrame(X.columns)
    df = pd.DataFrame(data, columns=X)
    # concat the two dataframes for better visualization
    featureScores = pd.concat([dfcolumns, dfscores], axis=1)
    featureScores.columns = ['Specs', 'Score']  # naming the dataframe columns
    print(featureScores.nlargest(5, 'Score'))  # print the 5 best features
    return X, y, data, df

# Feature importance
def feature(X, y):
    model = ExtraTreesClassifier()
    model.fit(X, y)
    print(model.feature_importances_)  # use the inbuilt feature_importances_ of tree-based classifiers
    # plot graph of feature importances for better visualization
    feat_importances = pd.Series(model.feature_importances_, index=X.columns)
    feat_importances.nlargest(5).plot(kind='barh')
    plt.show()

# Correlation Matrix
def correlation(data, column_count):
    corrmat = data.corr()
    top_corr_features = corrmat.index
    plt.figure(figsize=(column_count, column_count))
    # plot heat map
    g = sns.heatmap(data[top_corr_features].corr(), annot=True, cmap="RdYlGn")
def generate_decision_tree(X, y):
    clf = DecisionTreeClassifier(random_state=0)
    data_feature_names = ['callersAtLeast1T', 'CalleesAtLeast1T', 'callersAllT', 'calleesAllT', 'CallersAtLeast1N', 'CalleesAtLeast1N', 'CallersAllN', 'CalleesAllN', 'childrenAtLeast1T', 'parentsAtLeast1T', 'childrenAtLeast1N', 'parentsAtLeast1N', 'childrenAllT', 'parentsAllT', 'childrenAllN', 'ParentsAllN', 'ParametersatLeast1T', 'FieldMethodsAtLeast1T', 'ReturnTypeAtLeast1T', 'ParametersAtLeast1N', 'FieldMethodsAtLeast1N', 'ReturnTypeN', 'ParametersAllT', 'FieldMethodsAllT', 'ParametersAllN', 'FieldMethodsAllN']

    # generate model
    model = clf.fit(X, y)

    # Create DOT data
    dot_data = tree.export_graphviz(clf, out_file=None,
                                    feature_names=data_feature_names,
                                    class_names=y)
    # Draw graph
    graph = pydotplus.graph_from_dot_data(dot_data)

    # Show graph
    Image(graph.create_png())

    # Create PDF
    graph.write_pdf("tree.pdf")

    # Create PNG
    graph.write_png("tree.png")
# Driver code
def main():
    # Building Phase
    data, column_count = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data, column_count)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)

    # Operational Phase
    print("Results Using Gini Index:")
    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)

    print("Results Using Entropy:")
    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

    # COMMENTED OUT THE 4 FOLLOWING LINES DUE TO MEMORY ERROR
    # X, y, dataheaders, df = selection(column_count, data)
    # generate_decision_tree(X, y)
    # feature(X, y)
    # correlation(dataheaders, column_count)

# Calling main function
if __name__ == "__main__":
    main()
I would suggest using Pipeline to build a data pipeline and GridSearchCV to find the best possible hyper-parameters and classifier for the pipe.
A basic example:
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, chi2, f_classif
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([('kbest', SelectKBest(chi2, k=3000)),
                 ('clf', DecisionTreeClassifier())])

pipe_params = {'kbest__k': range(1, 10, 1),
               'kbest__score_func': [f_classif, chi2],
               'clf__max_depth': np.arange(1, 30),
               'clf__min_samples_leaf': [1, 2, 4, 5, 10, 20, 30, 40, 80, 100]}

grid_search = GridSearchCV(pipe, pipe_params, n_jobs=-1,
                           scoring='accuracy', cv=10)
grid_search.fit(X_train, Y_train)
This will iterate over every hyper-parameter combination in pipe_params and choose the best estimator based on accuracy.
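Separately, since you say you are willing to trade recall for precision on T: one common trick (a sketch, not part of the code above) is to predict T only when the estimated probability of T is above a threshold higher than 0.5, which usually raises precision at the cost of recall. Assuming the fitted clf_gini and the T/N labels from the question, and an arbitrary 0.8 threshold:
# probability that each test sample belongs to class 'T'
proba_t = clf_gini.predict_proba(X_test)[:, list(clf_gini.classes_).index('T')]
y_pred_strict = np.where(proba_t >= 0.8, 'T', 'N')
print(classification_report(y_test, y_pred_strict))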

Random forest algorithm not working for new datasets

I used the random forest algorithm in Python to train on dataset1 and got an accuracy of 99%. But when I used the new dataset2 to predict values, I got wrong values. I manually checked the results for the new dataset, and when I compared them with the prediction results, the accuracy was very low.
Below is my code:
from IPython import get_ipython
get_ipython().run_line_magic('matplotlib', 'inline')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(20.0,10.0)
data = pd.read_csv('D:/Users/v477sjp/lpthw/Extract.csv', usecols=['CON_ID',
'CON_LEGACY_ID', 'CON_CREATE_TD',
'CON_CREATE_LT', 'BUL_CSYS_ID_ORIG', 'BUL_CSYS_ID_CORIG',
'BUL_CSYS_ID_DEST', 'BUL_CSYS_ID_CLEAR', 'TOP_ID', 'CON_DG_IN',
'PTP_ID', 'SMO_ID_1',
'SMO_ID_8', 'LOB_ID', 'PRG_ID', 'PSG_ID', 'SMP_ID', 'COU_ISO_ID_ORIG',
'COU_ISO_ID_DEST', 'CON_DELIV_DUE_DT', 'CON_DELIV_DUE_LT',
'CON_POSTPONED_DT', 'CON_DELIV_PLAN_DT', 'CON_INTL_IN', 'PCE_NR',
'CON_TC_PCE_QT', 'CON_TC_GRS_WT', 'CON_TC_VL', 'PCE_OC_LN',
'PCE_OC_WD','PCE_OC_HT', 'PCE_OC_VL', 'PCE_OC_WT', 'PCE_OA_LN',
'PCE_OA_WD','PCE_OA_HT', 'PCE_OA_VL', 'PCE_OA_WT', 'COS_EVENT_TD',
'COS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID)', 'BUL_CSYS_ID_OCC',
'PCE_NR.1', 'PCS_EVENT_TD', 'PCS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID).1', 'BUL_CSYS_ID_OCC.1',
'BUL_CSYS_ID_1', 'BUL_CSYS_ID_2', 'BUL_CSYS_ID_3',
'BUL_CSYS_ID_4', 'BUL_CSYS_ID_5', 'BUL_CSYS_ID_6', 'BUL_CSYS_ID_7',
'BUL_CSYS_ID_8', 'BUL_CSYS_ID_9', 'BUL_CSYS_ID_10', 'BUL_CSYS_ID_11',
'BUL_CSYS_ID_12', 'BUL_CSYS_ID_13', 'BUL_CSYS_ID_14',
'BUL_CSYS_ID_15',
'BUL_CSYS_ID_16', 'CON_TOT_SECT_NR', 'DELAY'] )
df = pd.DataFrame(data.values ,columns=data.columns)
for col_name in df.columns:
    if (df[col_name].dtype == 'object' and col_name != 'DELAY'):
        df[col_name] = df[col_name].astype('category')
        df[col_name] = df[col_name].cat.codes
target_attribute = df['DELAY']
input_attribute=df.loc[:,'CON_ID':'CON_TOT_SECT_NR']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(input_attribute,target_attribute, test_size=0.3)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(X_train, y_train);
predictions = rf.predict(X_test)
errors = abs(predictions - y_test)
print(errors)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'result.')
mape = 100 * (errors / y_test)
accuracy = 100 - np.mean(mape)
print('Accuracy:', round(accuracy, 2), '%.')
data_new = pd.read_csv('D:/Users/v477sjp/lpthw/Extract-0401- TestCurrentData-Null.csv', usecols=['CON_ID','CON_LEGACY_ID','CON_CREATE_TD','CON_CREATE_LT','BUL_CSYS_ID_ORIG', 'BUL_CSYS_ID_CORIG','BUL_CSYS_ID_DEST','BUL_CSYS_ID_CLEAR','TOP_ID',
'CON_DG_N', 'PTP_ID', 'SMO_ID_1',
'SMO_ID_8', 'LOB_ID', 'PRG_ID', 'PSG_ID', 'SMP_ID', 'COU_ISO_ID_ORIG',
'COU_ISO_ID_DEST', 'CON_DELIV_DUE_DT', 'CON_DELIV_DUE_LT',
'CON_POSTPONED_DT', 'CON_DELIV_PLAN_DT', 'CON_INTL_IN', 'PCE_NR',
'CON_TC_PCE_QT', 'CON_TC_GRS_WT', 'CON_TC_VL','PCE_OC_LN','PCE_OC_WD',
'PCE_OC_HT', 'PCE_OC_VL', 'PCE_OC_WT', 'PCE_OA_LN', 'PCE_OA_WD',
'PCE_OA_HT', 'PCE_OA_VL', 'PCE_OA_WT', 'COS_EVENT_TD', 'COS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID)', 'BUL_CSYS_ID_OCC',
'PCE_NR.1','PCS_EVENT_TD', 'PCS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID).1', 'BUL_CSYS_ID_OCC.1',
'BUL_CSYS_ID_1', 'BUL_CSYS_ID_2', 'BUL_CSYS_ID_3',
'BUL_CSYS_ID_4', 'BUL_CSYS_ID_5', 'BUL_CSYS_ID_6', 'BUL_CSYS_ID_7',
'BUL_CSYS_ID_8', 'BUL_CSYS_ID_9', 'BUL_CSYS_ID_10', 'BUL_CSYS_ID_11',
'BUL_CSYS_ID_12', 'BUL_CSYS_ID_13', 'BUL_CSYS_ID_14','BUL_CSYS_ID_15',
'BUL_CSYS_ID_16', 'CON_TOT_SECT_NR', 'DELAY'] )
df_new = pd.DataFrame(data_new.values ,columns=data_new.columns)
for col_name in df_new.columns:
    if (df_new[col_name].dtype == 'object' and col_name != 'DELAY'):
        df_new[col_name] = df_new[col_name].astype('category')
        df_new[col_name] = df_new[col_name].cat.codes
X_test_new=df_new.loc[:,'CON_ID':'CON_TOT_SECT_NR']
y_pred_new = rf.predict(X_test_new)
df_new['Delay_1']=y_pred_new
df_new.to_csv('prediction_new.csv')
The prediction results are wrong for the new dataset and the accuracy is very low, while the accuracy on the first dataset is 99%. I should be getting negative values for the new dataset, but all the values I got are positive. Please help.
It seems like the model is overfitted to the training dataset. Some options to try:
1) Use a larger dataset.
2) Decrease the number of features/columns or do some feature engineering.
3) Use regularization.
4) If the dataset is not too large, try decreasing the number of estimators in the random forest.
5) Play with other parameters like max_features, max_depth, min_samples_split, min_samples_leaf (see the sketch below).
Hope this helps.
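For point 5, a minimal sketch of checking generalisation with cross-validation while constraining the trees (the parameter values are only examples, reusing X_train and y_train from the question):
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

rf_small = RandomForestRegressor(n_estimators=200, max_depth=8,
                                 min_samples_leaf=5, max_features='sqrt',
                                 random_state=42)
# mean absolute error estimated across 5 folds gives a more honest view than a single split
scores = cross_val_score(rf_small, X_train, y_train,
                         scoring='neg_mean_absolute_error', cv=5)
print('CV MAE:', -scores.mean())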
