I used a random forest algorithm in Python to train on my first dataset (dataset1) and got an accuracy of 99%. But when I used the trained model to predict values for a new dataset (dataset2), the predictions are wrong. I manually checked the results for the new dataset and, compared with the prediction results, the accuracy is very low.
Below is my code:
from IPython import get_ipython
get_ipython().run_line_magic('matplotlib', 'inline')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(20.0,10.0)
data = pd.read_csv('D:/Users/v477sjp/lpthw/Extract.csv', usecols=['CON_ID',
'CON_LEGACY_ID', 'CON_CREATE_TD',
'CON_CREATE_LT', 'BUL_CSYS_ID_ORIG', 'BUL_CSYS_ID_CORIG',
'BUL_CSYS_ID_DEST', 'BUL_CSYS_ID_CLEAR', 'TOP_ID', 'CON_DG_IN',
'PTP_ID', 'SMO_ID_1',
'SMO_ID_8', 'LOB_ID', 'PRG_ID', 'PSG_ID', 'SMP_ID', 'COU_ISO_ID_ORIG',
'COU_ISO_ID_DEST', 'CON_DELIV_DUE_DT', 'CON_DELIV_DUE_LT',
'CON_POSTPONED_DT', 'CON_DELIV_PLAN_DT', 'CON_INTL_IN', 'PCE_NR',
'CON_TC_PCE_QT', 'CON_TC_GRS_WT', 'CON_TC_VL', 'PCE_OC_LN',
'PCE_OC_WD','PCE_OC_HT', 'PCE_OC_VL', 'PCE_OC_WT', 'PCE_OA_LN',
'PCE_OA_WD','PCE_OA_HT', 'PCE_OA_VL', 'PCE_OA_WT', 'COS_EVENT_TD',
'COS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID)', 'BUL_CSYS_ID_OCC',
'PCE_NR.1', 'PCS_EVENT_TD', 'PCS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID).1', 'BUL_CSYS_ID_OCC.1',
'BUL_CSYS_ID_1', 'BUL_CSYS_ID_2', 'BUL_CSYS_ID_3',
'BUL_CSYS_ID_4', 'BUL_CSYS_ID_5', 'BUL_CSYS_ID_6', 'BUL_CSYS_ID_7',
'BUL_CSYS_ID_8', 'BUL_CSYS_ID_9', 'BUL_CSYS_ID_10', 'BUL_CSYS_ID_11',
'BUL_CSYS_ID_12', 'BUL_CSYS_ID_13', 'BUL_CSYS_ID_14',
'BUL_CSYS_ID_15',
'BUL_CSYS_ID_16', 'CON_TOT_SECT_NR', 'DELAY'] )
df = pd.DataFrame(data.values, columns=data.columns)
for col_name in df.columns:
    if df[col_name].dtype == 'object' and col_name != 'DELAY':
        df[col_name] = df[col_name].astype('category')
        df[col_name] = df[col_name].cat.codes
target_attribute = df['DELAY']
input_attribute = df.loc[:, 'CON_ID':'CON_TOT_SECT_NR']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(input_attribute,target_attribute, test_size=0.3)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(X_train, y_train);
predictions = rf.predict(X_test)
errors = abs(predictions - y_test)
print(errors)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'result.')
mape = 100 * (errors / y_test)
accuracy = 100 - np.mean(mape)
print('Accuracy:', round(accuracy, 2), '%.')
data_new = pd.read_csv('D:/Users/v477sjp/lpthw/Extract-0401- TestCurrentData-Null.csv', usecols=['CON_ID','CON_LEGACY_ID','CON_CREATE_TD','CON_CREATE_LT','BUL_CSYS_ID_ORIG', 'BUL_CSYS_ID_CORIG','BUL_CSYS_ID_DEST','BUL_CSYS_ID_CLEAR','TOP_ID',
'CON_DG_N', 'PTP_ID', 'SMO_ID_1',
'SMO_ID_8', 'LOB_ID', 'PRG_ID', 'PSG_ID', 'SMP_ID', 'COU_ISO_ID_ORIG',
'COU_ISO_ID_DEST', 'CON_DELIV_DUE_DT', 'CON_DELIV_DUE_LT',
'CON_POSTPONED_DT', 'CON_DELIV_PLAN_DT', 'CON_INTL_IN', 'PCE_NR',
'CON_TC_PCE_QT', 'CON_TC_GRS_WT', 'CON_TC_VL','PCE_OC_LN','PCE_OC_WD',
'PCE_OC_HT', 'PCE_OC_VL', 'PCE_OC_WT', 'PCE_OA_LN', 'PCE_OA_WD',
'PCE_OA_HT', 'PCE_OA_VL', 'PCE_OA_WT', 'COS_EVENT_TD', 'COS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID)', 'BUL_CSYS_ID_OCC',
'PCE_NR.1','PCS_EVENT_TD', 'PCS_EVENT_LT',
'((XSF_ID||XSS_ID)||XSG_ID).1', 'BUL_CSYS_ID_OCC.1',
'BUL_CSYS_ID_1', 'BUL_CSYS_ID_2', 'BUL_CSYS_ID_3',
'BUL_CSYS_ID_4', 'BUL_CSYS_ID_5', 'BUL_CSYS_ID_6', 'BUL_CSYS_ID_7',
'BUL_CSYS_ID_8', 'BUL_CSYS_ID_9', 'BUL_CSYS_ID_10', 'BUL_CSYS_ID_11',
'BUL_CSYS_ID_12', 'BUL_CSYS_ID_13', 'BUL_CSYS_ID_14','BUL_CSYS_ID_15',
'BUL_CSYS_ID_16', 'CON_TOT_SECT_NR', 'DELAY'] )
df_new = pd.DataFrame(data_new.values, columns=data_new.columns)
for col_name in df_new.columns:
    if df_new[col_name].dtype == 'object' and col_name != 'DELAY':
        df_new[col_name] = df_new[col_name].astype('category')
        df_new[col_name] = df_new[col_name].cat.codes
X_test_new = df_new.loc[:, 'CON_ID':'CON_TOT_SECT_NR']
y_pred_new = rf.predict(X_test_new)
df_new['Delay_1']=y_pred_new
df_new.to_csv('prediction_new.csv')
The prediction results are wrong for the new dataset and the accuracy is very low, even though the accuracy on the first dataset is 99%. I should be getting negative values for the new dataset, but all the values I got are positive. Please help.
It seems like the model is overfitted to the training dataset. Some options to try are:
1) Use a larger dataset.
2) Decrease the number of features/columns, or do some feature engineering.
3) Use regularization.
4) If the dataset is not too large, try decreasing the number of estimators for the random forest.
5) Tune other parameters like max_features, max_depth, min_samples_split, and min_samples_leaf (see the sketch below).
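As a rough illustration of points 4 and 5, here is a minimal sketch of a more constrained RandomForestRegressor evaluated with cross-validation. The parameter values are placeholders rather than recommendations, and X_train/y_train are assumed to be the training split from your code.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
# Placeholder values (assumed): fewer trees, limited depth and larger leaves act as regularization.
rf_constrained = RandomForestRegressor(
    n_estimators=200,
    max_depth=10,
    min_samples_leaf=5,
    max_features='sqrt',
    random_state=42,
)
# Cross-validated error is a more honest estimate than a single train/test split.
scores = cross_val_score(rf_constrained, X_train, y_train,
                         scoring='neg_mean_absolute_error', cv=5)
print('CV MAE:', -scores.mean())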
Hope this helps
I am working on a simple KNN model with 3 nearest neighbors to predict a weight. However, the accuracy is 0.0 and I don't know why.
The code does give me a weight prediction of 58 or 59.
This is the reproducible code:
import numpy as np
from sklearn import preprocessing, neighbors
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.metrics import accuracy_score
#Create df
data = {"ID":[i for i in range(1,11)],
"Height":[5,5.11,5.6,5.9,4.8,5.8,5.3,5.8,5.5,5.6],
"Age":[45,26,30,34,40,36,19,28,23,32],
"Weight": [77,47,55,59,72,60,40,60,45,58]
}
df = pd.DataFrame(data, columns = [x for x in data.keys()])
print("This is the original df:")
print(df)
#Feature Engineering
df.drop(["ID"], 1, inplace = True)
X = np.array(df.drop(["Weight"],1))
y = np.array(df["Weight"])
#define training and testing
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size =0.2)
#Build clf with n =3
clf = neighbors.KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
#accuracy
accuracy = clf.score(X_test, y_test)
print("\n accruacy = ", accuracy)
#Prediction on 11th
ans = np.array([5.5,38])
ans = ans.reshape(1,-1)
prediction = clf.predict(ans)
print("\nThis is the ans: ", prediction)
You are classifying Weight, which is a continuous (not a discrete) variable. This should be a regression rather than a classification. Try KNeighborsRegressor.
To evaluate your result, use regression metrics such as the R2 score (see the sketch below).
If your score is low, that can mean different things: training set too small, test set too different from training set, regression model not adequate...
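For illustration, here is a minimal sketch of the regression version, assuming the same X_train/X_test/y_train/y_test split from your code; using 3 neighbors simply mirrors your original classifier.
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score
# Regressor instead of classifier, since Weight is continuous.
reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X_train, y_train)
y_pred = reg.predict(X_test)
# R2 is a regression metric; accuracy_score only makes sense for discrete classes.
print("R2 score =", r2_score(y_test, y_pred))
# Prediction for the 11th person (Height 5.5, Age 38), as in the question.
print("Prediction:", reg.predict(np.array([5.5, 38]).reshape(1, -1)))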
I am writing a Python script that deals with sentiment analysis. I pre-processed the text, vectorized the categorical features, and split the dataset, then I used a LogisticRegression model and got an accuracy of 84%.
When I upload a new dataset and try to deploy the created model, I get an accuracy of 51.84%.
Code:
import pandas as pd
import numpy as np
import re
import string
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer,TfidfTransformer
from sklearn.model_selection import train_test_split
from nltk.stem import PorterStemmer
from nltk.stem import WordNetLemmatizer
# ML Libraries
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
stop_words = set(stopwords.words('english'))
import joblib
def load_dataset(filename, cols):
    dataset = pd.read_csv(filename, encoding='latin-1')
    dataset.columns = cols
    return dataset
dataset = load_dataset("F:\AIenv\sentiment_analysis\input_2_balanced.csv", ["id","label","date","text"])
dataset.head()
dataset['clean_text'] = dataset['text'].apply(processTweet)
# create doc2vec vector columns
from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(dataset["clean_text"].apply(lambda x: x.split(" ")))]
# train a Doc2Vec model with our text data
model = Doc2Vec(documents, vector_size=5, window=2, min_count=1, workers=4)
# transform each document into a vector data
doc2vec_df = dataset["clean_text"].apply(lambda x: model.infer_vector(x.split(" "))).apply(pd.Series)
doc2vec_df.columns = ["doc2vec_vector_" + str(x) for x in doc2vec_df.columns]
dataset = pd.concat([dataset, doc2vec_df], axis=1)
# add tf-idfs columns
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(min_df = 10)
tfidf_result = tfidf.fit_transform(dataset["clean_text"]).toarray()
tfidf_df = pd.DataFrame(tfidf_result, columns = tfidf.get_feature_names())
tfidf_df.columns = ["word_" + str(x) for x in tfidf_df.columns]
tfidf_df.index = dataset.index
dataset = pd.concat([dataset, tfidf_df], axis=1)
x = dataset.iloc[:,3]
y = dataset.iloc[:,1]
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 42)
from sklearn.pipeline import Pipeline
# create pipeline
pipeline = Pipeline([
    ('bow', CountVectorizer(strip_accents='ascii',
                            stop_words=['english'],
                            lowercase=True)),
    ('tfidf', TfidfTransformer()),
    ('classifier', LogisticRegression(C=15.075475376884423, penalty="l2")),
])
# Parameter grid settings for LogisticRegression
parameters = {'bow__ngram_range': [(1, 1), (1, 2)],
              'tfidf__use_idf': (True, False),
              }
grid = GridSearchCV(pipeline, cv=10, param_grid=parameters, verbose=1,n_jobs=-1)
grid.fit(X_train,y_train)
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
#get predictions from best model above
y_preds = grid.predict(X_test)
cm = confusion_matrix(y_test, y_preds)
print("accuracy score: ",accuracy_score(y_test,y_preds))
print("\n")
print("confusion matrix: \n",cm)
print("\n")
print(classification_report(y_test,y_preds))
joblib.dump(grid,"F:\\AIenv\\sentiment_analysis\\RF_jupyter.pkl")
RF_Model = joblib.load("F:\\AIenv\\sentiment_analysis\\RF_jupyter.pkl")
test_twtr_preds = RF_Model.predict(test_twtr["clean_text"])
I have conducted survey research on the performance of different classifiers in sentiment analysis.
For a specific Twitter dataset, I ran models such as Logistic Regression, Naïve Bayes, Support Vector Machine, k-nearest neighbors (KNN), and Decision Tree.
Observations on the selected dataset show that Logistic Regression and Naïve Bayes perform well in all types of testing, with SVM next, then Decision Tree, and KNN scoring lowest. Overall, Logistic Regression and Naïve Bayes perform comparatively better in sentiment analysis and prediction.
Sentiment Classifier   Accuracy Score   RMSE
LR                     78.3541          1.053619
NB                     76.764706        1.064738
SVM                    73.5835          1.074752
DT                     69.2941          1.145234
KNN                    62.9476          1.376589
Feature extraction is very critical in these cases. The code below may help you.
Importing Essentials:
import pandas as pd
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
import time
df = pd.read_csv('FilePath', header=0)
X = df['content']
y = df['sentiment']
def lrSentimentAnalysis(n):
    # Using CountVectorizer to convert text into tokens/features
    vect = CountVectorizer(ngram_range=(1, 1))
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, test_size=n)
    # Using training data to transform text into counts of features for each message
    vect.fit(X_train)
    X_train_dtm = vect.transform(X_train)
    X_test_dtm = vect.transform(X_test)
    # dual = [True, False]
    max_iter = [100, 110, 120, 130, 140, 150]
    C = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5]
    solvers = ['newton-cg', 'lbfgs', 'liblinear']
    param_grid = dict(max_iter=max_iter, C=C, solver=solvers)
    LR1 = LogisticRegression(penalty='l2', multi_class='auto')
    grid = GridSearchCV(estimator=LR1, param_grid=param_grid, cv=10, n_jobs=-1)
    grid_result = grid.fit(X_train_dtm, y_train)
    # Summarize results
    print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
    y_pred = grid_result.predict(X_test_dtm)
    print('Accuracy Score: ', metrics.accuracy_score(y_test, y_pred) * 100, '%')
    # print('Confusion Matrix: ', metrics.confusion_matrix(y_test, y_pred))
    # print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
    # print('MSE:', metrics.mean_squared_error(y_test, y_pred))
    print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
    return [n, metrics.accuracy_score(y_test, y_pred) * 100, grid_result.best_estimator_.get_params()['max_iter'],
            grid_result.best_estimator_.get_params()['C'], grid_result.best_estimator_.get_params()['solver']]
def darwConfusionMetrix(accList):
    # Using CountVectorizer to convert text into tokens/features
    vect = CountVectorizer(ngram_range=(1, 1))
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, test_size=accList[0])
    # Using training data to transform text into counts of features for each message
    vect.fit(X_train)
    X_train_dtm = vect.transform(X_train)
    X_test_dtm = vect.transform(X_test)
    # Accuracy using Logistic Regression Model
    LR = LogisticRegression(penalty='l2', max_iter=accList[2], C=accList[3], solver=accList[4])
    LR.fit(X_train_dtm, y_train)
    y_pred = LR.predict(X_test_dtm)
    # creating a heatmap for confusion matrix
    data = metrics.confusion_matrix(y_test, y_pred)
    df_cm = pd.DataFrame(data, columns=np.unique(y_test), index=np.unique(y_test))
    df_cm.index.name = 'Actual'
    df_cm.columns.name = 'Predicted'
    plt.figure(figsize=(10, 7))
    sns.set(font_scale=1.4)  # for label size
    sns.heatmap(df_cm, cmap="Blues", annot=True, annot_kws={"size": 16})  # font size
    fig0 = plt.gcf()
    fig0.show()
    fig0.savefig('FilePath', dpi=100)
def findModelWithBestAccuracy(accList):
    accuracyList = []
    for item in accList:
        accuracyList.append(item[1])
    N = accuracyList.index(max(accuracyList))
    print('Best Model:', accList[N])
    return accList[N]
accList = []
print('Logistic Regression')
print('grid search method for hyperparameter tuning (accuracy by cross validation)')
for i in range(2, 7):
    n = i / 10.0
    print("\nsplit ", i - 1, ": n=", n)
    accList.append(lrSentimentAnalysis(n))
darwConfusionMetrix(findModelWithBestAccuracy(accList))
Preprocessing is a vital part of building a well-performing classifier. When you have such a large discrepancy between training and test set performance, it is likely that some error has occurred in your preprocessing of the test set (or of the new data).
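As an illustration of keeping the preprocessing consistent, here is a minimal sketch (not your exact pipeline) that fits the vectorizer and classifier together on the training texts and then applies the very same fitted pipeline to new data; new_texts is a hypothetical placeholder for the cleaned text column of your second dataset.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import joblib
# Fit the whole preprocessing + model pipeline on the training texts only.
sentiment_pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(min_df=10)),
    ('clf', LogisticRegression(max_iter=1000)),
])
sentiment_pipeline.fit(X_train, y_train)  # X_train: training texts (assumed), y_train: labels
# Persist and reload the fitted pipeline, then reuse it unchanged on the new data.
joblib.dump(sentiment_pipeline, 'sentiment_pipeline.pkl')
loaded = joblib.load('sentiment_pipeline.pkl')
new_preds = loaded.predict(new_texts)  # new_texts: placeholder for the new dataset's cleaned text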
A classifier is also available without any programming. The second video on the linked page shows how sentiments can be classified from keywords in mails.
You can visit the web service's insight classifiers and try a free build first.
Your new data can be very different from the first dataset you used to train and test your model. Preprocessing techniques and statistical analysis will help you characterise your data and compare different datasets. Poor performance on new data can be observed for various reasons, including the following (a quick comparison sketch follows the list):
your initial dataset is not statistically representative of the bigger dataset (for example, your dataset is a corner case)
overfitting: you over-train your model so that it incorporates specificities (noise) of the training data
different preprocessing methods applied to the two datasets
an unbalanced training dataset; ML techniques work best with balanced datasets (equal occurrence of the different classes in the training set)
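As a rough sketch of such a comparison, assuming df_old and df_new are pandas DataFrames holding the two datasets and 'label' is the target column (placeholder names):
import pandas as pd
# Placeholder names (assumed): df_old / df_new are the two datasets, 'label' is the target column.
print(df_old.describe())  # summary statistics of the original data
print(df_new.describe())  # compare ranges and means against the new data
# Check the class balance in both datasets.
print(df_old['label'].value_counts(normalize=True))
print(df_new['label'].value_counts(normalize=True))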
I'm trying to train a decision tree classifier using Python. I'm using MinMaxScaler() to scale the data and f1_score as my evaluation metric. The strange thing is that my model gives me different results, in a pattern, on each run.
data in my code is a (2000, 7) pandas.DataFrame, with 6 feature columns and the last column being the target value. Columns 1, 3, and 5 are categorical data.
The following code is what I did to preprocess and format my data:
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import f1_score
# Data Preprocessing Step
# =============================================================================
data = pd.read_csv("./data/train.csv")
X = data.iloc[:, :-1]
y = data.iloc[:, 6]
# Choose which columns are categorical data, and convert them to numeric data.
labelenc = LabelEncoder()
categorical_data = list(data.select_dtypes(include='object').columns)
for i in range(len(categorical_data)):
    X[categorical_data[i]] = labelenc.fit_transform(X[categorical_data[i]])
# Convert categorical numeric data to one-of-K data, and change y from Series to ndarray.
onehotenc = OneHotEncoder()
X = onehotenc.fit_transform(X).toarray()
y = y.values
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
min_max_scaler = MinMaxScaler()
X_train_scaled = min_max_scaler.fit_transform(X_train)
X_val_scaled = min_max_scaler.fit_transform(X_val)
The next code is for the actual decision tree model training:
dectree = DecisionTreeClassifier(class_weight='balanced')
dectree = dectree.fit(X_train_scaled, y_train)
predictions = dectree.predict(X_val_scaled)
score = f1_score(y_val, predictions, average='macro')
print("Score is = {}".format(score))
The output that I get (i.e. the score) varies, but in a pattern. For example, it cycles through values within the range of 0.39 to 0.42.
On some iterations, I even get the UndefinedMetricWarning, which claims "F-score is ill-defined and being set to 0.0 in labels with no predicted samples."
I'm familiar with what the UndefinedMetricWarning means, after doing some searching on this community and Google. I guess my two questions can be summarized as:
Why does my output vary for each iteration? Is there something in the preprocessing stage that happens which I'm not aware of?
I've also tried to use the F-score with other data splits, but I always get the warning. Is this unpreventable?
Thank you.
You are splitting the dataset into train and test sets randomly, so every run trains your model on different training data and tests it on different test data. Because of this, you get a range of F-scores depending on how well the model happens to be trained on a given split.
To replicate the result on each run, use the random_state parameter. It fixes the state of the random number generator so that the same "random" numbers, and therefore the same split, are produced every time. It can be any number.
#train test split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=13)
#Decision tree model
dectree = DecisionTreeClassifier(class_weight='balanced', random_state=2018)
Hi, I am working with a difficult data set, in that there is low correlation between the input and output, yet the results are very good (99.9% accuracy on the test set). I'm sure I'm doing something wrong, I just don't know what.
The label is the 'unsafe' column, which is either 0 or 1 (it was originally 0 or 100, but I limited the maximum value; it made no difference to the result). I started with random forests and then ran k-nearest neighbors and got almost the same accuracy, 99.9%.
Screenshots of the df show that there are many more 0s than 1s (in the training set of 80,000 there are only 169 1s; there is also a run of 1s at the end, but this is just how the original file was imported).
import os
import glob
import numpy as np
import pandas as pd
import sklearn as sklearn
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_pickle('/Users/shellyganga/Downloads/ola.pickle')
maxVal = 1
df.unsafe = df['unsafe'].where(df['unsafe'] <= maxVal, maxVal)
print(df.head())
df.drop(df.columns[0], axis=1, inplace=True)
df.drop(df.columns[-2], axis=1, inplace=True)
#setting features and labels
labels = np.array(df['unsafe'])
features= df.drop('unsafe', axis = 1)
# Saving feature names for later use
feature_list = list(features.columns)
# Convert to numpy array
features = np.array(features)
from sklearn.model_selection import train_test_split
# 30% examples in test data
train, test, train_labels, test_labels = train_test_split(features, labels,
stratify = labels,
test_size = 0.3,
random_state = 0)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(train, train_labels)
print(np.mean(train_labels))
print(train_labels.shape)
print('accuracy on train: {:.5f}'.format(knn.score(train, train_labels)))
print('accuracy on test: {:.5f}'.format(knn.score(test, test_labels)))
output:
0.0023654350798950337
(81169,)
accuracy on train: 0.99763
accuracy on test: 0.99761
The fact that you have many more instances of 0 than 1 is an example of class imbalance. Here is a really cool stats.stackexchange question on the topic.
Basically, if only 169 out of your 80000 labels are 1 and the rest are 0, then your model could naively predict the label 0 for every instance and still have a training-set accuracy (= fraction of correctly classified instances) of 99.78875%.
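As a quick check, here is a small sketch that computes this majority-class baseline from the train_labels array in your code:
import numpy as np
# Accuracy obtained by always predicting the majority class (0).
p_ones = np.mean(train_labels)
baseline = max(p_ones, 1 - p_ones)
print('Majority-class baseline accuracy: {:.5f}'.format(baseline))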
I suggest trying the F1 score, which is the harmonic mean of precision, AKA positive predictive value = TP/(TP + FP), and recall, AKA sensitivity = TP/(TP + FN): https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score
from sklearn.metrics import f1_score
# f1_score compares true labels with predicted labels, so generate predictions first.
print('F1 score on train: {:.5f}'.format(f1_score(train_labels, knn.predict(train))))
print('F1 score on test: {:.5f}'.format(f1_score(test_labels, knn.predict(test))))
I am trying to find a reliable testing method to compute the error of my model / training parameters, but I am seeing weird results when I play with the train/test ratio.
When I change the ratio of my train/test data, the RMSE converges towards different values; see the plots below (the test ratio is shown in the top-right corner). Even in the zoomed view, after 50K iterations the curves do not converge towards the same value.
Here is the code:
import time
import sys
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
np.random.seed(int(time.time()))
def seed():
    return np.random.randint(2**32 - 1)
n_scores_per_test = 50000
test_ratios = [.1, .2, .4, .6, .8]
model = Lasso(alpha=0.0005, random_state=seed(), tol=0.00001, copy_X=True)
# load our training data
train = pd.read_csv('train.csv')
X = train[['OverallCond']].values
y = np.log(train['SalePrice'].values)
# custom RMSE
def rmse(y_predicted, y_actual):
    tmp = np.power(y_actual - y_predicted, 2) / y_actual.size
    return np.sqrt(np.sum(tmp, axis=0))
for test_ratio in test_ratios:
    print('Testing test ratio:', test_ratio)
    scores = []
    avg_scores = []
    for i in range(n_scores_per_test):
        if i % 200 == 0:
            print(i, '/', n_scores_per_test)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=test_ratio, random_state=seed())
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        scores.append(rmse(y_pred, y_test))
        avg_scores.append(np.array(scores).mean())
    plt.plot(avg_scores, label=str(test_ratio))
plt.legend(loc='upper right')
plt.show()
Any idea why they don't all converge nicely together?
See https://github.com/benji/rmse_convergence/
UPDATE:
used selection='random' for Lasso
used random_state in Lasso
used random_state in train_test_split
removed the redundant shuffle()
set a low tol on the Lasso model