How am I supposed to implement Gaussian Naive Bayes with separate training and test sets?
I need:
Create a training set by selecting the rows with id <= 160
Train a Gaussian Naive Bayes classifier, as we saw in class, to determine whether a campaign will be successful, given the amounts used in each marketing channel
Calculate the fraction of the training set that is correctly classified.
and:
Create a test set by selecting the rows with id > 160
Evaluate the performance of the classifier as follows:
What percentage of the test set was classified correctly (correct answers over the total)? It is desirable that this number reaches at least 80%.
What is the ratio of false positives to false negatives?
Successful marketing campaign:
successful_marketing_campaign = (dataset['sales'] > 15) | (dataset['total_invested'] < 20)
And my code:
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

X = dataset.iloc[:, [0, 3]].values.astype('int')
y = dataset.iloc[:, 4].values.astype('int')  # 1-D target rather than a column vector
# manual split on row position (intended to be rows with id <= 160 vs. id > 160)
X_train = dataset.iloc[0:160, [0, 3]].values.astype('int')
y_train = dataset.iloc[0:160, 4].values.astype('int')
X_test = dataset.iloc[160:, [0, 3]].values.astype('int')
y_test = dataset.iloc[160:, 4].values.astype('int')
# note: this random split immediately overwrites the manual split above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GaussianNB()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
matrix = confusion_matrix(y_test, y_pred)
print(matrix)
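For reference, a minimal sketch of the id-based workflow the assignment describes, assuming the dataframe has an id column and that the marketing-channel columns are everything except id, sales, and total_invested (those assumptions are mine, not the question's):

import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix

# target from the success rule given in the question
dataset['successful'] = ((dataset['sales'] > 15) |
                         (dataset['total_invested'] < 20)).astype(int)

# assumed: every remaining column is a marketing-channel amount
channel_cols = [c for c in dataset.columns
                if c not in ('id', 'sales', 'total_invested', 'successful')]

# split on the id column, not on row position
train = dataset[dataset['id'] <= 160]
test = dataset[dataset['id'] > 160]

clf = GaussianNB()
clf.fit(train[channel_cols], train['successful'])

# fraction of the training set correctly classified
print('Train accuracy:', accuracy_score(train['successful'],
                                        clf.predict(train[channel_cols])))

# test-set accuracy and the false-positive / false-negative counts
y_pred = clf.predict(test[channel_cols])
print('Test accuracy:', accuracy_score(test['successful'], y_pred))
tn, fp, fn, tp = confusion_matrix(test['successful'], y_pred).ravel()
print('False positives:', fp, 'False negatives:', fn)  # their ratio is fp / fn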
# split dataset into features and target variable
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics

feature_cols = ['RIAGENDR_0', 'RIDAGEYR', 'RIDRETH3_2', 'RIDRETH3_3', 'RIDRETH3_4', 'RIDRETH3_6', 'RIDRETH3_7', 'INDFMPIR', 'DMDMARTZ_1.0', 'DMDMARTZ_2.0', 'DMDMARTZ_3.0', 'DMDMARTZ_4.0', 'DMDMARTZ_6.0', 'DMDEDUC2', 'RFXT010', 'BMXWT', 'BMXBMI', 'URXUMA', 'LBDHDD', 'LBXFER', 'LBXGH', 'LBXBPB', 'LBXBCD', 'LBXBSE', 'LBXBMN', 'URXUBA', 'URXUCD', 'URXUCO', 'URXUCS', 'URXUMO', 'URXUMN', 'URXUPB', 'URXUSB', 'URXUSN', 'URXUTL', 'URXUTU']
X = data[feature_cols]  # Features
scale = StandardScaler()
X = scale.fit_transform(X)
y = data['depre_score']  # Target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)  # 70% training and 30% test
clf = DecisionTreeClassifier()
clf = clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print(y_test)
print(y_pred)
confusion = metrics.confusion_matrix(y_test, y_pred)
print(confusion)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
recall_sensitivity = metrics.recall_score(y_test, y_pred, pos_label=1)
recall_specificity = metrics.recall_score(y_test, y_pred, pos_label=0)
print(recall_sensitivity, recall_specificity)
Why do you think you are doing something wrong? Perhaps your data are such that you can achieve a perfect classification; e.g., see this mushroom classification example.
Having said that, it is also possible that there is some leakage in your data, as noted by @gtomer: an exact data point present in your training set may also be available in your test set. You can run a K-fold cross-validation on your data and see how the accuracy holds up. Secondly, try different classifiers too (Random Forests usually do better than a single Decision Tree); a sketch of both ideas follows.
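A minimal sketch of both suggestions, assuming X and y are prepared as in the code above (nothing here is from the original answer; the fold count and random_state values are arbitrary):

from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# 5-fold stratified cross-validation: if every fold still scores near 1.0,
# perfect classification is a property of the data rather than a coding bug
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
print('Decision tree fold accuracies:',
      cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=cv))
print('Random forest fold accuracies:',
      cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=cv))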
I need to create a for loop in Python that will repeat steps 1-2 100 times.
Split the sample randomly into training and test sets using a 632:368 ratio.
Build the model using the 63.2% training data and compute the R squared on the holdout data.
I can't seem to get the R squared for the dataset:
y = data['Amount']
xall = data
xall.drop(["No", "Amount", "Class"], axis=1, inplace=True)
for seed in range(10_00):
    X_train, X_test, y_train, y_test = train_test_split(xall, y,
                                                        test_size=0.382,
                                                        random_state=seed)
    modelall = LinearRegression()
    modelall.fit(xall, y)
    modelall = LinearRegression().fit(xall, y)
    r_sq = modelall.score(xall, y)
    print('coefficient of determination:', r_sq)
Fit the model using the TRAINING data and estimate the score using the TEST data.
Use this:
y = data['Amount']
xall = data
xall.drop(["No", "Amount", "Class"], axis=1, inplace=True)
for seed in range(100):
    X_train, X_test, y_train, y_test = train_test_split(xall, y, test_size=0.382, random_state=seed)
    modelall = LinearRegression()
    modelall.fit(X_train, y_train)
    r_sq = modelall.score(X_test, y_test)
    print('coefficient of determination:', r_sq)
In your version you are fitting a linear model to the whole dataset (xall) on every iteration, only with a different seed number; since the seed only affects the (unused) split, linear regression gives you the same output irrespective of the seed value.
I have the following code that attempts to value a stock on non-price-based features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

price = df.loc[:,'regularMarketPrice']
features = df.loc[:,feature_list]
#
X_train, X_test, y_train, y_test = train_test_split(features, price, test_size = 0.15, random_state = 1)
if len(X_train.shape) < 2:
X_train = np.array(X_train).reshape(-1,1)
X_test = np.array(X_test).reshape(-1,1)
#
model = LinearRegression()
model.fit(X_train,y_train)
#
print('Train Score:', model.score(X_train,y_train))
print('Test Score:', model.score(X_test,y_test))
#
y_predicted = model.predict(X_test)
In my df (which is very large), there is never an instance where 'regularMarketPrice' is less than 0. However, I occasionally get a value less than 0 for some points in y_predicted.
Is there a way in scikit-learn to say that anything less than 0 is an invalid prediction? I am hoping this makes my model more accurate.
Please comment if there is a need for further explanation.
To make the predictions larger than 0, you should not use plain linear regression. You should consider generalized linear regression (GLM), such as Poisson regression.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import PoissonRegressor
price = df.loc[:,'regularMarketPrice']
features = df.loc[:,feature_list]
#
X_train, X_test, y_train, y_test = train_test_split(features, price, test_size = 0.15, random_state = 1)
if len(X_train.shape) < 2:
X_train = np.array(X_train).reshape(-1,1)
X_test = np.array(X_test).reshape(-1,1)
#
model = PoissonRegressor()
model.fit(X_train,y_train)
#
print('Train Score:', model.score(X_train,y_train))  # note: PoissonRegressor.score returns D^2 (explained deviance), not R^2
print('Test Score:', model.score(X_test,y_test))
#
y_predicted = model.predict(X_test)
All predictions are greater than or equal to 0.
Consider using something other than a Gaussian response variable. Plot your y-values with a histogram; if the data are right-skewed, consider modeling with a GLM, a gamma distribution, and a log link.
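For instance, a minimal sketch using scikit-learn's gamma GLM (GammaRegressor uses a log link by default; the variables are the same X_train/y_train as above, and strictly positive prices are assumed):

from sklearn.linear_model import GammaRegressor

# gamma GLM with a log link: predictions are always positive, and the
# distribution suits right-skewed, positive-valued targets
model = GammaRegressor()
model.fit(X_train, y_train)  # requires y_train > 0
y_predicted = model.predict(X_test)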
Alternatively, you could clip the predictions, setting each y_predicted value to the maximum of the model's prediction and 0.
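For example, as a one-line post-processing step (this clips the predictions themselves; it is not a scikit-learn option):

import numpy as np

y_predicted = np.clip(model.predict(X_test), 0, None)  # replace negatives with 0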
I am working on a classification problem whose evaluation metric is ROC AUC. So far I have tried using xgboost with different parameters. Here is the function I used to sample the data, and you can find the relevant notebook here (Google Colab).
def get_data(x_train, y_train, shuffle=False):
    if shuffle:
        total_train = pd.concat([x_train, y_train], axis=1)
        # generate n random numbers in range(0, len(data))
        n = np.random.randint(0, len(total_train), size=len(total_train))
        x_train = total_train.iloc[n]
        y_train = total_train.iloc[n]['is_pass']
        x_train.drop('is_pass', axis=1, inplace=True)
        # keep the first 1000 rows as test data
        x_test = x_train.iloc[:1000]
        # keep rows 1000 to 10000 as validation data
        x_valid = x_train.iloc[1000:10000]
        x_train = x_train.iloc[10000:]
        y_test = y_train[:1000]
        y_valid = y_train[1000:10000]
        y_train = y_train.iloc[10000:]
        return x_train, x_valid, x_test, y_train, y_valid, y_test
    else:
        # keep the first 1000 rows as test data
        x_test = x_train.iloc[:1000]
        # keep rows 1000 to 10000 as validation data
        x_valid = x_train.iloc[1000:10000]
        x_train = x_train.iloc[10000:]
        y_test = y_train[:1000]
        y_valid = y_train[1000:10000]
        y_train = y_train.iloc[10000:]
        return x_train, x_valid, x_test, y_train, y_valid, y_test
Here are the two outputs that I get after running on shuffled and non-shuffled data:
AUC with shuffling: 0.9021756235738453
AUC without shuffling: 0.8025162142685565
Can you find out what the issue is here?
The problem is in your implementation of shuffling: np.random.randint generates random numbers, but they can repeat, so the same events end up in both your train set and your test+valid sets. You should use np.random.permutation instead (and consider using np.random.seed to ensure reproducibility of the outcome); see the sketch below.
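A minimal sketch of that fix, as a drop-in replacement for the randint lines inside get_data (the seed value is an arbitrary assumption):

import numpy as np

np.random.seed(42)  # for reproducibility
# a permutation uses every index exactly once, so train/valid/test stay disjoint
n = np.random.permutation(len(total_train))
x_train = total_train.iloc[n]
y_train = total_train.iloc[n]['is_pass']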
Another note: you have a very large difference in performance between the training and validation/test sets (training shows an almost perfect ROC AUC). My guess is that this comes from the maximum tree depth (14) being too high for the size of the dataset (~60K rows) you have in hand; see the sketch below.
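For example, a more conservative configuration might look like this (max_depth=6 and n_estimators=200 are arbitrary illustrative values, not tuned ones):

from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

# shallower trees narrow the gap between training and validation AUC
model = XGBClassifier(max_depth=6, n_estimators=200)
model.fit(x_train, y_train)
print('Validation AUC:', roc_auc_score(y_valid, model.predict_proba(x_valid)[:, 1]))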
P.S. Thanks for sharing the Colaboratory link; I was not aware of it, but it is very useful.
I am a newbie in machine learning. Recently, I learnt how to calculate the confusion_matrix for the test set of a KNN classification, but I do not know how to calculate the confusion_matrix for the training set.
How can I compute the confusion_matrix for the training set from the following code?
The following code computes the confusion_matrix for the test set:
# Split test and train data
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(dataset.iloc[:, 1:10])  # .ix was removed from pandas; .iloc does the same positional indexing
y = np.array(dataset['benign_malignant'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Define classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
knn.fit(X_train, y_train)
# Predicting the test set results
y_pred = knn.predict(X_test)
# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)  # Calculate confusion matrix for the test set
For k-fold cross-validation:
I am also trying to find the confusion_matrix for the training set using k-fold cross-validation.
I am confused by this line: knn.fit(X_train, y_train).
Should I change this line, knn.fit(X_train, y_train)?
Where in the following code should I make changes to compute the confusion_matrix for the training set?
# Applying the k-fold method
from sklearn.model_selection import StratifiedKFold  # sklearn.cross_validation was removed; use model_selection
kfold = 10  # no. of folds (better to have this at the start of the code)
skf = StratifiedKFold(n_splits=kfold)
# Stratified KFold: this divides the data into k folds and makes sure that the
# class distribution in each fold follows the original input distribution
skfind = list(skf.split(X, y))  # skfind[i][0] -> train indices, skfind[i][1] -> test indices

# Supervised classification with k-fold cross-validation
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
conf_mat = np.zeros((2, 2))  # Initializing the confusion matrix
n_neighbors = 5  # better to have this at the start of the code
# 10-fold cross-validation
for i in range(kfold):
    train_indices = skfind[i][0]
    test_indices = skfind[i][1]
    clf = KNeighborsClassifier(n_neighbors=n_neighbors, metric='minkowski', p=2)
    X_train = X[train_indices]
    y_train = y[train_indices]
    X_test = X[test_indices]
    y_test = y[test_indices]
    # fit on the training set
    clf.fit(X_train, y_train)
    # predict the test data
    y_predict_test = clf.predict(X_test)  # output is labels and not indices
    # compute the confusion matrix
    cm = confusion_matrix(y_test, y_predict_test)
    print(cm)
    # conf_mat = conf_mat + cm
You don't have to make many changes:
# Predicting the train set results
y_train_pred = knn.predict(X_train)
cm_train = confusion_matrix(y_train, y_train_pred)
Here, instead of using X_test, we use X_train for prediction, and we then produce a confusion matrix from the predicted classes for the training dataset and the actual classes.
The idea behind a confusion matrix is essentially to count the number of predictions falling into four categories (if y is binary):
predicted True but actually False (a false positive)
predicted True and actually True (a true positive)
predicted False but actually True (a false negative)
predicted False and actually False (a true negative)
So as long as you have the two sets, predicted and actual, you can create the confusion matrix. All you have to do is predict the classes and use the actual classes to get the confusion matrix.
EDIT
In the cross-validation part, you can add a line, y_predict_train = clf.predict(X_train), to calculate the confusion matrix for each iteration. You can do this because, in the loop, you initialize clf every time, which essentially means resetting your model.
Also, in your code you are computing the confusion matrix each time, but you are not storing it anywhere; at the end you'll be left with the cm of just the last test set. Uncommenting the conf_mat = conf_mat + cm line gives you a running total across folds. A sketch of both changes is below.
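A minimal sketch of both changes inside the existing loop (variable names match the code above; conf_mat_train is an assumed new accumulator, initialized like conf_mat before the loop):

conf_mat_train = np.zeros((2, 2))  # add before the loop, next to conf_mat

# inside the for-loop, after clf.fit(X_train, y_train):
y_predict_train = clf.predict(X_train)  # predictions on the training fold
cm_train = confusion_matrix(y_train, y_predict_train)
conf_mat_train = conf_mat_train + cm_train  # accumulate the train-fold matrices

y_predict_test = clf.predict(X_test)
cm = confusion_matrix(y_test, y_predict_test)
conf_mat = conf_mat + cm  # accumulate the test-fold matrices too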