DictVectorizer learns more features for the training set - python

I have the following code which works as expected:
clf = Pipeline([
    ('vectorizer', DictVectorizer(sparse=False)),
    ('classifier', DecisionTreeClassifier(criterion='entropy'))
])
clf.fit(X[:size], y[:size])
score = clf.score(X_test, y_test)
I wanted to do the same logic without using Pipeline:
v = DictVectorizer(sparse=False)
Xdv = v.fit_transform(X[:size])
Xdv_test = v.fit_transform(X_test)
clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(Xdv[:size], y[:size])
clf.score(Xdv_test, y_test)
But I receive the following error:
ValueError: Number of features of the model must match the input. Model n_features is 8251 and input n_features is 14303
It seems that DictVectorizer learns more features for the test set than for the training set. I want to know how Pipeline handles this issue, and how I can accomplish the same without it.

Don't call fit_transform again.
Do this:
Xdv_test = v.transform(X_test)
When you call fit() or fit_transform(), the DictVectorizer forgets the features learnt during the previous call (on the training data) and fits again from scratch, hence the different number of features.
Pipeline handles the test data appropriately when you call clf.score(X_test, y_test) on the pipeline: at scoring time it calls only transform(), never fit(), on every intermediate step.
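For reference, a minimal corrected version of the non-pipeline code from the question (same X, y, size, X_test, y_test names) looks like this:
v = DictVectorizer(sparse=False)
Xdv = v.fit_transform(X[:size])  # learn the feature space from the training data only
Xdv_test = v.transform(X_test)   # map the test data onto that same feature space
clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(Xdv, y[:size])
clf.score(Xdv_test, y_test)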

Related

Keras prediction incorrect with scaler and feature selection

I built an application that trains a Keras binary classifier model (0 or 1) every x time (hourly, daily) given new data. The data preparation, training and testing work well, or at least as expected. It tests different features and scales them with MinMaxScaler (some values are negative).
On live predictions with one single data point, the values are unrealistic (around 0.9987 to 1 most of the time, which is inaccurate). Since the result should be how close to "1" the prediction is, getting such high numbers constantly raises alerts.
The code for live prediction is as follows.
current_df is a pandas DataFrame that contains the one row with the data pulled live plus the column headers. We select the "features" from it (we implement dynamic feature selection when training the model, so each model may use different features).
Get the features as a list:
# Convert literal str to list
features = ast.literal_eval(features)
Then select only the features that I need in the dataframe:
# Select the features
selected_df = current_df[features]
Get the values as a list:
# Get the values of the df
current_list = selected_df.values.tolist()[0]
Then I reshape it:
# Reshape to allow scaling and predicting
current_list = np.reshape(current_list, (-1, 1))
If I call "transform" instead of "fit_transform" in the line above, I get the following error: This MinMaxScaler instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
Reshape again:
# Reshape to be able to scale
current_list = np.reshape(current_list, (1, -1))
Load the model using Keras (model_location is a Path) and predict:
# Loads the model from the local folder
reconstructed_model = keras.models.load_model(model_location)
prediction = reconstructed_model.predict(current_list)
prediction = prediction.flat[0]
Updated
The data gets scaled using fit_transform and transform (MinMaxScaler, although it could be StandardScaler):
X_train = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns, index=X_train.index)
X_test = pd.DataFrame(scaler.transform(X_test), columns=X_test.columns, index=X_test.index)
And this is run when training the model (the "model" config is not shown):
# Compile the model
model.compile(optimizer=optimizer,
              loss=loss,
              metrics=['binary_accuracy'])
# build the model
model.fit(X_train, y_train, epochs=epochs, verbose=0)
# Evaluate using Keras built-in function
scores = model.evaluate(X_test, y_test, verbose=0)
testing_accuracy = scores[1]
# create model with sklearn KerasClassifier for evaluation
eval_model = KerasClassifier(model=model, epochs=epochs, batch_size=10, verbose=0)
# Evaluate model using RepeatedStratifiedKFold
accuracy = ML.evaluate_model_KFold(eval_model, X_test, y_test)
# Predict testing data
pred_test = model.predict(X_test, verbose=0)
pred_test = pred_test.flatten()
# extract the predicted class labels
y_predicted_test = np.where(pred_test > 0.5, 1, 0)
Regarding feature selection, the features are not always the same: I use either SelectKBest (10 or 15 features) or RFECV, and I select the trained model with the highest accuracy, which means the features can differ between models.
Is there anything I'm doing wrong here? I'm thinking maybe the scaling should be done before the feature selection, or there's some issue with the scaling itself (some values might be 0 when training and 100 in live use, and the features are not necessarily the same when scaling).
The issue seems to stem from the StandardScaler / MinMaxScaler. The following example shows how to apply the former. If separate scripts handle training and prediction, the scaler also needs to be serialized at train time and loaded at prediction time.
Set up a classification problem:
import pickle
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
from tensorflow.keras import layers

X, y = make_classification(n_samples=10_000)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
Fit a StandardScaler instance on the training set and use the same parameters to .transform the test set:
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)

# Train time: serialize the scaler to a pickle file.
with open("scaler.pkl", "wb") as fh:
    pickle.dump(scaler, fh)

# Test time: load the scaler and apply it to the test set.
with open("scaler.pkl", "rb") as fh:
    new_scaler = pickle.load(fh)
X_test = new_scaler.transform(X_test)
This way, the model is fit and evaluated on features with the same distribution (the final sigmoid gives the probability output that binary cross-entropy expects):
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    layers.Dense(100),
    layers.Dropout(0.1),
    layers.Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["binary_accuracy"])
model.fit(X_train, y_train, epochs=25)
y_pred = np.where(model.predict(X_test)[:, 0] > 0.5, 1, 0)
print(accuracy_score(y_test, y_pred))
# 0.8708
Alexander's answer is correct; I think there is just some confusion between testing and live prediction. What he said about the testing step applies equally to the live prediction step. After you've called scaler.fit_transform on your training set, add the following code to save the scaler:
with open("scaler.pkl", "wb") as fh:
pickle.dump(scaler, fh)
Then, during the live prediction step, you don't call fit_transform. Instead, you load the scaler saved during training and call transform:
with open("scaler.pkl", "rb") as fh:
new_scaler = pickle.load(fh)
# Load features, reshape them, etc
# Scaling step
current_list = new_scaler.transform(current_list)
# Features are scaled properly now, put the rest of your prediction code here
You call fit_transform only once per model, during the training step, on your training set. After that (during testing, or when calculating predictions after deployment) you never call it again; you only call transform. Treat the scaler as part of the model: you fit the model on the training set and then use that same model, never refitting it, during testing and live prediction. The same should be true for the scaler.
If you call scaler.fit_transform on live prediction features, it creates a new scaler that has no prior knowledge of the feature distribution on the training set.

Sklearn.pipeline producing incorrect result

I am trying to construct a pipeline with a StandardScaler() and LogisticRegression(). I get different results when I code it with and without the pipeline. Here's my code without the pipeline:
clf_LR = linear_model.LogisticRegression()
scalar = StandardScaler()
X_train_std = scalar.fit_transform(X_train)
X_test_std = scalar.fit_transform(X_test)
clf_LR.fit(X_train_std, y_train)
print('Testing score without pipeline: ', clf_LR.score(X_test_std, y_test))
My code with pipeline:
pipe_LR = Pipeline([('scaler', StandardScaler()),
                    ('classifier', linear_model.LogisticRegression())
                    ])
pipe_LR.fit(X_train, y_train)
print('Testing score with pipeline: ', pipe_LR.score(X_test, y_test))
Here is my result:
Testing score with pipeline: 0.821917808219178
Testing score without pipeline: 0.8767123287671232
While trying to debug the problem, it seemed the data was being standardized correctly. But the result with the pipeline matches the result of training the model on my original X_train data, without applying StandardScaler():
clf_LR_orig = linear_model.LogisticRegression()
clf_LR_orig.fit(X_train, y_train)
print('Testing score without Standardization: ', clf_LR_orig.score(X_test, y_test))
Testing score without Standardization: 0.821917808219178
Is there something I am missing in the construction of the pipeline?
Thanks very much!
As szymon-bednorz commented, generally we don't call fit_transform on test data; rather, we go for fit_transform(X_train) and transform(X_test). This works pretty well when your training and test data come from the same distribution and X_train is larger than X_test.
Further, as you found while debugging, fitting through the pipeline gives the same accuracy as fitting logistic regression on the unscaled data, which hints that X_train and X_test are already scaled. Although I am not sure about this.
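For reference, a corrected version of the question's non-pipeline code (same variable names) that should match the pipeline's score:
clf_LR = linear_model.LogisticRegression()
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # fit on training data only
X_test_std = scaler.transform(X_test)        # reuse the training statistics
clf_LR.fit(X_train_std, y_train)
print('Testing score without pipeline: ', clf_LR.score(X_test_std, y_test))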

Multilabel text classification with Sklearn

I have already tried everything I can think of to solve my multilabel text classification problem in Python, and I would really appreciate any help. I based my approach on this answer using MultiLabelBinarizer and on this web page.
I am trying to predict certain categories in a dataset written in Spanish, where I have 7 different labels (the dataset is shown here). Each row has a text message and either one or two labels, depending on the message.
df2=df.copy()
df2.drop(["mensaje", "pregunta_parseada", "tags_totales"], axis=1, inplace=True)
# Divide into train and test
X_train, X_test, y_train, y_test = train_test_split(df['pregunta_parseada'],
                                                    df2,
                                                    test_size=0.15,
                                                    random_state=42)
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
tfidf = TfidfVectorizer()  # instantiation missing from the original post
features_train = tfidf.fit_transform(X_train).toarray()
labels_train = y_train
features_test = tfidf.transform(X_test).toarray()
labels_test = y_test
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
lr = LogisticRegression(solver='sag', n_jobs=1)
clf = OneVsRestClassifier(lr)
# fit model on train data
clf.fit(features_train, labels_train)
# make predictions for validation set
y_pred = clf.predict(features_test)
So far, so good, but when I try to validate the model, it seems as if almost every category is classified as "None":
y_pred[2]
accuracy_score(y_test,y_pred)
Output
array([0, 0, 0, 0, 0, 0, 0])
0.2574626865671642
I also tried with MultiLabelBinarizer and had the same problem. What am I doing wrong? Trying with MultiLabelBinarizer produced the following results:
z=[["Generico"],["Mantenimiento"],["Motor"],["Generico"],["Motor"],
["Generico"],["Motor"],["Generico","Configuracion"],["Generico"],
["Motor"],["Consumo"],...,["Consumo"]]
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
y=mlb.fit_transform(z)
message = df["pregunta_parseada"].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(message,
                                                    y,
                                                    test_size=0.15,
                                                    random_state=42)
classifier = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, y_train)
predicted = classifier.predict(X_test)
accuracy_score(y_test, predicted)
#predicted[150]
all_labels = mlb.inverse_transform(predicted)
all_labels
With the following output
(),
(),
(),
(),
('Generico',),
(),
(),
(),
(),
('Compra',),
('Motor', 'extras'),
Thank you so much for your help
The problem, I think, is with your data; it could be too sparse.
I see you're using OneVsRestClassifier, which builds multiple binary classifiers to decide the tags.
I don't think there's a straightforward bug in your code; the choice of model is just not right for the task.
The problem with these binary classifiers is data imbalance: even if you have exactly the same number of samples (n) for each of the c classes, each binary classifier sees n positive samples versus n × (c − 1) negative samples.
So there is much more data in the negative class than in the positive class for every classifier. Being biased towards the negative class, each binary classifier tends to predict "rest" (the negative class in the one-vs-rest scenario) for most cases.
If you don't want to change your setup, here are a few things you can do:
Instead of predict, use predict_proba to get a probability per class and set a lower threshold (<0.5) to decide which set of classes to choose (see the sketch after this list).
Your test accuracy is pretty low, so re-adjust the threshold until you get better accuracy.
Use a deep-learning-based approach if possible, such as BERT, which will give much better performance.
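A minimal sketch of the thresholding idea, reusing the clf and features_test names from your first snippet (this works because LogisticRegression exposes predict_proba; the 0.3 threshold is just an example to tune):
probas = clf.predict_proba(features_test)   # shape (n_samples, n_labels)
threshold = 0.3                             # lower than the implicit 0.5 used by predict
y_pred = (probas >= threshold).astype(int)  # multi-hot prediction per label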

How to Save and Load Machine Learning (One-vs-Rest) Models (PYTHON)

I have here my code, which loops through each label (category) and creates a model for it. However, what I want is a setup that can accept new predictions that are inputs from a user.
I'm aware that the code below saves only the model fit for the last category in the loop. How can I fix this so that a model for each category is saved, so that when I load those models I can predict a label for a new text?
vectorizer = TfidfVectorizer(strip_accents='unicode', stop_words=stop_words,
                             analyzer='word', ngram_range=(1,3), norm='l2')
vectorizer.fit(train_text)
vectorizer.fit(test_text)
x_train = vectorizer.transform(train_text)
y_train = train.drop(labels = ['question_body'], axis=1)
x_test = vectorizer.transform(test_text)
y_test = test.drop(labels = ['question_body'], axis=1)
# Using pipeline for applying linearSVC and one vs rest classifier
SVC_pipeline = Pipeline([
    ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
])
for category in categories:
    print('... Processing {}'.format(category))
    # train the SVC model using X_dtm & y
    SVC_pipeline.fit(x_train, train[category])
    # compute the testing accuracy of SVC
    svc_prediction = SVC_pipeline.predict(x_test)
    print("SVC Prediction:")
    print(svc_prediction)
    print('Test accuracy is {}'.format(f1_score(test[category], svc_prediction)))
    print("\n")

# save the model to disk
filename = 'svc_model.sav'
pickle.dump(SVC_pipeline, open(filename, 'wb'))
There are multiple mistakes in your code.
First, you are fitting your TfidfVectorizer on both train and test:
vectorizer.fit(train_text)
vectorizer.fit(test_text)
This is wrong. Calling fit() is not incremental: it will not learn from both datasets when called twice. The most recent call to fit() makes the estimator forget everything from past calls. And you never fit (learn) anything on test data.
What you need to do is this:
vectorizer.fit(train_text)
Second, the pipeline does not work the way you think:
# Using pipeline for applying linearSVC and one vs rest classifier
SVC_pipeline = Pipeline([
    ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
])
Note that you are passing LinearSVC inside the OneVsRestClassifier, so it will be used automatically without the need for a Pipeline. The Pipeline does nothing here. A Pipeline is useful when you want to pass your data through multiple estimators sequentially. Something like this:
pipe = Pipeline([
    ('pca', pca),
    ('logistic', LogisticRegression())
])
The above pipe will pass the data to PCA, which transforms it; the transformed data is then passed to LogisticRegression, and so on.
Correct usage of a pipeline in your case would be:
SVC_pipeline = Pipeline([
    ('vectorizer', vectorizer),
    ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
])
See more examples here:
https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#examples-using-sklearn-pipeline-pipeline
Finally, you need to say more about your "categories" and show some examples of your data. You are not using y_train and y_test anywhere. Are the categories different from "question_body"?
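As for saving one model per category, a minimal sketch (using the SVC_pipeline, categories, x_train and train names from your question, plus import pickle) is to dump the pipeline right after each fit, with a category-specific filename:
import pickle

for category in categories:
    SVC_pipeline.fit(x_train, train[category])
    # each dump captures the pipeline as fitted for this particular category
    with open('svc_model_{}.sav'.format(category), 'wb') as fh:
        pickle.dump(SVC_pipeline, fh)
At prediction time, load the model for the category you need with pickle.load and call predict on the vectorized new text.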

Setting the n_estimators argument using **kwargs (Scikit Learn)

I am trying to follow this tutorial to learn machine-learning-based prediction, but I have two questions about it.
Ques1. How do I set n_estimators in the piece of code below? Otherwise it will always assume the default value.
from sklearn.cross_validation import KFold

def run_cv(X, y, clf_class, **kwargs):
    # Construct a kfolds object
    kf = KFold(len(y), n_folds=5, shuffle=True)
    y_pred = y.copy()
    # Iterate through folds
    for train_index, test_index in kf:
        X_train, X_test = X[train_index], X[test_index]
        y_train = y[train_index]
        # Initialize a classifier with key word arguments
        clf = clf_class(**kwargs)
        clf.fit(X_train, y_train)
        y_pred[test_index] = clf.predict(X_test)
    return y_pred
It is being called as:
from sklearn.svm import SVC
print "%.3f" % accuracy(y, run_cv(X,y,SVC))
Ques2. How do I use the already trained model (e.g. obtained from SVM) to predict more (test) data that I didn't use for training?
For your first question: in the above code you would call run_cv(X, y, clf_class, n_estimators=100); the **kwargs will pass this on to the classifier initializer in the step clf = clf_class(**kwargs). Note that the keyword must be one the classifier actually accepts: SVC has no n_estimators, but ensemble classifiers such as RandomForestClassifier do.
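For example (a hypothetical call, assuming the same X and y as in the tutorial):
from sklearn.ensemble import RandomForestClassifier as RF
y_pred = run_cv(X, y, RF, n_estimators=100)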
For your second question, the cross validation in the code you've linked is just for model evaluation, i.e. comparing different types of models and hyperparameters, and determining the likely effectiveness of your model in production. Once you've decided on your model, you need to refit the model on the whole dataset:
clf.fit(X,y)
Then you can get predictions with clf.predict or clf.predict_proba.
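The tutorial doesn't cover persistence, but a common way to keep the trained model around for later predictions is joblib (a sketch; the filename and X_new are placeholder names):
import joblib

joblib.dump(clf, 'svc_model.joblib')  # after clf.fit(X, y)
# later, possibly in another script:
clf = joblib.load('svc_model.joblib')
y_new = clf.predict(X_new)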
