I would like to use an RBM in scikit-learn. I can define and train an RBM much like any other classifier:
from sklearn.neural_network import BernoulliRBM
clf = BernoulliRBM(random_state=0, verbose=True)
clf.fit(X_train, y_train)
But I can't seem to find a function that makes a prediction for me. I am looking for an equivalent of one of the following in scikit-learn:
y_score = clf.decision_function(X_test)
y_score = clf.predict(X_test)
Neither function is present in BernoulliRBM.
BernoulliRBM is an unsupervised method, so you won't be able to do clf.fit(X_train, y_train), but rather clf.fit(X_train). It is mostly used for non-linear feature extraction whose output can be fed to a classifier. That would look like this:
from sklearn import linear_model
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

logistic = linear_model.LogisticRegression()
rbm = BernoulliRBM(random_state=0, verbose=True)
classifier = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])
So the features extracted by rbm are passed to the LogisticRegression model. Take a look here for a full example.
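Once the pipeline is built, you train and predict through it as a whole; a minimal sketch, assuming X_train is binary or scaled to [0, 1] as BernoulliRBM expects:
# Fits the RBM on X_train, then logistic regression on the extracted features.
classifier.fit(X_train, y_train)
# predict and decision_function now exist, courtesy of the final LogisticRegression step.
y_pred = classifier.predict(X_test)
y_score = classifier.decision_function(X_test)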
Related
I am reading Géron's Hands-On Machine Learning. On page 90, there is a section about the confusion matrix. He says that we need some predictions, so he does the following:
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train5, cv=3)
The object sgd_clf is a stochastic gradient descent classifier which was already fitted on the training data in the previous section. My question is: if it is already fitted, why is it better to split the training set into three parts, retrain sgd_clf on two of them, make predictions on the third, and so on? Why not just let it predict on the full X_train? Or pass in a fresh, unfitted classifier? Why pass the already-trained sgd_clf as input just to retrain it? I am a bit confused.
I see your confusion, and I think Géron doesn't mean you should use the fitted model for cross-validation. He just wants to compare the naive fitting method with cross-validation. Note that cross_val_predict clones the estimator internally, so the fitted state of sgd_clf is ignored: each fold is trained from scratch.
The complete code should be as follows:
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

# No cross-validation
sgd_clf1 = SGDClassifier(random_state=42)
sgd_clf1.fit(X_train, y_train)

# With cross-validation
sgd_clf2 = SGDClassifier(random_state=42)
cross_val_score(sgd_clf2, X_train, y_train, cv=3, scoring='accuracy')
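To see how this feeds the confusion matrix itself, here is a minimal sketch of the out-of-fold predictions step (assuming the same X_train and y_train as above):
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# cross_val_predict clones sgd_clf2 internally: each fold's model is trained from
# scratch on two folds and predicts the held-out third, so every prediction comes
# from a model that never saw that sample.
y_train_pred = cross_val_predict(sgd_clf2, X_train, y_train, cv=3)
print(confusion_matrix(y_train, y_train_pred))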
I'm confused about using cross_val_predict in a test data set.
I created a simple Random Forest model and used cross_val_predict to make predictions:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_predict, KFold
lr = RandomForestClassifier(random_state=1, class_weight="balanced", n_estimators=25, max_depth=6)
kf = KFold(train_df.shape[0], random_state=1)
predictions = cross_val_predict(lr, train_df[features_columns], train_df["target"], cv=kf)
predictions = pd.Series(predictions)
I'm confused on the next step here. How do I use what is learnt above to make predictions on the test data set?
I don't think cross_val_score or cross_val_predict fits your estimator before predicting; the fitting happens internally, on the fly, within each fold. If you look at the documentation (section 3.1.1.1), you'll see it never asks you to call fit first.
As @DmitryPolonskiy commented, the model has to be trained (with the fit method) before it can be used to predict.
# Train the model (a.k.a. `fit` training data to it).
lr.fit(train_df[features_columns], train_df["target"])
# Use the model to make predictions based on testing data.
y_pred = lr.predict(test_df[features_columns])
# Compare the predicted y values to actual y values.
accuracy = (y_pred == test_df["target"]).mean()
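Equivalently, you can lean on sklearn's metrics helper instead of comparing arrays by hand:
from sklearn.metrics import accuracy_score

# Same number as the manual comparison above.
accuracy = accuracy_score(test_df["target"], y_pred)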
cross_val_predict is a method of cross validation, which lets you determine the accuracy of your model. Take a look at sklearn's cross-validation page.
I am not sure the question was answered. I had a similar thought: I want to compare the results (accuracy, for example) with a method that does not apply CV. The CV-validated accuracy is measured on X_train and y_train, while the other method fits the model on X_train and y_train but is tested on X_test and y_test. So the comparison is not fair, since they are evaluated on different datasets.
What you can do is use the estimators returned by cross_validate. With return_estimator=True it returns a dict whose 'estimator' entry holds one fitted model per fold:
from sklearn.model_selection import cross_validate

cv_results = cross_validate(lr, train_df[features_columns], train_df["target"], cv=kf, return_estimator=True)
y_pred = cv_results["estimator"][0].predict(test_df[features_columns])  # e.g. the first fold's model
accuracy = (y_pred == test_df["target"]).mean()
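Since you get one fitted estimator per fold, another option is to average their predicted probabilities rather than picking a single fold; a sketch, reusing cv_results from above:
import numpy as np

# Average the per-fold class-probability estimates, then take the most likely class.
probas = np.mean([est.predict_proba(test_df[features_columns]) for est in cv_results["estimator"]], axis=0)
y_pred = cv_results["estimator"][0].classes_[probas.argmax(axis=1)]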
I am trying to reproduce the example here but using RandomForestClassifer.
I can't see how to transform this part of the code
# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
I tried
# Learn to predict each class against the other
classifier = OneVsRestClassifier(RandomForestClassifier())
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
but I get
AttributeError: Base estimator doesn't have a decision_function attribute.
Is there a workaround?
Well, you should know what decision_function is used for. It is exposed by margin-based classifiers such as SVMs, where it gives the distance of your data points from the hyperplane that separates the data; a RandomForestClassifier has no such hyperplane, so calling it there makes no sense and it isn't implemented. You can use the other methods supported by RFC instead: use predict_proba if you want the probabilities of your classified data points.
Here is the reference for the supported functions
Just to mention: RFC does support oob_decision_function_ (available when it is constructed with oob_score=True), which is the out-of-bag estimate on your training set.
So just replace your line with one of the following:
y_score = classifier.fit(X_train, y_train).predict_proba(X_test)
or
y_score = classifier.fit(X_train, y_train).predict(X_test)
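If you are reproducing the multiclass ROC example, the predict_proba scores slot straight into roc_curve; a minimal sketch, assuming y_test was label-binarized as in that example:
from sklearn.metrics import roc_curve, auc

# y_score from predict_proba has shape (n_samples, n_classes); compute one ROC per class.
fpr, tpr, _ = roc_curve(y_test[:, 2], y_score[:, 2])  # class index 2, as an example
print(auc(fpr, tpr))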
I am using sklearn for SVM training, and I am using cross-validation to evaluate the estimator and avoid overfitting.
I split the data into two parts. Train data and test data. Here is the code:
import numpy as np
from sklearn import cross_validation
from sklearn import datasets
from sklearn import svm

# Load the data and hold out 40% as a test set.
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=0
)
clf = svm.SVC(kernel='linear', C=1)
scores = cross_validation.cross_val_score(clf, X_train, y_train, cv=5)
print(scores)
Now I need to evaluate the estimator clf on X_test.
clf.score(X_test, y_test)
Here I get an error saying that the model has not been fitted with fit(). But isn't the model normally fitted inside cross_val_score? What is the problem?
cross_val_score is basically a convenience wrapper for the sklearn cross-validation iterators. You give it a classifier and your whole (training + validation) dataset and it automatically performs one or more rounds of cross-validation by splitting your data into random training/validation sets, fitting the training set, and computing the score on the validation set. See the documentation here for an example and more explanation.
The reason why clf.score(X_test, y_test) raises an exception is that cross_val_score performs the fitting on a copy of the estimator rather than the original (see the use of clone(estimator) in the source code here). Because of this, clf remains unchanged outside of the function call, and is therefore still unfitted when you call clf.score.
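The fix is simply to fit clf yourself before scoring on the held-out data; a minimal sketch continuing the code above:
# cross_val_score only fitted clones, so clf itself is still unfitted here.
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))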
I'm working on an example of applying a Restricted Boltzmann Machine to the Iris dataset. Essentially, I'm trying to make a comparison between RBM and LDA. LDA seems to produce reasonably correct output, but the RBM doesn't. Following a suggestion, I binarized the feature inputs using sklearn.preprocessing.Binarizer, and also tried different threshold parameter values. I tried several different ways to apply binarization, but none seemed to work for me.
Below is my modified version of the code, based on the version posted by the user covariance.
Any helpful comments are greatly appreciated.
from sklearn import linear_model, datasets, preprocessing
from sklearn.cross_validation import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.neural_network import BernoulliRBM
from sklearn.lda import LDA
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:,:2] # we only take the first two features.
Y = iris.target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=10)
# Models we will use
rbm = BernoulliRBM(random_state=0, verbose=True)
binarizer = preprocessing.Binarizer(threshold=0.01,copy=True)
X_binarized = binarizer.fit_transform(X_train)
hidden_layer = rbm.fit_transform(X_binarized, Y_train)
logistic = linear_model.LogisticRegression()
logistic.coef_ = hidden_layer
classifier = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])
lda = LDA(n_components=3)
#########################################################################
# Training RBM-Logistic Pipeline
logistic.fit(X_train, Y_train)
classifier.fit(X_binarized, Y_train)
#########################################################################
# Get predictions
print "The RBM model:"
print "Predict: ", classifier.predict(X_test)
print "Real: ", Y_test
print
print "Linear Discriminant Analysis: "
lda.fit(X_train, Y_train)
print "Predict: ", lda.predict(X_test)
print "Real: ", Y_test
RBM and LDA are not directly comparable, as RBM doesn't perform classification on its own. You are using it as a feature-engineering step with logistic regression at the end, whereas LDA is itself a classifier, so the comparison isn't very meaningful.
BernoulliRBM in scikit-learn only models binary inputs (or values in [0, 1], which it treats as probabilities). The iris features have no sensible binarization, so you aren't going to get any meaningful outputs.
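If you still want to push iris through the RBM, one workaround is to scale the features into [0, 1] instead of thresholding them; a sketch reusing the pipeline from the question (the RBM fit may still be poor, for the reasons above):
from sklearn.preprocessing import MinMaxScaler

# Scale each feature into [0, 1] so BernoulliRBM can interpret values as probabilities.
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
classifier.fit(X_train_scaled, Y_train)
print(classifier.predict(X_test_scaled))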