Obtaining out-of-bag errors with scikit-learn's RandomForestClassifier - python

I'm trying to implement out-of-bag samples so that I won't have to partition my data into a training set and test set for random forest. Looking around, it seems that RandomForestClassifier takes in a boolean parameter oob_score, but I'm not sure if this is what will help me. (As far as I know, this only outputs an R^2 estimate?)
In R's randomForest package, predict.randomForest automatically uses the out-of-bag samples if you don't pass it a new data set to predict on. Is there an equivalent way in Python?
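For what it's worth, here is a minimal sketch of what oob_score=True exposes on a RandomForestClassifier (iris is used purely for illustration). For a classifier the OOB score is an accuracy estimate; R^2 is only reported by the regressor:
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Each sample is scored only by the trees that did not see it during bagging
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

print(rf.oob_score_)              # out-of-bag accuracy
print(rf.oob_decision_function_)  # per-sample OOB class probabilities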

Related

Using sklearn's cross_val_score with different training and testing datasets

I have a quick question about the following short snippet of code (the version of sklearn from which cross_val_score and LinearDiscriminantAnalysis are imported is 1.1.1):
cv_results = cross_val_score(LinearDiscriminantAnalysis(), data, isTarget, cv=kfold, scoring='accuracy')
I am trying to train a LinearDiscriminantAnalysis ML algorithm on the 'data' and 'isTarget' variables, which are, respectively, a numpy array of the features of the samples in my ML dataset and a list of which samples are targets (1) or non-targets (0). kfold is just the method used for scoring the algorithm; it isn't important here.
My question is this: I am trying to score this algorithm by training it on 'data' and 'isTarget', but I would like to test it on a different dataset, 'data_val' and 'isTarget_val'. However, cross_val_score does not have parameters for training an algorithm on one dataset and testing it on another. I've been searching for other functions that will do this, and I feel that there is a really simple answer that I just can't find.
Can someone help me out? Thanks :)
This is how cross-validation is designed to work. The cv argument you are supplying specifies that you want to do K-Fold cross-validation, which means that the entirety of your dataset will be used for both training and testing in K different folds.
You can read up more on cross-validation here.
You can accomplish this using a PredefinedSplit (docs) as the cv argument.
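A minimal sketch of that approach, assuming data/isTarget and data_val/isTarget_val are the NumPy arrays described in the question:
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import PredefinedSplit, cross_val_score

# Stack training and validation data; -1 marks rows that are only ever
# used for training, 0 marks rows that form the single test fold.
X_all = np.concatenate([data, data_val])
y_all = np.concatenate([isTarget, isTarget_val])
test_fold = np.concatenate([
    -np.ones(len(data), dtype=int),
    np.zeros(len(data_val), dtype=int),
])

cv = PredefinedSplit(test_fold)
scores = cross_val_score(LinearDiscriminantAnalysis(), X_all, y_all,
                         cv=cv, scoring='accuracy')
print(scores)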

Gridsearch CV give different best parameters when trained in different data

I am looking for the best way to tune a RandomForestClassifier and an MLPClassifier in sklearn. The problem is that GridSearchCV gives me slightly different best parameters each time I run my code. I assume that this happens because each time my train and test data are split differently. I have 2 questions:
1) Does getting slightly different best parameters each time mean that my data is noisy or something like that?
2) Is there any way to choose the best parameters that fit all my training sets, or at least the most common best parameters?
Bonus question: I want to classify 3 variables. My overall classification accuracy_score(y_test1, pred1) is around 57%, which I assume is low. I mostly care about the high-probability classifications. When I take predict_proba(X_test1) > 0.8 and count the correct and false classifications, I get a score of 0.90, which is satisfactory. Should I be satisfied with this process? When I run on new test data, will my model's high-probability predictions achieve the 0.90 score?
Best regards,
Nick
1) You can use a seed to maintain reproducibility of results. Try using the train_test_split function in sklearn to split your data and specify a value for the random_state parameter. See here. Having different distributions of the training and testing data on separate runs and receiving different results does not mean there is noise in the signal.
2) Can you elaborate here? The best_estimator_ and best_params_ attributes of the GridSearchCV object, once it has been fit, should contain the data you need.
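As a hedged illustration of point 1) and of reading off the grid-search results afterwards (X, y and the param_grid values are placeholders, not taken from the question):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Fix both the split and the model's own randomness so repeated runs agree
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

param_grid = {'n_estimators': [100, 300], 'max_depth': [None, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring='accuracy')
search.fit(X_train, y_train)

print(search.best_params_)                            # chosen hyper-parameters
print(search.best_estimator_.score(X_test, y_test))   # held-out accuracy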

Do I need to extract feature vectors from MNIST before using Kmeans

I am practicing with MNIST by sklearn.cluster.KMeans.
Intuitively, I just fit the training data to the sklearn function. But I have got pretty low accuracy. I am wondering what step I have missed. Should I extract feature vectors by PCA in the first place? Or should I change a bigger n_clusters?
from sklearn import cluster
from sklearn.metrics import accuracy_score
clf = cluster.KMeans(init='k-means++', n_clusters=10, random_state=42)
clf.fit(X_train)
y_pred=clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
I got poor 0.137 as result. Any recommendation? Thanks!
How are you passing the images in? Are the pixels flattened or kept in 2D format? Are the pixel values normalized to between 0 and 1?
Since you are clustering, I would advise against PCA and would opt for t-SNE instead, which preserves neighbourhood information, but you should not need either before running K-Means.
The best way to debug is to see what your fitted model is predicting as the clusters. You can see an example here:
https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html
With this info, you can get an idea of where mistakes might be. Good luck!
Adding a note: K-Means is probably not the best model for your purposes anyway. It is meant for unsupervised contexts, i.e. clustering data, whereas MNIST is a classification use case. KNN would be a better option while still letting you experiment with neighbours and such.
Here is an example I created with KNN: https://gist.github.com/andrew-x/0bb997b129647f3a7b7c0907b7e836fc
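For comparison, a minimal KNN sketch (n_neighbors=5 is just a starting point; X_train, y_train, X_test, y_test are assumed to be the flattened MNIST splits from the question):
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy is meaningful here, unlike raw cluster labels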
Unless I'm missing something: you are comparing cluster labels, which are arbitrarily numbered 0-9, to ground-truth labels, whose numbering is not arbitrary. The digit 0s in your data might not end up in cluster number 0, yet that is exactly the comparison accuracy_score makes. Clustering results are evaluated differently because of this. Some options to get a correct evaluation (a sketch follows these options):
Generate a contingency matrix and plot it
Calculate the adjusted Rand index
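A sketch of both options, assuming y_test and y_pred from the question's snippet (the majority-vote mapping at the end is just one way to turn clusters into an accuracy):
import numpy as np
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.cluster import contingency_matrix

# Label-agnostic comparison of cluster assignments to the true digits
print(adjusted_rand_score(y_test, y_pred))

# Rows = true digits, columns = cluster ids
cm = contingency_matrix(y_test, y_pred)
print(cm)

# Map each cluster to its most common true digit, then score accuracy
mapping = cm.argmax(axis=0)
print(np.mean(mapping[y_pred] == y_test))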

sklearn calibrated classifier with random forest

Scikit-learn has very useful classifier wrappers, CalibratedClassifier and CalibratedClassifierCV, which try to make sure that the predict_proba method of a classifier really predicts a probability and not just an arbitrary (albeit perhaps well-ranked) number between zero and one.
However, when using random forests it is customary to use oob_decision_function_ to determine the performance on the training data, but this is no longer available when using the calibrated models. The calibration should therefore work well for new data but not for the training data. How can we evaluate performance on the training data to determine, e.g., overfitting?
Apparently there really was no solution to this, and so I made a pull request to scikit-learn.
The problem was that the out-of-bag predictions are created during learning. Therefore, in the CalibratedClassifierCV each of the sub-classifiers does have its own OOB decision function; however, that decision function is calculated on a fold of the data. It is therefore necessary to store each OOB prediction (keeping NaN values for samples that are not in the fold), then convert all the predictions using the calibration transformation, and finally average the calibrated OOB predictions to create an updated OOB prediction.
As mentioned, I created a pull request at https://github.com/scikit-learn/scikit-learn/pull/11175. It will probably be a while before it is merged into the package, though, so if anyone really needs to use it then feel free to use my fork of scikit-learn at https://github.com/yishaishimoni/scikit-learn.

Break up Random forest classification fit into pieces in python?

I have almost 900,000 rows of data that I want to run through scikit-learn's RandomForestClassifier. The problem is that when I try to fit the model my computer freezes completely, so what I want to try is fitting the model 50,000 rows at a time, but I'm not sure if this is possible.
So the code I have now is
# This code freezes my computer
rfc.fit(X, Y)
# what I want is
model = rfc.fit(X.iloc[0:50000], Y.iloc[0:50000])
model = rfc.fit(X.iloc[0:100000], Y.iloc[0:100000])
model = rfc.fit(X.iloc[0:150000], Y.iloc[0:150000])
# ... and so on
Feel free to correct me if I'm wrong, but I assume you're not using the most current version of scikit-learn (0.16.1 as of writing this), that you're on a Windows machine and using n_jobs=-1 (or a combination of all three). So my suggestion would be to first upgrade scikit-learn or set n_jobs=1 and try fitting on the whole dataset.
If that fails, take a look at the warm_start parameter. By setting it to True and gradually incrementing n_estimators you can fit additional trees on subsets of your data:
from sklearn.ensemble import RandomForestClassifier

# First build 100 trees on the first chunk
clf = RandomForestClassifier(n_estimators=100, warm_start=True)
clf.fit(X.iloc[0:50000], Y.iloc[0:50000])
# add another 100 trees, fitted on the first two chunks (rows 0-100000)
clf.set_params(n_estimators=200)
clf.fit(X.iloc[0:100000], Y.iloc[0:100000])
# and so forth...
clf.set_params(n_estimators=300)
clf.fit(X.iloc[0:150000], Y.iloc[0:150000])
Another possibility is to fit a new classifier on each chunk and then simply average the predictions from all classifiers, or to merge the trees into one big random forest as described here.
Another method, similar to the one linked in Andreus' answer, is to grow the trees in the forest individually.
I did this a while back: basically I trained a number of DecisionTreeClassifiers one at a time on different partitions of the training data. I saved each model via pickling, and afterwards loaded them into a list which I assigned to the estimators_ attribute of a RandomForestClassifier object. You also have to take care to set the rest of the RandomForestClassifier attributes appropriately.
I ran into memory issues when I built all the trees in a single Python script. If you use this method and run into that issue, there's a work-around I posted in the linked question.
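A hedged sketch of that assembly approach; the make_classification data, the chunking, and the trick of fitting a tiny forest first so the bookkeeping attributes (classes_, n_classes_, n_outputs_, and so on) are populated are all illustrative, and the exact attributes required can vary between scikit-learn versions:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the real data; in practice these would be your partitions
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=5,
                           random_state=0)
chunks = [(X[i:i + 1000], y[i:i + 1000]) for i in range(0, 3000, 1000)]

# One decision tree per partition (pickling/unpickling omitted for brevity)
trees = [DecisionTreeClassifier(max_features='sqrt', random_state=i).fit(Xc, yc)
         for i, (Xc, yc) in enumerate(chunks)]

# Fit a tiny forest so the ancillary attributes exist and are consistent with
# the installed scikit-learn version, then swap in the separately trained trees
forest = RandomForestClassifier(n_estimators=1)
forest.fit(X[:100], y[:100])
forest.estimators_ = trees
forest.n_estimators = len(trees)

print(forest.predict(X[:5]))
Predictions from the assembled forest then average the individual trees' class probabilities as usual.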
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle

# Shuffle so that every chunk contains samples from each class,
# which is required when using warm_start with classifiers (see below)
iris = load_iris()
X, y = shuffle(iris.data, iris.target, random_state=0)

### RandomForestClassifier with warm_start
rfc = RandomForestClassifier(n_estimators=10, warm_start=True)
rfc.fit(X[:50], y[:50])        # first 10 trees on chunk 1
print(rfc.score(X, y))
rfc.n_estimators += 10         # grow 10 more trees on chunk 2
rfc.fit(X[50:100], y[50:100])
print(rfc.score(X, y))
rfc.n_estimators += 10         # and 10 more on chunk 3
rfc.fit(X[100:150], y[100:150])
print(rfc.score(X, y))
Below is the difference between warm_start and partial_fit.
When fitting an estimator repeatedly on the same dataset, but for multiple parameter values (such as to find the value maximizing performance, as in grid search), it may be possible to reuse aspects of the model learnt from the previous parameter value, saving time. When warm_start is true, the existing fitted model attributes are used to initialise the new model in a subsequent call to fit.
Note that this is only applicable for some models and some parameters, and even some orders of parameter values. For example, warm_start may be used when building random forests to add more trees to the forest (increasing n_estimators) but not to reduce their number.
partial_fit also retains the model between calls, but differs: with warm_start the parameters change and the data is (more-or-less) constant across calls to fit; with partial_fit, the mini-batch of data changes and model parameters stay fixed.
There are cases where you want to use warm_start to fit on different, but closely related data. For example, one may initially fit to a subset of the data, then fine-tune the parameter search on the full dataset. For classification, all data in a sequence of warm_start calls to fit must include samples from each class.
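For contrast, a minimal sketch of the partial_fit pattern using SGDClassifier (scikit-learn's random forest has no partial_fit); X and Y are assumed to be NumPy arrays like those in the question (use .iloc slicing for DataFrames):
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hyper-parameters stay fixed; the data arrives in mini-batches of 50,000 rows
clf = SGDClassifier(random_state=0)
classes = np.unique(Y)  # every class must be declared on the first call
for start in range(0, len(X), 50000):
    clf.partial_fit(X[start:start + 50000], Y[start:start + 50000],
                    classes=classes)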
Some algorithms in scikit-learn implement partial_fit() methods, which is what you are looking for. There are random forest implementations that support this; however, I believe the scikit-learn implementation is not one of them.
However, this question and answer may have a workaround that would work for you. You can train forests on different subsets, and assemble a really big forest at the end:
Combining random forest models in scikit learn
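A hedged sketch of that combining idea, merging two independently trained forests by concatenating their fitted trees (the make_classification data is illustrative; both chunks need to contain every class so that classes_ agree):
from copy import deepcopy

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, random_state=0)

# Two forests trained on different halves of the data
rf_a = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:1000], y[:1000])
rf_b = RandomForestClassifier(n_estimators=50, random_state=1).fit(X[1000:], y[1000:])

# Merge by concatenating the fitted trees and updating the estimator count
rf_all = deepcopy(rf_a)
rf_all.estimators_ += rf_b.estimators_
rf_all.n_estimators = len(rf_all.estimators_)

print(rf_all.score(X, y))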
