What is the difference between XGBoost, ExtraTreesClassifier, and RandomForestClassifier? - python

I am new to all these methods and am trying to get a simple answer, or perhaps a pointer to a high-level explanation somewhere on the web. My googling has only returned Kaggle sample code.
Are ExtraTrees and RandomForest essentially the same? And XGBoost uses boosting when it chooses the features for any particular tree, i.e. it samples the features. But then how do the other two algorithms select their features?
Thanks!

Extra-trees (ET), aka extremely randomized trees, is quite similar to random forest (RF). Both are bagging methods that aggregate fully grown decision trees. RF will consider splitting on only a subset of the features (e.g. a third of them), but it evaluates every possible break point within those features and picks the best. ET, by contrast, evaluates only a few random break points per feature and picks the best of those. ET can either bootstrap samples for each tree or use all samples; RF must use bootstrapping to work well.
xgboost is an implementation of gradient boosting and can work with decision trees, typically smaller trees. Each tree is trained to correct the residuals of the previously trained trees. Gradient boosting can be more difficult to train, but can achieve a lower model bias than RF. For noisy data, bagging is likely to be the most promising; for low-noise data with complex structure, boosting is likely to be the most promising.
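For a concrete sense of how the three differ in practice, here is a minimal sketch that cross-validates all of them on a synthetic dataset; the dataset, parameter values, and number of trees are purely illustrative, not recommendations.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier  # requires the separate xgboost package

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    # Bagging: each tree sees a bootstrap sample; the best split is searched
    # over all thresholds of a random subset of features.
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    # Extra randomness: thresholds are drawn at random and the best of those
    # random cuts is kept; by default each tree sees the full sample.
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    # Boosting: shallow trees fitted sequentially to the previous trees' errors.
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1),
}

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())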

Related

Feature importance with LightGBM

I have trained a model using several algorithms, including Random Forest from scikit-learn and LightGBM, and these models perform similarly in terms of accuracy and other stats.
The issue is the inconsistent behavior between these two algorithms in terms of feature importance. I used default parameters, and I know that they use different methods for calculating feature importance, but I suppose the features most highly correlated with the target should always have the most influence on the model's predictions. The Random Forest result makes more sense to me because the highly correlated features appear at the top, while that is not the case for LightGBM.
Is there a way to explain this behavior, and is the LightGBM result trustworthy enough to be presented?
[Plots in the original post: Random Forest feature importance, LightGBM feature importance, and correlation with the target]
I have had a similar issue. The default feature importance for LGBM is based on 'split', and when I changed this to 'gain', the plots gave similar results.
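To illustrate that switch, here is a small sketch using LightGBM's scikit-learn wrapper; the data and model settings are placeholders, not the poster's actual setup.

import lightgbm as lgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# importance_type defaults to "split" (number of times a feature is used);
# "gain" ranks features by the total loss reduction they provide instead.
model = lgb.LGBMClassifier(importance_type="gain")
model.fit(X, y)

print(model.feature_importances_)                                   # gain-based
print(model.booster_.feature_importance(importance_type="split"))   # count-based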
Well, GBM is often shown to perform better than random forest, especially LightGBM. A properly tuned LightGBM will most likely beat a random forest in both performance and speed.
GBM advantages:
More actively developed. A lot of new features have been built into modern GBM implementations (xgboost, lightgbm, catboost), which improves their performance, speed, and scalability.
GBM disadvantages:
Number of parameters to tune
Tendency to overfit easily
If you aren't completely sure the hyperparameters are tuned correctly for the LightGBM, stick with the Random Forest; this will be easier to use and maintain.

Does a tree taken from Random Forests have reference value?

I use scikit-learn in Python to run RandomForestClassifier(). Because I want to visualize the random forest to understand the relationships between different features, I use export_graphviz() to achieve this goal.
# Take the first tree from the fitted forest and export it to Graphviz format
estimator1 = best_model1.estimators_[0]

from sklearn.tree import export_graphviz
export_graphviz(estimator1,
                'tree_from_optimized_forest.dot',
                rounded=True,
                feature_names=X_train.columns,
                class_names=["No", "Yes"],
                filled=True)

# Render the .dot file to a PNG with the Graphviz command-line tool
from subprocess import call
call(['dot', '-Tpng', 'tree_from_optimized_forest.dot',
      '-o', 'tree_from_optimized_forest.png', '-Gdpi=200'])

# Display the rendered image in the notebook (no mode argument is needed)
from IPython.display import Image
Image('tree_from_optimized_forest.png')
However, unlike a decision tree, a random forest produces many trees, the number of which depends on n_estimators in RandomForestClassifier().
best_model1 = RandomForestClassifier(n_estimators=100,
                                     criterion='gini',
                                     random_state=42)
Besides, because DecisionTreeClassifier() uses all of the samples to produce just one tree, we can directly explain the results from that single tree.
In contrast, a random forest is trained to build several different trees, which then vote to decide the result. Moreover, the contents of these trees differ because random forests use bootstrap sampling, bagging, out-of-bag estimation, and so on.
Therefore, I want to ask: if I visualize only one of the trees produced by RandomForestClassifier(), does that tree have any reference value?
Can I directly interpret the content of this tree as the analysis result for the whole dataset? If not, is DecisionTreeClassifier() the only way to analyze the relationships between features through a visualized image?
Thanks a lot!!
There has always been this trade-off in machine learning between a model's interpretability and its complexity, and your post relates directly to it.
One of the models that is quite simple but used intensively for its interpretability is the decision tree, but since single trees are not complex enough (they suffer from bias), they usually fail to learn very complex functions; hence people came up with random forest classifiers. Random forests reduce the bias of the vanilla decision tree and add more variance, but unfortunately in that process they take away the straightforward interpretability.
Yet there are still some tools that can help you gain insight into the learnt function and the contribution of the features; one of them is treeinterpreter, and you can learn more about it in this article.
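As a rough illustration of what treeinterpreter provides (it is a third-party package, separate from scikit-learn), the sketch below decomposes a random forest prediction into a bias term plus per-feature contributions; the iris data is used only for convenience.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti  # pip install treeinterpreter

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# For each sample: prediction = bias (class prior) + sum of feature contributions
prediction, bias, contributions = ti.predict(rf, X[:1])
print(prediction)      # predicted class probabilities
print(bias)            # training-set class distribution
print(contributions)   # array of shape (n_samples, n_features, n_classes)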

Classification results depend on random_state?

I want to implement an AdaBoost model using scikit-learn (sklearn). My question is similar to another question, but it is not exactly the same. As far as I understand, the random_state variable described in the documentation is for randomly splitting the training and testing sets, according to the previous link. So if I understand correctly, my classification results should not depend on the seed; is that correct? Should I be worried if my classification results turn out to depend on the random_state variable?
Your classification scores will depend on random_state. As @Ujjwal rightly said, it is used for splitting the data into training and test sets. Not just that: a lot of algorithms in scikit-learn use random_state to select subsets of features, subsets of samples, initial weights, etc.
For example:
Tree-based estimators (like DecisionTreeClassifier and RandomForestClassifier) use random_state for the random selection of features and samples.
In clustering estimators like KMeans, random_state is used to initialize the cluster centers.
SVMs use it for initial probability estimation
Some feature selection algorithms also use it for initial selection
And many more...
It's mentioned in the documentation that:
If your code relies on a random number generator, it should never use functions like numpy.random.random or numpy.random.normal. This approach can lead to repeatability issues in tests. Instead, a numpy.random.RandomState object should be used, which is built from a random_state argument passed to the class or function.
Do read the following questions and answers for better understanding:
Choosing random_state for sklearn algorithms
confused about random_state in decision tree of scikit learn
It does matter. When your training set differs, your trained model also changes. For a different subset of the data you can end up with a classifier that is slightly different from one trained on some other subset.
Hence, you should use a constant seed like 0 or another fixed integer, so that your results are reproducible.
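A minimal sketch of what this looks like in practice, assuming an AdaBoost workflow like the one in the question; the data and split sizes are placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Fixing random_state in both the split and the estimator pins down every
# source of randomness, so repeated runs give identical scores.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
clf = AdaBoostClassifier(random_state=42).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # the same value on every run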

DecisionTreeClassifier's fit() returns different trees with same data

I have been playing around with sklearn a bit and following some simple examples online using the iris data.
I've now begun to play with some other data. I'm not sure if this behaviour is correct and I'm misunderstanding something, but every time I call fit(x, y) I get completely different tree data. So when I then run predictions I get varying results (differing by around 10%), i.e. 60%, then 70%, then 65%, etc.
I ran the code below twice to output two trees so I could read them in Word. I tried searching for values from one doc in the other and I couldn't find a lot of them.
I kind of assumed fit(x, y) would always return the same tree; if that's the case, then I assume my training data of floats is punking me.
from sklearn import tree

clf_dt = tree.DecisionTreeClassifier()
clf_dt.fit(x_train, y_train)

# Export the fitted tree so it can be compared between runs
with open("output2.dot", "w") as output_file:
    tree.export_graphviz(clf_dt, out_file=output_file)
There is a random component to the algorithm, which you can read about in the user guide. The relevant part:
The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
If you want to achieve the same results each time, set the random_state parameter to an integer (by default it is None) and the fitted tree will be identical on every run.
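For example, a small sketch (using the iris data for convenience) showing that two fits with the same integer seed produce identical trees:

from sklearn import tree
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

clf_a = tree.DecisionTreeClassifier(random_state=0).fit(X, y)
clf_b = tree.DecisionTreeClassifier(random_state=0).fit(X, y)

# With a fixed seed the exported structures match node for node
print(tree.export_text(clf_a) == tree.export_text(clf_b))  # True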

When using multiple classifiers - How to measure the ensemble's performance? [SciKit Learn]

I have a classification problem (predicting whether a sequence belongs to a class or not), for which I decided to use multiple classification methods, in order to help filter out the false positives.
(The problem is in bioinformatics - classifying protein sequences as being neuropeptide precursor sequences. Here's the original article if anyone's interested, and the code used to generate the features and train a single predictor.)
Now, the classifiers have roughly similar performance metrics (83-94% accuracy/precision/etc. on the training set for 10-fold CV), so my 'naive' approach was to simply use multiple classifiers (Random Forests, ExtraTrees, SVM (linear kernel), SVM (RBF kernel) and GRB) and a simple majority vote.
MY question is:
How can I get the performance metrics for the different classifiers and/or their votes predictions?
That is, I want to see if using the multiple classifiers improves my performance at all, or which combination of them does.
My intuition is maybe to use the ROC score, but I don't know how to "combine" the results and get it from a combination of classifiers. (That is, to see what the ROC curve is for each classifier alone [already known], then to see the ROC curve or AUC for the training data using combinations of classifiers.)
(I currently filter the predictions using predict_proba with the Random Forests and ExtraTrees methods, then I arbitrarily keep only results with a predicted score above 0.85. An additional layer of filtering is "how many classifiers agree on this protein's positive classification".)
Thank you very much!!
(The website implementation, where we're using the multiple classifiers: http://neuropid.cs.huji.ac.il/
The whole shebang is implemented using scikit-learn and Python. Citations and all!)
To evaluate the performance of the ensemble, simply follow the same approach as you would normally. However, you will want to create the 10-fold data-set partitions first, and for each fold, train your entire ensemble on that fold's training portion, measure the accuracy, rinse and repeat with the other folds, and then compute the accuracy of the ensemble. So the key difference is not to train the individual algorithms with their own separate k-fold cross-validation runs when evaluating the ensemble. The important thing is not to let the ensemble see the test data, either directly or by letting one of its algorithms see the test data.
Note also that RF and Extra Trees are already ensemble algorithms in their own right.
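As a rough sketch of that protocol, scikit-learn's VotingClassifier can wrap the individual models so that each cross-validation fold trains every member and the majority vote on the same partition; the chosen base models and data here are illustrative only, not the poster's actual setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="hard",  # simple majority vote over the predicted labels
)

# Each fold fits all members and the vote on the same training partition,
# so neither the members nor the ensemble ever see their own test fold.
print(cross_val_score(ensemble, X, y, cv=10).mean())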
An alternative approach (again making sure the ensemble never sees the test data) is to take the probabilities and/or labels output by your classifiers and feed them into another classifier (say a DT, RF, SVM, or whatever) that produces a prediction by combining the best guesses from these other classifiers. This is termed "stacking".
You can use a linear regression for stacking. For each of the 10 folds, you can split the data into:
8 training sets
1 validation set
1 test set
Optimise the hyper-parameters of each algorithm using the training set and validation set, then stack your predictions by fitting a linear regression - or a logistic regression - over the validation set. Your final model will be p = a_0 + a_1 p_1 + … + a_K p_K, where K is the number of classifiers, p_k is the probability given by model k, and a_k is the weight of model k. You can also use the predicted labels directly if a model doesn't give you probabilities.
If your models are the same, you can optimise the parameters of the models and the weights at the same time.
If your models have obvious differences, you can use different bins with different parameters for each; for example, one bin could be for short sequences and another for long sequences, or for different types of proteins.
You can use whatever metric you want, as long as it makes sense, just as for non-blended algorithms.
You may want to look at the 2007 BellKor solution to the Netflix challenge, in particular the section on blending. In 2008 and 2009 they used more advanced techniques, which may also be interesting for you.
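For completeness, here is a hedged sketch of the stacking idea above using scikit-learn's StackingClassifier with a logistic-regression blender (which learns the weights a_k from out-of-fold predictions); the base models and data are placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # learns the blending weights a_k
    cv=5,  # base models feed out-of-fold predictions to the blender
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))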
