Random forest tree growing algorithm - python

I'm doing a Random Forest implementation (for classification), and I have some questions regarding the tree growing algorithm mentioned in literature.
When training a decision tree, there are 2 criteria to stop growing a tree:
a. Stop when there are no more features left to split a node on.
b. Stop when the node has all samples in it belonging to the same class.
Based on that,
1. Consider growing one tree in the forest. When splitting a node of the tree, I randomly select m of the M total features, and then from these m features I find that one feature with maximum information gain. After I've found this one feature, say f, should I remove this feature from the feature list, before proceeding down to the children of the node? If I don't remove this feature, then this feature might get selected again down the tree.
If I implement the algorithm without removing the feature selected at a node, then the only way to stop growing the tree is when its leaves become "pure". When I did this, I hit Python's "maximum recursion depth exceeded" error, because the tree couldn't reach that "pure" condition soon enough.
The RF literature, even the papers written by Breiman, says that each tree should be grown to the maximum. What does this mean?
2. At a node split, after selecting the best feature to split on (by information gain), what should the split threshold be? One approach is to have no threshold and create one child node for every unique value of the feature; but I have continuous-valued features too, so that would mean creating one child node per sample!

Q1
You shouldn't remove features from M. Otherwise the tree will not be able to detect some types of relationships (e.g. linear relationships, where the same feature is useful at several depths).
Maybe you can stop earlier: with your current condition the recursion can go all the way down to leaves with only 1 sample, which has no statistical significance. It's better to stop when, say, the number of samples at a leaf is <= 3.
Q2
For continuous features, you can bin them into groups and use the bin edges as candidate splitting points.
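A minimal pure-Python sketch of that idea, using equal-frequency bin edges as candidate thresholds (the function names here are made up for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_binned_split(values, labels, n_bins=4):
    """Try thresholds at equal-frequency bin edges and return the one
    with the highest information gain."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    vs = [values[i] for i in order]
    ys = [labels[i] for i in order]
    parent = entropy(ys)
    best_gain, best_thr = 0.0, None
    for b in range(1, n_bins):
        cut = b * len(vs) // n_bins
        thr = (vs[cut - 1] + vs[cut]) / 2        # midpoint at the bin edge
        left, right = ys[:cut], ys[cut:]
        child = (len(left) * entropy(left) + len(right) * entropy(right)) / len(ys)
        if parent - child > best_gain:
            best_gain, best_thr = parent - child, thr
    return best_thr, best_gain
```

With only a handful of bins the candidate set stays small even for features with many distinct values, which avoids the one-child-per-sample problem from the question.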

Related

Decision Tree leaf node condition for numeric dataset

I am asked to implement a Random Forest Classifier, which to my understanding is just a bunch of Decision Trees through which the test data is run after training; the classification is then determined by majority voting across all the trees.
This is all well and good, and I even understand that entropy determines which feature to use next. What I am struggling to understand, is that for numeric data, how do I determine the conditions?
An example, is whether a person will play golf depending on weather conditions. Given 3 features (outlook, humidity, wind), and a classification label (play -> yes or no), we first start with outlook:
Outlook -> Overcast (pure), Sunny, Rain
From Sunny, choose Humidity next: High, Normal (pure)
From Outlook to Rain, choose Wind (last feature): Weak (pure), Strong
Essentially, in this case the values of the features are taken individually. But what happens, when I have a dataset with a bunch of decimals?
(Some of) the data:
In this case I would start by first looking at the label (0 or 1), then progress to the feature with the highest entropy in each. But how do I know the conditions for going to a leaf node? Or even, how many children can a parent have?
A poor diagram to aid my question:
For a theoretical answer to your question, I would start by recommending this excellent visual tutorial.
http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
In terms of implementation, there are several ways to go about it. You could try the following algorithm (inspired by this answer):
For each column (feature) in your dataset, start by sorting it. At every point where the class changes, split your dataset. Say, for example, that your data points change from class 0 to 1 when feature A is equal to 5. All data points with A < 5 will belong to class 0, and the ones with A > 5 will belong to class 1. If your dataset is not that simple, you can then proceed the way you would with a categorical decision tree, for example by calculating the entropy at each splitting candidate. You then compute the data points that arrive at each child node, and proceed recursively.
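The steps above can be sketched in pure Python (the helper names are hypothetical, and this only handles a single feature column):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def candidate_thresholds(values, labels):
    """Sort by feature value and yield midpoints wherever the class changes."""
    pairs = sorted(zip(values, labels))
    for (v1, y1), (v2, y2) in zip(pairs, pairs[1:]):
        if y1 != y2 and v1 != v2:
            yield (v1 + v2) / 2

def best_threshold(values, labels):
    """Pick the class-change midpoint with the lowest weighted child entropy."""
    best, best_score = None, float("inf")
    for thr in candidate_thresholds(values, labels):
        left = [y for v, y in zip(values, labels) if v <= thr]
        right = [y for v, y in zip(values, labels) if v > thr]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if score < best_score:
            best, best_score = thr, score
    return best
```

This always produces a binary split (`<= threshold` vs `> threshold`), which also answers the "how many children" question: with numeric thresholds each internal node has exactly two.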

Determine the amount of splits in a decision tree of sklearn

I developed a decision tree (ensemble) in Matlab by using the "fitctree"-function (link: https://de.mathworks.com/help/stats/classificationtree-class.html).
Now I want to rebuild the same ensemble in Python. Therefore I am using the sklearn library with the "DecisionTreeClassifier" (link: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html).
In Matlab I defined the maximum amount of splits in each tree by setting:
'MaxNumSplits' — Maximal number of decision splits in the "fitctree"-function.
So with this the number of branch nodes can be defined.
Now as I understand the attributes of the "DecisionTreeClassifier" object, there isn't any option like this. Am I right? All I found to control the number of nodes in each tree is "max_leaf_nodes", which obviously controls the number of leaf nodes.
And secondly: What does "max_depth" exactly control? If it's not "None" what does the integer "max_depth = int" stand for?
I appreciate your help and suggestions. Thank you!
As far as I know there is no option to limit the total number of splits (internal nodes) in scikit-learn. However, you can set max_leaf_nodes to MaxNumSplits + 1 and the result should be equivalent.
Assume our tree has n_splits split nodes and n_leaves leaf nodes. If we split a leaf node, we turn it into a split node and add two new leaf nodes, so n_splits and n_leaves both increase by 1. We usually start with only the root node (n_splits = 0, n_leaves = 1), and every split increases both numbers. In consequence, the number of leaf nodes always satisfies n_leaves == n_splits + 1.
As for max_depth: the depth is how many "layers" the tree has. In other words, the depth is the maximum number of edges between the root and the furthest leaf node. The max_depth parameter restricts this depth. It prevents further splitting of a node if it is too far down the tree. (You can think of max_depth as a limit on the number of consecutive splits before a decision is made.)
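A short sketch of the MaxNumSplits ↔ max_leaf_nodes equivalence, assuming scikit-learn is available (the dataset and parameter values are arbitrary choices for illustration):

```python
# Emulate Matlab's MaxNumSplits via scikit-learn's max_leaf_nodes
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

max_num_splits = 4                       # Matlab-style limit on split nodes
clf = DecisionTreeClassifier(max_leaf_nodes=max_num_splits + 1, random_state=0)
clf.fit(X, y)

tree = clf.tree_
# leaves are marked by children_left[i] == -1 (TREE_LEAF)
n_leaves = sum(1 for c in tree.children_left if c == -1)
n_splits = tree.node_count - n_leaves
print(n_splits, n_leaves)                # n_leaves == n_splits + 1 always holds
```

Note that when max_leaf_nodes is set, scikit-learn grows the tree best-first (most impurity reduction first), which may order the splits differently than Matlab even though the split count is capped the same way.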

Difference between min_samples_split and min_samples_leaf in sklearn DecisionTreeClassifier

I was going through sklearn class DecisionTreeClassifier.
Looking at the parameters of the class, we have two: min_samples_split and min_samples_leaf. The basic idea behind them looks similar; you specify a minimum number of samples required to decide whether a node should be a leaf or be split further.
Why do we need two parameters when one seems to imply the other? Is there any reason or scenario that distinguishes them?
From the documentation:
The main difference between the two is that min_samples_leaf guarantees a minimum number of samples in a leaf, while min_samples_split can create arbitrary small leaves, though min_samples_split is more common in the literature.
To get a grasp of this piece of documentation I think you should make the distinction between a leaf (also called external node) and an internal node. An internal node will have further splits (also called children), while a leaf is by definition a node without any children (without any further splits).
min_samples_split specifies the minimum number of samples required to split an internal node, while min_samples_leaf specifies the minimum number of samples required to be at a leaf node.
For instance, if min_samples_split = 5, and there are 7 samples at an internal node, then the split is allowed. But let's say the split results in two leaves, one with 1 sample, and another with 6 samples. If min_samples_leaf = 2, then the split won't be allowed (even though the internal node has 7 samples) because one of the resulting leaves would have fewer than the minimum number of samples required to be at a leaf node.
As the documentation referenced above mentions, min_samples_leaf guarantees a minimum number of samples in every leaf, no matter the value of min_samples_split.
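The two checks can be sketched as a single predicate (a toy illustration of the logic, not sklearn's actual code):

```python
def split_allowed(n_node, n_left, n_right,
                  min_samples_split=5, min_samples_leaf=2):
    """Mirror the two constraints: the node must be large enough to
    consider splitting, and both children must be large enough to be leaves."""
    return (n_node >= min_samples_split
            and n_left >= min_samples_leaf
            and n_right >= min_samples_leaf)

print(split_allowed(7, 1, 6))   # False: left child smaller than min_samples_leaf
print(split_allowed(7, 3, 4))   # True: both constraints pass
```

This reproduces the worked example above: a 7-sample node passes min_samples_split = 5, but a 1/6 split is vetoed by min_samples_leaf = 2 while a 3/4 split is not.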
Both parameters will produce similar results, the difference is the point of view.
The min_samples_split parameter evaluates the number of samples in the node: if the number is less than the minimum, the split is avoided and the node becomes a leaf.
The min_samples_leaf parameter checks before a node is generated: if a possible split would result in a child with fewer samples, the split is avoided (since the minimum number of samples for the child to be a leaf has not been reached) and the node is replaced by a leaf.
In all cases, when a leaf holds samples of more than one class, the final class is the majority class among the samples that reached it during training.
In decision trees, there are many rules one can set up to configure how the tree should end up. Roughly, there are 'design'-oriented rules like max_depth. max_depth is a bit like building a house: the architect asks you how many floors you want.
Other rules are 'defensive' rules, often called stopping rules; min_samples_leaf and min_samples_split belong to this type. The explanations already provided are very well said. My two cents: these rules interact while the tree is being built. For example, with min_samples_leaf = 100 you may very well end up with a tree where all the terminal nodes are much larger than 100, because other rules kicked in and stopped the tree from expanding first.
Let's say that min_samples_split = 9 and min_samples_leaf = 3.
At the internal node, the right split is not allowed (3 < 9) while the left split is allowed (10 > 9).
But because min_samples_leaf = 3 and one resulting leaf would have only 2 samples (the right one), the node with 10 samples will not be split into 2 and 8.
Look at the leaf with 3 samples (from the first split).
If we had set min_samples_leaf = 4 instead of 3, even the first split (13 into 10 and 3) would not have happened.
min_samples_split gives the minimum number of samples required to split an internal node. If an integer, it is taken as the minimum count directly; if a float, it is interpreted as a fraction of the total number of samples. The default is 2.
min_samples_leaf is the minimum number of samples required to be at a leaf node. If an integer, it is taken as the minimum count directly; if a float, it is interpreted as a fraction. The default is 1.

Random Forest implementation in Python

Hi all!
Could anybody give me an advice on Random Forest implementation in Python? Ideally I need something that outputs as much information about the classifiers as possible, especially:
1. which vectors from the training set are used to train each decision tree;
2. which features are selected at random in each node of each tree, which samples from the training set end up in this node, and which feature(s) and threshold are selected for the split.
I have found quite a few implementations; the most well known one is probably from scikit, but it is not clear how to do (1) and (2) there (see this question). Other implementations seem to have the same problems, except the one from OpenCV, but it is in C++ (the Python interface does not cover all methods for Random Forests).
Does anybody know something that satisfies (1) and (2)? Alternatively, any idea how to improve scikit implementation to get the features (1) and (2)?
Solved: checked the source code of sklearn.tree._tree.Tree. It has good comments (which fully describe the tree):
children_left : int*
children_left[i] holds the node id of the left child of node i.
For leaves, children_left[i] == TREE_LEAF. Otherwise,
children_left[i] > i. This child handles the case where
X[:, feature[i]] <= threshold[i].
children_right : int*
children_right[i] holds the node id of the right child of node i.
For leaves, children_right[i] == TREE_LEAF. Otherwise,
children_right[i] > i. This child handles the case where
X[:, feature[i]] > threshold[i].
feature : int*
feature[i] holds the feature to split on, for the internal node i.
threshold : double*
threshold[i] holds the threshold for the internal node i.
You can get nearly all the information in scikit-learn. What exactly was the problem? You can even visualize the trees using dot.
I don't think you can find out which split candidates were sampled at random, but you can find out which were selected in the end.
Edit: look at the tree_ property of the decision tree. I agree it is not very well documented; there really should be an example visualizing the leaf distributions etc. You can look at the visualization function to get an understanding of how to access the properties.
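For example, a fitted tree's arrays can be read directly (a sketch, assuming scikit-learn; leaf nodes are marked by children_left[i] == -1, i.e. TREE_LEAF, matching the docstring quoted above):

```python
# Read split features and thresholds out of a fitted decision tree
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

t = clf.tree_
for i in range(t.node_count):
    if t.children_left[i] == -1:                  # TREE_LEAF
        print(f"node {i}: leaf, class counts {t.value[i].ravel()}")
    else:
        print(f"node {i}: split on X[:, {t.feature[i]}] <= {t.threshold[i]:.3f}")
```

This covers point (2) for the splits that were ultimately chosen; as noted above, the randomly sampled candidate features that lost are not recorded anywhere.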

How to track the progress of a tree traversal?

I have a tree. It has a flat bottom. We're only interested in the bottom-most leaves, but this is roughly how many leaves there are at the bottom...
2 x 1600 x 1600 x 10 x 4 x 1600 x 10 x 4
That's ~13,107,200,000,000 leaves? Because of the size (the calculation performed on each leaf seems unlikely to be optimised to ever take less than one second) I've given up thinking it will be possible to visit every leaf.
So I'm thinking I'll build a 'smart' leaf crawler which inspects the most "likely" nodes first (based on results from the ones around it). So it's reasonable to expect the leaves to be evaluated in branches/groups of neighbours, but the groups will vary in size and distribution.
What's the smartest way to record which leaves have been visited and which have not?
You don't give a lot of information, but I would suggest tuning your search algorithm to help you keep track of what it's seen. If you had a global way of ranking leaves by "likelihood", you wouldn't have a problem since you could just visit leaves in descending order of likelihood. But if I understand you correctly, you're just doing a sort of hill climbing, right? You can reduce storage requirements by searching complete subtrees (e.g., all 1600 x 10 x 4 leaves in a cluster that was chosen as "likely"), and keeping track of clusters rather than individual leaves.
It sounds like your tree geometry is consistent, so depending on how your search works, it should be easy to merge your nodes upwards... e.g., keep track of level 1 nodes whose leaves have all been examined, and when all children of a level 2 node are in your list, drop the children and keep their parent. This might also be a good way to choose what to examine: If three children of a level 3 node have been examined, the fourth and last one is probably worth examining too.
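That merge-upwards idea can be sketched for a complete tree whose nodes are labeled so that parent(i) = (i - 1) // arity (a hypothetical layout chosen just for illustration; your real geometry will differ):

```python
def mark_visited(visited, counts, node, parent, arity):
    """visited: set of fully-explored subtree roots.
    counts: per-parent tally of fully-explored children.
    When all children of a parent are done, drop them and keep the parent."""
    visited.add(node)
    if node == 0:                                  # root: nothing to merge into
        return
    p = parent(node)
    counts[p] = counts.get(p, 0) + 1
    if counts[p] == arity:
        # all children of p explored: replace them with p itself
        for c in range(p * arity + 1, p * arity + 1 + arity):
            visited.discard(c)
        del counts[p]
        mark_visited(visited, counts, p, parent, arity)
```

For a binary tree with nodes 0..6, visiting the leaves 3, 4, 5, 6 collapses everything into the single entry {0}, so the memory used tracks the frontier of explored clusters rather than the astronomical number of leaves.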
Finally, a thought: Are you really, really sure that there's no way to exclude some solutions in groups (without examining every individual one)? Problems like sudoku have an astronomically large search space, but a good brute-force solver eliminates large blocks of possibilities without examining every possible 9 x 9 board. Given the scale of your problem, this would be the most practical way to attack it.
It seems that you're looking for a quick and memory-efficient way to do a membership test. If so, and if you can cope with some false positives, go for a Bloom filter.
Bottom line: use Bloom filters in situations where your data set is really big, all you need is to check whether a particular element exists in the set, and a small chance of false positives is tolerable.
Some implementation for Python should exist.
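For instance, a minimal Bloom filter needs only the standard library (a sketch; the bit-array size and double-hash scheme here are arbitrary choices, and real libraries tune them to a target false-positive rate):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    Membership tests may give false positives, never false negatives."""
    def __init__(self, m_bits=1 << 20, k=5):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # derive k independent probe positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))
```

At your scale the appeal is the fixed memory footprint: the filter above uses 128 KiB no matter how many leaf identifiers you add, though the false-positive rate climbs as it fills.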
Hope this helps.
Maybe this is too obvious, but you could store your results in a similar tree. Since your computation is slow, the results tree should not grow out of hand too quickly. Then just look up if you have results for a given node.
