How are categorical features encoded in LightGBM? - python

LightGBM has support for categorical variables. I would like to know how it encodes them. It doesn't seem to be one-hot encoding, since the algorithm is pretty fast (I tried it with data that took a long time to one-hot encode).

https://github.com/Microsoft/LightGBM/issues/699#issue-243313657
The basic idea is to sort the histogram according to its accumulated values (sum_gradient / sum_hessian), then find the best split on the sorted histogram, just like for numerical features.
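In practice this means categorical columns are passed to LightGBM directly rather than one-hot encoded first. A minimal sketch of the Python API, with a made-up dataframe and parameter values:

```python
import lightgbm as lgb
import pandas as pd

# Hypothetical toy data; "city" is the categorical column.
df = pd.DataFrame({
    "city": ["london", "paris", "paris", "tokyo", "london", "tokyo"],
    "size": [10, 25, 13, 40, 8, 35],
    "y":    [0, 1, 1, 1, 0, 1],
})

# LightGBM expects categoricals as the pandas "category" dtype (or integer
# codes), not as one-hot columns -- internally it builds a histogram per
# category and sorts it by sum_gradient / sum_hessian to find the best split.
df["city"] = df["city"].astype("category")

train_set = lgb.Dataset(df[["city", "size"]], label=df["y"],
                        categorical_feature=["city"])
model = lgb.train({"objective": "binary", "min_data_in_leaf": 1, "verbose": -1},
                  train_set)
```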

Related

PCA on text data in python

I want to use PCA to reduce the features (columns) in a dataset, but one of the features is a text feature.
Our idea was to convert the text feature to numeric values; how can we do this?
Or is there another solution for using PCA on text features?
For example, a dataframe where one column contains free text.
For the text feature, you can build numeric vectors from the text.
A list of vectorizers in scikit-learn that work on text can be found here: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_extraction.text.
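A minimal sketch of that idea, assuming made-up example documents: vectorize the text with TF-IDF, then reduce the dimensionality (TruncatedSVD is the usual stand-in for PCA on sparse text matrices).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["cheap flights to paris", "machine learning with python",
        "paris hotel deals", "python data science course"]

# Turn the text into a (sparse) numeric matrix...
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(docs)

# ...then reduce it. TruncatedSVD works directly on sparse matrices,
# which is why it is usually preferred over PCA for text features.
svd = TruncatedSVD(n_components=2, random_state=0)
X_reduced = svd.fit_transform(X_text)
print(X_reduced.shape)  # (4, 2)
```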
PCA finds the axes of the data with the greatest variance. For this, all the inputs must be numerical.
You could take the length of the text string; that would provide a number, but it is unlikely to provide any useful information. Ultimately, it is up to you to decide what you want from the data, and that will inform how to transform it. If your text field is categorical, one way is to create dummy variables that split the categorical variable into multiple binary variables. You can do this in pandas with the get_dummies method.
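A small sketch of that dummy-variable approach, with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"],
                   "price": [10, 12, 9, 11]})

# get_dummies replaces the categorical column with one binary column per level.
df_encoded = pd.get_dummies(df, columns=["color"])
print(df_encoded)
```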
In my opinion, a better question to ask is why you want to reduce your feature set and if the text is even relevant to your analysis.

One-hot encoder: what is the industry norm, to encode before the train/test split or after?

I know there are some people who have answered this, but I'm still trying to get it straight.
I'm still a little bit confused about the one-hot encoder. I was just thinking: if we were to encode before splitting, there shouldn't be any 'information leakage' into the test set. So why do people advocate doing the encoding after? Isn't the one-hot encoder just used to convert categorical variables into binary ones?
And if we were to encode after splitting, the results can vary quite significantly, as was pointed out here: Scikit-Learn One-hot-encode before or after train/test split
I'm just wondering what is the industry norm.
Thanks
Specifically for the One-Hot-Encoder, it should not make much difference, except when there are categories that are not represented in a split.
But in that case, there is information leakage. By splitting into training/test data, you are trying to simulate how well your model (and that includes all feature selection/transformation!) generalizes. If there are categories present in the test set but not in the training set, then arguably there can also be categories in the real world that your whole data set does not contain. In that case you are fooling yourself if you encode before splitting.
There are cases where you would want to encode before, though. If you have few data points and are sampling to get balanced splits, you might want to ensure each split gets all the categories, or something like that. In such cases it might be useful to encode before.
In general, always keep in mind that feature selection and transformation are part of your model. One-hot encoding in particular depends on the data, so that applies even more.
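A pattern consistent with this advice is to fit the encoder on the training split only and have it tolerate categories it has never seen. A minimal sketch, assuming scikit-learn and a made-up single-column dataset:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# Hypothetical data: one categorical column.
df = pd.DataFrame({"animal": ["cat", "dog", "dog", "cat", "human", "cat"]})
train, test = train_test_split(df, test_size=0.33, random_state=0)

# Fit on the training split only. handle_unknown="ignore" encodes categories
# that appear only in the test split as an all-zero row instead of raising.
enc = OneHotEncoder(handle_unknown="ignore")
X_train = enc.fit_transform(train[["animal"]])
X_test = enc.transform(test[["animal"]])
```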
One-hot encoding is a technique for representing the class of a data item. It is a replacement for integer encoding, where you would just assign integers. A simple example would be:
Let's say, we have 3 classes: Cat, Dog, Human
In integer encoding we would give the classes as (say):
Cat - 1, Dog - 2, Human - 3
In one-hot encoding, we would encode these classes as:
Cat - [1,0,0], Dog - [0,1,0], Human - [0,0,1]
So as you can see, one-hot encoding only works for categorical data!
Hence, the whole dataset has to be labeled in a homogeneous manner, so the one-hot encoding has to be performed even before the train/test split.
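The Cat/Dog/Human example above, as a small scikit-learn sketch (0-based integer codes rather than 1, 2, 3; the data is made up):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

labels = np.array(["Cat", "Dog", "Human", "Cat"])

# Integer encoding: Cat -> 0, Dog -> 1, Human -> 2
integer_codes = LabelEncoder().fit_transform(labels)

# One-hot encoding: Cat -> [1,0,0], Dog -> [0,1,0], Human -> [0,0,1]
one_hot = OneHotEncoder().fit_transform(labels.reshape(-1, 1)).toarray()

print(integer_codes)  # [0 1 2 0]
print(one_hot)
```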
I come to the same conclusion as #em_bis_me. I think most people do it that way simply because they saw it in a notebook where somebody else did it before, and they just copy and paste. (Kaggle is the best community to see this: a ton of people copy and paste the work of others without stopping to consider whether it is right or wrong.)
Here you can see an example from Kaggle where they do the encoding after the split.
https://www.kaggle.com/code/prashant111/logistic-regression-classifier-tutorial/notebook
Here you have the same dataset with the encoding before the split.
https://github.com/Enrique1987/machine_learning/blob/master/1_Classification_algorithms/01_Logistic_Regresion_Australian_Weather.ipynb
Of course: Same results.

What is TargetEncoder and BinaryEncoder in sklearn category_encoders?

I've been looking for a way to vectorize categorical variables and came across category_encoders. It supports multiple encoding methods.
I tried TargetEncoder and BinaryEncoder, but the docs don't explain much about how they work.
I would really appreciate it if anyone could explain how the target encoder and binary encoder work, and how they differ from one-hot encoding.
Target encoding maps the categorical variable to the mean of the target variable. As it uses the target, steps must be taken to avoid overfitting (usually done with smoothing).
Binary encoding converts each integer into binary digits, with each binary digit getting its own column. It is essentially a form of feature hashing.
Both help with lowering the cardinality of categorical variables which helps improve some model performance, most notably with tree-based models.
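A minimal sketch of both encoders with the category_encoders package; the column name, target values, and smoothing value are made up for illustration.

```python
import pandas as pd
import category_encoders as ce

# Hypothetical data: a categorical column and a binary target.
df = pd.DataFrame({"city": ["london", "paris", "paris", "tokyo", "london"],
                   "target": [1, 0, 1, 1, 0]})

# TargetEncoder: each category is replaced by a (smoothed) mean of the target.
te = ce.TargetEncoder(cols=["city"], smoothing=1.0)
city_target = te.fit_transform(df[["city"]], df["target"])

# BinaryEncoder: categories are integer-coded, the integer is written in binary,
# and each bit gets its own column -- roughly ceil(log2(n_categories)) columns
# instead of one column per category.
be = ce.BinaryEncoder(cols=["city"])
city_binary = be.fit_transform(df[["city"]])

print(city_target)
print(city_binary)
```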

How can I apply the k-means algorithm on data from a database, and afterwards use a confusion matrix?

I am trying to apply a k-means algorithm on data from my database. First of all, I am taking data like this:
So my questions are: how can I convert a column with strings to numbers, like "trash"=1, "car"=2, "truck"=3, and can I use all columns and values for the clustering?
The best you can do is to use the LabelEncoder from the sklearn library.
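A minimal sketch of that suggestion, assuming a hypothetical string column (note the caveat in the next answer before feeding the result into k-means):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"object": ["trash", "car", "truck", "car", "trash"]})

# Maps each string to an arbitrary integer, e.g. car -> 0, trash -> 1, truck -> 2.
le = LabelEncoder()
df["object_code"] = le.fit_transform(df["object"])
print(df)
```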
KMeans doesn't need "magic numbers".
It needs proper continuous variables, where the mean is meaningful. It's not the proper algorithm for your data. Minimizing least squares of encoded "dictionary numbers" is not sound.

Machine Learning dataset with many discrete features

I am working with a medical data set that contains many variables with discrete outputs, for example: type of anesthesia, infection site, diabetes y/n. To deal with this, I have just been converting them into multiple columns of ones and zeros and then removing one column to make sure there is no direct correlation between them, but I was wondering whether there is a more efficient way of doing this.
It depends on the purpose of the transformation. Converting categories to numerical labels may not make sense if the ordinal representation does not correspond to the logic of the categories. In this case, the "one-hot" encoding approach you have adopted is the best way to go, if (as I surmise from your post) the intention is to use the generated variables as the input to some sort of regression model. You can achieve what you are looking to do using pandas.get_dummies.
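What the answer describes maps to a single pandas call; drop_first reproduces the "remove one column" step the poster does by hand. A sketch with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({"anesthesia": ["general", "local", "regional", "local"],
                   "diabetes":   ["y", "n", "n", "y"]})

# One binary column per category level, with the first level dropped to avoid
# the redundant (perfectly collinear) column.
df_encoded = pd.get_dummies(df, columns=["anesthesia", "diabetes"], drop_first=True)
print(df_encoded)
```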
