Change values in Pandas dataframe to one-hot encoding [duplicate] - python

I have a machine learning classification problem where 80% of the variables are categorical. Must I use one-hot encoding if I want to use a classifier for the classification? Can I pass the data to a classifier without the encoding?
I am trying to do the following for feature selection:
I read the train file:
num_rows_to_read = 10000
train_small = pd.read_csv("../../dataset/train.csv", nrows=num_rows_to_read)
I change the type of the categorical features to 'category':
non_categorial_features = ['orig_destination_distance',
                           'srch_adults_cnt',
                           'srch_children_cnt',
                           'srch_rm_cnt',
                           'cnt']

for categorical_feature in list(train_small.columns):
    if categorical_feature not in non_categorial_features:
        train_small[categorical_feature] = train_small[categorical_feature].astype('category')
I use one hot encoding:
train_small_with_dummies = pd.get_dummies(train_small, sparse=True)
The problem is that the third step often gets stuck, even though I am using a powerful machine.
Thus, without the one-hot encoding I can't do any feature selection to determine the importance of the features.
What do you recommend?

Approach 1: You can use pandas' pd.get_dummies.
Example 1:
import pandas as pd
s = pd.Series(list('abca'))
pd.get_dummies(s)
Out[]:
     a    b    c
0  1.0  0.0  0.0
1  0.0  1.0  0.0
2  0.0  0.0  1.0
3  1.0  0.0  0.0
Example 2:
The following will transform a given column into one-hot columns. Use the prefix argument to keep the dummy names distinct when you encode multiple columns.
import pandas as pd
df = pd.DataFrame({
'A':['a','b','a'],
'B':['b','a','c']
})
df
Out[]:
   A  B
0  a  b
1  b  a
2  a  c
# Get one-hot encoding of column B
one_hot = pd.get_dummies(df['B'])
# Drop column B as it is now encoded
df = df.drop('B', axis=1)
# Join the encoded df
df = df.join(one_hot)
df
Out[]:
   A  a  b  c
0  a  0  1  0
1  b  1  0  0
2  a  0  0  1
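Equivalently, get_dummies can do the drop-and-join in one call via its columns argument (a small sketch, not part of the original answer):
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'a', 'c']})
# Encode only column B and keep the rest of the frame intact;
# prefix names the dummy columns B_a, B_b, B_c.
df = pd.get_dummies(df, columns=['B'], prefix='B')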
Approach 2: Use Scikit-learn
Using a OneHotEncoder has the advantage of being able to fit on some training data and then transform on some other data using the same instance. We also have handle_unknown to further control what the encoder does with unseen data.
Given a dataset with three features and four samples, we let the encoder find the maximum value per feature and transform the data to a binary one-hot encoding.
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
              handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9], dtype=int32)
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
Here is the link for this example: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
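Note that this doctest shows an older scikit-learn API: the n_values_ and feature_indices_ attributes were removed in later releases. A rough sketch of the same example for recent versions (assuming scikit-learn >= 1.2, which uses sparse_output; older versions use sparse instead):
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder(handle_unknown='ignore', sparse_output=False)
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
>>> enc.categories_          # the learned categories per feature
[array([0, 1]), array([0, 1, 2]), array([0, 1, 2, 3])]
>>> enc.transform([[0, 1, 1]])
array([[1., 0., 0., 1., 0., 0., 1., 0., 0.]])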

It is much easier to use Pandas for basic one-hot encoding. If you're looking for more options, you can use scikit-learn.
For basic one-hot encoding with Pandas you pass your data frame into the get_dummies function.
For example, if I have a dataframe called imdb_movies:
...and I want to one-hot encode the Rated column, I do this:
pd.get_dummies(imdb_movies.Rated)
This returns a new dataframe with a column for every "level" of rating that exists, along with either a 1 or 0 specifying the presence of that rating for a given observation.
Usually, we want this to be part of the original dataframe. In this case, we attach our new dummy-coded frame onto the original frame using "column-binding".
We can column-bind by using the Pandas concat function:
rated_dummies = pd.get_dummies(imdb_movies.Rated)
pd.concat([imdb_movies, rated_dummies], axis=1)
We can now run an analysis on our full dataframe.
SIMPLE UTILITY FUNCTION
I would recommend making yourself a utility function to do this quickly:
def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    return res
Usage:
encode_and_bind(imdb_movies, 'Rated')
Result:
Also, as per @pmalbu's comment, if you would like the function to remove the original feature_to_encode, then use this version:
def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    res = res.drop([feature_to_encode], axis=1)
    return res
You can encode multiple features at the same time as follows. Note that you must reassign the result on each pass, since encode_and_bind returns a new dataframe (a toy check follows the code):
features_to_encode = ['feature_1', 'feature_2', 'feature_3', 'feature_4']
for feature in features_to_encode:
    train_set = encode_and_bind(train_set, feature)
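As a sanity check, here is a toy run of that loop with invented column names, using the drop variant of the function above:
import pandas as pd

train_set = pd.DataFrame({'feature_1': ['x', 'y'], 'feature_2': ['p', 'q'], 'target': [0, 1]})
for feature in ['feature_1', 'feature_2']:
    train_set = encode_and_bind(train_set, feature)
print(train_set.columns.tolist())
# ['target', 'feature_1_x', 'feature_1_y', 'feature_2_p', 'feature_2_q']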

You can do it with numpy.eye and the array element selection mechanism:
import numpy as np

nb_classes = 6
data = [[2, 3, 4, 0]]

def indices_to_one_hot(data, nb_classes):
    """Convert an iterable of indices to one-hot encoded labels."""
    targets = np.array(data).reshape(-1)
    return np.eye(nb_classes)[targets]
The return value of indices_to_one_hot(data, nb_classes) is now
array([[[ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 1.,  0.,  0.,  0.,  0.,  0.]]])
The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]).
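A quick sketch of what .reshape(-1) buys you for column-vector style labels:
import numpy as np

nested = [[2], [3], [4], [0]]            # column-vector style labels
flat = np.array(nested).reshape(-1)      # array([2, 3, 4, 0])
print(np.eye(6)[flat].shape)             # (4, 6) instead of (4, 1, 6)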

One hot encoding with pandas is very easy:
def one_hot(df, cols):
    """
    :param df: pandas DataFrame
    :param cols: a list of columns to encode
    :return: a DataFrame with one-hot encoding
    """
    for each in cols:
        dummies = pd.get_dummies(df[each], prefix=each, drop_first=False)
        df = pd.concat([df, dummies], axis=1)
    return df
EDIT:
Another way to one-hot encode is to use sklearn's LabelBinarizer:
from sklearn.preprocessing import LabelBinarizer

label_binarizer = LabelBinarizer()
label_binarizer.fit(all_your_labels_list)  # needs to be global, or remembered, to use it later

def one_hot_encode(x):
    """
    One-hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    :param x: list of sample labels
    :return: numpy array of one-hot encoded labels
    """
    return label_binarizer.transform(x)
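A minimal usage sketch with invented labels (LabelBinarizer orders its classes by sorting, here cat < dog < mouse):
from sklearn.preprocessing import LabelBinarizer

label_binarizer = LabelBinarizer()
label_binarizer.fit(['dog', 'cat', 'mouse'])

print(one_hot_encode(['cat', 'dog']))
# [[1 0 0]
#  [0 1 0]]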

Firstly, the easiest way to one-hot encode: use sklearn.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Secondly, I don't think using pandas to one-hot encode is that simple (though I haven't confirmed this):
Creating dummy variables in pandas for python
Lastly, is it necessary for you to one-hot encode? One-hot encoding greatly increases the number of features (one new column per level), drastically increasing the run time of any classifier or anything else you are going to run, especially when each categorical feature has many levels. Instead you can do dummy coding.
Using dummy encoding usually works well, for much less run time and complexity. A wise prof once told me, 'Less is More'.
Here's the code for my custom encoding function if you want.
from sklearn.preprocessing import LabelEncoder

# Auto-encodes any dataframe column of type category or object.
def dummyEncode(df):
    columnsToEncode = list(df.select_dtypes(include=['category', 'object']))
    le = LabelEncoder()
    for feature in columnsToEncode:
        try:
            df[feature] = le.fit_transform(df[feature])
        except Exception:
            print('Error encoding ' + feature)
    return df
EDIT: A comparison to make this clearer:
One-hot encoding: converts n levels into n-1 columns (one level becomes the implicit baseline).
Index  Animal          Index  cat  mouse
  1    dog               1     0     0
  2    cat      -->      2     1     0
  3    mouse             3     0     1
You can see how this will explode your memory if you have many different types (or levels) in your categorical feature. Keep in mind, this is just ONE column.
Dummy Coding:
Index  Animal          Index  Animal
  1    dog                1      0
  2    cat       -->      2      1
  3    mouse              3      2
It converts the levels to numerical representations instead. This greatly saves feature space, at the cost of a bit of accuracy.
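In pandas, this kind of integer coding is one line via categorical codes (a small sketch, not from the original answer; note that codes are assigned in sorted category order, so the exact numbers may differ from the table above):
import pandas as pd

df = pd.DataFrame({'Animal': ['dog', 'cat', 'mouse']})
# .cat.codes assigns one integer per category (sorted order: cat=0, dog=1, mouse=2).
df['Animal'] = df['Animal'].astype('category').cat.codes
print(df['Animal'].tolist())  # [1, 0, 2]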

You can use the numpy.eye function.
import numpy as np

def one_hot_encode(x, n_classes):
    """
    One-hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    :param x: list of sample labels
    :return: numpy array of one-hot encoded labels
    """
    return np.eye(n_classes)[x]

def main():
    labels = [0, 1, 2, 3, 4, 3, 2, 1, 0]  # renamed from `list`, which would shadow the builtin
    n_classes = 5
    one_hot_list = one_hot_encode(labels, n_classes)
    print(one_hot_list)

if __name__ == "__main__":
    main()
Result
D:\Desktop>python test.py
[[ 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 1. 0. 0. 0. 0.]]

pandas has the built-in function get_dummies to get the one-hot encoding of a particular column (or columns).
A one-line approach to one-hot encoding:
df = pd.concat([df, pd.get_dummies(df['column name'], prefix='column name')], axis=1).drop(['column name'], axis=1)

Here is a solution using DictVectorizer and the Pandas DataFrame.to_dict('records') method.
>>> import pandas as pd
>>> X = pd.DataFrame({'income': [100000, 110000, 90000, 30000, 14000, 50000],
...                   'country': ['US', 'CAN', 'US', 'CAN', 'MEX', 'US'],
...                   'race': ['White', 'Black', 'Latino', 'White', 'White', 'Black']})
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer()
>>> qualitative_features = ['country','race']
>>> X_qual = v.fit_transform(X[qualitative_features].to_dict('records'))
>>> v.vocabulary_
{'country=CAN': 0,
 'country=MEX': 1,
 'country=US': 2,
 'race=Black': 3,
 'race=Latino': 4,
 'race=White': 5}
>>> X_qual.toarray()
array([[ 0.,  0.,  1.,  0.,  0.,  1.],
       [ 1.,  0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  1.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  1.],
       [ 0.,  1.,  0.,  0.,  0.,  1.],
       [ 0.,  0.,  1.,  1.,  0.,  0.]])
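The fitted vectorizer can then be reused on new records; to my knowledge, transform silently drops feature/value pairs it did not see during fit (a short sketch continuing the session above):
>>> new_rows = [{'country': 'US', 'race': 'Latino'}]
>>> v.transform(new_rows).toarray()
array([[ 0.,  0.,  1.,  0.,  1.,  0.]])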

One-hot encoding requires a bit more than converting the values to indicator variables. Typically, an ML process requires you to apply this coding several times, to validation or test data sets, and again when applying the model you constructed to newly observed data. You should store the mapping (transform) that was used to construct the model. A good solution would use DictVectorizer or LabelEncoder (followed by get_dummies). Here is a function that you can use:
def oneHotEncode2(df, le_dict=None):
    # Avoid a mutable default argument; an empty dict means "training mode".
    if le_dict is None:
        le_dict = {}
    if not le_dict:
        columnsToEncode = list(df.select_dtypes(include=['category', 'object']))
        train = True
    else:
        columnsToEncode = le_dict.keys()
        train = False
    for feature in columnsToEncode:
        if train:
            le_dict[feature] = LabelEncoder()
        try:
            if train:
                df[feature] = le_dict[feature].fit_transform(df[feature])
            else:
                df[feature] = le_dict[feature].transform(df[feature])
            df = pd.concat([df,
                            pd.get_dummies(df[feature]).rename(columns=lambda x: feature + '_' + str(x))],
                           axis=1)
            df = df.drop(feature, axis=1)
        except Exception:
            print('Error encoding ' + feature)
            df[feature] = df[feature].apply(pd.to_numeric, errors='coerce')
    return df, le_dict
This works on a pandas dataframe and, for each column of the dataframe, it creates a mapping and returns it. So you would call it like this:
train_data, le_dict = oneHotEncode2(train_data)
Then on the test data, the call is made by passing the dictionary returned back from training:
test_data, _ = oneHotEncode2(test_data, le_dict)
An equivalent method is to use DictVectorizer. A related post on the same topic is on my blog. I mention it here since it provides some reasoning behind this approach over simply using get_dummies (disclosure: this is my own blog).

You can pass the data to the CatBoost classifier without encoding. CatBoost handles categorical variables itself by performing one-hot encoding and expanding mean target encoding.
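A minimal sketch of that idea, with invented data (assumes the catboost package is installed):
from catboost import CatBoostClassifier

X = [['US', 'Red'], ['CAN', 'Blue'], ['US', 'Blue'], ['CAN', 'Red']]
y = [1, 0, 1, 0]
model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(X, y, cat_features=[0, 1])  # mark both columns as categorical; no manual encoding needed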

You can do the following as well. Note that for the below, you don't have to use pd.concat.
import pandas as pd

# Initialise data of lists.
data = {'Color': ['Red', 'Yellow', 'Red', 'Yellow'], 'Length': [20.1, 21.1, 19.1, 18.1],
        'Group': [1, 2, 1, 2]}
# Create the DataFrame.
df = pd.DataFrame(data)

for _c in df.select_dtypes(include=['object']).columns:
    print(_c)
    df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed
You can also change explicit columns to categorical. For example, here I am changing the Color and Group columns:
import pandas as pd

# Initialise data of lists.
data = {'Color': ['Red', 'Yellow', 'Red', 'Yellow'], 'Length': [20.1, 21.1, 19.1, 18.1],
        'Group': [1, 2, 1, 2]}
# Create the DataFrame.
df = pd.DataFrame(data)

columns_to_change = list(df.select_dtypes(include=['object']).columns)
columns_to_change.append('Group')
for _c in columns_to_change:
    print(_c)
    df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed

I know I'm late to this party, but the simplest way to one-hot encode a dataframe in an automated way is to use this function:
def hot_encode(df):
    obj_df = df.select_dtypes(include=['object'])
    return pd.get_dummies(df, columns=obj_df.columns).values

This works for me, though note that factorize returns integer codes (label encoding) rather than a full one-hot indicator matrix:
pandas.factorize(['B', 'C', 'D', 'B'])[0]
Output:
[0, 1, 2, 0]
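If you do want the indicator matrix, the codes combine naturally with numpy.eye (a small sketch, not part of the original answer):
import numpy as np
import pandas as pd

codes, uniques = pd.factorize(['B', 'C', 'D', 'B'])
print(np.eye(len(uniques))[codes])
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]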

I used this in my acoustic model; perhaps it helps in your model too.
import numpy as np

def one_hot_encoding(x, n_out):
    x = x.astype(int)
    shape = x.shape
    x = x.flatten()
    N = len(x)
    x_categ = np.zeros((N, n_out))
    x_categ[np.arange(N), x] = 1
    return x_categ.reshape(shape + (n_out,))

Short Answer
Here is a function to do one-hot-encoding without using numpy, pandas, or other packages. It takes a list of integers, booleans, or strings (and perhaps other types too).
import typing

def one_hot_encode(items: list) -> typing.List[list]:
    results = []
    # Find the unique items (duplicate items must share the same encoding).
    unique_items = list(set(items))
    # Sort the unique items.
    sorted_items = sorted(unique_items)
    # Find how long the list for each item should be.
    max_index = len(unique_items)
    for item in items:
        # Create a list of zeros of the appropriate length.
        one_hot_encoded_result = [0 for i in range(0, max_index)]
        # Find the index of the item.
        one_hot_index = sorted_items.index(item)
        # Change the zero at that index to a one.
        one_hot_encoded_result[one_hot_index] = 1
        # Add the result.
        results.append(one_hot_encoded_result)
    return results
Example:
one_hot_encode([2, 1, 1, 2, 5, 3])
# [[0, 1, 0, 0],
# [1, 0, 0, 0],
# [1, 0, 0, 0],
# [0, 1, 0, 0],
# [0, 0, 0, 1],
# [0, 0, 1, 0]]
one_hot_encode([True, False, True])
# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
Long(er) Answer
I know there are already a lot of answers to this question, but I noticed two things. First, most of the answers use packages like numpy and/or pandas. And this is a good thing. If you are writing production code, you should probably be using the robust, fast algorithms provided in the numpy/pandas packages. But, for the sake of education, I think someone should provide an answer with a transparent algorithm, not just an implementation of someone else's algorithm. Second, I noticed that many of the answers do not provide a robust implementation of one-hot encoding, because they fail one of the requirements below. Here are some of the requirements (as I see them) for a useful, accurate, and robust one-hot encoding function:
A one-hot encoding function must:
handle lists of various types (e.g. integers, strings, floats, etc.) as input
handle an input list with duplicates
return a list of lists corresponding to the inputs (in the same order)
return a list of lists where each list is as short as possible
I tested many of the answers to this question and most of them fail on one of the requirements above.

Try this:
!pip install category_encoders
import category_encoders as ce

categorical_columns = [...the list of names of the columns you want to one-hot-encode ...]
encoder = ce.OneHotEncoder(cols=categorical_columns, use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)
df_train_encoded.head()
The resulting dataframe df_train_encoded is the same as the original, but the categorical features are now replaced with their one-hot-encoded versions.
More information on category_encoders here.

To add to the other answers, let me show how I did it with a Python 2 function using NumPy:
import numpy as np

def one_hot(y_):
    # Function to encode output labels from number indexes,
    # e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
    y_ = y_.reshape(len(y_))
    n_values = np.max(y_) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]  # returns floats
The line n_values = np.max(y_) + 1 could be hard-coded so that the right number of output neurons is always used, for example when you work with mini-batches (where a batch might not contain every class).
Demo project/tutorial where this function has been used:
https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

It can and should be as easy as:
class OneHotEncoder:
    def __init__(self, optionKeys):
        length = len(optionKeys)
        self.__dict__ = {optionKeys[j]: [0 if i != j else 1 for i in range(length)] for j in range(length)}
Usage:
ohe = OneHotEncoder(["A", "B", "C", "D"])
print(ohe.A)
print(ohe.D)
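With that definition, the two print calls produce:
[1, 0, 0, 0]
[0, 0, 0, 1]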

Expanding on @Martin Thoma's answer:
import numpy as np

def one_hot_encode(y):
    """Convert an iterable of indices to one-hot encoded labels."""
    # Sometimes an unflattened vector is passed, e.g. of shape (118, 1); the function would
    # then create a tensor of shape (118, 2, 1). flatten() removes this issue.
    y = y.flatten()
    # Get the number of unique classes.
    nb_classes = len(np.unique(y))
    # Standardise the class labels to 0..nb_classes-1. If labels such as (4, 7, 9) were
    # passed directly, np.eye(nb_classes)[4] (or 7, or 9) would throw an out-of-index
    # error, and float labels such as 1.0 cannot be used as integer indices at all.
    # E.g. standardised_labels = {4: 0, 7: 1, 9: 2}.
    standardised_labels = dict(zip(np.unique(y), np.arange(nb_classes)))
    # Map the dictionary values onto the array.
    targets = np.vectorize(standardised_labels.get)(y)
    return np.eye(nb_classes)[targets]
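A quick check with non-contiguous labels (invented example):
print(one_hot_encode(np.array([4, 7, 9, 4])))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]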

Let's assume that out of 10 variables, you have 3 categorical variables in your data frame, named cname1, cname2, and cname3.
Then the following code will automatically create one-hot encoded variables in the new dataframe:
import category_encoders as ce

encoder_var = ce.OneHotEncoder(cols=['cname1', 'cname2', 'cname3'], handle_unknown='return_nan', return_df=True, use_cat_names=True)
new_df = encoder_var.fit_transform(old_df)

A simple example, using vectorize in NumPy and apply in pandas (since the feature here has only two levels, a single 0/1 indicator column is enough):
import numpy as np

a = np.array(['male', 'female', 'female', 'male'])

# Define the function.
onehot_function = lambda x: 1.0 if (x == 'male') else 0.0
onehot_a = np.vectorize(onehot_function)(a)
print(onehot_a)
# [1. 0. 0. 1.]
# -----------------------------------------
import pandas as pd

s = pd.Series(['male', 'female', 'female', 'male'])
onehot_s = s.apply(onehot_function)
print(onehot_s)
# 0    1.0
# 1    0.0
# 2    0.0
# 3    1.0
# dtype: float64

Here I tried this approach:
import numpy as np

# Converting to one-hot.
def one_hot_encoder(value, datal):
    datal[value] = 1
    return datal

def _one_hot_values(labels_data):
    encoded = [0] * len(labels_data)
    for j, i in enumerate(labels_data):
        max_value = [0] * (np.max(labels_data) + 1)
        encoded[j] = one_hot_encoder(i, max_value)
    return np.array(encoded)

Related

Adding a column in a pandas df from an if in another column [duplicate]

I have a machine learning classification problem with 80% categorical variables. Must I use one hot encoding if I want to use some classifier for the classification? Can i pass the data to a classifier without the encoding?
I am trying to do the following for feature selection:
I read the train file:
num_rows_to_read = 10000
train_small = pd.read_csv("../../dataset/train.csv", nrows=num_rows_to_read)
I change the type of the categorical features to 'category':
non_categorial_features = ['orig_destination_distance',
'srch_adults_cnt',
'srch_children_cnt',
'srch_rm_cnt',
'cnt']
for categorical_feature in list(train_small.columns):
if categorical_feature not in non_categorial_features:
train_small[categorical_feature] = train_small[categorical_feature].astype('category')
I use one hot encoding:
train_small_with_dummies = pd.get_dummies(train_small, sparse=True)
The problem is that the 3'rd part often get stuck, although I am using a strong machine.
Thus, without the one hot encoding I can't do any feature selection, for determining the importance of the features.
What do you recommend?
Approach 1: You can use pandas' pd.get_dummies.
Example 1:
import pandas as pd
s = pd.Series(list('abca'))
pd.get_dummies(s)
Out[]:
a b c
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 0.0 0.0 1.0
3 1.0 0.0 0.0
Example 2:
The following will transform a given column into one hot. Use prefix to have multiple dummies.
import pandas as pd
df = pd.DataFrame({
'A':['a','b','a'],
'B':['b','a','c']
})
df
Out[]:
A B
0 a b
1 b a
2 a c
# Get one hot encoding of columns B
one_hot = pd.get_dummies(df['B'])
# Drop column B as it is now encoded
df = df.drop('B',axis = 1)
# Join the encoded df
df = df.join(one_hot)
df
Out[]:
A a b c
0 a 0 1 0
1 b 1 0 0
2 a 0 0 1
Approach 2: Use Scikit-learn
Using a OneHotEncoder has the advantage of being able to fit on some training data and then transform on some other data using the same instance. We also have handle_unknown to further control what the encoder does with unseen data.
Given a dataset with three features and four samples, we let the encoder find the maximum value per feature and transform the data to a binary one-hot encoding.
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9], dtype=int32)
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
Here is the link for this example: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Much easier to use Pandas for basic one-hot encoding. If you're looking for more options you can use scikit-learn.
For basic one-hot encoding with Pandas you pass your data frame into the get_dummies function.
For example, if I have a dataframe called imdb_movies:
...and I want to one-hot encode the Rated column, I do this:
pd.get_dummies(imdb_movies.Rated)
This returns a new dataframe with a column for every "level" of rating that exists, along with either a 1 or 0 specifying the presence of that rating for a given observation.
Usually, we want this to be part of the original dataframe. In this case, we attach our new dummy coded frame onto the original frame using "column-binding.
We can column-bind by using Pandas concat function:
rated_dummies = pd.get_dummies(imdb_movies.Rated)
pd.concat([imdb_movies, rated_dummies], axis=1)
We can now run an analysis on our full dataframe.
SIMPLE UTILITY FUNCTION
I would recommend making yourself a utility function to do this quickly:
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
return(res)
Usage:
encode_and_bind(imdb_movies, 'Rated')
Result:
Also, as per #pmalbu comment, if you would like the function to remove the original feature_to_encode then use this version:
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
You can encode multiple features at the same time as follows:
features_to_encode = ['feature_1', 'feature_2', 'feature_3',
'feature_4']
for feature in features_to_encode:
res = encode_and_bind(train_set, feature)
You can do it with numpy.eye and a using the array element selection mechanism:
import numpy as np
nb_classes = 6
data = [[2, 3, 4, 0]]
def indices_to_one_hot(data, nb_classes):
"""Convert an iterable of indices to one-hot encoded labels."""
targets = np.array(data).reshape(-1)
return np.eye(nb_classes)[targets]
The the return value of indices_to_one_hot(nb_classes, data) is now
array([[[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0.]]])
The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]).
One hot encoding with pandas is very easy:
def one_hot(df, cols):
"""
#param df pandas DataFrame
#param cols a list of columns to encode
#return a DataFrame with one-hot encoding
"""
for each in cols:
dummies = pd.get_dummies(df[each], prefix=each, drop_first=False)
df = pd.concat([df, dummies], axis=1)
return df
EDIT:
Another way to one_hot using sklearn's LabelBinarizer :
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
label_binarizer.fit(all_your_labels_list) # need to be global or remembered to use it later
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return label_binarizer.transform(x)
Firstly, easiest way to one hot encode: use Sklearn.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Secondly, I don't think using pandas to one hot encode is that simple (unconfirmed though)
Creating dummy variables in pandas for python
Lastly, is it necessary for you to one hot encode? One hot encoding exponentially increases the number of features, drastically increasing the run time of any classifier or anything else you are going to run. Especially when each categorical feature has many levels. Instead you can do dummy coding.
Using dummy encoding usually works well, for much less run time and complexity. A wise prof once told me, 'Less is More'.
Here's the code for my custom encoding function if you want.
from sklearn.preprocessing import LabelEncoder
#Auto encodes any dataframe column of type category or object.
def dummyEncode(df):
columnsToEncode = list(df.select_dtypes(include=['category','object']))
le = LabelEncoder()
for feature in columnsToEncode:
try:
df[feature] = le.fit_transform(df[feature])
except:
print('Error encoding '+feature)
return df
EDIT: Comparison to be clearer:
One-hot encoding: convert n levels to n-1 columns.
Index Animal Index cat mouse
1 dog 1 0 0
2 cat --> 2 1 0
3 mouse 3 0 1
You can see how this will explode your memory if you have many different types (or levels) in your categorical feature. Keep in mind, this is just ONE column.
Dummy Coding:
Index Animal Index Animal
1 dog 1 0
2 cat --> 2 1
3 mouse 3 2
Convert to numerical representations instead. Greatly saves feature space, at the cost of a bit of accuracy.
You can use numpy.eye function.
import numpy as np
def one_hot_encode(x, n_classes):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return np.eye(n_classes)[x]
def main():
list = [0,1,2,3,4,3,2,1,0]
n_classes = 5
one_hot_list = one_hot_encode(list, n_classes)
print(one_hot_list)
if __name__ == "__main__":
main()
Result
D:\Desktop>python test.py
[[ 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 1. 0. 0. 0. 0.]]
pandas as has inbuilt function "get_dummies" to get one hot encoding of that particular column/s.
one line code for one-hot-encoding:
df=pd.concat([df,pd.get_dummies(df['column name'],prefix='column name')],axis=1).drop(['column name'],axis=1)
Here is a solution using DictVectorizer and the Pandas DataFrame.to_dict('records') method.
>>> import pandas as pd
>>> X = pd.DataFrame({'income': [100000,110000,90000,30000,14000,50000],
'country':['US', 'CAN', 'US', 'CAN', 'MEX', 'US'],
'race':['White', 'Black', 'Latino', 'White', 'White', 'Black']
})
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer()
>>> qualitative_features = ['country','race']
>>> X_qual = v.fit_transform(X[qualitative_features].to_dict('records'))
>>> v.vocabulary_
{'country=CAN': 0,
'country=MEX': 1,
'country=US': 2,
'race=Black': 3,
'race=Latino': 4,
'race=White': 5}
>>> X_qual.toarray()
array([[ 0., 0., 1., 0., 0., 1.],
[ 1., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 1.],
[ 0., 1., 0., 0., 0., 1.],
[ 0., 0., 1., 1., 0., 0.]])
One-hot encoding requires bit more than converting the values to indicator variables. Typically ML process requires you to apply this coding several times to validation or test data sets and applying the model you construct to real-time observed data. You should store the mapping (transform) that was used to construct the model. A good solution would use the DictVectorizer or LabelEncoder (followed by get_dummies. Here is a function that you can use:
def oneHotEncode2(df, le_dict = {}):
if not le_dict:
columnsToEncode = list(df.select_dtypes(include=['category','object']))
train = True;
else:
columnsToEncode = le_dict.keys()
train = False;
for feature in columnsToEncode:
if train:
le_dict[feature] = LabelEncoder()
try:
if train:
df[feature] = le_dict[feature].fit_transform(df[feature])
else:
df[feature] = le_dict[feature].transform(df[feature])
df = pd.concat([df,
pd.get_dummies(df[feature]).rename(columns=lambda x: feature + '_' + str(x))], axis=1)
df = df.drop(feature, axis=1)
except:
print('Error encoding '+feature)
#df[feature] = df[feature].convert_objects(convert_numeric='force')
df[feature] = df[feature].apply(pd.to_numeric, errors='coerce')
return (df, le_dict)
This works on a pandas dataframe and for each column of the dataframe it creates and returns a mapping back. So you would call it like this:
train_data, le_dict = oneHotEncode2(train_data)
Then on the test data, the call is made by passing the dictionary returned back from training:
test_data, _ = oneHotEncode2(test_data, le_dict)
An equivalent method is to use DictVectorizer. A related post on the same is on my blog. I mention it here since it provides some reasoning behind this approach over simply using get_dummies post (disclosure: this is my own blog).
You can pass the data to catboost classifier without encoding. Catboost handles categorical variables itself by performing one-hot and target expanding mean encoding.
You can do the following as well. Note for the below you don't have to use pd.concat.
import pandas as pd
# intialise data of lists.
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
'Group':[1,2,1,2]}
# Create DataFrame
df = pd.DataFrame(data)
for _c in df.select_dtypes(include=['object']).columns:
print(_c)
df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed
You can also change explicit columns to categorical. For example, here I am changing the Color and Group
import pandas as pd
# intialise data of lists.
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
'Group':[1,2,1,2]}
# Create DataFrame
df = pd.DataFrame(data)
columns_to_change = list(df.select_dtypes(include=['object']).columns)
columns_to_change.append('Group')
for _c in columns_to_change:
print(_c)
df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed
I know I'm late to this party, but the simplest way to hot encode a dataframe in an automated way is to use this function:
def hot_encode(df):
obj_df = df.select_dtypes(include=['object'])
return pd.get_dummies(df, columns=obj_df.columns).values
This works for me:
pandas.factorize( ['B', 'C', 'D', 'B'] )[0]
Output:
[0, 1, 2, 0]
I used this in my acoustic model:
probably this helps in ur model.
def one_hot_encoding(x, n_out):
x = x.astype(int)
shape = x.shape
x = x.flatten()
N = len(x)
x_categ = np.zeros((N,n_out))
x_categ[np.arange(N), x] = 1
return x_categ.reshape((shape)+(n_out,))
Short Answer
Here is a function to do one-hot-encoding without using numpy, pandas, or other packages. It takes a list of integers, booleans, or strings (and perhaps other types too).
import typing
def one_hot_encode(items: list) -> typing.List[list]:
results = []
# find the unique items (we want to unique items b/c duplicate items will have the same encoding)
unique_items = list(set(items))
# sort the unique items
sorted_items = sorted(unique_items)
# find how long the list of each item should be
max_index = len(unique_items)
for item in items:
# create a list of zeros the appropriate length
one_hot_encoded_result = [0 for i in range(0, max_index)]
# find the index of the item
one_hot_index = sorted_items.index(item)
# change the zero at the index from the previous line to a one
one_hot_encoded_result[one_hot_index] = 1
# add the result
results.append(one_hot_encoded_result)
return results
Example:
one_hot_encode([2, 1, 1, 2, 5, 3])
# [[0, 1, 0, 0],
# [1, 0, 0, 0],
# [1, 0, 0, 0],
# [0, 1, 0, 0],
# [0, 0, 0, 1],
# [0, 0, 1, 0]]
one_hot_encode([True, False, True])
# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
Long(er) Answer
I know there are already a lot of answers to this question, but I noticed two things. First, most of the answers use packages like numpy and/or pandas. And this is a good thing. If you are writing production code, you should probably be using robust, fast algorithms like those provided in the numpy/pandas packages. But, for the sake of education, I think someone should provide an answer which has a transparent algorithm and not just an implementation of someone else's algorithm. Second, I noticed that many of the answers do not provide a robust implementation of one-hot encoding because they do not meet one of the requirements below. Below are some of the requirements (as I see them) for a useful, accurate, and robust one-hot encoding function:
A one-hot encoding function must:
handle list of various types (e.g. integers, strings, floats, etc.) as input
handle an input list with duplicates
return a list of lists corresponding (in the same order as) to the inputs
return a list of lists where each list is as short as possible
I tested many of the answers to this question and most of them fail on one of the requirements above.
Try this:
!pip install category_encoders
import category_encoders as ce
categorical_columns = [...the list of names of the columns you want to one-hot-encode ...]
encoder = ce.OneHotEncoder(cols=categorical_columns, use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)
df_encoded.head()
The resulting dataframe df_train_encoded is the same as the original, but the categorical features are now replaced with their one-hot-encoded versions.
More information on category_encoders here.
To add to other questions, let me provide how I did it with a Python 2.0 function using Numpy:
def one_hot(y_):
# Function to encode output labels from number indexes
# e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
n_values = np.max(y_) + 1
return np.eye(n_values)[np.array(y_, dtype=np.int32)] # Returns FLOATS
The line n_values = np.max(y_) + 1 could be hard-coded for you to use the good number of neurons in case you use mini-batches for example.
Demo project/tutorial where this function has been used:
https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
It can and it should be easy as :
class OneHotEncoder:
def __init__(self,optionKeys):
length=len(optionKeys)
self.__dict__={optionKeys[j]:[0 if i!=j else 1 for i in range(length)] for j in range(length)}
Usage :
ohe=OneHotEncoder(["A","B","C","D"])
print(ohe.A)
print(ohe.D)
Expanding #Martin Thoma's answer
def one_hot_encode(y):
"""Convert an iterable of indices to one-hot encoded labels."""
y = y.flatten() # Sometimes not flattened vector is passed e.g (118,1) in these cases
# the function ends up creating a tensor e.g. (118, 2, 1). flatten removes this issue
nb_classes = len(np.unique(y)) # get the number of unique classes
standardised_labels = dict(zip(np.unique(y), np.arange(nb_classes))) # get the class labels as a dictionary
# which then is standardised. E.g imagine class labels are (4,7,9) if a vector of y containing 4,7 and 9 is
# directly passed then np.eye(nb_classes)[4] or 7,9 throws an out of index error.
# standardised labels fixes this issue by returning a dictionary;
# standardised_labels = {4:0, 7:1, 9:2}. The values of the dictionary are mapped to keys in y array.
# standardised_labels also removes the error that is raised if the labels are floats. E.g. 1.0; element
# cannot be called by an integer index e.g y[1.0] - throws an index error.
targets = np.vectorize(standardised_labels.get)(y) # map the dictionary values to array.
return np.eye(nb_classes)[targets]
Lets assume out of 10 variables, you have 3 categorical variables in your data frame named as cname1, cname2 and cname3.
Then following code will automatically create one hot encoded variable in the new dataframe.
import category_encoders as ce
encoder_var=ce.OneHotEncoder(cols=['cname1','cname2','cname3'],handle_unknown='return_nan',return_df=True,use_cat_names=True)
new_df = encoder_var.fit_transform(old_df)
A simple example using vectorize in numpy and apply example in pandas:
import numpy as np
a = np.array(['male','female','female','male'])
#define function
onehot_function = lambda x: 1.0 if (x=='male') else 0.0
onehot_a = np.vectorize(onehot_function)(a)
print(onehot_a)
# [1., 0., 0., 1.]
# -----------------------------------------
import pandas as pd
s = pd.Series(['male','female','female','male'])
onehot_s = s.apply(onehot_function)
print(onehot_s)
# 0 1.0
# 1 0.0
# 2 0.0
# 3 1.0
# dtype: float64
Here i tried with this approach :
import numpy as np
#converting to one_hot
def one_hot_encoder(value, datal):
datal[value] = 1
return datal
def _one_hot_values(labels_data):
encoded = [0] * len(labels_data)
for j, i in enumerate(labels_data):
max_value = [0] * (np.max(labels_data) + 1)
encoded[j] = one_hot_encoder(i, max_value)
return np.array(encoded)

Making New Table structure using pandas [duplicate]

I have a machine learning classification problem with 80% categorical variables. Must I use one hot encoding if I want to use some classifier for the classification? Can i pass the data to a classifier without the encoding?
I am trying to do the following for feature selection:
I read the train file:
num_rows_to_read = 10000
train_small = pd.read_csv("../../dataset/train.csv", nrows=num_rows_to_read)
I change the type of the categorical features to 'category':
non_categorial_features = ['orig_destination_distance',
'srch_adults_cnt',
'srch_children_cnt',
'srch_rm_cnt',
'cnt']
for categorical_feature in list(train_small.columns):
if categorical_feature not in non_categorial_features:
train_small[categorical_feature] = train_small[categorical_feature].astype('category')
I use one hot encoding:
train_small_with_dummies = pd.get_dummies(train_small, sparse=True)
The problem is that the 3'rd part often get stuck, although I am using a strong machine.
Thus, without the one hot encoding I can't do any feature selection, for determining the importance of the features.
What do you recommend?
Approach 1: You can use pandas' pd.get_dummies.
Example 1:
import pandas as pd
s = pd.Series(list('abca'))
pd.get_dummies(s)
Out[]:
a b c
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 0.0 0.0 1.0
3 1.0 0.0 0.0
Example 2:
The following will transform a given column into one hot. Use prefix to have multiple dummies.
import pandas as pd
df = pd.DataFrame({
'A':['a','b','a'],
'B':['b','a','c']
})
df
Out[]:
A B
0 a b
1 b a
2 a c
# Get one hot encoding of columns B
one_hot = pd.get_dummies(df['B'])
# Drop column B as it is now encoded
df = df.drop('B',axis = 1)
# Join the encoded df
df = df.join(one_hot)
df
Out[]:
A a b c
0 a 0 1 0
1 b 1 0 0
2 a 0 0 1
Approach 2: Use Scikit-learn
Using a OneHotEncoder has the advantage of being able to fit on some training data and then transform on some other data using the same instance. We also have handle_unknown to further control what the encoder does with unseen data.
Given a dataset with three features and four samples, we let the encoder find the maximum value per feature and transform the data to a binary one-hot encoding.
>>> from sklearn.preprocessing import OneHotEncoder
>>> enc = OneHotEncoder()
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
handle_unknown='error', n_values='auto', sparse=True)
>>> enc.n_values_
array([2, 3, 4])
>>> enc.feature_indices_
array([0, 2, 5, 9], dtype=int32)
>>> enc.transform([[0, 1, 1]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
Here is the link for this example: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Much easier to use Pandas for basic one-hot encoding. If you're looking for more options you can use scikit-learn.
For basic one-hot encoding with Pandas you pass your data frame into the get_dummies function.
For example, if I have a dataframe called imdb_movies:
...and I want to one-hot encode the Rated column, I do this:
pd.get_dummies(imdb_movies.Rated)
This returns a new dataframe with a column for every "level" of rating that exists, along with either a 1 or 0 specifying the presence of that rating for a given observation.
Usually, we want this to be part of the original dataframe. In this case, we attach our new dummy coded frame onto the original frame using "column-binding.
We can column-bind by using Pandas concat function:
rated_dummies = pd.get_dummies(imdb_movies.Rated)
pd.concat([imdb_movies, rated_dummies], axis=1)
We can now run an analysis on our full dataframe.
SIMPLE UTILITY FUNCTION
I would recommend making yourself a utility function to do this quickly:
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
return(res)
Usage:
encode_and_bind(imdb_movies, 'Rated')
Result:
Also, as per #pmalbu comment, if you would like the function to remove the original feature_to_encode then use this version:
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
You can encode multiple features at the same time as follows:
features_to_encode = ['feature_1', 'feature_2', 'feature_3',
'feature_4']
for feature in features_to_encode:
res = encode_and_bind(train_set, feature)
You can do it with numpy.eye and a using the array element selection mechanism:
import numpy as np
nb_classes = 6
data = [[2, 3, 4, 0]]
def indices_to_one_hot(data, nb_classes):
"""Convert an iterable of indices to one-hot encoded labels."""
targets = np.array(data).reshape(-1)
return np.eye(nb_classes)[targets]
The the return value of indices_to_one_hot(nb_classes, data) is now
array([[[ 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0.]]])
The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]).
One hot encoding with pandas is very easy:
def one_hot(df, cols):
"""
#param df pandas DataFrame
#param cols a list of columns to encode
#return a DataFrame with one-hot encoding
"""
for each in cols:
dummies = pd.get_dummies(df[each], prefix=each, drop_first=False)
df = pd.concat([df, dummies], axis=1)
return df
EDIT:
Another way to one_hot using sklearn's LabelBinarizer :
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
label_binarizer.fit(all_your_labels_list) # need to be global or remembered to use it later
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return label_binarizer.transform(x)
Firstly, easiest way to one hot encode: use Sklearn.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Secondly, I don't think using pandas to one hot encode is that simple (unconfirmed though)
Creating dummy variables in pandas for python
Lastly, is it necessary for you to one hot encode? One hot encoding exponentially increases the number of features, drastically increasing the run time of any classifier or anything else you are going to run. Especially when each categorical feature has many levels. Instead you can do dummy coding.
Using dummy encoding usually works well, for much less run time and complexity. A wise prof once told me, 'Less is More'.
Here's the code for my custom encoding function if you want.
from sklearn.preprocessing import LabelEncoder
#Auto encodes any dataframe column of type category or object.
def dummyEncode(df):
columnsToEncode = list(df.select_dtypes(include=['category','object']))
le = LabelEncoder()
for feature in columnsToEncode:
try:
df[feature] = le.fit_transform(df[feature])
except:
print('Error encoding '+feature)
return df
EDIT: Comparison to be clearer:
One-hot encoding: convert n levels to n-1 columns.
Index Animal Index cat mouse
1 dog 1 0 0
2 cat --> 2 1 0
3 mouse 3 0 1
You can see how this will explode your memory if you have many different types (or levels) in your categorical feature. Keep in mind, this is just ONE column.
Dummy Coding:
Index Animal Index Animal
1 dog 1 0
2 cat --> 2 1
3 mouse 3 2
Convert to numerical representations instead. Greatly saves feature space, at the cost of a bit of accuracy.
You can use numpy.eye function.
import numpy as np
def one_hot_encode(x, n_classes):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return np.eye(n_classes)[x]
def main():
list = [0,1,2,3,4,3,2,1,0]
n_classes = 5
one_hot_list = one_hot_encode(list, n_classes)
print(one_hot_list)
if __name__ == "__main__":
main()
Result
D:\Desktop>python test.py
[[ 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 1. 0. 0. 0. 0.]]
pandas as has inbuilt function "get_dummies" to get one hot encoding of that particular column/s.
one line code for one-hot-encoding:
df=pd.concat([df,pd.get_dummies(df['column name'],prefix='column name')],axis=1).drop(['column name'],axis=1)
Here is a solution using DictVectorizer and the Pandas DataFrame.to_dict('records') method.
>>> import pandas as pd
>>> X = pd.DataFrame({'income': [100000,110000,90000,30000,14000,50000],
'country':['US', 'CAN', 'US', 'CAN', 'MEX', 'US'],
'race':['White', 'Black', 'Latino', 'White', 'White', 'Black']
})
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer()
>>> qualitative_features = ['country','race']
>>> X_qual = v.fit_transform(X[qualitative_features].to_dict('records'))
>>> v.vocabulary_
{'country=CAN': 0,
'country=MEX': 1,
'country=US': 2,
'race=Black': 3,
'race=Latino': 4,
'race=White': 5}
>>> X_qual.toarray()
array([[ 0., 0., 1., 0., 0., 1.],
[ 1., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 1.],
[ 0., 1., 0., 0., 0., 1.],
[ 0., 0., 1., 1., 0., 0.]])
One-hot encoding requires bit more than converting the values to indicator variables. Typically ML process requires you to apply this coding several times to validation or test data sets and applying the model you construct to real-time observed data. You should store the mapping (transform) that was used to construct the model. A good solution would use the DictVectorizer or LabelEncoder (followed by get_dummies. Here is a function that you can use:
def oneHotEncode2(df, le_dict = {}):
if not le_dict:
columnsToEncode = list(df.select_dtypes(include=['category','object']))
train = True;
else:
columnsToEncode = le_dict.keys()
train = False;
for feature in columnsToEncode:
if train:
le_dict[feature] = LabelEncoder()
try:
if train:
df[feature] = le_dict[feature].fit_transform(df[feature])
else:
df[feature] = le_dict[feature].transform(df[feature])
df = pd.concat([df,
pd.get_dummies(df[feature]).rename(columns=lambda x: feature + '_' + str(x))], axis=1)
df = df.drop(feature, axis=1)
except:
print('Error encoding '+feature)
#df[feature] = df[feature].convert_objects(convert_numeric='force')
df[feature] = df[feature].apply(pd.to_numeric, errors='coerce')
return (df, le_dict)
This works on a pandas dataframe and for each column of the dataframe it creates and returns a mapping back. So you would call it like this:
train_data, le_dict = oneHotEncode2(train_data)
Then on the test data, the call is made by passing the dictionary returned back from training:
test_data, _ = oneHotEncode2(test_data, le_dict)
An equivalent method is to use DictVectorizer. A related post on the same is on my blog. I mention it here since it provides some reasoning behind this approach over simply using get_dummies post (disclosure: this is my own blog).
You can pass the data to catboost classifier without encoding. Catboost handles categorical variables itself by performing one-hot and target expanding mean encoding.
You can do the following as well. Note for the below you don't have to use pd.concat.
import pandas as pd
# intialise data of lists.
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
'Group':[1,2,1,2]}
# Create DataFrame
df = pd.DataFrame(data)
for _c in df.select_dtypes(include=['object']).columns:
print(_c)
df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed
You can also change explicit columns to categorical. For example, here I am changing the Color and Group
import pandas as pd
# intialise data of lists.
data = {'Color':['Red', 'Yellow', 'Red', 'Yellow'], 'Length':[20.1, 21.1, 19.1, 18.1],
'Group':[1,2,1,2]}
# Create DataFrame
df = pd.DataFrame(data)
columns_to_change = list(df.select_dtypes(include=['object']).columns)
columns_to_change.append('Group')
for _c in columns_to_change:
print(_c)
df[_c] = pd.Categorical(df[_c])
df_transformed = pd.get_dummies(df)
df_transformed
I know I'm late to this party, but the simplest way to hot encode a dataframe in an automated way is to use this function:
def hot_encode(df):
obj_df = df.select_dtypes(include=['object'])
return pd.get_dummies(df, columns=obj_df.columns).values
This works for me:
pandas.factorize( ['B', 'C', 'D', 'B'] )[0]
Output:
[0, 1, 2, 0]
I used this in my acoustic model:
probably this helps in ur model.
def one_hot_encoding(x, n_out):
x = x.astype(int)
shape = x.shape
x = x.flatten()
N = len(x)
x_categ = np.zeros((N,n_out))
x_categ[np.arange(N), x] = 1
return x_categ.reshape((shape)+(n_out,))
Short Answer
Here is a function to do one-hot-encoding without using numpy, pandas, or other packages. It takes a list of integers, booleans, or strings (and perhaps other types too).
import typing
def one_hot_encode(items: list) -> typing.List[list]:
results = []
# find the unique items (we want to unique items b/c duplicate items will have the same encoding)
unique_items = list(set(items))
# sort the unique items
sorted_items = sorted(unique_items)
# find how long the list of each item should be
max_index = len(unique_items)
for item in items:
# create a list of zeros the appropriate length
one_hot_encoded_result = [0 for i in range(0, max_index)]
# find the index of the item
one_hot_index = sorted_items.index(item)
# change the zero at the index from the previous line to a one
one_hot_encoded_result[one_hot_index] = 1
# add the result
results.append(one_hot_encoded_result)
return results
Example:
one_hot_encode([2, 1, 1, 2, 5, 3])
# [[0, 1, 0, 0],
# [1, 0, 0, 0],
# [1, 0, 0, 0],
# [0, 1, 0, 0],
# [0, 0, 0, 1],
# [0, 0, 1, 0]]
one_hot_encode([True, False, True])
# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
Long(er) Answer
I know there are already a lot of answers to this question, but I noticed two things. First, most of the answers use packages like numpy and/or pandas. And this is a good thing. If you are writing production code, you should probably be using robust, fast algorithms like those provided in the numpy/pandas packages. But, for the sake of education, I think someone should provide an answer which has a transparent algorithm and not just an implementation of someone else's algorithm. Second, I noticed that many of the answers do not provide a robust implementation of one-hot encoding because they do not meet one of the requirements below. Below are some of the requirements (as I see them) for a useful, accurate, and robust one-hot encoding function:
A one-hot encoding function must:
handle list of various types (e.g. integers, strings, floats, etc.) as input
handle an input list with duplicates
return a list of lists corresponding (in the same order as) to the inputs
return a list of lists where each list is as short as possible
I tested many of the answers to this question and most of them fail on one of the requirements above.
Try this:
!pip install category_encoders
import category_encoders as ce
categorical_columns = [...the list of names of the columns you want to one-hot-encode ...]
encoder = ce.OneHotEncoder(cols=categorical_columns, use_cat_names=True)
df_train_encoded = encoder.fit_transform(df_train_small)
df_encoded.head()
The resulting dataframe df_train_encoded is the same as the original, but the categorical features are now replaced with their one-hot-encoded versions.
More information on category_encoders here.
To add to other questions, let me provide how I did it with a Python 2.0 function using Numpy:
def one_hot(y_):
# Function to encode output labels from number indexes
# e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
n_values = np.max(y_) + 1
return np.eye(n_values)[np.array(y_, dtype=np.int32)] # Returns FLOATS
The line n_values = np.max(y_) + 1 could be hard-coded for you to use the good number of neurons in case you use mini-batches for example.
Demo project/tutorial where this function has been used:
https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
It can and it should be easy as :
class OneHotEncoder:
def __init__(self,optionKeys):
length=len(optionKeys)
self.__dict__={optionKeys[j]:[0 if i!=j else 1 for i in range(length)] for j in range(length)}
Usage :
ohe=OneHotEncoder(["A","B","C","D"])
print(ohe.A)
print(ohe.D)
Expanding #Martin Thoma's answer
def one_hot_encode(y):
"""Convert an iterable of indices to one-hot encoded labels."""
y = y.flatten() # Sometimes not flattened vector is passed e.g (118,1) in these cases
# the function ends up creating a tensor e.g. (118, 2, 1). flatten removes this issue
nb_classes = len(np.unique(y)) # get the number of unique classes
standardised_labels = dict(zip(np.unique(y), np.arange(nb_classes))) # get the class labels as a dictionary
# which then is standardised. E.g imagine class labels are (4,7,9) if a vector of y containing 4,7 and 9 is
# directly passed then np.eye(nb_classes)[4] or 7,9 throws an out of index error.
# standardised labels fixes this issue by returning a dictionary;
# standardised_labels = {4:0, 7:1, 9:2}. The values of the dictionary are mapped to keys in y array.
# standardised_labels also removes the error that is raised if the labels are floats. E.g. 1.0; element
# cannot be called by an integer index e.g y[1.0] - throws an index error.
targets = np.vectorize(standardised_labels.get)(y) # map the dictionary values to array.
return np.eye(nb_classes)[targets]
Lets assume out of 10 variables, you have 3 categorical variables in your data frame named as cname1, cname2 and cname3.
Then following code will automatically create one hot encoded variable in the new dataframe.
import category_encoders as ce
encoder_var=ce.OneHotEncoder(cols=['cname1','cname2','cname3'],handle_unknown='return_nan',return_df=True,use_cat_names=True)
new_df = encoder_var.fit_transform(old_df)
A simple example using vectorize in numpy and apply example in pandas:
import numpy as np
a = np.array(['male','female','female','male'])
#define function
onehot_function = lambda x: 1.0 if (x=='male') else 0.0
onehot_a = np.vectorize(onehot_function)(a)
print(onehot_a)
# [1., 0., 0., 1.]
# -----------------------------------------
import pandas as pd
s = pd.Series(['male','female','female','male'])
onehot_s = s.apply(onehot_function)
print(onehot_s)
# 0 1.0
# 1 0.0
# 2 0.0
# 3 1.0
# dtype: float64
Here i tried with this approach :
import numpy as np

# Converting to one-hot
def one_hot_encoder(value, datal):
    datal[value] = 1
    return datal

def _one_hot_values(labels_data):
    encoded = [0] * len(labels_data)
    for j, i in enumerate(labels_data):
        max_value = [0] * (np.max(labels_data) + 1)
        encoded[j] = one_hot_encoder(i, max_value)
    return np.array(encoded)
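For example, a quick check of _one_hot_values on a small list of integer labels:
labels = [2, 0, 1, 2]
print(_one_hot_values(labels))
# [[0 0 1]
#  [1 0 0]
#  [0 1 0]
#  [0 0 1]]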

Much easier to use Pandas for basic one-hot encoding. If you're looking for more options you can use scikit-learn.
For basic one-hot encoding with Pandas you pass your data frame into the get_dummies function.
For example, if I have a dataframe called imdb_movies:
...and I want to one-hot encode the Rated column, I do this:
pd.get_dummies(imdb_movies.Rated)
This returns a new dataframe with a column for every "level" of rating that exists, along with either a 1 or 0 specifying the presence of that rating for a given observation.
Usually, we want this to be part of the original dataframe. In this case, we attach our new dummy-coded frame onto the original frame using "column-binding".
We can column-bind by using Pandas concat function:
rated_dummies = pd.get_dummies(imdb_movies.Rated)
pd.concat([imdb_movies, rated_dummies], axis=1)
We can now run an analysis on our full dataframe.
SIMPLE UTILITY FUNCTION
I would recommend making yourself a utility function to do this quickly:
def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    return res
Usage:
encode_and_bind(imdb_movies, 'Rated')
Also, as per @pmalbu's comment, if you would like the function to remove the original feature_to_encode, then use this version:
def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    res = res.drop([feature_to_encode], axis=1)
    return res
You can encode multiple features at the same time as follows (note that the result of each call has to be fed into the next, otherwise only the last feature would end up encoded):
features_to_encode = ['feature_1', 'feature_2', 'feature_3', 'feature_4']
res = train_set
for feature in features_to_encode:
    res = encode_and_bind(res, feature)
You can do it with numpy.eye, using the array element selection mechanism:
import numpy as np

nb_classes = 6
data = [[2, 3, 4, 0]]

def indices_to_one_hot(data, nb_classes):
    """Convert an iterable of indices to one-hot encoded labels."""
    targets = np.array(data).reshape(-1)
    return np.eye(nb_classes)[targets]
The return value of indices_to_one_hot(data, nb_classes) is now
array([[ 0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  1.,  0.],
       [ 1.,  0.,  0.,  0.,  0.,  0.]])
The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]).
One hot encoding with pandas is very easy:
import pandas as pd

def one_hot(df, cols):
    """
    :param df: pandas DataFrame
    :param cols: a list of columns to encode
    :return: a DataFrame with one-hot encoding
    """
    for each in cols:
        dummies = pd.get_dummies(df[each], prefix=each, drop_first=False)
        df = pd.concat([df, dummies], axis=1)
    return df
Edit:
Another way to one-hot encode, using sklearn's LabelBinarizer:
from sklearn.preprocessing import LabelBinarizer

label_binarizer = LabelBinarizer()
label_binarizer.fit(all_your_labels_list)  # needs to be global, or remembered, to use it later

def one_hot_encode(x):
    """
    One-hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: NumPy array of one-hot encoded labels
    """
    return label_binarizer.transform(x)
Firstly, the easiest way to one-hot encode: use sklearn.
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Secondly, I don't think using pandas to one-hot encode is that simple (though unconfirmed):
Creating dummy variables in pandas for python
Lastly, is it necessary for you to one-hot encode? One-hot encoding greatly increases the number of features (one new column per level), which drastically increases the run time of any classifier or anything else you are going to run, especially when each categorical feature has many levels. Instead, you can do dummy coding.
Using dummy encoding usually works well, for much less run time and complexity. A wise prof once told me, 'Less is More'.
Here's the code for my custom encoding function if you want.
from sklearn.preprocessing import LabelEncoder

# Auto-encodes any dataframe column of type category or object.
def dummyEncode(df):
    columnsToEncode = list(df.select_dtypes(include=['category', 'object']))
    le = LabelEncoder()
    for feature in columnsToEncode:
        try:
            df[feature] = le.fit_transform(df[feature])
        except Exception:
            print('Error encoding ' + feature)
    return df
Edit: A comparison, to be clearer:

One-hot encoding: convert n levels to n-1 columns (the first level, dog here, is dropped).

Index  Animal         Index  cat  mouse
  1    dog              1     0     0
  2    cat      -->     2     1     0
  3    mouse            3     0     1
You can see how this will explode your memory if you have many different types (or levels) in your categorical feature. Keep in mind, this is just ONE column.
Dummy coding (label encoding): convert n levels to a single column of integers.

Index  Animal         Index  Animal
  1    dog              1      0
  2    cat      -->     2      1
  3    mouse            3      2
Convert the levels to numerical representations instead. This greatly saves feature space, at the cost of a bit of accuracy.
You can use the numpy.eye function.
import numpy as np

def one_hot_encode(x, n_classes):
    """
    One-hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: NumPy array of one-hot encoded labels
    """
    return np.eye(n_classes)[x]
def main():
    labels = [0, 1, 2, 3, 4, 3, 2, 1, 0]  # renamed from `list` to avoid shadowing the built-in
    n_classes = 5
    one_hot_list = one_hot_encode(labels, n_classes)
    print(one_hot_list)

if __name__ == "__main__":
    main()
Result
D:\Desktop>python test.py
[[ 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 1. 0. 0. 0. 0.]]
pandas has the built-in function get_dummies to one-hot encode particular columns.
A one-liner for one-hot encoding:
df = pd.concat([df, pd.get_dummies(df['column name'], prefix='column name')], axis=1).drop(['column name'], axis=1)
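For example, a minimal sketch on a toy frame (the column names here are made up; recent pandas versions may show the dummy columns as booleans rather than 0/1):

import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'red'], 'value': [1, 2, 3]})
df = pd.concat([df, pd.get_dummies(df['color'], prefix='color')], axis=1).drop(['color'], axis=1)
print(df)
#    value  color_blue  color_red
# 0      1           0          1
# 1      2           1          0
# 2      3           0          1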
Here is a solution using DictVectorizer and the Pandas DataFrame.to_dict('records') method.
>>> import pandas as pd
>>> X = pd.DataFrame({'income': [100000, 110000, 90000, 30000, 14000, 50000],
...                   'country': ['US', 'CAN', 'US', 'CAN', 'MEX', 'US'],
...                   'race': ['White', 'Black', 'Latino', 'White', 'White', 'Black']})
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer()
>>> qualitative_features = ['country','race']
>>> X_qual = v.fit_transform(X[qualitative_features].to_dict('records'))
>>> v.vocabulary_
{'country=CAN': 0,
'country=MEX': 1,
'country=US': 2,
'race=Black': 3,
'race=Latino': 4,
'race=White': 5}
>>> X_qual.toarray()
array([[ 0., 0., 1., 0., 0., 1.],
[ 1., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 1.],
[ 0., 1., 0., 0., 0., 1.],
[ 0., 0., 1., 1., 0., 0.]])
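The fitted vectorizer can then be reused on new rows so the columns line up with the training encoding; a category it has never seen (the hypothetical 'FRA' below) simply comes out as all zeros:

>>> X_new = pd.DataFrame({'country': ['US', 'FRA'], 'race': ['Black', 'White']})
>>> v.transform(X_new[qualitative_features].to_dict('records')).toarray()
array([[ 0.,  0.,  1.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.]])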
One-hot encoding requires a bit more than converting the values to indicator variables. Typically, the ML process requires you to apply this encoding several times to validation or test data sets, and to apply the model you construct to real-time observed data. You should store the mapping (transform) that was used to construct the model. A good solution would use DictVectorizer or LabelEncoder (followed by get_dummies). Here is a function that you can use:
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def oneHotEncode2(df, le_dict=None):
    if le_dict is None:
        le_dict = {}
    if not le_dict:
        columnsToEncode = list(df.select_dtypes(include=['category', 'object']))
        train = True
    else:
        columnsToEncode = le_dict.keys()
        train = False

    for feature in columnsToEncode:
        if train:
            le_dict[feature] = LabelEncoder()
        try:
            if train:
                df[feature] = le_dict[feature].fit_transform(df[feature])
            else:
                df[feature] = le_dict[feature].transform(df[feature])

            df = pd.concat([df,
                            pd.get_dummies(df[feature]).rename(columns=lambda x: feature + '_' + str(x))],
                           axis=1)
            df = df.drop(feature, axis=1)
        except Exception:
            print('Error encoding ' + feature)
            df[feature] = df[feature].apply(pd.to_numeric, errors='coerce')

    return (df, le_dict)
This works on a pandas dataframe and for each column of the dataframe it creates and returns a mapping back. So you would call it like this:
train_data, le_dict = oneHotEncode2(train_data)
Then on the test data, the call is made by passing the dictionary returned back from training:
test_data, _ = oneHotEncode2(test_data, le_dict)
An equivalent method is to use DictVectorizer. A related post on it is on my blog; I mention it here since it provides some reasoning behind this approach over simply using get_dummies (disclosure: this is my own blog).
You can pass the data to the CatBoost classifier without encoding. CatBoost handles categorical variables itself by performing one-hot encoding and expanding mean target encoding.
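A minimal sketch of this (the variable names and column indices are placeholders, not from the original answer; CatBoost's fit takes a cat_features argument listing the positions of the categorical columns):

from catboost import CatBoostClassifier

# cat_feature_indices would list the positions of your categorical columns (assumption)
model = CatBoostClassifier(iterations=100, verbose=False)
model.fit(X_train, y_train, cat_features=cat_feature_indices)
preds = model.predict(X_test)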
You can do the following as well. Note that for the example below you don't have to use pd.concat.
import pandas as pd

# Initialise data of lists.
data = {'Color': ['Red', 'Yellow', 'Red', 'Yellow'], 'Length': [20.1, 21.1, 19.1, 18.1],
        'Group': [1, 2, 1, 2]}

# Create DataFrame
df = pd.DataFrame(data)

for _c in df.select_dtypes(include=['object']).columns:
    print(_c)
    df[_c] = pd.Categorical(df[_c])

df_transformed = pd.get_dummies(df)
df_transformed
You can also explicitly change particular columns to categorical. For example, here I am changing Color and Group:
import pandas as pd

# Initialise data of lists.
data = {'Color': ['Red', 'Yellow', 'Red', 'Yellow'], 'Length': [20.1, 21.1, 19.1, 18.1],
        'Group': [1, 2, 1, 2]}

# Create DataFrame
df = pd.DataFrame(data)

columns_to_change = list(df.select_dtypes(include=['object']).columns)
columns_to_change.append('Group')

for _c in columns_to_change:
    print(_c)
    df[_c] = pd.Categorical(df[_c])

df_transformed = pd.get_dummies(df)
df_transformed
I know I'm late to this party, but the simplest way to one-hot encode a dataframe in an automated way is to use this function:

def hot_encode(df):
    obj_df = df.select_dtypes(include=['object'])
    return pd.get_dummies(df, columns=obj_df.columns).values
This works for me:
pandas.factorize( ['B', 'C', 'D', 'B'] )[0]
Output:
[0, 1, 2, 0]
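Note that factorize produces integer codes (a label encoding), not one-hot vectors. If you need the latter, a small follow-up with np.eye works; a minimal sketch:

import numpy as np
import pandas as pd

codes, uniques = pd.factorize(['B', 'C', 'D', 'B'])
one_hot = np.eye(len(uniques))[codes]
print(one_hot)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]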
I used this in my acoustic model; perhaps it helps in your model too:
import numpy as np

def one_hot_encoding(x, n_out):
    x = x.astype(int)
    shape = x.shape
    x = x.flatten()
    N = len(x)
    x_categ = np.zeros((N, n_out))
    x_categ[np.arange(N), x] = 1
    return x_categ.reshape(shape + (n_out,))
Short Answer
Here is a function to do one-hot-encoding without using numpy, pandas, or other packages. It takes a list of integers, booleans, or strings (and perhaps other types too).
import typing

def one_hot_encode(items: list) -> typing.List[list]:
    results = []
    # Find the unique items (duplicates will get the same encoding)
    unique_items = list(set(items))
    # Sort the unique items
    sorted_items = sorted(unique_items)
    # Find how long the encoding for each item should be
    max_index = len(unique_items)
    for item in items:
        # Create a list of zeros of the appropriate length
        one_hot_encoded_result = [0 for i in range(0, max_index)]
        # Find the index of the item
        one_hot_index = sorted_items.index(item)
        # Change the zero at that index to a one
        one_hot_encoded_result[one_hot_index] = 1
        # Add the result
        results.append(one_hot_encoded_result)
    return results
Example:
one_hot_encode([2, 1, 1, 2, 5, 3])
# [[0, 1, 0, 0],
# [1, 0, 0, 0],
# [1, 0, 0, 0],
# [0, 1, 0, 0],
# [0, 0, 0, 1],
# [0, 0, 1, 0]]
one_hot_encode([True, False, True])
# [[0, 1], [1, 0], [0, 1]]
one_hot_encode(['a', 'b', 'c', 'a', 'e'])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
Long(er) Answer
I know there are already a lot of answers to this question, but I noticed two things. First, most of the answers use packages like numpy and/or pandas. And this is a good thing. If you are writing production code, you should probably be using robust, fast algorithms like those provided in the numpy/pandas packages. But, for the sake of education, I think someone should provide an answer which has a transparent algorithm and not just an implementation of someone else's algorithm. Second, I noticed that many of the answers do not provide a robust implementation of one-hot encoding because they do not meet one of the requirements below. Below are some of the requirements (as I see them) for a useful, accurate, and robust one-hot encoding function:
A one-hot encoding function must:
handle a list of various types (e.g. integers, strings, floats, etc.) as input
handle an input list with duplicates
return a list of lists corresponding (in the same order) to the inputs
return a list of lists where each inner list is as short as possible
I tested many of the answers to this question and most of them fail on one of the requirements above.

Python: Parse string to array

I currently have the problem of parsing a string to a numpy array.
The string look like this:
input = '{{13,1},{2,1},{4,4},{1,7},{9,1}}'
The string represents a sparse vector, where the vector itself is delimited by curly brackets. Each entry, itself delimited by curly brackets, indicates which indices have which entries. The first entry in the list encodes the dimensions of the vector.
In the above example, the vector has a length of 13 and 4 entries which are different from 0.
output = np.array([0,7,1,0,4,0,0,0,0,1,0,0,0])
After parsing it to an array, I have to parse to back to a string in its dense format, with the format:
stringoutput = '{0,7,1,0,4,0,0,0,0,1,0,0,0}'
While I managed to parse the numpy array to a string, I ran into the problem of having the wrong brackets (i.e. the built-in array2string function uses [], while I need {}).
I am open to any suggestions that help solve this efficiently (even for large sparse vectors).
Thank you.
Edit: The given vector is always one-dimensional, i.e. the second number within the first {} will always be 1 (and you only need one index to locate the position of an element).
Here is a numpythonic way:
In [131]: import ast
In [132]: inp = '{{13,1},{2,1},{4,4},{1,7},{9,1}}'
# Replace the braces with parentheses in order to convert the string to a valid Python object.
In [133]: inp = ast.literal_eval(inp.replace('{', '(').replace('}', ')'))
# Unpack the dimension and the rest of the values from the input object.
In [134]: dim, *rest = inp
# Create the zero array based on the extracted dimension.
In [135]: arr = np.zeros(dim)
# Use `zip` to collect the indices and values separately for use in `np.put`.
In [136]: indices, values = zip(*rest)
In [137]: np.put(arr, indices, values)
In [138]: arr
Out[138]:
array([[ 0.],
[ 7.],
[ 1.],
[ 0.],
[ 4.],
[ 0.],
[ 0.],
[ 0.],
[ 0.],
[ 1.],
[ 0.],
[ 0.],
[ 0.]])
I like @Kasramvd's approach, but figured I'd put this one out there as well:
In [116]: r = (list(map(int, a.split(','))) for a in input[2:-2].split('},{'))
In [118]: l = np.zeros(next(r)[0], int)
In [119]: for a in r:
...: l[a[0]] = a[1]
...:
In [122]: s = '{' + ','.join(map(str, l)) + '}'
In [123]: s
Out[123]: '{0,7,1,0,4,0,0,0,0,1,0,0,0}'
This is based on @Kasramvd's answer. I adjusted how the other values are populated.
From @Kasramvd:
import numpy as np
import ast
inp = '{{13,1},{2,1},{4,4},{1,7},{9,1}}'
inp = ast.literal_eval(inp.replace('{', '(').replace('}', ')'))
dim, *rest = inp
My adjustments:
a = np.zeros(dim, dtype=int)
r = np.array(rest)
a[r[:, 0], 0] = r[:, 1]
a
array([[0],
[7],
[1],
[0],
[4],
[0],
[0],
[0],
[0],
[1],
[0],
[0],
[0]])
in one dimension
a = np.zeros(dim[0], dtype=int)
r = np.array(rest)
a[r[:, 0]] = r[:, 1]
a
array([0, 7, 1, 0, 4, 0, 0, 0, 0, 1, 0, 0, 0])
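Putting the two directions together, here is a minimal round-trip sketch (parse the sparse string, then emit the dense brace-delimited string the question asks for; the function names are my own):

import ast
import numpy as np

def parse_sparse(s):
    # Swap braces for parentheses so the string becomes a valid Python literal.
    (dim, _), *rest = ast.literal_eval(s.replace('{', '(').replace('}', ')'))
    arr = np.zeros(dim, dtype=int)
    for index, value in rest:
        arr[index] = value
    return arr

def to_dense_string(arr):
    # Emit the dense vector with curly instead of square brackets.
    return '{' + ','.join(map(str, arr)) + '}'

print(to_dense_string(parse_sparse('{{13,1},{2,1},{4,4},{1,7},{9,1}}')))
# {0,7,1,0,4,0,0,0,0,1,0,0,0}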

Better way to shuffle two numpy arrays in unison

I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
For example:
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
[1, 1],
[3, 3]]), array([2, 1, 3]))
However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large.
Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
One other thought I had was this:
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
This works... but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.
You can use NumPy's array indexing:
def unison_shuffled_copies(a, b):
    assert len(a) == len(b)
    p = numpy.random.permutation(len(a))
    return a[p], b[p]
This will result in the creation of separate unison-shuffled arrays.
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y, random_state=0)
To learn more, see http://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
Your "scary" solution does not appear scary to me. Calling shuffle() for two sequences of the same length results in the same number of calls to the random number generator, and these are the only "random" elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to shuffle(), so the whole algorithm will generate the same permutation.
If you don't like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.
Example: Let's assume the arrays a and b look like this:
a = numpy.array([[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]],
[[ 12., 13., 14.],
[ 15., 16., 17.]]])
b = numpy.array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.]])
We can now construct a single array containing all the data:
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[ 0., 1., 2., 3., 4., 5., 0., 1.],
# [ 6., 7., 8., 9., 10., 11., 2., 3.],
# [ 12., 13., 14., 15., 16., 17., 4., 5.]])
Now we create views simulating the original a and b:
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
The data of a2 and b2 is shared with c. To shuffle both arrays simultaneously, use numpy.random.shuffle(c).
In production code, you would of course try to avoid creating the original a and b at all and right away create c, a2 and b2.
This solution could be adapted to the case that a and b have different dtypes.
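For instance, a minimal sketch with a structured array (the field names 'a' and 'b' are my choice; each record holds one row of each original array):

import numpy as np

a = np.arange(18, dtype=float).reshape(3, 2, 3)
b = np.array([[0., 1.], [2., 3.], [4., 5.]])

# One record per row: field 'a' holds a (2, 3) block, field 'b' a pair.
c = np.empty(len(a), dtype=[('a', a.dtype, a.shape[1:]),
                            ('b', b.dtype, b.shape[1:])])
c['a'], c['b'] = a, b

np.random.shuffle(c)      # shuffles whole records, keeping rows paired
a2, b2 = c['a'], c['b']   # views into the shuffled data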
Very simple solution:
randomize = np.arange(len(x))
np.random.shuffle(randomize)
x = x[randomize]
y = y[randomize]
The two arrays x and y are now both randomly shuffled in the same way.
James wrote an sklearn solution in 2015, which is helpful. But he added a random state variable, which is not needed. In the code below, the random state from numpy is automatically assumed.
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y)
from numpy.random import permutation
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data #numpy array
y = iris.target #numpy array
# Data is currently unshuffled; we should shuffle
# each X[i] with its corresponding y[i]
perm = permutation(len(X))
X = X[perm]
y = y[perm]
Shuffle any number of arrays together, in-place, using only NumPy.
import numpy as np
def shuffle_arrays(arrays, set_seed=-1):
    """Shuffles arrays in-place, in the same order, along axis=0

    Parameters:
    -----------
    arrays : List of NumPy arrays.
    set_seed : Seed value if int >= 0, else seed is random.
    """
    assert all(len(arr) == len(arrays[0]) for arr in arrays)
    seed = np.random.randint(0, 2**(32 - 1) - 1) if set_seed < 0 else set_seed

    for arr in arrays:
        rstate = np.random.RandomState(seed)
        rstate.shuffle(arr)
It can be used like this:
a = np.array([1, 2, 3, 4, 5])
b = np.array([10,20,30,40,50])
c = np.array([[1,10,11], [2,20,22], [3,30,33], [4,40,44], [5,50,55]])
shuffle_arrays([a, b, c])
A few things to note:
- The assert ensures that all input arrays have the same length along their first dimension.
- Arrays are shuffled in-place by their first dimension - nothing is returned.
- The random seed is within the positive int32 range.
- If a repeatable shuffle is needed, a seed value can be set.
After the shuffle, the data can be split using np.split or referenced using slices - depending on the application.
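For example, a hypothetical 80/20 split after shuffling:

shuffle_arrays([a, b], set_seed=42)
split_at = int(0.8 * len(a))
a_train, a_val = np.split(a, [split_at])
b_train, b_val = np.split(b, [split_at])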
You can make an index array like:
s = np.arange(0, len(a), 1)
then shuffle it:
np.random.shuffle(s)
Now use this s to index your arrays; the same shuffled index array returns correspondingly shuffled vectors.
x_data = x_data[s]
x_label = x_label[s]
There is a well-known function that can handle this:
from sklearn.model_selection import train_test_split
X, _, Y, _ = train_test_split(X,Y, test_size=0.0)
Just setting test_size to 0 will avoid splitting and give you shuffled data.
Though it is usually used to split train and test data, it does shuffle them too.
From documentation
Split arrays or matrices into random train and test subsets
Quick utility that wraps input validation and
next(ShuffleSplit().split(X, y)) and application to input data into a
single call for splitting (and optionally subsampling) data in a
oneliner.
This seems like a very simple solution:
import numpy as np
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    c = np.arange(len(a))
    np.random.shuffle(c)
    return a[c], b[c]
a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([11, 22, 33])
shuffle_in_unison(a,b)
Out[94]:
(array([[3, 3],
[2, 2],
[1, 1]]),
array([33, 22, 11]))
One way in which in-place shuffling can be done for connected lists is using a seed (it could be random) and using numpy.random.shuffle to do the shuffling.
# Set seed to a random number if you want the shuffling to be non-deterministic.
def shuffle(a, b, seed):
    np.random.seed(seed)
    np.random.shuffle(a)
    np.random.seed(seed)
    np.random.shuffle(b)
That's it. This will shuffle both a and b in the exact same way. This is also done in-place which is always a plus.
Edit: don't use np.random.seed(); use np.random.RandomState instead:
def shuffle(a, b, seed):
    rand_state = np.random.RandomState(seed)
    rand_state.shuffle(a)
    rand_state.seed(seed)
    rand_state.shuffle(b)
When calling it just pass in any seed to feed the random state:
a = [1,2,3,4]
b = [11, 22, 33, 44]
shuffle(a, b, 12345)
Output:
>>> a
[1, 4, 2, 3]
>>> b
[11, 44, 22, 33]
Edit: Fixed code to re-seed the random state
Say we have two arrays: a and b.
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
b = np.array([[9,1,1],[6,6,6],[4,2,0]])
We can first obtain row indices by permuting the first dimension:
indices = np.random.permutation(a.shape[0])
[1 2 0]
Then use advanced indexing.
Here we are using the same indices to shuffle both arrays in unison.
a_shuffled = a[indices[:,np.newaxis], np.arange(a.shape[1])]
b_shuffled = b[indices[:,np.newaxis], np.arange(b.shape[1])]
This is equivalent to
np.take(a, indices, axis=0)
[[4 5 6]
[7 8 9]
[1 2 3]]
np.take(b, indices, axis=0)
[[6 6 6]
[4 2 0]
[9 1 1]]
If you want to avoid copying arrays, then I would suggest that instead of generating a permutation list, you go through every element in the array and randomly swap it with another position in the array:

for old_index in range(len(a)):
    new_index = numpy.random.randint(old_index + 1)
    a[old_index], a[new_index] = a[new_index], a[old_index]
    b[old_index], b[new_index] = b[new_index], b[old_index]
This implements the Knuth-Fisher-Yates shuffle algorithm.
The shortest and easiest way, in my opinion, is to use a seed:

import random

random.seed(seed)
random.shuffle(x_data)
# Reset the same seed to get the identical random sequence and shuffle the y
random.seed(seed)
random.shuffle(y_data)
Most solutions above work; however, if you have column vectors you have to transpose them first. Here is an example:

def shuffle(self) -> None:
    """
    Shuffles X and Y
    """
    x = self.X.T
    y = self.Y.T
    p = np.random.permutation(len(x))
    self.X = x[p].T
    self.Y = y[p].T
With an example, this is what I'm doing:

from random import shuffle

combo = []
for i in range(60000):
    combo.append((images[i], labels[i]))

shuffle(combo)

im = []
lab = []
for c in combo:
    im.append(c[0])
    lab.append(c[1])

images = np.asarray(im)
labels = np.asarray(lab)
I extended python's random.shuffle() to take a second arg:
def shuffle_together(x, y):
    assert len(x) == len(y)
    for i in reversed(range(1, len(x))):
        # Pick an element in x[:i+1] with which to exchange x[i]
        j = int(random.random() * (i + 1))
        x[i], x[j] = x[j], x[i]
        y[i], y[j] = y[j], y[i]
That way I can be sure that the shuffling happens in-place, and the function is not all too long or complicated.
Just use numpy...
First merge the two input arrays: the 1-D array is the labels (y) and the 2-D array is the data (x). Then shuffle them with the NumPy shuffle method. Finally, split them again and return.
import numpy as np

def shuffle_2d(a, b):
    rows = a.shape[0]
    if b.shape != (rows, 1):
        b = b.reshape((rows, 1))
    S = np.hstack((b, a))
    np.random.shuffle(S)
    b, a = S[:, 0], S[:, 1:]
    return a, b
features, samples = 2, 5
x, y = np.random.random((samples, features)), np.arange(samples)
x, y = shuffle_2d(x, y)
