sklearn: Can't make OneHotEncoder work with Pipeline - python

I am building a pipeline for a model using ColumnTransformer. This is what my pipeline looks like:
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler
from sklearn.impute import KNNImputer

imputer_transformer = ColumnTransformer([
    ('knn_imputer', KNNImputer(n_neighbors=5), [0, 3, 4, 6, 7])
], remainder='passthrough')

category_transformer = ColumnTransformer([
    ("kms_driven_engine_min_max_scaler", MinMaxScaler(), [0, 6]),
    ("owner_ordinal_enc", OrdinalEncoder(categories=[['fourth', 'third', 'second', 'first']],
                                         handle_unknown='ignore', dtype=np.int16), [3]),
    ("brand_location_ohe", OneHotEncoder(sparse=False, handle_unknown='ignore'), [2, 5]),
], remainder='passthrough')

def build_pipeline_with_estimator(estimator):
    return Pipeline([
        ('imputer', imputer_transformer),
        ('category_transformer', category_transformer),
        ('estimator', estimator),
    ])
and this is what my dataset looks like:
kms_driven owner location mileage power brand engine age
34000.0 first other NaN 12.0 Yamaha 150.0 9
28000.0 first other 72.0 7.0 Hero 100.0 16
5947.0 first other 53.0 19.0 Bajaj NaN 4
11000.0 first delhi 40.0 19.8 Royal Enfield 350.0 7
13568.0 first delhi 63.0 14.0 Suzuki 150.0 5
This is how I am using LinearRegression with my pipeline.
linear_regressor = build_pipeline_with_estimator(LinearRegression())
linear_regressor.fit(X_train,y_train)
print('Linear Regression Train Performance.\n')
print(model_perf(linear_regressor,X_train,y_train))
print('Linear Regression Test Performance.\n')
print(model_perf(linear_regressor,X_test,y_test))
Now, whenever I try to apply linear regression with the pipeline, I get this error:
ValueError: could not convert string to float: 'bangalore'
'bangalore' is one of the values in the location feature, which I am trying to one-hot encode, but it is failing and I can't figure out what is going wrong here. Any help would be appreciated.

After passing through the imputer, the non-imputed columns are moved to the right, as noted in the Notes section of the documentation:
Columns of the original feature matrix that are not specified are
dropped from the resulting transformed feature matrix, unless
specified in the passthrough keyword. Those columns specified with
passthrough are added at the right to the output of the transformers.
We can try just using the imputer first:
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler
from sklearn.impute import KNNImputer
from sklearn.linear_model import LinearRegression

imputer_transformer = ColumnTransformer([
    ('knn_imputer', KNNImputer(n_neighbors=5), [0, 3, 4, 6, 7])
], remainder='passthrough')
We can try it on some example data, and you will see that your categorical columns are now shifted to the right:
X_train = pd.DataFrame({'kms': [0, 1, 2], 'owner': ['first', 'first', 'second'],
                        'location': ['other', 'other', 'delhi'], 'mileage': [9, 8, np.nan],
                        'power': [3, 2, 1], 'brand': ['A', 'B', 'C'],
                        'engine': [10, 100, 1000], 'age': [3, 4, 5]})
imputer_transformer.fit_transform(X_train)
Out[25]:
array([[0.0, 9.0, 3.0, 10.0, 3.0, 'first', 'other', 'A'],
       [1.0, 8.0, 2.0, 100.0, 4.0, 'first', 'other', 'B'],
       [2.0, 8.5, 1.0, 1000.0, 5.0, 'second', 'delhi', 'C']], dtype=object)
In your case, you can see the engine column is now at index 3, the owner column (for your ordinal encoder) is at index 5, and the two one-hot columns (location and brand) are the last two, at indices 6 and 7, so a simple solution might be:
category_transformer = ColumnTransformer([
    ("kms_driven_engine_min_max_scaler", MinMaxScaler(), [0, 3]),
    ("owner_ordinal_enc", OrdinalEncoder(categories=[['fourth', 'third', 'second', 'first']],
                                         handle_unknown='ignore', dtype=np.int16), [5]),
    ("brand_location_ohe", OneHotEncoder(sparse=False, handle_unknown='ignore'), [6, 7]),
], remainder='passthrough')
y_train = [7,3,2]
linear_regressor = build_pipeline_with_estimator(LinearRegression())
linear_regressor.fit(X_train,y_train)
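If you would rather not track shifted positional indices at all, another option (a sketch, assuming scikit-learn >= 1.2, where set_output and sparse_output exist) is to have the imputer emit a pandas DataFrame so the second ColumnTransformer can select columns by name instead of by position:
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler
from sklearn.impute import KNNImputer
from sklearn.linear_model import LinearRegression

# The imputer outputs a DataFrame with the original column names preserved,
# so downstream steps are immune to the column reordering of 'passthrough'.
imputer_transformer = ColumnTransformer([
    ('knn_imputer', KNNImputer(n_neighbors=5),
     ['kms_driven', 'mileage', 'power', 'engine', 'age'])
], remainder='passthrough',
   verbose_feature_names_out=False).set_output(transform='pandas')

category_transformer = ColumnTransformer([
    ('kms_driven_engine_min_max_scaler', MinMaxScaler(), ['kms_driven', 'engine']),
    # note: OrdinalEncoder has no handle_unknown='ignore' option;
    # unknown categories raise an error by default
    ('owner_ordinal_enc', OrdinalEncoder(categories=[['fourth', 'third', 'second', 'first']],
                                         dtype=np.int16), ['owner']),
    ('brand_location_ohe', OneHotEncoder(sparse_output=False, handle_unknown='ignore'),
     ['brand', 'location']),
], remainder='passthrough')

linear_regressor = Pipeline([
    ('imputer', imputer_transformer),
    ('category_transformer', category_transformer),
    ('estimator', LinearRegression()),
])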

Related

Fitting model with NaN values in the features

I would like to know if there is a method for fitting a model even when some features contain NaN values.
X
Feature1 Feature2 Feature3 Feature4 Feature5
0 0.1 NaN 0.3 NaN 4.0
1 4.0 6.0 6.6 99.0 2.0
2 11.0 15.0 2.2 3.3 NaN
3 1.0 6.0 2.0 2.5 4.0
4 5.0 11.2 NaN 3.0 NaN
Code
model = LogisticRegression()
model.fit(X_train, y_train)
Error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
Usually, tree-based classifiers can handle NaNs as they just split the dataset based on the feature values. Of course, it also depends on how the algorithm is implemented.
I am not sure about sklearn, but if you really want to classify while preserving the NaN values, your best choice is to use XGBoost. It is not in sklearn, but there are very good libraries for it and they are easy to use as well. It is also one of the most powerful classifiers, so you should definitely try it!
https://xgboost.readthedocs.io/en/latest/python/python_intro.html
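For completeness, newer scikit-learn versions also include a tree-based classifier with native NaN support. A minimal sketch, assuming scikit-learn >= 1.0 (where HistGradientBoostingClassifier is importable directly):
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# HistGradientBoostingClassifier routes samples with missing values to
# whichever side of each split improves the loss, so no imputation is needed.
X = np.array([[0.1, np.nan, 0.3],
              [4.0, 6.0, 6.6],
              [11.0, 15.0, np.nan],
              [1.0, 6.0, 2.0]])
y = [0, 1, 1, 0]

clf = HistGradientBoostingClassifier().fit(X, y)
print(clf.predict(X))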
You can use a SimpleImputer() to replace NaN with the mean value, or with a constant, prior to fitting the model. Have a look at the documentation to find the strategy that works for your use case.
In your case, if you want to keep the rows but take the missing values out of the equation, you can simply replace NaN with 0 using SimpleImputer(strategy='constant', fill_value=0).
As follows:
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression

model = make_pipeline(
    SimpleImputer(strategy='constant', fill_value=0),
    LinearRegression()
)
model.fit(X, y)
Note: I am using a pipeline here to apply all the steps in one go.
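If you are worried that the model cannot distinguish an imputed 0 from a genuine 0, SimpleImputer can also append binary missingness columns. A sketch, assuming a scikit-learn version >= 0.21 where the add_indicator parameter exists:
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[0.1, np.nan],
              [4.0, 6.0],
              [np.nan, 15.0]])

# add_indicator=True appends one 0/1 column per feature that had
# missing values, marking which entries were imputed
imp = SimpleImputer(strategy='constant', fill_value=0, add_indicator=True)
print(imp.fit_transform(X))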

ValueError when attempting to create dataframe with OneHotEncoder results

I have recently started learning Python to develop a predictive model for a research project using machine learning methods. I have used OneHotEncoder to encode all the categorical variables in my dataset:
# Encode categorical data with oneHotEncoder
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(handle_unknown='ignore')
Z = ohe.fit_transform(Z)
I now want to create a dataframe with the results from the OneHotEncoder. I want the dataframe columns to be the new categories that resulted from the encoding, that is why I am using the categories_ attribute. When running the following line of code:
ohe_df = pd.DataFrame(Z, columns=ohe.categories_)
I get the error: ValueError: all arrays must be same length
I understand that the arrays being referred to in the error message are the arrays of categories, each of which has a different length depending on the number of categories it contains, but am not sure what the correct way of creating a dataframe with the new categories as columns is (when there are multiple features).
I tried to do this with a small dataset that contained one feature only and it worked:
ohe = OneHotEncoder(handle_unknown='ignore', sparse=False)
df = pd.DataFrame(['Male', 'Female', 'Female'])
results = ohe.fit_transform(df)
ohe_df = pd.DataFrame(results, columns=ohe.categories_)
ohe_df.head()
Female Male
0 0.0 1.0
1 1.0 0.0
2 1.0 0.0
So how do I do the same for my large dataset with numerous features?
Thank you in advance.
EDIT:
As requested, I have come up with an MWE to demonstrate how it is not working:
import numpy as np
import pandas as pd
# create dataframe
df = pd.DataFrame(np.array([['Male', 'Yes', 'Forceps'],
                            ['Female', 'No', 'Forceps and ventouse'],
                            ['Female', 'missing', 'None'],
                            ['Male', 'Yes', 'Ventouse']]),
                  columns=['gender', 'diabetes', 'assistance'])
df.head()
# encode categorical data
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(handle_unknown='ignore')
results = ohe.fit_transform(df)
print(results)
By this step, I have created a dataframe of categorical data and encoded it. I now want to create another dataframe such that the columns of the new dataframe are the categories created by the OneHotEncoder and rows are the encoded data. To do this I tried two things:
ohe_df = pd.DataFrame(results, columns=np.concatenate(ohe.categories_))
And I tried:
ohe_df = pd.DataFrame(results, columns=ohe.get_feature_names(input_features=df.columns))
Which both resulted in the error:
ValueError: Shape of passed values is (4, 1), indices imply (4, 9)
IIUC,
import numpy as np
import pandas as pd
# create dataframe
df = pd.DataFrame(np.array([['Male', 'Yes', 'Forceps'],
                            ['Female', 'No', 'Forceps and ventouse'],
                            ['Female', 'missing', 'None'],
                            ['Male', 'Yes', 'Ventouse']]),
                  columns=['gender', 'diabetes', 'assistance'])
df.head()
# encode categorical data
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(handle_unknown='ignore')
results = ohe.fit_transform(df)
df_results = pd.DataFrame.sparse.from_spmatrix(results)
df_results.columns = ohe.get_feature_names(df.columns)
df_results
Output:
gender_Female gender_Male diabetes_No diabetes_Yes diabetes_missing assistance_Forceps assistance_Forceps and ventouse assistance_None assistance_Ventouse
0 0.0 1.0 0.0 1.0 0.0 1.0 0.0 0.0 0.0
1 1.0 0.0 1.0 0.0 0.0 0.0 1.0 0.0 0.0
2 1.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0
3 0.0 1.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0
Note, the output of ohe.fit_transform(df) is a sparse matrix.
print(type(results))
<class 'scipy.sparse.csr.csr_matrix'>
You can convert this to a dataframe using pd.DataFrame.sparse.from_spmatrix. Then, we can use ohe.get_feature_names, passing the original dataframe columns, to name the columns of the results dataframe, df_results. (In scikit-learn >= 1.0, get_feature_names is replaced by get_feature_names_out.)
ohe.categories_ is a list of arrays, one array for each feature. You need to flatten that into a 1D list/array for pd.DataFrame, e.g. with np.concatenate(ohe.categories_).
But probably better, use the builtin method get_feature_names.
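If you prefer to skip the sparse matrix entirely, you can ask the encoder for a dense array up front. A sketch (note the flag is sparse in older scikit-learn and sparse_output from 1.2 on, and get_feature_names became get_feature_names_out in 1.0):
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# the same df as in the MWE above
df = pd.DataFrame([['Male', 'Yes', 'Forceps'],
                   ['Female', 'No', 'Forceps and ventouse'],
                   ['Female', 'missing', 'None'],
                   ['Male', 'Yes', 'Ventouse']],
                  columns=['gender', 'diabetes', 'assistance'])

ohe = OneHotEncoder(handle_unknown='ignore', sparse_output=False)
results = ohe.fit_transform(df)                # plain numpy array, not sparse
ohe_df = pd.DataFrame(results, columns=ohe.get_feature_names_out(df.columns))
print(ohe_df)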

Linear regression and plots through each numerical independent variable and target variable

I would like to know if there is a way to get a one-on-one (one independent variable vs. the target variable) linear regression analysis, with its p-value, R2 value, and a plot showing how linearly related they are. I want this to run on each independent variable separately. As far as I know, it is possible to get an OLS regression analysis from the Python statsmodels library, but it runs on the whole dataset and gives one result, and there are no plots to understand it visually.
To very quickly visualize the regressions, you can try the following using seaborn:
import numpy as np
from sklearn.datasets import load_iris
import pandas as pd
import seaborn as sns
data = load_iris()
df = pd.DataFrame(data.data, columns=['sepal.length','sepal.width','petal.length','petal.width'])
df = pd.melt(df,id_vars='sepal.length')
df[:5]
sepal.length variable value
0 5.1 sepal.width 3.5
1 4.9 sepal.width 3.0
2 4.7 sepal.width 3.2
3 4.6 sepal.width 3.1
4 5.0 sepal.width 3.6
sns.lmplot(x='sepal.length', y='value', data=df, col='variable',
           col_wrap=2, aspect=0.6, height=4, palette='coolwarm')
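Since the question also asks for the p-value and R2 of each one-on-one fit, here is a minimal sketch with statsmodels, looping over one independent variable at a time (using sepal.length as the target, as in the seaborn example above):
import pandas as pd
import statsmodels.api as sm
from sklearn.datasets import load_iris

data = load_iris()
df = pd.DataFrame(data.data, columns=['sepal.length', 'sepal.width',
                                      'petal.length', 'petal.width'])
target = df['sepal.length']

# one simple OLS regression per independent variable
for col in ['sepal.width', 'petal.length', 'petal.width']:
    X = sm.add_constant(df[col])       # add the intercept term
    fit = sm.OLS(target, X).fit()
    print(col, 'R2 =', round(fit.rsquared, 3), 'p =', round(fit.pvalues[col], 4))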

Python - How to do prediction and testing over multiple files using sklearn

I want to train a model and finally predict a truth value using a random forest model in Python on a three-column dataset (the full CSV dataset is formatted as in the following):
t_stamp,X,Y
0.000543,0,10
0.000575,0,10
0.041324,1,10
0.041331,2,10
0.041336,3,10
0.04134,4,10
0.041345,5,10
0.04135,6,10
0.041354,7,10
I want to predict the current value of Y (the true value) using the last (for example: 5, 10, 100, 300, 1000, etc.) data points of X, using a random forest model from sklearn in Python. Meaning, taking [0, 0, 1, 2, 3] of the X column as the input for the first window, I want to predict the 5th row's value of Y, trained on the previous values of Y.
Let's say we have 5 trace datasets (a1.csv, a2.csv, a3.csv, a4.csv and a5.csv) in the current directory. For a single trace (for example, a1.csv) I can do the prediction with a window of 5 as follows:
import pandas as pd
import numpy as np
from io import StringIO
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.metrics import accuracy_score
import math
from math import sqrt
df = pd.read_csv('a1.csv')
for i in range(1, 5):
    df['X_t' + str(i)] = df['X'].shift(i)
print(df)
df.dropna(inplace=True)
X=pd.DataFrame({ 'X_%d'%i : df['X'].shift(i) for i in range(5)}).apply(np.nan_to_num, axis=0).values
y = df['Y'].values
reg = RandomForestRegressor(criterion='mse')
reg.fit(X,y)
modelPred = reg.predict(X)
print(modelPred)
print("Number of predictions:",len(modelPred))
modelPred.tofile('predictedValues1.txt',sep="\n",format="%s")
meanSquaredError=mean_squared_error(y, modelPred)
print("Mean Square Error (MSE):", meanSquaredError)
rootMeanSquaredError = sqrt(meanSquaredError)
print("Root-Mean-Square Error (RMSE):", rootMeanSquaredError)
I have solved this problem with random forest, which yields df:
time X Y X_t1 X_t2 X_t3 X_t4
0 0.000543 0 10 NaN NaN NaN NaN
1 0.000575 0 10 0.0 NaN NaN NaN
2 0.041324 1 10 0.0 0.0 NaN NaN
3 0.041331 2 10 1.0 0.0 0.0 NaN
4 0.041336 3 10 2.0 1.0 0.0 0.0
5 0.041340 4 10 3.0 2.0 1.0 0.0
6 0.041345 5 10 4.0 3.0 2.0 1.0
7 0.041350 6 10 5.0 4.0 3.0 2.0
.........................................................
[2845 rows x 7 columns]
[ 10. 10. 10. ..., 20. 20. 20.]
RMSE: 0.5136564734333562
However, now I want to do the prediction over all of the files (a1.csv, a2.csv, a3.csv, a4.csv and a5.csv), dividing the datasets whose file names start with a into 60% for training and the remaining 40% for testing (meaning 3 traces will be used for training and 2 files for testing), using sklearn in Python.
PS: All the files have the same structure, but they have different lengths because they were generated with different parameters.
import glob, os
import pandas as pd
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in versions before 0.18

df = pd.concat(map(pd.read_csv, glob.glob(os.path.join('', "a*.csv"))))
# get your X and Y Df's
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.40)
To read in multiple files, you'll need a slight extension. Aggregate data from each csv, then call pd.concat to join them:
df_list = []
for i in range(1, 6):
    df_list.append(pd.read_csv('a%d.csv' % i))
df = pd.concat(df_list)
This will read in all your csvs, and you can carry on as usual. Get X and y:
X = pd.DataFrame({ 'X_%d'%i : df['X'].shift(i) for i in range(5)}).apply(np.nan_to_num, axis=0).values
y = df['Y'].values
Use train_test_split to segment your data (it moved from sklearn.cross_validation to sklearn.model_selection in scikit-learn 0.18):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
You can also look at StratifiedKFold.
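Note that train_test_split shuffles individual rows, so samples from all five traces end up in both sets. If you specifically want whole files in each split (3 traces for training, 2 for testing, as described), a sketch along those lines, assuming the a*.csv files from the question are in the current directory:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def make_xy(df, window=5):
    # lagged X columns, as in the single-file example above
    X = pd.DataFrame({'X_%d' % i: df['X'].shift(i)
                      for i in range(window)}).fillna(0).values
    return X, df['Y'].values

def load(files, window=5):
    # build the lagged features per file so no window spans two traces
    parts = [make_xy(pd.read_csv(f), window) for f in files]
    return (np.concatenate([p[0] for p in parts]),
            np.concatenate([p[1] for p in parts]))

train_files = ['a1.csv', 'a2.csv', 'a3.csv']   # 3 traces (60%) for training
test_files = ['a4.csv', 'a5.csv']              # 2 traces (40%) for testing

X_train, y_train = load(train_files)
X_test, y_test = load(test_files)

reg = RandomForestRegressor()
reg.fit(X_train, y_train)
print('Test R^2:', reg.score(X_test, y_test))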

How to do Onehotencoding in Sklearn Pipeline

I am trying to one-hot encode the categorical variables of my Pandas dataframe, which includes both categorical and continuous variables. I realise this can be done easily with the pandas .get_dummies() function, but I need to use a pipeline so I can generate a PMML file later on.
This is the code to create a mapper. The categorical variables I would like to encode are stored in a list called 'dummies'.
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
mapper = DataFrameMapper(
    [(d, LabelEncoder()) for d in dummies] +
    [(d, OneHotEncoder()) for d in dummies]
)
And this is the code to create a pipeline, including the mapper and linear regression.
from sklearn2pmml import PMMLPipeline
from sklearn.linear_model import LinearRegression
lm = PMMLPipeline([("mapper", mapper),
                   ("regressor", LinearRegression())])
When I now try to fit (with 'features' being a dataframe, and 'targets' a series), it gives an error 'could not convert string to float'.
lm.fit(features, targets)
OneHotEncoder doesn't support string features (in scikit-learn versions before 0.20), and with [(d, OneHotEncoder()) for d in dummies] you are applying it to all of the dummies columns. Use LabelBinarizer instead:
mapper = DataFrameMapper(
    [(d, LabelBinarizer()) for d in dummies]
)
An alternative would be to use the LabelEncoder with a second OneHotEncoder step.
mapper = DataFrameMapper(
    [(d, LabelEncoder()) for d in dummies]
)
lm = PMMLPipeline([("mapper", mapper),
                   ("onehot", OneHotEncoder()),
                   ("regressor", LinearRegression())])
LabelEncoder and LabelBinarizer are intended for encoding/binarizing the target (label) of your data, i.e. the y vector. Of course they do more or less the same thing as OneHotEncoder, the main difference being that the Label preprocessing steps don't accept matrices, only 1-D vectors.
import numpy as np
import pandas as pd

example = pd.DataFrame({'x': np.arange(2, 14, 2),
                        'cat1': ['A', 'B', 'A', 'B', 'C', 'A'],
                        'cat2': ['p', 'q', 'w', 'p', 'q', 'w']})
dummies = ['cat1', 'cat2']
x cat1 cat2
0 2 A p
1 4 B q
2 6 A w
3 8 B p
4 10 C q
5 12 A w
As an example, LabelEncoder().fit_transform(example['cat1']) works, but LabelEncoder().fit_transform(example[dummies]) throws a ValueError exception.
In contrast, OneHotEncoder accepts multiple columns:
from sklearn.preprocessing import OneHotEncoder
OneHotEncoder().fit_transform(example[dummies])
<6x6 sparse matrix of type '<class 'numpy.float64'>'
with 12 stored elements in Compressed Sparse Row format>
This can be incorporated into a pipeline using a ColumnTransformer, passing through (or alternatively applying different transformations to) the other columns:
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([('encode_cats', OneHotEncoder(), dummies)],
                       remainder='passthrough')
pd.DataFrame(ct.fit_transform(example), columns = ct.get_feature_names_out())
encode_cats__cat1_A encode_cats__cat1_B ... encode_cats__cat2_w remainder__x
0 1.0 0.0 ... 0.0 2.0
1 0.0 1.0 ... 0.0 4.0
2 1.0 0.0 ... 1.0 6.0
3 0.0 1.0 ... 0.0 8.0
4 0.0 0.0 ... 0.0 10.0
5 1.0 0.0 ... 1.0 12.0
Finally, slot this into a pipeline:
from sklearn.pipeline import Pipeline
Pipeline([('preprocessing', ct),
          ('regressor', LinearRegression())])
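As a quick end-to-end check, fitting this pipeline on the example dataframe from above (the y values here are made up purely for illustration):
# 'ct' and 'example' are the ColumnTransformer and dataframe defined above
pipe = Pipeline([('preprocessing', ct),
                 ('regressor', LinearRegression())])

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # hypothetical target, one value per row
pipe.fit(example, y)
print(pipe.predict(example))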
