How to reduce RMSE while performing linear regression in Python

I am not really a coder, but this is what I have so far. I'm trying to apply linear regression to predict something from sample data. I'm probably making some mistake here, since I'm getting an RMSE above 1. I've checked the correlation among the continuous variables, which turned out to be quite small, reaching a maximum of 0.2. I also checked for outliers using the interquartile range method, but there were none.
Can someone please tell me how I should reduce the RMSE?
import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
from math import sqrt
from sklearn.model_selection import train_test_split  # cross_validation was removed from newer scikit-learn

# Use a raw string so the backslashes in the Windows path are not treated as escapes
df_hosp = pd.read_csv(r'C:\Users\LAPPY-2\Desktop\LengthOfStay.csv')

# 70/30 split into train+validation and test, then 70/30 again into train and validation
df_temp, df_test = train_test_split(df_hosp, test_size=0.30, train_size=0.70)
df_train, df_val = train_test_split(df_temp, test_size=0.30, train_size=0.70)

X = df_train[['rcount', 'male', 'female', 'dialysisrenalendstage', 'asthma',
              'irondef', 'pneum', 'substancedependence',
              'psychologicaldisordermajor', 'depress', 'psychother',
              'fibrosisandother', 'malnutrition', 'hemo', 'hematocrit',
              'neutrophils', 'sodium', 'glucose', 'bloodureanitro',
              'creatinine', 'bmi', 'pulse', 'respiration',
              'secondarydiagnosisnonicd9']]
y = df_train['lengthofstay']

# note: the `normalize` argument was removed from LinearRegression in recent scikit-learn
model = linear_model.LinearRegression(fit_intercept=True, copy_X=True)
m = model.fit(X, y)

predictions_train = m.predict(X)
print('Score: %.2f' % m.score(X, y))
rms_train = sqrt(mean_squared_error(y, predictions_train))
print('Training set RMSE: %.2f' % rms_train)
Output:
Score: 0.75
Training set RMSE: 1.19

Since your y variable is the length of stay, there is no particular reason the RMSE should be below 1: RMSE is expressed in the same units as y. Recall the definition, RMSE = sqrt(mean((y_pred - y)²)), so if (y_pred - y) is on average bigger than 1, then your RMSE is going to be bigger than 1.
As for why this is happening, you appear to be fitting a model with a large number of variables, some of which are not actually correlated with your output variable. You should only fit a model on data which is actually correlated, because a correlation implies that the input data somehow affects the output data.
Try limiting the number of input variables you fit on, starting with the most highly correlated ones.
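A minimal sketch of that idea, reusing the X, y and df_train defined above; the cutoff of 10 features is arbitrary and only for illustration:
# Rank predictors by absolute correlation with the target
corr_with_target = (
    df_train[list(X.columns) + ['lengthofstay']]
    .corr()['lengthofstay']
    .drop('lengthofstay')
    .abs()
    .sort_values(ascending=False)
)
print(corr_with_target)

# Refit using only the most correlated predictors
top_features = corr_with_target.index[:10].tolist()
X_small = df_train[top_features]
m_small = linear_model.LinearRegression().fit(X_small, y)
print('Score with top 10 features: %.2f' % m_small.score(X_small, y))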

Related

Negative accuracy in linear regression

My linear regression model has a negative coefficient of determination R².
How can this happen? Any ideas would be helpful.
Here is my dataset:
year,population
1960,22151278.0
1961,22671191.0
1962,23221389.0
1963,23798430.0
1964,24397022.0
1965,25013626.0
1966,25641044.0
1967,26280132.0
1968,26944390.0
1969,27652709.0
1970,28415077.0
1971,29248643.0
1972,30140804.0
1973,31036662.0
1974,31861352.0
1975,32566854.0
1976,33128149.0
1977,33577242.0
1978,33993301.0
1979,34487799.0
1980,35141712.0
1981,35984528.0
1982,36995248.0
1983,38142674.0
1984,39374348.0
1985,40652141.0
1986,41965693.0
1987,43329231.0
1988,44757203.0
1989,46272299.0
1990,47887865.0
1991,49609969.0
1992,51423585.0
1993,53295566.0
1994,55180998.0
1995,57047908.0
1996,58883530.0
1997,60697443.0
1998,62507724.0
1999,64343013.0
2000,66224804.0
2001,68159423.0
2002,70142091.0
2003,72170584.0
2004,74239505.0
2005,76346311.0
2006,78489206.0
2007,80674348.0
2008,82916235.0
2009,85233913.0
2010,87639964.0
2011,90139927.0
2012,92726971.0
2013,95385785.0
2014,98094253.0
2015,100835458.0
2016,103603501.0
2017,106400024.0
2018,109224559.0
The code of the LinearRegression model is as follows:
import pandas as pd
from sklearn.linear_model import LinearRegression
data = pd.read_csv("data.csv", header=None)
data = data.drop(0, axis=0)
X = data[0]
Y = data[1]
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, shuffle=False)
lm = LinearRegression()
lm.fit(X_train.values.reshape(-1,1), Y_train.values.reshape(-1,1))
Y_pred = lm.predict(X_test.values.reshape(-1,1))
accuracy = lm.score(Y_test.values.reshape(-1,1),Y_pred)
print(accuracy)
Output:
-3592622948027972.5
Here is the formula of the R² score:
R² = 1 - Σ(y_i - ŷ_i)² / Σ(y_i - ȳ)²
where ŷ_i is the prediction for the i-th observation y_i and ȳ is the mean of all observations.
Therefore, a negative R² means that if someone knew the mean of your y_test sample and always used it as a "prediction", this "prediction" would be more accurate than your model.
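A tiny illustration of this on made-up numbers (not your data): predicting the mean of the target always gives R² = 0, and any prediction that is systematically worse than the mean gives a negative R².
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mean_pred = np.full_like(y_true, y_true.mean())  # always predict the mean
bad_pred = np.full_like(y_true, 100.0)           # a much worse constant prediction

print(r2_score(y_true, mean_pred))  # 0.0
print(r2_score(y_true, bad_pred))   # a large negative number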
Moving on to your dataset (thanks to @Prayson W. Daniel for the convenient loading script), let us have a quick look at the data.
df.population.plot()
It looks like a logarithmic transformation could help.
import numpy as np
df_log = df.copy()
df_log.population = np.log(df.population)
df_log.population.plot()
Now let us perform a linear regression using OpenTURNS.
import openturns as ot
sam = ot.Sample(np.array(df_log)) # convert DataFrame to openturns Sample
sam.setDescription(['year', 'logarithm of the population'])
linreg = ot.LinearModelAlgorithm(sam[:, 0], sam[:, 1])
linreg.run()
linreg_result = linreg.getResult()
coeffs = linreg_result.getCoefficients()
print("Best fitting line = {} + year * {}".format(coeffs[0], coeffs[1]))
print("R2 score = {}".format(linreg_result.getRSquared()))
ot.VisualTest_DrawLinearModel(sam[:, 0], sam[:, 1], linreg_result)
Output:
Best fitting line = -38.35148311467912 + year * 0.028172928802559845
R2 score = 0.9966261033648469
This is an almost exact fit.
EDIT
As suggested by @Prayson W. Daniel, here is the model fit transformed back to the original scale.
# Get the original data in openturns Sample format
orig_sam = ot.Sample(np.array(df))
orig_sam.setDescription(df.columns)
# Compute the prediction in the original scale
predicted = ot.Sample(orig_sam) # start by copying the original data
predicted[:, 1] = np.exp(linreg_result.getMetaModel()(predicted[:, 0])) # overwrite with the predicted values
error = np.array((predicted - orig_sam)[:, 1]) # compute error
r2 = 1.0 - (error**2).mean() / df.population.var() # compute the R2 score in the original scale
print("R2 score in original scale = {}".format(r2))
# Plot the model
graph = ot.Graph("Original scale", "year", "population", True, '')
curve = ot.Curve(predicted)
graph.add(curve)
points = ot.Cloud(orig_sam)
points.setColor('red')
graph.add(points)
graph
Output:
R2 score in original scale = 0.9979032805107133
Scikit-learn's LinearRegression score uses the R² score. A negative R² means that the model fits your data extremely badly. Since R² compares the fit of the model with that of the null hypothesis (a horizontal straight line), R² is negative when the model fits worse than a horizontal line.
R² = 1 - (SUM((y - ypred)**2) / SUM((y - AVG(y))**2))
So if SUM((y - ypred)**2) is greater than SUM((y - AVG(y))**2), then R² will be negative.
Reasons and ways to correct it
Problem 1: You are performing a random split of time-series data. A random split ignores the temporal dimension.
Solution: Preserve the time flow (see the code below).
Problem 2: The target values are very large.
Solution: Unless you use tree-based models, you will have to do some target feature engineering to scale the data into a range the model can learn.
Here is a code example. Using the default parameters of LinearRegression and a log/exp transformation of the target values, my attempt yields an R² score of ~87%:
import pandas as pd
import numpy as np
# we need to transform/feature engineer our target
# I will use log from numpy. The np.log and np.exp to make the value learnable
from sklearn.linear_model import LinearRegression
from sklearn.compose import TransformedTargetRegressor
# your data, df
# transform year to reference
df = df.assign(ref_year = lambda x: x.year - 1960)
df.population = df.population.astype(int)
split = int(df.shape[0] *.9) #split at 90%, 10%-ish
df = df[['ref_year', 'population']]
train_df = df.iloc[:split]
test_df = df.iloc[split:]
X_train = train_df[['ref_year']]
y_train = train_df.population
X_test = test_df[['ref_year']]
y_test = test_df.population
# regressor
regressor = LinearRegression()
lr = TransformedTargetRegressor(
    regressor=regressor,
    func=np.log, inverse_func=np.exp)
lr.fit(X_train, y_train)
print(lr.score(X_test, y_test))
For those interested in making it better, here is a way to read that dataset:
import pandas as pd
import io
df = pd.read_csv(io.StringIO('''year,population
1960,22151278.0
1961,22671191.0
1962,23221389.0
1963,23798430.0
1964,24397022.0
1965,25013626.0
1966,25641044.0
1967,26280132.0
1968,26944390.0
1969,27652709.0
1970,28415077.0
1971,29248643.0
1972,30140804.0
1973,31036662.0
1974,31861352.0
1975,32566854.0
1976,33128149.0
1977,33577242.0
1978,33993301.0
1979,34487799.0
1980,35141712.0
1981,35984528.0
1982,36995248.0
1983,38142674.0
1984,39374348.0
1985,40652141.0
1986,41965693.0
1987,43329231.0
1988,44757203.0
1989,46272299.0
1990,47887865.0
1991,49609969.0
1992,51423585.0
1993,53295566.0
1994,55180998.0
1995,57047908.0
1996,58883530.0
1997,60697443.0
1998,62507724.0
1999,64343013.0
2000,66224804.0
2001,68159423.0
2002,70142091.0
2003,72170584.0
2004,74239505.0
2005,76346311.0
2006,78489206.0
2007,80674348.0
2008,82916235.0
2009,85233913.0
2010,87639964.0
2011,90139927.0
2012,92726971.0
2013,95385785.0
2014,98094253.0
2015,100835458.0
2016,103603501.0
2017,106400024.0
2018,109224559.0
'''))
Results: an R² score of about 0.87 on the held-out 10%.

How to calculate the RMSE on Ridge regression model

I have performed a ridge regression model on a data set
(link to the dataset: https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data)
as below:
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
y = train['SalePrice']
X = train.drop("SalePrice", axis = 1)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.30)
ridge = Ridge(alpha=0.1, normalize=True)
ridge.fit(X_train,y_train)
pred = ridge.predict(X_test)
I calculated the MSE using the metrics library from sklearn as follows:
import numpy as np
from sklearn.metrics import mean_squared_error
mean = mean_squared_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred)
I am getting a very large value of MSE = 554084039.54321 and RMSE = 21821.8, and I am trying to understand whether my implementation is correct.
RMSE implementation
Your RMSE implementation is correct, which is easily verifiable when you take the square root of sklearn's mean_squared_error.
I think you are missing a closing parenthesis though, here to be exact:
rmse = np.sqrt(mean_squared_error(y_test, pred)) # the last one was missing
High error problem
Your MSE is high because the model is not able to capture the relationships between your variables and the target very well. Bear in mind that each error is squared, so being 1000 off in price sky-rockets the contribution to 1,000,000.
You may want to transform the price with the natural logarithm (numpy.log) and work on the log scale; it is a common practice, especially for this problem (I assume you are doing House Prices: Advanced Regression Techniques), see the available kernels for guidance. With this approach you will not get such big values.
Last but not least, check the Mean Absolute Error to see that your predictions are not as terrible as they seem.
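For what it's worth, here is a rough sketch combining both suggestions (log-transform the target, then report RMSE and MAE back on the original price scale). It assumes the Kaggle data is already loaded into train with categorical columns encoded and missing values handled; the alpha and random_state values are arbitrary:
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

y = np.log(train['SalePrice'])       # model the log of the price
X = train.drop('SalePrice', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

ridge = Ridge(alpha=0.1)
ridge.fit(X_train, y_train)

pred = np.exp(ridge.predict(X_test))  # back to dollars before measuring the error
actual = np.exp(y_test)
print('RMSE: %.1f' % np.sqrt(mean_squared_error(actual, pred)))
print('MAE : %.1f' % mean_absolute_error(actual, pred))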

Interpreting sklearns' GridSearchCV best score

I would like to know the difference between the score returned by GridSearchCV and the R² metric calculated below. In other cases I get a highly negative grid search score (the same applies to cross_val_score) and I would be grateful for an explanation of what it means.
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
X = pd.DataFrame(X)
parameters = {'splitter': ('best', 'random'),
              'max_depth': np.arange(1, 10),
              'min_samples_split': np.arange(2, 10),
              'min_samples_leaf': np.arange(1, 5)}
regressor = GridSearchCV(DecisionTreeRegressor(), parameters, scoring = 'r2', cv = 5)
regressor.fit(X, y)
print('Best score: ', regressor.best_score_)
best = regressor.best_estimator_
print('R2: ', r2_score(y_pred = best.predict(X), y_true = y))
The regressor.best_score_ is the average of r2 scores on left-out test folds for the best parameter combination.
In your example, cv=5, so the data will be split into train and test folds 5 times. The model is fitted on the train folds and scored on the test fold. These 5 test scores are averaged to get the final score. Please see the documentation:
"best_score_: Mean cross-validated score of the best_estimator"
The above process is repeated for all parameter combinations, and the best average score among them is assigned to best_score_.
You can look at my other answer for a complete walkthrough of GridSearchCV.
After finding the best parameters, the model is refit on the full data.
r2_score(y_pred = best.predict(X), y_true = y)
is computed on the same data the model was trained on, so in most cases it will be higher.
The question linked by @Davide in the comments has answers explaining why you get a positive R² score: your model performs better than a constant prediction. At the same time you can get negative values in other situations, if the models there perform badly.
The reason for the difference in values is that regressor.best_score_ is evaluated on the left-out folds of the 5-fold split (and averaged), whereas r2_score(y_pred = best.predict(X), y_true = y) evaluates the same model (regressor.best_estimator_) on the full sample, including the data that was used to train that estimator.
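To see the gap concretely, here is a quick sketch reusing the objects defined above: re-scoring the best estimator with a fresh cross-validation gives a number comparable to best_score_, while the R² on the training data is optimistic.
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(regressor.best_estimator_, X, y, scoring='r2', cv=5)
print('best_score_ (mean CV R2):', regressor.best_score_)
print('cross_val_score mean    :', cv_scores.mean())            # comparable magnitude
print('R2 on the training data :', r2_score(y_true=y, y_pred=best.predict(X)))  # optimistic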

Interpreting the DecisionTreeRegressor score?

I am trying to evaluate the relevance of features and I am using DecisionTreeRegressor().
The related part of the code is presented below:
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Frozen'], axis = 1)
# TODO: Split the data into training and testing sets(0.25) using the given feature as the target
# TODO: Set a random state.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Frozen'], test_size = 0.25, random_state = 1)
# TODO: Create a decision tree regressor and fit it to the training set
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state=1)
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
from sklearn.model_selection import cross_val_score
#score = cross_val_score(regressor, X_test, y_test)
score = regressor.score(X_test, y_test)
print(score)
When I run the print function, it returns the given score:
-0.649574327334
You can find the score function implementation and some explanation here:
Returns the coefficient of determination R^2 of the prediction.
...
The best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse).
I have not grasped the whole concept yet, so this explanation is not very helpful to me. For instance, I could not understand why the score could be negative and what exactly it indicates (if something is squared, I would expect it to only be positive).
What does this score indicate, and why can it be negative?
If you know any article (for starters) it might be helpful as well!
R^2 can be negative from its definition (https://en.wikipedia.org/wiki/Coefficient_of_determination) if the model fits the data worse than a horizontal line. Basically
R^2 = 1 - SS_res/SS_tot
and SS_res and SS_tot are always positive. If SS_res >> SS_tot, you have a negative R^2. Look at this answer as well: https://stats.stackexchange.com/questions/12900/when-is-r-squared-negative
Your code scores a DecisionTreeRegressor, either via regressor.score or via the commented-out cross_val_score. You may take a look at the documentation of scikit-learn's DecisionTreeRegressor.
Basically, the score you see is R^2, or 1 - u/v, where u is the residual sum of squares of your predictions and v is the total sum of squares (the sum of squared deviations from the sample mean).
u/v can be arbitrarily large when you make really bad predictions, while it can only be as small as zero, since u and v are both sums of squares (>= 0).
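If it helps, you can reproduce the score "by hand" from u and v. This is just a sketch reusing the fitted regressor, X_test and y_test from the question:
import numpy as np

y_pred = regressor.predict(X_test)
u = np.sum((y_test - y_pred) ** 2)           # residual sum of squares
v = np.sum((y_test - np.mean(y_test)) ** 2)  # total sum of squares
print(1 - u / v)                        # same value as...
print(regressor.score(X_test, y_test))  # ...the built-in score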

Scikit-learn is returning coefficient of determination (R^2) values less than -1

I'm doing a simple linear model. I have
fire = load_data()
regr = linear_model.LinearRegression()
scores = cross_validation.cross_val_score(regr, fire.data, fire.target, cv=10, scoring='r2')
print scores
which yields
[ 0.00000000e+00 0.00000000e+00 -8.27299054e+02 -5.80431382e+00
-1.04444147e-01 -1.19367785e+00 -1.24843536e+00 -3.39950443e-01
1.95018287e-02 -9.73940970e-02]
How is this possible? When I do the same thing with the built in diabetes data, it works perfectly fine, but for my data, it returns these seemingly absurd results. Have I done something wrong?
There is no reason r^2 shouldn't be negative (despite the ^2 in its name). This is also stated in the docs. You can see r^2 as a comparison of your model fit (in the context of linear regression, e.g. a model of order 1, i.e. affine) to a model of order 0 (just fitting a constant), both obtained by minimizing a squared loss. The constant that minimizes the squared error is the mean. Since you are doing cross-validation with left-out data, it can happen that the mean of your test set is wildly different from the mean of your training set. This alone can induce a much higher squared error in your prediction than just predicting the mean of the test data, which results in a negative r^2 score.
In the worst case, if your data do not explain your target at all, these scores can become very strongly negative. Try
import numpy as np
rng = np.random.RandomState(42)
X = rng.randn(100, 80)
y = rng.randn(100) # y has nothing to do with X whatsoever
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score  # cross_validation in older scikit-learn
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')
This should result in negative r^2 values.
In [23]: scores
Out[23]:
array([-240.17927358, -5.51819556, -14.06815196, -67.87003867,
-64.14367035])
The important question now is whether this is due to the fact that linear models just do not find anything in your data, or to something else that may be fixed in the preprocessing of your data. Have you tried scaling your columns to have mean 0 and variance 1? You can do this using sklearn.preprocessing.StandardScaler. As a matter of fact, you should create a new estimator by concatenating a StandardScaler and the LinearRegression into a pipeline using sklearn.pipeline.Pipeline.
Next you may want to try Ridge regression.
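A minimal sketch of that suggestion (scaling followed by a regularized linear model), reusing fire.data and fire.target from the question; the alpha value is arbitrary:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ('scale', StandardScaler()),  # centre each column and scale to unit variance
    ('model', Ridge(alpha=1.0)),  # regularized linear regression
])
scores = cross_val_score(pipe, fire.data, fire.target, cv=10, scoring='r2')
print(scores)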
Just because R^2 can be negative does not mean it should be.
Possibility 1: a bug in your code.
A common bug that you should double check is that you are passing in parameters correctly:
r2_score(y_true, y_pred) # Correct!
r2_score(y_pred, y_true) # Incorrect!!!!
Possibility 2: small datasets
If you are getting a negative R^2, you could also check for overfitting. Keep in mind that cross_validation.cross_val_score() does not randomly shuffle your inputs, so if your samples are inadvertently sorted (by date, for example) then you might build models on each fold that are not predictive for the other folds.
Try reducing the number of features, increasing the number of samples, and decreasing the number of folds (if you are using cross_validation). While there is no official rule here, your m x n dataset (where m is the number of samples and n is the number of features) should be of a shape where
m > n^2
and when you using cross validation with f as the number of folds, you should aim for
m/f > n^2
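If you suspect the fold ordering is the problem (for example, rows sorted by date), one quick check is to pass a shuffled KFold to cross_val_score. A sketch, shown on the synthetic X, y from the earlier snippet only so that it runs as-is; substitute your own data:
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

cv = KFold(n_splits=5, shuffle=True, random_state=0)  # shuffle the rows before splitting
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring='r2')
print(scores)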
R² = 1 - RSS / TSS, where RSS is the residual sum of squares ∑(y - f(x))² and TSS is the total sum of squares ∑(y - mean(y))². Now for R² ≥ -1, it is required that RSS/TSS ≤ 2, but it's easy to construct a model and dataset for which this is not true:
>>> import numpy as np
>>> x = np.arange(50, dtype=float)
>>> y = x
>>> def f(x): return -100
...
>>> rss = np.sum((y - f(x)) ** 2)
>>> tss = np.sum((y - y.mean()) ** 2)
>>> 1 - rss / tss
-74.430972388955581
If you are getting negative regression r^2 scores, make sure to remove any unique identifier (e.g. "id" or "rownum") from your dataset before fitting/scoring the model. It is a simple check, but it will save you some headache.
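A one-line version of that check, with hypothetical column names:
X = X.drop(columns=['Id', 'rownum'], errors='ignore')  # drop identifier columns if they exist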
