I'm trying to predict the temperature at 12 UTC tomorrow at one location. To forecast, I use a basic linear regression model with the statsmodels module. My code is below:
import statsmodels.api as sm

x = ds_main
X = sm.add_constant(x)
y = ds_target_t
model = sm.OLS(y, X, missing='drop')
results = model.fit()
The summary shows that the fit is "good".
But the problem appears when I try to predict values with a new dataset that I use as my test set. The latter has the same number of columns and the same variable names, but the .predict() function returns an array of NaN, even though my test set contains values ...
xnew = ts_main
Xnew = sm.add_constant(xnew)
ynewpred = results.predict(Xnew)
I really don't understand where the problem is ...
UPDATE: I think I have an explanation: my Xnew dataframe contains NaN values. The statsmodels .fit() method allows dropping missing values (NaN), but the .predict() function does not, so it returns an array of NaN values ...
So that's the "why", but I still don't see "how" to fix it...
statsmodels.api.OLS by default will not accept data with NA values, so if you use it you need to drop your NA values first.
However, if you use statsmodels.formula.api.ols, it will automatically drop the NA values to run the regression and make predictions for you.
so you can try this:
import statsmodels.formula.api as smf
lm = smf.ols(formula="y ~ X", data=pd.concat([y, X], axis=1)).fit()
lm.predict(Xnew)
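If you'd rather stay with statsmodels.api.OLS, a minimal workaround (an untested sketch, assuming Xnew is a pandas DataFrame) is to drop the incomplete rows yourself and align the predictions back afterwards:

Xnew_clean = Xnew.dropna()               # keep only complete rows
ynewpred = results.predict(Xnew_clean)
# if predict returns a plain array, give it the clean index first:
# ynewpred = pd.Series(ynewpred, index=Xnew_clean.index)
ynewpred = ynewpred.reindex(Xnew.index)  # NaN for the dropped rows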
I'm facing an imbalanced regression problem and I've already tried several ways to solve it. Eventually I came across a new metric called SERA (Squared Error Relevance Area), a custom scoring function for imbalanced regression, as described in this paper: https://link.springer.com/article/10.1007/s10994-020-05900-9
In order to calculate SERA you have to compute the relevance function phi, which varies from 0 to 1 in small steps. For each value of relevance phi (e.g. 0.45), a subset of the training dataset is selected where the relevance is greater than or equal to that value. For that selected training subset, the sum of squared errors is calculated, i.e. sum((y_true - y_pred)**2), which is known as the squared error relevance (SER). Then a plot of SER vs phi is created and the area under the curve is calculated, i.e. SERA.
Here is my implementation, inspired by this other question here in StackOverflow:
import pandas as pd
from scipy.integrate import simps
from sklearn.metrics import make_scorer

def calc_sera(y_true, y_pred, x_relevance=None):
    # create a sorted list of phi values from 0 to 1 with 0.001 intervals
    start_range = 0
    end_range = 1
    interval_size = 0.001
    list_1 = [round(val * interval_size, 3) for val in range(1, 1000)]
    list_1.append(start_range)
    list_1.append(end_range)
    epsilon = sorted(list_1)
    df = pd.concat([y_true, y_pred, x_relevance], axis=1, keys=['true', 'pred', 'phi'])
    # lists to store relevance (phi) and squared-error relevance (SER)
    relevance = []
    ser = []
    for phi in epsilon:
        relevance.append(phi)
        # sum of squared errors over the subset with relevance >= phi
        error_squared_sum = sum((df[df.phi >= phi]['true'] - df[df.phi >= phi]['pred'])**2)
        ser.append(error_squared_sum)
    # squared-error relevance area (sera)
    # numerical integration using simps(y, x)
    sera = simps(ser, relevance)
    return sera

sera = make_scorer(calc_sera, x_relevance=X['relevance'], greater_is_better=False)
I implemented a simple GridSearch using this score as an evaluation metric to select the best model:
from catboost import CatBoostRegressor
from sklearn.model_selection import GridSearchCV, KFold

model = CatBoostRegressor(random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=42)
parameters = {'depth': [6, 8, 10],
              'learning_rate': [0.01, 0.05, 0.1],
              'iterations': [100, 200, 500, 1000]}
clf = GridSearchCV(estimator=model,
                   param_grid=parameters,
                   scoring=sera,
                   verbose=0, cv=cv)
clf.fit(X=X.drop(columns=['relevance']),
        y=y,
        sample_weight=X['relevance'])
print("Best parameters:", clf.best_params_)
print("Lowest SERA: ", clf.best_score_)
I also added the relevance function as weights to the model so it could apply these weights in the learning task. However, this is the output I get:
Best parameters: {'depth': 6, 'iterations': 100, 'learning_rate': 0.01}
Lowest SERA: nan
Any clue why the SERA value is returning nan? Should I implement this in another way?
Whenever you get unexpected NaN scores in a grid search, you should set the parameter error_score="raise" to get an error traceback, and debug from there.
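For example, with the names from your snippet:

clf = GridSearchCV(estimator=model, param_grid=parameters,
                   scoring=sera, cv=cv, error_score="raise")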
In this case I think I see the problem though: sera is defined with x_relevance=X['relevance'], which includes all the rows of X. But in the search, you're cross-validating: each testing set has fewer rows, and those are what sera will be called on. I can think of a couple of options; I haven't tested either, so let me know if something doesn't work.
Use pandas index
In your pd.concat, just set join="inner". If y_true is a slice of the original pandas series (I think this is how GridSearchCV will pass it...), then the concatenation will join on those row indices, so keeping the whole of X['relevance'] is fine: it will just drop the irrelevant rows. y_pred may well be a numpy array, so you might need to set its index appropriately first?
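Something like this inside calc_sera (an untested sketch):

# give y_pred the index of y_true, then let the inner join keep only
# the rows that belong to this CV split
y_pred = pd.Series(y_pred, index=y_true.index)
df = pd.concat([y_true, y_pred, x_relevance], axis=1,
               keys=['true', 'pred', 'phi'], join='inner')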
Keep relevance in X
Then your scorer can reference the relevance column directly from the sliced X. For this, you will want to drop that column from the fitting data, which could be difficult to do for the training but not the testing set; however, CatBoost has an ignored_features parameter that I think ought to work.
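A rough sketch of that second option (also untested; it relies on GridSearchCV accepting a callable with the (estimator, X, y) signature as scoring, and on ignored_features accepting column names when fitting on a DataFrame):

def sera_scorer(estimator, X, y):
    # the relevance column travels with X, so each CV slice brings its own
    phi = X['relevance']
    y_pred = pd.Series(estimator.predict(X), index=y.index)
    return -calc_sera(y, y_pred, x_relevance=phi)  # negate: lower SERA is better

model = CatBoostRegressor(random_state=0, ignored_features=['relevance'])
clf = GridSearchCV(estimator=model, param_grid=parameters,
                   scoring=sera_scorer, cv=cv)
clf.fit(X, y)  # keep the relevance column in X here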
I'm new to coding and am having a hard time regressing multiple columns on one column.
The dataframe consists of ~200 securities. I want to regress each security on a specific column (stock1 regressed on stock4, stock2 regressed on stock4, stock3 regressed on stock4, etc.)
Then, I want a new dataframe of the regression coefficients and the securities.
import numpy as np
from sklearn.linear_model import LinearRegression

regr = LinearRegression()
y_regression = np.array(df.y).reshape(-1, 1)
beta = lambda x: list(regr.fit(np.array(x).reshape(-1, 1), y_regression).coef_)
beta = df.apply(beta)
The code will correctly perform the calculations across all columns, but each result comes out as an np.ndarray and looks like this: [[1.25678]]. The only way I can get the code to work on multiple columns is if I create a list of the arrays. My new dataframe has this format:
Stock1 [[1.25678]]
Stock2 [[0.96782]]
etc.
How can I change the type so that it only gives me the inner number (1.25678, etc.)?
Does this work:
from sklearn.linear_model import LinearRegression
regr = LinearRegression()
y_regression = np.array(df.y).reshape(-1,1)
beta = lambda x: regr.fit(np.array(x).reshape(-1, 1), y_regression).coef_[0][0]
beta = df.apply(beta)
result = sm.OLS(gold_lookback, silver_lookback).fit()
After I get the result, how can I get the coefficient and the constant?
In other words, if
y = ax + c
how to get the values a and c?
You can use the params property of a fitted model to get the coefficients.
For example, the following code:
import statsmodels.api as sm
import numpy as np
np.random.seed(1)
X = sm.add_constant(np.arange(100))
y = np.dot(X, [1,2]) + np.random.normal(size=100)
result = sm.OLS(y, X).fit()
print(result.params)
will print a numpy array [ 0.89516052  2.00334187] - the estimates of the intercept and slope, respectively.
If you want more information, you can use result.summary(), which contains 3 detailed tables with the model description.
Cribbing from this answer (Converting statsmodels summary object to Pandas Dataframe), it seems that result.summary() is a set of tables which you can export as HTML and then use pandas to convert to a dataframe, letting you directly index the values you want.
So, for your case (putting the answer from the above link into one line):
df = pd.read_html(result.summary().tables[1].as_html(), header=0, index_col=0)[0]
And then
a=df['coef'].values[1]
c=df['coef'].values[0]
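Since the parsed table is indexed by the variable names (const and x1 when the exog columns are unnamed, as in this question), you can also pull the values by label instead of by position, which is a little less fragile:

a = df.loc['x1', 'coef']     # slope
c = df.loc['const', 'coef']  # intercept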
Adding details to @IdiotTom's answer.
You may use:
tables = pd.read_html(res.summary2().as_html())
head, body = tables[0], tables[1]
Not as nice, but the info is there.
If the input to the API is pandas objects (i.e. a pd.DataFrame for the data, or pd.Series for x and for y), then when you access .params it will be a pd.Series, so each coefficient is easily accessible by its name.
For example:
import pandas as pd
import statsmodels.api as sm
# sm.__version__ is '0.13.1'
df = pd.DataFrame({'x': [0, 1, 2, 3, 4],
                   'y': [0.1, 0.2, 0.5, 0.5, 0.8]})
sm.OLS.from_formula(formula='y~x-1', data=df).fit().params
Outputs the following pd.Series:
x 0.196667
dtype: float64
Allowing for an intercept term (by changing the formula from y~x-1 to y~x) changes the output to include the intercept under the name Intercept:
Intercept 0.08
x 0.17
dtype: float64
The coefficients are stored in result.params, which is a pandas Series that behaves like a dictionary. In it, the constant term is stored as Intercept, as others have pointed out, and the variable terms are stored under their variable names. So, if your model is y ~ x, the regression coefficients are available as result.params['Intercept'] (that's b) and result.params['x'] (that's a) for the equation y = a*x + b.
I use scikit-learn's linear regression, and if I change the order of the features, the coefficients are still printed in the same order; hence I would like to know the mapping of each feature to its coefficient.
# training the model
from sklearn import linear_model

model_1_features = ['sqft_living', 'bathrooms', 'bedrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']

model_1 = linear_model.LinearRegression()
model_1.fit(train_data[model_1_features], train_data['price'])
model_2 = linear_model.LinearRegression()
model_2.fit(train_data[model_2_features], train_data['price'])
model_3 = linear_model.LinearRegression()
model_3.fit(train_data[model_3_features], train_data['price'])

# extracting the coefficients
print(model_1.coef_)
print(model_2.coef_)
print(model_3.coef_)
The trick is that right after you have trained your model, you know the order of the coefficients:
model_1 = linear_model.LinearRegression()
model_1.fit(train_data[model_1_features], train_data['price'])
print(list(zip(model_1.coef_, model_1_features)))
This will print the coefficients and the correct feature. (Tested with pandas DataFrame)
If you want to reuse the coefficients later you can also put them in a dictionary:
coef_dict = {}
for coef, feat in zip(model_1.coef_, model_1_features):
    coef_dict[feat] = coef
(You can test it for yourself by training two models with the same features but, as you said, shuffled order of features.)
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# build a table of feature names alongside their coefficients
coef_table = pd.DataFrame(list(X_train.columns)).copy()
coef_table.insert(len(coef_table.columns), "Coefs", regressor.coef_.transpose())
@Robin posted a great answer, but for me I had to make one tweak to it to work the way I wanted: referring to the dimension of the coef_ np.array that I wanted, namely model_1.coef_[0,:], as below:
coef_dict = {}
for coef, feat in zip(model_1.coef_[0, :], model_1_features):
    coef_dict[feat] = coef
Then the dict was created as I pictured it, with {'feature_name' : coefficient_value} pairs.
Here is what I use for pretty printing of coefficients in Jupyter. I'm not sure I follow why order is an issue - as far as I know the order of the coefficients should match the order of the input data that you gave it.
Note that the first line assumes you have a Pandas data frame called df in which you originally stored the data prior to turning it into a numpy array for regression:
fieldList = np.array(list(df)).reshape(-1, 1)
coeffs = np.reshape(np.round(clf.coef_, 5), (-1, 1))
coeffs = np.concatenate((fieldList, coeffs), axis=1)
print(pd.DataFrame(coeffs, columns=['Field', 'Coeff']))
Borrowing from Robin, but simplifying the syntax:
coef_dict = dict(zip(model_1_features, model_1.coef_))
Important note about zip: zip assumes its inputs are of equal length, so it is especially important to confirm that the lengths of the features and coefficients match (which in more complicated models might not be the case). If one input is longer than the other, the values in its extra index positions are silently cut off. Notice the missing 7 in the following example:
In [1]: [i for i in zip([1, 2, 3], [4, 5, 6, 7])]
Out[1]: [(1, 4), (2, 5), (3, 6)]
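On Python 3.10+ you can also pass strict=True to make zip raise instead of silently truncating:

In [2]: [i for i in zip([1, 2, 3], [4, 5, 6, 7], strict=True)]
ValueError: zip() argument 2 is longer than argument 1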
pd.DataFrame(data=regression.coef_, index=X_train.columns)
All of these answers were great, but what personally worked for me was this, as the feature names I needed were the columns of my train_data dataframe:
pd.DataFrame(data=model_1.coef_,columns=train_data.columns)
Right after training the model, the coefficient values are stored in the variable model.coef_[0]. We can iterate over the column names and store each column name and its coefficient value in a dictionary.
model.fit(X_train, y)
# assuming all the columns except the last one are used in training
columns = data.iloc[:, :-1].columns
coef_dict = {}
for i in range(len(columns)):
    coef_dict[columns[i]] = model.coef_[0][i]
Hope this helps!
As of scikit-learn version 1.0, the LinearRegression estimator has a feature_names_in_ attribute. From the docs:
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
New in version 1.0.
Assuming you're fitting on a pandas.DataFrame (train_data), your estimators (model_1, model_2, and model_3) will have the attribute. You can line up your coefficients using any of the methods listed in previous answers, but I'm in favor of this one:
coef_series = pd.Series(
    data=model_1.coef_,
    index=model_1.feature_names_in_
)
A minimal reproducible example
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
# for repeatability
np.random.seed(0)
# random data
Xy = pd.DataFrame(
    data=np.random.random((10, 3)),
    columns=["x0", "x1", "y"]
)
# separate X and y
X = Xy.drop(columns="y")
y = Xy.y
# initialize estimator
lr = LinearRegression()
# fit to pandas.DataFrame
lr.fit(X, y)
# get coefficients and their respective feature names
coef_series = pd.Series(
    data=lr.coef_,
    index=lr.feature_names_in_
)
print(coef_series)
x0 0.230524
x1 -0.275611
dtype: float64
I am trying to fit a OneVsAll classification output on training data, where each row of the output adds up to 1.
One possible way is to read all the rows, find which column has the highest value, and prepare the data for training.
E.g.: y = [[0.2,0.8,0],[0,1,0],[0,0.3,0.7]] can be reduced to y = [b,b,c], considering a, b, c as the corresponding classes of columns 0, 1, 2 respectively.
Is there a function in scikit-learn which helps to achieve such transformations?
This code does what you want:
import numpy as np

y = np.array([[0.2, 0.8, 0], [0, 1, 0], [0, 0.3, 0.7]])

def transform(y, labels):
    # map each row to the label of its highest-scoring column
    f = np.vectorize(lambda i: labels[i])
    return f(y.argmax(axis=1))

y = transform(y, 'abc')
EDIT: Following the comment by alko, I made it more general by letting the user supply the labels to the transform function.
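As an aside, the same mapping can be written with plain NumPy fancy indexing, without np.vectorize (a sketch under the same assumptions about y and labels):

import numpy as np

y = np.array([[0.2, 0.8, 0], [0, 1, 0], [0, 0.3, 0.7]])
labels = np.array(list('abc'))
print(labels[y.argmax(axis=1)])  # ['b' 'b' 'c']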