I don't have much knowledge of Python, but I have to complete this for an assessment.
Question:
Run the following code to load the required libraries and create the data set to fit the model.
from sklearn.datasets import load_boston
import pandas as pd
boston = load_boston()
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
print(dataset.head())
I have to perform the following steps to complete this scenario.
For the boston dataset loaded in the above code snippet, perform linear regression.
Use the target variable as the dependent variable.
Use the RM variable as the independent variable.
Fit a simple linear regression model using the statsmodels package in Python.
Import statsmodels packages appropriately in your code.
Upon fitting the model, identify the coefficients.
Finally, print the model summary in your code.
You can write your code using vim app.py.
Press i for insert mode.
Press Esc and then :wq to save and quit the editor.
Please help me understand how to get this completed. Your valuable comments are much appreciated.
Thanks in advance.
# Note: load_boston was removed in scikit-learn 1.2, so this requires an older version
from sklearn.datasets import load_boston
import pandas as pd
import statsmodels.api as sm

boston = load_boston()
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
print(dataset.head())

# RM is the independent variable; target is the dependent variable
X = dataset['RM']
y = dataset['target']
X = sm.add_constant(X)  # add an intercept term

# OLS lives in statsmodels.api; in current statsmodels the formula interface
# only exposes the lowercase smf.ols, so smf.OLS fails with AttributeError
model = sm.OLS(y, X).fit()
predictions = model.predict(X)  # fitted values (not required by the task)
print(model.params)             # the coefficients: const (intercept) and RM (slope)
print(model.summary())
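For the "identify the coefficients" step, the fitted coefficients are available individually on the results object; with add_constant the intercept column is named const:

print(model.params['const'])  # intercept
print(model.params['RM'])     # slope on RM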
I'm new to Python and machine learning, so my question may be trivial.
I typed the code below in a Jupyter notebook:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression  # needed for LinearRegression below
from sklearn.preprocessing import PolynomialFeatures

# X and y are assumed to have been defined in an earlier cell
poly_reg = PolynomialFeatures(degree=2)
X_poly = poly_reg.fit_transform(X)
X_poly[:5]

lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)

plt.scatter(X, y)
plt.plot(X, lin_reg.predict(poly_reg.fit_transform(X)))
plt.show()
Then I deleted the code below:
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
But the graph and regression line were still generated normally. So is that code not essential?
ChatGPT said that "without the training and fitting of the linear regression model, the predicted line would not be accurate and would not reflect the relationship between the input and target data."
But to me, the resulting graph and regression line seem accurate ... even
lin_reg.predict(poly_reg.fit_transform(X[[2]]))
still works. So are these two lines meaningless?
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
Or does something go wrong when I delete them?
P.S. Please let me know if the way I asked this question could be improved.
Until you restart the runtime environment, your fitted model is still in memory. You are addressing the model that was fit before you deleted the lines, so there will be no difference in the output. Once you restart the runtime environment, you will get a NameError: name 'lin_reg' is not defined.
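A minimal sketch of the effect (with made-up data, just to illustrate kernel state; not the original dataset):

# Cell 1: define and fit a model
import numpy as np
from sklearn.linear_model import LinearRegression

X_poly = np.array([[1.0, 1.0], [2.0, 4.0], [3.0, 9.0]])
y = np.array([1.0, 2.0, 3.0])
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)

# Cell 2: even after deleting Cell 1's code, lin_reg is still in memory
print(lin_reg.predict(X_poly))  # still works

# After restarting the kernel and running only Cell 2:
# NameError: name 'lin_reg' is not defined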
I'm going through a tutorial on mixed-effects models in Python.
I'm building a model where litter is the random effect. In the tutorial, the output contains the variance across the litter intercepts. However, in Bayesian hierarchical modeling, I'm also able to see the intercepts for every level of the random effect variable.
How would I see that here?
import pandas as pd
import statsmodels.api as sm
import scipy.stats as stats
import statsmodels.formula.api as smf

df = pd.read_csv("http://www-personal.umich.edu/~bwest/rat_pup.dat", sep="\t")

model = smf.mixedlm(
    "weight ~ litsize + C(treatment) + C(sex, Treatment('Male')) + C(treatment):C(sex, Treatment('Male'))",
    df,
    groups="litter",
).fit()
model.summary()
I would also ideally like to see the estimate of the intercept across all litters. How would I then interpret that overall intercept compared to the intercept for each individual litter?
If there's a better Python package for what I'm after, please suggest one.
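For reference, statsmodels does expose per-group estimates on the fitted results object. A minimal sketch, assuming the model above has already been fit (fe_params and random_effects are MixedLMResults attributes; for a random-intercept-only model the per-litter deviation is labeled 'Group'):

# Overall (fixed-effects) intercept: the mean intercept across litters
overall_intercept = model.fe_params["Intercept"]

# random_effects maps each litter to its estimated deviation (a BLUP);
# a litter's own intercept is the overall intercept plus its deviation
for litter, effects in model.random_effects.items():
    print(litter, overall_intercept + effects["Group"])

On interpretation: the overall intercept is the expected weight at the reference levels averaged across litters, while each litter-specific intercept shifts that value by the litter's estimated deviation.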
I'm trying to log the plot of a confusion matrix generated with scikit-learn for a test set using mlflow's support for scikit-learn.
For this, I tried something resembling the code below (I'm using mlflow hosted on Databricks, and sklearn==1.0.1):
import sklearn.datasets
import pandas as pd
import numpy as np
import mlflow
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
mlflow.set_tracking_uri("databricks")
mlflow.set_experiment("/Users/name.surname/plotcm")
data = sklearn.datasets.fetch_20newsgroups(categories=['alt.atheism', 'sci.space'])
df = pd.DataFrame(data=np.c_[data['data'], data['target']]) \
    .rename({0: 'text', 1: 'class'}, axis='columns')
train, test = train_test_split(df)
my_pipeline = Pipeline([
    ('vectorizer', TfidfVectorizer()),
    ('classifier', SGDClassifier(loss='modified_huber')),
])
mlflow.sklearn.autolog()
from sklearn.metrics import ConfusionMatrixDisplay # should I import this after the call to `.autolog()`?
my_pipeline.fit(train['text'].values, train['class'].values)
cm = ConfusionMatrixDisplay.from_predictions(
    y_true=test["class"], y_pred=my_pipeline.predict(test["text"])
)
While the confusion matrix for the training set is saved in my mlflow run, no PNG file is created in the mlflow frontend for the test set.
If I try to add
cm.figure_.savefig('test_confusion_matrix.png')
mlflow.log_artifact('test_confusion_matrix.png')
that does the job, but requires explicitly logging the artifact.
Is there an idiomatic/proper way to autolog the confusion matrix computed using a test set after my_pipeline.fit()?
The proper way to do this is to use mlflow.log_figure, a fluent API introduced in MLflow 1.13.0. You can read the documentation here. This code will do the job.
mlflow.log_figure(cm.figure_, 'test_confusion_matrix.png')
This function implicitly stores the image and then calls log_artifact against that path, much like what you did manually.
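A minimal sketch of how it slots into the code from the question (same pipeline and cm as above; wrapping everything in an explicit run is my assumption):

with mlflow.start_run():
    my_pipeline.fit(train['text'].values, train['class'].values)
    cm = ConfusionMatrixDisplay.from_predictions(
        y_true=test["class"], y_pred=my_pipeline.predict(test["text"])
    )
    # log_figure saves the figure and logs it as a run artifact in one call
    mlflow.log_figure(cm.figure_, 'test_confusion_matrix.png')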
I'm trying to run ML trials in parallel using HyperOpt with SparkTrials on Databricks.
My objective function converts the outputs to a Spark dataframe using spark.createDataFrame(results) (to reuse some preprocessing code I've previously created; I'd prefer not to have to rewrite this).
However, this causes an error when using HyperOpt with SparkTrials, because the SparkContext used to create the dataframe "should only be created or accessed on the driver". Is there any way I can create a Spark dataframe in my objective function here?
For a reproducible example:
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from hyperopt import fmin, tpe, hp, SparkTrials, STATUS_OK, Trials
from pyspark.sql import SparkSession
# If you are running Databricks Runtime for Machine Learning, `mlflow` is already installed and you can skip the following line.
import mlflow
# Load the iris dataset from scikit-learn
iris = load_iris()
X = iris.data
y = iris.target
def objective(C):
    # Create a support vector classifier model
    clf = SVC(C)

    # THESE TWO LINES CAUSE THE PROBLEM
    ss = SparkSession.builder.getOrCreate()
    sdf = ss.createDataFrame([('Alice', 1)])

    # Use the cross-validation accuracy to compare the models' performance
    accuracy = cross_val_score(clf, X, y).mean()

    # Hyperopt tries to minimize the objective function. A higher accuracy
    # value means a better model, so you must return the negative accuracy.
    return {'loss': -accuracy, 'status': STATUS_OK}
search_space = hp.lognormal('C', 0, 1.0)
algo=tpe.suggest
# THIS WORKS (It's not using SparkTrials)
argmin = fmin(
    fn=objective,
    space=search_space,
    algo=algo,
    max_evals=16)
from hyperopt import SparkTrials
spark_trials = SparkTrials()
# THIS FAILS
argmin = fmin(
    fn=objective,
    space=search_space,
    algo=algo,
    max_evals=16,
    trials=spark_trials)
I have tried looking at "How can I get the current SparkSession in any place of the codes?", but it is solving a different problem; I can't see an obvious way to apply it to my situation.
I think the short answer is that it's not possible. The Spark context can only exist on the driver node; creating a new instance on a worker would be a kind of nesting. See this related question:
Nesting parallelizations in Spark? What's the right approach?
I solved my problem in the end by rewriting the transformations in pandas, which then worked.
If the transformations are too big for a single node, you'd probably have to pre-compute them and let hyperopt choose which version to use as part of the optimisation.
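As a sketch of that workaround (the dataframe contents and column names below are placeholders echoing the example in the question), the objective function can drop the SparkSession entirely:

import pandas as pd

def objective(C):
    clf = SVC(C)

    # plain pandas runs fine on the worker nodes; no SparkSession needed
    pdf = pd.DataFrame([('Alice', 1)], columns=['name', 'value'])

    accuracy = cross_val_score(clf, X, y).mean()
    return {'loss': -accuracy, 'status': STATUS_OK}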
I'm trying to write an integration test that uses the descriptive statistics (.describe().to_list()) of the results of a model prediction (model.predict(X)). However, even though I've set np.random.seed(###), the descriptive statistics differ between running the tests in the console and running them in the environment created by PyCharm.
Here's an MRE for the local run:
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression
import numpy as np
import pandas as pd
np.random.seed(42)
X, y = make_regression(n_features=2, random_state=42)
regr = ElasticNet(random_state=42)
regr.fit(X, y)
pred = regr.predict(X)
# Theory: this result should be the same as the result inside the test class
pd.Series(pred).describe().to_list()
And an example test-file:
from unittest import TestCase
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression
import numpy as np
import pandas as pd
np.random.seed(42)
class TestPD(TestCase):
    def testExpectedPrediction(self):
        np.random.seed(42)
        X, y = make_regression(n_features=2, random_state=42)
        regr = ElasticNet(random_state=42)
        regr.fit(X, y)
        pred = pd.Series(regr.predict(X))
        for i in pred.describe().to_list():
            print(i)
        # here we would have a self.assertTrue/assertEqual for each element
What appears to happen is that when I run this test in the Python console, I get one result, but when I run it using PyCharm's unit tests for the folder, I get another. Importantly, the PyCharm project interpreter is used to create the console environment, which ought to be the same as the test environment. This leads me to believe that I'm missing something about the way random_state is passed along. My expectation, given that I have set a seed, is that the results would be reproducible. That doesn't appear to be the case, and I would like to understand:
Why aren't they equal?
What can I do to make them equal?
I haven't been able to find many best practices for testing against expected model results, so commentary in that regard would also be helpful.
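One common pattern for such tests (the helper name and tolerance below are assumptions of mine, not from the question) is to compare statistics with a tolerance rather than exact equality, so tiny floating-point differences between environments don't fail the test:

import numpy as np

def assert_stats_close(actual, expected, rtol=1e-6):
    # compares two sequences of descriptive statistics with a relative
    # tolerance; raises AssertionError if any element differs beyond rtol
    np.testing.assert_allclose(actual, expected, rtol=rtol)

# Hypothetical usage inside the test method:
# assert_stats_close(pred.describe().to_list(), expected_stats)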