If I have a model like the one below, how do I access the underlying theano function in order to get the value(s) for the model I'm fitting?
This is quite a basic model, so I could just evaluate the raw function for my variables myself. However, I intend to generate pymc3 models dynamically, where some variables are reused/fixed/bounded, etc.
I know I can access a theano function via model.makefn([expected]), but it relies on transformed arguments such as sigma_log_ instead of sigma.
Ideally, I'm looking for something like model.evaluate([expected], alpha=1, beta=2).
Is there such a method?
Thanks
from pymc3 import Model, Normal, HalfNormal, Deterministic

def function(a, b):
    # do something with the parameters
    return a + b.sum()  # placeholder computation

basic_model = Model()

with basic_model:
    # Priors for unknown model parameters
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)

    # Expected value of outcome
    expected = Deterministic('expected', function(alpha, beta))

    # Likelihood (sampling distribution) of observations
    Y_obs = Normal('Y_obs', mu=expected, sd=sigma, observed=Y)
The typical approach here would be to first sample from the model's posterior distribution with something like
with basic_model:
    trace = pm.sample(N_SAMPLES)
then use the samples to approximate the posterior expected value of your function.
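Because expected is declared as a Deterministic, its draws are stored in the trace, so a Monte Carlo estimate of its posterior expected value is just an average over samples. A minimal sketch (assuming numpy is imported as np):

expected_samples = trace['expected']                # one value per posterior draw
posterior_mean = np.mean(expected_samples, axis=0)  # Monte Carlo estimate of E[expected]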
I am training and tuning a model in pycaret such as:
from pycaret.classification import *
clf1 = setup(data = train, target = 'target', feature_selection = True, test_data = test, remove_multicollinearity = True, multicollinearity_threshold = 0.4)
# create model
lr = create_model('lr')
# tune model
tuned_lr = tune_model(lr)
# optimize threshold
optimized_lr = optimize_threshold(tuned_lr)
I would like to get the parameters estimated for the features in the Logistic Regression, so that I can understand the effect size of each feature on the target. The object optimized_lr has a method optimized_lr.get_params(), but that returns the hyperparameters of the model. I am not interested in my tuning decisions; I am interested in the actual fitted parameters, the coefficients estimated by the Logistic Regression.
How can I get them in pycaret? (I could easily get them with other packages such as statsmodels, but I want to know how to do it in pycaret.)
How about:
for f, c in zip(optimized_lr.feature_names_in_, optimized_lr.coef_[0]):
    print(f, c)
To get the coefficients, use this code:
tuned_lr.coef_[0]  # the fitted coefficients (LogisticRegression exposes coef_, not feature_importances_)
get_config('X_train').columns  # the names of the feature columns
Now we can create a DataFrame so that we can clearly see each feature next to its coefficient:
Coeff = pd.DataFrame({"Feature": get_config('X_train').columns.tolist(),
                      "Coefficients": tuned_lr.coef_[0]})
print(Coeff)
This gives the coefficients alongside the names of the respective columns. Hope it helps.
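To gauge effect sizes, you can also rank the features by absolute coefficient. A small sketch building on the Coeff DataFrame above (keep in mind coefficients are only directly comparable if the features are on similar scales):

Coeff["AbsCoef"] = Coeff["Coefficients"].abs()
print(Coeff.sort_values("AbsCoef", ascending=False))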
Is there a way to measure the accuracy of an ARMA-GARCH model in Python using a prediction interval (alpha=0.05)? I fitted an ARMA-GARCH model on log returns and used some classical metrics such as RMSE and MSE (out-of-sample), AIC (in-sample), residual checks, and so on. I would like to add a prediction interval as another measure of accuracy based on my ARMA-GARCH model predictions. I used the armagarch library (https://github.com/iankhr/armagarch).
I have already read up on prediction intervals in general, but I am not sure how to apply them to ARMA-GARCH.
I found this formula online: estimator ± 1.96 × standard error (for a 95% interval).
So far so good, but my model output has a separate standard error for each parameter in the ARMA and GARCH parts. Which one do I have to use? Is there one standard error for the whole model itself?
I would be really happy if anyone could help.
[Image: ARMA-GARCH model output]
So far I created an ARMA(2,2)-GARCH(1,1) model:
#final test of function
import numpy as np
import pandas as pd
import armagarch as ag

#definitions framework
data = pd.DataFrame(data)
meanMdl = ag.ARMA(order={'AR': 2, 'MA': 2})
volMdl = ag.garch(order={'p': 1, 'q': 1})
distMdl = ag.normalDist()
model = ag.empModel(data, meanMdl, volMdl, distMdl)
model_fit = model.fit()
After fitting the model, I defined the prediction length and received two arrays (mean and variance) as output, which I reshaped to the correct length:
#first array is mean, second is variance
pred = model.predict(nsteps=len(df_test))

#correct the shapes
df_pred_mean = pd.DataFrame(np.reshape(pred[0], (len(df_test), 1)))
df_pred_variance = pd.DataFrame(np.reshape(pred[1], (len(df_test), 1)))
So far so good. Now I would like to implement a prediction interval.
I gathered that one has to use the ARMA prediction ± 1.96 (for 95%) × the GARCH prediction at each step. I implemented this for the upper and lower bound; only the upper bound is shown below, the lower bound is the same formula with -1.96 at the end.
#upper bound
df_all["upper bound"] = df_all["pred_Mean"] + df_all["pred_Variance"]*1.96
Comparing this against the actual log returns I trained the model on fails completely. Now I'm unsure whether my general approach is wrong, or whether the problem lies with the model I used, i.e. the package.
[Image: prediction interval vs. actual log return]
How can I add a constant to my statsmodels regression?
As of now, the model looks like this:
model = sm.OLS(y, x).fit()
From the documentation for OLS:
exog: A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.
X = sm.add_constant(x)
model = sm.OLS(y, X).fit()
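A minimal end-to-end sketch with made-up data, just to illustrate the pattern (x and y would be your own arrays):

import numpy as np
import statsmodels.api as sm

x = np.random.rand(100)
y = 2 + 3 * x + 0.1 * np.random.randn(100)

X = sm.add_constant(x)   # prepends a column of ones to x
model = sm.OLS(y, X).fit()
print(model.params)      # the first entry is the fitted intercept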
I'm coming across an issue I haven't seen come up before. I work in Bayesian Machine Learning and as such make a lot of use of the distributions in PyTorch. One common thing to do is to define some of the parameters of distributions in terms of the log of their parameter, so that in optimisation they cannot go negative (e.g. the standard deviation of a Normal distribution).
In order to be distribution independent however, I do not want to have to manually recompute the conversion of this parameter. To demonstrate via example:
The following code will NOT run. After the first backward pass, the part of the graph that calculates the exponential of the parameter is automatically freed and not rebuilt.
import torch
import torch.nn as nn
import torch.distributions as dd
log_std = nn.Parameter(torch.Tensor([1]))  # Define the log of the parameter as an nn.Parameter; this is what we want to optimise
std = torch.exp(log_std)  # The transformation we want to apply to the parameter before using it in the distribution
mean = nn.Parameter(torch.Tensor([1]))  # A normal parameter
dist = dd.Normal(loc=mean, scale=std)  # Define the distribution. From here I want to ONLY refer to this, not the other variables
optim = torch.optim.SGD([log_std, mean], lr=0.01)  # Standard optimiser
target = dd.Normal(5, 5)  # Target distribution to match

for i in range(50):
    optim.zero_grad()
    samples = dist.rsample((1000,))  # Sample our model; note no reference to log_std
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()  # KL divergence cost metric
    cost.backward()
    optim.step()
    print(i)
    print(log_std, mean, cost)
    print()
The next set of code WILL run, but I must explicitly reference the log_std parameter in the loop and recreate the distribution. If I wanted to change the distribution type, it would not be possible without handling that specific case.
import torch
import torch.nn as nn
import torch.distributions as dd
log_std = nn.Parameter(torch.Tensor([1]))  # Define the log of the parameter as an nn.Parameter; this is what we want to optimise
mean = nn.Parameter(torch.Tensor([1]))  # A normal parameter
optim = torch.optim.SGD([log_std, mean], lr=0.001)  # Standard optimiser
target = dd.Normal(5, 5)  # Target distribution to match

for i in range(50):
    optim.zero_grad()
    std = torch.exp(log_std)  # The transformation we want to apply to the parameter before using it in the distribution
    dist = dd.Normal(loc=mean, scale=std)  # Define the distribution.
    samples = dist.rsample((1000,))  # Sample our model; note no reference to log_std
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()  # KL divergence cost metric
    cost.backward()
    optim.step()
    print(i)
    print(mean, std, cost)
    print()
The first example does work in TensorFlow, however, since the graphs there are static. Does anyone have ideas on how I might fix this? If it were possible to keep only the part of the graph that defines the relationship std = torch.exp(log_std), then this could work. I have also tried playing with backward gradient hooks, but unfortunately, to calculate the new gradient properly you need access to the parameter value and the learning rate.
Thanks in advance!
Michael
EDIT
I have been asked for an example of how I might want to change the distribution. Taking the code which currently will NOT work, and changing the distributions to Gamma distributions:
import torch
import torch.nn as nn
import torch.distributions as dd
log_rate = nn.Parameter(torch.Tensor([1]))  # Define the log of the parameter as an nn.Parameter; this is what we want to optimise
rate = torch.exp(log_rate)  # The transformation we want to apply to the parameter before using it in the distribution
concentration = nn.Parameter(torch.Tensor([1]))  # A normal parameter
dist = dd.Gamma(concentration=concentration, rate=rate)  # Define the distribution. From here I want to ONLY refer to this, not the other variables
optim = torch.optim.SGD([log_rate, concentration], lr=0.01)  # Standard optimiser
target = dd.Gamma(5, 5)  # Target distribution to match

for i in range(50):
    optim.zero_grad()
    samples = dist.rsample((1000,))  # Sample our model; note no reference to log_rate
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()  # KL divergence cost metric
    cost.backward()
    optim.step()
    print(i)
    print(log_rate, concentration, cost)
    print()
Looking at the code that currently works however:
import torch
import torch.nn as nn
import torch.distributions as dd
log_rate = nn.Parameter(torch.Tensor([1]))  # Define the log of the parameter as an nn.Parameter; this is what we want to optimise
concentration = nn.Parameter(torch.Tensor([1]))  # A normal parameter
optim = torch.optim.SGD([log_rate, concentration], lr=0.001)  # Standard optimiser
target = dd.Gamma(5, 5)  # Target distribution to match

for i in range(50):
    optim.zero_grad()
    rate = torch.exp(log_rate)  # The transformation we want to apply to the parameter before using it in the distribution
    dist = dd.Gamma(concentration=concentration, rate=rate)  # Define the distribution.
    samples = dist.rsample((1000,))  # Sample our model; note no reference to log_rate
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()  # KL divergence cost metric
    cost.backward()
    optim.step()
    print(i)
    print(concentration, rate, cost)
    print()
And you can see that we have to change the code inside the loop to make the algorithm work. It's not a big issue in this small example, but this is just a demonstration for much larger algorithms where it would be incredibly beneficial not to have to worry about the specific parameterisation of each distribution.
The easy fix is adding retain_graph=True to cost.backward().
import torch
import torch.nn as nn
import torch.distributions as dd
log_std = nn.Parameter(torch.Tensor([1]))  # Define the log of the parameter as an nn.Parameter; this is what we want to optimise
std = torch.exp(log_std)  # The transformation we want to apply to the parameter before using it in the distribution
mean = nn.Parameter(torch.Tensor([1]))  # A normal parameter
dist = dd.Normal(loc=mean, scale=std)  # Define the distribution. From here I want to ONLY refer to this, not the other variables
optim = torch.optim.SGD([log_std, mean], lr=0.01)  # Standard optimiser
target = dd.Normal(5, 5)  # Target distribution to match

for i in range(50):
    optim.zero_grad()
    samples = dist.rsample((1000,))  # Sample our model; note no reference to log_std
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()  # KL divergence cost metric
    cost.backward(retain_graph=True)  # keep the graph so std = exp(log_std) is not freed
    optim.step()
    print(i, log_std, mean, cost)
You can free parts of the graph with del <variable name>; for example, del cost will delete the part of the graph that computes cost.
The optimal solution would be to detach samples from the parameters of the distribution. Unfortunately, I haven't found a way to do that; .detach() doesn't seem to work when dealing with the outputs of .rsample().
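An alternative worth sketching (not from the original answers, just a common pattern): wrap the parameterisation in a small nn.Module so that the distribution, including the exp() node, is rebuilt on every forward pass, while the training loop stays distribution-agnostic. The class name DistModule is hypothetical:

import torch
import torch.nn as nn
import torch.distributions as dd

class DistModule(nn.Module):
    """Rebuilds the distribution each call, so the exp() node is recreated."""
    def __init__(self):
        super().__init__()
        self.mean = nn.Parameter(torch.tensor([1.0]))
        self.log_std = nn.Parameter(torch.tensor([1.0]))

    def forward(self):
        # Swapping in e.g. a Gamma only requires changing this one method
        return dd.Normal(loc=self.mean, scale=torch.exp(self.log_std))

module = DistModule()
optim = torch.optim.SGD(module.parameters(), lr=0.01)
target = dd.Normal(5, 5)

for i in range(50):
    optim.zero_grad()
    dist = module()  # a fresh graph is built each iteration
    samples = dist.rsample((1000,))
    cost = -(target.log_prob(samples) - dist.log_prob(samples)).sum()
    cost.backward()
    optim.step()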
I run into a common problem I'm wondering if someone can help with. I often would like to use pymc3 in two modes: training (i.e. actually running inference on parameters) and evaluation (i.e. using inferred parameters to generate predictions).
In general, I'd like a posterior over predictions, not just point-wise estimates (that's part of the benefit of the Bayesian framework, no?). When your training data is fixed, this is typically accomplished by adding a simulated variable of a similar form to the observed variable. For example,
from pymc3 import *
basic_model = Model()

with basic_model:
    # Priors for unknown model parameters
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)

    # Expected value of outcome
    mu = alpha + beta[0]*X1 + beta[1]*X2

    # Likelihood (sampling distribution) of observations
    Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
    Y_sim = Normal('Y_sim', mu=mu, sd=sigma, shape=len(X1))

    start = find_MAP()
    step = NUTS(scaling=start)
    trace = sample(2000, step, start=start)
But what if my data changes? Say I want to generate predictions based on new data, but without running inference all over again. Ideally, I'd have a function like predict_posterior(X1_new, X2_new, 'Y_sim', trace=trace) or even predict_point(X1_new, X2_new, 'Y_sim', vals=trace[-1]) that would simply run the new data through the theano computation graph.
I suppose part of my question relates to how pymc3 implements the theano computation graph. I've noticed that the function model.Y_sim.eval seems similar to what I want, but it requires Y_sim as an input and seems just to return whatever you give it.
I imagine this process is extremely common, but I can't seem to find any way to do it. Any help is greatly appreciated. (Note also that I have a hack to do this in pymc2; it's more difficult in pymc3 because of theano.)
Note: This functionality is now incorporated in the core code as the pymc.sample_ppc method. Check out the docs for more info.
Based on this link (dead as of July 2017) sent to me by twiecki, there are a couple tricks to solve my issue. The first is to put the training data into a shared theano variable. This allows us to change the data later without screwing up the theano computation graph.
import theano

X1_shared = theano.shared(X1)
X2_shared = theano.shared(X2)
Next, build the model and run the inference as usual, but using the shared variables.
basic_model = Model()

with basic_model:
    # Priors for unknown model parameters
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)

    # Expected value of outcome
    mu = alpha + beta[0]*X1_shared + beta[1]*X2_shared

    # Likelihood (sampling distribution) of observations
    Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)

    start = find_MAP()
    step = NUTS(scaling=start)
    trace = sample(2000, step, start=start)
Finally, there's a function under development (it will likely eventually get added to pymc3) that will allow you to predict posteriors for new data.
from collections import defaultdict

import numpy as np
import pymc3 as pm

def run_ppc(trace, samples=100, model=None):
    """Generate posterior predictive samples from a model given a trace."""
    if model is None:
        model = pm.modelcontext(model)
    ppc = defaultdict(list)
    for idx in np.random.randint(0, len(trace), samples):
        param = trace[idx]
        for obs in model.observed_RVs:
            ppc[obs.name].append(obs.distribution.random(point=param))
    return ppc
Next, pass in the new data that you want to run predictions on:
X1_shared.set_value(X1_new)
X2_shared.set_value(X2_new)
Finally, you can generate posterior predictive samples for the new data.
ppc = run_ppc(trace, model=model, samples=200)
The variable ppc is a dictionary with keys for each observed variable in the model. So, in this case ppc['Y_obs'] would contain a list of arrays, each of which is generated using a single set of parameters from trace.
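For instance, a point prediction for each new observation can then be formed by averaging over those arrays (a sketch, assuming numpy is imported as np):

y_pred = np.mean(ppc['Y_obs'], axis=0)  # posterior predictive mean per new data point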
Note that you can even modify the parameters extracted from the trace. For example, I had a model using a GaussianRandomWalk variable and I wanted to generate predictions into the future. While you could let pymc3 sample into the future (i.e. allow the random walk variable to diverge), I just wanted to use a fixed value of the coefficient, corresponding to the last inferred value. This logic can be implemented in the run_ppc function.
It's also worth mentioning that the run_ppc function is extremely slow. It takes about as much time as running the actual inference. I suspect this has to do with some inefficiency related to how theano is used.
EDIT: The link originally included seems to be dead.
The above answer from @santon is correct. I am just adding to it.
You no longer need to write your own run_ppc method; pymc3 provides the sample_posterior_predictive method, which does the same thing.
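A minimal usage sketch (reusing the shared-variable workflow above; after pointing the shared variables at the new data, draw posterior predictive samples inside the model context):

X1_shared.set_value(X1_new)
X2_shared.set_value(X2_new)

with basic_model:
    ppc = pm.sample_posterior_predictive(trace, samples=200)

# ppc['Y_obs'] now holds posterior predictive draws for the new data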