Struggling to build an ARIMA model in Python that is even close to useful for predicting household electricity usage. Would appreciate any thoughts and suggestions. (Might just be a silly error in my implementation!)
Some design thoughts:
Data is very messy in general, but there is clearly daily seasonality (usage drops overnight and while the household is at work/school) and weekly seasonality (weekday usage differs from weekend usage).
Have tried the statsmodels, sktime, fbprophet and pmdarima ('auto_arima') functions with no luck; I don't think these take the seasonality into account particularly well.
Currently trying to get a more manual approach to work: statsmodels' SARIMA with only daily seasonality incorporated (see code and results below), and maybe adding Fourier terms as exogenous variables to handle the weekly seasonality (see the sketch after these notes).
Will consider adding exogenous variables (like temperature) to account for annual seasonality, but first I am just trying to get something reasonable on a smaller time scale (3-6 months).
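As a rough sketch of the Fourier-term idea (assuming a pandas Series y at 30-minute resolution; the helper name and the number of harmonics K are just illustrative choices, not my final settings):

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fourier_terms(index, period, K):
    # sine/cosine pairs for one seasonal period (weekly = 48 * 7 = 336 half-hours)
    t = np.arange(len(index))
    cols = {}
    for k in range(1, K + 1):
        cols[f"sin_{period}_{k}"] = np.sin(2 * np.pi * k * t / period)
        cols[f"cos_{period}_{k}"] = np.cos(2 * np.pi * k * t / period)
    return pd.DataFrame(cols, index=index)

# exog = fourier_terms(y.index, period=336, K=3)
# model = SARIMAX(y, exog=exog, order=(2, 0, 0), seasonal_order=(1, 0, 1, 48))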
The approach I am trying to get working: use the Box-Jenkins method to specify a SARIMA model for just the daily seasonality (images below).
(1) Looking at the Dickey-Fuller and KPSS tests for the time series, there appears to be minimal trend to correct for (expected), but the ACF and PACF charts show significant seasonality (daily, weekly).
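For reference, the stationarity checks are roughly along these lines, with y standing in for the half-hourly usage series:

from statsmodels.tsa.stattools import adfuller, kpss

adf_stat, adf_p, *_ = adfuller(y, autolag="AIC")
kpss_stat, kpss_p, *_ = kpss(y, regression="c", nlags="auto")
print(f"ADF p-value:  {adf_p:.3f}  (small p suggests no unit root)")
print(f"KPSS p-value: {kpss_p:.3f}  (small p suggests non-stationarity)")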
(2) Taking differences to account for the weekly and daily seasonality, then taking a further first-order difference, we quickly get to a series with minimal remaining seasonality that is stationary. This should be a really good sign and suggests there is a model we can build to predict this behaviour!
One more plot to show the difference between the original and differenced data when we zoom in on a typical week (a sketch of the differencing is below).
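The differencing itself is roughly the following (again with y as a hypothetical half-hourly Series; 48 observations per day, 336 per week):

weekly_diff = y.diff(336)                  # remove weekly seasonality
daily_diff = weekly_diff.diff(48)          # remove daily seasonality
stationary = daily_diff.diff(1).dropna()   # final first-order difference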
(3) Finally, I trained a SARIMA model in the following way, with results. I configured D=d=0 since there is no identifiable trend (expected), p=2 to give the model the opportunity to learn from the most recent behaviour, m=48 for the seasonality (daily, since the data is in 30-minute intervals), and P=Q=1 to capture the seasonal effects at lag t-48.
from statsmodels.tsa.statespace.sarimax import SARIMAX

model = SARIMAX(
    train_data,
    trend='n',                     # no deterministic trend
    order=(2, 0, 0),               # p=2, d=0, q=0
    seasonal_order=(1, 0, 1, 48),  # P=1, D=0, Q=1, m=48 (daily seasonality, 30-min data)
)
results = model.fit()
I am able to get an exponential smoothing model working, but I had expected the double-seasonal ARIMA to blow it out of the water. Any thoughts and suggestions are most welcome. Thank you in advance!
Related
I am a bit confused about how to identify the seasonal component of a SARIMA model. I am currently looking at forecasting rates (ocean carrier rates, to be specific). The first thing I did was to convert my original rates to the difference in rates, i.e. log(P2) - log(P1), as I wanted to forecast the change in rate itself. Then I checked the series for stationarity and it was stationary. This is what the seasonally decomposed series looks like:
After that, I ran a basic ARIMA model and chose p, d, q based on the ACF and PACF plots here, but the predictions were pretty bad, which was expected.
I am now trying to run a SARIMA model instead. I obtained the seasonal component via seasonal_decompose and plotted the ACF and PACF over 52 lags (I have weekly data), and this is what I see. How do I choose the right (P, D, Q) component for SARIMA?
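What I ran is roughly the following (with rate_changes standing in for my log-difference series):

from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

decomp = seasonal_decompose(rate_changes, model="additive", period=52)
plot_acf(decomp.seasonal.dropna(), lags=52)
plot_pacf(decomp.seasonal.dropna(), lags=52)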
If you want to check for an anomaly in stock data, many studies use a linear regression. Let's say you want to check whether there is a Monday effect, meaning that Monday is significantly worse than other days.
I understood that we can use a regression like: return = a + b DummyMon + e
where a is the constant, b the regression coefficient on the Monday dummy, and e the error term.
That's what I used in Python:
First you add a constant to the anomaly:
import statsmodels.api as sm

anomaly = sm.add_constant(anomaly)
Then you build the model (note: return is a reserved word in Python, so the return series is called returns here):
model = sm.OLS(returns, anomaly)
Then you fit the model:
results = model.fit()
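Putting the pieces together, here is roughly the full script I am running (with prices as a hypothetical Series of daily closing prices standing in for my actual data):

import pandas as pd
import statsmodels.api as sm

returns = prices.pct_change().dropna()                   # daily returns
dummy_mon = (returns.index.dayofweek == 0).astype(int)   # 1 on Mondays, 0 otherwise

anomaly = sm.add_constant(pd.Series(dummy_mon, index=returns.index, name="DummyMon"))
results = sm.OLS(returns, anomaly).fit()
print(results.summary())   # the coefficient and p-value on DummyMon test the Monday effect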
I wonder if this is the correct model setup.
In this case, a plot of the linear regression would just show two vertical bands of points above 0 (not Monday) and 1 (Monday) with all the returns. It looks pretty strange. Is this correct?
Should I somehow try to use the time (t) in the regression? If so, how can I do it in Python? I thought about giving each date an increasing number, but then I wondered how to treat weekends.
I would assume that with many data points both approaches are similar if the time series is stationary, right? In the end I am doing a cross-section analysis and don't care about the time-series aspect in this case, correct? (I have heard about GARCH models etc., where this is different.)
Well, I am just learning and hope someone could give me some ideas about the topic.
Thank you very much in advance.
For time-series analysis tasks (such as forecasting or anomaly detection), you may need a more advanced model, such as a recurrent neural network (RNN). You can assign any time step to an RNN cell; in your case, every cell could represent a day, or maybe an hour or half a day, etc.
The main purpose of RNNs is to let the model capture the time dependencies in the data. For example, if Monday has a bad effect, then the corresponding RNN cells will learn appropriate parameters. I would recommend doing some further research on it. Here are some resources that may help:
https://colah.github.io/posts/2015-08-Understanding-LSTMs/
(Also includes different types of RNN)
https://towardsdatascience.com/understanding-rnn-and-lstm-f7cdf6dfc14e
And you can use the TensorFlow, Keras, or PyTorch libraries.
Model Fits but the Predictions Fail
Using a (4,0,13) ARIMA model on the data shown in the first picture below yields flat predictions (shown in the second picture below). I am not sure why the model can fit the data in the training set but then predicts nothing afterwards. I found another question here which said I needed to add a seasonal component. I detail my experience with that below.
The Time Series (zoomed in)
The Predictions*
* The predictions plot shows all the training data as well as the validation data after the orange vertical line. The training fit is rounded to integers (non-integer values are not possible in this dataset). Note the prediction is just flat and then dies out.
Problem Definition
I have 15-minute interval data and want to apply a SARIMA model to it. It has a daily seasonality, defined from 7am to 9pm (therefore every 4 * 15 = 60 periods: 4 fifteen-minute periods per hour * 15 hours). I first tested for stationarity with the Augmented Dickey-Fuller test. This passed, so I started to analyze the ACF and PACF to determine the SARIMA parameters.
Parameter Determination
(p,d,q)
ACF & PACF on Original Data
From this, I see there is no unit root (sum of ACF and PACF do not equal 1), and that we need to difference the series since there is no big cut off in the ACF.
ACF & PACF on Differenced Data
From this, I see it is slightly overdifferenced, so I may want to try no integrated term and add an AR term at 15 (the point where the ACF in the original plot enters the bands). I also add an MA term here.
(P,D,Q)s
I now look for the seasonal component. I do a seasonal difference of period 60 since that's where the spike is in the plots.
Seasonal difference
Seeing this, I should add 2 MA terms to the seasonal component (Rules 13 and 7 from here). But the site also says not to use more than 1 seasonal MA term usually, so I leave it at 1.
Model
This leaves me with a SARIMA(0,1,1)(0,1,1,60) model. However, I run out of memory trying to fit this model (in Python, using the statsmodels SARIMAX function).
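One workaround I am experimenting with (I have not confirmed that it resolves the memory issue, and the low_memory flag may depend on the statsmodels version) is to let SARIMAX apply the differencing up front rather than carrying it in the state vector:

from statsmodels.tsa.statespace.sarimax import SARIMAX

model = SARIMAX(
    series,                        # hypothetical 15-minute interval series
    order=(0, 1, 1),
    seasonal_order=(0, 1, 1, 60),
    simple_differencing=True,      # difference the data before building the state space
)
results = model.fit(low_memory=True)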
Question
Did I choose the parameters correctly? Is this data even fittable by ARIMA/SARIMA? And lastly, would the 60 period SARIMA actually work and I just need to find a way to run it on a different machine?
I guess the tl;dr question is: what am I doing wrong?
Feel free to go into detail. I want to become well informed with time series and so more information is better!
To select the best-fitting model, you use the AIC/BIC criteria to find the model that achieves the best results. You test different combinations of P and Q.
Further, the model normally follows the rule: p + d + q + P + D + Q < 6.
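A rough sketch of such a search (the parameter ranges, m=60, and the series name are just placeholders for your own data):

import itertools
from statsmodels.tsa.statespace.sarimax import SARIMAX

best = None
for p, q, P, Q in itertools.product(range(3), range(3), range(2), range(2)):
    if p + 1 + q + P + 1 + Q >= 6:          # respect the rough p+d+q+P+D+Q < 6 rule
        continue
    try:
        res = SARIMAX(series, order=(p, 1, q),
                      seasonal_order=(P, 1, Q, 60)).fit(disp=False)
    except (ValueError, MemoryError):
        continue
    if best is None or res.aic < best[0]:
        best = (res.aic, (p, 1, q), (P, 1, Q, 60))

print("Best model by AIC:", best)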
BR
A.
Assume we have time-series data containing the daily order counts of the last two years:
We can predict future orders using Python's statsmodels library:
import statsmodels.api as sm

fit1 = sm.tsa.statespace.SARIMAX(
    train.Count, order=(2, 1, 4), seasonal_order=(0, 1, 1, 7)
).fit()
y_hat_avg['SARIMA'] = fit1.predict(
    start="2018-06-16", end="2018-08-14", dynamic=True
)
Result (don't mind the numbers):
Now assume that our input data has some unusual increases or decreases because of holidays or promotions in the company. So we added two columns that indicate whether each day was a "holiday" and whether the company ran a "promotion" that day.
Is there a method (and a way of implementing it in Python) to use this new type of input data to help the model understand the reason for the outliers, and also to predict future orders given "holiday" and "promotion_day" information? Something like:
fit1.predict('2018-08-29', holiday=True, is_promotion=False)
# or
fit1.predict(start="2018-08-20", end="2018-08-25", holiday=[0,0,0,1,1,0], is_promotion=[0,0,1,1,0,1])
SARIMAX, as a generalisation of the SARIMA model, is designed to handle exactly this. From the docs,
Parameters:
endog (array_like) – The observed time-series process y;
exog (array_like, optional) – Array of exogenous regressors, shaped (nobs, k).
You could pass holiday and promotion_day as an array of shape (nobs, 2) to exog, which will inform the model of the exogenous nature of some of these observations.
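For example, a rough sketch under the same setup as the question (the column names, the future frame, and the dates are hypothetical; note that the forecast horizon needs its own exogenous values):

import statsmodels.api as sm

exog_train = train[["holiday", "promotion_day"]]          # 0/1 indicator columns

fit1 = sm.tsa.statespace.SARIMAX(
    train.Count, exog=exog_train,
    order=(2, 1, 4), seasonal_order=(0, 1, 1, 7),
).fit()

# out-of-sample prediction needs exogenous values for the forecast window too
future_exog = future[["holiday", "promotion_day"]]
y_hat = fit1.predict(start="2018-08-20", end="2018-08-25",
                     exog=future_exog, dynamic=True)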
This problem has different names, such as anomaly detection, rare-event detection, and extreme-event detection.
There are some blog posts on the Uber Engineering blog that may be useful for understanding the problem and its solutions. Please look here and here.
Although it's not from statsmodels, you can use Facebook's Prophet library for time-series forecasting, where you can pass dates with recurring events to your model.
See here.
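A minimal sketch of Prophet's holidays mechanism (the event dates and dataframe names are made up; df must have the usual "ds" and "y" columns):

import pandas as pd
from prophet import Prophet   # older installs: from fbprophet import Prophet

holidays = pd.DataFrame({
    "holiday": "promotion_day",
    "ds": pd.to_datetime(["2018-03-21", "2018-06-16"]),   # hypothetical event dates
    "lower_window": 0,
    "upper_window": 1,
})

m = Prophet(holidays=holidays)
m.fit(df)
forecast = m.predict(m.make_future_dataframe(periods=60))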
Try this (it may or may not work based on your problem/data):
You can split your date into multiple features, like day of week, day of month, month of year, year, whether it is the last day of the month, whether it is the first day of the month, and more if you think of them, and then train a standard ML algorithm like random forests, gradient-boosted trees, or neural networks (especially with embedding layers for categorical features such as day of week) on them.
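A quick sketch of that feature split (assuming a DataFrame df with a DatetimeIndex and a hypothetical target column "orders"):

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

X = pd.DataFrame({
    "dayofweek": df.index.dayofweek,
    "day": df.index.day,
    "month": df.index.month,
    "year": df.index.year,
    "is_month_start": df.index.is_month_start.astype(int),
    "is_month_end": df.index.is_month_end.astype(int),
}, index=df.index)
y = df["orders"]

model = GradientBoostingRegressor().fit(X, y)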
I am building a churn forecast model using features such as one-year lags, holidays, moving averages, day-over-day ratios, a seasonality factor extracted from statsmodels, etc. It is clearly not an additive series; the magnitude of holiday churn each year is greater than in previous years.
My XGB model predicts daily churn quite accurately, but it fails miserably on holidays (the troughs are predicted slightly better than the peaks):
In my opinion the model is unable to capture the exponential nature of the series. Here is how it looks at present. Is there a way I can capture the exponential nature of the series, by using additional features or something similar?