Small dataset not working well with prophet - python

I am trying to use Prophet, the forecasting package by Facebook, and it works great with, say, 150 rows of data. But when I try to model with fewer than 100 rows, it gives me very strange predictions: in R it returns the same prediction for every date, and in Python the predictions are very poor.
My data is weekly from 2018 week 1 to 2019 week 40.
This is my code:
(python)
from prophet import Prophet  # 'fbprophet' in older releases

predictionSize = 6
# hold out the last 6 weeks for evaluation
new_train_df = data[:-predictionSize]
new_test_df = data[len(data) - predictionSize:]

m_new = Prophet(weekly_seasonality=True, yearly_seasonality=True)
m_new.fit(new_train_df)

# extend the history by 6 weekly periods and predict
new_future = m_new.make_future_dataframe(periods=predictionSize, freq='W')
new_forecast = m_new.predict(new_future)
new_ypred = new_forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(predictionSize)
Using this code gives me negative values for yhat.
My question is: are the predictions bad because the dataset is too small for Prophet?
Do let me know if you need any other information. The data has weekly seasonality and yearly seasonality.
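For what it's worth, one quick way to quantify how far off those six forecasts are is to line them up against the held-out rows. This is only a sketch; it assumes new_test_df keeps Prophet's usual ds/y columns.
(python)
# Line up the 6 held-out weeks with the forecast tail by position
# (note: freq='W' may generate dates anchored on a different weekday than the data,
# so a positional comparison is used here rather than a merge on 'ds').
actual = new_test_df['y'].to_numpy()
predicted = new_ypred['yhat'].to_numpy()
mae = abs(actual - predicted).mean()
print('MAE over the 6 held-out weeks:', mae)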

Related

Understanding TimeSeriesDataSet in Pytorch-Forecasting

I have 913,000 rows of data:
[data image]
First, let me explain this data.
It is sales data for 10 stores and 50 items from 2013-01-01 to 2017-12-31.
I understand why this data has 913,000 rows: 10 stores × 50 items × 1,826 days (2016 is a leap year) = 913,000.
Anyway, I made my training set:
training = TimeSeriesDataSet(
    train_df[train_df["time_idx"] <= training_cutoff],
    time_idx="time_idx",
    target="sales",
    group_ids=["store", "item"],   # list of column names identifying a time series
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=["store", "item"],   # categorical variables that do not change over time (e.g. product length)
    time_varying_unknown_reals=["sales"],
)
Now:
First question: as I understand it, the data passed to TimeSeriesDataSet ends up reduced by the prediction horizon after training_cutoff and by max_encoder_length reserved for prediction. Is this right? If not, please tell me the truth.
Second question: similarly, this is the output of the code above:
[output image]
Why is the length 863,500?
I calculated the expected length as follows (see the worked sketch after this question):
prediction horizon past training_cutoff: 20 * 50 * 10 = 10,000
max_encoder_length for prediction: 60 * 50 * 10 = 30,000
Thus 913,000 - 40,000 = 873,000.
Where did the other 9,500 rows go?
I have done my best with googling. Please tell me the truth.
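To make the arithmetic concrete, here is the same calculation written out. This is only a sketch: max_prediction_length = 20 and max_encoder_length = 60 are inferred from the numbers in the question, not stated explicitly.
(python)
# The expected dataset length, written out with the assumed parameter values.
max_prediction_length = 20   # assumed, inferred from 20 * 50 * 10 = 10,000
max_encoder_length = 60      # assumed, inferred from 60 * 50 * 10 = 30,000

n_rows = 913_000             # 10 stores * 50 items * 1,826 days (2013-2017)
n_series = 10 * 50           # one time series per (store, item) pair
horizon_rows = max_prediction_length * n_series   # 10,000 rows past training_cutoff
encoder_rows = max_encoder_length * n_series      # 30,000 rows reserved for the encoder
print(n_rows - horizon_rows - encoder_rows)       # 873,000, but the output reports 863,500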

Not able to predict the new covid-19 cases using FB prophet

I am using FB Prophet to predict the upcoming daily covid cases for each day in April 2022. I have a dataset of daily covid cases up to March 2022. I have included holidays and external regressors such as a weekend indicator and daily covid testing, but I am still nowhere near the actual covid cases.
I get a high MAE and MAPE.
This is the plot of the original data points -
This is my dataset - y is the daily new cases, new_tests is the daily testing, and workingday is 1 for a weekday and 0 for a weekend -
This is the additional holiday dataset -
I used these parameters after cross-validation:
m = Prophet(growth="linear",
            yearly_seasonality=True,
            weekly_seasonality=True,
            daily_seasonality=False,
            holidays=holidays,
            seasonality_mode="multiplicative",
            n_changepoints=10,
            seasonality_prior_scale=5,
            holidays_prior_scale=5,
            changepoint_prior_scale=0.01)
m.add_regressor('workingday')
m.add_regressor('new_tests')
m.fit(training_set)
But I got very poor results - the blue line is the predictions and the black dots are the original points.
The Mean Absolute Error and Root Mean Squared Error are 68,862 and 72,610 respectively, whereas the Mean Absolute Percentage Error is 2702.99.
How can I tune my hyperparameters or add other regressors so that I get lower error, or at least closer predictions? Thanks.
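For reference, one way to run the kind of cross-validation the question mentions is with Prophet's own diagnostics utilities. This is only a sketch: it assumes the same training_set, holidays, and regressor columns as above, and the grid values and initial/period/horizon windows are illustrative, not prescriptive.
(python)
import itertools
from prophet import Prophet                     # 'fbprophet' in older releases
from prophet.diagnostics import cross_validation, performance_metrics

param_grid = {
    'changepoint_prior_scale': [0.01, 0.1, 0.5],
    'seasonality_prior_scale': [1.0, 5.0, 10.0],
}

scores = []
for cps, sps in itertools.product(param_grid['changepoint_prior_scale'],
                                  param_grid['seasonality_prior_scale']):
    m = Prophet(growth='linear', seasonality_mode='multiplicative',
                holidays=holidays,
                changepoint_prior_scale=cps,
                seasonality_prior_scale=sps)
    m.add_regressor('workingday')
    m.add_regressor('new_tests')
    m.fit(training_set)
    # Rolling-origin evaluation over the historical data.
    df_cv = cross_validation(m, initial='365 days', period='30 days', horizon='30 days')
    scores.append((cps, sps, performance_metrics(df_cv)['rmse'].mean()))

print(sorted(scores, key=lambda s: s[2])[0])    # best (cps, sps, rmse)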

Obtain mean value of specific area in netCDF

I am trying to plot a time series of the sea surface temperature (SST) for a specific region from a .nc file. The SST is a three-dimensional variable (lat, lon, time) with mean daily values for a specific region from 1982 to 2016. I want my plot to reflect the seasonal SST variability over the entire period. I assume the first step is to obtain a mean SST value over my lat/lon region for each day, which I can then work with later on. So far, I read the .nc file and the variables:
import netCDF4 as nc
f = nc.Dataset('cmems_SST_MED_SST_L4_REP_OBSERVATIONS_010_021_1639073212518.nc')
sst = f.variables['analysed_sst'][:]
lon = f.variables['longitude'][:]
lat = f.variables['latitude'][:]
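Before averaging, it can help to confirm the dimension order, since the axis argument used below depends on it. A quick check (not part of the original code):
import numpy as np
# Confirm which axes are time, latitude and longitude before choosing the axes to average over.
print(f.variables['analysed_sst'].dimensions)   # e.g. ('time', 'latitude', 'longitude')
print(sst.shape, lat.shape, lon.shape)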
Next, following the code suggested here, I tried to reshape and obtain the mean, but an error pops up:
global_average= np.nanmean(sst[:,:,:],axis=(1,2))
annual_temp = np.nanmean(np.reshape(global_average, (34,12)), axis = 1)
#34 years between 1982 and 2016, and 12 months per year.
ERROR cannot reshape array of size 14008 into shape (34,12)
From here I looked at different approaches, like using cdo or nco (which didn't work due to installation problems), among others that were not suitable for my case. I used nanmean because I know that in MATLAB this is done with the nanmean function. I am quite new to this topic and I would like to ask for some hints, like where I should focus or which path is most suitable for this case. Thank you!!
Handling daily data with pure Python is tricky because you have to account for leap years, and subsetting a region requires tedious index striding.
As steTATO mentioned, since the data you are working with has daily temporal resolution, you need to consider the following.
You would need to reshape global_average into shape (34, 365) or (34, 366) depending on whether the year is a leap year (1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016). So your code above would look something like
annual_temp = np.nanmean(np.reshape(global_average, (34,365)), axis = 1)
But, like I said, because of the leap years you can't do what you want simply by reshaping global_average.
If I had no choice but to use Python only, I'd do the following:
import numpy as np

def days_in_year(in_year):
    leap_years = [1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016]
    if in_year in leap_years:
        out_days = 366
    else:
        out_days = 365
    return out_days

# some of your code, importing netcdf data

year = np.arange(1982, 2017)
global_avg = np.nanmean(sst[:, :, :], axis=(1, 2))

annual_avgs = []
i = 0
for yr in range(35):
    i = i + days_in_year(year[yr])     # index one past this year's last day
    f = i - days_in_year(year[yr])     # index of this year's first day (note: this shadows the Dataset handle 'f')
    annual_avg = np.nanmean(global_avg[f:i])   # slice is [f:i], not [i:f]
    annual_avgs.append(annual_avg)
The code above basically averages over strides of global_avg, taking leap years into account, and saves the results as annual_avgs.
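As a follow-up, the yearly means can then be plotted directly. This is only a sketch; the variable names follow the code above, and the unit label depends on the file's metadata.
import matplotlib.pyplot as plt

# Plot the annual area-mean SST computed above.
plt.plot(year, annual_avgs, marker='o')
plt.xlabel('Year')
# Check the 'units' attribute of analysed_sst in the file (often Kelvin for L4 products).
plt.ylabel('Area-mean SST')
plt.title('Annual mean SST, 1982-2016')
plt.show()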

Why does this output forecasted irradiance values that are almost the same? I am using PVLIB's GFS model. What am I doing wrong here?

# Assumed imports for this snippet (pvlib's forecast module provided GFS in the
# pvlib version this question targets):
import datetime
import pandas as pd
import matplotlib.pyplot as plt
from pvlib.forecast import GFS
from pvlib.pvsystem import PVSystem
from pvlib.modelchain import ModelChain

# Initialize the model here
model = GFS(resolution='half', set_type='latest')

# The location I want to forecast the irradiance for, and its timezone
latitude, longitude, tz = 15.134677754177943, 120.63806622424912, 'Asia/Manila'
start = pd.Timestamp(datetime.date.today(), tz=tz)
end = start + pd.Timedelta(days=7)

# Pulling the data from the GFS
raw_data = model.get_processed_data(latitude, longitude, start, end)
raw_data = pd.DataFrame(raw_data)
data = raw_data

# Description of the PV system we are using
# (module, inverter and temperature_model_parameters are defined elsewhere in the question)
system = PVSystem(surface_tilt=10, surface_azimuth=180, albedo=0.2,
                  module_type='glass_polymer',
                  module=module, module_parameters=module,
                  temperature_model_parameters=temperature_model_parameters,
                  modules_per_string=24, strings_per_inverter=32,
                  inverter=inverter, inverter_parameters=inverter,
                  racking_model='insulated_back')

# Using the ModelChain
mc = ModelChain(system, model.location, orientation_strategy=None,
                aoi_model='no_loss', spectral_model='no_loss',
                temp_model='sapm', losses_model='no_loss')
mc.run_model(data)

mc.total_irrad.plot()
plt.ylabel('Plane of array irradiance ($W/m^2$)')
plt.legend(loc='best')
Here is the picture of it.
I am actually getting the same irradiance values for days now, so I believe something is wrong. I would expect at least somewhat different values for each day.
Forecasting Irradiance
I think the reason the days all look the same is that the forecast data predicts those days to be consistently overcast, so there's not necessarily anything "wrong" with the values being very similar across days -- it's just several cloudy days in a row. Take a look at raw_data['total_clouds'] and see how little variation there is for this forecast (nearly always 100% cloud cover). Also note that if you print the actual values of mc.total_irrad, you'll see that there is some minor variation day-to-day that is too small to appear on the plot.
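A quick way to check this yourself, as a sketch reusing the raw_data and mc objects from the question above:
# Inspect the cloud-cover forecast that drives the flat-looking irradiance.
print(raw_data['total_clouds'].describe())
raw_data['total_clouds'].plot()
plt.ylabel('Total cloud cover (%)')
plt.show()

# And the actual irradiance values, whose day-to-day variation is too small to see in the plot.
print(mc.total_irrad.resample('D').max())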

How to predict a time series set with statsmodels Holt-Winters

I have a set of data from January 2012 to December 2014 that shows some trend and seasonality. I want to make a prediction for the following years (from January 2015 to December 2017) using the Holt-Winters method from statsmodels.
The data set is the following one:
date,Data
Jan-12,153046
Feb-12,161874
Mar-12,226134
Apr-12,171871
May-12,191416
Jun-12,230926
Jul-12,147518
Aug-12,107449
Sep-12,170645
Oct-12,176492
Nov-12,180005
Dec-12,193372
Jan-13,156846
Feb-13,168893
Mar-13,231103
Apr-13,187390
May-13,191702
Jun-13,252216
Jul-13,175392
Aug-13,150390
Sep-13,148750
Oct-13,173798
Nov-13,171611
Dec-13,165390
Jan-14,155079
Feb-14,172438
Mar-14,225818
Apr-14,188195
May-14,193948
Jun-14,230964
Jul-14,172225
Aug-14,129257
Sep-14,173443
Oct-14,188987
Nov-14,172731
Dec-14,211194
Which looks as follows:
I'm trying to build the Holt-Winters model, first to check how well it reproduces the past data (i.e. a plot where I can see whether my parameters give a good fit to the history) and later on to forecast the following years. I made the in-sample prediction with the following code, but I'm not able to do the forecast.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Data loading
data = pd.read_csv('setpoints.csv', parse_dates=['date'], index_col=['date'])
df_data = pd.DataFrame(data, columns=['Data'])
df_data['Data'].index.freq = 'MS'

train, test = df_data['Data'], df_data['Data']
model = ExponentialSmoothing(train, trend='add', seasonal='add', seasonal_periods=12).fit()

period = ['Jan-12', 'Dec-14']
pred = model.predict(start=period[0], end=period[1])

df_data['Data'].plot(label='Train')
test.plot(label='Test')
pred.plot(label='Holt-Winters')
plt.legend(loc='best')
plt.show()
Which looks like:
Does anyone now how to forecast it?
I think you are making a misconception here. You shouldn't use the same data for train and test: the test data are data points your model "has not seen yet", which is how you check how well the model performs. So I used the last three months of your data as the test set.
As for the prediction, we can use different start and end points.
Also notice I used 'mul' as the seasonal component, which performs better on your data:
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# read in data and convert date column to MS frequency
df = pd.read_csv(data)
df['date'] = pd.to_datetime(df['date'], format='%b-%y')
df = df.set_index('date').asfreq('MS')

# split data in train, test
train = df.loc[:'2014-09-01']
test = df.loc['2014-10-01':]

# train model and predict
model = ExponentialSmoothing(train, seasonal='mul', seasonal_periods=12).fit()
#model = ExponentialSmoothing(train, trend='add', seasonal='add', seasonal_periods=12).fit()
pred_test = model.predict(start='2014-10-01', end='2014-12-01')
pred_forecast = model.predict(start='2015-01-01', end='2017-12-01')

# plot data and prediction
df.plot(figsize=(15,9), label='Train')
pred_test.plot(label='Test')
pred_forecast.plot(label='Forecast')
plt.legend()
plt.savefig('figure.png')   # save before plt.show(), which clears the figure in some backends
plt.show()
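To put a number on the test fit, here is a small sketch that reuses the test and pred_test objects from the answer above:
# Mean absolute error over the three held-out months.
mae = (test['Data'] - pred_test).abs().mean()
print(f'MAE on the holdout: {mae:,.0f}')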
