I am training a Gaussian process to learn the mapping between a set of coordinates x, y, z and some time series. In a nutshell, my question is about how to prevent my GP from overfitting, which I am facing to an odd degree.
Some details:
my training set is made of 1500 samples and my testing set of 500 samples. Each time series sample has 20 time components;
I don't have a preference for which kernel to use for the GP, and I would appreciate help in understanding which one could work best. Furthermore, I have very little experience with GPs in general, so I am not sure how well I am doing with the hyperparameters. See below for how I set length_scale: I set it this way following some advice, but I am wondering whether it makes sense;
my coordinates are standardized (mean 0, std 1), but my time series are not;
I am training one Gaussian Process for each time component.
Here is my code:
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic, ExpSineSquared, DotProduct, ConstantKernel)
# ----------------------------------------------------------------------
number_of_training_samples = 1500
number_of_testing_samples = 500
# read coordinates STANDARDIZED
coords_training_stand = np.loadtxt('coordinates_training_standardized.txt')
coords_testing_stand = np.loadtxt('coordinates_testing_standardized.txt')
# read time series TRAIN/TEST
timeseries_training = np.loadtxt('timeseries_training.txt')
timeseries_testing = np.loadtxt('timeseries_testing.txt')
number_of_time_components = np.shape(timeseries_training)[1] # 20
# Instantiate a Gaussian Process model
kernel = 1.0 * Matern(nu=1.5, length_scale=np.ones(coords_training_stand.shape[1]))
gp = GaussianProcessRegressor(kernel=kernel)
# placeholder for predictions
pred_timeseries_training = np.zeros((np.shape(timeseries_training)))
pred_timeseries_testing = np.zeros((np.shape(timeseries_testing)))
for i in range(number_of_time_components):
    print("time component", i)
    gp.fit(coords_training_stand, timeseries_training[:, i])
    y_pred, sigma = gp.predict(coords_training_stand, return_std=True)
    y_pred_test, sigma_test = gp.predict(coords_testing_stand, return_std=True)
    pred_timeseries_training[:, i] = y_pred
    pred_timeseries_testing[:, i] = y_pred_test
# plot training
fig, ax = plt.subplots(5, figsize=(10,20))
for i in range(5):
    ax[i].plot(timeseries_training[100*i, :20], color='blue', label='Original train')
    ax[i].plot(pred_timeseries_training[100*i], color='black', label='GP pred train')
    ax[i].set_xlabel('Time components', fontsize='x-large')
    ax[i].set_ylabel('Amplitude', fontsize='x-large')
    ax[i].set_title('Time series n. {:}'.format(100*i+1), fontsize='x-large')
    ax[i].legend(fontsize='x-large')
plt.subplots_adjust(hspace=1)
plt.show()
plt.close()
# plot testing
fig, ax = plt.subplots(5, figsize=(10,20))
for i in range(5):
    ax[i].plot(timeseries_testing[100*i, :20], color='blue', label='Original test')
    ax[i].plot(pred_timeseries_testing[100*i], color='black', label='GP pred test')
    ax[i].set_xlabel('Time components', fontsize='x-large')
    ax[i].set_ylabel('Amplitude', fontsize='x-large')
    ax[i].set_title('Time series n. {:}'.format(1500+100*i+1), fontsize='x-large')
    ax[i].legend(fontsize='x-large')
plt.subplots_adjust(hspace=1)
plt.show()
plt.close()
Here is the plot of a few samples from the TRAINING set and the corresponding GP predictions (one can't even see the blue lines, corresponding to the original samples, because they are perfectly covered by the predictions of the GP):
Here is the plot of a few samples from the TESTING set and the corresponding GP predictions:
(in only one case - 1801 - the prediction is good).
I think there is a very strong overfitting going on, and I would like to understand how to avoid it.
I don't think the problem is with the Gaussian Process itself but with the dataset.
How were the time series samples generated? And how did you divide the dataset into training and test sets?
If you got one big time series and then cut it into small sequences, there are not enough real examples for the model to learn from, and you can get big overfitting problems.
Explanation with an example :
I have one big time series t0, t1, t2, t3, ..., t99
I make a training dataset of 80 samples with [t0,...,t19], [t1,...,t20], [t2,...,t21], ..., [t80,...,t99]
In this case all my samples are almost exactly the same, and it will cause overfitting. And if the validation set is composed of some random samples taken from this dataset, then I'll get a very high validation accuracy because the model saw almost exactly the same thing in the training set. (I think that's what might have happened for example 1801 you gave.)
So make sure all the samples in your datasets are completely independent.
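The leakage is easy to demonstrate: with the stride-1 windowing described in the example above, consecutive training rows are nearly identical. A minimal NumPy sketch (synthetic stand-in series, not the asker's data):

```python
import numpy as np

# one long series t0..t99, cut into length-20 windows with stride 1,
# as in the example above (synthetic stand-in data)
t = np.arange(100, dtype=float)
windows = np.lib.stride_tricks.sliding_window_view(t, 20)  # shape (81, 20)

# consecutive windows share 19 of their 20 values
overlap = np.intersect1d(windows[0], windows[1]).size
```

A random train/validation split over such rows puts near-duplicates on both sides, which is why validation scores look deceptively good.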
How to generate "lower" and "upper" predictions, not just "yhat"?
import statsmodels
from statsmodels.tsa.arima.model import ARIMA
assert statsmodels.__version__ == '0.12.0'
arima = ARIMA(df['value'], order=order)
model = arima.fit()
Now I can generate "yhat" predictions
yhat = model.forecast(123)
and get confidence intervals for model parameters (but not for predictions):
model.conf_int()
but how to generate yhat_lower and yhat_upper predictions?
In general, the forecast and predict methods only produce point predictions, while the get_forecast and get_prediction methods produce full results including prediction intervals.
In your example, you can do:
forecast = model.get_forecast(123)
yhat = forecast.predicted_mean
yhat_conf_int = forecast.conf_int(alpha=0.05)
If your data is a Pandas Series, then yhat_conf_int will be a DataFrame with two columns, lower <name> and upper <name>, where <name> is the name of the Pandas Series.
If your data is a numpy array (or Python list), then yhat_conf_int will be an (n_forecasts, 2) array, where the first column is the lower part of the interval and the second column is the upper part.
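So, assuming the data went in as a plain array, pulling out `yhat_lower` and `yhat_upper` is just column indexing. A sketch with made-up interval values standing in for the `forecast.conf_int()` output:

```python
import numpy as np

# stand-in for forecast.conf_int() output when the input was a numpy array:
# one row per forecast step, column 0 = lower bound, column 1 = upper bound
yhat_conf_int = np.array([[9.1, 12.3],
                          [8.7, 12.9],
                          [8.2, 13.5]])

yhat_lower = yhat_conf_int[:, 0]
yhat_upper = yhat_conf_int[:, 1]
```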
To generate prediction intervals as opposed to confidence intervals (which you have neatly made the distinction between, and is also presented in Hyndman's blog post on the difference between prediction intervals and confidence intervals), then you can follow the guidance available in this answer.
You could also try to compute bootstrapped prediction intervals, which is laid out in this answer.
Below is my attempt at implementing this (I'll update it when I get the chance to check it in more detail):
from typing import Union

import numpy as np
import pandas as pd


def bootstrap_prediction_interval(y_train: Union[list, pd.Series],
                                  y_fit: Union[list, pd.Series],
                                  y_pred_value: float,
                                  alpha: float = 0.05,
                                  nbootstrap: int = None,
                                  seed: int = None):
    """
    Bootstraps a prediction interval around an ARIMA model's predictions.
    Method presented clearly here:
    - https://stats.stackexchange.com/a/254321
    Also found through here, though less clearly:
    - https://otexts.com/fpp3/prediction-intervals.html
    Can consider this to be a time-series version of the following generalisation:
    - https://saattrupdan.github.io/2020-03-01-bootstrap-prediction/
    :param y_train: List or Series of training univariate time-series data.
    :param y_fit: List or Series of model fitted univariate time-series data.
    :param y_pred_value: Float of the model prediction you want to compute the P.I. for.
    :param alpha: float = 0.05, the prediction uncertainty.
    :param nbootstrap: Number of bootstrap samples of the residual forecast error;
        defaults to sqrt(n). Rules of thumb provided here:
        - https://stats.stackexchange.com/questions/86040/rule-of-thumb-for-number-of-bootstrap-samples
    :param seed: Integer to specify if you want deterministic sampling.
    :return: A list [`lower`, `pred`, `upper`] with `pred` being the prediction
        of the model and `lower` and `upper` constituting the lower and upper
        bounds of the prediction interval around `pred`, respectively.
    """
    y_train = np.asarray(y_train)
    y_fit = np.asarray(y_fit)
    # get number of samples
    n = len(y_train)
    # compute the forecast errors/residuals
    fe = y_train - y_fit
    # get percentile bounds
    percentile_lower = (alpha * 100) / 2
    percentile_higher = 100 - percentile_lower
    if nbootstrap is None:
        nbootstrap = int(np.sqrt(n))
    # seed=None gives non-deterministic sampling
    rng = np.random.default_rng(seed)
    # bootstrap sample from the forecast errors
    error_bootstrap = []
    for _ in range(nbootstrap):
        idx = rng.integers(low=0, high=n)
        error_bootstrap.append(fe[idx])
    # get lower and higher percentiles of the sampled forecast errors
    fe_lower = np.percentile(a=error_bootstrap, q=percentile_lower)
    fe_higher = np.percentile(a=error_bootstrap, q=percentile_higher)
    # compute the prediction interval
    pi = [y_pred_value + fe_lower, y_pred_value, y_pred_value + fe_higher]
    return pi
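The core resampling idea can be checked on toy numbers without the function above: bootstrap the in-sample residuals and shift the point forecast by their percentiles. A standalone sketch with hypothetical residuals:

```python
import numpy as np

# toy residuals (y_train - y_fit) and a point forecast; hypothetical numbers
residuals = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_pred = 10.0

rng = np.random.default_rng(0)
samples = rng.choice(residuals, size=1000, replace=True)

# shift the point forecast by the sampled error percentiles
lower = y_pred + np.percentile(samples, 2.5)
upper = y_pred + np.percentile(samples, 97.5)
```

Because the residuals are bounded by ±2, the interval stays within [8, 12] around the forecast of 10.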
Using ARIMA, you need to include seasonality and exogenous variables in the model yourself. The SARIMA (seasonal ARIMA) and SARIMAX (which also handles exogenous factors) implementations, by contrast, provide confidence intervals via summary_frame:
import statsmodels.api as sm
import matplotlib.pyplot as plt
import pandas as pd
dta = sm.datasets.sunspots.load_pandas().data[['SUNACTIVITY']]
dta.index = pd.Index(pd.date_range("1700", end="2009", freq="A"))
print(dta)
print("init data:\n")
dta.plot(figsize=(12,4));
plt.show()
##print("SARIMAX(dta, order=(2,0,0), trend='c'):\n")
result = sm.tsa.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(">>> result.params:\n", result.params, "\n")
##print("SARIMA_model.plot_diagnostics:\n")
result.plot_diagnostics(figsize=(15,12))
plt.show()
# summary stats of residuals
print(">>> residuals.describe:\n", result.resid.describe(), "\n")
# Out-of-sample forecasts are produced using the forecast or get_forecast methods from the results object
# The get_forecast method is more general, and also allows constructing confidence intervals.
fcast_res1 = result.get_forecast()
# specify that we want a confidence level of 90%
print(">>> forecast summary at alpha=0.10:\n", fcast_res1.summary_frame(alpha=0.10), "\n")
# plot forecast
fig, ax = plt.subplots(figsize=(15, 5))
# Construct the forecasts
fcast = result.get_forecast('2010Q4').summary_frame()
print(fcast)
fcast['mean'].plot(ax=ax, style='k--')
ax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);
fig.tight_layout()
plt.show()
docs: "The forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive"
p.s. from the docs: "you can use it in a non-seasonal way by setting the seasonal terms to zero."
I am currently running an exploratory factor analysis in Python, which works well with the factor_analyzer package (https://factor-analyzer.readthedocs.io/en/latest/factor_analyzer.html). To choose the appropriate number of factors, I used the Kaiser criterion and the Scree plot. However, I would like to confirm my results using Horn's parallel analysis (Horn, 1965). In R I would use the parallel function from the psych package. Does anyone know an equivalent method / function / package in Python? I've been searching for some time now, but unfortunately without success.
Thanks a lot for your help!
Best regards
You've probably figured out a solution by now but, for the sake of others who might be looking for it, here's some code that I've used to mimic the parallel analysis from the psych library:
import pandas as pd
from factor_analyzer import FactorAnalyzer
import numpy as np
import matplotlib.pyplot as plt
def _HornParallelAnalysis(data, K=10, printEigenvalues=False):
    ################
    # Create a random matrix to match the dataset
    ################
    n, m = data.shape
    # Set the factor analysis parameters
    fa = FactorAnalyzer(n_factors=1, method='minres', rotation=None, use_smc=True)
    # Create arrays to store the values (zero-initialised, since we accumulate into them)
    sumComponentEigens = np.zeros(m)
    sumFactorEigens = np.zeros(m)
    # Run the fit 'K' times over a random matrix
    for runNum in range(0, K):
        fa.fit(np.random.normal(size=(n, m)))
        sumComponentEigens = sumComponentEigens + fa.get_eigenvalues()[0]
        sumFactorEigens = sumFactorEigens + fa.get_eigenvalues()[1]
    # Average over the number of runs
    avgComponentEigens = sumComponentEigens / K
    avgFactorEigens = sumFactorEigens / K
    ################
    # Get the eigenvalues for the fit on supplied data
    ################
    fa.fit(data)
    dataEv = fa.get_eigenvalues()
    # Set up a scree plot
    plt.figure(figsize=(8, 6))
    ################
    # Print results
    ################
    if printEigenvalues:
        print('Principal component eigenvalues for random matrix:\n', avgComponentEigens)
        print('Factor eigenvalues for random matrix:\n', avgFactorEigens)
        print('Principal component eigenvalues for data:\n', dataEv[0])
        print('Factor eigenvalues for data:\n', dataEv[1])
    # Find the suggested stopping points
    suggestedFactors = sum((dataEv[1] - avgFactorEigens) > 0)
    suggestedComponents = sum((dataEv[0] - avgComponentEigens) > 0)
    print('Parallel analysis suggests that the number of factors =', suggestedFactors,
          'and the number of components =', suggestedComponents)
    ################
    # Plot the eigenvalues against the number of variables
    ################
    # Line for eigenvalue 1
    plt.plot([0, m + 1], [1, 1], 'k--', alpha=0.3)
    # For the random data - Components
    plt.plot(range(1, m + 1), avgComponentEigens, 'b', label='PC - random', alpha=0.4)
    # For the Data - Components
    plt.scatter(range(1, m + 1), dataEv[0], c='b', marker='o')
    plt.plot(range(1, m + 1), dataEv[0], 'b', label='PC - data')
    # For the random data - Factors
    plt.plot(range(1, m + 1), avgFactorEigens, 'g', label='FA - random', alpha=0.4)
    # For the Data - Factors
    plt.scatter(range(1, m + 1), dataEv[1], c='g', marker='o')
    plt.plot(range(1, m + 1), dataEv[1], 'g', label='FA - data')
    plt.title('Parallel Analysis Scree Plots', {'fontsize': 20})
    plt.xlabel('Factors/Components', {'fontsize': 15})
    plt.xticks(ticks=range(1, m + 1), labels=range(1, m + 1))
    plt.ylabel('Eigenvalue', {'fontsize': 15})
    plt.legend()
    plt.show()
If you call the above like this:
_HornParallelAnalysis(myDataSet)
You should get something like the following:
Example output for parallel analysis:
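The stopping rule the function applies reduces to comparing the two eigenvalue curves and counting how many data eigenvalues sit above the random baseline. A sketch with hypothetical eigenvalues:

```python
import numpy as np

# hypothetical eigenvalue curves: real data vs. average over random matrices
data_ev = np.array([3.1, 1.8, 1.2, 0.6, 0.3])
random_ev = np.array([1.4, 1.2, 1.0, 0.9, 0.7])

# retain factors whose data eigenvalue exceeds the random baseline
suggested = int(np.sum(data_ev > random_ev))
```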
Thanks for sharing Eric and Reza.
Here I also provide a faster solution for readers who only need a PCA parallel analysis. The above code was taking too long for me (apparently because of my very large dataset of size 33 x 15498) with no answer (I waited one day while it ran). So if anyone needs only a PCA parallel analysis, as in my case, you can use this simple and very fast code: just put your dataset in a CSV file, and this program reads in the CSV and very quickly provides you with a PCA parallel analysis plot:
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
shapeMatrix = pd.read_csv("E:\\projects\\ankle_imp_ssm\\results\\parallel_analysis\\data\\shapeMatrix.csv")
shapeMatrix.dropna(axis=1, inplace=True)
normalized_shapeMatrix=(shapeMatrix-shapeMatrix.mean())/shapeMatrix.std()
pca = PCA(shapeMatrix.shape[0]-1)
pca.fit(normalized_shapeMatrix)
transformedShapeMatrix = pca.transform(normalized_shapeMatrix)
#np.savetxt("pca_data.csv", pca.explained_variance_, delimiter=",")
random_eigenvalues = np.zeros(shapeMatrix.shape[0]-1)
for i in range(100):
    random_shapeMatrix = pd.DataFrame(np.random.normal(0, 1, [shapeMatrix.shape[0], shapeMatrix.shape[1]]))
    pca_random = PCA(shapeMatrix.shape[0]-1)
    pca_random.fit(random_shapeMatrix)
    transformedRandomShapeMatrix = pca_random.transform(random_shapeMatrix)
    random_eigenvalues = random_eigenvalues + pca_random.explained_variance_ratio_
random_eigenvalues = random_eigenvalues / 100
#np.savetxt("pca_random.csv", random_eigenvalues, delimiter=",")
plt.plot(pca.explained_variance_ratio_, '--bo', label='pca-data')
plt.plot(random_eigenvalues, '--rx', label='pca-random')
plt.legend()
plt.title('parallel analysis plot')
plt.show()
I ran this piece of code on the matrix of shapes for which I created a statistical shape model (the shape matrix is of size 33 x 15498), and it takes just a few seconds to run.
I am trying to fit a Gaussian to a set of data points using the astropy.modeling package but all I am getting is a flat line. See below:
Here's my code:
%pylab inline
from astropy.modeling import models,fitting
from astropy import modeling
#Fitting a gaussian for the absorption lines
wavelength= linspace(galaxy1_wavelength_extracted_1.min(),galaxy1_wavelength_extracted_1.max(),200)
g_init = models.Gaussian1D(amplitude=1., mean=5000, stddev=1.)
fit_g = fitting.LevMarLSQFitter()
g = fit_g(g_init, galaxy1_wavelength_extracted_1, galaxy1_flux_extracted_1)
#Plotting
plot(galaxy1_wavelength_extracted_1,galaxy1_flux_extracted_1,".k")
plot(wavelength, g(wavelength))
xlabel("Wavelength ($\\AA$)")
ylabel("Flux (counts)")
What am I doing wrong or missing?
I made some fake data that sort of resembles yours, tried running your code on it, and obtained similar results. I think the problem is that if you don't adjust your model's initial parameters to at least sort of resemble the original data, the fitter won't be able to converge no matter how many rounds of fitting it performs.
If I'm fitting a Gaussian I like to give the initial model some initial parameters based on computationally "eyeballing" them like so (here I named your real data's flux and wavelength as orig_flux and orig_wavelength respectively):
>>> an_amplitude = orig_flux.min()
>>> an_mean = orig_wavelength[orig_flux.argmin()]
>>> an_stddev = np.sqrt(np.sum((orig_wavelength - an_mean)**2) / (len(orig_wavelength) - 1))
>>> print(f'mean: {an_mean}, stddev: {an_stddev}, amplitude: {an_amplitude}')
mean: 5737.979797979798, stddev: 42.768052162734605, amplitude: 84.73925092448636
where for the standard deviation I used the unbiased standard deviation estimate.
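For concreteness, that estimator is the square root of the sum of squared deviations divided by n - 1, which matches NumPy's `ddof=1` convention (toy data for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
# unbiased (n - 1 denominator) standard deviation estimate
stddev_unbiased = np.sqrt(np.sum((x - x.mean()) ** 2) / (len(x) - 1))
```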
Plotting this over my fake data shows that these are reasonable values I might have picked if I manually eyeballed the data as well:
>>> plt.plot(orig_wavelength, orig_flux, '.k', zorder=1)
>>> plt.scatter(an_mean, an_amplitude, color='red', s=100, zorder=2)
>>> plt.vlines([an_mean - an_stddev, an_mean + an_stddev], orig_flux.min(), orig_flux.max(),
...            linestyles='dashed', colors='g', zorder=2)
One feature I've wanted to add to astropy.modeling in the past is optional methods that can be attached to some models to give reasonable estimates for their parameters based on some data. So for Gaussians such a method would return much like I just computed above. I don't know if that's ever been implemented though.
It is also worth noting that your Gaussian would be inverted (with a negative amplitude) and that it's displaced on the flux axis some 120 points, so I added a Const1D to my model to account for this, and subtracted the displacement from the amplitude:
>>> an_disp = orig_flux.max()
>>> g_init = (
... models.Const1D(an_disp) +
... models.Gaussian1D(amplitude=(an_amplitude - an_disp), mean=an_mean, stddev=an_stddev)
... )
>>> fit_g = fitting.LevMarLSQFitter()
>>> g = fit_g(g_init, orig_wavelength, orig_flux)
This results in the following fit which looks much better already:
>>> plt.plot(orig_wavelength, orig_flux, '.k')
>>> plt.plot(orig_wavelength, g(orig_wavelength), 'r-')
I'm not an expert in modeling or statistics, so someone with deeper knowledge could likely improve on this. I've added a notebook with my full analysis of the problem, including how I generated my sample data here.
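As a sanity check of the constant-plus-Gaussian reasoning above, the composite shape can be evaluated directly in plain NumPy (toy parameters, not the real spectrum): at the mean, the model dips to `disp + amplitude`.

```python
import numpy as np

def const_plus_gaussian(x, disp, amplitude, mean, stddev):
    # constant displacement plus a Gaussian; a negative amplitude gives an absorption dip
    return disp + amplitude * np.exp(-0.5 * ((x - mean) / stddev) ** 2)

x = np.array([4900.0, 5000.0, 5100.0])
y = const_plus_gaussian(x, disp=120.0, amplitude=-35.0, mean=5000.0, stddev=40.0)
# at the mean the model reaches disp + amplitude = 85.0
```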
I have this plot
Now I want to add a trend line to it, how do I do that?
The data looks like this:
I wanted to just plot how the median listing price in California has gone up over the years so I did this:
# Get California data
state_ca = []
state_median_price = []
state_ca_month = []
for state, price, date in zip(data['ZipName'], data['Median Listing Price'], data['Month']):
    if ", CA" not in state:
        continue
    state_ca.append(state)
    state_median_price.append(price)
    state_ca_month.append(date)
Then I converted the string state_ca_month to datetime:
# Convert state_ca_month to datetime
state_ca_month = [datetime.strptime(x, '%m/%d/%Y %H:%M') for x in state_ca_month]
Then plotted it
# Plot trends
figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(state_ca_month, state_median_price)
plt.show()
I thought of adding a trendline or some type of line but I am new to visualization. If anyone has any other suggestions I would appreciate it.
Following the advice in the comments I get this scatter plot
I am wondering if I should further format the data to make a clearer plot to examine.
If by "trend line" you mean a literal line, then you probably want to fit a linear regression to your data. sklearn provides this functionality in python.
From the example hyperlinked above:
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(diabetes_y_test, diabetes_y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
To clarify, "the overall trend" is not a well-defined thing. Many times, by "trend", people mean a literal line that "fits" the data well. By "fits the data", in turn, we mean "predicts the data." Thus, the most common way to get a trend line is to pick a line that best predicts the data that you have observed. As it turns out, we even need to be clear about what we mean by "predicts". One way to do this (and a very common one) is by defining "best predicts" in such a way as to minimize the sum of the squares of all of the errors between the "trend line" and the observed data. This is called ordinary least squares linear regression, and is one of the simplest ways to obtain a "trend line". This is the algorithm implemented in sklearn.linear_model.LinearRegression.
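If pulling in sklearn feels heavy for a single line, `np.polyfit` with degree 1 computes the same ordinary-least-squares fit; a sketch on toy data that is exactly linear:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # exactly linear: y = 2x + 1

# degree-1 least-squares fit returns (slope, intercept)
slope, intercept = np.polyfit(x, y, 1)
trend = slope * x + intercept
```

For the dated housing data, you would first convert the datetimes to numbers (e.g. with `matplotlib.dates.date2num`) before fitting.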
I have two NumPy arrays: time and number of GET requests. I need to fit this data with a function so that I can make future predictions.
These data were extracted from a Cassandra table that stores the details of a log file. So basically the time format is epoch time, and the training variable here is get_counts.
from cassandra.cluster import Cluster
import numpy as np
import matplotlib.pyplot as plt
from cassandra.query import panda_factory
session = Cluster(contact_points=['127.0.0.1'], port=9042).connect(keyspace='ASIA_KS')
session.row_factory = panda_factory
df = session.execute("SELECT epoch_time, get_counts FROM ASIA_TRAFFIC") \
       .sort(columns=['epoch_time','get_counts'], ascending=[1,0])
time = np.array([x[1] for x in enumerate(df['epoch_time'])])
get = np.array([x[1] for x in enumerate(df['get_counts'])])
plt.title('Trend')
plt.plot(time, get, 'o')
plt.show()
The data is as follows:
there are around 1000 pairs of data
time -> [1391193000 1391193060 1391193120 ..., 1391279280 1391279340 1391279400 1391279460]
get -> [577 380 430 ...,250 275 365 15]
Plot image (full size here):
Can someone please help me find a function that properly fits this data? I am new to Python.
EDIT *
fit = np.polyfit(time, get, 3)
yp = np.poly1d(fit)
plt.plot(time, yp(time), 'r--', time, get, 'b.')
plt.xlabel('Time')
plt.ylabel('Number of Get requests')
plt.title('Trend')
plt.xlim([time[0]-10000, time[-1]+10000])
plt.ylim(0, 2000)
plt.show()
print(yp(time[1400]))
the fit curve looks like this:
https://drive.google.com/file/d/0B-r3Ym7u_hsKUTF1OFVqRWpEN2M/view?usp=sharing
However, at the later part of the curve the value of y becomes negative, which is wrong. The curve must change its slope back to positive somewhere in between.
Can anyone please suggest how to go about this?
Help will be much appreciated.
You could try:
time = np.array([x[1] for x in enumerate(df['epoch_time'])])
byte = np.array([x[1] for x in enumerate(df['byte_transfer'])])
fit = np.polyfit(time, byte, n)  # step up the n value here,
                                 # where n is the degree of the polynomial
yp = np.poly1d(fit)
print(yp)  # displays the function in cx^n +- cx^n-1 ... c format
plt.plot(time, yp(time), '-')
plt.xlabel('Time')
plt.ylabel('Bytes Transferred')
plt.title('Trend')
plt.plot(time, byte, 'o')
plt.show()
I'm new to Numpy and curve fitting as well, but this is how I've been attempting to do it.
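One way to see what "stepping up n" buys you: a higher-degree polynomial never fits the training points worse, since every lower-degree fit is nested inside it. A sketch on synthetic data (hypothetical, not the Cassandra log data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
y = 5.0 + np.sin(t) + 0.1 * rng.standard_normal(50)

def sse(degree):
    # sum of squared residuals of a degree-n polynomial least-squares fit
    p = np.poly1d(np.polyfit(t, y, degree))
    return float(np.sum((p(t) - y) ** 2))

sse3, sse5 = sse(3), sse(5)
# the degree-5 fit matches the training points at least as well as degree 3
```

The flip side is the tail behaviour the asker ran into: outside the fitted range, an unconstrained odd-degree polynomial will eventually head off to minus or plus infinity regardless of the data.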