I have written the following two functions to calibrate a model:
The main function is:
def function_Price(para, y, t, T, tau, N, C):
    # y = price array
    # C = auto- and cross-correlation array
    # a = parameters to be calibrated
    a = para[0:]
    temp = 0
    for j in range(N):
        price_j = a[j] * C[j] * y[t:T - tau, j]
        temp = temp + price_j
    Price = temp
    return Price
The objective function is:
def GError_function_Price(para, y, k, t, T, tau, N, C):
    # k is the price series to be fitted
    return sum((function_Price(para, y, t, T, tau, N, C) - k[t + tau:T]) ** 2)
Now, I am calling these two functions to do the optimization of the model:
import numpy as np
from scipy.optimize import minimize
# Prices (example)
y = np.array([[1,2,3,4,5,4], [4,5,6,7,8,9], [6,7,8,7,8,6], [13,14,15,11,12,19]])
# Correlation (example)
Corr = np.array([[1,2,3,4,5,4], [4,5,6,7,8,9], [6,7,8,7,8,6], [13,14,15,11,12,19], [1,2,3,4,5,4], [6,7,8,7,8,6]])
# Define
tau=1
Size = y.shape
N = Size[1]
T = Size[0]
t=0
# initial Values
para=np.zeros(N)
# Bounds
B = np.zeros(shape=(N,2))
for n in range(N):
    B[n][0] = float('-inf')
    B[n][1] = float('inf')
# Calibration
A = np.zeros(shape=(N,N))
for i in range(N):
    k = y[:, i]  # the series to be fitted
    C = Corr[i, :]
    parag = minimize(GError_function_Price(para, y, k, t, T, tau, N, C), para, method='SLSQP', bounds=B)
    A[i, :] = parag.x
Once I run the model, it should produce an N-by-N array of optimized parameter values. But except for the first column, it keeps returning zeros. Something is wrong.
Can you help me fix the problem, please?
I know how to do it in Matlab.
The following is the Matlab code:
The main function:
function Price=function_Price(para,P,t,T,tau,N,C)
    a=para(:,:);
    temp=0;
    for j=1:N
        price_j = a(j).*C(j).*P(t:T-tau,j);
        temp=temp+price_j;
    end
    Price=temp;
end
The objective function:
function gerr=GError_function_Price(para,P,Y,t,T,tau,N,C)
    gerr=sum((function_Price(para,P,t,T,tau,N,C)-Y(t+tau:T)).^2);
end
Now, I call these two functions in the following way:
P = [1,2,3,4,5,4;4,5,6,7,8,9;6,7,8,7,8,6;13,14,15,11,12,19];
AutoAndCrossCorr= [1,2,3,4,5,4;4,5,6,7,8,9;6,7,8,7,8,6;13,14,15,11,12,19;1,2,3,4,5,4;6,7,8,7,8,6];
tau=1;
Size = size(P);
N =6;
T =4;
t=1;
for i=1:N
    Y=P(:,i); % fitted one
    C=AutoAndCrossCorr(i,:);
    para=zeros(1,N);
    lb=repmat(-inf,N,1);
    ub=repmat(inf,N,1);
    parag=fminsearchbnd(@(para)abs(GError_function_Price(para,P,Y,t,T,tau,N,C)),para,lb,ub);
    a(i,:)=parag;
end
The problem seems to be that you're passing the result of a function call to minimize, rather than the function itself; the extra arguments are passed via the args parameter. So instead of:
minimize(GError_function_Price(para,y,k,t,T,tau,N,C),para,method='SLSQP',bounds=B)
the following should work:
minimize(GError_function_Price,para,args=(y,k,t,T,tau,N,C),method='SLSQP',bounds=B)
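For completeness, here is a minimal self-contained sketch of that calling pattern with a toy two-parameter least-squares objective (the sse helper and the data below are made up purely for illustration):
import numpy as np
from scipy.optimize import minimize

def sse(para, x, target):
    # Sum of squared errors of a simple linear model; para is the vector minimize varies.
    return np.sum((para[0] + para[1] * x - target) ** 2)

x = np.arange(5.0)
target = 2.0 + 3.0 * x

# Pass the function object itself; the extra data goes in through args.
res = minimize(sse, np.zeros(2), args=(x, target), method='SLSQP')
print(res.x)  # approximately [2., 3.]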
I'm using the statsmodels ARMA functionality to produce a forecast using an ARMAX(0,2) model with 1 exogenous variable and am getting counterintuitive results.
Regardless of the exogenous variables passed to exog in the forecast function (e.g. arma_res.forecast(steps=5, exog=f_exog)), an identical forecast array is returned except for the last element.
For example:
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima_process import arma_generate_sample

arparams = np.array([0])
maparams = np.array([.65, .35])
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
exog = np.random.normal(size=nobs)
arma_mod = sm.tsa.ARMA(y, order=(0, 2), exog=exog)
arma_res = arma_mod.fit(trend='nc', disp=-1)
# Exogenous vars for forecasts
f_exog = [10, 10, 10, 10, 10]
f_exog_2 = [x * 5 for x in f_exog]
forecast_1 = arma_res.forecast(steps=5, exog=f_exog)[0]
forecast_2 = arma_res.forecast(steps=5, exog=f_exog_2)[0]
Produces forecasts of:
array([-0.0884847 , -0.03223685, -0.00190045, -0.00860229, 0.0987421 ])
array([-0.0884847 , -0.03223685, -0.00190045, -0.00860229, 0.49371049])
A notebook with the setup for this is here: https://github.com/dbrodSq/ARMA_Test/blob/master/Statsmodels_forecast_example.ipynb
The above pattern holds regardless of the "steps" parameter. Does anyone have an idea why?
I believe multi-period ARMAX forecasting is possible with statsmodels, and I don't think the problem above occurs as long as k_ar != 0 (i.e. there is at least one AR parameter).
Apologies if I'm missing something obvious and thanks a lot for any help.
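For what it's worth, here is how I would quantify the discrepancy (a diagnostic sketch run after the code above; it assumes the exogenous coefficient sits in arma_res.params[0] when trend='nc', and that with no AR terms the two forecasts should differ by the exogenous effect alone):
import numpy as np

observed_diff = forecast_2 - forecast_1
# If the exog term were applied at every horizon, this difference should match
# beta_exog * (f_exog_2 - f_exog) element-wise, not only in the last element.
beta_exog = arma_res.params[0]  # assumed position of the exog coefficient
expected_diff = beta_exog * (np.array(f_exog_2) - np.array(f_exog))

print(observed_diff)   # here: zeros except for the last element
print(expected_diff)   # nonzero at every step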
I'm trying to figure out the proper format of a Python list to be given as input to the svm_problem function. I got the following program from the web (Stack Overflow).
I have the following:
from svm import *
x=[ [1,0,1],[-1,0,-1],[1,0,0]]
#x=[ [1,0,1],[-1,0,-1]]
prob = svm_problem( [1,-1],x )
param = svm_parameter(kernel_type = LINEAR, C = 10)
m = svm_model(prob, param)
print m.predict([ 1,1, 1])
It raises an assertion error that says: assertion failed: len(x) == len(y).
But if x = [[1,0,1], [-1,0,-1]], the program works perfectly. Am I not supposed to give training data with more than two examples?
Also, I don't understand which part of x = [[1,0,1], [-1,0,-1]] is the label and which is the data.
Any help is highly appreciated.
svm_problem() takes two parameters: the first is a vector of labels and the second is a matrix of features. You get this assertion error because you are only specifying two labels, [1, -1], as the first parameter while x contains three feature vectors.
Example:
y = [1,-1,1,1]
x = [[1,0,1], [-1,0,-1], [1,2,3], [4,5,6]]
prob = svm_problem(y, x)
If you give three examples, you also need to give three class labels, so you need to do
prob = svm_problem( [1,1,-1],x )
or something similar.
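Putting it together with the three training vectors from your question (a sketch using the same old libsvm Python bindings you import; newer libsvm releases expose svm_train/svm_predict via svmutil instead, so the exact names may differ):
from svm import *

# Three feature vectors need three labels, so len(y) == len(x).
x = [[1, 0, 1], [-1, 0, -1], [1, 0, 0]]
y = [1, 1, -1]

prob = svm_problem(y, x)                         # labels first, features second
param = svm_parameter(kernel_type=LINEAR, C=10)
m = svm_model(prob, param)
print(m.predict([1, 1, 1]))                      # predicted label for a new vector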