My issue is that I'm trying to simulate modifications of the GARCH model, such as IGARCH, FIGARCH, or HYGARCH.
I have already found that some of them can be generated in R (the rugarch package, or the no-longer-existing fSeries package) or in Python (the arch library). I will organize my questions into the following points:
1. How can I simulate an IGARCH model in Python?
I tried these two ways:
1) I used GARCH.simulate with fixed parameters where the alphas and betas sum to 1. But there was an error message about non-stationarity, and the intercept was used to initialize the model. I'm not sure whether that is OK and whether the simulated series is still IGARCH (in my opinion it is only shifted by a constant, so it should have no crucial effect).
I have also programmed my own function for GARCH simulation, and it works also for coefficients that sum to 1 (a sketch appears after this list). Hopefully the implementation is good... The only restriction that differentiates IGARCH from GARCH is that the sum of the coefficients equals 1, right?
2) I used FIGARCH.simulate with fixed parameters where d=1, the special case in which a FIGARCH model becomes an IGARCH(p,q) model. But there the error was *invalid value encountered in sqrt: data[t] = errors[t] * np.sqrt(sigma2[t])*.
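For reference, a minimal sketch of my manual IGARCH(1,1) simulation (the own-function approach from 1); the parameter values are arbitrary:

import numpy as np

def simulate_igarch(n, omega=1e-5, alpha=0.1, seed=None):
    # IGARCH(1,1): sigma2[t] = omega + alpha * data[t-1]**2 + beta * sigma2[t-1], with beta = 1 - alpha
    rng = np.random.default_rng(seed)
    beta = 1.0 - alpha
    errors = rng.standard_normal(n)
    sigma2 = np.empty(n)
    data = np.empty(n)
    sigma2[0] = omega / alpha  # arbitrary positive start value; IGARCH has no unconditional variance
    data[0] = errors[0] * np.sqrt(sigma2[0])
    for t in range(1, n):
        sigma2[t] = omega + alpha * data[t - 1] ** 2 + beta * sigma2[t - 1]
        data[t] = errors[t] * np.sqrt(sigma2[t])
    return data, sigma2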
2. Is there any free software in which HYGARCH is implemented? If not, could somebody advise me how to implement that model in R or Python?
The model is already implemented in OxMetrics 8, but that is paid software. I found that an interface to these OxMetrics functions was implemented in the R package fSeries which, however, doesn't exist anymore, and I'm not able to install the older version on my R version 3.5.1. In Python no such model is implemented. To make the question concrete, my rough attempt at a simulation is sketched below.
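My attempt would be to truncate the ARCH(inf) representation of HYGARCH(1,d,1), sigma2_t = omega/(1 - beta) + lambda(L) * eps2_t with lambda(L) = 1 - (1 - beta*L)^(-1) * (1 - phi*L) * (1 + alpha*((1 - L)^d - 1)), where alpha = 1 should recover FIGARCH. This is only a sketch under that parameterization, with no parameter validation, so I'm not sure it is correct:

import numpy as np

def hygarch_weights(d, alpha, phi, beta, trunc=1000):
    # pi_k: coefficients of (1 - L)^d
    pi = np.empty(trunc + 1)
    pi[0] = 1.0
    for k in range(1, trunc + 1):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    c = alpha * pi              # coefficients of 1 + alpha*((1 - L)^d - 1)
    c[0] = 1.0
    dc = c.copy()               # multiply by (1 - phi*L)
    dc[1:] -= phi * c[:-1]
    g = np.empty_like(dc)       # divide by (1 - beta*L)
    g[0] = dc[0]
    for k in range(1, trunc + 1):
        g[k] = dc[k] + beta * g[k - 1]
    lam = -g
    lam[0] = 0.0                # lambda(L) starts at lag 1
    return lam

def simulate_hygarch(n, omega, d, alpha, phi, beta, trunc=1000, burn=500, seed=None):
    rng = np.random.default_rng(seed)
    lam = hygarch_weights(d, alpha, phi, beta, trunc)
    const = omega / (1.0 - beta)
    z = rng.standard_normal(n + burn)
    eps2 = np.zeros(n + burn)
    data = np.zeros(n + burn)
    for t in range(n + burn):
        lags = eps2[max(0, t - trunc):t][::-1]    # eps2[t-1], eps2[t-2], ...
        sigma2 = const + lam[1:len(lags) + 1] @ lags
        data[t] = z[t] * np.sqrt(sigma2)
        eps2[t] = data[t] ** 2
    return data[burn:]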
3. Does the model specification "fGARCH" in the function ugarchspec(...) of the rugarch package in R correspond to the FIGARCH model? If not, is FIGARCH implemented in R at all?
My R code is:
spec = ugarchspec(variance.model = list(model = "fGARCH", garchOrder = c(1, 1), submodel = "GARCH"),
                  mean.model = list(armaOrder = c(0, 0), include.mean = TRUE),
                  distribution.model = "std",
                  fixed.pars = list(mu = 0.001, omega = 0.00001, alpha1 = 0.05,
                                    beta1 = 0.90, delta = 0.00001, shape = 4))
I found somewhere on the Internet that it is also possible to specify "figarch" in the above function instead of "fGARCH", but in that case I'm not able to simulate a path using ugarchpath(...).
Thank you in advance! I need this for my master's thesis, so I appreciate any recommendations, advice, etc.
Domca
I am working with some MIP models whose only integer variables are binaries, which represent optional elements in a network. Instead of solving the complete MIP, I want to try out an algorithm with the following steps (sketched in code after this list):
Solve the problem with all binaries fixed to zero
Solve the problem with all binaries fixed to one
Use the difference in objective as a budget to remove some of the binaries with too expensive an objective contribution from the pool of options and solve again; repeat this step until either no binaries remain or no binary is removed during an iteration
Solve the reduced MIP, in which some of the binaries are fixed to zero, which reduces the number of effective binary variables
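In code, the loop looks roughly like this (a simplified sketch; the objective name obj and the per-binary cost estimates costs are placeholders):

import pyomo.environ as pyo

def reduce_binaries(model, binaries, costs, optimizer):
    # Step 1: all binaries fixed to zero
    for b in binaries:
        b.fix(0)
    optimizer.solve(model)
    obj_zero = pyo.value(model.obj)
    # Step 2: all binaries fixed to one
    for b in binaries:
        b.fix(1)
    optimizer.solve(model)
    obj_one = pyo.value(model.obj)
    # Step 3: prune options whose contribution exceeds the budget
    budget = abs(obj_one - obj_zero)
    pool = set(binaries)
    while pool:
        removed = {b for b in pool if costs[b] > budget}
        if not removed:
            break
        for b in removed:
            b.fix(0)
        pool -= removed
        optimizer.solve(model)
    # Step 4: solve the reduced MIP with the remaining binaries free
    for b in pool:
        b.unfix()
    return optimizer.solve(model)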
From my understanding, this algorithm would benefit from warmstarts, as the last solution, including the changes from variable fixings after the solve() calls, would provide a feasible solution and a bound for the model. I also use the deactivate() and activate() methods to change objectives and to remove a constraint during steps 2 and 3. For the constraint I also wrote code that sets the variables to a feasible solution after reactivating it.
When executing
results = optimizer.solve(self.pyM, warmstart=True, tee=True)
using Gurobi, it seems that Gurobi doesn't use the current variable values in the Pyomo model but instead only uses the values from the last solve() results, without the subsequent changes (fixing variables to one/zero, adjusting values for the constraint).
I assume this because if I don't reactivate the constraint and run a model where no binaries can be removed, the log reports a working MIP start, while when I reactivate it the log gives the following output:
Read MIP start from file /tmp/tmp3lq2wbki.gurobi.mst
[some parameters settings and model characteristics]
User MIP start did not produce a new incumbent solution
User MIP start violates constraint c_e_x96944_ by 0.450000000
regardless of whether I comment out the code that adjusts the values. I also expect that code snippet to work properly, as I tested it separately and checked with the help of the display() method that the value of the constraint body lies between both bounds. In step 2 only the 'Read MIP start' line from above appears in the log, with no statement about what happened to it.
Is it possible to tell Pyomo to use the values from the Pyomo model instead, or to update the .mst file with the updated Pyomo model values?
I found this GurobiPersistent class:
https://pyomo.readthedocs.io/en/stable/library_reference/solvers/gurobi_persistent.html
I tried
import pyomo.solvers.plugins.solvers.gurobi_persistent as gupyomo
[...]
test = gupyomo.GurobiPersistent(model=self.pyM)
for variable in adjustedVariables:
    test.update_var(variable)
test.update()
but this neither produced output/errors nor changed the behaviour. Therefore I assume this is either not the correct approach, or I used it wrong.
Additional info:
gurobi 9.0.2
pyomo 5.7.1
If specific parts of the code might be helpful I can provide them, but I wasn't sure which parts would be relevant to the question.
What seemed to work for me, instead of the code above, was using optimizer.set_instance(self.pyM) at the start of my code and calling optimizer.update_var(var[index]) whenever I change something, like fixing the variable var[index].
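A minimal sketch of that pattern (with a hypothetical toy model) could look like:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.x = pyo.Var([1, 2], domain=pyo.Binary)
model.obj = pyo.Objective(expr=model.x[1] + 2 * model.x[2])
model.c = pyo.Constraint(expr=model.x[1] + model.x[2] >= 1)

opt = pyo.SolverFactory('gurobi_persistent')
opt.set_instance(model)      # hand the model to the persistent interface once

model.x[1].fix(0)            # change the model on the Pyomo side ...
opt.update_var(model.x[1])   # ... and push that change to the solver

results = opt.solve(tee=True)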
I want to get strong branching scores with CPLEX and Python, and as a first step I just tried to use "cplex.advanced.strong_branching" to solve a very simple MILP problem (my code followed the example usage of this function exactly). However, it told me "CPLEX Error 1017: Not available for mixed-integer problems", which confused me because SB should be a traditional branch-and-bound technique. When I used it on an LP problem it worked well.
The error seemed to be raised from "CPXXstrongbranch", a base C/C++ API function, which also made me wonder how CPLEX can make SB decisions when I set the branching strategy parameter to SB. A related question is that I know the Python API doesn't have the important "CPXgetcallbacknodelp" function, so how could "cplex.advanced.strong_branching" work? Could that be the reason for this error?
I don't fully understand how "CPXstrongbranch" works in C, so the following information may be incorrect: I tried to use "CPXstrongbranch" in the user-set branch callback of the example "adlpex1.c", and the same error was raised; that stopped me from using "ctypes" to get at the "CPXgetcallbacknodelp" function.
Could it be a version problem? Does CPLEX block access to SB? I have read a paper that relied on the SB scores in CPLEX 12.6.1 via the C API. Or maybe I just made some mistakes.
My question is whether CPLEX can do SB and deliver its results to the user for a MILP problem.
cplex.advanced.strong_branching does not carry out any branching. The doc is a bit confusing here. What this function does is compute the strong branching scores for the variables you pass (or all variables if you don't pass a list).
This requires an LP because usually, in a MIP search tree, you would call this function on the current LP relaxation.
If you want to use CPLEX with strong branching then set the variable selection parameter to "strong branching":
import cplex

with cplex.Cplex() as cpx:
    cpx.parameters.mip.strategy.variableselect.set(
        cpx.parameters.mip.strategy.variableselect.values.strong_branching)
    cpx.solve()
The strong_branching function is only needed if you want to implement your own branching algorithm.
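To compute scores for a MILP yourself, you would first drop the integrality restrictions, then call the function on the resulting LP. A sketch (the model file "model.lp" is hypothetical, and the call assumes the pattern from the example you mention: a list of variable indices plus a per-variable iteration limit):

import cplex

with cplex.Cplex("model.lp") as cpx:
    cpx.set_problem_type(cpx.problem_type.LP)  # drop integrality: LP relaxation
    cpx.solve()
    # strong-branching scores for the first ten variables, 100 iterations each
    scores = cpx.advanced.strong_branching(list(range(10)), 100)
    print(scores)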
I have a MILP problem with binary and continuous variables. I want to use the Benders' decomposition algorithm to solve it. I am following this presentation: http://www.iems.ucf.edu/qzheng/grpmbr/seminar/Yuping_Intro_to_BendersDecomp.pdf
I separate my problem into a master problem composed of the discrete variables and a slave problem that has the continuous variables.
I am using the CPLEX Python API to solve it, based on the ATSP example: https://github.com/cswaroop/cplex-samples/blob/master/bendersatsp.py
I create the master problem and the dual of the slave as in this example.
I use the BendersLazyConsCallback, but I am not sure I understand it entirely. Please help me through it.
The function separate is called when the current master solution is obtained; then the dual objective function is updated and the dual problem is re-solved.
If the dual is unbounded then it adds the ray to the master problem constraints, e.g., self.add(constraint = workerLP.cutLhs, sense = "L", rhs = workerLP.cutRhs), which happens in the BendersLazyConsCallback class.
But the example does not include the code for the case when the dual is optimal.
So, when the dual is optimal, I add a similar call and add the constraint to the master problem based on the dual solution.
However, if I try to print the master problem constraints, e.g., with problem.linear_constraints.get_rows(), I do not see the newly included constraints. It seems that the self.add(constraint=workerLP.cutLhs, sense="L", rhs=workerLP.cutRhs) call does not push the cut into the master constraints but keeps it as a member of the LazyConstraintCallback class. Is this correct? How can I see that these new constraints are actually added?
Also, how does the algorithm stop? In the traditional Benders' algorithm, the lower and upper bounds of the problem are updated based on the dual and master solutions, and when they are equal we have convergence.
In the ATSP example, I don't see where this is happening. When exactly is the BendersLazyConsCallback triggered and how does it know when to stop?
The example that you linked to above is from CPLEX 12.6. This is quite old, by now (currently, the latest version is 12.9). If you haven't updated to a more recent version yet, it would probably be a good idea to do so. One reason to update is that with CPLEX 12.7, there is support for automatic Benders' decomposition. With CPLEX 12.8, there is a new generic callback (and a new bendersatsp2.py example to demonstrate it).
With that said, I'll try to answer some of your other questions.
First, as described here, if you write out the model, it will not include the lazy constraints that you have added dynamically in the callback. You can print them out in the callback yourself easily enough (e.g., print("LC:", workerLP.cutLhs, "G", workerLP.cutRhs)). You can determine whether your constraints have been applied by the presence of a message at the end of the engine log, like:
User cuts applied: 3
As to your final questions about how lazy constraints work, please see the section on legacy callbacks in the CPLEX User's Manual. Also, there is the section on termination conditions of the MIP optimizer. The lazy constraint callback is invoked whenever CPLEX finds an integer feasible solution (see this thread on the IBM developerWorks forum for more on that).
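For illustration, printing the cut inside the callback could look roughly like this (a sketch following the bendersatsp.py pattern; attribute names such as workerLP and x_indices are placeholders):

from cplex.callbacks import LazyConstraintCallback

class BendersLazyConsCallback(LazyConstraintCallback):
    def __call__(self):
        x = self.get_values(self.x_indices)   # current integer-feasible master solution
        if self.workerLP.separate(x):         # dual subproblem produced a violated cut
            print("LC:", self.workerLP.cutLhs, "L", self.workerLP.cutRhs)
            self.add(constraint=self.workerLP.cutLhs, sense="L", rhs=self.workerLP.cutRhs)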
I have a problem with XGBoost, which I use at work. My task is to port a piece of code that currently runs in R to Python.
What the code does:
My aim is to use XGBoost to determine the features with the most gain. I made sure the inputs into XGBoost are identical in R and Python. XGBoost is run roughly 100 times (on different data), and each time I extract the 30 best features by gain.
My problem is this:
The inputs in R and Python are identical. Yet Python and R output vastly different features (both in terms of the total number of features per round and in terms of which features are chosen). They share only about 50% of the features. My parameters are the same, and I don't use any subsampling, so there should be no randomness.
Also, another thing I noticed: XGBoost is slower in Python than in R with the same parameters. Is that a known issue?
R parameters
Python parameters
I've been trying to look around but didn't find anyone with a similar problem. I can't share the data or code because they're confidential. Does someone have an idea why the features differ so much?
R version: 3.4.3
XGBoost R version: 0.6.4.1
python version: 3.6.5
XGBoost python version: 0.71
Running on Windows.
You set the internal seed in the R code but not the Python code.
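For instance, in the Python API the internal seed is just another entry in the parameter dict. A sketch (dtrain stands for an xgb.DMatrix built from your data; the other parameter values are illustrative):

import xgboost as xgb

params = {
    "objective": "binary:logistic",
    "eta": 0.1,
    "max_depth": 6,
    "seed": 42,  # internal seed, analogous to the seed set in the R code
}
bst = xgb.train(params, dtrain, num_boost_round=100)

# gain-based importances, then the 30 best features
gain = bst.get_score(importance_type="gain")
top30 = sorted(gain.items(), key=lambda kv: kv[1], reverse=True)[:30]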
More of an issue is likely that Python and R may also use different random number generators, so even with internal and external seeds always set you could get different sequences. This thread may help in that respect.
I would also hazard a guess that the variables not selected in one model provide similar information to those selected in the other, such that swapping variables one way or another shouldn't impact model performance significantly. Although I don't know whether the R model and the Python one perform the same?
I have a situation where I need to train a model in R and then use the model coefficients (betas) obtained from it to perform regression classification on semi-live data. The production system is implemented in pure Python (data processing) and Django (web interface).
The model coefficients will be recalculated manually every week; right now this produces a CSV that is read by the Python code. I just wanted to know if there is a better way of doing this?
This is mostly a question about the established best practices for cases like this, even though the current approach works.
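For context, the current hand-off amounts to roughly the following sketch (the file, column, and intercept names are hypothetical, and a logistic inverse-link is assumed for the classification step):

import numpy as np
import pandas as pd

# R writes one row per coefficient, e.g. columns "term" and "estimate"
betas = pd.read_csv("coefficients.csv", index_col="term")["estimate"]

def score(X):
    # rebuild the linear predictor from the weekly betas
    features = betas.index.drop("(Intercept)")
    eta = betas["(Intercept)"] + X[features].mul(betas[features]).sum(axis=1)
    return 1.0 / (1.0 + np.exp(-eta))  # class probability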