I am trying to run a parametric sweep in OpenModelica using OMPython. Let's assume that I have a Modelica model my_model.mo belonging to the library my_library. The model has two parameters: a and b.
I successfully managed to run a single parametric run by using the following code:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
where the file parameter_sweep.txt is:
a=5
b=6
Now the question is: how can I run multiple parametric runs? I could add one more line to the code where a new txt file (parameter_sweep1.txt) with a new set of values for the parameters is used:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep1.txt", stopTime=86400)')
However, I am afraid that this way the model is recompiled for every run. Is there a way to do multiple parametric runs and avoid recompilation?
Use the buildModel command instead of simulate. Then start the process manually in Python using a library such as subprocess. The command is simply something like:
["./my_library.my_model", "-overrideFile=parameter_sweep.txt"]
(If you use Windows, I believe you also need to update your PATH environment variable so that the required DLLs are found. On Linux, it just works.)
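A minimal sketch of the whole approach, assuming the compiled executable ends up in the current working directory under the model's name (as in the command above; the exact name and location may differ on your system):

from OMPython import OMCSessionZMQ
import subprocess

omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
# compile the model once instead of calling simulate()
omc.sendExpression('buildModel(my_library.my_model, stopTime=86400)')

# re-run the compiled binary with different override files; no recompilation
for override_file in ["parameter_sweep.txt", "parameter_sweep1.txt"]:
    subprocess.run(["./my_library.my_model", "-overrideFile=" + override_file],
                   check=True)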
Imagine I have a .mzn file called abc.mzn, which is as follows.
array[1..3] of int:a;
output[show(a)];
Now I have a .dzn file called cde.dzn and it is as follows.
a=[1,2,3];
I run the minizinc Python package as below:
import minizinc
from minizinc import Instance, Model, Solver

x = Solver.lookup("gecode")
M1 = Model("./abc.mzn")
instance1 = Instance(x, M1)
instance1["a"] = [1, 2, 3]
result = instance1.solve()
print(result)
print(result)
The above code works fine and there is no problem with that. I am keen to use the dzn module in this Python code instead of the Instance module, and to get rid of manually assigning the line below.
As you can see, we need to manually assign the values for all parameters using instance1=..
instance1("a")=[1,2,3]
Is there any way that we can use the .dzn file to assign the values (using the dzn module)? I noted that the dzn module is already there in the package itself.
Can we do it in the below manner, and if so, how do we get the results?
import minizinc as minizinc
from minizinc import dzn,Model,Solver
M1=Model("./abc.mzn")
D1=dzn("./cde.dzn") etc..
The DZN module in MiniZinc Python is meant to be used through the .add_file method of Instance/Model. Using this method you can add data files (.dzn/.json) or additional model files (.mzn) to your MiniZinc model or instance.
So your example would become:
from minizinc import Model
M1 = Model("./abc.mzn")
M1.add_file("./cde.dzn")
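You can then build an Instance from the model and solve it as before. A short sketch, assuming the solver tag gecode and the two files from the question:

from minizinc import Instance, Model, Solver

gecode = Solver.lookup("gecode")
M1 = Model("./abc.mzn")
M1.add_file("./cde.dzn")  # the data file assigns a = [1, 2, 3]
instance1 = Instance(gecode, M1)
result = instance1.solve()
print(result)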
I have a Python script which generates a weighted random graph. I want to use the weights generated by that script in a different Julia program. I am able to run the Python code from Julia by using PyCall, but I am unable to get any of the data from the graph. Is there any way to do that?
The variable 'wt' stores the edge data in the Python code.
When I print 'wt' in the Python code, it prints the nodes between which an edge is present, together with the weights.
This gives me the required graph. I want to access 'wt' in Julia. How can I do that?
Python code
wt = G.edges.data('weight')
print(wt)
Julia code
using PyCall
y = py"exec(open('wtgraph.py').read())"
For your example it would be something like this (you didn't provide the complete code):
using PyCall

py"""
import something as G  # placeholder from the question: import or build your graph here

def py_function(x):
    # return the edge data as a list so PyCall can convert it to a Julia array
    return list(G.edges.data(x))
"""

wt = py"py_function"("weight")
I primarily program in Python (using Jupyter notebooks), but on occasion need to use an R function. I currently do this via rpy2 and R magic, which works fine. Now I would like to write a function that summarizes part of my analysis procedure into one wrapper function (so I don't always need to run all of the code cells but can simply execute the function once). As part of this procedure I need to call an R function. I adapted my code to import the R function into Python using the rpy2.robjects interface with importr. This works, but it is extremely slow (more than triple the run time of an already lengthy procedure), which makes it simply not feasible for my analysis. I am assuming this has to do with me accessing R through the high-level interface of rpy2 instead of the low-level interface. I am unsure how to use the low-level interface within a function call, though, and would need some help adapting my code.
I've tried looking into the rpy2 documentation but am struggling to understand it.
This is my code for executing the R function call from within python using R magic.
Activating rpy2 R magic
%load_ext rpy2.ipython
Load my required libraries
%%R
library(scran)
Actually call the R function
%%R -i data_mat -i input_groups -o size_factors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min.mean=0.1)
This is my alternative code to import the R function using rpy2 importr.
from rpy2.robjects.packages import importr
scran = importr('scran')
computeSumFactors = scran.computeSumFactors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min_mean=0.1)
For some reason this second approach is orders of magnitude slower.
Any help would be much appreciated.
The only difference between the two that I can see having an influence on the observed execution speed is conversion.
When running in an "R magic" code cell (prefixed with %%R), in your example the result of calling computeSumFactors() is an R object bound to the symbol size_factors in R. In the other case, the result of calling the function computeSumFactors() will go through the conversion system (and what exactly happens there depends on which converters are active) before the result is bound to the Python symbol size_factors.
Conversion can be costly: you should consider trying to deactivate the numpy / pandas conversion (the localconverter context manager can be a convenient way to temporarily use minimal conversion for a block of code).
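A minimal sketch of that suggestion, assuming data_mat and input_groups are already rpy2 R objects (if they are numpy/pandas objects, convert them once up front rather than on every call):

from rpy2.robjects import default_converter
from rpy2.robjects.conversion import localconverter
from rpy2.robjects.packages import importr

scran = importr('scran')

# use only the minimal default converter inside this block, so the result of
# computeSumFactors() stays an R object instead of being copied to numpy/pandas
with localconverter(default_converter):
    size_factors = scran.computeSumFactors(data_mat,
                                           clusters=input_groups,
                                           min_mean=0.1)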
My python script passes changing inputs to a program called "Dymola", which in turn performs a simulation to generate outputs. Those outputs are stored as numpy arrays "out1.npy".
for i in range(0, 100):
    # code to initiate simulation
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out1.npy', output_data)
Unfortunately, Dymola crashes very often, which makes it necessary to rerun the loop from the time displayed in the console when it crashed (e.g. 50) and to increase the number of the output file by 1; otherwise the data from the first set would be overwritten.
for i in range(50, 100):
    # code to initiate simulation
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out2.npy', output_data)
Is there any way to read the 'stoptime' value (e.g. 50) from the console after Dymola has crashed?
I'm assuming dymola is a third-party entity that you cannot change.
One possibility is to use the subprocess module to start dymola and read its output from your program, either line-by-line as it runs, or all after the created process exits. You also have access to dymola's exit status.
If it's a Windows-y thing that doesn't do stream output but manipulates a window GUI-style, and if it doesn't generate a useful exit status code, your best bet might be to look at what files it has created while running or after it has gone; sorted(glob.glob("somepath/*.out")) may be useful.
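A rough sketch of the line-by-line variant, assuming Dymola can be started from the command line and that the loop prints '... time: <stoptime>' as in the question (the command name and output format here are assumptions; adjust them to your setup):

import subprocess

# hypothetical command line; replace with however Dymola is launched on your system
proc = subprocess.Popen(["dymola", "run_simulation.mos"],
                        stdout=subprocess.PIPE, text=True)

last_time = None
for line in proc.stdout:
    print(line, end="")
    if "time:" in line:  # matches the print() from the simulation loop
        last_time = line.rsplit("time:", 1)[1].strip()

proc.wait()
print("exit status:", proc.returncode, "last reported time:", last_time)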
I assume you're using the Dymola interface to simulate your model. If so, why don't you use the return value of the dymola.simulate() function and check for errors?
E.g.:
import numpy as np
from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
crash_counter = 1

for i in range(0, 100):
    res = dymola.simulate("myModel")
    if not res:
        crash_counter += 1
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out%d.npy' % crash_counter, output_data)
As it is sometimes difficult to install the DymolaInterface on your machine, here is a useful link.
Taken from there:
The Dymola Python Interface comes in the form of a few modules at \Dymola 2018\Modelica\Library\python_interface. The modules are bundled within the dymola.egg file.
To install:
The recommended way to use the package is to append the \Dymola 2018\Modelica\Library\python_interface\dymola.egg file to your PYTHONPATH environment variable. You can do so from the Windows command line via set PYTHONPATH=%PYTHONPATH%;D:\Program Files (x86)\Dymola 2018\Modelica\Library\python_interface\dymola.egg.
If this does not work, append the following code before instantiating the interface:
import os
import sys

sys.path.insert(0, os.path.join('PATHTODYMOLA',
                                'Modelica',
                                'Library',
                                'python_interface',
                                'dymola.egg'))
I have a model that needs one specific value updated for all the patches before every run. After each timestep, these values are updated (via an external model).
This means the NetLogo model has to run and then stop (take a break), I need to output some data, then I need to update the patch values, and then run NetLogo again. I would like to run one R script to set up the NetLogo model, then run another similar R script to run the go function in NetLogo. However, currently,
- I close the R script which performs the NetLogo setup,
- then I try to run another similar R script with the go function (without setup), and this second script doesn't execute.
Does anyone have experience with initializing NetLogo through R without running setup? In other words, I am trying to specify the initial conditions without a spin-up run (without the setup part). Is this possible, and if yes, how? Even though I wrote about R, this is not a necessity: I could also use the Python interface, but I need an interface without a GUI, as this needs to run on a terminal. The fundamental question is how to specify the initial conditions for a run.
So here is example of R code:
# for set up the model
# load RNetLogo package
library(rJava)
library(RNetLogo)
require(RNetLogo)
nl.path <- "C:\\Program Files (x86)\\NetLogo 5.2.0"
# the path to the NetLogo model file
model.path <- "......\\veg_model_1.nlogo"
#Load specific model
my.netlogo <-"veg_model_1.nlogo"
NLStart(nl.path, gui=F, nl.obj=my.netlogo) #Creates an instance of NetLogo.
NLLoadModel(model.path,nl.obj=my.netlogo)
NLCommand("setup", nl.obj=my.netlogo) #Executes a command
NLQuit(nl.obj = my.netlogo)
# update the value and run go for one year
# load the RNetLogo package
library(rJava)
library(RNetLogo)
# an R random seed (for being reproducible)
set.seed(-986131948)
nl.path <- "C:\\Program Files (x86)\\NetLogo 5.2.0"
# the path to the NetLogo model file
model.path <- ".......\\veg_model_1.nlogo"
# load the specific model
my.netlogo <- "veg_model_1.nlogo"
NLStart(nl.path, gui=F, nl.obj=my.netlogo) # creates an instance of NetLogo
NLLoadModel(model.path, nl.obj=my.netlogo)
# here is the value I need to update
NLCommand("Setpatchwaterpotential", nl.obj=my.netlogo) # executes a command
NLCommand("go", nl.obj=my.netlogo)
NLQuit(nl.obj=my.netlogo)
## in NetLogo, the setup and go procedures:
to setup
  clear-all
  reset-ticks
  setup-globals
  setup-patches ; init patches, init hydroregime
  setup-individuals
end

to Setpatchwaterpotential
  ; read the input files
end

to go
  ifelse ticks = 0
  [
    Setpatchwaterpotential
    ......
    tick ; count timesteps (ticks) = how often the model has stepped
  ]
end
Thanks
I don't know if you have taken a look at the RNetLogo package for R. You can find examples in the paper by Jan C. Thiele in JSS. To me, your problem is not a setup problem: you can run what you want and interact with the model as you want.
In R, NLCommand() lets you send exactly what you want to NetLogo.
NLCommand("set timeV ", 255)
The go procedure can be run in a loop, so you can advance the model step by step. For example:
for(i in 1:2000){
  NLCommand("go")
  if(i %% 10 == 0){
    # every 10 steps, keep track of the agents' layout
    pos.agents <- NLGetAgentSet(c("who", "xcor", "ycor", "size", "color",
                                  "stockCoopSugar", "plocsugar", "ticks"),
                                "turtles")
  }
}
Every 10 steps, I keep track of my agents' layout.
I hope it helps.
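Since you mention that the Python interface would also be acceptable: below is a rough sketch of the same pattern using the pyNetLogo package (the constructor arguments and the reporter used are assumptions; check the package documentation). The key point is to keep a single NetLogo instance alive, so that setup runs once and you can alternate between updating patch values and calling go instead of quitting and restarting between scripts.

import pyNetLogo

# start one headless NetLogo session and keep it alive for the whole run
netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model('veg_model_1.nlogo')
netlogo.command('setup')  # the initial conditions are set once, in this session

for step in range(2000):
    # update the externally computed patch values, then advance one tick
    netlogo.command('Setpatchwaterpotential')
    netlogo.command('go')
    if (step + 1) % 10 == 0:
        n_turtles = netlogo.report('count turtles')  # read state back out

netlogo.kill_workspace()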