My Python script passes changing inputs to a program called "Dymola", which in turn performs a simulation to generate outputs. Those outputs are stored as numpy arrays, e.g. "out1.npy".
for i in range(0, 100):
    # code to initiate simulation
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out1.npy', output_data)
Unfortunately, Dymola crashes very often, so I have to rerun the loop from the iteration displayed in the console at the time of the crash (e.g. 50) and increase the number of the output file by 1. Otherwise the data from the first set would be overwritten.
for i in range(50, 100):
    # code to initiate simulation
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out2.npy', output_data)
Is there any way to read the 'stoptime' value (e.g. 50) from the console after Dymola has crashed?
I'm assuming dymola is a third-party entity that you cannot change.
One possibility is to use the subprocess module to start dymola and read its output from your program, either line-by-line as it runs, or all after the created process exits. You also have access to dymola's exit status.
If it's a Windows-y thing that doesn't do stream output but manipulates a window GUI-style, and if it doesn't generate a useful exit status code, your best bet might be to look at what files it has created while running or after it has exited. sorted(glob.glob("somepath/*.out")) may be useful.
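A minimal sketch of the line-by-line approach, assuming the simulator prints progress lines containing something like 'time: 50' to stdout (the executable name and output format are assumptions, so a stand-in command plays the role of Dymola here):

```python
import re
import subprocess
import sys

def last_reported_time(lines):
    """Return the last 'time: <N>' value seen in the output, or None."""
    last = None
    for line in lines:
        m = re.search(r'time:\s*(\d+)', line)
        if m:
            last = int(m.group(1))
    return last

# Stand-in for the real simulator call, e.g. something like:
#   proc = subprocess.Popen(["dymola.exe", "model.mo"], ...)
proc = subprocess.Popen(
    [sys.executable, "-c", "print('time: 49'); print('time: 50')"],
    stdout=subprocess.PIPE, text=True)

seen = []
for line in proc.stdout:   # reads line-by-line as the process runs
    seen.append(line)
proc.wait()

resume_at = last_reported_time(seen)   # restart the loop from here
print(resume_at, proc.returncode)
```

After a crash you would restart the loop at `resume_at` and bump the output-file counter; `proc.returncode` gives you the exit status if Dymola sets a useful one.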
I assume you're using the Dymola interface to simulate your model. If so, why don't you use the return value of the dymola.simulate() function and check for errors?
E.g.:
import numpy as np
from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
crash_counter = 1
for i in range(0, 100):
    res = dymola.simulate("myModel")
    if not res:
        crash_counter += 1
    print(startValues, 'ParameterSet:', ParameterSet, 'time:', stoptime)
    np.save('out%d.npy' % crash_counter, output_data)
As it is sometimes difficult to install the DymolaInterface on your machine, here is a useful link.
Taken from there:
The Dymola Python Interface comes in the form of a few modules at \Dymola 2018\Modelica\Library\python_interface. The modules are bundled within the dymola.egg file.
To install:
The recommended way to use the package is to append the \Dymola 2018\Modelica\Library\python_interface\dymola.egg file to your PYTHONPATH environment variable. You can do so from the Windows command line via set PYTHONPATH=%PYTHONPATH%;D:\Program Files (x86)\Dymola 2018\Modelica\Library\python_interface\dymola.egg.
If this does not work, add the following code before instantiating the interface:
import os
import sys
sys.path.insert(0, os.path.join('PATHTODYMOLA',
                                'Modelica',
                                'Library',
                                'python_interface',
                                'dymola.egg'))
Related
I am trying to run a parametric sweep in OpenModelica using OMPython. Let's assume that I have a Modelica model my_model.mo belonging to the library my_library. The model has two parameters: a and b.
I successfully managed to run a single parametric run by using the following code:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
where the file parameter_sweep.txt is:
a=5
b=6
Now the question is: how can I run multiple parametric runs? I could add one more line to the code where a new txt file (parameter_sweep1.txt) with a new set of values for the parameters is used:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep1.txt", stopTime=86400)')
However, I am afraid that this way the model needs to be recompiled for every run. Is there a way to do multiple parametric runs and avoid re-compilation?
Use the buildModel command instead of simulate, then start the process manually in Python using a library such as subprocess. The command is simply something like:
["./my_library.my_model", "-overrideFile=parameter_sweep.txt"]
(If you use Windows, I believe you need to update your PATH environment variable as well, in order to find the used DLLs. If you use Linux, it just works.)
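A sketch of that loop (the file names and parameter values are assumptions, and the actual subprocess.run call is left commented out since it only works once the model binary has been built with buildModel):

```python
import subprocess

# Build once, e.g. via OMPython:
#   omc.sendExpression('buildModel(my_library.my_model, stopTime=86400)')

# One override file per parameter set; the a=.../b=... format matches
# the parameter_sweep.txt shown in the question.
parameter_sets = [{"a": 5, "b": 6}, {"a": 7, "b": 8}]

commands = []
for n, params in enumerate(parameter_sets):
    fname = "parameter_sweep%d.txt" % n
    with open(fname, "w") as f:
        for key, value in params.items():
            f.write("%s=%s\n" % (key, value))
    # Re-run the already-compiled binary; no recompilation happens here.
    cmd = ["./my_library.my_model", "-overrideFile=%s" % fname]
    commands.append(cmd)
    # subprocess.run(cmd, check=True)  # uncomment once the model is built

print(commands[0])
```

Each iteration only rewrites a small text file and re-launches the existing executable, which is what avoids the recompilation cost.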
This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods; none of them works, and I cannot find any further solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what I try. I am using the most recent version as of this writing. Please let me know whether there is a solution.
For example, if you run the following program.
def myfunc(*args):
    ...  # Do a lot of things.

if __name__ == '__main__':
    ...  # Do something and call myfunc
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and do a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote a matrix multiplication code as shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
    result_matrix = [[1 for i in range(len(X))] for j in range(len(Y[0]))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with large enough matrices so that the overall execution time was in the order of few seconds.
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and under the bottom-up tab, order by "Module / Function / Call-stack" (or whatever preferred grouping is).
You should be able to see the module (mat_mul_method.py in my case) and the function matrix_mul. If you double-click, VTune should be able to load the sources too.
I am a fairly beginner programmer with Python and do not have much experience in general. Currently I'm trying to parallelize a heavily CPU-bound part of my code. I'm using Anaconda to create environments and Visual Studio Code to debug.
A summary of the code is as following :
from tkinter import filedialog
import concurrent.futures
import myfuncs as mf

file_path = filedialog.askopenfilename(title='Ask for a file containing data')
# import data from file_path
a = input('Ask the user for input')
Next, calculations are made from these, and I reach a stage where I need to iterate over a list of lists. These lists may contain up to two values, and calls are made to a separate file.
For example the inputs are :
sub_data1 = [test1]
sub_data2 = [test1, test2]
dataset = [sub_data1, sub_data2]
This is the stage where I use a concurrent.futures.ProcessPoolExecutor() instance and its .map() method:
with concurrent.futures.ProcessPoolExecutor() as executor:
    sm_res = executor.map(mf.process_distr, dataset)
Meanwhile, inside myfuncs.py, the mf.process_distr() function works like this:
def process_distr(tests):
    sm_reg = []
    for i in range(len(tests)):
        if i == 0:
            # do stuff
            sm_reg.append(result1)
        else:
            # do stuff
            sm_reg.append(result2)
    return sm_reg
The problem is that when I execute this code from main.py, main.py seems to start running multiple times: the user-input prompts and the file dialog pop up repeatedly (as many times as there are cores).
How can I resolve this matter?
Edit: after reading more into it, encapsulating the whole main.py code in

if __name__ == '__main__':

did the trick. Thank you to anyone who gave their time to help with my rookie problem.
I am trying to run my own version of the baselines reinforcement-learning code from GitHub (https://github.com/openai/baselines/tree/master/baselines/ppo2).
Whatever I do, I keep getting the same display, which looks like this:
Where can I edit it? I know I should edit the "learn" method, but I don't know how.
Those prints are the result of the following block of code, which can be found at this link (for the latest revision at the time of writing this at least):
if update % log_interval == 0 or update == 1:
    ev = explained_variance(values, returns)
    logger.logkv("serial_timesteps", update*nsteps)
    logger.logkv("nupdates", update)
    logger.logkv("total_timesteps", update*nbatch)
    logger.logkv("fps", fps)
    logger.logkv("explained_variance", float(ev))
    logger.logkv('eprewmean', safemean([epinfo['r'] for epinfo in epinfobuf]))
    logger.logkv('eplenmean', safemean([epinfo['l'] for epinfo in epinfobuf]))
    logger.logkv('time_elapsed', tnow - tfirststart)
    for (lossval, lossname) in zip(lossvals, model.loss_names):
        logger.logkv(lossname, lossval)
    logger.dumpkvs()
If your goal is still to print some things here, but different things (or the same things in a different format), your only real option is to modify this source file (or, if the code's license allows it, copy the code you need into a new file and apply your changes there).
If your goal is just to suppress these messages, the easiest way to do so is probably to run the following code before calling the learn() function:
from baselines import logger
logger.set_level(logger.DISABLED)
That's using this function to disable the baselines logger. It might also disable other baselines-related output though.
I have a model that needs to update one specific value for all patches before every run. After each timestep, these values are updated (via an external model).
This means the NetLogo model has to run and then stop (take a break); I need to output some data, then update the patch values, and then run NetLogo again. I would like to run one R script to set up the NetLogo model, then run another, similar R script to call the go function in NetLogo. However, currently:
- I close the R script which performs the NetLogo setup,
- then I try to run another, similar R script with the go function (without setup) – and this second script doesn't execute.
Does anyone have experience with initializing NetLogo through R without running setup? In other words, I am trying to specify the initial conditions without a setup run – is this possible, and if yes, how? Even though I wrote about R, R is not a necessity. I could also use the Python interface, but I need an interface without a GUI, as this needs to run on a terminal. The fundamental question is how to specify the initial conditions for a run.
So here is example of R code:
# to set up the model
# load RNetLogo package
library(rJava)
library(RNetLogo)
require(RNetLogo)
nl.path <- "C:\\Program Files (x86)\\NetLogo 5.2.0"
# the path to the NetLogo model file
model.path <- "......\\veg_model_1.nlogo"
#Load specific model
my.netlogo <- "veg_model_1.nlogo"
NLStart(nl.path, gui=F, nl.obj=my.netlogo) #Creates an instance of NetLogo.
NLLoadModel(model.path,nl.obj=my.netlogo)
NLCommand("setup", nl.obj=my.netlogo) #Executes a command
NLQuit(nl.obj = my.netlogo)
# to update the value and run go for 1 year
# load RNetLogo package
library(rJava)
library(RNetLogo)
require(RNetLogo)
# an R random seed (for being reproducible)
set.seed(-986131948)
nl.path <- "C:\\Program Files (x86)\\NetLogo 5.2.0"
# the path to the NetLogo model file
model.path <- ".......\\veg_model_1.nlogo"
#Load specific model
my.netlogo <- "veg_model_1.nlogo"
NLStart(nl.path, gui=F, nl.obj=my.netlogo) #Creates an instance of NetLogo.
NLLoadModel(model.path,nl.obj=my.netlogo)
# here is the value i needed to update
NLCommand("Setpatchwaterpotential", nl.obj=my.netlogo) #Executes a command
NLCommand("go", nl.obj=my.netlogo)
NLQuit(nl.obj = my.netlogo)
## in NetLogo, the setup and go procedures:
to setup
  clear-all
  reset-ticks
  setup-globals
  setup-patches ; init patches, init hydroregime
  setup-individuals
end

to Setpatchwaterpotential
  ; read input files
end

to go
  ifelse ticks = 0
  [
    Setpatchwaterpotential
    ......
    tick ; to count timesteps (ticks) = how often the model has run
  ]
end
Thanks
I don't know if you have taken a look at the RNetLogo package for R. You can find examples in the paper by Jan C. Thiele in JSS. For me, your problem is not a setup problem; you can run what you want and interact with the model as you want.
In R, with NLCommand() you can send to NetLogo exactly what you want.
NLCommand("set timeV ", 255)
The go procedure can be run in a loop, so you can go step by step. As an example:
for(i in 1:2000){
  NLCommand("go")
  if(i %% 10 == 0){
    pos.agents <- NLGetAgentSet(c("who", "xcor", "ycor", "size", "color",
                                  "stockCoopSugar", "plocsugar", "ticks"),
                                "turtles")
  }
}
Every 10 steps, I keep track of my agents' layout.
I hope it helps.