I am very new to docker and recently wrote a dockerfile to containerize a mathematical optimization solver called SuiteOPT. However, when testing the optimization solver on a few test problems I am experiencing slower performance in docker than outside of docker. For example, one demo problem of a linear program (demoLP.py) takes ~12 seconds to solve on my machine but in docker it takes ~35 seconds. I have spent about a week looking through blogs and stackoverflow posts for solutions but no matter what changes I make the timing in docker is always ~35 seconds. Does anyone have any ideas what might be going on or could anyone point me in the right direction?
Below are links to the docker hub and PYPI page for the optimization solver:
Docker hub for SuiteOPT
PYPI page for SuiteOPT
Edit 1: Adding an additional thought due to a comment from @user3666197. While I did not expect SuiteOPT to perform quite as well in the Docker container, I was mainly surprised by the ~3x slowdown for this demo problem. Perhaps the question can be restated as follows: How can I determine whether this slowdown is caused purely by the fact that I am executing CPU-, RAM-, and I/O-intensive code inside a Docker container, rather than by some other issue with the configuration of my Dockerfile?
Note: The purpose of this containerization is to provide a simple way for users to get started with the optimization software in Python. While the optimization software is available on PYPI there are many non-python dependencies that could cause issues for people wishing to use the software without running into installation issues.
Q : How can I determine whether this slowdown is caused purely by the fact that I am executing CPU-, RAM-, and I/O-intensive code inside a Docker container, rather than by some other issue with the configuration of my Dockerfile?
The battlefield ( credits: Brendan GREGG ) :
Step 0 : collect data about the Host-side run processing :
mpstat -P ALL 1 ### 1 [s] sampled CPU counters in one terminal-session (may log to file)
python demoLP.py # <TheWorkloadUnderTest> expected ~ 12 [s] on bare metal system
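To keep the host-side and container-side measurements comparable, a tiny timing wrapper that runs the same command a few times can help rule out one-off effects. This is only a sketch: it simply shells out to demoLP.py, so run the identical script both on bare metal and inside the container.

# time_demo.py -- minimal timing harness (sketch; run on host and in container)
import statistics
import subprocess
import time

def run_demo():
    # Run the demo as a subprocess so the exact same command is timed
    # on the host and inside the container.
    subprocess.run(["python", "demoLP.py"], check=True)

if __name__ == "__main__":
    samples = []
    for _ in range(3):
        t0 = time.perf_counter()
        run_demo()
        samples.append(time.perf_counter() - t0)
    print("runs [s]:", [round(s, 2) for s in samples],
          " median [s]:", round(statistics.median(samples), 2))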
Step 1 : collect data about the same processing but inside the Docker-container
plus review policies set in --cpus and --cpu-shares ( potentially --memory + --kernel-memory, if used )
plus review effects shown in throttled_time ( ref. Pg.13 )
cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat
nr_periods 0
nr_throttled 0
throttled_time 0 <-------------------------------------------------[*] increasing?
plus review the Docker-container's workload view-from-outside the box by :
cat /proc/<_PID_>/status | grep nonvolu ### in one terminal session
nonvoluntary_ctxt_switches: 6 <------------------------------------[*] increasing?
systemd-cgtop ### view <Tasks> <%CPU> <Memory> <In/s> <Out/s>
Step 2 : check the observed indications against the configured absolute CPU-cap policy and CPU-shares policy, using the flowchart above
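As a convenience, the throttling counters can also be polled from inside the container with a few lines of Python. This is a sketch assuming the cgroup v1 path used above; on a cgroup v2 host the file is /sys/fs/cgroup/cpu.stat instead.

# watch_throttle.py -- poll CPU-throttling counters inside the container (sketch)
import time

STAT = "/sys/fs/cgroup/cpu,cpuacct/cpu.stat"   # cgroup v1 path; adjust for cgroup v2

def read_stat(path=STAT):
    # Each line looks like "nr_throttled 0"; build a small dict from the pairs.
    with open(path) as f:
        return dict(line.split() for line in f if line.strip())

if __name__ == "__main__":
    while True:
        stats = read_stat()
        # throttled_time is reported in nanoseconds under cgroup v1
        print("nr_throttled =", stats.get("nr_throttled"),
              " throttled_time =", stats.get("throttled_time"))
        time.sleep(1)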
Related
I have been using CPLEX and docplex (in Python) on my PC for a long time and it was working fine. But lately, when I run my script, it starts the engine but gets stuck at the beginning for a very long time (maybe 24 hours) and then terminates the process with no solution and no error. When I use the conflict refiner the same thing happens: it gets stuck checking conflicts, never finishes, and does not return any conflicts.
I tested my script on another device and it worked fine. I cannot understand the problem.
Here is my device info:
Windows: 10
Python (Anaconda): 3.6.9
Spyder: 3.6
CPLEX Studio: 12.10.0
docplex: 2.15.194
Edit:
Current model log_output freezes here:
        Nodes                                          Cuts/
   Node  Left     Objective  IInf  Best Integer     Best Bound   ItCnt    Gap
      0     0        0.0000  72605                      0.0000       7
So, you are saying that your Python program started behaving differently, all things being equal?
It is not possible to answer without knowing what exactly your program is doing.
Is it solving a model? If so, did you turn on logging by passing log_output=True,
and if so, does the log differ between the two machines?
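For reference, enabling the engine log in docplex is just a keyword argument on solve(). A minimal, self-contained sketch (the toy model here is only for illustration):

from docplex.mp.model import Model

mdl = Model(name="log_demo")
x = mdl.continuous_var(name="x")
mdl.add_constraint(x <= 10)
mdl.maximize(x)

# log_output=True streams the CPLEX engine log to stdout,
# which makes it easy to diff the logs from the two machines.
solution = mdl.solve(log_output=True)
print(solution)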
If your problem has no solution, the conflict refiner may sometimes take a very long time. I suggest you try the docplex.mp.Relaxer class, which implements a relaxation algorithm. It usually runs faster, but does not answer the same question.
Relaxer tries to minimize the slack needed to make the problem feasible, and shows you which constraints had to be relaxed and by how much.
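A minimal sketch of the Relaxer call pattern, using a deliberately infeasible toy model (only relax() and print_information() are used here; other attribute names vary between docplex versions):

from docplex.mp.model import Model
from docplex.mp.relaxer import Relaxer

# Tiny, deliberately infeasible model, just to illustrate the calls.
mdl = Model(name="infeasible_demo")
x = mdl.continuous_var(name="x")
mdl.add_constraint(x >= 5, ctname="low")
mdl.add_constraint(x <= 3, ctname="high")

rx = Relaxer()
relaxed = rx.relax(mdl)        # returns a solution of the relaxed model, or None
if relaxed is not None:
    rx.print_information()     # reports which constraints were relaxed, and by how much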
I have tried to create multiple instances of MAVProxy but I have no idea how to do this.
My question is about how to load two ArduCopters onto a single map in SITL. I am learning the SITL setup and I want to know whether it is possible to load two ArduCopters onto one map.
I've successfully managed to do a swarming/flocking simulation using DroneKit-SITL and QGroundControl. The thing is that the SITL TCP ports are hard-coded in the ArduPilot firmware. If you want to simulate multiple vehicles, you will have to modify the SOURCE CODE of ArduPilot and compile it from source for each vehicle separately.
For instance, a swarming simulation of 5 vehicles requires 5 different vehicle firmwares, each coded with different TCP ports. Also, the simulated eeprom.bin should be slightly adjusted to work properly (or even to fit real vehicles).
Basically, monitoring the TCP ports should work fine with both DroneKit-SITL and MAVProxy, so it should be no problem to do multi-vehicle simulation in MAVProxy.
Some more details can be found on my Github repo (although the Readme is quite long). Hope it helps!
https://github.com/weskeryuan/flydan
Are you trying to do something related to swarms?
On the ArduPilot website they mention the following:
Using SITL is just like using a real vehicle.
I do not think it is possible, but it is better to post your question in the ArduPilot forum community.
I like the idea, and it would be extremely useful.
From the MAVProxy docs:
MAVProxy is designed to control 1 vehicle per instance. Controlling multiple vehicles would require a substantial re-design of MAVProxy and is not currently on the "to-do" list.
However, there is very limited support for displaying (not controlling) multiple vehicles on the map. This should be considered an experimental feature only, as it was developed for a specific application (2016 UAV Challenge) where two UAV's were required to be displayed on a single map.
If all you need is to view them both in one map, then the instructions there should work for you.
You cannot run two vehicles on one map in MAVProxy.
What you can do is run two simulators and track them in Mission Planner or QGC.
To run two instances you need to specify different instance numbers.
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 40 --out=udpout:127.0.0.1:14550
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 50 --out=udpout:127.0.0.1:14551
Note the instance numbers 40 & 50; also note the out=udpout ports 14550 & 14551.
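If you also want to talk to the two simulated copters from Python rather than a GCS, DroneKit (an assumption on my part, not required by the setup above) can listen on those two UDP ports. A sketch, assuming both sim_vehicle.py commands above are running:

from dronekit import connect

# Each SITL instance forwards MAVLink to its own UDP port,
# so each copter gets its own connection object.
copter1 = connect("127.0.0.1:14550", wait_ready=True)
copter2 = connect("127.0.0.1:14551", wait_ready=True)

print("copter1:", copter1.version, copter1.mode.name)
print("copter2:", copter2.version, copter2.mode.name)

copter1.close()
copter2.close()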
I have some machine learning code (but for the purposes of my question, it could be any algorithm) written in Python under git version control. My colleague has just concluded a significant refactor of the code base on a feature branch. Now I want to write some acceptance tests to make sure that the results of querying the machine learning process in production are within a reasonable statistical range compared to the same query on the refactored branch of the code base.
This is a somewhat complex app running in docker containers. The tests need to be run as part of a much larger series of tests.
I am not confused about how to write the part of the test that determines whether the results are within a reasonable statistical range. I am confused about how to bring the results from the master-branch code together with the results from the WIP branch for comparison in one test, in an automated way.
So far, my best (only) idea is to launch the production Docker container, run the machine learning query, write the query result to a CSV file, and then use docker cp to copy this file to the local machine, where it can be tested against the equivalent CSV from the feature branch. However, this seems inelegant.
Is there a better/smarter/best practice way to do this? Ideally that keeps things in memory.
I would consider using the http://approvaltests.com/ framework. You just need to write test code that produces some output after executing the Python code being tested; the output can be anything (text, a JSON/CSV file, etc.).
You can run this test on the master branch first so it records the output as the approved baseline, then switch to your WIP branch and run the same test there; if the output differs, the approval test will fail.
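A minimal sketch of such a test with the Python approvaltests package (pytest style); query_model() here is a hypothetical stand-in for whatever produces your result:

# test_model_output.py -- approval-test sketch
from approvaltests.approvals import verify

def query_model():
    # Hypothetical placeholder: run the real machine-learning query here
    # and return its result.
    return {"accuracy": 0.93, "n_samples": 1200}

def test_model_output_matches_approved_baseline():
    result = query_model()
    # On the first (master-branch) run this produces a received file that you
    # approve as the baseline; later runs fail if the output differs.
    verify(str(result))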
Check out this podcast episode for more info.
The problem is to simulate both a car and a quadcopter in ROS Gazebo SITL, as mentioned in this question. Two possibilities have been considered for this, as depicted in the image.
(Option 1 uses 6 terminals with independent launch files and MAVproxy initiation terminals)
While searching for Option 1, the documentation appeared to be sparse (the idea is to launch the simulation with an ErleRover and then spawn an ErleCopter on the go; I haven't found any official documentation mentioning either the possibility or the impossibility of this option). Could somebody let me know how Option 1 can be achieved, or why it is impossible, by pointing to the corresponding official documentation?
Regarding Option 2, additional options have been explored; the problem apparently lies in two aspects: param vs. rosparam, and tf2 vs. tf_prefix.
Some attempts at simulating multiple TurtleBots have used tf_prefix, which is deprecated, but I have been unable to find any example that uses tf2 while simulating multiple (different) robots, even though tf2 works on ROS Hydro (and thus Indigo). Another possible option is the use of rosparam instead of param (only), but documentation on using it for multi-robot simulation is sparse, and I have found only one example (for a single Husky robot).
But one thing is clearer: MAVProxy can support multiple robots through the use of the SYSID and component-ID parameters (up to 255 robots, with 0 being a broadcast ID). Thus, the port numbers have to be modified (possibly 14000 and 15000, as each vehicle uses 4 consecutive ports), just like in the UCTF simulation (vehicle_base_port = VEHICLE_BASE_PORT + mav_sys_id*4).
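For illustration, the per-vehicle port arithmetic can be sketched as follows (VEHICLE_BASE_PORT = 14000 is an assumption taken from the numbers above; adjust to your setup):

# Sketch of UCTF-style port assignment: each vehicle gets 4 consecutive ports.
VEHICLE_BASE_PORT = 14000  # assumed base port

def vehicle_ports(mav_sys_id):
    base = VEHICLE_BASE_PORT + mav_sys_id * 4
    return list(range(base, base + 4))

for sys_id in (1, 2):
    print("SYSID", sys_id, "->", vehicle_ports(sys_id))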
To summarise the question, the main concern is to simulate an independent car moving around and an independent quadcopter flying around in ROS Gazebo SITL (maybe using Python nodes; C++ is fine too). Could somebody let me know the answers to the following sub-questions?
Is this kind of simulation possible? (Either by using ROS Indigo, Gazebo 7, and MAVProxy 1.5.2 on Ubuntu 14.04, or by modifying the UCTF project to spawn a car like the ErleRover if there is no other option.)
(Please point me to examples if this is possible, or to official links if it is impossible.)
If on-the-go launch is not possible with two launch files, is it possible to launch two different robots with a single launch file?
This is an optional question: how should the listener (subscriber) of the node be modified? (Is this to be done in the Python node?)
This simulation is taking a relatively long time, with system software crashing about 3 times (NVIDIA instead of Nouveau drivers, broken packages, etc.), and any help will be whole-heartedly, gratefully and greatly appreciated. Thanks for your time and consideration.
Prasad N R
After I hit n to evaluate a line, I want to go back and then hit s to step into that function if it failed. Is this possible?
The docs say:
j(ump) lineno
Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run.
The GNU debugger, gdb: it is extremely slow, as it undoes a single machine instruction at a time.
The Python debugger, pdb: The jump command takes you backwards in the code, but does not reverse the state of the program.
For Python, the extended python debugger prototype, epdb, was created for this reason. Here is the thesis and here is the program and the code.
I used epdb as a starting point to create a live reverse debugger as part of my MSc degree. The thesis is available online: Combining reverse debugging and live programming towards visual thinking in computer programming. In chapter 1 and 2 I also cover most of the historical approaches to reverse debugging.
PyPy has started to implement RevDB, which supports reverse debugging.
It is (as of Feb 2017) still at an alpha stage, only supports Python 2.7, only works on Linux or OS X, and requires you to build Python yourself from a special revision. It's also very slow and uses a lot of RAM. To quote the Bitbucket page:
Note that the log file typically grows at a rate of 1-2 MB per second. Assuming size is not a problem, the limiting factors are:
Replaying time. If your recorded execution took more than a few minutes, replaying will be painfully slow. It sometimes needs to go over the whole log several times in a single session. If the bug occurs randomly but rarely, you should run recording for a few minutes, then kill the process and try again, repeatedly until you get the crash.
RAM usage for replaying. The RAM requirements are 10 or 15 times larger for replaying than for recording. If that is too much, you can try with a lower value for MAX_SUBPROCESSES in _revdb/process.py, but it will always be several times larger.
Details are on the PyPy blog and installation and usage instructions are on the RevDB bitbucket page.
Reverse debugging (returning to previously recorded application state or backwards single-stepping debugging) is generally an assembly or C level debugger feature. E.g. gdb can do it:
https://sourceware.org/gdb/wiki/ReverseDebug
Bidirectional (or reverse) debugging
Reverse debugging is utterly complex, and may carry a performance penalty of 50,000x. It also requires extensive support from the debugging tools. The Python virtual machine does not provide reverse-debugging support.
If you are interactively evaluating Python code, I suggest trying IPython Notebook, which provides an HTML-based interactive Python shell. You can easily write your code and mix and match the order. There is no pdb debugging support, though. There is ipdb, which provides better history and search facilities for entered debugging commands, but it doesn't do direct backwards jumps either, as far as I know.
Though you may not be able to reverse the code execution in time, the next best thing pdb has is stack-frame navigation.
Use w to see where you are in the stack (the bottom is the newest frame), and u(p) or d(own) to traverse the stack and reach the frame from which the function call stepped you into the current frame.
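A small, hypothetical example to practise this (the file and function names are made up; the pdb commands are shown as comments):

# buggy.py -- toy script for practising pdb stack navigation
def inner(x):
    return 1 / x          # raises ZeroDivisionError when x == 0

def outer():
    return inner(0)

if __name__ == "__main__":
    outer()

# Run it under pdb; the uncaught exception drops you into post-mortem mode:
#   python -m pdb buggy.py
#   ... ZeroDivisionError ...
#   (Pdb) w        # show the stack; the bottom frame is inner()
#   (Pdb) u        # move up to outer(), where the failing call was made
#   (Pdb) d        # move back down into inner()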