Spawning new robot in running ROS Gazebo simulation - python

The problem is to simulate both a car and a quadcopter in ROS Gazebo SITL, as mentioned in this question. Two possibilities have been considered, as depicted in the image.
(Option 1 uses 6 terminals with independent launch files and MAVproxy initiation terminals)
While searching for Option 1, the documentation appeared to be sparse. (The idea is to launch the simulation with the ErleRover and then spawn the ErleCopter on the go; I haven't found any official documentation mentioning either the possibility or the impossibility of this option.) Could somebody let me know how Option 1 can be achieved, or point to official documentation explaining why it is impossible?
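To make Option 1 concrete, here is a rough sketch of the kind of on-the-go spawn I have in mind, assuming gazebo_ros's /gazebo/spawn_urdf_model service (the URDF path, model name and namespace are placeholders):

import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose

rospy.init_node("spawn_copter")
rospy.wait_for_service("/gazebo/spawn_urdf_model")
spawn = rospy.ServiceProxy("/gazebo/spawn_urdf_model", SpawnModel)

with open("/path/to/erlecopter.urdf") as f:  # placeholder path
    model_xml = f.read()

pose = Pose()
pose.position.z = 1.0  # spawn one metre above the ground

spawn(model_name="erlecopter", model_xml=model_xml,
      robot_namespace="/copter", initial_pose=pose, reference_frame="world")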
Regarding Option 2, additional options have been explored; the problem apparently lies in two aspects: param vs rosparam, and tf2 vs tf_prefix.
Some attempts at simulating multiple TurtleBots have used tf_prefix, which is deprecated. However, I have been unable to find any example that uses tf2 while simulating multiple (different) robots, even though tf2 works on ROS Hydro (and thus Indigo). Another possible option is the use of rosparam instead of param (only), but documentation on its use in multi-robot simulation is sparse, and I have found only one example (for a single-robot Husky).
One thing is clearer, though: MAVproxy can support multiple robots through the use of the SYSID and component-ID parameters (up to 255 robots, with 0 being a broadcast ID). Thus, the port numbers have to be modified (possibly 14000 and 15000, as each vehicle uses 4 consecutive ports), just like in the UCTF simulation (vehicle_base_port = VEHICLE_BASE_PORT + mav_sys_id*4).
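For what it's worth, the port arithmetic quoted above works out as in this small sketch (the base constant and IDs are illustrative):

VEHICLE_BASE_PORT = 14000  # assumed base, as in the UCTF scheme

def vehicle_ports(mav_sys_id):
    base = VEHICLE_BASE_PORT + mav_sys_id * 4  # each vehicle uses 4 consecutive ports
    return list(range(base, base + 4))

print(vehicle_ports(1))  # [14004, 14005, 14006, 14007]
print(vehicle_ports(2))  # [14008, 14009, 14010, 14011]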
To summarise the question, the main concern is to simulate an independent car driving around and an independent quadcopter flying around in the ROS Gazebo SITL (maybe using Python nodes; C++ is fine too). Could somebody answer the following sub-questions?
Is this kind of simulation possible? (Either by using ROS Indigo, Gazebo 7 and MAVproxy 1.5.2 on Ubuntu 14.04, or by modifying the UCTF project to spawn a car like the ErleRover if there is no other option.)
(Please point me to examples if possible, and to official links if this is impossible.)
If on-the-go launch is not possible with two launch files, is it possible to launch two different robots with a single launch file?
This is an optional question: how can the listener (subscriber) of the node be modified? (Is this to be done in the Python node?)
This simulation has been taking a relatively long time, with the system software crashing about 3 times along the way (NVIDIA drivers instead of Nouveau, broken packages, etc.), and any help will be whole-heartedly, gratefully and greatly appreciated. Thanks for your time and consideration.
Prasad N R

Related

Why is my optimization solver running slower in Docker?

I am very new to Docker and recently wrote a Dockerfile to containerize a mathematical optimization solver called SuiteOPT. However, when testing the optimization solver on a few test problems, I am experiencing slower performance in Docker than outside of it. For example, one demo problem, a linear program (demoLP.py), takes ~12 seconds to solve on my machine, but in Docker it takes ~35 seconds. I have spent about a week looking through blogs and Stack Overflow posts for solutions, but no matter what changes I make, the timing in Docker is always ~35 seconds. Does anyone have any ideas about what might be going on, or could anyone point me in the right direction?
Below are links to the docker hub and PYPI page for the optimization solver:
Docker Hub for SuiteOPT
PyPI page for SuiteOPT
Edit 1: Adding an additional thought due to a comment from @user3666197. While I did not expect SuiteOPT to perform quite as well in the Docker container, I was mainly surprised by the ~3x slowdown for this demo problem. Perhaps the question can be restated as follows: How can I determine whether this slowdown is caused purely by the fact that I am executing CPU-RAM-I/O-intensive code inside a Docker container, rather than by some other issue with the configuration of my Dockerfile?
Note: The purpose of this containerization is to provide a simple way for users to get started with the optimization software in Python. While the optimization software is available on PyPI, it has many non-Python dependencies that could cause installation issues for people wishing to use it.
Q: How can I determine whether this slowdown is caused purely by the fact that I am executing CPU-RAM-I/O-intensive code inside a Docker container, rather than by some other issue with the configuration of my Dockerfile?
The battlefield: [flowchart of Linux performance-analysis tools; credits: Brendan Gregg]
Step 0: collect data about the host-side run of the processing:
mpstat -P ALL 1 ### 1 [s] sampled CPU counters in one terminal-session (may log to file)
python demoLP.py # <TheWorkloadUnderTest> expected ~ 12 [s] on bare metal system
Step 1: collect data about the same processing, but inside the Docker container
plus review the policies set in --cpus and --cpu-shares (and potentially --memory and --kernel-memory, if used)
plus review the effects shown in throttled_time (ref. Pg. 13):
cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat
nr_periods 0
nr_throttled 0
throttled_time 0 <-------------------------------------------------[*] increasing?
plus review the Docker container's workload as viewed from outside the box:
cat /proc/<_PID_>/status | grep nonvolu ### in one terminal session
nonvoluntary_ctxt_switches: 6 <------------------------------------[*] increasing?
systemd-cgtop ### view <Tasks> <%CPU> <Memory> <In/s> <Out/s>
Step 2: check the observed indications against the absolute CPU-cap policy and CPU-shares policy that were set, using the flowchart above.
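If it helps, here is a small Python sketch (assuming the cgroup-v1 path shown above; adjust for cgroup v2) that samples throttled_time around the workload, so any delta can be attributed to CPU-quota throttling:

import time

def read_cpu_stat(path="/sys/fs/cgroup/cpu,cpuacct/cpu.stat"):
    # Parse "name value" pairs from the cgroup cpu.stat file.
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

before = read_cpu_stat()
t0 = time.perf_counter()
# ... run the workload here, e.g. the demoLP.py solve ...
elapsed = time.perf_counter() - t0
after = read_cpu_stat()

print("wall time: %.2f s" % elapsed)
print("throttled_time delta [ns]:", after["throttled_time"] - before["throttled_time"])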

Python interpreter calls another Python interpreter to execute a script (executing a script from one independent machine on another)

I know the question title is weird!
I have two virtual machines. The first one has limited resources, while the second one has plenty of resources, just like a normal machine. The first machine will receive a signal from an external device. This signal will trigger the Python interpreter to execute a script. The script is big, and the first machine does not have enough resources to execute it.
I can copy the script to the second machine and run it there, but I can't make the second machine receive the external signal. I am wondering if there is a way to make the interpreter on the first machine (once the external signal is received) call the interpreter on the second machine, so that the interpreter on the second machine executes the script and uses the second machine's resources. Check the attached image, please.
Assume that the connection between the two machines is established, they can see each other, and the second machine has a copy of the script. I just need the commands that pass the execution to the second machine and make it use its own resources.
You should look into the microservice architecture to do this.
You can achieve this either by using Flask and sending requests between the machines, or with something like Nameko, which will let you create a "bridge" between the machines and call functions across them (which seems to be what you are more interested in). Example for Nameko:
Machine 2 (executor of resource-intensive script):
from nameko.rpc import rpc

class Stuff(object):
    @rpc
    def example(self):
        return "Function running on Machine 2."
You would run the above script through the Nameko shell, as detailed in the docs.
Machine 1:
from nameko.standalone.rpc import ClusterRpcProxy

# This is the AMQP server that machine 2 would be running.
config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.Stuff.example()  # "Function running on Machine 2."
More info here.
Hmm, there are many approaches to this problem.
If you want a Python-only solution, you can check out dispy (http://dispy.sourceforge.net/) or Dask (https://dask.org/).
If you want a robust solution (what I use on my home computing cluster, but IMO overkill for your problem), you can use SLURM. SLURM is basically a way to string multiple computers together into a "supercomputer": https://slurm.schedmd.com/documentation.html
For a semi-quick, hacky solution, you can write a microservice. Essentially, your "weak" computer will receive the message and then send an HTTP request to your "strong" computer. The strong computer will contain the actual program, compute the results, and pass the result back to the "weak" computer.
Flask is an easy and lightweight solution for this.
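A hedged sketch of that Flask setup (the host name, port and heavy_computation stand-in are all illustrative):

# strong_machine.py - runs on the machine with the resources
from flask import Flask, jsonify

app = Flask(__name__)

def heavy_computation():
    # Stand-in for the resource-intensive script.
    return sum(i * i for i in range(10 ** 6))

@app.route("/run")
def run():
    return jsonify(result=heavy_computation())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

# weak_machine.py - triggered by the external signal:
#   import requests
#   print(requests.get("http://strong-machine:5000/run", timeout=600).json())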
All of these solutions require some type of networking. At the least, the computers need to be on the same LAN or both have access over the web.
There are many other approaches not mentioned here. For example, you can export an NFS (network file storage) share and have one computer put a file in the shared folder while the other computer performs work on the file. I'm sure there are plenty of other contrived ways to accomplish this task :). I'd be happy to expand on a particular method if you want.

Using ROS vs other method (see post for more details on this "other method")

So I am working with a friend on developing a robot (using a Raspberry Pi). This robot will be an autonomous boat. Now, the Raspbian image we are using on the Raspberry Pi already has ROS (specifically, ROS Kinetic) nicely installed, and I have confirmed that ROS is working.
For our robot boat, we have different features that we wish to include in it:
Getting GPS location
Getting audio via a hydrophone and processing the audio to detect a certain frequency range (i.e. I want the boat to detect when a sound of 8500-9000 Hz is clearly heard via the hydrophone)
Being able to communicate over XBee
So I have used ROS in the past and I am familiar with the concept of publishing and subscribing to topics. However, my friend says that ROS will cause performance issues due to ROS having some "overhead", claiming that ROS will slow down our audio processing or something.
Instead, he proposes the following alternative method:
Have each of the 3 aspects of our robot (as mentioned above) in different Python files.
When the Raspberry Pi starts up, have all of the Python files be run automatically.
To pass information to each other (essentially "mimicking" the publish/subscribe functionality of ROS), the Python files will write to different text files (to "publish" values) and read from those text files (to "subscribe"), with the value in each text file being overwritten on each update. A sketch of this scheme is shown below.
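For concreteness, a minimal sketch of such a file-based scheme (file names are placeholders; the atomic rename avoids a reader ever seeing a half-written value):

import os
import tempfile

def publish(path, value):
    # Write to a temp file first, then atomically rename over the target,
    # so a concurrent reader never sees a partially written value.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(str(value))
    os.replace(tmp, path)

def subscribe(path):
    with open(path) as f:
        return f.read()

publish("gps_location.txt", "47.6097,-122.3331")
print(subscribe("gps_location.txt"))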
So... which method of passing information is the better method for our robot?
Using ROS
Using the aforementioned file writing/reading method proposed by my friend
Something else
Oh, and other things that I should mention:
I know how to use ROS, my friend doesn't
My friend has not actually finished writing all the code for his file writing/reading idea, whereas ROS is already set up and good to go on the Raspberry Pi
While I could find plenty of sites that list the various advantages of ROS, I could not find anything that compares ROS to my friend's method that I have mentioned above.
ROS has nodelets, which allow multiple nodes to live in the same process and communicate with each other without copy overhead - so less overhead than writing a file would incur.
http://wiki.ros.org/nodelet
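For comparison, the plain ROS publish/subscribe equivalent of the file scheme is only a few lines (the topic name and message type here are illustrative):

import rospy
from std_msgs.msg import String

def on_gps(msg):
    rospy.loginfo("got GPS fix: %s", msg.data)

rospy.init_node("boat_node")
rospy.Subscriber("gps_location", String, on_gps)              # "subscribe"
pub = rospy.Publisher("gps_location", String, queue_size=10)  # "publish"

rate = rospy.Rate(1)  # 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data="47.6097,-122.3331"))
    rate.sleep()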

Simulating two ArduCopters with a single MAVProxy

I have tried to create multiple instances of MAVProxy, but I have no idea about this.
My question is about how to load two ArduCopters on a single map in SITL. I am learning the SITL setup, and I want to know: is it possible to load two ArduCopters on one map?
I've successfully managed to do a swarming/flocking simulation using Dronekit-SITL and QGroundControl. The thing is that the SITL TCP ports are hard-coded in the ArduPilot firmware. If you want to simulate multiple vehicles, you will have to modify the source code of ArduPilot and compile it from source for each vehicle separately.
For instance, a swarming simulation of 5 vehicles requires 5 different vehicle firmwares, each coded with different TCP ports. Also, the simulated eeprom.bin should be slightly adjusted to work properly (or even to fit real vehicles).
Basically, monitoring the TCP ports works fine with both Dronekit-SITL and MAVProxy, so it should be no problem to do a multi-vehicle simulation in MAVProxy.
Some more details can be found in my GitHub repo (although the README is quite long). Hope it helps!
https://github.com/weskeryuan/flydan
Are you trying to do something related to swarms?
On the ArduPilot website, they mention the following:
"Using SITL is just like using a real vehicle."
I do not think it is possible, but it is better to post your question on the ArduPilot forum community.
I like the idea, and it would be extremely useful.
From the MAVProxy docs:
MAVProxy is designed to control 1 vehicle per instance. Controlling multiple vehicles would require a substantial re-design of MAVProxy and is not currently on the "to-do" list.
However, there is very limited support for displaying (not controlling) multiple vehicles on the map. This should be considered an experimental feature only, as it was developed for a specific application (the 2016 UAV Challenge) where two UAVs were required to be displayed on a single map.
If all you need is to view them both in one map, then the instructions there should work for you.
You cannot run two vehicles on one map in MAVProxy.
What you can do is run two simulators and track them in Mission Planner or QGC.
To run two instances you need to specify different instance numbers:
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 40 --out=udpout:127.0.0.1:14550
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 50 --out=udpout:127.0.0.1:14551
Note the instance numbers 40 and 50; also note the --out=udpout ports 14550 and 14551.
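A hedged pymavlink sketch for listening to both instances (the udpin endpoints simply mirror the --out=udpout ports above):

from pymavlink import mavutil

# Bind to the two UDP endpoints the simulators stream to.
copter_a = mavutil.mavlink_connection("udpin:127.0.0.1:14550")
copter_b = mavutil.mavlink_connection("udpin:127.0.0.1:14551")

for link in (copter_a, copter_b):
    link.wait_heartbeat()
    print("Heartbeat from SYSID %u" % link.target_system)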

How can I access Ring 0 with Python?

This answer, which states that the naming of classes in Python is not done because of special privileges, confuses me.
How can I access lower rings in Python?
Is the low-level I/O for accessing lower-level rings?
If it is, which rings I can access with that?
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
C tends to be a prominent language in OS programming. Given that there is the os module in Python, does that mean I can access C code through that module?
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things? If not, is there some way that I could still use Python to control some tool which handles the bizarre machine language? [ctypes suggested in comments]
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
Windows and Linux both use ring 0 for kernel code and ring 3 for user processes. The advantage of this is that user processes can be isolated from one another, so the system continues to run even if a process crashes. By contrast, a bug in ring 0 code can potentially crash the entire machine.
One of the reasons ring 0 code is so critical is that it can access hardware directly. By contrast, when a user-mode (ring 3) process needs to read some data from a disk:
1. the process executes a special instruction telling the CPU it wants to make a system call
2. the CPU switches to ring 0 and starts executing kernel code
3. the kernel checks whether the process is allowed to perform the operation
4. if permitted, the operation is carried out
5. the kernel tells the CPU it has finished
6. the CPU switches back to ring 3 and returns control to the process
Processes belonging to "privileged" users (e.g. root/Administrator) run in ring 3 just like any other user-mode code; the only difference is that the check at step 3 always succeeds. This is a good thing because:
root-owned processes can crash without taking the entire system down
many user-mode features are unavailable in the kernel, e.g. swappable memory, private address space
As for running Python code in lower rings - kernel mode is a very different environment, and the Python interpreter simply isn't designed to run in it; e.g. the procedure for allocating memory is completely different.
In the other question you reference, both os.open() and open() end up making the open() system call, which checks whether the process is allowed to open the corresponding file and performs the actual operation.
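To illustrate with a minimal sketch (both forms end up in the same open() system call, and the same kernel permission check):

import os

# Low-level: returns an integer file descriptor; a thin wrapper over the open() syscall.
fd = os.open("/etc/hostname", os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)

# High-level: returns a file object, but still goes through the same open() syscall.
with open("/etc/hostname") as f:
    data2 = f.read()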
I think SimonJ's answer is very good, but I'm going to post my own because from your comments it appears you're not quite understanding things.
Firstly, when you boot an operating system, what you're doing is loading the kernel into memory and saying "start executing at address X". The kernel, that code, is essentially just a program, but of course nothing else is loaded, so if it wants to do anything it has to know the exact commands for the specific hardware it has attached to it.
You don't have to run a kernel. If you know how to control all the attached hardware, you don't need one, in fact. However, it was rapidly realised way back when that there are many types of hardware one might face and having an identical interface across systems to program against would make code portable and generally help get things done faster.
So the function of the kernel, then, is to control all the hardware attached to the system and present it in a common interface, called an API (application programming interface). Code for programs that run on the system don't talk directly to hardware. They talk to the kernel. So user land programs don't need to know how to ask a specific hard disk to read sector 0x213E or whatever, but the kernel does.
Now, the description of ring 3 provided in SimonJ's answer is how userland is implemented - with isolated, unprivileged processes with virtual private address spaces that cannot interfere with each other, for the benefits he describes.
There's also another level of complexity here, namely the concept of permissions. Most operating systems have some form of access control, whereby "administrators" have total control of the system and "users" have a restricted subset of options. So a kernel request from an ordinary user to open a file belonging to an administrator should fail under this sort of approach. The user who runs the program forms part of the program's context, if you like, and what the program can do is constrained by what that user can do.
Most of what you could ever want to achieve (unless your intention is to write a kernel) can be done in userland as the root/administrator user, where the kernel does not deny any API requests made to it. It's still a userland program. It's still a ring 3 program. But for most (nearly all) uses it is sufficient. A lot can be achieved as a non-root/administrative user.
That applies to the python interpreter and by extension all python code running on that interpreter.
Let's deal with some uncertainties:
The naming of os and sys, I think, is because these handle "system" tasks (as opposed to, say, urllib2). They give you ways to manipulate and open files, for example. However, these calls go through the Python interpreter, which in turn calls into the kernel.
I do not know of any kernel-mode python implementations. Therefore to my knowledge there is no way to write code in python that will run in the kernel (linux/windows).
There are two types of privileged: privileged in terms of hardware access and privileged in terms of the access control system provided by the kernel. Python can be run as root/an administrator (indeed on Linux many of the administration gui tools are written in python), so in a sense it can access privileged code.
Writing a C extension for Python, or controlling a C application from Python, ostensibly means you are either using code added to the interpreter (userland) or controlling another userland application. However, if you wrote a kernel module in C (Linux) or a driver in C (Windows), it would be possible to load that code and interact with it via the kernel's APIs from Python. An example might be creating a /proc entry in C and then having your Python application pass messages to it via read/write on that /proc entry (which the kernel module would have to handle via read/write handlers). Essentially, you write the code you want to run in kernel space and extend the kernel API in one of many ways so that your program can interact with that code.
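As a hedged sketch of the userland side (the /proc entry name is hypothetical and must be created by your own kernel module):

# "/proc/my_module" is an assumed entry; its read/write handlers run in kernel space.
with open("/proc/my_module", "w") as f:
    f.write("do-something")      # handled by the module's write handler

with open("/proc/my_module") as f:
    print(f.read())              # handled by the module's read handler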
"Low-level" IO means having more control over the type of IO that takes place and how you get that data from the operating system. It is low level compared to higher level functions still in Python that give you easier ways to read files (convenience at the cost of control). It is comparable to the difference between read() calls and fread() or fscanf() in C.
Health warning: Writing kernel modules, if you get it wrong, will at best result in that module not being properly loaded; at worst your system will panic/bluescreen and you'll have to reboot.
The final point, about machine instructions, I cannot answer here. It's a totally separate question, and it depends. There are many tools capable of analysing code like that, I'm sure, but I'm not a reverse engineer. However, I do know that many of these tools (e.g. gdb, valgrind) - tools that hook into binary code - do not need kernel modules to do their work.
You can use the Inpout32 library: http://logix4u.net/parallel-port/index.php
import ctypes

# Example of strobing data out with the nStrobe pin (note - inverted)
# Gets 50 kbaud without the read, 30 kbaud with
read = []
for n in range(4):
    ctypes.windll.inpout32.Out32(0x37a, 1)
    ctypes.windll.inpout32.Out32(0x378, n)
    read.append(ctypes.windll.inpout32.Inp32(0x378))  # Dummy read to see what is going on
    ctypes.windll.inpout32.Out32(0x37a, 0)

print(read)
[Note: I was wrong. User-mode code can no longer access ring 0 on modern Unix systems. -- jc 2019-01-17]
I've forgotten what little I ever knew about Windows privileges. In all Unix systems with which I'm familiar, the root user can access all ring-0 privileges. But I can't think of any mapping of Python modules to privilege rings.
That is, the 'os' and 'sys' modules don't give you any special privileges. You have them, or not, due to your login credentials.
How can I access lower rings in Python?
ctypes
Is the low-level io for accessing lower level rings?
No.
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
Something else.
C tends to be a prominent language in OS programming. Given that there is the os module in Python, does that mean I can access C code through that module?
All of CPython is implemented in C.
The os module (it's not a class, it's a module) is for accessing OS APIs. C has nothing to do with access to OS APIs. Python accesses the APIs "directly".
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things?
"playing with"?
"understand what it means"? is your problem. You read the code, you understand it. Whether or not Python can help is impossible to say. What don't you understand?
If not, is there some way that I could still use Python to control some tool which handles the bizarre machine language? [ctypes suggested in comments]
ctypes
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
You don't "wrap" things to control privileges.
Most OSes work like this:
You grant privileges to a user account.
The OS API's check the privileges granted to the user making the OS API request.
If the user has the privileges, the OS API works.
If the user lacks the privileges, the OS API raises an exception.
That's all there is to it.
