Simulating two ArduCopters with a single MAVProxy - Python

I have tried to create multiple instances of MAVProxy, but I am not sure how to proceed.
My question is about how to load two ArduCopters onto a single map in SITL. I am learning the SITL setup and want to know whether it is possible to load two ArduCopters onto one map.

I've successfully managed to do a swarming/flocking simulation using DroneKit-SITL and QGroundControl. The catch is that the SITL TCP ports are hard-coded in the ArduPilot firmware. If you want to simulate multiple vehicles, you have to modify the ArduPilot source code and compile a separate firmware for each vehicle.
For instance, a swarming simulation of 5 vehicles requires 5 different vehicle firmwares, each coded with different TCP ports. The simulated eeprom.bin should also be adjusted slightly to work properly (or even to fit real vehicles).
Monitoring the TCP ports works fine with both DroneKit-SITL and MAVProxy, so multi-vehicle simulation in MAVProxy should be no problem.
More details can be found in my GitHub repo (although the README is quite long). Hope it helps!
https://github.com/weskeryuan/flydan

Are you trying to do something related to swarms?
On the ArduPilot website they mention the following:
Using SITL is just like using a real vehicle.
I do not think it is possible, but it would be better to post your question on the ArduPilot community forum.
I like the idea, and it would be extremely useful.

From the MAVProxy docs:
MAVProxy is designed to control 1 vehicle per instance. Controlling multiple vehicles would require a substantial re-design of MAVProxy and is not currently on the "to-do" list.
However, there is very limited support for displaying (not controlling) multiple vehicles on the map. This should be considered an experimental feature only, as it was developed for a specific application (2016 UAV Challenge) where two UAV's were required to be displayed on a single map.
If all you need is to view them both in one map, then the instructions there should work for you.

You cannot run two vehicles on one map in MAVProxy.
What you can do is run two simulators and track them in Mission Planner or QGC.
To run two instances, you need to specify different instance numbers:
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 40 --out=udpout:127.0.0.1:14550
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 50 --out=udpout:127.0.0.1:14551
Note the instance numbers 40 and 50, and also the --out=udpout ports 14550 and 14551.
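If you would rather watch both vehicles from a quick script instead of a GCS, a minimal pymavlink sketch along these lines should work (pymavlink is an assumption here; the ports are the udpout ports from the commands above):

from pymavlink import mavutil

# Listen on the two ports that the SITL instances above stream to.
links = [mavutil.mavlink_connection('udp:127.0.0.1:14550'),
         mavutil.mavlink_connection('udp:127.0.0.1:14551')]

for link in links:
    link.wait_heartbeat()  # block until the vehicle is seen
    print('Heartbeat from system', link.target_system)

# Print one position message from each vehicle.
for link in links:
    msg = link.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
    print(msg.lat / 1e7, msg.lon / 1e7)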

Related

Python compiler call another python compiler to execute a script (execute a script from one independent machine to another)

I know the question title is weird!
I have two virtual machines. The first one has limited resources, while the second one has enough resources, just like a normal machine. The first machine will receive a signal from an external device. This signal will trigger the Python interpreter to execute a script. The script is big, and the first machine does not have enough resources to execute it.
I can copy the script to the second machine and run it there, but I can't make the second machine receive the external signal. I am wondering if there is a way to make the interpreter on the first machine (once the external signal is received) call the interpreter on the second machine, so that the interpreter on the second machine executes the script and uses the second machine's resources. Please check the attached image.
Assume that the connection between the two machines is established and they can see each other, and that the second machine has a copy of the script. I just need the commands that pass the execution to the second machine and make it use its own resources.
You should look into a microservice architecture to do this.
You can achieve this either by using Flask and sending requests between the machines, or by using something like Nameko, which will allow you to create a "bridge" between machines and call functions across them (which seems closer to what you are after). Example for Nameko:
Machine 2 (executor of resource-intensive script):
from nameko.rpc import rpc

class Stuff(object):
    @rpc
    def example(self):
        return "Function running on Machine 2."
You would run the above script through the Nameko shell, as detailed in the docs.
Machine 1:
from nameko.standalone.rpc import ClusterRpcProxy

# This is the AMQP broker that machine 2 would be connected to.
config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.Stuff.example()  # Function running on Machine 2.
More info here.
Hmm, there are many approaches to this problem.
If you want a Python-only solution, you can check out dispy (http://dispy.sourceforge.net/) or Dask (https://dask.org/).
If you want a robust solution (what I use on my home computing cluster, but in my opinion overkill for your problem), you can use SLURM. SLURM is basically a way to string multiple computers together into a "supercomputer": https://slurm.schedmd.com/documentation.html
For a semi-quick, hacky solution, you can write a microservice. Essentially, your "weak" computer receives the message and then sends an HTTP request to your "strong" computer. The strong computer contains the actual program, computes the result, and passes it back to the weak computer.
Flask is an easy and lightweight solution for this.
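As a rough sketch of that Flask idea (names like heavy_script and the machine2 hostname are placeholders, not from the question), machine 2 would run something like:

from flask import Flask, jsonify

import heavy_script  # placeholder module wrapping the resource-intensive work

app = Flask(__name__)

@app.route('/run', methods=['POST'])
def run():
    result = heavy_script.main()  # executes on machine 2's own resources
    return jsonify({'result': result})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

and machine 1, when the external signal arrives, would just forward the trigger:

import requests

resp = requests.post('http://machine2:5000/run', timeout=3600)  # long timeout for the big script
print(resp.json())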
All of these solutions require some type of networking. At the very least, the computers need to be on the same LAN or both have access over the web.
There are many other approaches not mentioned here. For example, you can export an NFS (network file share) and have one computer put a file in the shared folder while the other computer performs work on the file. I'm sure there are plenty of other contrived ways to accomplish this task :). I'd be happy to expand on a particular method if you want.

Managing build and running of docker containers on one host

I have one server which runs multiple containers:
Nginx
Portainer
Several custom HTTP servers
RabbitMQ
I have a folder structure like this in the home directory:
/docker/dockerfiles/nginx/Dockerfile
/docker/dockerfiles/nginx/README
/docker/dockerfiles/nginx/NOTES
/docker/dockerfiles/portainer/Dockerfile
...
/docker/dockerfiles/rabbitmq/Dockerfile
/docker/volumes/nginx/sites/...
/docker/volumes/nginx/logs/...
/docker/volumes/portainer/
...
/docker/volumes/rabbitmq/
/docker/volumes/ contains all the files which the docker containers use; they are mapped into the containers. The containers don't use real Docker volumes, and I really want to avoid using them.
I also have 3 Python files:
containers_info.py
containers_build.py
containers_run.py
containers_info.py is basically a dictionary holding rudimentary information about the containers, like the version of each container and its build date, and whether it should be included or excluded in a build pass or a run pass.
containers_build.py imports containers_info.py and checks which containers should be built, reads the corresponding Dockerfile from /docker/dockerfiles/.../Dockerfile and then builds the container(s) with the Docker Python API, collects some stats and creates summaries, notifies of failures and the like.
containers_run.py also imports containers_info.py and checks which containers should be run. It contains the information of which volumes to map to, which ports to use, basically all the stuff that would go in a YAML file to describe the container and a bit of management of the currently running container along with it.
It contains multiple snippets like
def run_websites(info):
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    container_name = 'websites'
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    new_container_name = container_name
    if info['auto-run']: rename_and_stop_container(container_name)
    else: new_container_name = container_name + '-prep'
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    container = client.containers.run(
        detach=True,
        name=new_container_name,
        hostname='docker-websites',
        image='myscope/websites:{}'.format(versions['websites']),
        command='python -u server.py settings:docker-lean app:websites id:hp-1 port:8080 domain:www.example.com',
        ports={'8080/tcp': ('172.17.0.1', 10001)},
        working_dir='/home/user/python/server/app',
        volumes={
            '/home/user/docker/volumes/websites': {'bind': '/home/user/python/server', 'mode': 'rw'},
        }
    )
    #patch = 'sed -i.bak s/raise\ ImportError/name\ =\ \\"libc.so.6\\"\ #\ raise\ ImportError/g /usr/lib/python2.7/site-packages/twisted/python/_inotify.py'
    #print container.exec_run(patch, user='root')
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    if info['auto-run'] and info['auto-delete-old']: remove_container(container_name + '-old')
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now I want to move away from this custom solution and use something open source that will allow me to scale this approach to multiple machines. Currently I can copy ~/docker/ among servers and execute the modified scripts to obtain the machines I need, but I think Docker Swarm or Kubernetes is designed to solve these issues. At least that's the impression I have.
My Python solution was born while I was learning Docker, automating it via the Docker Python API helped me a lot with learning Dockerfiles, since I could automate the entire process and mistakes in the Dockerfiles would only mean a little bit of lost time.
Another important benefit of this Python script approach was that I was able to automate the creation of dozens of instances of the webserver on the same machine (assuming that this would make sense to do) and have Nginx adapt perfectly to this change (adding/removing proxies dynamically, reloading configuration).
So, which technology should I start looking into in order to replace my current system? Also, I don't intend to run many machines, initially only two (main + backup), but I would like to be able to add more machines at any point and distribute the load among them, ideally by just changing some settings in a configuration file.
What is the current approach to solving these issues?
There are a number of tools you could use in this scenario. If you just plan on using a single machine, docker-compose could be the solution you are looking for. It uses a YAML-style definition file and supports the same build context (as do standard Docker and Kubernetes). It is really easy to get multiple instances of a service or container running; just using the --scale flag eliminates a lot of the headache.
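For example, assuming the compose file defines a service named websites (a hypothetical name), something like the following should bring up three instances of it:
docker-compose up -d --scale websites=3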
If you are planning on running this on multiple machines, I'd say Kubernetes is probably going to be your best bet; it's really well set up for it. Admittedly, I don't have a lot of experience with Swarm, but from what I understand it is analogous. The benefit is that Kubernetes can also handle load balancing for you, whereas docker-compose does not, and you'd have to use some sort of proxy (like Nginx) for that. It's not horrible, but also not the most straightforward thing if you haven't done something like that before.

Stress test a website using curl in Python

I have a Flask application running on port 5000 that exposes 7 different endpoints supporting GET requests. So I can do:
curl http://localhost:5000/get_species_interactions?q=tiger
And it returns a page after some computation. There are 6 other such endpoints, each with a varying degree of computation at the back end. It works fine with one user, but I want to get metrics for how well it performs under load. I am trying to stress test this by simulating a large number of requests, and I was thinking of using a Python script. The rough algorithm I had in mind is the following:
while (num_tests < 1000):
    e = get_random_end_point_to_test()  # pick one out of 7 end points
    d = get_random_data_for_get(e)      # pick relevant random data to send in curl command
    resp = curl(e/q?d)
    num_tests++
My question is: is this general approach on the right track? Does it simulate a large number of simultaneous users? I was planning to store the amount of time it took to execute each request and compute stats. Otherwise, is there a free utility I can use to do this kind of stress test on macOS? I saw a tool called siege, but it is not easily available on Mac.
I would suggest Apache JMeter. The tool has everything you need for stress tests and is well documented online.
You'll need to install Java, though.
No, you need to parallelize your requests. libcurl can do this using the multi interface.
Check out pycurl, the Pythonic interface to libcurl.
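As an untested sketch of that multi-interface approach with pycurl (the URL list and request count are placeholders):

import pycurl
from io import BytesIO

# Placeholder workload: 50 requests against one endpoint.
urls = ['http://localhost:5000/get_species_interactions?q=tiger'] * 50

multi = pycurl.CurlMulti()
handles = []
for url in urls:
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEFUNCTION, buf.write)
    multi.add_handle(c)
    handles.append((c, buf))

# Drive all transfers concurrently until none are active.
active = len(handles)
while active:
    ret, active = multi.perform()
    if ret != pycurl.E_CALL_MULTI_PERFORM:
        multi.select(1.0)  # wait for socket activity

for c, buf in handles:
    print(c.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()))
    multi.remove_handle(c)
    c.close()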

Spawning new robot in running ROS Gazebo simulation

The problem statement is to simulate both a 'car' and a quadcopter in ROS/Gazebo SITL, as mentioned in this question. Two possibilities have been considered, as depicted in the image.
(Option 1 uses 6 terminals with independent launch files and MAVProxy initiation terminals.)
While researching Option 1, the documentation appeared to be sparse (the idea is to launch the simulation with the ErleRover and then spawn the ErleCopter on the go; I haven't found any official documentation mentioning either the possibility or the impossibility of this option). Could somebody let me know how Option 1 can be achieved, or point to official documentation explaining why it is impossible?
Regarding Option 2, additional options have been explored; the problem is apparently with two aspects: param vs. rosparam, and tf2 vs. tf_prefix.
Some of the attempts at simulating multiple TurtleBots have used tf_prefix, which is deprecated. However, I have been unable to find any example which uses tf2 while simulating multiple (different) robots, even though tf2 works on ROS Hydro (and thus Indigo). Another possible option is the usage of rosparam instead of param (only), but documentation on using that for multi-robot simulation is sparse, and I have been able to find only one example (for a single Husky robot).
But one thing is clearer: MAVProxy can support multiple robots through the use of SYSID and component-ID parameters (up to 255 robots, with 0 being a broadcast ID). Thus, the port numbers have to be modified (possibly 14000 and 15000, as each vehicle uses 4 consecutive ports), just like in the UCTF simulation (vehicle_base_port = VEHICLE_BASE_PORT + mav_sys_id*4).
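A quick Python illustration of that port arithmetic (the base value is just the 14000 guess from above, not an official constant):

VEHICLE_BASE_PORT = 14000  # assumed base port

def vehicle_ports(mav_sys_id):
    # Each vehicle gets a block of 4 consecutive ports offset by its MAV system ID.
    base = VEHICLE_BASE_PORT + mav_sys_id * 4
    return list(range(base, base + 4))

print(vehicle_ports(1))  # [14004, 14005, 14006, 14007]
print(vehicle_ports(2))  # [14008, 14009, 14010, 14011]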
To summarise the question, the main concern is to simulate an independent car driving around and an independent quadcopter flying around in ROS/Gazebo SITL (maybe using Python nodes; C++ is fine too). Could somebody answer the following sub-questions?
Is this kind of simulation possible? (Either by using ROS Indigo, Gazebo 7 and MAVProxy 1.5.2 on Ubuntu 14.04, or by modifying the UCTF project to spawn a car like the ErleRover if there is no other option.)
(Please point me to examples if possible, or to official links if this is impossible.)
If on-the-go launch is not possible with two launch files, is it possible to launch two different robots with a single launch file?
This is an optional question: how do I modify the listener (subscriber) of the node? (Is that to be done in the Python node?)
This simulation is taking a relatively long time, with the system software crashing about 3 times (NVIDIA instead of Nouveau drivers, broken packages, etc.), and any help will be whole-heartedly, gratefully and greatly appreciated. Thanks for your time and consideration.
Prasad N R

Multiprocessing with Screen and Bash

I am running a Python script on different nodes at school using SSH. Each node has 8 cores. I use GNU Screen to be able to detach from a single process.
Is it more desirable to:
Run several different sessions of screen.
Run a single screen process and use & in a bash terminal.
Are they equivalent?
I am not sure whether my experiments are poorly coded and taking an inordinate amount of time (very possible), or whether my choice of option 1 is slowing the process down considerably. Thank you!
With bash I imagine you're doing something like this (assuming /home is under a network mount):
#!/bin/bash
# Launch the script on each node in the background, one SSH session per node.
# (Note: {1..$NUM_NODES} does not expand with a variable, hence seq.)
for i in $(seq 1 "$NUM_NODES")
do
    ssh node$i 'python /home/ryan/my_script.py' &
done
Launching this script from behind a single screen will work fine. Starting up several sessions of screen provides no performance gains but adds in the extra complication of starting multiple screens.
Keep in mind that there are much better ways to distribute load across a cluster (e.g. if someone else is using up all of node7 you'd want a way to detect that and send your job elsewhere). Most clusters I've worked with have Torque, Maui or the qsub command installed. I suggest giving those a look.
I would think they are about the same. I would prefer screen just because I have an easier time managing it. Depending on the script's usage, that could also have some effect on processing time.
