I have a Flask application running on port 5000 that exposes 7 different endpoints, all serving GET requests. So I can do a
curl http://localhost:5000/get_species_interactions?q=tiger
And it returns a page after some computation. There are 6 other such endpoints, each with varying degrees of computation at the back end. It works fine with one user, but I want to get metrics for how well it performs under load. I am trying to stress test it by simulating a large number of requests, and I was thinking of using a Python script. The rough algorithm I had in mind is the following:
num_tests = 0
while num_tests < 1000:
    e = get_random_end_point_to_test()  # pick one out of the 7 endpoints
    d = get_random_data_for_get(e)      # pick relevant random data to send in the request
    resp = curl(e + "?q=" + d)          # issue the GET and capture the response
    num_tests += 1
My question is: is this general approach on the right track? Does it simulate a large number of simultaneous users? I was planning to store the amount of time it took to execute each request and compute stats. Otherwise, is there a free utility I can use to do this kind of stress test on macOS? I saw a tool called siege, but it's not easily available on a Mac.
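(For reference, a rough sketch of how the loop above could be made concurrent and collect per-request timings. It assumes the get_random_end_point_to_test and get_random_data_for_get helpers exist, and the URL layout is only a guess based on the curl example:)

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_request(_):
    # Pick a random endpoint and random query data, as in the pseudocode above.
    e = get_random_end_point_to_test()
    d = get_random_data_for_get(e)
    url = "http://localhost:5000/%s?q=%s" % (e, d)
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.time() - start

# ~50 worker threads approximates 50 simultaneous users hitting the app.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = sorted(pool.map(one_request, range(1000)))

print("mean: %.3fs" % (sum(timings) / len(timings)))
print("p95:  %.3fs" % timings[int(0.95 * len(timings))])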
I would suggest Apache JMeter. The tool has everything you need for stress tests and is well documented online.
You'll need to install Java, though.
No, a sequential loop like that won't simulate simultaneous users; you need to parallelize your requests. libcurl can do this using the multi interface.
Check out pycurl, a Pythonic interface to libcurl.
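For example, a minimal sketch of the multi interface via pycurl (the URLs are just placeholders taken from the question; size the batch to however many concurrent users you want to simulate):

import pycurl
from io import BytesIO

urls = [
    "http://localhost:5000/get_species_interactions?q=tiger",
    "http://localhost:5000/get_species_interactions?q=lion",
]

multi = pycurl.CurlMulti()
handles = []
for url in urls:
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEFUNCTION, buf.write)
    multi.add_handle(c)
    handles.append((c, buf))

# Drive all transfers concurrently until every handle has finished.
active = len(handles)
while active:
    while True:
        ret, active = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    multi.select(1.0)  # block briefly until any socket is ready

for c, buf in handles:
    print(c.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()))
    multi.remove_handle(c)
    c.close()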
I know the question title is weird!
I have two virtual machines. The first one has limited resources, while the second one has enough resources, just like a normal machine. The first machine will receive a signal from an external device. This signal will trigger the Python interpreter to execute a script. The script is big, and the first machine does not have enough resources to execute it.
I can copy the script to the second machine and run it there, but I can't make the second machine receive the external signal. I am wondering if there is a way to make the interpreter on the first machine (once the external signal is received) call the interpreter on the second machine, so that the script executes on the second machine and uses its resources. Please check the attached image.
Assume that the connection between the two machines is established and they can see each other, and that the second machine has a copy of the script. I just need the commands that pass the execution to the second machine and make it use its own resources.
You should look into the microservice architecture to do this.
You can achieve this either by using Flask and sending requests between the machines, or by using something like Nameko, which will allow you to create a "bridge" between machines and call functions across them (which seems like what you are more interested in). Example for Nameko:
Machine 2 (executor of resource-intensive script):
from nameko.rpc import rpc

class Stuff(object):
    @rpc
    def example(self):
        return "Function running on Machine 2."
You would run the above script through the Nameko shell, as detailed in the docs.
Machine 1:
from nameko.standalone.rpc import ClusterRpcProxy

# This is the AMQP server that machine 2 would be running.
config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.Stuff.example()  # "Function running on Machine 2."
More info here.
Hmm, there are many approaches to this problem.
If you want a Python-only solution, you can check out dispy (http://dispy.sourceforge.net/) or Dask (https://dask.org/).
If you want a robust solution (what I use on my home computing cluster, but IMO overkill for your problem), you can use SLURM. SLURM is basically a way to string multiple computers together into a "supercomputer": https://slurm.schedmd.com/documentation.html
For a semi-quick, hacky solution, you can write a microservice. Essentially, your "weak" computer will receive the signal and then send an HTTP request to your "strong" computer. The strong computer contains the actual program, computes the result, and passes it back to the "weak" computer.
Flask is an easy and lightweight solution for this.
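A minimal sketch of what that could look like on the "strong" machine, assuming the heavy work is wrapped in a hypothetical heavy_script.run(); the /run route and port are placeholders:

from flask import Flask, jsonify

import heavy_script  # hypothetical module wrapping the resource-intensive work

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run():
    result = heavy_script.run()  # executes entirely on this (strong) machine
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

The "weak" machine then only has to fire an HTTP request when the signal arrives, e.g. requests.post("http://<strong-machine>:8000/run") (hostname is a placeholder), and read the result from the response.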
All of these solutions require some type of networking. At the least, the computers need to be on the same LAN or both have access over the web.
There are many other approaches not mentioned. For example, you can export an NFS (network file storage) share and have one computer put a file in the shared folder while the other computer performs the work on that file. I'm sure there are plenty of other contrived ways to accomplish this task :). I'd be happy to expand on a particular method if you want.
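For instance, a hypothetical sketch of the shared-folder variant, where Machine 1 drops a trigger file into the NFS mount and Machine 2 polls for it (the paths and file names are made up):

import os
import subprocess
import time

SHARED_DIR = "/mnt/shared"                       # assumed NFS mount point
TRIGGER = os.path.join(SHARED_DIR, "run.trigger")

# Runs on Machine 2: wait for the trigger file, then execute the big script
# locally so it uses Machine 2's resources.
while True:
    if os.path.exists(TRIGGER):
        os.remove(TRIGGER)
        subprocess.call(["python", "/path/to/big_script.py"])
    time.sleep(1)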
I have tried to create multiple instances of MAVProxy, but I have no idea how to go about this.
My question is about how to load two ArduCopters onto a single map in SITL. I am learning the SITL setup and I want to know: is it possible to load two ArduCopters in one map?
I've successfully managed to do a swarming/flocking simulation using DroneKit-SITL and QGroundControl. The thing is that the SITL TCP ports are hard-coded in the ArduPilot firmware. If you want to simulate multiple vehicles, you are going to have to modify the source code of ArduPilot and compile it from source for each vehicle separately.
For instance, a swarming simulation of 5 vehicles requires 5 different vehicle firmwares, each coded with different TCP ports. Also, the simulated eeprom.bin should be slightly adjusted to work properly (or even to fit real vehicles).
Basically, monitoring the TCP ports works fine with both DroneKit-SITL and MAVProxy, so it should be no problem to do multi-vehicle simulation in MAVProxy.
Some more details can be found on my Github repo (although the Readme is quite long). Hope it helps!
https://github.com/weskeryuan/flydan
Are you trying to do something related to swarms?
On the ArduPilot website they mention the following:
Using SITL is just like using a real vehicle.
I do not think it is possible, but it would be better to post your question in the ArduPilot forum community.
I like the idea, and it would be extremely useful.
From the MAVProxy docs:
MAVProxy is designed to control 1 vehicle per instance. Controlling multiple vehicles would require a substantial re-design of MAVProxy and is not currently on the "to-do" list.
However, there is very limited support for displaying (not controlling) multiple vehicles on the map. This should be considered an experimental feature only, as it was developed for a specific application (2016 UAV Challenge) where two UAV's were required to be displayed on a single map.
If all you need is to view them both in one map, then the instructions there should work for you.
You cannot run two vehicles on one map in MAVProxy.
What you can do is run two simulators and track them in Mission Planner or QGC.
To run two instances you need to specify different instance numbers:
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 40 --out=udpout:127.0.0.1:14550
python3 ardupilot/Tools/autotest/sim_vehicle.py -j4 -v ArduCopter -M --map --console --instance 50 --out=udpout:127.0.0.1:14551
Note the instance numbers 40 and 50, and also note the --out=udpout ports 14550 and 14551.
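If you also want a quick sanity check from Python, something like the following should see both simulators, assuming DroneKit is installed and the UDP ports match the --out arguments above:

from dronekit import connect

# Listen on the two UDP ports that the SITL instances are sending MAVLink to.
copter_a = connect("udp:127.0.0.1:14550", wait_ready=True)
copter_b = connect("udp:127.0.0.1:14551", wait_ready=True)

print(copter_a.version, copter_a.location.global_frame)
print(copter_b.version, copter_b.location.global_frame)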
The problem statement is about simulating both a 'car' and a quadcopter in ROS Gazebo SITL, as mentioned in this question. Two possibilities have been considered, as depicted in the image.
(Option 1 uses 6 terminals with independent launch files and MAVproxy initiation terminals)
While trying to search for Option 1, the documentation appeared to be sparse (the idea is to launch the simulation with the ErleRover and then spawn the ErleCopter on the go; I haven't found any official documentation mentioning either the possibility or the impossibility of this option). Could somebody let me know how Option 1 can be achieved, or why it is impossible, with a pointer to the corresponding official documentation?
Regarding Option 2, additional options have been explored; the problem is apparently with two aspects: param vs rosparam and tf2 vs tf_prefix.
Some of the attempts at simulating multiple TurtleBots have used tf_prefix, which is deprecated. But I have been unable to find any example which uses tf2 while simulating multiple (different) robots, even though tf2 works on ROS Hydro (and thus Indigo). Another possible option is the usage of rosparam instead of param (only), but documentation on using that for multi-robot simulation is sparse, and I have been able to find only one example (for a single Husky robot).
But one thing is clearer: MAVProxy can support multiple robots through the usage of SYSID and component-ID parameters (up to 255 robots, with 0 being a broadcast ID). Thus, the port numbers have to be modified (possibly 14000 and 15000, as each vehicle uses 4 consecutive ports), just like in the UCTF simulation (vehicle_base_port = VEHICLE_BASE_PORT + mav_sys_id*4).
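For illustration only, the port layout implied by that formula (the base port value of 14000 is the guess mentioned above, not taken from any official documentation):

VEHICLE_BASE_PORT = 14000  # assumed base; check the UCTF configuration

def vehicle_ports(mav_sys_id):
    base = VEHICLE_BASE_PORT + mav_sys_id * 4
    return list(range(base, base + 4))  # the 4 consecutive ports for this vehicle

print(vehicle_ports(1))  # [14004, 14005, 14006, 14007]
print(vehicle_ports(2))  # [14008, 14009, 14010, 14011]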
To summarise the question, the main concern is to simulate an independent car moving around and an independent quadcopter flying around in ROS Gazebo SITL (maybe using Python nodes; C++ is fine too). Could somebody let me know the answers to the following sub-questions?
Is this kind of simulation possible? (Either by using ROS Indigo, Gazebo 7 and MAVProxy 1.5.2 on Ubuntu 14.04, or by modifying the UCTF project to spawn a car like the ErleRover if there is no other option.)
(You are kindly requested to point me to examples if this is possible, and to official links if it is impossible.)
If on-the-go launch is not possible with two launch files, is it possible to launch two different robots with a single launch file?
This is an optional question: How to modify the listener (subscriber) of the node? (Is it to be done in the Python node?)
This simulation is taking a relatively long time, with system software crashing about 3 times (NVIDIA instead of Nouveau, broken packages, etc.), and any help will be whole-heartedly, gratefully and greatly appreciated. Thanks for your time and consideration.
Prasad N R
I have a simple Python script which checks a few URLs:
f = urllib2.urlopen(urllib2.Request(url))
As I have the socket timeout set to 5 seconds, it is sometimes bothersome to wait 5 seconds * number of URLs for the results.
Is there any easy, standardized way to run those URL checks asynchronously without big overhead? The script must use standard Python components on a vanilla Ubuntu distribution (no additional installations).
Any ideas?
I wrote something called multibench a long time ago. I used it for almost the same thing you want to do here, which was to call multiple concurrent instances of wget and see how long it takes to complete. It is a crude load testing and performance monitoring tool. You will need to adapt this somewhat, because this runs the same command n times.
Install additional software. It's a waste of time to re-invent something just because of packaging decisions made by someone else.
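That said, if installing anything really is off the table, a minimal sketch using only the standard library (a thread-backed pool via multiprocessing.dummy; the URLs are placeholders) could look like this:

import urllib2
from multiprocessing.dummy import Pool  # thread-backed Pool from the stdlib

urls = ["http://example.com", "http://example.org"]  # placeholder URLs

def check(url):
    try:
        resp = urllib2.urlopen(urllib2.Request(url), timeout=5)
        return url, resp.getcode()
    except Exception as exc:
        return url, exc

pool = Pool(10)  # 10 checks in flight at once, so slow URLs overlap
for url, status in pool.map(check, urls):
    print url, status
pool.close()
pool.join()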
This is probably a truly basic thing that I'm simply having an odd time figuring out in a Python 2.5 app.
I have a process that will take roughly an hour to complete, so I made a backend. To that end, I have a backend.yaml that has something like the following:
backends:
- name: mybackend
  options: dynamic
  start: /path/to/script.py
(The script is just raw computation. There's no notion of an active web session anywhere.)
On toy data, this works just fine.
This used to be public, so I would navigate to the page, the script would start, and it would time out after about a minute (the HTTP timeout plus the 30s shutdown grace period, I assume). I figured this was a browser issue, so I repeated the same thing with a cron job. No dice. I then switched to using a push queue and adding a targeted task, since on paper it looks like it would wait for 10 minutes. Same thing.
All 3 time out after that minute, which means I'm not decoupling the request from the backend like I believe I am.
I'm assuming that I need to write a proper handler for the backend to do the work, but I don't exactly know how to write the handler/webapp2 route. Do I handle /_ah/start, or make a new endpoint for the backend? How do I handle the subdomain? It still seems like the wrong thing to do (I'm sticking a long-running process directly into a request of sorts), but I'm at a loss otherwise.
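(For reference, the kind of handler I was imagining looks roughly like this, with run_long_computation() standing in for whatever script.py actually does:)

import webapp2

class StartHandler(webapp2.RequestHandler):
    def get(self):
        # /_ah/start is hit when the backend instance spins up; kick off the
        # long-running work here rather than from a user-facing request.
        run_long_computation()  # placeholder for the real work in script.py
        self.response.set_status(200)

app = webapp2.WSGIApplication([
    ('/_ah/start', StartHandler),
])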
So the root cause ended up being doing the following in the script itself:
models = MyModel.all()
for model in models:
    # Magic happens
    pass
I was basically taking for granted that the query would automatically batch my Query.all() over many entities, but it was dying at around the 1000th entity. I originally wrote that it was computation-only because I completely ignored the fact that the reads can fail.
The actual solution to the problem we wanted to solve ended up being "use the map-reduce library", since we were trying to look at each model for analysis.
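If pulling in the map-reduce library is overkill for someone hitting the same wall, a rough sketch of manual paging with query cursors on the old db API (assuming MyModel is a db.Model) is another way around the fetch limit:

from google.appengine.ext import db

def process_all():
    query = MyModel.all()
    cursor = None
    while True:
        if cursor:
            query.with_cursor(cursor)
        batch = query.fetch(500)      # page through 500 entities at a time
        if not batch:
            break
        for model in batch:
            pass                      # magic happens here, per entity
        cursor = query.cursor()       # remember where this batch ended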