Is there a way to make Locust measure completed tasks per second when a task contains more than one request, the way JMeter does with its "Transaction Controller"? That is, measuring performance by how many completed flows (several requests each) run per second.
If it were me, I would reduce your work down to a single request or action per task. You can use a SequentialTaskSet, or put TaskSets inside TaskSets, to structure the work however you need it, and then Locust will do all the measuring for you.
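For example, here is a minimal sketch of a SequentialTaskSet (the endpoints are hypothetical). Each step is a single request, so Locust times and reports every one of them:

from locust import HttpUser, SequentialTaskSet, task, between

class CheckoutFlow(SequentialTaskSet):
    # Tasks in a SequentialTaskSet run in declaration order,
    # one request per task, so each gets its own stats entry.
    @task
    def view_product(self):
        self.client.get("/product/1")

    @task
    def add_to_cart(self):
        self.client.post("/cart", json={"product_id": 1})

class ShopUser(HttpUser):
    tasks = [CheckoutFlow]
    wait_time = between(1, 3)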
Alternatively, you can create your own User class with your code structured however you want, and use event hooks to tell Locust manually when things happen by firing a request event. You can pass request_type as whatever label you want your tasks tagged with as a type, and name as whatever you want the task to be called when it's reported.
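A minimal sketch of that approach (the flow steps are hypothetical placeholders for your own code); the whole flow gets reported to Locust's stats as one timed entry:

import time

from locust import User, task

class FlowUser(User):
    @task
    def whole_flow(self):
        start = time.time()
        exception = None
        try:
            do_login()     # hypothetical steps; each may make several requests
            do_checkout()
        except Exception as e:
            exception = e
        # Fire one request event for the whole flow so Locust records
        # it as a single "transaction" in its statistics.
        self.environment.events.request.fire(
            request_type="FLOW",           # any label you like
            name="login_and_checkout",     # how it appears in the report
            response_time=(time.time() - start) * 1000,  # milliseconds
            response_length=0,
            exception=exception,
            context={},
        )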
In addition to @solowalker's answer, you may want to check out locust-plugins' TransactionManager.
See the transaction_example* files here https://github.com/SvenskaSpel/locust-plugins/tree/master/examples
I am new to Locust. I have a CSV file that records request rates, and I want to send a specified number of requests per second to a website according to that file, e.g. 500 requests per second. How can I do this?
Please help me with this problem. Thanks.
Sounds like what would be best for you is a Custom Load Shape. You can specify in code how many users get spawned at any given time during your test. You could read the values from your CSV and then set the user count and spawn rate appropriately.
http://docs.locust.io/en/stable/custom-load-shape.html
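For instance, here is a minimal sketch of a shape that reads stages from a CSV. The file name and the (end_time_seconds, user_count) column layout are assumptions; adapt them to your file:

import csv

from locust import LoadTestShape

class CsvLoadShape(LoadTestShape):
    def __init__(self):
        super().__init__()
        # Each row: the second of the test at which the stage ends,
        # and the user count to hold until then.
        self.stages = []
        with open("rates.csv") as f:
            for end_time, users in csv.reader(f):
                self.stages.append((int(end_time), int(users)))

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users in self.stages:
            if run_time < end_time:
                return users, users  # (user_count, spawn_rate)
        return None  # past the last stage: stop the test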
EDIT:
A Custom Load Shape lets you control the number of spawned users in code, but if you want to ensure you're getting a certain number of Requests Per Second (RPS), that's up to how you write your code. The code your Locust users run needs to be written in such a way that each user makes only one request per task. Then, in your Locust User class, you set wait_time = constant(1) or wait_time = constant_pacing(1), whichever behavior you want.
http://docs.locust.io/en/stable/api.html?highlight=wait_time#locust.wait_time.constant
You can see this sort of pattern in all of the code examples for custom shapes.
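Here is a minimal sketch of such a user (the endpoint is hypothetical). With constant_pacing(1), each user starts its task once per second regardless of how long the response takes, so 500 spawned users gives you roughly 500 RPS:

from locust import HttpUser, constant_pacing, task

class OneRequestUser(HttpUser):
    wait_time = constant_pacing(1)  # one task iteration per second, per user

    @task
    def single_request(self):
        self.client.get("/")  # exactly one request per task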
How can I create real-time actions in Python/Django?
For more info:
One user adds something to the database while another user adds a similar item at the same time (not identical, but with similar properties), and this can happen at any time. The program should keep checking whether objects with similar properties exist; if a given object has no match yet, it should be checked again later against any other objects that may have been added or edited in the database.
These checks should run in real time, or at most a few minutes apart.
for example:
for every(2min):
do_job()
or
while True:
do_job()
If I use the second one, the program will block.
You need to run an async task to check the objects in the background. You can check this link for reference: Celery docs.
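For example, a minimal sketch with Celery beat, assuming a Redis broker and a hypothetical compare_similar_objects() that holds your comparison logic:

from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")

@app.task
def check_similar_objects():
    # Find objects with similar properties and handle any matches.
    compare_similar_objects()

app.conf.beat_schedule = {
    "check-similar-every-2-minutes": {
        "task": "tasks.check_similar_objects",  # assumes this module is tasks.py
        "schedule": 120.0,  # seconds
    },
}

You would run this with a worker plus the beat scheduler, e.g. celery -A tasks worker -B.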
In case you have any limitations using Celery or a similar approach, the other way is to create a scripts.py inside your app (at the same level as models.py and views.py), write the logic there, and schedule it with cron or any scheduler available on your host server.
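A minimal sketch of that scripts.py, assuming a project named myproject and a hypothetical Thing model; cron (or another scheduler) would then invoke it every couple of minutes:

# myapp/scripts.py
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()  # must run before any models are imported

from myapp.models import Thing  # hypothetical model

def do_job():
    # Compare recently added/edited objects for similar properties.
    for thing in Thing.objects.all():
        pass  # your similarity check goes here

if __name__ == "__main__":
    do_job()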
This is probably a truly basic thing that I'm simply having an odd time figuring out in a Python 2.5 app.
I have a process that will take roughly an hour to complete, so I made a backend. To that end, I have a backends.yaml that has something like the following:
- name: mybackend
  options: dynamic
  start: /path/to/script.py
(The script is just raw computation. There's no notion of an active web session anywhere.)
On toy data, this works just fine.
This used to be public, so I would navigate to the page, the script would start, and it would time out after about a minute (the HTTP timeout plus the 30s shutdown grace period, I assume). I figured this was a browser issue, so I repeated the same thing with a cron job. No dice. I then switched to using a push queue and adding a targeted task, since on paper it looks like that would wait for 10 minutes. Same thing.
All three time out after about a minute, which means I'm not decoupling the request from the backend the way I believe I am.
I'm assuming that I need to write a proper handler for the backend to do the work, but I don't know exactly how to write the handler/webapp2 route. Do I handle /_ah/start/, or make a new endpoint for the backend? How do I handle the subdomain? It still seems like the wrong thing to do (I'd be sticking a long process directly into a request of sorts), but I'm at a loss otherwise.
So the root cause ended up being doing the following in the script itself:
models = MyModel.all()
for model in models:
# Magic happens
I was basically taking it for granted that the query would automatically batch my Query.all() over many entities, but it was dying at around the 1,000th entity. I originally described the job as purely computational because I completely ignored the fact that the reads themselves can fail.
The actual solution for solving the problem we wanted ended up being "Use the map-reduce library", since we were trying to look at each model for analysis.
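For reference, if you did want to batch the reads manually instead, here is a minimal sketch using query cursors with the old google.appengine.ext.db API (process() is a hypothetical per-entity function):

def process_all_models(batch_size=100):
    query = MyModel.all()
    batch = query.fetch(batch_size)
    while batch:
        for model in batch:
            process(model)  # per-entity work
        # Resume the query where the previous batch left off.
        cursor = query.cursor()
        query = MyModel.all()
        query.with_cursor(cursor)
        batch = query.fetch(batch_size)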
I'm using the django_notification module. https://github.com/pinax/django-notification/blob/master/docs/usage.txt
This is what I do in my code to send an email to a user when something happens:
notification.send([to_user], "comment_received", noti_dict)
But this seems to block the request, and it takes a long time to send. I read the docs, and they say it's possible to add the notification to a queue (asynchronously). How do I add it to an asynchronous queue?
I don't understand what the docs are trying to say. What is "emit_notices"? When do I call it? Do I have a script that calls it every 5 seconds? That's silly. What's the right way to do it asynchronously? What do I do?
Let's first break down what each one does.
``send_now``
~~~~~~~~~~~~
This is a blocking call that will check each user for eligibility of the
notice and actually perform the send.
``queue``
~~~~~~~~~
This is a non-blocking call that will queue the call to ``send_now`` to
be executed at a later time. To later execute the call you need to use
the ``emit_notices`` management command.
``send``
~~~~~~~~
A proxy around ``send_now`` and ``queue``. It gets its behavior from a global
setting named ``NOTIFICATION_QUEUE_ALL``. By default it is ``False``. This
setting is meant to help control whether you want to queue any call to
``send``.
``send`` also accepts ``now`` and ``queue`` keyword arguments. By default
each option is set to ``False`` to honor the global setting which is ``False``.
This enables you to override on a per call basis whether it should call
``send_now`` or ``queue``.
It looks like in your settings file you need to set
NOTIFICATION_QUEUE_ALL=True
And then you need to set up a cron job (every minute or so, which is cron's finest granularity) to run something like:
django-admin.py emit_notices
This will run periodically and make the blocking call that sends out all the emails and does whatever legwork the notification app needs. If there's nothing to do, it shouldn't be an intense workload.
And before you expand on your comment about this being silly, think about it: it's not really silly at all. You don't want a blocking call tied to a web request, or the user will never get a response back from the server, and sending email is blocking in this sense.
Now, if the user were only going to see the notification when they log in, you probably wouldn't need to go this way, because there would be no external call to sendmail (or whatever you're using to send email). But in your case, sending emails, you should do it this way.
According to those docs, send just wraps send_now and queue. So if you want to send the notifications asynchronously instead of synchronously, you have two options:
Change your settings:
# This flag will make all messages default to async
NOTIFICATION_QUEUE_ALL = True
Use the queue keyword argument:
notification.send([to_user], "comment_received", noti_dict, queue=True)
If you queue the notifications you will have to run the emit_notices management command periodically. So you could put that in a cron job to run every couple of minutes.
I want to check users' subscription dates periodically and send mail to users whose subscription is about to finish (e.g. two days remaining).
I think the best way is to use a thread and a timer to check the dates, but I have no idea how to call this function. I don't want to make a separate program or shell script; I want to combine this procedure with my Django code. I tried to call the function in my settings.py file, but that doesn't seem like a good idea: it calls the function and creates a thread every time settings is imported.
That's a case for a manage.py command called periodically from cron (sketched below). The official docs describe creating those commands; here is a bit more help.
If you want something simpler, django-command-extensions has commands for managing Django jobs.
If you need more than just this one asynchronous job, have a look at Celery.
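A minimal sketch of such a command, assuming the subscription check lives in a hypothetical check_subscription_finishing() that returns the datatuple send_mass_mail expects:

# myapp/management/commands/notify_expiring.py
from django.core.mail import send_mass_mail
from django.core.management.base import BaseCommand

from myapp.utils import check_subscription_finishing  # hypothetical helper

class Command(BaseCommand):
    help = "Email users whose subscription ends in the next couple of days."

    def handle(self, *args, **options):
        datatuple = check_subscription_finishing()
        send_mass_mail(datatuple)

A crontab entry running python manage.py notify_expiring then executes it on whatever schedule you need.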
Using django-cron is much easier and simpler.
EDIT: Added a tip
from django.core.mail import send_mass_mail
from django_cron import cronScheduler, Job

class sendMail(Job):
    # run_every is in seconds: run every 300 seconds (5 minutes)
    run_every = 300

    def job(self):
        # This will be executed every 5 minutes.
        # check_subscription_finishing() is your own function and should
        # return the datatuple that send_mass_mail expects.
        datatuple = check_subscription_finishing()
        send_mass_mail(datatuple)

# ...and just register it:
cronScheduler.register(sendMail)