Locust load testing running into an infinite loop - Python

I am really new to Python and Locust load testing.
I created the Python script below, but it runs into an infinite loop.
I have used:
Number of total users to simulate = 1
Spawn rate = 1
Please let me know where I am going wrong.
from locust import HttpUser, task, between, TaskSet

class Behaviour(TaskSet):
    @task
    def first_task(self):
        self.client.get('/LoginHome.aspx')

class WebsiteTestUser(HttpUser):
    tasks = [Behaviour]
    wait_time = between(5, 15)
    host = "http://DEV/LoadTesting"
Thanks.

It doesn't look like you've done anything wrong. I assume that by "infinite loop" you mean you tell Locust to spawn 1 user, one user starts, and then the endpoint defined in your task gets hit repeatedly until you stop Locust.
This is by design. When a Locust user finishes its tasks, a new one is spawned in its place, so Locust keeps the specified number of users running indefinitely. As a load testing tool, Locust isn't told to run the defined tasks X number of times; instead, you define a user flow from start to finish and then tell it how many users to throw at the system under test, to find out whether your system can handle the load you expect.
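If you do want a simulated user to bail out of its loop explicitly instead of running forever, Locust provides a StopUser exception; a minimal sketch, assuming your Locust version exposes locust.exception.StopUser:
from locust import HttpUser, task, between
from locust.exception import StopUser  # assumption: available in your Locust version

class WebsiteTestUser(HttpUser):
    wait_time = between(5, 15)
    host = "http://DEV/LoadTesting"

    @task
    def first_task(self):
        self.client.get('/LoginHome.aspx')
        # Raising StopUser stops this simulated user's loop.
        raise StopUser()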

Related

Locust class-picker mode auto-resets statistics when ramping up users

I have recently started working with Locust. I followed the class-picker docs and practiced with a simple test, but quickly realized that every time I increase the number of users during the test, Locust resets the statistics table. Besides, Locust's behavior when ramping up users is quite strange: instead of increasing from 2 users to 5 users, it sets the number of users to 0 first and then increases to 5. Is that expected when running Locust in class-picker mode?
Here is test.py
from locust import HttpUser, constant, task

class MyReqRes1(HttpUser):
    wait_time = constant(1)
    host = "http://example.com"

    @task
    def get_users(self):
        res = self.client.get("/hello")
        print(res.status_code)

class MyReqRes2(HttpUser):
    wait_time = constant(1)
    host = "http://example.com"

    @task
    def get_users(self):
        res = self.client.get("/world")
        print(res.status_code)
And here is my command to run:
locust -f test.py --class-picker
I am trying to keep Locust ramping up users normally (the way it does without the --class-picker argument) and to keep the statistics table as well.
The user class UI picker is designed to let you choose a user class to use for the test run. This means that user class will be used for the whole duration of the test. If you want to choose a different user class, you need to start a new test, which results in the behavior you described: Locust stops all currently running users, resets the stats, switches user classes, and starts the new test by spawning new users at your defined spawn rate until it reaches the desired number of users.
In other words, it's designed to let you have multiple different test scenarios defined in the same file and let you choose the one you want at run time.
The user class UI picker does not allow you to choose one user class, start a test and get X number of users, choose another class, add Y users, choose another class, add Z users, end up with X+Y+Z running users, which sounds like what you're trying to do. There is not currently a way to accomplish that.
Of course, you're welcome to put together a pull request with that kind of behavior and it can be reviewed and perhaps included in a future version.

Can reusing the same process name in a loop generate zombie processes?

My script has to run for over a day, and its core cycle runs 2-3 times per minute. I used multiprocessing to issue commands simultaneously, and each process should be terminated/joined within one cycle.
But in reality I found the software ends up running out of swap memory or the computer freezes, which I guess is caused by accumulated processes. In another session, while the program is running, I can see the number of Python PIDs growing abnormally over time, so I assume this must be a process issue. What I don't understand is how this happens, since I made sure each cycle's process is finished within that cycle before proceeding to the next one.
So I am guessing that the actual computation needs more time than the 'terminate()/join()' step allows, and that I should not "reuse" the same object name. Is this a reasonable guess, or is there another possibility?
import multiprocessing

def function(a, b):
    try:
        pass  # do stuff: audio / serial things
    except Exception:
        return

flag_for_2nd_cycle = 0
for i in range(1500):  # main loop for running a long time
    # do something
    if flag_for_2nd_cycle == 1:
        while my_process.is_alive():
            if (timecondition) < 30:  # kill the process if it is still alive
                my_process.terminate()
                my_process.join()
    flag_for_2nd_cycle = 1
    my_process = multiprocessing.Process(target=function, args=[c, d])
    my_process.start()
    # do something; other process jobs going on, for example
    my_process2 = multiprocessing.Process()  # *stuff
    my_process2.terminate()
    my_process2.join()
Based on your comment, you are controlling three projectors over serial ports.
The simplest way to do that would be to open three serial connections (using pySerial), then run a loop where you check each connection for available data and, if there is any, read and process it. Then you send commands to each of the projectors in turn.
Depending on the speed of the serial link you might not need more than this.
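A minimal sketch of that loop, assuming pySerial is installed; the device paths, baud rate and command below are placeholders for your projectors' actual settings:
import time
import serial  # pySerial

# Placeholder device paths and baud rate; replace with your projectors' settings.
ports = [serial.Serial(path, baudrate=9600, timeout=0)
         for path in ("/dev/ttyUSB0", "/dev/ttyUSB1", "/dev/ttyUSB2")]

while True:
    for port in ports:
        # Non-blocking check for incoming data on this connection.
        if port.in_waiting:
            data = port.read(port.in_waiting)
            # ... process this projector's response here ...
        port.write(b"STATUS?\r\n")  # placeholder command for this projector
    time.sleep(0.1)  # avoid busy-waiting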

Get Locust IO info for each generated user

The company I work for commits to delivering 99% of its service responses in less than 1 second and 99.9% of its responses in less than 2 seconds. How can I make Locust report whether this rule has been broken for any of the virtual users?
My first approach would be to make a method in my user (inherited from locust.HttpLocust) that detects when this event happens and records it in a per-user log. I think this would work, but if I have 1000 users it means I will have 1000 different log files.
A second approach would be to create a single event log, but I guess that would require me to deal with asynchronous IO handling. I guess there must be a more Pythonic way.
Locust and performance newbie here. Sorry if my question is misguided.
You can add duration checks at the end of each @task like:
import datetime

@task
def service_request(self):
    r = self.client.get("/your/service/path")
    assert r.elapsed < datetime.timedelta(seconds=1), "Request took more than 1 second"
This way you will have a report at the level of individual HTTP requests showing which ones succeeded and which took more than 1 second.
More information: Locust Assertions - A Complete User Manual
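If you would rather have slow responses show up as failures in Locust's own statistics instead of raising assertion errors, here is a minimal sketch using Locust's catch_response option (the path is a placeholder, and the method lives in your user class):
import datetime
from locust import task

@task
def service_request(self):
    # catch_response lets the task decide whether the request counts as a failure.
    with self.client.get("/your/service/path", catch_response=True) as response:
        if response.elapsed > datetime.timedelta(seconds=1):
            response.failure("Request took more than 1 second")
        else:
            response.success()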
Alternatively, you can consider running your test using the Taurus tool as a wrapper. Taurus has a powerful and flexible Pass/Fail Criteria subsystem which analyses the results on the fly and returns a non-zero exit status code, which can be used as a failure indicator for shell scripts or continuous integration systems.

What is the most efficient way to run independent processes from the same application in Python

I have a script that ultimately executes two functions. It polls for data on a time interval (it runs as a daemon, and the data is retrieved from a shell command run on the local system) and, once it receives this data, will: 1.) function 1 - write this data to a log file, and 2.) function 2 - inspect the data and then send an email IF that data meets certain criteria.
The logging happens every time, but the alert may not. The issue is that, when an alert needs to be sent, if the email connection stalls or takes a long time to connect to the server, it obviously causes the next poll of the data to stall (for an unpredictable amount of time, depending on the server), and in my case it is very important that the polling interval remains consistent (for analytics purposes).
What is the most efficient way, if any, to keep the email process working independently of the logging process while still operating within the same application and depending on the same data? I was considering creating a separate thread for the mailer, but that kind of seems like overkill in this case.
I'd rather not set a short timeout on the email connection, because I want to give the process some chance to connect to the server, while still allowing the logging to be written consistently on the given interval. Some code:
def send(self, msg_):
    """
    Send the alert message
    :param str msg_: the message to send
    """
    self.msg_ = msg_
    ar = alert.Alert()
    ar.send_message(msg_)

def monitor(self):
    """
    Post to the log file and
    send the alert message when
    applicable
    """
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:  # if there is data in general...
        x = read.get_failed()  # store bad data
        msg_ += self.write_avg(read)
        msg_ += "==============================================="
        self.ctlog.update_templog(msg_)  # write general data to log
        if x:
            self.send(x)  # if bad data, send...
This is exactly the kind of case you want to use threading/subprocesses for. Fork off a thread for the email, which times out after a while, and keep your daemon running normally.
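A minimal sketch of that idea, reusing the monitor() and send() methods above; the only new piece is the threading module:
import threading

def monitor(self):
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:
        x = read.get_failed()
        msg_ += self.write_avg(read)
        self.ctlog.update_templog(msg_)  # logging stays on the polling thread
        if x:
            # Hand the alert off to a daemon thread so a stalled SMTP
            # connection cannot delay the next polling cycle.
            threading.Thread(target=self.send, args=(x,), daemon=True).start()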
Possible approaches that come to mind:
Multiprocessing
Multithreading
Parallel Python
My personal choice would be multiprocessing as you clearly mentioned independent processes; you wouldn't want a crashing thread to interrupt the other function.
You may also refer to this before making your design choice: Multiprocessing vs Threading Python
Thanks everyone for the responses. They helped very much. I went with threading, but also updated the code to be sure it handled failing threads. Ran some regressions and found that the subsequent processes were no longer being interrupted by stalled connections and the log was being updated on a consistent schedule. Thanks again!!

ec2 wait for instance to come up with timeout [Python]

I'm using the AWS Python API (boto3). My script starts a few instances and then waits for them to come online before proceeding to do stuff. I want the wait to time out after a predefined period, but I can't find any API for that in Python. Any ideas? A snippet of my current code:
def waitForInstance(id):
    runningWaiter = self.ec2c.get_waiter("instance_status_ok")
    runningWaiter.wait(InstanceIds=[id])
    instance = ec2resource.Instance(id)
    return instance.state
I can certainly do something like running this piece of code in a separate thread and terminate it if needed, but I was wondering whether there is already a built in API in boto3 for that and I'm just missing it.
A waiter has a configuration associated with it which can be accessed (using your example above) as:
runningWaiter.config
One of the settings in this config is max_attempts which controls how many attempts will be tried before giving up. The default value is 40. You can change that value like this:
runningWaiter.config.max_attempts = 10
This isn't directly controlling a timeout as your question asked but will cause the waiter to give up earlier.
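Putting that together, a minimal sketch (the instance_status_ok waiter polls every 15 seconds by default, so max_attempts times that delay gives a rough overall timeout; instance_id is a placeholder):
from botocore.exceptions import WaiterError

runningWaiter = self.ec2c.get_waiter("instance_status_ok")
runningWaiter.config.max_attempts = 10  # with the default 15 s delay, roughly a 150 s ceiling

try:
    runningWaiter.wait(InstanceIds=[instance_id])
except WaiterError:
    # The waiter gave up before the instance passed its status checks;
    # decide here whether to retry, alert, or abort.
    print("Instance did not reach status ok within the allotted attempts")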
Why not check the instance status from time to time?
# code copied from the boto3 docs
for status in ec2.meta.client.describe_instance_status()['InstanceStatuses']:
    print(status)
Reference: http://boto3.readthedocs.org/en/latest/guide/migrationec2.html
BTW, it is better to tag all the instances using a standard naming convention. Querying AWS resources by their original IDs is a maintenance nightmare.
You could put a sleep timer in your code. Sleep for x minutes, check whether the instance has finished coming up, and go back to sleep if not. After y attempts, take some sort of action.
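A minimal sketch of that polling loop, using describe_instance_status as shown above (the attempt count and interval are placeholders):
import time

def wait_for_instance(ec2_client, instance_id, attempts=20, interval=30):
    """Poll until the instance passes its status checks, or give up."""
    for _ in range(attempts):
        statuses = ec2_client.describe_instance_status(
            InstanceIds=[instance_id])['InstanceStatuses']
        # The list is empty until the instance is running and has a status.
        if statuses and statuses[0]['InstanceStatus']['Status'] == 'ok':
            return True
        time.sleep(interval)  # go back to sleep and try again
    return False  # timed out; take some sort of action here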
