I'd like to simulate spiky traffic, so that, for example:
in the first 5 minutes there are only 50 users (instant hatch of 50 at time T0)
then from the 5th to the 10th minute we have 100 users (instant hatch of +50 at T+5)
then 150 (instant hatch of +50 at T+10)
etc.
Is it possible to create an equal number of users each step, but instead of doing that every second, do it every xx minutes?
There is no such built-in feature (https://github.com/locustio/locust/issues/1353 might solve this if it is ever implemented).
One workaround is to spawn all your users right away (using a spawn rate of something like 100/s) and have them sleep until it is time to run:
import time
from locust import HttpUser, task

start = time.time()

class User1(HttpUser):
    @task
    def mytask(self):
        # do actual task
        pass

class User2(HttpUser):
    @task
    def mytask(self):
        while time.time() - start < 300:
            time.sleep(1)
        # do actual task

class User3(HttpUser):
    @task
    def mytask(self):
        while time.time() - start < 600:
            time.sleep(1)
        # do actual task

...
You can probably do something clever and put it all in one class, but I'll leave that as an exercise :)
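For what it's worth, a rough, untested sketch of that one-class approach might look like this (the global counter and the 50-users-per-step numbers are my own assumptions, not from the answer above):
import itertools
import time

from locust import HttpUser, task

start = time.time()
user_counter = itertools.count()

class SteppedUser(HttpUser):
    def on_start(self):
        # every block of 50 spawned users activates 300 seconds later than the previous block
        self.activate_at = (next(user_counter) // 50) * 300

    @task
    def mytask(self):
        while time.time() - start < self.activate_at:
            time.sleep(1)
        # do actual task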
Locust 2.8.6 Update
Now you can benefit from using custom shapes. Read more at Locust Documentation.
You implement the tick() method, which returns a tuple of (user count, spawn rate), or None to stop the test.
Here is a code example:
from locust import LoadTestShape


class SharpStepShape(LoadTestShape):
    increase_delay = 300  # 5 minutes for increase
    increase_size = 50    # number of extra users per increase

    def tick(self):
        run_time = self.get_run_time()
        step_number = int(run_time / self.increase_delay) + 1
        user_limit = int(step_number * self.increase_size)
        return user_limit, self.increase_size
Then just import this shape into your locustfile and Locust will use it for your load test.
from locust import User
from sharp_step_shape import SharpStepShape


class PerformanceUser(User):
    pass
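For completeness, a slightly fuller locustfile sketch might look like this (the host, endpoint, and wait time are placeholders, not part of the original answer):
# locustfile.py
from locust import HttpUser, task, between

from sharp_step_shape import SharpStepShape  # the shape class shown above


class PerformanceUser(HttpUser):
    wait_time = between(1, 2)  # placeholder pacing between tasks

    @task
    def index(self):
        # placeholder request; replace with your real workload
        self.client.get("/")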
I'm pretty new to Python. I'm trying to make a text-based strategy-like game and I want a value that increases constantly (I want some other values to increase or decrease at the same time too, but this is just for a start). But if I use a while True loop I can't do anything else in the program: it just keeps increasing the value and nothing else can run. I want it to increase continuously while I can get some input from the user or run some other functions. Please tell me if there is a module I can use, or anything else.
import time

print("PLANET EARTH" + "" + "\n Buildings:",
      " ", "Resources:")

class ironMine():
    def __init__(self, bc, ps, w):
        self.buildingCost = bc
        self.productionSpeed = ps
        self.warehouse = w

    def production(self):
        while True:
            print(" " +
                  "iron:", self.warehouse,
                  end="\r")
            self.warehouse += self.productionSpeed
            time.sleep(0.5)
            x = input("Write something")
            if x == upgrade:
                self.productionSpeed += 5
            else:
                print("there is no such command")

t1 = ironMine([300,200,100], 10, 0)
t1.production()
For example, this part is the resource production part for iron. I just added a random input to show that I can't get it to work. And I don't know whether this part, if x == upgrade: self.productionSpeed += 5, will update the existing self.productionSpeed value for the object t1.
Effectively, you are trying to implement your own version of a clock, which counts up at some rate, using a loop like this:
value = initial_value
while True:
    time.sleep(1)
    value += rate
There is more to your code than that, of course; you also want to read user input and control the rate based on that. But at the core of it, you're trying to create a clock, and it's not working because your clock stops "ticking" while input is waiting for the user to enter something.
Instead of writing a clock, you should use one from the standard library. The time.monotonic() function works like a clock, in the sense that if you call the function twice, the difference between the two numbers is the number of seconds which elapsed between the two function calls.
The simple "clock" above, which has a variable value increasing at a fixed rate, can be replaced by a function call which calculates the current value based on the number of seconds that have elapsed, instead of continuously maintaining its current value in a variable:
import time

initial_time = time.monotonic()

def get_current_value():
    current_time = time.monotonic()
    seconds = current_time - initial_time
    # use int(seconds) for discrete updates once per second
    return initial_value + rate * int(seconds)
For your case, where the rate can change dynamically, it is a bit more complicated, but the key idea is the same; don't write your own clock, use an existing one. Since there are two things we need to be able to do - get the current value, and change the rate - let's encapsulate those two operations in a class:
import time

class TimeBasedVariable:
    def __init__(self, initial_value, rate):
        self.initial_value = initial_value
        self.rate = rate
        self.initial_time = time.monotonic()

    def get_value(self, current_time=None):
        if current_time is None:
            current_time = time.monotonic()
        seconds = current_time - self.initial_time
        return self.initial_value + self.rate * int(seconds)

    def set_rate(self, rate):
        # reset the reference point to the current time
        t = time.monotonic()
        self.initial_value = self.get_value(t)
        self.initial_time = t
        self.rate = rate
Note that I simplified the problem slightly by making the variable update every second, rather than every 0.5 seconds. If you do want it to update every half-second, just write int(2 * seconds) instead of int(seconds).
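To connect this back to the game, here is a rough sketch of how iron production could use the class (the variable names and the "upgrade" command are just illustrative, not part of the class above):
# illustrative only: production keeps "ticking" even while input() blocks
iron = TimeBasedVariable(initial_value=0, rate=10)  # 10 iron per second

while True:
    command = input("Write something: ")
    if command == "upgrade":
        iron.set_rate(iron.rate + 5)  # production speeds up from now on
    else:
        print("there is no such command")
    print("iron:", iron.get_value())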
I need to generate sine wave data (only positive values) between 0 and a specified interval, and for each value of the sine wave data call some function.
Currently, I am generating the sine wave data between 0 and the specified interval using the code below:
np.sin(np.linspace(0, 180, count) * np.pi / 180.)
It generates sine values for angles between 0 and 180 degrees, and the size of the array is equal to count.
Now I need to call some function for each value of the generated array, and the calls for all values should complete within some predefined time interval. I tried to use the sleep function, dividing the predefined time interval by count.
I am wondering if there is any other way to achieve this, because the function execution itself can take some time.
Let's say you want to run function foo() every 10 seconds, but the actual running time of foo() is unknown. The best you can do, without resorting to hard real-time programming, is to get the current time before and after the call to foo() and then sleep() for the rest of the interval:
import time

INTERVAL = 10  # seconds

# Repeat this fragment as needed
start = time.time()  # in seconds
foo()
elapsed = time.time() - start
remains = INTERVAL - elapsed
time.sleep(max(0, remains))  # guard against foo() taking longer than INTERVAL
However, keep in mind that sleep sleeps at least that much time. It may sleep longer, due to scheduling, in which case your function foo may be executed less frequently than needed.
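Applied to the sine-wave question above, that fragment might be used roughly like this (foo and the total interval are placeholders I've assumed, not taken from the posts):
import time
import numpy as np

def foo(value):
    # placeholder for the real work done per value
    print(value)

count = 10
total_time = 20.0               # predefined overall interval (assumed)
per_call = total_time / count   # time budget for each call

values = np.sin(np.linspace(0, 180, count) * np.pi / 180.)

for v in values:
    start = time.time()
    foo(v)
    remains = per_call - (time.time() - start)
    if remains > 0:             # only sleep if foo() finished within its budget
        time.sleep(remains)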
Just to put some Python around @DYZ's answer, you could use a decorator or a context manager to "patch" your target function and make it take the time you want it to take.
In the following code, you have a list with five elements and you want to print each one; the total time is 5 s, so printing each element should take 1 s.
import time

data = [1, 2, 3, 4, 5]

# Decorator.
def patch_execution_time(limit):
    def wrapper(func):
        def wrapped(*args, **kwargs):
            init = time.time()
            result = func(*args, **kwargs)
            end = time.time()
            elapsed = end - init
            if elapsed < limit:
                time.sleep(limit - elapsed)
            return result
        return wrapped
    return wrapper

# Context manager, more useful if the total time interval
# is dynamic.
class patch_execution_time_cxt(object):
    def __init__(self, operation, time):
        self.operation = operation
        self.time = time

    def __enter__(self):
        return patch_execution_time(self.time)(self.operation)

    def __exit__(self, *args):
        pass

# Two sample functions, one decorated and the other for
# illustrating the use of the context manager.
@patch_execution_time(1)
def foo(item):
    print(item)

def foo_1(item):
    print(item)

print("Using decorated ...")
for item in data:
    foo(item)

print("Using context manager ...")
with patch_execution_time_cxt(foo_1, 1) as patched_foo:
    for item in data:
        patched_foo(item)
I have a script that generates millions of Celery tasks, one per row in the DB. Is there a way to throttle it so that it doesn't completely flood Celery?
Ideally I want to keep Celery busy, but I don't want the length of the Celery queue to exceed a few dozen tasks since that's just a waste of memory (especially since without some kind of throttle the script will add millions of tasks to the queue almost instantly).
I've spent some time on this problem over the past several days and came up with what I'm calling a CeleryThrottle object. Basically, you tell it how many items you want in a queue and it does its best to keep the queue between that size and 2× that size.
So here's the code (assumes Redis broker, but easily changed):
# coding=utf-8
from collections import deque
import time

import redis
from django.conf import settings
from django.utils.timezone import now


def get_queue_length(queue_name='celery'):
    """Get the number of tasks in a celery queue.

    :param queue_name: The name of the queue you want to inspect.
    :return: the number of items in the queue.
    """
    r = redis.StrictRedis(
        host=settings.REDIS_HOST,
        port=settings.REDIS_PORT,
        db=settings.REDIS_DATABASES['CELERY'],
    )
    return r.llen(queue_name)


class CeleryThrottle(object):
    """A class for throttling celery."""

    def __init__(self, min_items=100, queue_name='celery'):
        """Create a throttle to prevent celery runaways.

        :param min_items: The minimum number of items that should be enqueued.
        A maximum of 2× this number may be created. This minimum value is not
        guaranteed and so a number slightly higher than your max concurrency
        should be used. Note that this number includes all tasks unless you use
        a specific queue for your processing.
        """
        self.min = min_items
        self.max = self.min * 2

        # Variables used to track the queue and wait-rate
        self.last_processed_count = 0
        self.count_to_do = self.max
        self.last_measurement = None
        self.first_run = True

        # Use a fixed-length queue to hold last N rates
        self.rates = deque(maxlen=15)
        self.avg_rate = self._calculate_avg()

        # For inspections
        self.queue_name = queue_name

    def _calculate_avg(self):
        return float(sum(self.rates)) / (len(self.rates) or 1)

    def _add_latest_rate(self):
        """Calculate the rate that the queue is processing items."""
        right_now = now()
        elapsed_seconds = (right_now - self.last_measurement).total_seconds()
        self.rates.append(self.last_processed_count / elapsed_seconds)
        self.last_measurement = right_now
        self.last_processed_count = 0
        self.avg_rate = self._calculate_avg()

    def maybe_wait(self):
        """Stall the calling function or let it proceed, depending on the queue.

        The idea here is to check the length of the queue as infrequently as
        possible while keeping the number of items in the queue as closely
        between self.min and self.max as possible.

        We do this by immediately enqueueing self.max items. After that, we
        monitor the queue to determine how quickly it is processing items. Using
        that rate we wait an appropriate amount of time or immediately press on.
        """
        self.last_processed_count += 1
        if self.count_to_do > 0:
            # Do not wait. Allow process to continue.
            if self.first_run:
                self.first_run = False
                self.last_measurement = now()
            self.count_to_do -= 1
            return

        self._add_latest_rate()
        task_count = get_queue_length(self.queue_name)
        if task_count > self.min:
            # Estimate how long the surplus will take to complete and wait that
            # long + 5% to ensure we're below self.min on next iteration.
            surplus_task_count = task_count - self.min
            wait_time = (surplus_task_count / self.avg_rate) * 1.05
            time.sleep(wait_time)

            # Assume we're below self.min due to waiting; max out the queue.
            if task_count < self.max:
                self.count_to_do = self.max - self.min
            return

        elif task_count <= self.min:
            # Add more items.
            self.count_to_do = self.max - task_count
            return
Usage looks like:
throttle = CeleryThrottle()
for item in really_big_list_of_items:
    throttle.maybe_wait()
    my_task.delay(item)
Pretty simple and hopefully pretty flexible. With that in place, the code will monitor your queue and add waits to your loop if the queue is getting too long. This is in our github repo in case there are updates.
As it does this, it will track the rolling average speed of the tasks and will try not to check the queue length more often than needed. For example, if tasks take two minutes each to run, after putting 100 items in the queue it can wait quite a while before having to check the length of the queue again. A simpler version of this script could check the queue length every time through the loop, but that would add unnecessary delay. This version tries to be smart about it at the cost of sometimes being wrong (in which case the queue drops below min_items).
I'm using SimPy in Python to create a discrete event simulation that requires resources to be available based on a schedule input by the user, in my case from a csv file. The aim is to represent different numbers of the same resource (e.g. staff) being available at different times of day. As far as I can tell, this isn't something that is available in base SimPy, unlike resource priorities.
I have managed to get this working and have included the code below to show how. However, I wanted to ask the community if there is a better way to achieve this functionality in SimPy.
The code below works by requesting the resources at the start of each day for the times they are not supposed to be available, with a much higher priority to ensure the dummy processes get the resource. The resources are then released at the appropriate times for use by other events/processes. As I say, it works, but it seems wasteful, with a lot of dummy processes working to ensure the correct true availability of resources. Any comments which would lead to improvements would be welcomed.
so the csv looks like:
Number time
0 23
50 22
100 17
50 10
20 8
5 6
where Number represents the number of staff that become available at the defined time. For example: there will be 5 staff from 6-8, 20 from 8-10, 50 from 10-17 and so on until the end of the day.
The code:
import csv
import simpy

# empty list ready to hold the input data in the csv
input_list = []

# a dummy process that "uses" staff until the end of the current day
def take_res():
    req = staff.request(priority=-100)
    yield req  # Request a staff resource at set priority
    yield test_env.timeout(24 - test_env.now)

# A dummy process that "uses" staff for the time those staff should not
# be available for the real processes
def request_res(delay, avail_time):
    req = staff.request(priority=-100)
    yield req  # Request a staff resource at set priority
    yield test_env.timeout(delay)
    yield staff.release(req)
    # pass time it is avail for
    yield test_env.timeout(avail_time)
    test_env.process(take_res())

# used to print current levels of resource usage
def print_usage():
    print('At time %0.2f %d res are in use' % (test_env.now, staff.count))
    yield test_env.timeout(0.5)
    test_env.process(print_usage())

# used to open the csv and read the data into a list
with open('staff_schedule.csv', mode="r") as infile:
    reader = csv.reader(infile)
    next(reader, None)  # ignore header
    for row in reader:
        input_list.append(row[:2])

# calculates the time the current number of resources will be
# available for and adds to the list
i = 0
for row in input_list:
    if i == 0:
        row.append(24 - int(input_list[i][1]))
    else:
        row.append(int(input_list[i-1][1]) - int(input_list[i][1]))
    i += 1

# converts list to tuple of tuples to prevent any accidental
# edits from this point in
staff_tuple = tuple(tuple(row) for row in input_list)
print(staff_tuple)

# define environment and creates resources
test_env = simpy.Environment()
staff = simpy.PriorityResource(test_env, capacity=sum(int(l[0]) for l in staff_tuple))

# for each row in the tuple run dummy processes to hold resources
# according to schedule in the csv
for item in staff_tuple:
    print(item[0])
    for i in range(int(item[0])):
        test_env.process(request_res(int(item[1]), int(item[2])))

# run event to print usage over time
test_env.process(print_usage())

# run for 25 hours - so 1 day
test_env.run(until=25)
I tried something else: I overloaded the Resource class, adding only one method, and while I don't fully understand the source code, it seems to work properly. You can tell the resource to change its capacity somewhere in your simulation.
from simpy.resources.resource import Resource, Request, Release
from simpy.core import BoundClass
from simpy.resources.base import BaseResource


class VariableResource(BaseResource):

    def __init__(self, env, capacity):
        super(VariableResource, self).__init__(env, capacity)
        self.users = []
        self.queue = self.put_queue

    @property
    def count(self):
        return len(self.users)

    request = BoundClass(Request)
    release = BoundClass(Release)

    def _do_put(self, event):
        if len(self.users) < self.capacity:
            self.users.append(event)
            event.usage_since = self._env.now
            event.succeed()

    def _do_get(self, event):
        try:
            self.users.remove(event.request)
        except ValueError:
            pass
        event.succeed()

    def _change_capacity(self, capacity):
        self._capacity = capacity
I think this should work, but I'm not 100% confident about how the triggers work.
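To illustrate what "somewhere in your simulation" could look like, here is a rough, untested sketch that drives _change_capacity from a scheduling process (the schedule values just mirror the csv from the question; the process name is my own):
import simpy

def capacity_scheduler(env, resource, schedule):
    """schedule: (time, capacity) pairs sorted by time."""
    for at, cap in schedule:
        yield env.timeout(at - env.now)  # wait until the next switch time
        resource._change_capacity(cap)

env = simpy.Environment()
staff = VariableResource(env, capacity=0)  # the class defined above
env.process(capacity_scheduler(
    env, staff, [(6, 5), (8, 20), (10, 50), (17, 100), (22, 50), (23, 0)]))
env.run(until=24)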
I solved it by creating a Resource for each time window. Each arrival is processed in the function service, and each customer is assigned to a resource depending on the arrival time. In case a customer has to wait in the queue and has to be re-assigned to the next time window, it is removed from the current Resource and re-assigned to the next Resource. This is done by modifying the request as:
with self.Morning.request() as req1:
    yield req1 | self.env.timeout(self.durationMorning)
The code:
import simpy
import numpy as np
import itertools


class Queue():
    def __init__(self, env, N_m, N_e):
        self.Arrival = {}
        self.StartService = {}
        self.FinishService = {}
        self.Morning = simpy.Resource(env, N_m)
        self.Evening = simpy.Resource(env, N_e)
        self.env = env
        self.durationMorning = 30

    # arrivals/second
    def t_arrival(self, t):
        if t < self.durationMorning:
            return 1
        else:
            return 2

    def t_service(self):
        return 5

    def service(self, i):
        arrival_time = self.env.now
        if arrival_time == self.durationMorning:
            yield self.env.timeout(0.0001)

        # Add Arrival
        system.Arrival[i] = arrival_time

        # Morning shift
        if self.env.now < self.durationMorning:
            with self.Morning.request() as req1:
                yield req1 | self.env.timeout(self.durationMorning)
                if self.env.now < self.durationMorning:
                    system.StartService[i] = self.env.now
                    yield self.env.timeout(self.t_service())
                    print(f'{i} arrived at {self.Arrival[i]} done at {self.env.now} by 1')
                    self.FinishService[i] = self.env.now

        # Evening shift
        if (self.env.now >= self.durationMorning) & (i not in self.FinishService):
            with self.Evening.request() as req2:
                yield req2
                system.StartService[i] = self.env.now
                yield self.env.timeout(self.t_service())
                print(f'{i} arrived at {self.Arrival[i]} done at {self.env.now} by 2')
                self.FinishService[i] = self.env.now

    def arrivals(self):
        for i in itertools.count():
            self.env.process(self.service(i))
            t = self.t_arrival(self.env.now)
            yield self.env.timeout(t)
N_morning, N_evening = 2, 1  # shift capacities (assumed here; consistent with the output below)
env = simpy.Environment()
system = Queue(env, N_morning, N_evening)
system.env.process(system.arrivals())
system.env.run(until=60)
0 arrived at 0 done at 5 by 1
1 arrived at 1 done at 6 by 1
2 arrived at 2 done at 10 by 1
3 arrived at 3 done at 11 by 1
4 arrived at 4 done at 15 by 1
5 arrived at 5 done at 16 by 1
6 arrived at 6 done at 20 by 1
7 arrived at 7 done at 21 by 1
8 arrived at 8 done at 25 by 1
9 arrived at 9 done at 26 by 1
10 arrived at 10 done at 30 by 1
11 arrived at 11 done at 31 by 1
12 arrived at 12 done at 35 by 2
13 arrived at 13 done at 40 by 2
14 arrived at 14 done at 45 by 2
15 arrived at 15 done at 50 by 2
16 arrived at 16 done at 55 by 2
SimPy related
Maybe you can use PreemptiveResource (see this example). With this, you would only need one blocker-process per resource as it can just "kick" less important processes.
Python related
Document your code. What’s the purpose of take_res() and request_res()? (Why do both functions use priority=-100, anyway?)
Use better names. the_list or the_tuple is not very helpful.
Instead of the_list.append(row[0], row[1]) you can do the_list.append(row[:2]).
Why do you convert the list of lists into a tuple of tuples? As far as I can see, there is no benefit. It just adds extra code and thus extra confusion and extra possibilities for programming errors.
You should leave the with open(file) block as soon as possible (after the first four lines, in your case). There's no need to keep the file open longer than necessary, and once you are done iterating over all lines, you no longer need it (see the sketch below).
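A rough sketch of the reading and duration-calculation steps with those suggestions applied (the variable names are just examples):
import csv

with open('staff_schedule.csv', mode='r') as infile:
    reader = csv.reader(infile)
    next(reader, None)                     # skip the header row
    schedule_rows = [row[:2] for row in reader]
# the file is closed here; everything below works on the in-memory list

# work out how long each staffing level is available for
staff_schedule = []
previous_time = 24
for number, start_time in schedule_rows:
    staff_schedule.append((int(number), int(start_time), previous_time - int(start_time)))
    previous_time = int(start_time)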
This is how I solved it for my application. It's not perfect but was the best I could do given my basic level of skill with Python and SimPy.
The result is the correct number of Advisers are available at the desired times.
First I define a store and set the capacity to be equal to the total number of adviser instances that will exist within the simulation.
self.adviser_store = simpy.FilterStore(self.env,
                                       capacity=self.total_ad_instances)
The instances of the Adviser class required are created in an initialization step which for brevity I have not included. I actually use a JSON file to customize the individual adviser instances which are then placed in a list.
The run parameter in the class definition below is actually another class that contains all the info related to the current run of the simulation - so, for example, it contains the start and end dates for the simulation. self.start_date therefore defines the date that the adviser starts working, and self.run.start_date is the start date for the simulation.
class Adviser(object):
    def __init__(self, run, id_num, start_time, end_time, start_date, end_date, ad_type):
        self.env = run.env
        self.run = run
        self.id_num = id_num
        self.start_time = start_time
        self.end_time = end_time
        self.start_date = datetime.datetime.strptime(start_date, '%Y, %m, %d')
        self.end_date = datetime.datetime.strptime(end_date, '%Y, %m, %d')
        self.ad_type = ad_type
        self.avail = False
        self.run.env.process(self.set_availability())
So as you can see, creating an adviser also starts the process that sets its availability. In the example below I've simplified it to set the same availability each day for a given date range. You could of course set different availabilities depending on the date/day etc.
def set_availability(self):
    # start_delayed comes from simpy.util
    # time in hours until the resource becomes available / unavailable, applied below
    start_delay = self.start_time + (self.start_date - self.run.start_date).total_seconds() / 3600
    end_delay = self.end_time + (self.start_date - self.run.start_date).total_seconds() / 3600
    repeat = (self.end_date - self.start_date).days + 1  # defines how many days to repeat it for
    for i in range(repeat):
        start_delayed(self.run.env, self.add_to_store(), start_delay)
        start_delayed(self.run.env, self.remove_from_store(), end_delay)
        start_delay += 24
        end_delay += 24
    yield self.run.env.timeout(0)

def add_to_store(self):
    self.run.ad_avail.remove(self)       # take adviser from a list
    self.run.adviser_store.put(self)     # and put it in the store
    yield self.run.env.timeout(0)

def remove_from_store(self):
    # get itself from the store
    current_ad = yield self.run.adviser_store.get(lambda item: item.id_num == self.id_num)
    self.run.ad_avail.append(current_ad)  # and put it back in the list
    yield self.run.env.timeout(0)
So essentially customers can only request advisers from the store, and the advisers will only be in the store at certain times. The rest of the time they are in the list attached to the current run of the simulation.
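For context, the customer side is then just a normal get/put against the store; a minimal, illustrative customer process might look something like this (the one-hour service time is a placeholder):
def customer(env, run):
    adviser = yield run.adviser_store.get()   # blocks until some adviser is available
    yield env.timeout(1)                      # placeholder service time
    yield run.adviser_store.put(adviser)      # hand the adviser back for the next customer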
I think there is still a pitfall here: the adviser object may be in use when it is due to become unavailable. I haven't noticed whether this happens yet, or what the impact would be if it does.
I have created a simple score system for my pygame, but it's pausing the game. I know it's because of time.sleep but I don't know how to sort it out.
The score system is supposed to add 100 to the score every 5 seconds while start is true; code:
while start == True:
    time.sleep(5)
    score = score + 100
Full code with indentation: http://pastebin.com/QLd3YTdJ
code at line : 156-158
Thank you
Instead of using sleep, which stalls the game until time has elapsed, you want to count up an internal timer with the number of seconds which have passed. When you hit 5 seconds, increment the score and then reset the timer.
Something like this:
scoreIncrementTimer = 0
lastFrameTicks = pygame.time.get_ticks()

while start == True:
    thisFrameTicks = pygame.time.get_ticks()
    ticksSinceLastFrame = thisFrameTicks - lastFrameTicks
    lastFrameTicks = thisFrameTicks

    scoreIncrementTimer = scoreIncrementTimer + ticksSinceLastFrame
    if scoreIncrementTimer > 5000:
        score = score + 100
        scoreIncrementTimer = 0
This could easily be improved (what if your frame rate is so low there's more than 5 seconds between frames?) but is the general idea. This is commonly called a "delta time" game timer implementation.
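For that low-frame-rate case, one small tweak (just a sketch) is to subtract full intervals instead of resetting the timer to zero, so several increments can be applied in a single frame:
scoreIncrementTimer = scoreIncrementTimer + ticksSinceLastFrame
while scoreIncrementTimer >= 5000:   # catch up if more than one 5-second interval elapsed
    score = score + 100
    scoreIncrementTimer = scoreIncrementTimer - 5000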
If I understand you correctly, you don't want the while True: score += 100 loop to block your entire program?
You can solve it by moving the score adding into a separate function
and using the interval schedule feature of APScheduler: http://packages.python.org/APScheduler/intervalschedule.html
from apscheduler.scheduler import Scheduler

# Start the scheduler
sched = Scheduler()
sched.start()

# Schedule incr_score to be called every 5 seconds
@sched.interval_schedule(seconds=5)
def incr_score():
    global score  # update the module-level score variable
    score += 100
This will result in APScheduler creating a thread for you, running the function every 5 seconds.
You might need to make some changes to the function to make it work, but it should get you started at least :).