I have simplified the code representing the problem as suggested and attempted to debug it.
Here is the simplest version of the code that still contains the bug:
import simpy

SIM_TIME = 5

def source(env):
    i = 0
    while True:
        i += 1
        env.process(train(env, 'Train %d' % i, 'a'))
        yield env.timeout(1)
        i += 1
        env.process(train(env, 'Train %d' % i, 'b'))
        yield env.timeout(1)

def train(env, name, route):
    resource_req = resource.request()
    p3_req = p3.request()
    if route == 'a':
        print 'Route a %s generated' % name
        yield resource_req
        print 'Route a %s seized resource at time %d' % (name, env.now)
        yield env.timeout(1)
        resource.release(resource_req)
        print 'Route a %s released resource at time %d' % (name, env.now)
    elif route == 'b':
        print 'Route b %s generated' % name
        yield p3_req
        print 'Route b %s seized resource at time %d' % (name, env.now)
        yield env.timeout(1)
        p3.release(p3_req)
        print 'Route b %s released resource at time %d' % (name, env.now)
    else:
        print 'Could not find a route branch'

env = simpy.Environment()
resource = simpy.Resource(env, 1)
p3 = simpy.Resource(env, 1)
env.process(source(env))
env.run(until=SIM_TIME)
The output from running this:
Route a Train 1 generated
Route a Train 1 seized resource at time 0
Route b Train 2 generated
Route a Train 1 released resource at time 1
Route a Train 3 generated
Route b Train 4 generated
Route a Train 5 generated
As you can see, the resource is apparently released on route a, but afterwards no resource can be seized on either route a or route b.
I'm very confused by this. Any help would be much appreciated!
I believe the problem comes about because the release refers to a specific request, but by the time of the release the request variable has already been overwritten by a new request. resource_req holds the request, but before it can be released it is replaced by a new resource_req, and releasing this new request does nothing useful, because it is not the request the resource actually granted.
I don't know what the solution is. I came across this post while trying to find one, because I'm having the same problem myself. One obvious possibility (which I haven't tried yet) is to keep a list of requests and track them, but that seems like a silly solution. There must be a way to simply free up the resource (which is the desired behaviour). I'll try to post back if I figure it out!
Here's a minimal working example:
import simpy

class Machine:
    def __init__(self, env):
        self.machine = simpy.Resource(env, capacity=1)
        self.load_proc = env.process(self.load(env))

    def load(self, env):
        "Load machine 1 when it's empty"
        while True:
            self.req = self.machine.request()
            print("Waiting for machine at %d" % env.now)
            yield self.req
            print("Load machine at %d" % env.now)
            self.process_proc = env.process(self.process(env))

    def process(self, env):
        "Machine does process and is then emptied"
        print("Machine starts process at %d" % env.now)
        yield env.timeout(10)
        print("Machine finished process at %d" % env.now)
        self.machine.release(self.req)
        print("Machine released at %d" % env.now)

env = simpy.Environment()
M1 = Machine(env)
env.run(until=100)
Here there is a machine that continually tries to load, waiting until it is empty. Once it's loaded, it runs a 10-second process and then releases the Resource to allow it to be loaded again. In 100 time-steps it should obviously be able to make 10 batches, but only the first one is done.
>>>
Waiting for machine at 0
Load machine at 0
Waiting for machine at 0
Machine starts process at 0
Machine finished process at 10
Machine released at 10
>>>
It seems that the release does not work, because it is referring to the second request. Diagnosing this makes it possible to find a workaround, but it would be nice to know the right way to do it!
One possible solution is to just release a current user rather than a specific request:
import simpy

class Machine:
    def __init__(self, env):
        self.machine = simpy.Resource(env, capacity=1)
        self.load_proc = env.process(self.load(env))

    def load(self, env):
        "Load machine 1 when it's empty"
        while True:
            print("Waiting for machine at %d" % env.now)
            yield self.machine.request()
            print("Load machine at %d" % env.now)
            self.process_proc = env.process(self.process(env))

    def process(self, env):
        "Machine does process and is then emptied"
        print("Machine starts process at %d" % env.now)
        yield env.timeout(10)
        print("Machine finished process at %d" % env.now)
        if len(self.machine.users) >= 1:
            self.machine.release(self.machine.users[0])
        print("Machine released at %d" % env.now)

env = simpy.Environment()
M1 = Machine(env)
env.run(until=100)
This behaves as expected, and has the advantage that you don't need a variable for the request. The bare release

self.machine.release(self.machine.users[0])

is probably sufficient on its own, unless you're at risk of releasing a resource that hasn't actually been requested.
Update, based on Stefan Scherfke's comment: pass the req explicitly to the new process:
import simpy

class Machine:
    def __init__(self, env):
        self.machine = simpy.Resource(env, capacity=1)
        self.load_proc = env.process(self.load(env))

    def load(self, env):
        "Load machine 1 when it's empty"
        while True:
            print("Waiting for machine at %d" % env.now)
            req = self.machine.request()
            yield req
            print("Load machine at %d" % env.now)
            self.process_proc = env.process(self.process(env, req))

    def process(self, env, req):
        "Machine does process and is then emptied"
        print("Machine starts process at %d" % env.now)
        yield env.timeout(10)
        print("Machine finished process at %d" % env.now)
        self.machine.release(req)
        print("Machine released at %d" % env.now)

env = simpy.Environment()
M1 = Machine(env)
env.run(until=100)
This does indeed work as expected.
Solved - there was a typo in the resource.
Originally defined as:
resource = simpy.Resource(env, 1)
Correct definition:
resource = simpy.Resource(env, capacity = 1)
Related
Currently picking up SimPy and trying to build a simulation which has a series of events to complete that need personnel, supplies, etc.
To emulate the personnel, I have a Container with the number of people, and to handle their availability (like a shift pattern for when the people are able to work) I have a PreemptiveResource which events can grab while the shift is available but which is reclaimed with a priority request at the end of a shift, leaving the events to wait.
All works well except for one circumstance: when the priority request occurs at the start of the simulation (e.g. the simulation starts at midnight and the first shift begins at 6am, so there's an immediate request to pull the shifts away from the events).
Here's the code:
import itertools
import random
import simpy
import simpy.util
import bisect

class PriorityFilterStore(simpy.FilterStore):
    def _do_put(self, event):
        if len(self.items) < self._capacity:
            bisect.insort(self.items, event.item)
            event.succeed()

    def _do_get(self, event):
        for i in range(len(self.items)):
            item = self.items[i]
            if event.filter(item):
                del self.items[i]
                event.succeed(item)
                break
        return True

RANDOM_SEED = 42
CONS_MAX = [100, 100, 100]
RESUPPLY_TIME = 24

def incident_1(name, env, schedule, manpool, manhours):
    print('%s is created at %.1f' % (name, env.now))
    done_in = manhours
    while done_in > 0:
        with schedule.request(priority=2) as req:
            yield req
            try:
                mans = manpool.get(1)
                yield mans
                start = env.now
                print('%s is being pre-planned starting at %.1f hours.' % (name, start))
                yield env.timeout(done_in)
                print('%s finished pre-planning at %.1f hours.' % (name, env.now))
                manpool.put(1)
                done_in = 0
            except simpy.Interrupt:
                done_in -= env.now - start
                print('%s interrupted at %.1f, with %.1f hours left' % (name, env.now, done_in))
                manpool.put(1)

def cons_control(env, cons_list):
    """Periodically check the level of the *fuel_pump* and call the tank
    truck if the level falls below a threshold."""
    while True:
        resups = [0, 0, 0]
        res_flag = False
        for i in range(len(cons_list)):
            if cons_list[i].capacity > cons_list[i].level:
                # We need to call the tank truck now!
                print('Calling resupply for cons %d at %d' % (i, env.now))
                # Wait for the tank truck to arrive and refuel the station
                res_flag = True
                resups[i] = cons_list[i].capacity - cons_list[i].level
        if res_flag:
            yield env.process(resup_truck(env, cons_list, resups))
        yield env.timeout(12)

def resup_truck(env, cons_list, resups):
    """Arrives at the gas station after a certain delay and refuels it."""
    yield env.timeout(RESUPPLY_TIME)
    print('Resupply arriving at time %d' % env.now)
    for i in range(len(cons_list)):
        if resups[i] > 0:
            print('Resupplying cons %d with %d items.' % (i, resups[i]))
            yield cons_list[i].put(resups[i])

def scheduler(env, shift_a, schedule_a):
    #env.timeout(1)
    env.process(shift_off(env, schedule_a, shift_a[0]))
    simpy.util.start_delayed(env, shift_off(env, schedule_a, shift_a[2] - shift_a[1]), shift_a[1])
    simpy.util.start_delayed(env, shift_off(env, schedule_a, 6), shift_a[3])
    yield env.timeout(0)

def shift_off(env, this_schedule, time):
    with this_schedule.request(priority=1) as req:
        yield req
        try:
            print('Shift is stopping at %.1f until %.1f' % (env.now, env.now + time))
            yield env.timeout(time)
            print('Shift is starting at %.1f' % env.now)
        except simpy.Interrupt:
            print('Shift is somehow pulled back in play...')

def monitoring(env, time, man_a, man_b, man_c):
    while True:
        print('At time = %d: ' % env.now)
        print('%d of %d are allocated.' % (man_a.count, man_a.capacity))
        print('%d of %d are allocated.' % (man_b.count, man_b.capacity))
        print('%d of %d are allocated.' % (man_c.count, man_c.capacity))
        yield env.timeout(0)

# Setup and start the simulation
print('Incident processing')
random.seed(RANDOM_SEED)

# Create environment and start processes
env = simpy.Environment()
schedule_a = simpy.PreemptiveResource(env, capacity=1)
man_a = simpy.Container(env, capacity=5, init=5)
shift_a = [6, 12, 13, 18]
man_b = simpy.Resource(env, capacity=1)
man_c = simpy.Resource(env, capacity=1)
cons_x = simpy.Container(env, CONS_MAX[0], init=CONS_MAX[0])
cons_y = simpy.Container(env, CONS_MAX[1], init=CONS_MAX[1])
cons_z = simpy.Container(env, CONS_MAX[2], init=CONS_MAX[2])
#env.process(monitoring(env,3,man_a,man_b,man_c))
env.process(scheduler(env, shift_a, schedule_a))
env.process(cons_control(env, [cons_x, cons_y, cons_z]))
env.process(incident_1('Incident 1', env, schedule_a, man_a, 24))

# Execute!
env.run(until=240)
If you comment out the line env.process(shift_off(env, schedule_a, shift_a[0])), then it works. Otherwise there is an "Exception Unhandled" error, which appears to occur within shift_off at the line yield env.timeout(time).
Can anyone explain why this interrupt isn't being handled and how to correct this?
TIA
Edit 1:
I've got this working, but don't understand why.
Trying to follow some working examples, I managed to get the behaviour to work if I created a Scheduler object and had it initialise with the processes to stop/start the shifts (e.g. request the resource):
class Scheduler(object):
    def __init__(self, env, this_shift, this_sched):
        self.env = env
        self.shift = this_shift
        self.sched = this_sched
        self.process = env.process(self.shift_off(env, self.sched, self.shift[0]))

    def shift_off(self, env, this_schedule, time):
        with this_schedule.request(priority=1) as req:
            yield req
            try:
                print('Shift is stopping at %.1f until %.1f' % (env.now, env.now + time))
                yield env.timeout(time)
                print('Shift is starting at %.1f' % env.now)
            except simpy.Interrupt:
                print('Shift is somehow pulled back in play...')
I'm assuming this has something to do with moving the process from env into this object, but I do not follow how or why this works.
Anyone explain the subtleties here?
Edit 2: This is still confusing me... Making further changes can still cause this error.
My current work-around is to create the activities to be worked on, then create the scheduler that blocks the shifts. The preemptive request goes in, kicks the activities out and it continues fine. If I init the scheduler before the activities then I get the interrupt error as before - but surely it's not interrupting anything if the resource is free and available at this point...
I'm trying to simulate a queue with a limited buffer where no packet is dropped but is instead kept waiting. Bear with me, since I'm just a student with basic coding skills.
Packets arrive exponentially distributed, and each has a packet size with mean 1250 bytes. I managed to get the code working for packet arrival + processing time, but I couldn't make the packets 'depart', and so far the queue has an unlimited buffer. Is there anything I could do to simulate the packet departure and the queue limit?
code:
import random
import simpy

RANDOM_SEED = 42
NEW_CUSTOMERS = 100  # Total number of customers
INTERVAL_CUSTOMERS = 1  # Generate new customers roughly every x seconds
SIZE = 1250

def source(env, number, interval, port):
    """Source generates packet randomly"""
    for i in range(number):
        size = int(random.expovariate(0.0008))
        packet = Packet(env, '%d' % i, size, port, time_in_port=1)
        env.process(packet)
        t = random.expovariate(1 / interval)
        yield env.timeout(t)

def Packet(env, id, size, port, time_in_port):
    arrive = env.now
    yield Queue.buffer.put(size)
    print('packet%s %s arriving at %lf' % (id, size, arrive))
    with port.request() as req:
        yield req
        tip = random.expovariate(1 / time_in_port)
        yield env.timeout(tip)
        amount = size
        yield Queue.buffer.get(amount)
        print('packet%s %s finished processing at %lf' % (id, size, env.now))

class queue:  # THIS PART WON'T WORK
    def __init__(self, env):
        self.port = simpy.Resource(env, capacity=1)
        self.buffer = simpy.Container(env, init=0, capacity=12500)
        self.mon_proc = env.process(self.monitor_tank(env))

    def monitor_tank(self, env):
        while True:
            if self.buffer.level > 12500:
                print('Full at %d' % env.now)

random.seed(RANDOM_SEED)
env = simpy.Environment()
Queue = queue(env)
port = simpy.Resource(env, capacity=1)
env.process(source(env, NEW_CUSTOMERS, INTERVAL_CUSTOMERS, port))
env.run()
The queue class didn't work (the program won't run at all). It runs only if I remove the queue class and just simulate packet arrival and processing time. I would appreciate any help to simulate the packet departure (using a sink) and the queue limit. Thanks.
Not familiar with the details, but your call to self.monitor_tank(env) in the queue constructor is going to go into a tight infinite loop: it isn't a generator, just an unending loop, so Python is going to get stuck at that point in the execution.
I think this piece of your code is an infinite loop and is blocking your program from running:

def monitor_tank(self, env):
    while True:
        if self.buffer.level > 12500:
            print('Full at %d' % env.now)

Try commenting this piece out, or add an env.timeout so it "sleeps" for a bit on every loop pass.
Hello, I think the code below will solve your problem or at least give you a direction. Since in your original code all the packets have the same size, I modelled the buffer in units of packets, but changing it to bytes is straightforward.
I used a buffer (container) and a server (resource).
;)
import simpy
import random

def arrival(env, buffer):
    # Arrival of the package
    while True:
        print('Package ARRIVED at %.1f \n\t Buffer: %i'
              % (env.now, buffer.level))
        yield buffer.put(1)  # Put the package in the buffer
        yield env.timeout(random.expovariate(1.0))  # time between arrivals
        env.process(processDeparture(env, buffer, server))

def processDeparture(env, buffer, server):
    # Processing and departure of the package
    while True:
        # Request a server to process the package
        request = server.request()
        yield request
        yield buffer.get(1)  # GET a package from the buffer
        # Processing time of the package
        processingTime = 2
        print('Package begin processing at %.1f'
              % (env.now))
        yield env.timeout(processingTime)
        print('Package end processing at %.1f'
              % (env.now))
        # Release the server
        yield server.release(request)

random.seed(150)
env = simpy.Environment()
buffer = simpy.Container(env, capacity=3, init=0)  # Create the buffer
server = simpy.Resource(env, capacity=1)  # Create the servers (resources)
env.process(arrival(env, buffer))
env.run(until=30)  # Execute the model
I'm trying to simulate a token ring, but am having some trouble so far. I modified one of the SimPy examples to get this:
import random
import simpy
import math

SIM_TIME = 1000  # Simulation time in minutes
TOTAL_BYTES = 0

class Ring(object):
    """
    Hosts have to request the token. When they get the token, they
    can be served.
    """
    def __init__(self, env):
        self.env = env
        self.token = simpy.Resource(env)

    def serve(self, host):
        global TOTAL_BYTES
        bytes = random.randint(64, 1518)
        TOTAL_BYTES += bytes
        yield self.env.timeout(bytes / 10e8 * 60)
        print("Ring served %s." % (host))

def host(env, name, ring):
    """The host process (each host has a ``name``) arrives at the ring
    (``ring``) and waits for a token.
    """
    print('%s enters the ring at %.2f.' % (name, env.now))
    with ring.token.request() as request:
        yield request
        print('%s is handed a token at %.2f.' % (name, env.now))
        yield env.process(ring.serve(name))

def setup(env, numberOfHosts, lmda):
    """Create a ring, a number of initial hosts."""
    # Create the ring
    ring = Ring(env)
    # Create n hosts
    for i in range(numberOfHosts):
        env.process(host(env, 'Host %d' % i, ring))
    # Create more packets for the hosts
    while True:
        yield env.timeout(nedTime(lmda))
        print("A new process has arrived at %s" % (i))

# Return a random number (negativeExponentiallyDistributedTime)
def nedTime(rate):
    u = random.random()
    return (-1 / rate) * math.log(1 - u)

# Create an environment and start the setup process
env = simpy.Environment()
env.process(setup(env, numberOfHosts=10, lmda=0.01))

# Execute!
env.run(until=SIM_TIME)
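As an aside, the nedTime helper above is the standard inverse-transform sampler for an exponential distribution, which the standard library's random.expovariate already implements. A quick self-contained check (my own, not from the original post):

```python
import math
import random

def nedTime(rate):
    # Inverse-transform sampling: -ln(1 - U) / rate, with U ~ Uniform(0, 1)
    u = random.random()
    return (-1 / rate) * math.log(1 - u)

random.seed(1)
a = [nedTime(0.01) for _ in range(5)]
random.seed(1)
b = [random.expovariate(0.01) for _ in range(5)]
# Both draw from the same formula, so seeded runs agree to floating-point precision.
print(all(abs(x - y) < 1e-6 for x, y in zip(a, b)))  # True
```

So the helper is harmless but redundant; random.expovariate(lmda) can be used directly.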
My question is: how can I arrange for packets to keep arriving at the hosts at all times while the simulation is running?
I've tried the while True in setup, but I don't think that's the correct place to put it.
Also, how can the host function hold a buffer of packets? I've tried converting the host() function into a Host class, but I'm now having compile issues.
Thank you.
pub.py
import redis
import datetime
import time
import json
import sys
import threading
import gevent
from gevent import monkey
monkey.patch_all()

def main(chan):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    while True:
        def getpkg():
            package = {'time': time.time(),
                       'signature': 'content'
                       }
            return package

        # test 2: complex data
        now = json.dumps(getpkg())
        # send it
        r.publish(chan, now)
        print 'Sending {0}'.format(now)
        print 'data type is %s' % type(now)
        time.sleep(1)

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=main, args=(x,))
        t.setDaemon(True)
        t.start()

if __name__ == '__main__':
    num_of_chan = 10
    zerg_rush(num_of_chan)
    cnt = 0
    stop_cnt = 21
    while True:
        print 'Waiting'
        cnt += 1
        if cnt == stop_cnt:
            sys.exit(0)
        time.sleep(30)
sub.py
import redis
import threading
import time
import json
import gevent
from gevent import monkey
monkey.patch_all()

def callback(ind):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    sub = r.pubsub()
    sub.subscribe(str(ind))
    start = False
    avg = 0
    tot = 0
    sum = 0
    while True:
        for m in sub.listen():
            if not start:
                start = True
                continue
            got_time = time.time()
            decoded = json.loads(m['data'])
            sent_time = float(decoded['time'])
            dur = got_time - sent_time
            tot += 1
            sum += dur
            avg = sum / tot
            print decoded  # 'Recieved: {0}'.format(m['data'])
            file_name = 'logs/sub_%s' % ind
            f = open(file_name, 'a')
            f.write('processing no. %s' % tot)
            f.write('it took %s' % dur)
            f.write('current avg: %s\n' % avg)
            f.close()

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=callback, args=(x,))
        t.setDaemon(True)
        t.start()

def main():
    num_of_chan = 10
    zerg_rush(num_of_chan)
    while True:
        print 'Waiting'
        time.sleep(30)

if __name__ == '__main__':
    main()
I am testing Redis pub/sub to replace the use of rsh to communicate with remote boxes.
One of the things I tested was how the number of channels affects the latency of publish and pubsub.listen().
Test: one publisher and one subscriber per channel (the publisher publishes every second). I incremented the number of channels and observed the latency (the duration from the moment the publisher publishes a message to the moment the subscriber receives it via listen):
num of chan    avg latency in seconds
10             0.004453
50             0.005246
100            0.0155
200            0.0221
300            0.0621
Note: tested on a 2 CPU + 4GB RAM + 1 NIC RHEL 6.4 VM.
What can I do to maintain low latency with a high number of channels?
Redis is single-threaded, so adding more CPUs won't help. Maybe more RAM? If so, how much more?
Is there anything I can do code-wise, or is the bottleneck in Redis itself?
Maybe the limitation comes from the way my test code is written with threading?
EDIT:
Redis Cluster vs ZeroMQ in Pub/Sub, for horizontally scaled distributed systems
Accepted answer says "You want to minimize latency, I guess. The number of channels is irrelevant. The key factors are the number of publishers and number of subscribers, message size, number of messages per second per publisher, number of messages received by each subscriber, roughly. ZeroMQ can do several million small messages per second from one node to another; your bottleneck will be the network long before it's the software. Most high-volume pubsub architectures therefore use something like PGM multicast, which ZeroMQ supports."
From my testing, I don't know if this is true (the claim that the number of channels is irrelevant).
For example, I did a test:
1) One channel. 100 publishers publishing to the channel with 1 subscriber listening, each publisher publishing once per second. Latency was 0.00965 seconds.
2) The same test, except with 1000 publishers. Latency was 0.00808 seconds.
Now compare my channel testing:
300 channels with 1 pub and 1 sub each resulted in 0.0621 seconds, and that is only 600 connections, fewer than in the tests above, yet significantly slower in latency.
In a simulation I would like to be able to request a resource in one function before calling another function in which the resource is released. When I use the method shown below it doesn't work and I get an error.
Thanks in advance.
(in function 1):

    req = resource.request()
    yield req
    yield env.process(function2())

(in function 2):

    resource.release(req)
Is this possible? Let me know if any other information is required.
From your example it's not clear why it would not work. The simulation below works as expected:
import simpy

def func1(name, env, res):
    req = res.request()
    yield req
    print(name, 'got resource at', env.now)
    yield env.process(func2(name, env, res, req))
    print(name, 'done')

def func2(name, env, res, req):
    yield env.timeout(1)
    yield res.release(req)
    print(name, 'released at', env.now)

env = simpy.Environment()
res = simpy.Resource(env, capacity=1)
env.process(func1('A', env, res))
env.process(func1('B', env, res))
env.run()
The output:
A got resource at 0
A released at 1
A done
B got resource at 1
B released at 2
B done