I'm trying to create a simulation with two printers, and I want to find the average wait time for each. I'm using a class for the printer and one for the task. Basically, I append the wait time from each simulation to a list and calculate the average. My issue is that I'm getting a division-by-zero error, so nothing is being appended. When I try it with one printer (which is essentially the same thing) I have no issues. Here is the code I have for the second printer. I'm using a queue for this.
if printers == 2:
    for currentSecond in range(numSeconds):
        if newPrintTask():
            task = Task(currentSecond, minSize, maxSize)
            printQueue.enqueue(task)
        if (not labPrinter1.busy()) and (not labPrinter2.busy()) and \
                (not printQueue.is_empty()):
            nexttask = printQueue.dequeue()
            waitingtimes.append(nexttask.waitTime(currentSecond))
            labPrinter1.startNext(nexttask)
        elif (not labPrinter1.busy()) and (labPrinter2.busy()) and \
                (not printQueue.is_empty()):
            nexttask = printQueue.dequeue()
            waitingtimes.append(nexttask.waitTime(currentSecond))
            labPrinter1.startNext(nexttask)
        elif (not labPrinter2.busy()) and (labPrinter1.busy()) and \
                (not printQueue.is_empty()):
            nexttask = printQueue.dequeue()
            waitingtimes.append(nexttask.waitTime(currentSecond))
            labPrinter2.startNext(nexttask)
        labPrinter1.tick()
        labPrinter2.tick()
    averageWait = sum(waitingtimes) / len(waitingtimes)
    outfile.write("Average Wait %6.2f secs %3d tasks remaining." \
                  % (averageWait, printQueue.size()))
Any assistance would be great!
Edit: I should mention that this happens no matter the values. I could have a page range of 99-100 and a PPM of 1, yet I still get a division by 0.
I think your problem stems from waitingtimes being empty on the first iteration or so. If there is no print job in the queue and no waiting time has ever been appended, you will reach the bottom of the loop with waitingtimes == [] (empty), and then do:
sum(waitingtimes) / len(waitingtimes)
Which will be
sum([]) / len([])
Which is
0 / 0
The easiest way to deal with this would just be to check for it, or catch it:
if not waitingtimes:
    averageWait = 0
else:
    averageWait = sum(waitingtimes) / len(waitingtimes)
Or:
try:
    averageWait = sum(waitingtimes) / len(waitingtimes)
except ZeroDivisionError:
    averageWait = 0
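Either variant avoids the crash; a one-line conditional expression expresses the same guard even more compactly. A minimal sketch, using plain lists rather than the simulation classes:

```python
# Guard the average against an empty list with a conditional expression.
waitingtimes = []  # e.g. no task was ever dequeued

averageWait = sum(waitingtimes) / len(waitingtimes) if waitingtimes else 0
print(averageWait)  # 0

waitingtimes = [2, 4, 6]
averageWait = sum(waitingtimes) / len(waitingtimes) if waitingtimes else 0
print(averageWait)  # 4.0
```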
Related
I am trying to make an alarm-like app where I want a list of things to happen at set times. But I am facing a bug: the loop keeps waiting for the previous time instead of moving on to the next time in the list.
t1 = dt.time(hour=17, minute=8)
t2 = dt.time(hour=18, minute=48)
timetable = [t1, t2]

for elt in timetable:
    i_time = elt
    #i_minute = i.minute
    while True:
        if i_time == dt.datetime.now().time():
            #if i_hour == dt.datetime.now().hour and i_minute == dt.datetime.now().minute:
            #current_time = tk.Label(text = dt.datetime.now())
            #current_time.pack()
            #playsound('media/ClassAlarm.mp3')
            print("Its time")
            break
The function works fine for t1, but if t1 has passed and the current time is higher than t1, it should move on to t2 and ring the alarm. Instead it keeps waiting for t1, which won't occur again until the next day; it never reads t2 unless t1 is processed.
Ex. Current time is 1:30 while t1 is 1:25 and t2 is 1:35. It doesn't ring at t2 but keeps waiting for t1, which has already happened.
I have tried to execute the for loop in a different way:
for elt in timetable:
    time = dt.datetime.now().time()
    if time - elt < 0:
        break
    while True:
        if time == elt:
            print("you did it")
I have also tried the any() method, which isn't exactly helping either:
current_hour = dt.datetime.now().hour
current_min = dt.datetime.now().minute
alarm = any(i.hour == current_hour and i.minute == current_min for i in timetable)
print(alarm)
I tried posting this question previously but wasn't able to explain it properly. Hope this helps.
Using the == operator to compare times is risky. Logically it should work, but it's better to use the <= operator, which checks whether the current time has passed the one recorded in the list. That is a lot safer than equality, which is true only at a single instant and has no guarantee of being observed for even a split second.
Note: I believe those calls produce times of different precision. Although they both represent a time, the equality operator won't match them (even for the same date and time, your values will differ, e.g. in the microsecond component, although they represent the same moment). To confirm this behavior, print t1 and datetime.now() and see whether they are the same.
Regarding your second question, you can use if/else statements to find the latest time that has just been crossed, or you can run the loop in reverse and check each timer (assuming later times are at the end of the list).
Sample Code:
for elt in reversed(timetable):  # list.reverse() returns None, so use reversed()
    i_time = elt
    while True:
        if i_time <= dt.datetime.now().time():
            print("Its time")
            break
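To see why equality is fragile here, note that datetime.now().time() carries seconds and microseconds, so it equals a time(hour, minute) value for at most one microsecond per day, while a threshold comparison keeps firing once the target has passed. A small self-contained sketch (the sampled value is made up for illustration):

```python
import datetime as dt

target = dt.time(hour=17, minute=8)   # 17:08:00.000000 exactly
sampled = dt.time(17, 8, 0, 123456)   # what now().time() might actually return

print(sampled == target)  # False - the microseconds differ
print(sampled >= target)  # True - the threshold check still fires
```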
I am a newbie in Python, and I am wondering how to exit an unbounded loop after n failures. Using a counter seems unpythonic to me.
This is my code:
while True:
    #get trigger state
    trigState = scope.ask("TRIG:STATE?")
    #check if Acq complete
    if trigState.endswith('SAVE'):
        print 'Acquisition complete. Writing into file ...\n'
        #save screen
        #rewrite in file
    #check if trigger is still waiting
    elif trigState.endswith('READY'):
        print 'Please wait ...\n'
    #if trigger neither in SAVE nor in READY mode
    else:
        #THIS IS FAILURE!!!
        print 'Failure: k tries remaining'
    #wait for several seconds before next iteration
    time.sleep(2)
Note that loop must be unbounded - it may iterate arbitrarily many times until number of tries exceeded.
Is there an elegant (or at least pythonic) method to meet the requirements?
To be "pythonic" you'll want to write code that conforms to the Zen of Python, and the relevant rules here are:
Explicit is better than implicit.
Readability counts.
You should be explicit about the number of failures you allow, or the number of failures remaining. In the former case:
for number_of_failures in range(MAX_FAILURES):
    ...
expresses that intent, but is clunky because each time through the loop you are not necessarily failing, so you'd have to be contorted in counting successes, which would hurt readability. You can go with:
number_of_failures = 0
while number_of_failures < MAX_FAILURES:
    ...
    if this_is_a_failure_case:
        number_of_failures += 1
    ...
This is perfectly fine, as it says "if I haven't failed the maximum number of times, keep going." This is a little better than:
number_of_failures = 0
while True:
    ...
    if this_is_a_failure_case:
        number_of_failures += 1
    ...
    if number_of_failures == MAX_FAILURES:  # you could use >= but this isn't necessary
        break
which hides the exit case. There are times when it is perfectly fine to exit the loop in the middle, but in your case, aborting after N failures is so crucial to what you want to do, that the condition should be in the while-loop condition.
You can also rephrase in terms of number of failures remaining:
failures_remaining = MAX_FAILURES
while failures_remaining > 0:
    ...
    if this_is_a_failure_case:
        failures_remaining -= 1
    ...
If you like functional programming, you can get rid of the loop and use tail recursion:
def do_the_thing():
    def try_it(failures_remaining=MAX_FAILURES):
        ...
        failed = ....
        ...
        try_it(failures_remaining - 1 if failed else failures_remaining)
    try_it()
but IMHO that isn't really an improvement. In Python, we like to be direct and meaningful. Counters may sound clunky, but if you phrase things well (after all you are counting failures, right?) you are safely in the realm of Pythonic code.
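To make the counting-loop pattern concrete, here is a runnable sketch; the instrument call is replaced by a canned list of trigger states, and the names (acquire, states) are purely illustrative:

```python
MAX_FAILURES = 3

def acquire(states):
    """Consume trigger states until 'SAVE' arrives or MAX_FAILURES errors occur.
    `states` is a hypothetical stand-in for repeated scope.ask() replies."""
    it = iter(states)
    number_of_failures = 0
    while number_of_failures < MAX_FAILURES:
        state = next(it)
        if state.endswith('SAVE'):
            return 'complete'
        elif state.endswith('READY'):
            continue  # still waiting; not a failure
        else:
            number_of_failures += 1
    return 'gave up'

print(acquire(['READY', 'READY', 'SAVE']))      # complete
print(acquire(['ERR', 'READY', 'ERR', 'ERR']))  # gave up
```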
Using an if statement to number your attempts seems simple enough for your purposes. It's simple, and (to my eyes anyway) easily readable, which suits your Pythonic criterion.
Within your while loop you can do:
failure_no = 0
max_fails = n  # you can substitute an integer for this variable
while True:
    # ...other code (your if/elif checks)
    else:
        failure_no += 1
        if failure_no == max_fails:
            break
        print 'Failure: ' + str(max_fails - failure_no) + ' tries remaining'
This only increments the count when the else/failure condition is met. If you wanted to count total run-throughs, not just failures, you could put the failure_no += 1 right after your while rather than within the else block.
NB You could easily put a notification inside your escape:
eg
if failure_no == max_fails:
    print 'Maximum number of failures reached'
    break
E.g. you can use an if statement and raise an error after n tries.
You only need a limit and a counter.
limit = 5
counter = 0
while True:
    #get trigger state
    trigState = scope.ask("TRIG:STATE?")
    #check if Acq complete
    if trigState.endswith('SAVE'):
        print 'Acquisition complete. Writing into file ...\n'
        #save screen
        #rewrite in file
    #check if trigger is still waiting
    elif trigState.endswith('READY'):
        print 'Please wait ...\n'
    #if trigger neither in SAVE nor in READY mode
    else:
        #THIS IS FAILURE!!!
        counter += 1
        print 'Failure: ', limit - counter, ' tries remaining'
        if counter >= limit:
            raise RuntimeError('too many attempts')
    #wait for several seconds before next iteration
    time.sleep(2)
I am running an algorithm which reads an excel document by rows, and pushes the rows to a SQL Server, using Python. I would like to print some sort of progression through the loop. I can think of two very simple options and I would like to know which is more lightweight and why.
Option A:
for x in xrange(1, sheet.nrows):
    print x
    cur.execute()  # pushes to sql
Option B:
for x in xrange(1, sheet.nrows):
    if x % some_check_progress_value == 0:
        print x
    cur.execute()  # pushes to sql
I have a feeling that the second one would be more efficient but only for larger scale programs. Is there any way to calculate/determine this?
I'm a newbie, so I can't comment. An "answer" might be overkill, but it's all I can do for now.
My favorite thing for this is tqdm. It's minimally invasive, both code-wise and output-wise, and it gets the job done.
I am one of the developers of tqdm, a Python progress bar that tries to be as efficient as possible while providing as many automated features as possible.
The biggest performance sink we had was indeed I/O: printing to the console/file/whatever.
But if your loop is tight (more than 100 iterations per second), printing every update is useless; you might as well print only 1/10 of the updates, and the user would see no difference, while your bar would have 10 times less overhead (faster).
To fix that, at first we added a mininterval parameter which updates the display only every x seconds (0.1 seconds by default; the human eye cannot really perceive anything faster than that). Something like this:
import time

def my_bar(iterator, mininterval=0.1):
    counter = 0
    last_print_t = 0
    for item in iterator:
        if (time.time() - last_print_t) >= mininterval:
            last_print_t = time.time()
            print_your_bar_update(counter)
        counter += 1
This will mostly fix your issue as your bar will always have a constant display overhead which will be more and more negligible as you have bigger iterators.
If you want to go further in the optimization, time.time() is also an I/O operation and thus has a cost greater than simple Python statements. To avoid that, you want to minimize the calls you do to time.time() by introducing another variable: miniters, which is the minimum number of iterations you want to skip before even checking the time:
import time

def my_bar(iterator, mininterval=0.1, miniters=10):
    counter = 0
    last_print_t = 0
    last_print_counter = 0
    for item in iterator:
        if (counter - last_print_counter) >= miniters:
            if (time.time() - last_print_t) >= mininterval:
                last_print_t = time.time()
                last_print_counter = counter
                print_your_bar_update(counter)
        counter += 1
You can see that miniters is similar to your Option B modulus solution, but it's better fitted as an added layer over time because time is more easily configured.
With these two parameters, you can manually finetune your progress bar to make it the most efficient possible for your loop.
However, miniters (or a modulus) is tricky to get working for everyone without manual finetuning; you need good assumptions and clever tricks to automate it. This is one of the major pieces of ongoing work on tqdm. Basically, we try to calculate miniters so that it matches mininterval's worth of iterations, so that the time check isn't even needed anymore. This automagic setting kicks in after mininterval first triggers, something like this:
from __future__ import division
import time

def my_bar(iterator, mininterval=0.1, miniters=10, dynamic_miniters=True):
    counter = 0
    last_print_t = 0
    last_print_counter = 0
    for item in iterator:
        if (counter - last_print_counter) >= miniters:
            cur_time = time.time()
            if (cur_time - last_print_t) >= mininterval:
                if dynamic_miniters:
                    # Simple rule of three
                    delta_it = counter - last_print_counter
                    delta_t = cur_time - last_print_t
                    miniters = delta_it * mininterval / delta_t
                last_print_t = cur_time
                last_print_counter = counter
                print_your_bar_update(counter)
        counter += 1
There are various ways to compute miniters automatically, but usually you want to update it to match mininterval.
If you are interested in digging further, you can check the dynamic_miniters internal parameter, maxinterval, and the experimental monitoring thread of the tqdm project.
Using the modulus check (counter % N == 0) is almost free compared to print, and a great solution if you run a high-frequency iteration (log a lot). Especially if you don't need to print on each iteration but want some feedback along the way.
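If you want to measure the difference rather than guess, you can point both variants at an in-memory stream and compare how much output each one generates; a rough sketch using only the standard library (N and STEP are arbitrary illustrative values):

```python
import io

N = 100_000
STEP = 1_000

buf_all = io.StringIO()
for x in range(N):
    print(x, file=buf_all)       # one write per iteration

buf_mod = io.StringIO()
for x in range(N):
    if x % STEP == 0:            # cheap integer check
        print(x, file=buf_mod)   # only N // STEP writes reach the stream

print(len(buf_all.getvalue().splitlines()))  # 100000
print(len(buf_mod.getvalue().splitlines()))  # 100
```

The modulus version does 1000x fewer I/O operations here, which is where nearly all the cost of progress printing lives.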
I am using a raspberry pi, a pi face and a python script to monitor several home sensors. I want to add the sensing wire from the smoke detectors to that list but I need a bit of help with the if statement.
I'm unsure how to tell the if statement to check how long the input has detected the signal: under 4 seconds, disregard (low-battery chirp); over 4 seconds (smoke detected), alert me.
Basically I need help writing the if statement below.
if piface.digital_read(0)==0 >= 4 seconds:
    # do x
else:
    # do y
Do I need a loop and can it be as easy as what I have above? (Coded correctly of course!)
Something like this (untested pseudo-code):
counter = 0
while True:  # your main loop
    smoke = digital_read()  # assume 0 = no alarm, 1 = alarm
    if smoke:
        counter += 1
    else:
        counter = 0
    if counter >= 4:  # there was smoke for the last 4 seconds
        call_the_fire_brigade()
    time.sleep(1)  # wait one second
I guess you probably need some loop.
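A runnable version of that pseudo-code, with the sensor read stubbed out so the timing logic can be exercised on its own (read_sensor is a hypothetical stand-in for piface.digital_read(0)):

```python
import time

def wait_for_sustained_signal(read_sensor, threshold=4, poll=1.0):
    """Return True once the sensor has read high for `threshold` consecutive polls.
    `read_sensor` is any callable returning truthy when the alarm wire is active."""
    consecutive = 0
    while True:
        if read_sensor():
            consecutive += 1
            if consecutive >= threshold:
                return True   # sustained signal: real smoke, not a chirp
        else:
            consecutive = 0   # signal dropped: treat as a low-battery chirp
        time.sleep(poll)

# Simulated readings: a one-poll chirp, then four sustained polls.
readings = iter([1, 0, 1, 1, 1, 1])
print(wait_for_sustained_signal(lambda: next(readings), poll=0))  # True
```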
Well, I think a good solution to this would be to spawn a separate thread for each detector and then use a blocking loop with a counter, like:
count = 0
while count < 4:
    if piface.digital_read(0) == 0:
        count += 1
    else:
        count = 0
    sleep(1)  # sleep takes seconds, so poll once per second
# ... rest of code ...
I wanted to do a simple write operation to a Cassandra instance (v1.1.10) on a single node. I just wanted to see how it handles constant writes and if it can keep up with the write speed.
pool = ConnectionPool('testdb')
test_cf = ColumnFamily(pool, 'test')
test2_cf = ColumnFamily(pool, 'test2')
test3_cf = ColumnFamily(pool, 'test3')

test_batch = test_cf.batch(queue_size=1000)
test2_batch = test2_cf.batch(queue_size=1000)
test3_batch = test3_cf.batch(queue_size=1000)

chars = string.ascii_uppercase
counter = 0
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    test_batch.insert(uid, {'junk': junk})
    test2_batch.insert(uid, {'junk': junk})
    test3_batch.insert(uid, {'junk': junk})
    sys.stdout.write(str(counter) + '\n')

pool.dispose()
The code keeps crashing after a long run (when the counter is around 10M+) with the following message:
pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was timeout: timed out
I set queue_size=100, which didn't help. I also fired up the cqlsh -3 console to truncate the table after the script crashed and got the following error:
Unable to complete request: one or more nodes were unavailable.
Tailing /var/log/cassandra/system.log shows no sign of errors, only INFO lines on Compaction, FlushWriter and so on. What am I doing wrong?
I've had this problem too - as @tyler-hobbs suggested in his comment, the node is likely overloaded (it was for me). A simple fix I've used is to back off and let the node catch up. I've rewritten your loop below to catch the error, sleep a while, and try again. I've run this against a single-node cluster and it works a treat - pausing (for a minute) and backing off periodically (no more than 5 times in a row). No data is missed using this script unless the error throws five times in a row (in which case you probably want to fail hard rather than return to the loop).
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    tryCount = 5  # 5 is probably unnecessarily high
    while tryCount > 0:
        try:
            test_batch.insert(uid, {'junk': junk})
            test2_batch.insert(uid, {'junk': junk})
            test3_batch.insert(uid, {'junk': junk})
            tryCount = -1  # success: exit the retry loop
        except pycassa.pool.AllServersUnavailable as e:
            print "Trying to insert [" + str(uid) + "] but got error " + str(e) + \
                  " (attempt " + str(tryCount) + "). Backing off for a minute to let Cassandra settle down"
            time.sleep(60)  # A delay of 60s is probably unnecessarily high
            tryCount = tryCount - 1
    sys.stdout.write(str(counter) + '\n')
I've added a complete gist here
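The retry-and-back-off idea generalizes beyond pycassa; here is a minimal stand-alone sketch of the same pattern, with the flaky call and its error simulated (ConnectionError stands in for AllServersUnavailable):

```python
import time

def retry(fn, attempts=5, delay=0.0, exceptions=(Exception,)):
    """Call fn(), retrying on the given exceptions; re-raise after `attempts` tries."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise          # fail hard once the retry budget is spent
            time.sleep(delay)  # back off before the next try

calls = {'n': 0}
def flaky_insert():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('server unavailable')  # simulated overload
    return 'ok'

print(retry(flaky_insert, attempts=5, delay=0))  # ok (succeeds on the third call)
```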