How to schedule tasks without exiting an existing loop? - python

I have struggled with this question for about a week -- time to ask someone who can bang out an answer in a couple minutes.
I am trying to run a python program once every 10 seconds. There are a lot of questions of this sort: Use sched module to run at a given time, Python threading.timer - repeat function every 'n' seconds, How to execute a function asynchronously every 60 seconds in Python?
Normally the solutions using sched or time.sleep would work, but I am trying to start a scheduled process from within cmd2, which is already running in a while False loop. (When you exit cmd2, it exits this loop).
Because of this, when I start a function to repeat every 10 seconds, I enter another loop nested within cmd2 and I am unable to enter cmd2 commands. I can only get back to cmd2 by exiting the sub-loop that is repeating the function, and thus the function stops repeating.
Evidently threading will solve this problem. I have tried threading.Timer without success. Perhaps the real problem is that I do not understand threads or multiprocessing.
Here is an example of code that is roughly isomorphic to the code I'm using, built on the sched module, which I got to work:
import cmd2
import repeated

class prompt(cmd2.Cmd):
    """this lets you enter commands"""

    def default(self, line):
        return cmd2.Cmd.default(self, line)

    def do_exit(self, line):
        return True

    def do_repeated(self, line):
        repeated.func_1()
Where repeated.py looks like this:
import sched
import time

def func_2(sc):
    print 'doing stuff'
    sc.enter(10, 0, func_2, (sc,))

def func_1():
    s = sched.scheduler(time.time, time.sleep)
    s.enter(0, 0, func_2, (s,))
    s.run()

http://docs.python.org/2/library/queue.html?highlight=queue#Queue
Can you instantiate a Queue object outside of cmd2? There can be one thread that watches the queue and takes jobs from it at periodic intervals, while cmd2 is free to run or not run. The thread that processes the queue, and the queue object itself, need to be in the outer scope, of course.
To schedule something at a particular time, you can insert a tuple which has the target time in it. Or you can have the thread just check at regular intervals, if that's good enough.
[Edit: if you have a process that is intended to repeat, you can have it requeue itself at the end of its operation.]
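A minimal sketch of that idea (Python 3 names; in Python 2 the module is Queue), where the watcher thread and the queue live outside cmd2 and the jobs are just callables:

import queue
import threading

job_queue = queue.Queue()   # lives in the outer scope, shared with the cmd2 prompt

def watcher():
    """Take jobs from the queue at periodic intervals."""
    while True:
        try:
            job = job_queue.get(timeout=10)   # check roughly every 10 seconds
        except queue.Empty:
            continue
        job()
        job_queue.task_done()

t = threading.Thread(target=watcher)
t.daemon = True   # don't keep the interpreter alive after cmd2 exits
t.start()

# cmd2 commands (or anything else) can enqueue work whenever they like:
job_queue.put(lambda: print('doing stuff'))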

As soon as I asked the question I was able to figure it out. Don't know why that happens sometimes.
This code
import threading

def f():
    # do something here ...
    # call f() again in 60 seconds
    threading.Timer(60, f).start()

# start calling f now and every 60 sec thereafter
f()
From here: How to execute a function asynchronously every 60 seconds in Python?
Actually works for what I was trying to do. There are evidently some subtleties in how the function is passed as an argument to threading.Timer. Before, when I was including the arguments and even the parentheses after the function, I was getting recursion depth errors; i.e. the function was constantly calling itself without any delay.
So anyone else who has a problem like this, pay attention to how you call the function in threading.Timer(60, f).start(). If you write threading.Timer(60, f()).start() or something similar it will probably not work.
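If the function does take arguments, they go in separately via the args/kwargs parameters rather than by calling the function; a small sketch (the function name and interval here are made up):

import threading

def poll(interval):
    print('doing stuff')
    # pass the callable and its arguments separately; writing poll(interval)
    # here would call it immediately and recurse with no delay at all
    threading.Timer(interval, poll, args=(interval,)).start()

poll(10)  # runs now, then every 10 seconds on a new Timer thread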

Related

Waiting for one function to finish before starting another (Python concurrent)

I don't know if this is possible, but I have a tool that makes use of the Python package concurrent.futures; the code looks something like this:
import concurrent.futures
import time

def func1():
    pass  # do something

def func2():
    pass  # do something separate

def func3():
    time.sleep(10)
    # do something different

def func4():
    time.sleep(10)
    # do something different

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.submit(func1)
    executor.submit(func2)
    executor.submit(func3)
    executor.submit(func4)
I want to wait for func2 to finish before starting func3 and func4 (whilst func1 is always running). I've found it takes just under 10 seconds to complete func2, so I sleep func3 and func4 for 10 seconds (obviously this is not practical, as on any other computer it could take more or less time).
Is there any way to do so? I've seen something about the wait function in the concurrent.futures package, but I don't know if it applies in this case.
I don't care about the outputs of the functions.
Any help would be really appreciated.
This may defeat the purpose of concurrency (i.e. if you want func3 to immediately follow func2 you may just want to put those into a single thread rather than having one thread wait on the other), but:
with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.submit(func1)
    executor.submit(func2).result()
    executor.submit(func3)
forces you to wait for func2's result before you submit func3.
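Alternatively, the wait function mentioned in the question does exist in concurrent.futures and can express the same ordering; a rough sketch using the functions from the question:

import concurrent.futures

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.submit(func1)                 # keeps running on its own thread
    f2 = executor.submit(func2)
    concurrent.futures.wait([f2])          # block here until func2 is done
    executor.submit(func3)
    executor.submit(func4)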

How to run a python script multiple times simultaneously using python and terminate all when one has finished

Maybe it's a very simple question, but I'm new to concurrency. I want to write a Python script that runs foo.py 10 times simultaneously, with a time limit of 60 seconds before automatically aborting. The script is a non-deterministic algorithm, hence all executions take different times and one will finish before the others. Once the first ends, I would like to save the execution time and the output of the algorithm, and after that kill the rest of the processes.
I have seen this question, run multiple instances of python script simultaneously, and it looks very similar, but how can I add a time limit and the ability to kill the rest of the processes when the first one finishes?
Thank you in advance.
I'd suggest using the threading lib, because with it you can set threads to daemon threads so that if the main thread exits for whatever reason the other threads are killed. Here's a small example:
#Import the libs...
import threading, time

#Global variables... (List of results.)
results = []

#The function you want to run several times simultaneously...
def run():
    #We declare results as a global variable.
    global results
    #Do stuff...
    results.append("Hello World! These are my results!")

n = int(input("Welcome user, how many times should I execute run()? "))

#We run the thread n times.
for _ in range(n):
    #Define the thread.
    t = threading.Thread(target=run)
    #Set the thread to daemon: if the main process exits, the threads will be killed.
    t.setDaemon(True)
    #Start the thread.
    t.start()

#Once the threads have started we can execute the main code.
#We set a timer...
startTime = time.time()
while True:
    #If the timer reaches 60 s we exit from the program.
    if time.time() - startTime >= 60:
        print("[ERROR] The script took too long to run!")
        exit()
    #Do stuff on your main thread; if the stuff is complete you can break from the while loop as well.
    results.append("Main result.")
    break

#When we break from the while loop we print the output.
print("Here are the results: ")
for i in results:
    print(f"-{i}")
This example should solve your problem, but if you wanted to use blocking commands on the main thread the timer would fail, so you'd need to tweak this code a bit. If you wanted to do that, move the code from the main thread's loop into a new function (for example def main():) and start the rest of the threads from a primary thread inside main. This example may help you:
def run():
    pass

#Secondary "main" thread.
def main():
    #Start the rest of the threads (in this case I just start 1).
    localT = threading.Thread(target=run)
    localT.setDaemon(True)
    localT.start()
    #Do stuff.
    pass

#Actual main thread...
t = threading.Thread(target=main)
t.setDaemon(True)
t.start()
#Set up a timer and fetch the results you need with a global list or any other method...
pass
Now, you should avoid global variables at all costs, as sometimes they can be a bit buggy, but the threading lib doesn't let you return values from threads directly, at least not by any method I know of. There are other multiprocessing libs out there that do let you return values, but I don't know enough about them to explain. Anyway, I hope this works for you.
-Update: OK, I was busy writing the code and didn't read the comments on the post, sorry. You can still use this method, but instead of writing code inside the threads, execute another script. You could either import it as a module or actually run it as a script; here's a question that may help you with that:
How to run one python file in another file?
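For the original goal (run foo.py 10 times, record the first finisher, kill the rest within 60 seconds), one way that fits the "run it as a script" suggestion is the standard subprocess module; a rough sketch, assuming foo.py prints its result to stdout:

import subprocess
import sys
import time

N = 10
TIMEOUT = 60  # seconds

start = time.time()
procs = [subprocess.Popen([sys.executable, 'foo.py'], stdout=subprocess.PIPE)
         for _ in range(N)]

winner = None
while winner is None and time.time() - start < TIMEOUT:
    for p in procs:
        if p.poll() is not None:          # this instance has finished
            winner = p
            break
    time.sleep(0.1)

if winner is not None:
    print('execution time:', time.time() - start)
    print('output:', winner.stdout.read().decode())

for p in procs:                            # kill everything still running
    if p.poll() is None:
        p.kill()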

Obtain reference to thread I forgot to keep a reference for

In Python, if I do something like the answer in this thread: Executing periodic actions in Python
eg:
>>> import time, threading
>>> def foo():
...     print(time.ctime())
...     threading.Timer(10, foo).start()
...
>>> foo()
I understand that every 10 seconds I'm starting a new Timer thread that will wait the time, then create a new timer, etc., and it will run indefinitely.
Obviously, there is nothing in the output of dir() as I didn't assign it a name.
Is there any way to get a reference to this Timer, for instance to .cancel() it? Or is the only way to stop it to have kept a reference to it from the beginning?
I know this is very poor practice, but it demonstrates my more general question.
Timers are a Thread subclass, and like all Threads, they show up in enumeration. You wouldn't necessarily know which one is the timer, but you can get a list of all running threads with threading.enumerate().
If you just want to cancel all outstanding Timers, you could do:
for thread in threading.enumerate():
    if isinstance(thread, threading._Timer):  # In Py2.7 at least, Timer is a function wrapping the class _Timer
        thread.cancel()
Obviously, this would be playing unsportingly if your code is just one library among many, since it would cancel Timers spawned elsewhere; not cricket that.
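If you are writing the code from scratch, the simpler route is to keep the reference yourself so you can .cancel() the pending Timer; a small sketch (holding the handle in a module-level variable is just one option):

import time
import threading

current_timer = None  # handle to the most recently started Timer

def foo():
    global current_timer
    print(time.ctime())
    current_timer = threading.Timer(10, foo)
    current_timer.start()

foo()
# ...later, stop the chain before the next run fires:
current_timer.cancel()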

5 minute loop in Python: does it cause an issue?

I am using this loop to run something every 5 minutes; it just creates a thread and then completes.
import datetime

now = datetime.datetime.now()
while True:
    now_plus_5 = now + datetime.timedelta(minutes=5)
    while datetime.datetime.now() <= now_plus_5:
        new = datetime.datetime.now()
        pass
    now = new
    pass
But when I check my process status, it shows 100% usage while the script runs. Is that a problem, or is there a better way?
Does it cause 100% CPU usage?
You might rather use something like time.sleep
import time

while True:
    # do something
    time.sleep(5*60)  # wait 5 minutes
Based on your comment above, you may find a Timer object from the threading module to better suit your needs:
from threading import Timer

def hello():
    print "hello, world"

t = Timer(300.0, hello)
t.start()  # after 5 minutes, "hello, world" will be printed
(code snippet modified from docs)
A Timer is a thread subclass, so you can further encapsulate your logic as needed.
This allows the threading subsystem to schedule the execution of your task such that it's not entirely CPU bound like your current implementation.
I should also note that the Timer class is designed to be fired only once. As such, you'd want to design your task to start a new instance upon completion, or create your own Thread subclass with its own smarts.
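A minimal sketch of the "start a new instance upon completion" idea, reusing the same hello task:

from threading import Timer

def hello():
    print("hello, world")
    Timer(300.0, hello).start()  # re-arm so it fires again in 5 minutes

Timer(300.0, hello).start()      # first run after 5 minutes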
While researching this, I noticed that there's also a sched module that provides this functionality as well, but rather than rehash the solution, check out this related question:
Python Equivalent of setInterval()?
timedelta takes days, seconds, microseconds, milliseconds, minutes, hours, and weeks as keyword arguments and works accordingly:
from datetime import datetime, timedelta

end_time = datetime.now() + timedelta(minutes=5)
while end_time >= datetime.now():
    pass  # statements go here

PYTHON: How to perform a set of instructions at a predefined time

I have a set of instructions, say {I}, and I would like to perform this set {I}
at a predefined time, for instance each minute.
I'm not asking how to insert a delay of 1 minute between two successive executions of
the set {I}; I want to start the instructions {I} each minute, independently of the execution time of {I}.
If I understand correctly, the following code
import time

while True:
    {I}
    time.sleep(60)
would simply insert a delay of 60 seconds between the end of one execution of {I} and the start of the following one. Is that true? Instead, I would like the set of instructions {I} to start each minute (for instance at 9:00 am, 9:01 am, 9:02 am, etc.).
Is it possible to perform such a task inside Python, or is it preferable to write a script with {I} that I execute each minute, for instance, with crontab?
Thank you in advance
Looks like signal.alarm and signal.signal(signal.SIGALRM, handler) should help you.
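A rough sketch of that signal-based approach (Unix only, since SIGALRM does not exist on Windows); re-arming the alarm at the top of the handler keeps the schedule independent of how long {I} takes:

import signal
import time

def handler(signum, frame):
    signal.alarm(60)        # re-arm first so the next run is not delayed by {I}
    print('running {I}')    # stand-in for the instruction set {I}

signal.signal(signal.SIGALRM, handler)
signal.alarm(60)            # first alarm in 60 seconds

while True:                 # keep the process alive; the handler interrupts this
    time.sleep(3600)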
If you don't need finer resolution than a minute, cron would be the easiest option. Otherwise you'd end up re-writing something like it.
If you need intervals shorter than a minute, you might consider "timeouts" from the glib library. It has Python bindings. The timeout should then probably start the task in a separate process.
Something like APScheduler might meet your needs.
I'm sure there are other similar packages out there as well.
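As a sketch of what that could look like with APScheduler (the API shown is APScheduler 3.x's BlockingScheduler; older releases used a decorator-based interface, so check the version you install):

from apscheduler.schedulers.blocking import BlockingScheduler

def run_instructions():
    pass  # the set {I} goes here

scheduler = BlockingScheduler()
# Fire every minute, regardless of how long run_instructions takes to finish.
scheduler.add_job(run_instructions, 'interval', minutes=1)
scheduler.start()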
Chances are, you'd have to instantiate separate threads for every instruction to be run concurrently, and simply dispatch them in your delayed while loop.
You could spawn a thread every second using threading.Timer:
import threading
import time

def do_stuff(count):
    print(count)
    if count < 10:  # Let's build in some way to quit
        t = threading.Timer(1.0, do_stuff, args=[count+1])
        t.start()

t = threading.Timer(0.0, do_stuff, args=[0])
t.start()
t.join()
Using the sched module is another possibility, but note that the sched.scheduler.run method blocks the main process until the event queue is empty. (So if the do_stuff function takes longer than a second, the next event won't run on time.) If you want nonblocking events, use threading.Timer.
