Python - Infinite loop script running in background + restart

I would like to have a Python script that runs in the background (an infinite loop):
def main():
    # initialize and start threads
    [...]
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        my.cleanup()

if __name__ == '__main__':
    main()
    my.cleanup()
In order to have the application run constantly in an infinite loop, what is the best way? I want to remove the time.sleep(1), which I don't need.
I would like to run the script in the background with nohup python myscript.py &. Is there a way to kill it "gracefully"? When I run it normally and hit CTRL+C, my.cleanup() is called; is there a way to call this when the kill command is used?
What if I would like (using cron) to kill the script and restart it again? Is there a way to make it do my.cleanup()?
Thank you

In order to have the application run constantly in an infinite loop, what is the best way? I want to remove the time.sleep(1), which I don't need.
A while True or while <condition> loop is OK, in my opinion.
A sleep() is not mandatory for such an infinite loop, as long as you don't require your program to wait for a certain time period.
I would like to run the script in the background with nohup python myscript.py &. Is there a way to kill it "gracefully"? When I run it normally and hit CTRL+C, my.cleanup() is called; is there a way to call this when the kill command is used?
You may want to "listen" for several signals using the signal() function from the signal module.
Simplified example, extended with signal hooks:
import time
import signal

# determines if the loop is running
running = True

def cleanup():
    print("Cleaning up ...")

def main():
    global running
    # add a hook for TERM (15) and INT (2)
    signal.signal(signal.SIGTERM, _handle_signal)
    signal.signal(signal.SIGINT, _handle_signal)
    # running is True by default - it is set to False on signal
    while running:
        time.sleep(1)

# called when receiving a hooked signal ...
def _handle_signal(signum, frame):
    global running
    # mark the loop stopped
    running = False
    # clean up
    cleanup()

if __name__ == '__main__':
    main()
Note that you can't listen for SIGKILL; when a program is terminated with that signal, it has no chance to do any cleanup. Your program should be aware of that (and do some kind of pre-start cleanup, or fail with a proper message).
Note that I used a global variable to keep this example simple; I would prefer to encapsulate this in a custom class.
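For the pre-start cleanup mentioned above, here is a minimal sketch, assuming the program leaves a lock file behind; the path and file name are placeholders:

import os

LOCKFILE = '/tmp/myscript.lock'

def prestart_cleanup():
    # a leftover lock file means a previous run was killed (e.g. by SIGKILL)
    # without a chance to clean up
    if os.path.exists(LOCKFILE):
        print("Found a stale lock file, cleaning up ...")
        os.remove(LOCKFILE)

prestart_cleanup()
open(LOCKFILE, 'w').close()  # mark this run as active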
What if I would like (using cron) kill the script and restart it again? Is there a way to make it do my.cleanup()?
As long as your cron job kills the program with any signal except SIGKILL, this is possible, of course.
You should consider doing what you want in a different way: for example, if you want to "re-do" some setup task before the infinite-loop task, you could also do this upon a certain signal (some programs use SIGHUP for reloading configurations, for example). You'd have to break the loop, do the task, and resume it.
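For illustration, here is a minimal sketch of that pattern on a POSIX system; load_config() is a hypothetical placeholder for your setup task, and the loop stands in for your program's main loop:

import signal
import time

reload_requested = False

def load_config():
    # hypothetical placeholder for the setup task to re-do
    print("Reloading configuration ...")

def _handle_hup(signum, frame):
    global reload_requested
    reload_requested = True

# SIGHUP is only available on POSIX systems
signal.signal(signal.SIGHUP, _handle_hup)

while True:
    if reload_requested:
        reload_requested = False
        load_config()
    time.sleep(1)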

Related

How to kill a single process in Python at any point of its execution?

I'm fairly new to Python and have created a rudimentary application that runs a series of what are functionally macros, for the user to automate some tedious processes in an existing application. Some of these processes take a while, and others I would like to run indefinitely until the user hits a key to stop them. It is a small program and I was in a hurry, so the easiest solution was throwing a stop field into my classes and putting "if stop -> return/break" in numerous spots throughout my methods. The code below should demonstrate the general idea.
class ExampleClass:
    stop = False

    def stop_doing_stuff(self):
        self.stop = True

    def do_stuff(self):
        for i in range(10000):
            if self.stop:
                return
            do_thing()
This seems to me like an amateur solution, and I would like to know how it could be done better. I'm sure there is a way to accomplish this with threading, but I've only briefly worked with threads in the past, so I was not sure how at the time. I was most curious, though, as to whether there is perhaps an even easier solution that does not involve launching a thread, since I have no need of multiple processes running concurrently.
Edit:
I forgot to mention that this is a GUI application. The user presses buttons which do the tasks for them. However, the GUI is hidden while a task is executed.
You may catch a KeyboardInterrupt:
try:
    for i in range(10000):
        do_things()
except KeyboardInterrupt:
    return
You can immediately terminate a script with a specified exit status using sys.exit(exit_status):
import sys

def foo():
    sys.exit(0)  # exit with success status (zero)
    sys.exit(1)  # exit with error status (non-zero)
If you want to terminate the script via the keyboard, you need only hit Ctrl + C / Cmd + C. You can handle that interrupt gracefully using signal:
#!/usr/bin/env python
import signal
import sys

def signal_handler(sig, frame):
    print('You pressed Ctrl+C!')
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
print('Press Ctrl+C to terminate')
# ..perform work here..

Kill a python process when busy in a loop

I'm running a script on my Raspberry Pi. Sometimes the program freezes, so I have to close the terminal and re-run the .py.
So I wanted to "multiprocess" this program. I made two functions: the first one does the work, the second one has the job of checking the time and killing the process of the first function if a condition is true.
I tried to do it like so:
def AntiFreeze():
    print("AntiFreeze started\n")
    global stop
    global endtime
    global freq
    proc_SPN = multiprocessing.Process(target=SPN(), args=())
    proc_SPN.start()
    time.sleep(2)
    proc_SPN.terminate()
    proc_SPN.join()

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze(), args=())
    proc_AF.start()
The main function starts the AntiFreeze function in a process, which creates another process to run the function that will do the job I want.
THE PROBLEM (I think):
The function SPN() (the one that does the job) is busy in a very long while loop that calls functions in another .py file.
So when I use proc_SPN.terminate() or proc_SPN.kill(), nothing happens... why?
Is there another way to force a process to be killed? Maybe I have to make two different programs?
Thanks in advance for the help.
You are calling your function at process creation, so most likely the process is never correctly spawned. Your code should be changed to:
def AntiFreeze():
    print("AntiFreeze started\n")
    global stop
    global endtime
    global freq
    proc_SPN = multiprocessing.Process(target=SPN, args=())
    proc_SPN.start()
    time.sleep(2)
    proc_SPN.terminate()
    proc_SPN.join()

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze, args=())
    proc_AF.start()
Furthermore, you shouldn't use globals (unless strictly necessary). You could pass the needed arguments to the AntiFreeze function instead.
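As an illustration of that suggestion, here is a minimal sketch of passing values instead of using globals; the parameter names (endtime, freq) mirror the question's globals, and SPN here is only a stand-in for the question's worker function:

import multiprocessing
import time

def SPN(endtime, freq):
    # stand-in for the question's long-running worker
    while time.time() < endtime:
        time.sleep(freq)

def AntiFreeze(endtime, freq):
    proc_SPN = multiprocessing.Process(target=SPN, args=(endtime, freq))
    proc_SPN.start()
    time.sleep(2)
    proc_SPN.terminate()
    proc_SPN.join()

if __name__ == '__main__':
    proc_AF = multiprocessing.Process(target=AntiFreeze, args=(time.time() + 60, 1))
    proc_AF.start()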

How to run a python script multiple times simultaneously using python and terminate all when one has finished

Maybe it's a very simple question, but I'm new to concurrency. I want to write a Python script that runs foo.py 10 times simultaneously, with a time limit of 60 seconds before automatically aborting. The script is a non-deterministic algorithm, hence all executions take different times, and one will finish before the others. Once the first ends, I would like to save its execution time and the output of the algorithm, and after that kill the rest of the processes.
I have seen this question, run multiple instances of python script simultaneously, and it looks very similar, but how can I add a time limit and the possibility of killing the rest of the processes when the first one finishes?
Thank you in advance.
I'd suggest using the threading lib, because with it you can set threads to be daemon threads, so that if the main thread exits for whatever reason, the other threads are killed. Here's a small example:
# Import the libs...
import threading, time

# Global variables... (list of results)
results = []

# The function you want to run several times simultaneously...
def run():
    # We declare results as a global variable.
    global results
    # Do stuff...
    results.append("Hello World! These are my results!")

n = int(input("Welcome user, how many times should I execute run()? "))

# We run the thread n times.
for _ in range(n):
    # Define the thread.
    t = threading.Thread(target=run)
    # Make the thread a daemon: if the main process exits, the threads are killed.
    t.daemon = True
    # Start the thread.
    t.start()

# Once the threads have started we can execute the main code.
# We set a timer...
startTime = time.time()
while True:
    # If the timer reaches 60 s we exit from the program.
    if time.time() - startTime >= 60:
        print("[ERROR] The script took too long to run!")
        exit()
    # Do stuff on your main thread; if the stuff is complete you can break
    # from the while loop as well.
    results.append("Main result.")
    break

# When we break from the while loop we print the output.
print("Here are the results: ")
for i in results:
    print(f"-{i}")
This example should solve your problem, but if you wanted to use blocking commands on the main thread, the timer would fail, so you'd need to tweak this code a bit. In that case, move the code from the main thread's loop into a new function (for example, def main():) and start the rest of the threads from inside main. This example may help you:
import threading

def run():
    pass

# Secondary "main" thread.
def main():
    # Start the rest of the threads (in this case I just start 1).
    localT = threading.Thread(target=run)
    localT.daemon = True
    localT.start()
    # Do stuff.
    pass

# Actual main thread...
t = threading.Thread(target=main)
t.daemon = True
t.start()
# Set up a timer and fetch the results you need with a global list or any other method...
pass
Now, you should avoid global variables at all costs, as sometimes they may be a bit buggy, but for some reason the threading lib doesn't let you return values from threads, at least not by any method I know of. I think there are other multiprocessing libs out there that do let you return values, but I don't know anything about them, so I can't explain them. Anyway, I hope that this works for you.
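As a side note, the standard library's concurrent.futures does let you get values back from workers; here is a minimal sketch of that, as one possible alternative to the global results list:

import concurrent.futures

def run():
    return "Hello World! These are my results!"

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(run) for _ in range(5)]
    # result() returns whatever the worker function returned
    for f in concurrent.futures.as_completed(futures):
        print(f.result())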
Update: OK, I was busy writing the code and I didn't read the comments on the post, sorry. You can still use this method, but instead of writing the code inside the threads, execute another script. You could either import it as a module or actually run it as a script; here's a question that may help you with that:
How to run one python file in another file?
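As a rough sketch of running the script as separate processes, assuming a foo.py exists in the working directory (the 10 copies and the 60-second limit come from the question; the polling interval is arbitrary):

import subprocess
import time

# launch 10 copies of the script
procs = [subprocess.Popen(["python", "foo.py"]) for _ in range(10)]
start = time.time()
winner = None
while winner is None and time.time() - start < 60:
    for p in procs:
        if p.poll() is not None:  # this process has finished
            winner = p
            break
    time.sleep(0.1)

# kill everything that is still running (the winner, if any, already exited)
for p in procs:
    if p.poll() is None:
        p.kill()

if winner is not None:
    print("First finished with code %d after %.1f s" % (winner.returncode, time.time() - start))
else:
    print("Timed out after 60 s")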

How do you kill Futures once they have started?

I am using the new concurrent.futures module (which also has a Python 2 backport) to do some simple multithreaded I/O. I am having trouble understanding how to cleanly kill tasks started using this module.
Check out the following Python 2/3 script, which reproduces the behavior I'm seeing:
#!/usr/bin/env python
from __future__ import print_function
import concurrent.futures
import time

def control_c_this():
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future1 = executor.submit(wait_a_bit, name="Jack")
        future2 = executor.submit(wait_a_bit, name="Jill")
        for future in concurrent.futures.as_completed([future1, future2]):
            future.result()
    print("All done!")

def wait_a_bit(name):
    print("{n} is waiting...".format(n=name))
    time.sleep(100)

if __name__ == "__main__":
    control_c_this()
While this script is running, it appears impossible to kill it cleanly using the regular Control-C keyboard interrupt. I am running on OS X.
On Python 2.7 I have to resort to kill from the command line to kill the script. Control-C is just ignored.
On Python 3.4, Control-C works if you hit it twice, but then a lot of strange stack traces are dumped.
Most documentation I've found online talks about how to cleanly kill threads with the old threading module. None of it seems to apply here.
And all the methods provided within the concurrent.futures module to stop stuff (like Executor.shutdown() and Future.cancel()) only work when the Futures haven't started yet or are complete, which is pointless in this case. I want to interrupt the Future immediately.
My use case is simple: When the user hits Control-C, the script should exit immediately like any well-behaved script does. That's all I want.
So what's the proper way to get this behavior when using concurrent.futures?
It's kind of painful. Essentially, your worker threads have to be finished before your main thread can exit; you cannot exit unless they do. The typical workaround is to have some global state that each thread can check to determine whether it should do more work or not.
Here's the quote explaining why. In essence, if threads exited when the interpreter does, bad things could happen.
Here's a working example. Note that Ctrl-C takes at most 1 second to propagate, because of the sleep duration of the child thread.
#!/usr/bin/env python
from __future__ import print_function
import concurrent.futures
import time
import sys

quit = False

def wait_a_bit(name):
    while not quit:
        print("{n} is doing work...".format(n=name))
        time.sleep(1)

def setup():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    future1 = executor.submit(wait_a_bit, "Jack")
    future2 = executor.submit(wait_a_bit, "Jill")
    # main thread must be doing "work" to be able to catch a Ctrl+C
    # http://www.luke.maurits.id.au/blog/post/threads-and-signals-in-python.html
    while not (future1.done() and future2.done()):
        time.sleep(1)

if __name__ == "__main__":
    try:
        setup()
    except KeyboardInterrupt:
        quit = True
I encountered this, but the issue I had was that many futures (tens of thousands) would be waiting to run, and just pressing Ctrl-C left them waiting, not actually exiting. I was using concurrent.futures.wait to run a progress loop and needed to add a try ... except KeyboardInterrupt to handle cancelling unfinished Futures.
POLL_INTERVAL = 5
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    futures = [pool.submit(do_work, arg) for arg in large_set_to_do_work_over]
    # next line returns instantly
    done, not_done = concurrent.futures.wait(futures, timeout=0)
    try:
        while not_done:
            # next line 'sleeps' this main thread, letting the thread pool run
            freshly_done, not_done = concurrent.futures.wait(not_done, timeout=POLL_INTERVAL)
            done |= freshly_done
            # more polling stats calculated here and printed every POLL_INTERVAL seconds...
    except KeyboardInterrupt:
        # only futures that are not done will prevent exiting
        for future in not_done:
            # cancel() returns False if it's already done or currently running,
            # and True if it was able to cancel it; we don't need that return value
            _ = future.cancel()
        # wait for running futures that the above for loop couldn't cancel (note timeout)
        _ = concurrent.futures.wait(not_done, timeout=None)
If you're not interested in keeping exact track of what got done and what didn't (i.e. you don't want a progress loop), you can replace the first wait call (the one with timeout=0) with not_done = futures and still keep the while not_done: logic.
The for future in not_done: cancel loop can probably behave differently based on that return value (or be written as a comprehension), but waiting for futures that are done or cancelled isn't really waiting: it returns instantly. The last wait with timeout=None ensures that the pool's running jobs really do finish.
Again, this only works correctly if the do_work that's being called actually, eventually, returns within a reasonable amount of time. That was fine for me; in fact, I want to be sure that if do_work gets started, it runs to completion. If do_work is 'endless', then you'll need something like cdosborn's answer, which uses a variable visible to all the threads, signaling them to stop themselves.
Late to the party, but I just had the same problem.
I want to kill my program immediately and I don't care what's going on. I don't need a clean shutdown beyond what Linux will do.
I found that replacing geitda's code in the KeyboardInterrupt exception handler with os.kill(os.getpid(), 9) exits immediately after the first ^C.
import os
import subprocess

main = str(os.getpid())

def ossystem(c):
    return subprocess.Popen(c, shell=True, stdout=subprocess.PIPE).stdout.read().decode("utf-8").strip()

def killexecutor():
    print("Killing")
    pids = ossystem('ps -a | grep scriptname.py').split('\n')
    for pid in pids:
        pid = pid.split(' ')[0].strip()
        if str(pid) != main:
            os.kill(int(pid), 9)

...

killexecutor()

Programmatically exiting python script while multithreading

I have some code which runs routinely, and every now and then (like once a month) the program seems to hang somewhere and I'm not sure where.
I thought I would implement [what has turned out to be not quite] a "quick fix" of checking how long the program has been running for. I decided to use multithreading to call the function, and then while it is running, check the time.
For example:
import datetime
import threading

def myfunc():
    # Code goes here
    pass

t = threading.Thread(target=myfunc)
t.start()
d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        print('Exiting!')
        raise SystemExit(0)
However, this does not close the other thread (myfunc).
What is the best way to go about killing the other thread?
The docs could be clearer about this. Raising SystemExit tells the interpreter to quit, but "normal" exit processing is still done. Part of normal exit processing is .join()-ing all active non-daemon threads. But your rogue thread never ends, so exit processing waits forever to join it.
As #roippi said, you can do
t.daemon = True
before starting it. Normal exit processing does not wait for daemon threads; your OS should then kill them when the main process exits.
Another alternative:
import os
os._exit(13) # whatever exit code you want goes there
That stops the interpreter "immediately", and skips all normal exit processing.
Pick your poison ;-)
There is no way to kill a thread; you must kill the target from within the target. The best way is with a hook and a queue. It goes something like this:
import threading
import datetime
from queue import Queue, Empty

# add a kill_hook arg to your function; kill_hook
# is a queue used to pass messages to the main thread
def myfunc(*args, kill_hook=None, **kwargs):
    # Code goes here

    # put this somewhere which is periodically checked;
    # an ideal place to check the hook is when logging
    try:
        if kill_hook.get_nowait():  # or use kill_hook.get(True, 5) to wait longer
            print('Exiting!')
            raise SystemExit(0)
    except Empty:
        pass

q = Queue()  # the queue used to pass the kill call
t = threading.Thread(target=myfunc, kwargs={'kill_hook': q})
t.start()
d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        # if your kill criteria are met, put something in the queue
        q.put(1)
I originally found this answer somewhere online, possibly this. Hope this helps!
Another solution would be to use a separate instance of Python to monitor the other Python process, killing it from the system level with psutil.
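A rough sketch of that watchdog idea with psutil (a third-party package, installed with pip install psutil); the pid argument and the timeout are placeholders:

import time
import psutil

def watchdog(pid, timeout=60):
    # monitor the given process and kill it if it runs too long
    proc = psutil.Process(pid)
    start = time.time()
    while proc.is_running():
        if time.time() - start > timeout:
            proc.terminate()  # sends SIGTERM; use proc.kill() for SIGKILL
            break
        time.sleep(1)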
Wow, I like the daemon and stealth os._exit solutions too!
