I am looking for a way to send a keystroke to a Python script. In this case, I want the script to detect when any key is pressed, not only the interrupt signals (Ctrl+C, Ctrl+D, ...).
I have checked the Python signal module, but it seems to be designed only for handling signals, not for detecting key presses like "K" or "Space". I have seen this example in the official docs of the module:
import signal
import os
import time

def receive_signal(signum, stack):
    print 'Received:', signum

signal.signal(signal.SIGUSR1, receive_signal)
signal.signal(signal.SIGUSR2, receive_signal)

print 'My PID is:', os.getpid()

while True:
    print 'Waiting...'
    time.sleep(3)
And they say:
To send signals to the running program, I use the command line program kill. To produce the output below, I ran signal_signal.py in one window, then kill -USR1 $pid, kill -USR2 $pid, and kill -INT $pid in another.
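For illustration, the same signals can also be sent from another Python process instead of the shell (the PID below is a hypothetical placeholder for the one the script prints):

import os, signal

pid = 12345  # hypothetical: the PID printed by the running script

os.kill(pid, signal.SIGUSR1)  # delivered to receive_signal as SIGUSR1
os.kill(pid, signal.SIGUSR2)  # delivered to receive_signal as SIGUSR2
os.kill(pid, signal.SIGINT)   # interrupts the script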
I am quite sure that this module is not the solution. Do you know of a module, or anything else, that could help me send keystrokes to my Python script asynchronously?
Thanks a lot!
I want the user to be able to skip a day, a month, or a machine by pressing a key at any moment.
Ah. Now it makes sense.
And I'm not very sure that this would be possible.
Anything's possible. It can just be quite complex for a truly asynchronous solution.
The only way I could think to do it, while avoiding a polling approach, was to fork(2) the process, have the parent process listen for keypresses, and send signals to the child process, which actually does the work.
Something like this...
#!/usr/bin/env python

import sys, os, time, termios, tty, signal

# Define some custom exceptions we can raise in signal handlers
class SkipYear(Exception):
    pass

class SkipMonth(Exception):
    pass

# Process one month
def process_month(year, month):

    # Fake up whatever the processing actually is
    print 'Processing %04d-%02d' % (year, month)
    time.sleep(1)

# Process one year
def process_year(year):

    # Iterate months 1-12
    for month in range(1, 13):
        try:
            process_month(year, month)
        except SkipMonth:
            print 'Skipping month %d' % month

# Do all processing
def process_all(args):

    # Help
    print 'Started processing - args = %r' % args

    try:
        # Iterate years 2010-2015
        for year in range(2010, 2016):
            try:
                process_year(year)
            except SkipYear:
                print 'Skipping year %d' % year

    # Handle SIGINT from parent process
    except KeyboardInterrupt:
        print 'Child caught SIGINT'

    # Return success
    print 'Child terminated normally'
    return 0

# Main entry point
def main(args):

    # Help
    print 'Press Y to skip current year, M to skip current month, or CTRL-C to abort'

    # Get file descriptor for stdin. This is almost always zero.
    stdin_fd = sys.stdin.fileno()

    # Fork here
    pid = os.fork()

    # If we're the child
    if not pid:

        # Detach child from controlling TTY, so it can't be the foreground
        # process, and therefore can't get any signals from the TTY.
        os.setsid()

        # Define signal handler for SIGUSR1 and SIGUSR2
        def on_signal(signum, frame):
            if signum == signal.SIGUSR1:
                raise SkipYear
            elif signum == signal.SIGUSR2:
                raise SkipMonth

        # We want to catch SIGUSR1 and SIGUSR2
        signal.signal(signal.SIGUSR1, on_signal)
        signal.signal(signal.SIGUSR2, on_signal)

        # Now do the thing
        return process_all(args[1:])

    # If we get this far, we're the parent

    # Define a signal handler for when the child terminates
    def on_sigchld(signum, frame):
        assert signum == signal.SIGCHLD
        print 'Child terminated - terminating parent'
        sys.exit(0)

    # We want to catch SIGCHLD
    signal.signal(signal.SIGCHLD, on_sigchld)

    # Remember the original terminal attributes
    stdin_attrs = termios.tcgetattr(stdin_fd)

    # Change to cbreak mode, so we can detect single keypresses
    tty.setcbreak(stdin_fd)

    try:

        # Loop until we get a signal. Typically one of...
        #
        # a) SIGCHLD, when the child process terminates
        # b) SIGINT, when the user presses CTRL-C
        while 1:

            # Wait for a keypress
            char = os.read(stdin_fd, 1)

            # If it was 'Y', send SIGUSR1 to the child
            if char.lower() == 'y':
                os.kill(pid, signal.SIGUSR1)

            # If it was 'M', send SIGUSR2 to the child
            if char.lower() == 'm':
                os.kill(pid, signal.SIGUSR2)

    # Parent caught SIGINT - send SIGINT to child process
    except KeyboardInterrupt:
        print 'Forwarding SIGINT to child process'
        os.kill(pid, signal.SIGINT)

    # Catch system exit
    except SystemExit:
        print 'Caught SystemExit'

    # Ensure we reset terminal attributes to original settings
    finally:
        termios.tcsetattr(stdin_fd, termios.TCSADRAIN, stdin_attrs)

    # Return success
    print 'Parent terminated normally'
    return 0

# Stub
if __name__ == '__main__':
    sys.exit(main(sys.argv))
...should do the trick, although you'll be limited by the number of distinct signals you can send.
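If two user-defined signals aren't enough, one way to widen that channel (a sketch of my own, assuming a Linux system where Python exposes the POSIX real-time signal range) is to map keys onto signal.SIGRTMIN..SIGRTMAX, which gives you room for many more distinct commands:

import os, signal

# Map keys to distinct real-time signals. Linux exposes SIGRTMIN..SIGRTMAX;
# the exact number of signals available is platform-dependent.
KEY_SIGNALS = {
    'y': signal.SIGRTMIN,      # skip year
    'm': signal.SIGRTMIN + 1,  # skip month
    'd': signal.SIGRTMIN + 2,  # hypothetical extra command, e.g. skip day
}

def send_command(child_pid, key):
    # Send the mapped signal to the child, ignoring unmapped keys
    sig = KEY_SIGNALS.get(key.lower())
    if sig is not None:
        os.kill(child_pid, sig)

The child would then register one handler per real-time signal, exactly as with SIGUSR1 and SIGUSR2 above.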
I'm writing a script that runs a background process in parallel. When restarting the script, I want to be able to kill the background process and have it exit cleanly by sending it a CTRL_C_EVENT signal. For some reason though, sending the CTRL_C_EVENT signal to the child process also causes the same signal to be sent to the parent process. I suspect that the KeyboardInterrupt exception isn't being cleaned up after the child process gets it and is then caught by the main process.
I'm using Python version 2.7.1 and running on Windows Server 2012.
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while(True):
            time.sleep(10)
    except KeyboardInterrupt:
        #exit cleanly
        return

def script():
    try:
        print "Starting function"

        #Kill all background processes
        for proc in multiprocessing.active_children():
            print "Killing " + str(proc) + " with PID " + str(proc.pid)
            os.kill(proc.pid, signal.CTRL_C_EVENT)

        print "Creating background process"
        newProc = multiprocessing.Process(target=backgroundProcess)
        print "Starting new background process"
        newProc.start()
        print "Process PID is " + str(newProc.pid)
    except KeyboardInterrupt:
        print "Unexpected keyboard interrupt"

def main():
    script()
    time.sleep(5)
    script()
I expect that the script() function should never receive a KeyboardInterrupt exception, but it is triggered the second time the function is called. Why is this happening?
I'm still looking for an explanation as to why the issue occurs, but I'll post my (albeit somewhat hacky) workaround here in case it helps anyone else. Since the Ctrl+C gets propagated to the parent process (still not entirely sure why this happens; it appears that a console CTRL_C_EVENT on Windows is delivered to every process attached to the same console, not just the target PID), I'm going to just catch the exception when it arrives and do nothing.
Eryk suggested using an extra watchdog thread to handle terminating the extra process, but for my application this introduces extra complexity and seems a bit overkill for the rare case that I actually need to kill the background process. Most of the time the background process in my application will close itself cleanly when it's done.
I'm still open to suggestions for a better implementation that doesn't add too much complexity (more processes, threads, etc.).
Modified code here:
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while(True):
            time.sleep(10)
    except KeyboardInterrupt:
        #Exit cleanly
        return

def script():
    print "Starting function"

    #Kill all background processes
    for proc in multiprocessing.active_children():
        print "Killing " + str(proc) + " with PID " + str(proc.pid)
        try:
            #Apparently sending a CTRL-C to the child also sends it to the parent??
            os.kill(proc.pid, signal.CTRL_C_EVENT)
            #Sleep until the parent receives the KeyboardInterrupt, then ignore it
            time.sleep(1)
        except KeyboardInterrupt:
            pass

    print "Creating background process"
    newProc = multiprocessing.Process(target=backgroundProcess)
    print "Starting new background process"
    newProc.start()
    print "Process PID is " + str(newProc.pid)

def main():
    script()
    time.sleep(5)
    script()
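For completeness, here is my reading of the watchdog/Event idea as an untested sketch: the child polls a multiprocessing.Event instead of being sent CTRL_C_EVENT, so no console signal (and no stray KeyboardInterrupt) is involved at all:

import multiprocessing
import time

def backgroundProcess(stop_event):
    # Poll the shared event instead of waiting for a console signal
    while not stop_event.is_set():
        time.sleep(1)
    # exits cleanly once the parent sets the event

def script(old_event):
    # Ask the previous background process to stop, signal-free
    if old_event is not None:
        old_event.set()
    new_event = multiprocessing.Event()
    proc = multiprocessing.Process(target=backgroundProcess, args=(new_event,))
    proc.start()
    return new_event

if __name__ == '__main__':
    ev = script(None)
    time.sleep(5)
    ev = script(ev)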
This is the Daemon class I am using. It acts as a base class from which I want to spawn 2 separate daemons, from another controller file.
import sys, os, time, atexit, signal

class Daemon:
    """A generic daemon class.

    Usage: subclass the Daemon class and override the run() method."""

    def __init__(self, pidfile, outfile='/tmp/daemon_out', errfile='/tmp/daemon_log'):
        self.pidfile = pidfile
        self.outfile = outfile
        self.errfile = errfile

    def daemonize(self):
        """Daemonize class. UNIX double fork mechanism."""
        try:
            pid = os.fork()
            if pid > 0:
                # exit first parent
                sys.exit(0)
        except OSError as err:
            sys.stderr.write('fork #1 failed: {0}\n'.format(err))
            sys.exit(1)

        # decouple from parent environment
        os.chdir('/')
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid > 0:
                # exit from second parent
                sys.exit(0)
        except OSError as err:
            sys.stderr.write('fork #2 failed: {0}\n'.format(err))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = open(os.devnull, 'r')
        so = open(self.outfile, 'a+')
        se = open(self.errfile, 'a+')

        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        # write pidfile
        atexit.register(self.delpid)

        pid = str(os.getpid())
        with open(self.pidfile, 'w+') as f:
            f.write(pid + '\n')

    # method for removing the pidfile before stopping the program
    # remove the commented part if you want to delete the output & error file before stopping the program
    def delpid(self):
        os.remove(self.pidfile)
        #os.remove(self.outfile)
        #os.remove(self.errfile)

    def start(self):
        """Start the daemon."""

        # Check for a pidfile to see if the daemon is already running
        try:
            with open(self.pidfile, 'r') as pf:
                pid = int(pf.read().strip())
        except IOError:
            pid = None

        if pid:
            message = "pidfile {0} already exists. " + \
                      "Daemon already running?\n"
            sys.stderr.write(message.format(self.pidfile))
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """Stop the daemon."""

        # Get the pid from the pidfile
        try:
            with open(self.pidfile, 'r') as pf:
                pid = int(pf.read().strip())
        except IOError:
            pid = None

        if not pid:
            message = "pidfile {0} does not exist. " + \
                      "Daemon not running?\n"
            sys.stderr.write(message.format(self.pidfile))
            return  # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, signal.SIGTERM)
                time.sleep(0.1)
        except OSError as err:
            e = str(err.args)
            if e.find("No such process") > 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print(str(err.args))
                sys.exit(1)

    def restart(self):
        """Restart the daemon."""
        self.stop()
        self.start()

    def run(self):
        """Override this method when you subclass Daemon.

        It will be called after the process has been daemonized by
        start() or restart()."""
Here is the code I am using in a different file. In this file I am extending the Daemon class with separate subclasses and overriding the run() method.
#! /usr/bin/python3.6

import sys, time, os, psutil, datetime
from daemon import Daemon

class net(Daemon):
    def run(self):
        while(True):
            print("net daemon : ", os.getpid())
            time.sleep(200)

class file(Daemon):
    def run(self):
        while(True):
            print("file daemon : ", os.getpid())
            time.sleep(200)

if __name__ == "__main__":
    net_daemon = net(pidfile='/tmp/net_pidFile', outfile='/tmp/network_out.log', errfile='/tmp/net_error.log')
    file_daemon = file(pidfile='/tmp/file_pidFile', outfile='/tmp/filesys_out.log', errfile='/tmp/file_error.log')

    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            net_daemon.start()
            file_daemon.start()
        elif 'stop' == sys.argv[1]:
            file_daemon.stop()
            net_daemon.stop()
        elif 'restart' == sys.argv[1]:
            file_daemon.restart()
            net_daemon.restart()
        else:
            print("Unknown command")
            sys.exit(2)
        sys.exit(0)
    else:
        print("usage: %s start|stop|restart" % sys.argv[0])
        sys.exit(2)
Currently only the first class whose start() method runs actually works, so only the net daemon runs. How do I make the 2 classes spawn 2 separate daemons?
The real problem here is that you've chosen the wrong code for the task you want. You're asking "How do I use this power saw to hammer in this nail?" And in this case, it's not even a professionally-produced saw with an instruction manual, it's a home-made saw you found in someone's garage, built by a guy who probably knew what he was doing but you can't actually be sure because you don't know what he was doing.
The proximate problem that you're complaining about is in daemonize:
try:
    pid = os.fork()
    if pid > 0:
        # exit first parent
        sys.exit(0)
The first time you call this, the parent process exits. Which means the parent process never gets to launch the second daemon, or do anything else.
For a self-daemonizing program that can be managed by a separate program, this is exactly what you want. (Whether it gets all the details right, I don't know, but the basic idea is definitely right.)
For a managing program that spawns daemons, this is exactly what you don't want. And that's what you're trying to write. So this is the wrong tool for the job.
But the tasks aren't that much different. If you understand what you're doing (and crack open your copy of Unix Network Programming—nobody understands this stuff well enough to get it right off the top of their head), you can convert one into the other. Which might be a useful exercise, even if for any real application I'd just use one of the well-tested, well-documented, nicely-maintained libraries on PyPI.
What happens if you just replace the sys.exit(0) calls that happen in the parent process (but not the ones that happen in the intermediate child!) with return True? (Well, you probably want to also replace the sys.exit(1) in the parent with a return False or raise some kind of exception.) Then daemonize no longer daemonizes you, but instead spawns a daemon and reports back on whether it succeeded. Which is what you wanted, right?
No guarantees that it does everything else right (and I'd bet it doesn't), but it does solve the specific problem you were asking about.
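For concreteness, here's a minimal sketch of that change (same caveats: this only adjusts the parent-process paths of the posted daemonize, it is not a vetted daemon implementation, and SpawnError is a name I'm introducing):

import sys, os

class SpawnError(Exception):
    pass

class Daemon:
    # ... __init__, delpid, stop, restart, run unchanged ...

    def daemonize(self):
        """Spawn a daemon; returns True in the managing (parent) process."""
        try:
            pid = os.fork()
            if pid > 0:
                return True  # parent: a daemon is being spawned
        except OSError as err:
            raise SpawnError('fork #1 failed: {0}'.format(err))

        # The rest of the original daemonize() body runs unchanged from
        # here: os.setsid(), the second fork (whose parent still calls
        # sys.exit(0); that's the intermediate child, not the manager),
        # fd redirection, and pidfile writing.

    def start(self):
        # ... pidfile check as before ...
        if self.daemonize():
            return      # managing process: free to go spawn more daemons
        self.run()      # daemon process: do the actual work

Note that start() has to change too, otherwise the managing process would fall through and call run() itself.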
If nothing obvious is going wrong after that, the next step would probably be to read through PEP 3143 (which does a pretty nice job translating all the details in Stevens' book into Python terms and making sure they're up to date for 21st century linux and BSD) and come up with a checklist of tests to run, and then run them to see what less obvious things you're still getting wrong.
I have an infinite loop with operations that must be fully executed before exiting the loop. Specifically, I am using the socket library to connect to an external device, and I need to wait for the read instructions to finish before interrupting the loop.
I have tried using a signal handler (like in this question) for raising a flag when a Keyboard interrupt is detected.
Current code:
import videosensor
import signal
import time

def signal_handler(signal, frame):
    """Raises a flag when a keyboard interrupt is raised."""
    global interrupted
    interrupted = True

if __name__ == '__main__':
    camera = videosensor.VideoSensor(filename)
    interrupted = False
    signal.signal(signal.SIGINT, signal_handler)

    while not interrupted:
        location = camera.get_register()
        #...
        #More irrelevant stuff is executed.
        #...
        time.sleep(0.01)

    #This code has to be executed after exiting while loop
    camera_shutdown(camera)
In the previous code, videosensor.VideoSensor is a class containing socket operations for getting data from an external device. The get_register() method used in the main routine is the following:
def get_register(self):
    """Read the content of the specified register."""
    #Do some stuff
    value = socket.recv(2048)
    return value
The problem:
I wanted the while loop to keep executing until the user pressed a key or used the keyboard interrupt, but only to stop after the current iteration had finished. Instead, the previous solution does not work as desired: it interrupts the ongoing instruction, and if that happens to be a socket read, an error is raised:
/home/.../client.pyc in read_register(self, regkey)
    164         reg = self._REGISTERS[regkey]
    165         self.send('r,{}\n'.format(reg))
--> 166         value = socket.recv(2048)
    167         #Convert the string input into a valid value e.g. list or int
    168         formatted_result = ast.literal_eval(value)

error: [Errno 4] Interrupted system call
EDIT: It seems, from an answer below, that there is no way of using the keyboard interrupt while preventing the socket read from being aborted. Although there are solutions for catching the error, they don't avoid the read being cancelled.
I am interested, though, in finding a way of getting user input, e.g. a specific key press, that raises the flag, which is then checked at the end of the loop, without interrupting the main routine until that check.
EDIT2: The OS used is the Linux distribution Ubuntu 14.04.
After a quick SO search I found this solution for your issue:
Basically, there's nothing you can do: when you send a SIGINT to your process, the socket will return a SIGINT as well. The best you can do, then, is to actively ignore the issue, by catching the socket EINTR error and going on with your loop:
import errno

try:
    # do something
    value = conn.recv(2048)
except socket.error as (code, msg):
    if code != errno.EINTR:
        raise
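Wired into your loop, that might look like this (a sketch under the same assumptions as your original code):

import errno, socket, time

while not interrupted:
    try:
        location = camera.get_register()
    except socket.error as e:
        if e.errno != errno.EINTR:
            raise
        continue  # read was interrupted; the flag is set, so the loop re-checks it
    # ... more processing ...
    time.sleep(0.01)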
An alternative solution that avoids C-c breaking reads is to use parallel execution: read your socket in one routine, and handle user input in the other:
import asyncio

async def camera_task(has_ended, filename):
    camera = videosensor.VideoSensor(filename)
    try:
        while not has_ended.is_set():
            location = camera.get_register()
            #...
            #More irrelevant stuff is executed.
            #...
            await asyncio.sleep(0.01)
    finally:
        #This code has to be executed after exiting while loop
        camera_shutdown(camera)

async def input_task(shall_end):
    while True:
        i = input("Press 'q' to stop the script…")
        if i == 'q':
            shall_end.set()

def main():
    filename = …
    #
    end_event = asyncio.Event()
    asyncio.Task(camera_task(end_event, filename))
    asyncio.Task(input_task(end_event))
    asyncio.get_event_loop().run_forever()
or with threading
import threading, time

def camera_task(has_ended, filename):
    camera = videosensor.VideoSensor(filename)
    try:
        while not has_ended.is_set():
            location = camera.get_register()
            #...
            #More irrelevant stuff is executed.
            #...
            time.sleep(0.01)
    finally:
        #This code has to be executed after exiting while loop
        camera_shutdown(camera)

def input_task(shall_end):
    while True:
        i = input("Press 'q' to stop the script…")
        if i == 'q':
            shall_end.set()

def main():
    filename = …
    #
    end_event = threading.Event()
    threads = [
        threading.Thread(target=camera_task, args=(end_event, filename)),
        threading.Thread(target=input_task, args=(end_event,))
    ]
    # start threads
    for thread in threads:
        thread.start()
    # wait for them to end
    for thread in threads:
        thread.join()
or with multiprocessing:
import multiprocessing, time

def camera_task(has_ended, filename):
    camera = videosensor.VideoSensor(filename)
    try:
        while not has_ended.is_set():
            location = camera.get_register()
            #...
            #More irrelevant stuff is executed.
            #...
            time.sleep(0.01)
    finally:
        #This code has to be executed after exiting while loop
        camera_shutdown(camera)

def input_task(shall_end):
    while True:
        i = input("Press 'q' to stop the script…")
        if i == 'q':
            shall_end.set()

def main():
    filename = …
    #
    end_event = multiprocessing.Event()
    processes = [
        multiprocessing.Process(target=camera_task, args=(end_event, filename)),
        multiprocessing.Process(target=input_task, args=(end_event,))
    ]
    # start processes
    for process in processes:
        process.start()
    # wait for them to end
    for process in processes:
        process.join()
disclaimer: those codes are untested, and there might be some typos or little errors, but I believe the overall logic should be 👌
You created your custom signal handler but did not override the default keyboard interrupt behaviour. Add signal.signal(signal.SIGINT, signal_handler) to your code to accomplish this:
import videosensor
import signal

# Custom signal handler
def signal_handler(signal, frame):
    """Raises a flag when a keyboard interrupt is raised."""
    global interrupted
    interrupted = True

# Necessary to override default keyboard interrupt
signal.signal(signal.SIGINT, signal_handler)

if __name__ == '__main__':
    # Main programme
If I understand correctly, you do not want socket.recv() to be interrupted, but you do want to use signals to let the user indicate that the I/O loop should be terminated once the current I/O operation has completed.
With the assumption that you are using Python 2 on a Unix system, you can solve your problem by calling signal.siginterrupt(signal.SIGINT, False) before entering the loop. This will cause system calls to be restarted when a signal occurs rather than interrupting it and raising an exception.
In your case this means that the socket.recv() operation will be restarted after your signal handler is called and therefore get_register() will not return until a message is received on the socket. If that is what you want your code will be:
interrupted = False
old_handler = signal.signal(signal.SIGINT, signal_handler)  # install signal handler
signal.siginterrupt(signal.SIGINT, False)                   # do not interrupt system calls

while not interrupted:
    location = camera.get_register()
    if location == '':
        # remote connection closed
        break
    #...
    #More irrelevant stuff is executed.
    #...
    time.sleep(0.01)
That's one way to do it, but it does require that your code is running on a Unix platform.
Another way, which might work on other platforms, is to handle the exception, ignore further SIGINT signals (in case the user hits interrupt again), and then perform a final socket.recv() before returning from the get_register() function:
import errno

def get_register(s):
    """Read the content of the specified register."""
    #Do some stuff
    try:
        old_handler = None
        return s.recv(2048)
    except socket.error as exc:
        if exc.errno == errno.EINTR:
            old_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)  # ignore this signal
            return s.recv(2048)  # system call was interrupted, restart it
        else:
            raise
    finally:
        if old_handler is not None:
            signal.signal(signal.SIGINT, old_handler)  # restore handler
Signal handling can get tricky and there might be race conditions in the above that I am not aware of; try to use siginterrupt() if possible. (Note that from Python 3.5 on, PEP 475 makes interrupted system calls retry automatically as long as the signal handler does not raise an exception, so this is mostly a Python 2 concern.)
I have been programming in Python for the Raspberry Pi for several months now, and I am trying to make my scripts "well behaved": closing files and making sure no writes to the SD card are in progress when they receive SIGTERM.
Following advice on SO (1, 2), I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}), but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking which signal is sent on the shutdown event, but I am not getting any.
Here's simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas will be greatly appreciated .. =)
This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import sys
import signal
import logging
import subprocess

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    #cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# Note: SIGKILL cannot be caught; trying to register a handler for it raises an error
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file (with the logging module, or whichever way you like).
I need to do the following in Python. I want to spawn a process (subprocess module?), and:
if the process ends normally, to continue exactly from the moment it terminates;
if, otherwise, the process "gets stuck" and doesn't terminate within (say) one hour, to kill it and continue (possibly giving it another try, in a loop).
What is the most elegant way to accomplish this?
The subprocess module will be your friend. Start the process to get a Popen object, then pass it to a function like this. Note that this only raises exception on timeout. If desired you can catch the exception and call the kill() method on the Popen process. (kill is new in Python 2.6, btw)
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise exception after timeout"""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)

    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)
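For example, usage could look like this (a sketch; 'some_command' is a placeholder):

import subprocess

proc = subprocess.Popen(['some_command', '--with-args'])  # placeholder command
try:
    returncode = wait_timeout(proc, 60 * 60)  # give it one hour
except RuntimeError:
    proc.kill()  # requires Python 2.6+
    # optionally loop and retry here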
There are at least 2 ways to do this by using psutil as long as you know the process PID.
Assuming the process is created as such:
import subprocess
subp = subprocess.Popen(['progname'])
...you can get its creation time in a busy loop like this:
import psutil, time

TIMEOUT = 60 * 60  # 1 hour

p = psutil.Process(subp.pid)
while 1:
    if (time.time() - p.create_time()) > TIMEOUT:
        p.kill()
        raise RuntimeError('timeout')
    time.sleep(5)
...or simply, you can do this:
import psutil

p = psutil.Process(subp.pid)
try:
    p.wait(timeout=60*60)
except psutil.TimeoutExpired:
    p.kill()
    raise
Also, while you're at it, you might be interested in the following extra APIs:
>>> p.status()
'running'
>>> p.is_running()
True
>>>
I had a similar question and found this answer. Just for completeness, I want to add one more way to terminate a hanging process after a given amount of time: the Python signal library.
https://docs.python.org/2/library/signal.html
From the documentation:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
Since you wanted to spawn a new process anyway, this might not be the best solution for your problem, though.
A nice, passive way is also to use a threading.Timer and set up a callback function.
from threading import Timer

# execute the command
p = subprocess.Popen(command)

# save the proc object - either on a class (like this example), or 'p' can be global
self.p = p

# configure and init the timer
# kill_proc is a callback function which can also live on the class or simply be a global
t = Timer(seconds, self.kill_proc)

# start timer
t.start()

# wait for the test process to return
rcode = p.wait()

t.cancel()
If the process finishes in time, wait() ends and the code continues here; cancel() stops the timer. If, meanwhile, the timer runs out and executes kill_proc in a separate thread, wait() will also continue here and cancel() will do nothing. The value of rcode will tell you whether we timed out or not. Simplest kill_proc: (you can of course do anything extra there)
def kill_proc(self):
    # self.p is a Popen object, so kill it by its PID
    os.kill(self.p.pid, signal.SIGTERM)
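Putting it all together outside a class, a minimal self-contained sketch ('sleep' here is just a stand-in for the real command):

import os
import signal
import subprocess
from threading import Timer

def run_with_timeout(command, seconds):
    """Run a command, killing it if it runs longer than the timeout."""
    p = subprocess.Popen(command)
    t = Timer(seconds, lambda: os.kill(p.pid, signal.SIGTERM))
    t.start()
    rcode = p.wait()  # returns early if the timer killed the process
    t.cancel()
    return rcode

# give a slow command 5 seconds to finish; rcode is negative if it was killed
print(run_with_timeout(['sleep', '10'], 5))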
Kudos to Peter Shinners for his nice suggestion about the subprocess module. I was using exec() before and had no control over the running time, and especially over terminating it. My simplest template for this kind of task is the following, where I just use the timeout parameter of subprocess.run() to monitor the running time. Of course you can get standard out and error as well if needed:
from subprocess import run, TimeoutExpired, CalledProcessError

# fls is a list of script paths and f is an open log file, both defined elsewhere
for file in fls:
    try:
        run(["python3.7", file], check=True, timeout=7200)  # 2 hours timeout
        print("scraped :)", file)
    except TimeoutExpired:
        message = "Timeout :( !!!"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))
    except CalledProcessError:
        message = "SOMETHING HAPPENED :( !!!, CHECK"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))