I have a Python app that starts from a main script, say main.py. Since the app is organized into modules, main.py references and imports other .py files within the same directory, which house other functions. While the app runs continuously, it imports one such function from another script, and that function is also supposed to run forever until it is explicitly cancelled.
The thing is, how would I cancel that specific script while leaving its affected variables untouched and the main script/larger app still running?
I do not know how I would go about targeting a specific function to stop its execution.
I use a kill function in my utils to kill any unneeded Python process whose name I know. Note that the following code was tested and works on Ubuntu Linux and macOS machines.
import os
import signal
import subprocess

def get_running_pids(process_name):
    # Collect the PIDs of every "ps -A" line that mentions process_name
    pids = []
    p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
    out, err = p.communicate()
    for line in out.splitlines():
        if process_name in line.decode('utf-8'):
            pid = int(line.decode('utf-8').split(None, 1)[0])
            pids.append(pid)
    return pids

def kill_process_with_name(process_name):
    # Send SIGKILL to every matching process
    pids = get_running_pids(process_name)
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
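For example, assuming the forever-running helper was started as its own OS process from a script named worker.py (a hypothetical name), it could be stopped with:

kill_process_with_name('worker.py')  # hypothetical script name; matches any "ps -A" line containing it

Note that this approach only works when the helper runs as a separate process; it cannot stop a function running inside the main interpreter.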
You could set up user-defined, custom exceptions by extending Python's built-in Exception class. Further reading here: Python's User Defined Exceptions.
CustomExceptions.py:
class HaltException(Exception):
    pass
main.py:
from CustomExceptions import HaltException
class Functions():
    def a(self):
        print("hey")
        self.b()
        return "1"

    def b(self):
        print("hello")
        raise HaltException()

def main():
    func_obj = Functions()
    try:
        func_obj.a()
    except HaltException as e:
        pass
    print("Awesome")

main()
Programs may name their own exceptions by creating a new exception
class (see Classes for more about Python classes). Exceptions should
typically be derived from the Exception class, either directly or
indirectly.
I am trying to package my Python project into an executable using PyInstaller. The main module contains code for multiprocessing. When I run the executable, only the lines of code prior to the multiprocessing part get executed, again and again. It neither throws an exception nor exits the program.
Code in main module:
from Framework.ExcelUtility import ExcelUtility
from Framework.TestRunner import TestRunner
import concurrent.futures

class Initiator:
    def __init__(self):
        self.exec_config_dict = {}
        self.test_list = []
        self.test_names = []
        self.current_test_set = []

    def set_first_execution_order(self):
        # Code
        pass

    def set_subsequent_execution_order(self):
        # Code
        pass

    def kick_off_tests(self):
        '''Method to do Multi process execution'''
        if __name__ == "__main__":
            with concurrent.futures.ProcessPoolExecutor(max_workers=int(self.exec_config_dict.get('Parallel'))) as executor:
                for test in self.current_test_set:
                    # *** This line is not being executed from the exe file.
                    executor.submit(TestRunner().runner, test)

initiator = Initiator()
initiator.get_run_info()
initiator.set_first_execution_order()
initiator.kick_off_tests()

while len(initiator.test_list) > 0:
    initiator.set_subsequent_execution_order()
    try:
        initiator.kick_off_tests()
    except BaseException as exception:
        print(exception)
From the problem description I'm assuming you are using MS Windows, and that the main module is not named __main__.py.
In that case, multiprocessing has some special guidelines:
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
and
Instead one should protect the “entry point” of the program by using if __name__ == '__main__'
So, change the last part of your main module like this:
from multiprocessing import freeze_support

class Initiator:
    # ... other methods unchanged ...

    def kick_off_tests(self):
        '''Method to do Multi process execution'''
        with concurrent.futures.ProcessPoolExecutor(max_workers=int(self.exec_config_dict.get('Parallel'))) as executor:
            for test in self.current_test_set:
                executor.submit(TestRunner().runner, test)

if __name__ == '__main__':
    freeze_support()
    initiator = Initiator()
    initiator.get_run_info()
    initiator.set_first_execution_order()
    initiator.kick_off_tests()

    while len(initiator.test_list) > 0:
        initiator.set_subsequent_execution_order()
        try:
            initiator.kick_off_tests()
        except BaseException as exception:
            print(exception)
Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
During t.watch(), I want to close the application from the console.
I tried using CTRL+C, and that works.
However, if I close the console window with its exit button, it doesn't work. I checked many related questions about this, but I cannot find the right answer.
What I want to do:
The console can be closed while the program is still running. To handle that, when the exit button is pressed, I want to clean up the objects (delete the created temporary files), roll back temporary changes, etc.
Question:
How can I handle console exit?
How can I integrate it into object destructors (__exit__())?
Is it even possible? (How about py2exe?)
Note: the code will be compiled with py2exe; I hope the effect is the same.
You may want to have a look at signals. When a *nix terminal is closed while a process is running, the process receives a couple of signals. For instance, this code waits for the SIGHUP hangup signal and writes a final message. The code works under OS X and Linux. I know you are specifically asking about Windows, but you might want to give it a shot, or investigate which signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)
print('Waiting for the final blow...')
#signal.pause()  # does not work under windows
sleep(10)  # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
Update:
Actually, the closest thing on Windows is win32api.SetConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press enter to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that the file is gone (it should be).
You can simply call foo.close() to delete it manually in your code.
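For illustration, a minimal sketch of that approach (the suffix, mode, and written line are placeholder choices):

import tempfile

# The file exists while the with-block is open and is removed automatically
# when it is closed, which also happens on normal interpreter exit.
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt') as tmp:
    tmp.write('new line\n')   # stand-in for the question's add_a_line_in_temp_file helper
    tmp.flush()
    print(tmp.name)           # path of the temporary file while it exists
# the file has been deleted at this point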
I have a couple of different scripts that require opening a MongoDB instance, and they go something like this:
from subprocess import Popen

mongod = Popen(
    ["mongod", "--dbpath", '/path/to/db'],
)

# Do some stuff

mongod.terminate()
And this works great when the code I'm executing works, but while I'm tinkering, errors inevitably arise. Then the mongod instance remains running, and the next time I attempt to run the script, it detects that and doesn't open a new one.
I can terminate the process from the command line, but this is somewhat tedious. Or I can wrap everything in a try block, but for some of the scripts I'd have to do this a lot, since every function depends on every other one. Is there a more elegant way to force-close the process even in the event of an error somewhere else in the code?
EDIT: I did some testing based on tdelaney's comment. It looks like when I run these scripts in Sublime Text and an error is generated, the script doesn't actually finish: it hits the error and then waits with the mongod instance open, I think. Once I kill the process in the terminal, Sublime Text tells me "finished in X seconds with exit code 1".
EDIT 2: On Kirby's suggestion, I tried:
def testing():
    mongod = Popen(
        ["mongod", "--dbpath", '/Users/KBLaptop/computation/db/'],
    )
    # Stuff that generates error
    mongod.terminate()

def cleanup():
    for proc in subprocess._active[:]:
        try: proc.terminate()
        except: pass

atexit.register(cleanup)

testing()
The error in testing() seems to prevent anything from continuing, so the atexit never registers and the process keeps running. Am I missing something obvious?
If you're running under CPython, you can cheat and take advantage of Python's destructors:
from subprocess import Popen

class PopenWrapper(Popen):
    # subclass Popen so __del__ runs when the wrapper is garbage-collected
    def __del__(self):
        if self._child_created:
            self.terminate()
This is slightly icky, though. My preference would be to use atexit:
import atexit
import subprocess

mongod = Popen(...)

def cleanup():
    for proc in subprocess._active[:]:
        try: proc.terminate()
        except: pass

atexit.register(cleanup)
Still slightly hack-ish, though.
EDIT: Try this:
from subprocess import Popen
import atexit

started = []

def auto_popen(*args, **kw):
    p = Popen(*args, **kw)
    started.append(p)
    return p

def testing():
    mongod = auto_popen(['blah blah'], shell=True)
    assert 0
    # Stuff that generates error
    mongod.terminate()

def cleanup():
    for proc in started:
        if proc.poll() is None:
            try: proc.kill()
            except: pass

atexit.register(cleanup)

testing()
I am trying to write an alarm clock program in Python using the multiprocessing module on Windows 7.
It all runs fine in the interpreter. But when packed into one file by PyInstaller, every time the code creates a process, two processes appear: one is the parent and the other is its child. When the code kills the parent process, the child becomes an orphan process.
The code:
from multiprocessing import Process, freeze_support
import os
import time
import winsound

def startout(seconds, name):
    freeze_support()
    print name+':pid '+str(os.getpid())+' is created'
    startTime = time.time()
    while (time.time()-startTime) < seconds:
        time.sleep(1)
    winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS)
    print name+' end'

class alarmCenter:
    def __init__(self):
        self.alarmList = {'alarm1': None, 'alarm2': None, 'alarm3': None}

    def newAlarm(self, seconds, name):
        if self.alarmList[name] != None:
            if self.alarmList[name].is_alive():
                return False
        ala = Process(target=startout, args=(seconds, name))
        ala.daemon = True
        ala.start()
        self.alarmList[name] = ala
        return True

    def stopAlarm(self, name):
        try:
            self.alarmList[name].terminate()
            self.alarmList[name].join()
            self.alarmList[name] = None
        except Exception:
            pass

    def terminateAll(self):
        for each in self.alarmList.keys():
            if self.alarmList[each] != None:
                self.alarmList[each].terminate()

if __name__ == '__main__':
    freeze_support()
    #....
Note that multiprocessing.freeze_support() is already there.
Could anyone please show me how to kill the child process or fix this bug?
I have a complex Python pipeline (whose code I can't change) that calls multiple other scripts and other executables. The point is that it takes ages to run over 8000 directories, doing some scientific analyses. So I wrote a simple wrapper (it might not be the most effective, but it seems to work) using the multiprocessing module.
from os import path, listdir, mkdir, system
from os.path import join as osjoin, exists, isfile
from GffTools import Gene, Element, Transcript
from GffTools import read as gread, write as gwrite, sort as gsort
from re import match
from multiprocessing import JoinableQueue, Process
from sys import argv, exit

# some absolute paths
inbase = "/.../abfgp_in"
outbase = "/.../abfgp_out"
abfgp_cmd = "python /.../abfgp-2.rev/abfgp.py"
refGff = "/.../B0510_manual_reindexed_noSeq.gff"

# the Queue
Q = JoinableQueue()
i = 0

# define number of processes
try: num_p = int(argv[1])
except ValueError: exit("Wrong CPU argument")

# This is the function calling the abfgp.py script, which in its turn calls a lot of third-party software
def abfgp(id_, pid):
    out = osjoin(outbase, id_)
    if not exists(out): mkdir(out)
    # logfile
    log = osjoin(outbase, "log_process_%s" % (pid))
    try:
        # call the script
        system("%s --dna %s --multifasta %s --target %s -o %s -q >>%s" % (abfgp_cmd, osjoin(inbase, id_, id_ + ".dna.fa"), osjoin(inbase, id_, "informants.mfa"), id_, out, log))
    except:
        print "ABFGP FAILED"
        return

# parse the output
def extractGff(id_):
    # code not relevant
    pass

# function called by multiple processes, using the Queue
def run(Q, pid):
    while not Q.empty():
        try:
            d = Q.get()
            print "%s\t=>>\t%s" % (str(i - Q.qsize()), d)
            abfgp(d, pid)
            Q.task_done()
        except KeyboardInterrupt:
            exit("Interrupted Child")

# list of directories
genedirs = [d for d in listdir(inbase)]
genes = gread(refGff)
for d in genedirs:
    i += 1
    indir = osjoin(inbase, d)
    outdir = osjoin(outbase, d)
    Q.put(d)

# this loop creates the multiple processes
procs = []
for pid in range(num_p):
    try:
        p = Process(target=run, args=(Q, pid + 1))
        p.daemon = True
        procs.append(p)
        p.start()
    except KeyboardInterrupt:
        print "Aborting start of child processes"
        for x in procs:
            x.terminate()
        exit("Interrupted")

try:
    for p in procs:
        p.join()
except:
    print "Terminating child processes"
    for x in procs:
        x.terminate()
    exit("Interrupted")

print "Parsing output..."
for d in genedirs: extractGff(d)
Now the problem is that abfgp.py uses the os.chdir function, which seems to disrupt the parallel processing. I get a lot of errors stating that some (input/output) files/directories cannot be found for reading/writing, even though I call the script through os.system(), which I thought would spawn separate processes and prevent this.
How can I work around these chdir interference?
Edit: I might change os.system() to subprocess.Popen(cwd="...") with the right directory. I hope this makes a difference.
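For reference, a hedged sketch of that idea (the command and paths below are placeholders, not the pipeline's real arguments):

import subprocess

# Give each worker's child process its own starting directory via cwd=...,
# so the called script's internal os.chdir / relative paths start from a known place.
with open("/path/to/log_process_1", "a") as log:
    proc = subprocess.Popen(
        ["python", "/path/to/abfgp.py", "--dna", "input.dna.fa", "-q"],
        cwd="/path/to/gene_output_dir",
        stdout=log,
        stderr=subprocess.STDOUT,
    )
    proc.wait()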
Thanks.
Edit 2
Do not use os.system(); use subprocess.call().
system("%s --dna %s --multifasta %s --target %s -o %s -q >>%s" %(abfgp_cmd, osjoin(inbase, id_, id_ +".dna.fa"), osjoin(inbase, id_, "informants.mfa"), id_, out, log))
would translate to
subprocess.call((abfgp_cmd, '--dna', osjoin(inbase, id_, id_ +".dna.fa"), '--multifasta', osjoin(inbase, id_, "informants.mfa"), '--target', id_, '-o', out, '-q')) # without log.
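If the log file is still wanted, one hedged way to keep it (reusing the variable names from the question's script, and assuming abfgp_cmd should be split into its "python" and script parts) is to pass an open file as stdout:

import subprocess

def abfgp(id_, pid):
    # Sketch only: same behaviour as the os.system version, but via subprocess.call,
    # appending stdout and stderr to the per-process log file.
    out = osjoin(outbase, id_)
    if not exists(out): mkdir(out)
    log = osjoin(outbase, "log_process_%s" % (pid))
    with open(log, "a") as logfile:
        # abfgp_cmd holds "python /.../abfgp.py", so split it into separate argv items
        subprocess.call(abfgp_cmd.split() + ["--dna", osjoin(inbase, id_, id_ + ".dna.fa"),
                                             "--multifasta", osjoin(inbase, id_, "informants.mfa"),
                                             "--target", id_, "-o", out, "-q"],
                        stdout=logfile, stderr=subprocess.STDOUT)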
Edit 1
I think the problem is that multiprocessing uses module names to serialize functions and classes.
This means that if you do import module where module lives in ./module.py, and you then do something like os.chdir('./dir'), you would now need to do from .. import module.
The child processes inherit the folder of the parent process. This may be a problem.
Solutions
Make sure that all modules are imported (in the child processes) and only after that change the directory.
Insert the original os.getcwd() into sys.path to enable imports from the original directory; this must be done before any functions are called from the local directory (see the sketch after this list).
Put all the functions that you use inside a directory that can always be imported. The site-packages directory could be such a place. Then you can do something like import module; module.main() to start what you do.
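For instance, a minimal sketch of the sys.path solution above (the directory passed to chdir is a placeholder):

import os
import sys

# Remember the starting directory and make it importable before anything calls
# os.chdir, so pickled functions/classes can still be located by module name
# in the child processes.
original_cwd = os.getcwd()
if original_cwd not in sys.path:
    sys.path.insert(0, original_cwd)

os.chdir("./some_work_dir")  # placeholder; imports from the start directory still work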
This is a hack that I do because I know how pickle works. Only use this if other attempts fail.
The script prints:
serialized # the function runD is serialized
string executed # before the function is loaded the code is executed
loaded # now the function run is deserialized
run # run is called
In your case you would do something like this:
runD = evalBeforeDeserialize('__import__("sys").path.append({})'.format(repr(os.getcwd())), run)
p = Process(target=runD, args=(Q, pid+1))
This is the script:
# functions that you need

class R(object):
    def __init__(self, call, *args):
        self.ret = (call, args)
    def __reduce__(self):
        return self.ret
    def __call__(self, *args, **kw):
        raise NotImplementedError('this should never be called')

class evalBeforeDeserialize(object):
    def __init__(self, string, function):
        self.function = function
        self.string = string
    def __reduce__(self):
        return R(getattr, tuple, '__getitem__'), \
               ((R(eval, self.string), self.function), -1)

# code to show how it works

def printing():
    print('string executed')

def run():
    print('run')

runD = evalBeforeDeserialize('__import__("__main__").printing()', run)

import pickle
s = pickle.dumps(runD)
print('serialized')
run2 = pickle.loads(s)
print('loaded')
run2()
Please report back if these do not work.
You could determine which instance of the os library the unalterable program is using, then create a tailored version of chdir in that library that does what you need: prevent the directory change, log it, whatever. If the tailored behavior needs to apply only to that single program, you can use the inspect module to identify the caller and adjust the behavior specifically for that caller.
Your options are limited if you truly can't alter the existing program; but if you have the option of altering libraries it imports, something like this could be a least-invasive way to skirt the undesired behavior.
Usual caveats apply when altering a standard library.
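As an illustration only, a hedged sketch of such a monkey-patch (logging and then forwarding the call is just one possible choice):

import os

# Wrap os.chdir so directory changes are logged (or blocked) instead of silently
# altering the process-wide working directory. Apply this before the unalterable
# pipeline code is imported or run.
_original_chdir = os.chdir

def _tailored_chdir(path):
    print("intercepted chdir to %r" % path)
    _original_chdir(path)   # forward the call; replace with 'pass' to block the change

os.chdir = _tailored_chdir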