Is it a bad coding practice to call the exit() function in Python repeatedly?
I'm working on a command-line tool, so there are multiple function definitions... Basically:
def usage(): ...
def error(arg1): ...
def find(arg1, arg2): ...
At the end of usage() I call exit(), which I assume is OK, but it's also called when find() succeeds, and in error() (which is called when find() fails).
As you can see, exit() is being called many times in my code, and I wasn't sure if this is actually a bad coding practice.
Calling exit() in multiple locations does work, and if it's a simple program with only you using it, it's no problem. But in my opinion it always makes inspecting and debugging code more complex when there are multiple exit points, especially if other developers will at some point be modifying your code or you will offer part of your code as a library to other developers.
Another option is to raise exceptions and catch them in an outer function. This way you also have a chance to do some additional tasks before exiting (releasing resources, for example).
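A minimal sketch of that pattern (the FatalError exception and the main() wrapper are illustrative names, not from the question):

import sys

class FatalError(Exception):
    """Raised anywhere in the program to request a clean shutdown."""

def find(needle, haystack):
    if needle not in haystack:
        raise FatalError(f"{needle!r} not found")

def main():
    try:
        find("x", ["a", "b"])
    except FatalError as e:
        print(e, file=sys.stderr)
        # release resources here, then exit in exactly one place
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())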
Not really bad practice IMO - just make sure you return an exit code reflecting the different exit points whenever that might be useful to the calling process...
I do that all the time in my scripts. In general, you need not worry about it, since Python takes care of cleanup before program termination. I also used to do
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
to force cleanup in case I need to kill a stalled script.
Related
After reading A LOT of data on the subject I still couldn't find any actual solution to my problem (there might not be any).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies, and more).
Initializing the connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):
# start of code...
with programmable_load(args) as load_instance:
    load_instance.do_something()
# rest of code...
So I went for a different solution :
class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run via debug mode and quit debugging.
In these cases the process terminates without running through any destructors.
I understand that the OS will clear all memory unused at this point, but is there any way to clear the memory in an organized manner?
And if not, is there a way to make the quit-debugging action pass through a certain set of functions? Does the Python process know it got a quit-debugging event, or does it treat it as a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the
process are terminated immediately with no chance to run additional
code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on MS Windows.

As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux), not if it is simply killed (as is likely the case when stopping the debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or cut power), it won't be called either. There is no 'solution' to that, for obvious reasons: your program can't expect to be notified when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing a process is to definitely kill it now; if it first called your clean-up code, that could delay the kill, which defeats the purpose. That is why signals exist: to ask your process to stop. This is not Python specific, and the same concept applies across operating systems.
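For reference, a minimal atexit sketch (this only runs in the orderly-shutdown cases described above; the cleanup function is illustrative):

import atexit

def cleanup():
    # Close driver connections here; runs on normal interpreter
    # shutdown, not when the process is forcefully killed.
    print("cleaning up")

atexit.register(cleanup)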
Bonus (design suggestion, not a solution): I would argue that you can still make use of the context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope. Move the context further up, to the application level. Then it is clear what the boundary is and you don't need any magic (you are probably also aware of contextlib.contextmanager to make that a breeze); see the sketch below.
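A minimal sketch of that suggestion, reusing the programmable_load class from the question (do_something and the connection details are assumed):

import contextlib

@contextlib.contextmanager
def application_scope():
    # Open the costly connection once, at application start.
    load = programmable_load()
    try:
        yield load
    finally:
        # Runs on normal exit and on unhandled exceptions,
        # but not if the process is killed outright.
        load.close_connection()

def main(load):
    load.do_something()  # the rest of the application lives here

if __name__ == "__main__":
    with application_scope() as load:
        main(load)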
I haven't tested this properly as I don't have Wing IDE installed over here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import sys

import win32api  # from the pywin32 package

def callback(ctrl_type):
    # Called for CTRL+C, CTRL+BREAK, console close, logoff and shutdown.
    print("Exit handler called!")
    return False  # let the default handler run afterwards

if __name__ == "__main__":
    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)
    print("Press ENTER to quit")
    input()
    print("Bye!")
It'll be able to handle CTRL+C and CTRL+BREAK signals.
I've been using Python more and more recently, and I'd like to know whether Python scripts need an exit statement at the end. If they don't, is it proper to always add one anyway?
No, Python scripts do not need an exit statement; in fact, a raw quit() or exit() at the end of a script may break things. When something is imported, all of its top-level code is executed: if that contains an exit(), the whole importing program will exit, which is almost certainly not desired.
If in doubt, Python will almost always clean up after itself (zombie threads may be an exception, but that's way more advanced).
You do not need an exit statement in Python code, unless you want to set the exit code yourself.
This is unnecessary. There is no such thing as an "exit statement" in Python, and calling a function like sys.exit() at the end of your program does the same thing as just letting execution flow off the end of your program.
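If you do want to set the exit code yourself, a minimal sketch of the usual pattern:

import sys

def main():
    # ... do the actual work ...
    return 0  # by convention, 0 means success, non-zero means failure

if __name__ == "__main__":
    sys.exit(main())  # runs only when executed, not when imported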
I am using the Python C API to embed Python in our application. Currently, when users execute their scripts, we call PyRun_SimpleString(), which runs fine.
I would like to extend this functionality to allow users to run scripts in "Debug" mode, where, like in a typical IDE, they would be allowed to set breakpoints and "watches", and generally step through their script.
I've looked at the API specs, googled for similar functionality, but did not find anything that would help much.
I did play with PyEval_SetTrace(), which returns all the information I need; however, we execute the Python code on the same thread as our main application, and I have not found a way to "pause" Python execution when the trace callback hits a line number that contains a user-checked breakpoint, and to resume execution at a later point.
I also see that there are various "Frame" functions like PyEval_EvalFrame() but not a whole lot of places that demo the proper usage. Perhaps these are the functions that I should be using?
Any help would be much appreciated!
PyEval_SetTrace() is exactly the API that you need to use. Not sure why you need some additional way to "pause" the execution; when your callback has been called, the execution is already paused and will not resume until you return from the callback.
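For illustration, a pure-Python sketch of the same mechanism using sys.settrace (PyEval_SetTrace at the C level behaves the same way; the prompt-per-line pause here stands in for a real breakpoint check):

import sys

def tracer(frame, event, arg):
    if event == "line":
        # The traced code is paused for as long as this callback
        # runs; returning from it resumes execution.
        input(f"paused at {frame.f_code.co_name}:{frame.f_lineno} "
              "(ENTER to continue) ")
    return tracer

def user_script():
    x = 1
    x += 1
    print(x)

sys.settrace(tracer)
user_script()
sys.settrace(None)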
Assume I have python code
def my_great_func(an_arg):
    a_file = open("/user/or/root/file", "w")
    a_file.write("bla")
which I want to maintain without paying attention to invocation with or without privileges. At the same time, I don't want to invoke the script with sudo / enforce invocation with sudo (although this would be a legitimate practice), or enable setuid for my Python interpreter (generally a bad idea...).

An idea is now to start a second instance of the Python interpreter and communicate over processes/pipes. In order to maximize the maintainability of the code, it would be nice to simply pass the callable to the instance (e.g. started with subprocess.Popen and addressed by its PID), like I would pass it to multiprocessing.Process (which I can't use because I can't setuid in the subprocess). I imagine something like
# please consider this pseudo python code
pid = subprocess.Popen(["sudo", "python"]).pid
thelib.pass_callable(pid, target, args)
or even
interpreter_instance = greatlib.Python(target, args)
interpreter_instance.start()
interpreter_instance.wait()
Is that possible and covered by existing libs?
Generally speaking, you don't want any script to run as superuser unless the script invoking it was itself run as superuser. This is not only an issue of good practice and secure programming, but also of programmer etiquette: if any part of your program requires superuser rights, this intention should be made known before you even begin the program.
With that in mind, the Python thread library should work just fine for this.
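For illustration, a minimal sketch of the pipe-based idea from the question: the parent pickles a (module, function, args) description and a second interpreter executes it. Everything here (the CHILD stub, run_in_child) is hypothetical; prepend "sudo" to the command only where elevated rights are genuinely needed:

import pickle
import subprocess
import sys

# Stub executed by the child interpreter: read a pickled
# (module, function, args) triple from stdin and call it.
# Only importable module-level functions work, not closures.
CHILD = """
import importlib, pickle, sys
mod, func, args = pickle.load(sys.stdin.buffer)
getattr(importlib.import_module(mod), func)(*args)
"""

def run_in_child(mod, func, *args):
    # Replace [sys.executable] with ["sudo", sys.executable]
    # when the child really must run privileged.
    proc = subprocess.Popen([sys.executable, "-c", CHILD],
                            stdin=subprocess.PIPE)
    proc.communicate(pickle.dumps((mod, func, args)))
    return proc.returncode

run_in_child("builtins", "print", "hello from the child interpreter")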
I have a command line program that wants to pickle things when I send it a ctrl-C via the terminal. I have some questions and concerns:
How do I perform this handling? Do I check for a KeyboardInterrupt? Is there a way to implement an exit function?
What if the program is halted in the middle of a write to a structure? I presume these writes aren't treated atomically, so how can I keep from writing trash into the pickle file?
You can use atexit to define an exit handler. Modifications of Python objects are treated atomically, so you should be fine as long as your code is arranged in a way that your objects are always in a consistent state between (bytecode) instructions.
(1) Use the atexit module:
import atexit

def pickle_things():
    pass

atexit.register(pickle_things)
(2) In general, you can't. Imagine someone trips on the power cord while your program is in the middle of a write. It's impossible to guarantee everything gets properly written in all cases.
However, in the KeyboardInterrupt case, the interpreter will make sure to finish whatever it's currently doing before raising that exception, so you should be fine.
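A minimal sketch combining both points, catching the Ctrl-C directly (the state dict and filename are illustrative):

import pickle

state = {"iterations": 0}

try:
    while True:
        state["iterations"] += 1  # the real work goes here
except KeyboardInterrupt:
    # KeyboardInterrupt is raised between bytecode instructions,
    # so `state` is in a consistent state when we reach this handler.
    with open("state.pickle", "wb") as f:
        pickle.dump(state, f)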