Do Python scripts need to be exited?

I've been using Python more and more recently, and I'd like to know: do Python scripts need an exit statement at the end? If they don't, is it proper to always add one anyway?

No, Python scripts do not need an exit statement. In fact, a raw quit() or exit() at the end of a script can break things: when a module is imported, all of its top-level code is executed, and if that code contains an exit(), the whole importing program will exit, which is almost certainly not desired.
If in doubt, Python will almost always clean up after itself (zombie threads may be an exception, but that's way more advanced).
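To illustrate the import pitfall mentioned above, here is a minimal sketch (the file names helper.py and main.py are made up for the example):
# helper.py
print("setting up helper")
exit()  # runs at import time, not only when helper.py is executed directly

# main.py
import helper  # the exit() above terminates this program right here
print("never reached")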

You do not need an exit statement in Python code, unless you want to set the exit code yourself.
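If you do want to control the exit status, sys.exit() is the usual way; a minimal sketch (do_work is a made-up placeholder):
import sys

def do_work():
    return False  # pretend something went wrong

if not do_work():
    sys.exit(2)  # nonzero status tells the calling shell the script failed
# falling off the end of the script is equivalent to sys.exit(0)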

This is unnecessary. There is no such thing as an "exit statement" in Python, and calling a function like sys.exit() at the end of your program does the same thing as just letting execution flow off the end of your program.

Related

How to hard suspend or pause a python script after it runs so it doesn’t force close upon completion?

Hi, so I'm working on a Python script that involves a loop function. So far the loop is failing for some reason (although I kind of know why). The problem is that I've got os.system('pause') and also input("prompt:") at the end of the code in order to pause all activity so I can read the error messages before the script completes and terminates, but the script still shuts down. I need a way to HARD pause or freeze it before the window closes abruptly. Need help and any further insight.
PS. Let me know if you need any more info to better describe this problem.
I assume you are just 'double clicking' the icon in Windows Explorer. This has the disadvantage, which you are encountering here, that the shell (terminal window) closes when the process finishes, so you can't tell what went wrong if it terminated due to an error.
A better method would be to use the command prompt. If you are not familiar with this, there are many tutorials online.
The reason this will help with your problem is that, once you have navigated to the script's containing directory, you can use python your_script.py (assuming python is in your PATH environment variable) to run the script within the same window.
Then, even if it fails, you can read the error messages as you will only be returned to the command line.
An alternative hacky method would be to create a script called something like run_pythons.py which uses the subprocess module to call your actual script in the same window and then, no matter how it terminates, waits for your input before terminating itself so that you can read the error messages.
So something like:
import subprocess

# run the chosen script in this same console window, then wait for input
# before closing so any error output stays visible
subprocess.call(('python', input('enter script name: ')))
input('press ENTER to kill me')
I needed something like this at one point. I had a wrapper that loaded a bunch of modules and data and then waited for a prompt to run something. If I had a stupid mistake in a module, it would quit, and the time it spent loading all that data into memory (more than a minute) would be wasted. I wanted a way to keep that data in memory even if a module had an error, so that I could edit the module and rerun the script.
To do this:
import traceback

while True:
    update = raw_input("Paused. Enter = start, 'your input' = update params, C-C = exit")  # use input() on Python 3
    if update:
        update = update.split()
        # irrelevant stuff used to parse my update
    # custom thing to reload all my modules
    fullReload()
    try:
        # my main script that needed all those modules and data loaded
        model_starter.main(stuff, stuff2)
    except Exception as e:
        print(e)
        traceback.print_exc()
        continue
    except KeyboardInterrupt:
        print("I think you hit C-C. Do it again to exit.")
        continue
    except:
        print("OSERROR? sys.exit()? who knows. C-C to exit.")
        continue
This kept all the data that I had loaded before my while loop started, and prevented exiting on errors. It also meant that I could still Ctrl-C to quit; I just had to do it from this wrapper instead of once it got to the main script.
Is this somewhat what you're looking for?
The answer is basically: you have to catch all your exceptions and have a way to restart your loop once you have figured out and fixed the issue.

how to halt python program after pdb.set_trace()

When debugging scripts in Python (2.7, running on Linux) I occasionally inject pdb.set_trace() (note that I'm actually using ipdb), e.g.:
import ipdb as pdb

try:
    do_something()
    # I'd like to look at some local variables before running do_something_dangerous()
    pdb.set_trace()
except:
    pass
do_something_dangerous()
I typically run my script from the shell, e.g.
python my_script.py
Sometimes during my debugging session I realize that I don't want to run do_something_dangerous(). What's the easiest way to halt program execution so that do_something_dangerous() is not run and I can quit back to the shell?
As I understand it, pressing ctrl-d (or issuing the debugger's quit command) will simply exit ipdb and the program will continue running (in my example above). Pressing ctrl-c seems to raise a KeyboardInterrupt, but I've never understood the context in which it is raised.
I'm hoping for something like ctrl-q to simply take down the entire process, but I haven't been able to find anything.
I understand that my example is highly contrived, but my question is about how to abort execution from pdb when the code being debugged is set up to catch exceptions. It's not about how to restructure the above code so it works!
I found that ctrl-z to suspend the python/ipdb process, followed by 'kill %1' to terminate it, works well and is reasonably quick for me to type (with a bash alias k='kill %1'). I'm not sure if there's anything cleaner/simpler though.
From the module docs:
q(uit)
Quit from the debugger. The program being executed is aborted.
Specifically, this will cause the next debugger function that gets called to raise a BdbQuit exception.
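In the question's example, though, that BdbQuit is raised inside the try block and swallowed by the bare except:, which is why quit appears to do nothing there. A minimal sketch of the effect (not a suggestion to restructure the code):
import pdb

try:
    pdb.set_trace()  # typing q at the prompt raises bdb.BdbQuit here...
except:
    pass             # ...but this bare except swallows it
print("still runs after quitting the debugger")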

Multiple use of exit() in Python

Is it a bad coding practice to call the exit() function in Python repeatedly?
I'm working on a command-line tool, so there are multiple function definitions... Basically:
def usage()
def error(arg1)
def find(arg1, arg2)
At the end of usage() I call exit(), which I assume is OK, but it's also called on success in find(), and in error() (which is called when find() fails).
As you can see, exit() is being called many times in my code, and I wasn't sure if this is actually a bad coding practice.
It does work to call exit() in multiple locations, and if it's a simple program that only you use, it's no problem. But in my opinion multiple exit points always make code harder to inspect and debug, especially if you expect other developers to modify your code at some point, or plan to offer part of it as a library.
Another option is to raise exceptions and catch them in the outer function. This way you also have a chance to do some additional tasks before exiting (release some resources, for example), as sketched below.
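A minimal sketch of that pattern (the UsageError class and the functions are made up for illustration):
import sys

class UsageError(Exception):
    pass

def find(pattern, path):
    if not pattern:
        raise UsageError("no pattern given")
    # ... do the actual searching ...

def main():
    try:
        find("needle", "/tmp")
        return 0
    except UsageError as e:
        print(e, file=sys.stderr)
        return 2  # the single place that decides the exit code

sys.exit(main())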
Not really bad practice IMO - just make sure you return an exit code reflecting the different exit points whenever that might be useful to the calling process...
I do that all the time in my scripts. In general, you need not worry about that, since Python takes care of cleaning up before program termination. I also used to do
import signal, sys
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
to force cleanup in case I needed to kill a stalled script.

debugging: how to check where my Python program is hanging?

A fairly large Python program I wrote runs, but sometimes, after running for minutes or hours, at a moment that is not easily reproducible, it hangs and outputs nothing to the screen.
I have no idea what it is doing at that moment, and in what part of code it is.
How can I run this in a debugger or something to see what lines of code the program is executing at the moment it hangs?
It's too large to put "print" statements all over the place.
I did:
python -m trace --trace /usr/local/bin/my_program.py
but that gives me so much output that I can't really see anything, just millions of lines scrolling on the screen.
Best would be if I could send some signal to the program with "kill -SIGUSR1" or something, and at that moment the program would drop into a debugger and show me the line it stopped at and possibly allow me to step through the program then.
I've tried:
pdb /usr/local/bin/my_program.py
and then:
(Pdb) cont
but what do I do to see where I am when it hangs?
It doesn't throw an exception; it just seems like it waits for something, possibly in an infinite loop.
One more detail: when the program hangs and I press ^C, the program then continues normally (I'm not sure pressing ^C is even what causes it), without throwing any exception and without giving me any hint on the screen about why it had stopped.
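For reference, the signal-handler idea described in the question can be wired up roughly like this (a minimal sketch; the handler name is made up, and SIGUSR1 is only available on Unix-like systems):
import pdb
import signal

def debug_handler(signum, frame):
    # drop into pdb at whatever frame was executing when the signal arrived
    pdb.Pdb().set_trace(frame)

signal.signal(signal.SIGUSR1, debug_handler)

# ... rest of the program; later, from another shell: kill -SIGUSR1 <pid>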
This could be useful to you. I usually do
>>> import pdb
>>> import program2debug
>>> pdb.run('program2debug.test()')
I usually add a -v option to my programs, which enables tons of print statements explaining what I'm doing in detail. When you write a program in the future, consider doing the same before it gets thousands of lines big.
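A minimal sketch of that kind of -v flag using argparse (the helper name vprint is made up):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()

def vprint(*parts):
    # only prints when the script was started with -v
    if args.verbose:
        print(*parts)

vprint("loading data...")
vprint("entering main loop...")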
You could try running it in debug mode in an IDE like pydev (eclipse) or pycharm. You can break the program at any moment and get to its current execution point.
No program is ever too big to put print statements all over the place. You need to read up on the logging module and insert lots of logging.debug() statements. This is just a better form of print statement that outputs to a file, and can be turned off easily in production software. But years from now, when you need to modify the code, you can easily turn it all back on and get the benefit of the insight of the original programmer.
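For instance, a minimal logging setup along those lines might look like this (the log file name is arbitrary):
import logging

logging.basicConfig(
    filename="my_program.log",
    level=logging.DEBUG,  # raise to logging.WARNING to silence it in production
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.debug("starting the slow step")
logging.debug("processed %d records so far", 12345)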

How can I clean stuff up on program exit?

I have a command line program that wants to pickle things when I send it a ctrl-C via the terminal. I have some questions and concerns:
How do I perform this handling? Do I check for a KeyboardInterrupt? Is there a way to implement an exit function?
What if the program is halted in the middle of a write to a structure? I presume these writes aren't treated atomically, so how can I keep from writing trash into the pickle file?
You can use atexit for defining an exit handler. Modifications of Python objects will be treated atomically, so you should be fine as long as your code is arranged in a way that your objects are always in a consistent state between (byte code) instructions.
(1) Use the atexit module:
def pickle_things():
    # dump whatever needs to survive the exit here
    pass

import atexit
atexit.register(pickle_things)
(2) In general, you can't. Imagine someone trips on the power cord while your program is in the middle of a write. It's impossible to guarantee everything gets properly written in all cases.
However, in the KeyboardInterrupt case, the interpreter will make sure to finish whatever it's currently doing before raising that exception, so you should be fine.
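Putting the two together, a minimal sketch of pickling state on exit (the state dict and file name are made up):
import atexit
import pickle
import time

things = {"progress": 0}  # whatever state needs to survive

def pickle_things():
    with open("state.pkl", "wb") as f:
        pickle.dump(things, f)

atexit.register(pickle_things)  # runs on normal exit, including after a Ctrl-C traceback

while True:
    things["progress"] += 1  # interrupt with Ctrl-C to see the handler fire
    time.sleep(0.1)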
