Please have a look at the pseudo-code below.
I am unable to catch an exception in Scenario 2.
Scenario 1:
A.py
def fun1():
    try:
        ssh connection
        do something1
        do something2
        session.write(cmd1)  # Execute some commands
        session.write(cmd2)
        session.write(cmd3)
        session.write(cmd4)
    except RuntimeError as err:
        print err
        Do something with err
        Further decision making based on err
Scenario 2:
B.py
def fun1():
    try:
        ssh connection
        do something1
        do something2
        execute_commands(session)
    except RuntimeError as err:
        print err
        Do something with err
        Further decision making based on err

def execute_commands(session):
    session.write(cmd1)  # Execute some commands
    session.write(cmd2)
    session.write(cmd3)
    session.write(cmd4)
In A.py, there is only one function, which does everything. I know that executing cmd1 to cmd4 raises a RuntimeError. A.py works absolutely fine.
In B.py, I've moved those commands into a separate function. But here I'm not able to catch that RuntimeError: when execute_commands() is called, the program hangs there and no exception is caught.
How do I catch, in fun1(), the same exception that occurs when execute_commands() is called?
import SSHLibrary
session = SSHLibrary.SSHLibrary()
session.open_connection(self.ip, self.prompt)
session.login(self.username, self.password)
session.write(cmd1)
session.read_until("some pattern")
The RuntimeError is caused by a pattern mismatch in read_until(), so I handle it in the except block and process it further.
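For what it's worth, plain Python does propagate the exception here: an exception raised inside a helper function travels up the call stack unchanged until a matching except block is found. A minimal stand-alone sketch (with a fake session standing in for the real SSHLibrary objects; `FakeSession` is hypothetical) shows fun1() catching a RuntimeError raised inside execute_commands():

```python
# Sketch (no real SSH): an exception raised inside a helper propagates
# up the call stack unchanged, so the caller's except block catches it.
class FakeSession:
    def write(self, cmd):
        # stands in for session.write() failing on a pattern mismatch
        raise RuntimeError("pattern mismatch")

def execute_commands(session):
    session.write("cmd1")  # suppose this raises RuntimeError

def fun1():
    try:
        execute_commands(FakeSession())
    except RuntimeError as err:
        return "caught: %s" % err
    return "no error"

print(fun1())  # caught: pattern mismatch
```

So if B.py hangs rather than raising, the blocking is likely happening inside execute_commands() itself, e.g. a read_until() waiting forever on a pattern before any exception is raised, rather than the except block failing to catch.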
Related
I am making a Python program (A) that runs another file (B). If B exits with code 0, A exits too. However, if B crashes, I want to handle that exception myself in A.
There is a question that asks the same thing, but the asker only needs the exit code or the error message printed to stderr. I want the raw data otherwise provided by sys.exc_info if the exception occurred in the parent / main file (A).
Try subprocess with the check=True option:
From the subprocess docs:
If check is true, and the process exits with a non-zero exit code, a
CalledProcessError exception will be raised. Attributes of that
exception hold the arguments, the exit code, and stdout and stderr if
they were captured.
If B.py is something like:
print("HELLO from B")
raise Exception("MEH")
Then A.py could be something like:
import subprocess
try:
    print("TRYING B.py")
    subprocess.run(["python", "B.py"], check=True)
except subprocess.CalledProcessError as cpe:
    print("OH WELL: Handling exception from B.py")
    print(f"Exception from B: {cpe}")
The result:
~ > python A.py
TRYING B.py
HELLO from B
Traceback (most recent call last):
File "B.py", line 2, in <module>
raise Exception("MEH")
Exception: MEH
OH WELL: Handling exception from B.py
Exception from B: Command '['python', 'B.py']' returned non-zero exit status 1.
To silence the exception from display, change B.py to the following:
import os
import sys
sys.stderr = open(os.devnull, 'w')
print("HELLO from B")
raise Exception("MEH")
The result:
~ > python A.py
TRYING B.py
HELLO from B
OH WELL: Handling exception from B.py
Exception from B: Command '['python', 'B.py']' returned non-zero exit status 1.
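Rather than silencing the child's stderr, you can also capture it in the parent and inspect the traceback text there. A sketch, using `capture_output=True` (Python 3.7+) and an inline `-c` script so it is self-contained:

```python
import subprocess
import sys

# Capture the child's stdout and stderr instead of letting them print
# to the terminal; the traceback text ends up in result.stderr.
result = subprocess.run(
    [sys.executable, "-c", 'raise Exception("MEH")'],
    capture_output=True,  # collects stdout and stderr
    text=True,            # decode bytes to str
)
if result.returncode != 0:
    print("Child failed; captured stderr was:")
    print(result.stderr)
```

This gives you the raw traceback text from B, though not a live exception object; the exception itself only ever existed in the child process.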
I'm new to Python. I tried to recreate the "Timer" control used in the .NET Framework ("Timer" in Python is a different thing...). So, basically, I created a function named CheckMessages that, after its execution, stops the main thread for 1 second and then calls CheckMessages again. Under normal circumstances (when no exception is thrown), this works well. The timing between one execution and the next is obviously not precise (it is never exactly 1 second), but for the scope of my app this is not a problem.
The problem is: when an exception is thrown, nothing happens. The code does not follow my try ... except logic (no logs are shown) and the app does not exit.
import time
import sched
import requests
s = sched.scheduler(time.time, time.sleep)
def CheckMessages(sc):
    newMsgsReq = ""
    try:
        newMsgsReq = requests.get("...")
    except requests.HTTPError as errh:
        print("Http Error:", errh)
    except requests.ConnectionError as errc:
        print("Connection Error:", errc)
    except requests.Timeout as errt:
        print("Timeout Error:", errt)
    except requests.RequestException as err:
        print("OOps: Something Else", err)
    msgInf = newMsgsReq.text.split("|")
    #...
    s.enter(1, 1, CheckMessages, (sc,))
s.enter(1, 1, CheckMessages, (s,))
s.run()
When, for example, I disable my internet connection, I would expect my app to report Connection Error: ... (line 11 of the code snippet). Instead, as said, nothing happens: the app remains active but the output is empty, and the function CheckMessages stops being executed every second.
When I put in an invalid URL ("http://sdkls9e9i.ooo"), I see the exception printed successfully.
After an exception from requests is handled, the script quits with AttributeError: 'str' object has no attribute 'text', because you attempt to work with the response (msgInf = newMsgsReq.text.split("|")) even after you've handled an exception.
When I followed your lead and disabled my internet connection, the script indeed "hung". Hitting CTRL+C revealed that it was waiting in:
File ".../site-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
After I added a timeout: requests.get("...", timeout=1), I got the exception as well.
P.S. note that PEP 8 style guide recommends using lower_case_with_underscore for variable and function names.
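Putting the two fixes together (a timeout so the request cannot block forever, and a try/except/else so the response is only processed on success), and following the PEP 8 naming advice, the pattern looks like the sketch below. `fetch()` is a hypothetical stand-in for `requests.get(url, timeout=1)` so the example runs without a network:

```python
# fetch() stands in for requests.get(url, timeout=1): it either
# returns a response-like object or raises a connection error.
def fetch(fail):
    if fail:
        raise ConnectionError("no route to host")
    class Response:
        text = "msg1|msg2"
    return Response()

def check_messages(fail):
    try:
        response = fetch(fail)
    except ConnectionError as errc:
        print("Connection Error:", errc)
    else:
        # only reached when no exception occurred, so response
        # is guaranteed to be a real response object here
        return response.text.split("|")

print(check_messages(False))  # ['msg1', 'msg2']
print(check_messages(True))   # prints the error, returns None
```

The `else` clause is what prevents the AttributeError: the response is never touched on the failure path.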
I have nested exception handling in my code, where inner blocks do some work before re-raising the exception to upper layers.
The printed traceback always reports the line number where the exception originated.
However, when running in UI debugger (Pydev/Eclipse) it stops on the outer exception block in some cases.
Consider the following code for example:
import sys

def f(a):
    c = 5/a
    return c

def sub(d):
    print("Entered sub(%d)" % d)
    try:
        print("Entered try #sub")
        e = f(d)/(d-1)
        return e
    except:
        print("Do some stuff before re-raising the exception upwards")
        raise

def main():
    try:
        print("Entered try #main")
        d = int(sys.argv[1])
        sub(d)
    except:
        print("Reached except block #main")
        raise

if __name__ == '__main__':
    main()
When running with argument = 0, the exception is caused at line #4 and the debugger stops on that line.
However, when running with argument = 1, the exception is caused at line #11 (as reported in the printed traceback) but the debugger stops at line #15.
Once the debugger stops at the incorrect location it is very difficult to watch internal variables and handle the error, especially when the code in the try block uses loops.
In Pydev->Manage Exceptions, I checked only the "suspend on uncaught exceptions".
There is a checkbox "Skip exceptions caught in same function" that seems related (it looks as if the debugger skipped the first exception in sub and stopped on the "raise" statement, which can be considered another exception in the same function, although the documentation says it should re-raise the same exception).
This checkbox is grayed out unless I first check "Suspend on caught exceptions", but once I enable that, the debugger gets stuck and does not stop anywhere...
Will appreciate your help.
-Moshe
I want to run a task when my Python program finishes, but only if it finishes successfully. As far as I know, using the atexit module means that my registered function will always be run at program termination, regardless of success. Is there a similar functionality to register a function so that it runs only on successful exit? Alternatively, is there a way for my exit function to detect whether the exit was normal or exceptional?
Here is some code that demonstrates the problem. It will print that the program succeeded, even when it has failed.
import atexit
def myexitfunc():
    print "Program succeeded!"
atexit.register(myexitfunc)
raise Exception("Program failed!")
Output:
$ python atexittest.py
Traceback (most recent call last):
File "atexittest.py", line 8, in <module>
raise Exception("Program failed!")
Exception: Program failed!
Program succeeded!
Out of the box, atexit is not quite suited for what you want to do: it's primarily used for resource cleanup at the very last moment, as things are shutting down and exiting. By analogy, it's the "finally" of a try/except, whereas what you want is the "else" of a try/except.
The simplest way I can think of is to create a global flag which you set only when your script "succeeds", then have every function you attach to atexit check that flag and do nothing unless it's been set.
Eg:
_success = False

def atsuccess(func, *args, **kwds):
    def wrapper():
        if _success:
            func(*args, **kwds)
    atexit.register(wrapper)

def set_success():
    global _success
    _success = True

# then call atsuccess() to attach your callbacks,
# and call set_success() before your script returns
One limitation is if you have any code which calls sys.exit(0) before setting the success flag. Such code should (probably) be refactored to return to the main function first, so that you call set_success and sys.exit in only one place. Failing that, you'll need to add something like the following wrapper around the main entry point in your script:
try:
    main()
except SystemExit, err:
    if err.code == 0:
        set_success()
    raise
Wrap the body of your program in a with statement and define a corresponding context object that only performs your action when no exceptions have been raised. Something like:
class AtExit(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is None:
            print "Success!"
        else:
            print "Failure!"

if __name__ == "__main__":
    with AtExit():
        print "Running"
        # raise Exception("Error")
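The same idea can be written with contextlib.contextmanager in Python 3: the else branch after the yield runs only when the with-body finished without an exception. A sketch (events collected in a list so the behavior is visible):

```python
from contextlib import contextmanager

events = []

@contextmanager
def at_exit():
    try:
        yield
    except BaseException:
        # the with-body raised: record failure and let it propagate
        events.append("Failure!")
        raise
    else:
        # only reached when the with-body completed normally
        events.append("Success!")

with at_exit():
    events.append("Running")

print(events)  # ['Running', 'Success!']
```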
How can you have a function or something that will be executed before your program quits? I have a script that will be constantly running in the background, and I need it to save some data to a file before it exits. Is there a standard way of doing this?
Check out the atexit module:
http://docs.python.org/library/atexit.html
For example, if I wanted to print a message when my application was terminating:
import atexit

def exit_handler():
    print 'My application is ending!'

atexit.register(exit_handler)
Just be aware that this works great for normal termination of the script, but it won't get called in all cases (e.g. fatal internal errors).
If you want something to always run, even on errors, use try: finally: like this -
def main():
    try:
        execute_app()
    finally:
        handle_cleanup()

if __name__ == '__main__':
    main()
If you also want to handle exceptions, you can insert an except: clause before the finally:.
If you stop the script by raising a KeyboardInterrupt (e.g. by pressing Ctrl-C), you can catch that just as a standard exception. You can also catch SystemExit in the same way.
try:
    ...
except KeyboardInterrupt:
    # clean up
    raise
I mention this just so that you know about it; the 'right' way to do this is the atexit module mentioned above.
If you have class objects that exist during the whole lifetime of the program, you can also execute cleanup commands from those classes with the __del__(self) method:
from time import sleep

class x:

    def __init__(self):
        while True:
            print("running")
            sleep(1)

    def __del__(self):
        print("destructuring")


a = x()
This works on a normal program end, and also if the execution is aborted, though in that case there will of course be an exception printed:
running
running
running
running
running
Traceback (most recent call last):
File "x.py", line 14, in <module>
a = x()
File "x.py", line 8, in __init__
sleep(1)
KeyboardInterrupt
destructuring
This is a version adapted from other answers.
It should work (not fully tested) with graceful exits, kills, and the PyCharm stop button (the last one I can confirm).
import signal
import atexit
def handle_exit(*args):
    try:
        ... do computation ...
    except BaseException as exception:
        ... handle the exception ...

atexit.register(handle_exit)
signal.signal(signal.SIGTERM, handle_exit)
signal.signal(signal.SIGINT, handle_exit)