Stopping a Fabric Task From Running - python

The method I currently use when a Fabric task fails on one of my servers is to

try:
    sudo(...)
except SystemExit:
    raise Exception("You should fix this with...")

However, this leaves an unpleasant stack trace from the exception when all I want to do is print its message. On the other hand, if I don't raise the exception, the Fabric script will continue to run on my other servers when I want it to stop.
Is there a way to stop all Fabric tasks?
Is there a way to stop all fabric tasks?

If I understand you correctly, you want to stop execution of the script on all servers with a log message but no stack trace. You can do that with:

import sys

try:
    sudo(...)
except SystemExit:
    print "You should fix this with..."  # you can use logging here
    sys.exit()
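
Fabric 1.x also ships a helper for exactly this situation, fabric.api.abort(), which prints the message and raises SystemExit for you. A minimal sketch of how it might be used (the task name and command are made up for illustration):

from fabric.api import abort, settings, sudo, task

@task
def deploy():
    # warn_only stops Fabric from raising SystemExit itself on a failed command
    with settings(warn_only=True):
        result = sudo("/etc/init.d/myapp restart")
    if result.failed:
        # abort() prints "Fatal error: ..." and stops the whole run cleanly
        abort("You should fix this with...")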

Related

How do I prevent a Python exe from closing when an error occurs

try:
    some code here
except Exception as e:
    print("error: ", e)

If this Python code (built as an exe) raises an exception, the exe's terminal window closes immediately. How do I stop it from closing so that I can see exactly which exception occurred? I can't run it in CMD; I have to run the exe file only, and I can't use the "press any key" method either.
The terminal closes when the program terminates. When you catch the exception, you print your error message and then the program terminates, so you don't really gain much from catching the exception; in fact, you get even less (you don't print the stack trace).
To get the stack trace, look into traceback:

import traceback

try:
    foo = 123 / 0
except Exception as e:
    traceback.print_exception(e)  # the single-argument form needs Python 3.10+
Then you need to have the program wait a bit so that you can actually see the stack trace and error you print. A simple way is to just wait for input:

try:
    foo = 123 / 0
except Exception as e:
    traceback.print_exception(e)
    wait_for_it = input('Press enter to close the terminal window')
Or you could add a breakpoint to have the Python debugger pdb come up. (It does, at least on Mac OS, when I run this code in a terminal; no idea about Windows.) See the above link for help on it, or type help at its prompt.

try:
    foo = 123 / 0
except Exception as e:
    breakpoint()
Speaking of terminals: if you just open a command prompt or bash terminal, you can run your code with python3 myprog.py and that terminal does not close automatically, so you can see the output without modifying the program. Depending on how you run your code and what module dependencies you have, this may need a bit more setup (like a virtual environment), but it is probably worth it in the long run.
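
If you do not want to wrap every block in try/except, a related approach (a sketch, not part of the answer above) is to install a sys.excepthook that prints the traceback of any uncaught exception and then pauses before the window closes:

import sys
import traceback

def keep_window_open(exc_type, exc_value, exc_tb):
    # print the full traceback, then pause so the terminal window stays visible
    traceback.print_exception(exc_type, exc_value, exc_tb)
    input("Press enter to close the terminal window")

sys.excepthook = keep_window_open

foo = 123 / 0  # any uncaught exception now goes through keep_window_open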

Script hanging on time.sleep()

I have a script started with nohup python3 script.py &. It looks something like this:

import datetime
import logging
import time

import thing
import anotherthing

logfile = "logfile {}".format(datetime.datetime.today())

while True:
    try:
        logging.debug("Started loop.")
        do_some_stuff()
        logging.debug("Stuff was done.")
    except Exception as e:
        logging.exception("message")
    logging.debug("Starting sleep.")
    time.sleep(60)

This works fine, however it seems to hang on time.sleep() (as in, it just stops doing anything, without the process being killed) after about two days. According to the logs, all parts of the script execute fine, but it always hangs on the sleep part and never comes back. I have checked for memory leaks, I/O hang-ups and connection timeouts, and none of those seem to be the case.
What could be the cause of this behavior, and why?
EDIT: Added logging to pinpoint the cause. The logs always end with DEBUG Starting sleep.
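
There is no accepted answer in this thread, but one way to see where a live process like this is stuck (a sketch, not from the original question) is the standard-library faulthandler module, which can dump every thread's traceback when the process receives a signal:

import faulthandler
import signal

# After this, `kill -USR1 <pid>` makes the running process write a traceback
# of all threads to stderr (i.e. the nohup log), showing the exact line it is
# blocked on. Available on POSIX systems with Python 3.3+.
faulthandler.register(signal.SIGUSR1)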

Catching Keyboard Interrupt with Raw Input

I have a bit of Python code that tries to make raw_input catch keyboard interrupts. If I run the code in this function on its own, it works perfectly fine. But if I run it inside my program, the print statement is never reached, indicating that the keyboard interrupt is not caught. The program attempts to exit and fails until the signal escalates to SIGKILL, which of course works fine. My guess is that the keyboard interrupt is being caught somewhere else, preventing the exception handler here from running at all. My question is: where would such an interrupt likely be caught, and how can I prevent it from blocking this one? My plan has been to add a slight delay between the program catching a keyboard interrupt and killing itself, to give the except clause here a moment to fire.
Any ideas appreciated
Thanks!
import sys

def interruptable_input(text=''):
    '''Takes raw input, but accepts keyboard interrupt'''
    try:
        return raw_input(text)
    except KeyboardInterrupt:
        print "Interrupted by user"
        sys.exit()
I have narrowed it down to the following:

import sys

text = ''
try:
    print raw_input(text)
except KeyboardInterrupt:
    print "Interrupted by user"
    sys.exit()

This works perfectly when I run it on the command line using Python 2.7: it lets me type an input on the console, and when I hit Ctrl+C it prints "Interrupted by user".
Edit:
I misread your question at first; however, when I use the method from your example and call it from another method, the result is the same.
I have determined that the reason for my issue was another interrupt handler killing the script before the KeyboardInterrupt was raised. I solved it by setting my own handler for signal.SIGINT, like so:

import sys
import signal
import rospy  # ROS logging, specific to my environment

def signal_term_handler(signal, frame):
    '''Handles KeyboardInterrupts to ensure smooth exit'''
    rospy.logerr('User Keyboard interrupt')
    sys.exit(0)

signal.signal(signal.SIGINT, signal_term_handler)

It's slightly less direct, but it gets the job done. Now raw_input() will simply die when told to.
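
For completeness, a minimal sketch of how the pieces fit together, with the ROS-specific logging call replaced by a plain print (rospy only exists in the poster's environment):

import signal
import sys

def signal_term_handler(signum, frame):
    '''Handle Ctrl+C (SIGINT) ourselves so nothing else can swallow it.'''
    print "Interrupted by user"
    sys.exit(0)

# register the handler before any blocking raw_input() call
signal.signal(signal.SIGINT, signal_term_handler)

while True:
    # raw_input blocks here; Ctrl+C now always goes through signal_term_handler
    line = raw_input('> ')
    print 'You typed: %s' % line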

Python: Timeout Exception Handling with Signal.Alarm

I am trying to implement a timeout exception handler if a function call is taking too long.
EDIT: In fact, I am writing a Python script using subprocess, which calls an old C++ program with arguments. I know that the program hangs from time to time without returning anything, so I am trying to put a time limit on each call and then move on to the next call with different arguments, and so on.
I've been searching and trying to implement it, but it doesn't quite work, so I wish to get some help. What I have so far is:
#!/usr/bin/env python
import signal

class TimeOutException(Exception):
    def __init__(self, message, errors):
        super(TimeOutException, self).__init__(message)
        self.errors = errors

def signal_handler(signum, frame):
    raise TimeOutException("Timeout!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(3)

try:
    while True:
        pass
except TimeOutException:
    print "Timed out!"

signal.alarm(0)
EDIT: The error message I currently receive is "TypeError: __init__() takes exactly 3 arguments (2 given)".
Also, I would like to ask a basic question about the except block: what is the difference in role between the code right below "except TimeOutException" and the code in the signal handler? It seems both can do the same thing?
Any help would be appreciated.
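
A side note on the TypeError mentioned in the edit: TimeOutException.__init__ requires both message and errors, but the signal handler raises it with only a message. A minimal corrected sketch (assuming errors can simply default to None):

class TimeOutException(Exception):
    def __init__(self, message, errors=None):
        super(TimeOutException, self).__init__(message)
        self.errors = errors  # optional extra context, None if not given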
if a function call is taking too long
I realize that this might not be obvious to inexperienced developers, but the methods applicable to this problem depend entirely on what you are doing in this "busy function", such as:
Is this a heavy computation? If yes, which Python interpreter are you using? CPython or PyPy? If CPython: does this computation only use Python bytecode or does it involve function calls outsourced to compiled machine code (which may hold Python's Global Interpreter Lock for quite an uncontrollable amount of time)?
Is this a lot of I/O work? If yes, can you abort this I/O work in an arbitrary state? Or do you need to properly clean up? Are you using a certain framework such as gevent or Twisted?
Edit:
So it looks like you are just spawning a subprocess and waiting for it to terminate. Great, that is actually one of the simplest problems to implement a timeout control for. Python 3 ships a corresponding feature! :-) Have a look at
https://docs.python.org/3/library/subprocess.html#subprocess.call
The timeout argument is passed to Popen.wait(). If the timeout
expires, the child process will be killed and then waited for again.
The TimeoutExpired exception will be re-raised after the child process
has terminated.
Edit2:
Example code for you; save this to a file and execute it with Python 3.3 or newer:

import subprocess

try:
    subprocess.call(['python', '-c', 'print("hello")'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))

try:
    subprocess.call(['python'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))
In the first case, the subprocess returns immediately and no timeout exception is raised. In the second case, the timeout expires and your controlling process (the process running the script above) attempts to terminate the subprocess. This succeeds. After that, subprocess.TimeoutExpired is raised and the exception handler deals with it. For me, the output of the script above is:

['python'] was terminated as of timeout. Its output was:
None
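
The output is None because subprocess.call does not capture the child's output at all. If you also need whatever the child printed before the timeout, here is a sketch using subprocess.run with capture_output (Python 3.7+, not part of the original answer):

import subprocess

try:
    subprocess.run(
        ['python', '-c', 'print("partial", flush=True); import time; time.sleep(10)'],
        timeout=2,
        capture_output=True,
    )
except subprocess.TimeoutExpired as e:
    # with captured pipes, e.output may contain what the child wrote so far
    print("%s timed out; partial output: %r" % (e.cmd, e.output))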

Daemon dies unexpectedly

I have a Python script which I daemonise using this code:

def daemonise():
    import os  # needed for os.getpid() below
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon%s' % os.getpid()
The script runs in a loop like this:

while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass

It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err, daemons?
[Edit]
Without starting a process like monit, is there a way to write a watchdog in Python which can watch my other daemons and restart them when they go down? (And who watches the watchdog?)
You really should use python-daemon for this, a library that implements PEP 3143, the standard daemon process library. This way you ensure that your application does all the right things for whichever flavour of UNIX it is running under. No need to reinvent the wheel.
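
For illustration, a minimal sketch of what that might look like (assuming the library is installed as python-daemon, and with funny_code and sleep taken from the original script):

import time
import daemon  # pip install python-daemon

def main():
    while True:
        funny_code()  # the poster's own work function
        time.sleep(10)

if __name__ == '__main__':
    # DaemonContext forks into the background, detaches from the terminal
    # and closes/redirects the standard streams for you
    with daemon.DaemonContext():
        main()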
Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:
while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.
I recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.
What I've used at my clients' sites is daemontools. It is a proven, well-tested tool for running anything daemonized.
You just write your application without any daemonization, so it runs in the foreground; then create a daemontools service directory for it, and it will discover and automatically restart your application from then on, and every time the system restarts.
It can also handle log rotation and the like, which saves a lot of tedious, repeated work.
