I have a system() command and I want to catch the exception it may generate. The code that I have is:
import sys
import traceback
from os import system

def test():
    filename = "test.txt"
    try:
        cmd = "cp /Users/user1/Desktop/Test_Folder/" + filename + " /Users/user1/Desktop/"
        output = system(cmd)
    except:
        print 'In the except'
        traceback.print_exc()
        sys.exit(1)

if __name__ == '__main__':
    test()
When I execute the above code and the file I want to copy is not present, the error is not caught and the code never enters the except block. How can I catch errors generated by system() commands like this?
Note: The above system() command is just an example. There are multiple such system() commands, and they all differ from one another.
The system() command doesn't throw an exception on failure; it will simply return the exit status code of the application. If you want an exception thrown on failure, use subprocess.check_call, instead. (And, in general, using the subprocess module is superior in that it gives you greater control over the invocation as well as the ability to redirect the subprocess's standard input/output).
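For example, a minimal sketch of the copy from the question, rewritten with check_call (the paths and filename are just the ones from the question):

import subprocess
import sys

filename = "test.txt"

try:
    # check_call raises CalledProcessError if cp exits with a non-zero status
    subprocess.check_call(["cp",
                           "/Users/user1/Desktop/Test_Folder/" + filename,
                           "/Users/user1/Desktop/"])
except subprocess.CalledProcessError as e:
    print("Copy failed with exit status %d" % e.returncode)
    sys.exit(1)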
Note, though, that if most of the operations you are doing are simple filesystem operations, like copying files from one location to another, there are Python functions that do the equivalent. For example, shutil provides the ability to copy files from one location to another. Where there are Python functions to do the task, it is generally better to use those rather than invoke a subprocess to do it (especially since the Python-provided methods may be able to do it more efficiently without forking a process, and the Python versions will also be more robust to cross-platform considerations).
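For instance, the cp in the question could be replaced by shutil entirely (again just a sketch, using the question's paths):

import shutil
import sys

try:
    shutil.copy("/Users/user1/Desktop/Test_Folder/test.txt",
                "/Users/user1/Desktop/")
except (IOError, OSError):
    # raised if, for example, the source file does not exist
    sys.exit(1)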
Related
How do I handle a subprocess.run() error in Python? For example, I want to run cd + UserInput with subprocess.run(). What if the user types in a directory name which does not exist? How do I handle this type of error?
As @match has mentioned, you can't run cd as a subprocess, because cd isn't a program; it's a shell built-in command.
But if you're asking about any subprocess failures, besides cd:
import subprocess

try:
    subprocess.run(command_that_might_not_exist)  # like ['abcd']
except Exception:
    pass  # handle the error (a missing executable raises FileNotFoundError)

result = subprocess.run(command_that_might_fail)  # like ['ls', 'abcd/']
if result.returncode != 0:
    pass  # handle the error
There is no way running cd in a subprocess is useful. The subprocess will change its own directory and then immediately exit, leaving no observable change in the parent process or anywhere else.
For the same reason, there is no binary command named cd on most systems; the cd command is a shell built-in.
Generally, if you run subprocess.run() without the check=True keyword argument, any error within the subprocess will simply be ignored. So if /bin/cd or a similar command existed, you could run
# purely theoretical, and utterly useless
subprocess.run(['cd', UserInput])
and simply not know whether it did anything or not.
If you do supply check=True, the exception you need to trap is CalledProcessError:
try:
    # pointless code as such; see explanation above
    subprocess.run(['cd', UserInput], check=True)
except subprocess.CalledProcessError:
    print('Directory name %s misspelled, or you lack the permissions' % UserInput)
But even more fundamentally, allowing users to prod the system by running arbitrary unchecked input in a subprocess is a horrible idea. (Allowing users to run arbitrary shell script with shell=True is a monumentally, catastrophically horrible idea, so let's not even go there. Maybe see Actual meaning of shell=True in subprocess)
A somewhat more secure approach is to run the subprocess with a cwd= keyword argument.
# also vaguely pointless
subprocess.run(['true'], cwd=UserInput)
In this case, you can expect a regular FileNotFoundError if the directory does not exist, or a PermissionError if you lack the privileges.
You should probably still add check=True and be prepared to handle any resulting exception, unless you specifically don't care whether the subprocess succeeded. (There are actually cases where this makes sense, like when you grep for something but are fine with it not finding any matches, which raises an error if you use check=True.)
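Putting these pieces together, one way to handle both failure modes might look like the following sketch (['true'] is still just a placeholder command, and UserInput is the user-supplied directory name from above):

import subprocess

try:
    subprocess.run(['true'], cwd=UserInput, check=True)
except FileNotFoundError:
    print('No such directory (or no such command): %s' % UserInput)
except PermissionError:
    print('No permission to use directory %s' % UserInput)
except subprocess.CalledProcessError as e:
    print('Command failed with exit status %d' % e.returncode)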
Perhaps see also Running Bash commands in Python
I have a shell script which in turn runs a Python script. I have to exit from the main shell script when an exception is caught in the Python script. Can anyone suggest a way to achieve this?
In Python you can set the exit status using sys.exit(). Typically, when execution completes successfully you return 0, and if not then some non-zero number.
So something like this in your Python will work:
import sys

try:
    ....
except:
    sys.exit(1)
And then, as others have said, you need to make sure your bash script catches the error by either checking the return value explicitly (using e.g. $?) or using set -e.
I have a Python program, which, under certain conditions, should prompt the user for a filename. However, there is a default filename which I want to provide, which the user can edit if they wish. This means typically that they need to hit the backspace key to delete the current filename and replace it with the one they prefer.
To do this, I've adapted this answer for Python 3, into:
import readline

def rlinput(prompt, prefill=''):
    readline.set_startup_hook(lambda: readline.insert_text(prefill))
    try:
        return input(prompt)
    finally:
        readline.set_startup_hook()

new_filename = rlinput("What filename do you want?", "foo.txt")
This works as expected when the program is run interactively as intended - after backspacing and entering a new filename, new_filename contains bar.txt or whatever filename the user enters.
However, I also want to test the program using unit tests. Generally, to do this, I run the program as a subprocess, so that I can feed it input to stdin (and hence test it as a user would use it). I have some unit testing code which (simplified) looks like this:
from subprocess import Popen, PIPE

p = Popen(['mypythonutility', 'some', 'arguments'], stdin=PIPE)
p.communicate('\b\b\bbar.txt')
My intention is that this should simulate the user 'backspacing' over the provided foo.txt, and entering bar.txt instead.
However, this doesn't seem to have the desired effect. Instead, it would appear, after some debugging, that new_filename in my program ends up with the equivalent of \b\b\bbar.txt in it. I was expecting just bar.txt.
What am I doing wrong?
The appropriate way to control an interactive child process from Python is to use the pexpect module. This module makes the child process believe that it is running in an interactive terminal session, and lets the parent process determine exactly which keystrokes are sent to the child process.
Pexpect is a pure Python module for spawning child applications; controlling them; and responding to expected patterns in their output. Pexpect works like Don Libes’ Expect. Pexpect allows your script to spawn a child application and control it as if a human were typing commands.
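As a rough sketch (not tested against the actual utility; the program name and prompt text are taken from the question), a pexpect-driven test could look something like this:

import pexpect

child = pexpect.spawn('mypythonutility', ['some', 'arguments'])
child.expect_exact('What filename do you want?')
# send enough backspaces to erase the prefilled "foo.txt", then the new name
child.send('\b' * len('foo.txt'))
child.sendline('bar.txt')
child.expect(pexpect.EOF)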
Python provides two convenient functions for calling subprocesses that might fail, subprocess.check_call and subprocess.check_output. Basically,
subprocess.check_call(['command', 'arg1', ...])
spawns the specified command as a subprocess, blocks, and verifies that the subprocess terminated successfully (returned zero). If not, it throws an exception. check_output does the same thing, except it captures the subprocess's stdout and returns it as a byte-string.
This is convenient because it is a single Python expression (you don't have to set up and control the subprocess over several lines of code), and there's no risk of forgetting to check the return value.
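For instance, a minimal illustration of both helpers ('ls' is just an arbitrary command here):

import subprocess

subprocess.check_call(['ls', '/tmp'])              # raises CalledProcessError on non-zero exit
listing = subprocess.check_output(['ls', '/tmp'])  # ditto, but also returns stdout as bytes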
What are the idiomatic Ruby equivalents to check_call and check_output? I am aware of the $? global that gives the process's return value, but that would be awkward—the point of having exceptions is that you don't have to manually check error codes. There are numerous ways to spawn a subprocess in Ruby, but I don't see any that provide this feature.
Here’s a simple check_call I threw together that seems to work.
def check_call(*cmd, **kw)
  _, status = Process.waitpid2 Kernel.spawn(*cmd, **kw)
  raise "Command #{cmd} #{status}" unless status.success?
end
The basic built-in methods are supplanted by the POpen4 gem, and the shell-executer gem provides further awesomeness.
It's hard to say what's the most idiomatic solution in Ruby… but the one that's closest to Python is probably Shell.execute! from shell-executer.
From the example on the docs page:
begin
  Shell.execute!('ls /not_existing')
rescue RuntimeError => e
  print e.message
end
Compare to:
try:
    subprocess.check_call('ls /not_existing', shell=True)
except Exception as e:
    print e.message
The most notable difference here is that the Ruby equivalent doesn't have a way to do shell=False (and take the args as a list), which Python not only has, but defaults to.
Also, Python's e.message will be a default message or something generated based on the return code, while Ruby's e.message will be the child's stderr.
If you want to do shell=False, as far as I know, you'll have to write your own wrapper around something lower level; all of the Ruby wrappers I know of (shell-executer, POpen4, open4) are wrappers around or emulators of the POSIX popen functions.
I have a Python script, which I daemonise using this code:
import os

def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)        # first fork: detach from the parent
    umask(0)
    setsid()                  # become a session leader
    if fork(): exit(0)        # second fork: give up the controlling terminal for good
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())   # redirect stdin from /dev/null
    dup2(so.fileno(), stdout.fileno())  # redirect stdout to the .out log
    dup2(se.fileno(), stderr.fileno())  # redirect stderr to the .err log
    print 'this file has the output from daemon%s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon%s' % os.getpid()
The script runs in a loop like this:

while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass

It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons?
[Edit]
Without starting a process like monit, is there a way to write a watchdog in Python which can watch my other daemons and restart them when they go down? (Who watches the watchdog?)
You really should use python-daemon for this, which is a library that implements PEP 3143, the standard daemon process library. This way you ensure that your application does all the right things for whichever flavour of UNIX it is running under. No need to reinvent the wheel.
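A bare-bones sketch of that approach (assuming the python-daemon package is installed; funny_code() and the 10-second sleep are the placeholders from the question):

import daemon
from time import sleep

def run():
    while True:
        funny_code()   # your real work, as in the question
        sleep(10)

with daemon.DaemonContext():
    run()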
Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:
while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.
I recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.
What I've used for my clients is daemontools. It is a proven, well-tested tool for running anything daemonized.
You just write your application without any daemonization, to run in the foreground; then create a daemontools service folder for it, and it will discover and automatically restart your application from then on, and every time the system restarts.
It can also handle log rotation and stuff. Saves a lot of tedious, repeated work.