How do I handle a subprocess.run() error in Python? For example, I want to run cd + UserInput with subprocess.run(). What if the user types in a directory name which does not exist? How do I handle this type of error?
As @match has mentioned, you can't run cd as a subprocess, because cd isn't a program, it's a shell built-in command.
But if you're asking about any subprocess failures, besides cd:
import subprocess

try:
    subprocess.run(command_that_might_not_exist)  # like ['abcd']
except FileNotFoundError:  # raised when the command itself cannot be found
    ...  # handle the error

result = subprocess.run(command_that_might_fail)  # like ['ls', 'abcd/']
if result.returncode != 0:
    ...  # handle the error: the command ran but reported failure
There is no way running cd in a subprocess is useful. The subprocess will change its own directory and then immediately exit, leaving no observable change in the parent process or anywhere else.
For the same reason, there is no binary command named cd on most systems; the cd command is a shell built-in.
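If the goal is actually to change the working directory for the Python program itself, the tool is os.chdir(), not a subprocess. A minimal sketch, with a hypothetical user-supplied name, catching the errors the question asks about:

```python
import os

user_input = 'no_such_directory'  # hypothetical user-supplied name
try:
    os.chdir(user_input)
    changed = True
except FileNotFoundError:
    changed = False
    print('Directory %s does not exist' % user_input)
except NotADirectoryError:
    changed = False
    print('%s exists but is not a directory' % user_input)
except PermissionError:
    changed = False
    print('You lack permission to enter %s' % user_input)
```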
Generally, if you run subprocess.run() without the check=True keyword argument, any error within the subprocess will simply be ignored. So if /bin/cd or a similar command existed, you could run
# purely theoretical, and utterly useless
subprocess.run(['cd', UserInput])
and simply not know whether it did anything or not.
If you do supply check=True, the exception you need to trap is CalledProcessError:
try:
    # pointless code as such; see explanation above
    subprocess.run(['cd', UserInput], check=True)
except subprocess.CalledProcessError:
    print('Directory name %s misspelled, or you lack the permissions' % UserInput)
But even more fundamentally, allowing users to prod the system by running arbitrary unchecked input in a subprocess is a horrible idea. (Allowing users to run arbitrary shell script with shell=True is a monumentally, catastrophically horrible idea, so let's not even go there. Maybe see Actual meaning of shell=True in subprocess)
A somewhat more secure approach is to run the subprocess with a cwd= keyword argument.
# also vaguely pointless
subprocess.run(['true'], cwd=UserInput)
In this case, you can expect a regular FileNotFoundError if the directory does not exist, or a PermissionError if you lack the privileges.
You should probably still add check=True and be prepared to handle any resulting exception, unless you specifically don't care whether the subprocess succeeded. (There are actually cases where this makes sense, like when you grep for something but are fine with it not finding any matches, which raises an error if you use check=True.)
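Combining cwd= with check=True, a sketch of the full error handling; the directory name is hypothetical, and ['true'] is used as a do-nothing POSIX command:

```python
import subprocess

user_input = 'no_such_dir'  # hypothetical user-supplied directory name
try:
    subprocess.run(['true'], cwd=user_input, check=True)
    outcome = 'ok'
except FileNotFoundError:
    outcome = 'missing'    # the directory does not exist
except PermissionError:
    outcome = 'forbidden'  # you lack the privileges to enter it
except subprocess.CalledProcessError:
    outcome = 'failed'     # the command ran but returned a nonzero status
```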
Perhaps see also Running Bash commands in Python
Related
I am using Popen to run the following command on a Windows VM:
'tf changeset ...'
however when I run it using
commandLine = 'tf changeset /noprompt /latest /loginType:OAuth /login:.,***'
process = Popen(commandLine, shell=True, stdout=PIPE, stderr=PIPE)
I see the following being executed in the logs
'C:\Azure\Agent-1/externals/tf/tf changeset ...'
Meaning that 'C:\Azure\Agent-1/externals/tf/' has been prepended to my command. I was just expecting to see
'tf changeset ...'
Unfortunately, adding the path to the execution breaks the command. Is there any way to stop Python from doing this?
Try passing the commandLine to Popen as a list of arguments:
commandLine = ["tf", "changeset", "/noprompt", "/latest", "/loginType:OAuth", "/login:.,***"]
process = Popen(commandLine, stdout=PIPE, stderr=PIPE)
Python by itself does no such thing. Perhaps the shell=True is doing more than you hoped or bargained for? But we would need access to your shell's configuration to get beyond mere speculation around this.
Calling Popen on the result from Popen is obviously not well-defined; but perhaps this is just an error in your transcription of your real code?
Removing the first process = Popen( would fix this with minimal changes. As per the above, I would also remove shell=True as at best superfluous and at worst directly harmful.
commandLine = 'tf changeset /noprompt /latest /loginType:OAuth /login:.,***'
process = Popen(commandLine, stdout=PIPE, stderr=PIPE)
Like the subprocess documentation tells you, shell=True is only useful on Windows when your command is a cmd built-in.
For proper portability, you should break the command into tokens, manually or by way of shlex.split() if you are lazy or need the user to pass in a string to execute.
commandLine = ['tf', 'changeset', '/noprompt', '/latest', '/loginType:OAuth', '/login:.,***']
process = Popen(commandLine, stdout=PIPE, stderr=PIPE)
This avoids the other dangers of shell=True and will be portable to non-Windows platforms (assuming of course that the command you are trying to run is available on the target platform).
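The shlex.split() route mentioned above would look like this; it produces the same token list from the original command string:

```python
import shlex

# splits on whitespace, honoring shell-style quoting rules
commandLine = shlex.split('tf changeset /noprompt /latest /loginType:OAuth /login:.,***')
```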
I am running Spyder on Windows 10, and when I attempt to run a command similar to the following:
cmd = 'python /path/to/program.py arg1 arg2'
subprocess.run(cmd, shell=True)
The script runs as expected, but I would like to see what the executed command prints to the screen in the Spyder IPython console. I know the program prints things to the screen as expected (it does so when run from a shell), so there is not an error in the script I am running.
How do I go about enabling printing for the subprocess?
The output comes in a stream called stdout. To capture it, you need to redirect it to a pipe, which is then read by the calling process. subprocess.run(...) has built-in support for handling this:
import subprocess
cmd = 'python /path/to/program.py arg1 arg2'.split()
proc = subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True)
print(proc.stdout)
As can be seen, the output is caught in the CompletedProcess object (proc) and then accessed as member data. Also, to make the output text (a string) rather than bytes, I have passed the parameter universal_newlines=True.
A caveat, though, is that subprocess.run(...) runs to completion before it returns control. Therefore, this does not allow capturing the output "live", but only after the whole process has finished. If you want live capture, you must instead use subprocess.Popen(...) and then use .communicate() or some other means of communication to catch the output from the subprocess.
Another comment I'd like to make is that using shell=True is not recommended, specifically not when handling unknown or untrusted input. It leaves the interpretation of cmd to the shell, which can lead to all kinds of security breaches and bad behavior. Instead, split cmd into a list (e.g. as I have done), pass that list to subprocess.run(...), and leave out shell=True.
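A sketch of such live capture with subprocess.Popen(...), reading the pipe line by line; a small Python child process stands in for the real program here:

```python
import subprocess
import sys

# the child here just prints two lines; substitute your real command
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("line 1"); print("line 2")'],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
lines = []
for line in proc.stdout:  # each line arrives as the child writes it
    lines.append(line.rstrip('\n'))
proc.wait()
```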
I would like to write a simple Python script which will be able to clone a git repository into a desired directory. I used a try...except construction to be able to catch all exceptions; however, it looks like I am not able to handle the 'fatal' message properly.
#!/usr/bin/env python
import subprocess
try:
    subprocess.check_call(['git', 'clone', 'git clone git#some_repo', '/tmp/some_directory'])
except Exception:
    print "There was a problem during repository configuration"
The output of the script above:
fatal: repository 'git clone git#some_repo' does not exist
There was a problem during repository configuration
To be more specific, I was rather expecting to get only the "There was a ..." message. Why do I get a 'fatal' message also?
You need to capture STDERR of your subprocess.check_call() execution. See Catch stderr in subprocess.check_call without using subprocess.PIPE for details.
The message you are seeing is produced by the git command.
If you want to prevent that message from appearing, you should redirect standard error (or all output) to /dev/null through a shell. Note that with shell=True the command must be a single string:
subprocess.check_call('git clone git#some_repo /tmp/some_directory 2>/dev/null', shell=True)
However, I'd recommend against that practice since you lose information on the actual cause of error.
As previously specified you need to capture the standard error. Also, as the documentation specifies, subprocess.check_call() just raises an exception when the return code is non-zero.
So, you could mimic the behavior as follows:
#!/usr/bin/env python
import subprocess
def clone_repository():  # customize your function parameters
    # prepare the arguments with your function parameters
    arguments = ['git', 'clone', 'git clone git#some_repo', '/tmp/some_directory']
    # PIPE both streams so that communicate() actually captures them
    git_proc = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = git_proc.communicate()
    if git_proc.returncode != 0:
        # note the argument order: CalledProcessError(returncode, cmd)
        raise subprocess.CalledProcessError(git_proc.returncode, arguments)
    return stdout, stderr

try:
    stdout, stderr = clone_repository()
except (OSError, ValueError):
    # this errors out when the arguments are invalid (ValueError)
    # or when there is an underlying file missing, etc. (OSError)
    # put the print that you require for these errors
    pass
except subprocess.CalledProcessError:
    # you could use stderr to determine the underlying error
    print "There was a problem during repository configuration"
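On Python 3.5+, subprocess.run() expresses the same pattern more compactly. In this sketch a stand-in child process replaces the git call, since the repository URL above is only illustrative:

```python
import subprocess
import sys

# stand-in for the git invocation: a child that exits with status 1
result = subprocess.run(
    [sys.executable, '-c', 'raise SystemExit(1)'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
if result.returncode != 0:
    print("There was a problem during repository configuration")
```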
I have a system() command and I want to catch the exception it may generate. The code that I have is:
def test():
    filename = "test.txt"
    try:
        cmd = "cp /Users/user1/Desktop/Test_Folder/"+filename+" /Users/user1/Desktop/"
        output = system(cmd)
    except:
        print 'In the except'
        traceback.print_exc()
        sys.exit(1)

if __name__ == '__main__':
    test()
When I execute the above code and the file that I want to copy is not present, the error is not caught and the code does not enter the except section. How can I catch such errors generated by system() commands?
Note: The above system() command is just an example. There are multiple such system() commands and each of them vary from one another
The system() command doesn't throw an exception on failure; it simply returns the exit status code of the application. If you want an exception thrown on failure, use subprocess.check_call instead. (And, in general, using the subprocess module is superior in that it gives you greater control over the invocation, as well as the ability to redirect the subprocess's standard input/output.)
Note, though, that if most of the operations you are doing are simple filesystem operations like copying files from one location to another, that there are Python functions that do the equivalent. For example, shutil provides the ability to copy files from one location to another. Where there are Python functions to do the task, it is generally better to use those rather than invoke a sub process to do it (especially since the Python-provided methods may be able to do it more efficiently without forking a process, and the Python versions will also be more robust to cross-platform considerations).
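Since the example is a plain file copy, a sketch of the shutil route (paths taken from the question; on Python 3 a missing source file raises FileNotFoundError, a subclass of OSError):

```python
import shutil

src = '/Users/user1/Desktop/Test_Folder/test.txt'  # paths from the question
dst = '/Users/user1/Desktop/'
try:
    shutil.copy(src, dst)
    copied = True
except OSError as exc:  # FileNotFoundError if src is missing
    copied = False
    print('Copy failed: %s' % exc)
```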
In Linux, to set a proxy value you do the following:
proxy=http://$user:$password@proxy.server.com:${NUM}
http_proxy="$proxy" https_proxy="$proxy" ${COMMAND}
For security reasons, if you run this in a subshell, you avoid leaving your password in the open, or in the logs. The problem with this approach is that I have to set the user name and password every time I want to run a command.
Therefore I decided to write a Python version. I have a working version in C; I just wanted to learn more Python. I have found nice ways to encode and decode my password, and after most of the hoopla, I pass it to this function to test the proxy connection:
def test_connection(creds, proxy_url):
    import pycurl
    import cStringIO
    buf = cStringIO.StringIO()
    test_url = "http://www.google.com"
    c = pycurl.Curl()
    c.setopt(c.URL, test_url)
    c.setopt(c.WRITEFUNCTION, buf.write)
    c.setopt(c.PROXY, proxy_url)
    c.setopt(c.PROXYPORT, 8080)
    c.setopt(c.PROXYTYPE, c.PROXYTYPE_HTTP)
    c.setopt(c.PROXYAUTH, c.HTTPAUTH_NTLM)
    c.setopt(c.PROXYUSERPWD, creds)
    c.perform()
    buf.close()
    return c.getinfo(c.RESPONSE_CODE)
Where I'm having problems is with subprocess. I do understand that subprocess does not allow you to use export, since it is not really a command. See Subprocess module errors with 'export' in python on linux?
This is my implementation:
finalCommand = ["/bin/sh", "-c"]
finalCommand.append(http_proxy)
finalCommand.append(https_proxy)
for x in bashCommand:
    finalCommand.append(x)

print subprocess.call(finalCommand)
process = subprocess.Popen(finalCommand, stdout=subprocess.PIPE)
out, err = process.communicate()
print "Output ... \n %s" % (out)
if err == None:
    print "No errors"
else:
    print "Errors ... \n %s" % (err)
Unfortunately, after several tests, my program always returns no output and no error.
I have printed the output of the curl, so I know the decode, encode, or proxy isn't the issue. Any suggestions?
POST-ANSWER EDIT:
Interaction between Python script and linux shell
env did solve my problem, but I also had to refer to the thread above. Some of the commands I ran were interactive ones, and as the thread explains, PIPE doesn't work properly with interactive programs.
It's hard to be sure without knowing exactly what commands you're trying to run, but I'm pretty sure what you want to do here is just set up the environment for your subprocess, using the env argument to Popen:
env = dict(os.environ)
env['http_proxy'] = proxy
env['https_proxy'] = proxy

for command in commands:
    out = subprocess.check_output(command, env=env)
If you want to modify your own environment, rather than just the subprocesses' environments, just modify os.environ in place. (See the documentation for platform-specific issues, and how to deal with them.)
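A runnable sketch of the env= approach: the child process sees the extra variables while the parent's os.environ is left untouched (the proxy URL is a made-up example value):

```python
import os
import subprocess
import sys

env = dict(os.environ)
env['http_proxy'] = 'http://user:secret@proxy.server.com:8080'  # made-up value
env['https_proxy'] = env['http_proxy']

# the child prints the variable it inherited from the modified environment
out = subprocess.check_output(
    [sys.executable, '-c', 'import os; print(os.environ["http_proxy"])'],
    env=env, universal_newlines=True,
)
```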
Meanwhile, the reason you're getting no errors is simple:
process = subprocess.Popen(finalCommand,stdout=subprocess.PIPE)
out, err = process.communicate()
If you don't pass stderr=subprocess.PIPE to the Popen constructor, it doesn't capture stderr, so err ends up as None.
As a side note, you almost never want to check == None. Often, just if not err: is sufficient. When it's not, if err is not None: is almost always what you want. The set of cases where == None is actually the right comparison is vanishingly small. See the Programming Recommendations in PEP 8 for (slightly) more details.
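To illustrate the distinction: None and an empty byte string are both falsy, so if not err: covers the common "no error output" cases, while an identity check distinguishes "stderr was not captured" from "stderr was captured but empty":

```python
err = None   # stderr was not captured at all
assert not err

err = b''    # stderr was captured and turned out empty
assert not err           # truthiness cannot tell these two apart...
assert err is not None   # ...but an identity check can
```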
And one more side note: you can replace that whole loop with finalCommand.extend(bashCommand). The list.extend method does the same thing as looping over an iterable and appending each element one by one, except that it's more readable, harder to get wrong, more concise, and faster.