Issue when combining pipe command and sudo with Python Subprocess - python

I am attempting to utilize the Python module subprocess to automate a terminal command on Mac. Specifically, I am running a certain command to create port mappings on my machine. However, the command in question requires both root privileges and piping:
echo "
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
" | sudo pfctl -ef -
In order to pass my root password to the shell command with subprocess, I followed the code example found here to create the script below:
from subprocess import PIPE, Popen
p = Popen(['echo', '"rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080"\n'], stdin=PIPE, stderr=PIPE, universal_newlines=True)
p2 = Popen(['sudo', '-S']+['pfctl', '-ef', '-'], stdin=p.stdout, stderr=PIPE, universal_newlines=True)
return p2.communicate('my_root_password\n')[1]
Note that to implement the piping of the echo output to the command pfctl -ef -, I have created two Popen objects and passed the stdout of the first object to the stdin parameter of the second, as recommended in the subprocess docs, and am using Popen.communicate to write the root password to stdin.
However, my script above is not working, as I am still prompted in the terminal to enter my root password. Strangely, I am able to successfully write my root password to stdin when using a command without piping, for instance, when running sudo pfctl -s nat (to display my current port mapping settings):
p = Popen(['sudo', '-S']+'pfctl -s nat'.split(), stdin=PIPE, stderr=PIPE, universal_newlines=True)
print(p.communicate('root_password\n')[1])
The above code works, as the mapping configuration is displayed without any password prompt.
How can my first Python script be changed so that I am not prompted to enter my root password, having already utilized Popen.communicate to write the password to stdin?
I am running this code on macOS Sierra 10.12.5

I think this is just a simple case of the pipes not being connected up properly. You don't specify a pipe for the stdout of the first process, so by the looks of things its output just gets printed to the terminal and the process finishes.
When the second process starts, it prompts for the password and, as far as I can see, receives it correctly. However, the communicate method then closes the input and waits for the process to finish. The output of the first process never reaches the second, which is why your script isn't working. Instead of creating a separate echo process, why not just send all the text data you need with communicate?
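As an aside, the two-process wiring the question attempted does work once the first process's stdout is actually a pipe. A minimal sketch with harmless stand-in commands (echo and tr here instead of echo and sudo pfctl):

```python
from subprocess import PIPE, Popen

# First process: its stdout must be a PIPE for the next process to read it.
p1 = Popen(["echo", "hello pipes"], stdout=PIPE)
# Second process reads the first one's stdout as its stdin.
p2 = Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout, stdout=PIPE,
           universal_newlines=True)
p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits early (per the docs)
out, _ = p2.communicate()
print(out.strip())  # -> HELLO PIPES
```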
The other problem you appear to have (I don't have a Mac to check) is that sudo is printing the prompt directly to the terminal (i.e. via /dev/tty rather than stdout). On my version of sudo (on Debian), adding the -S option causes the prompt to be printed to stderr, but it looks like -S doesn't do this on a Mac. Instead, try disabling the prompt with -p ''.
Putting everything together, this should work:
from subprocess import PIPE, Popen
from getpass import getpass
password = getpass()
cmd = ['sudo', '-k', '-S', '-p', '', 'pfctl', '-ef', '-']
p = Popen(cmd, stdin=PIPE, stderr=PIPE, universal_newlines=True)
text = password + '\n'
text += 'rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080\n'
p.communicate(text)
Security Note
This answer was updated to not use a plain-text password. See the comments below for a good example of why this is a bad idea! Note also that storing passwords in memory with Python isn't completely secure: if the memory is swapped to disk, the password will be written out in plain text. In a lower-level language, the mlock system call would be used to prevent any memory containing the password from being swapped.

Related

Can't close an SSH connection opened with Popen

I created a class method (this will only run on Linux) that sends a list of commands to a remote computer over SSH and returns the output using subprocess.Popen:
def remoteConnection(self, list_of_remote_commands):
    ssh = subprocess.Popen(["ssh", self.ssh_connection_string],
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, universal_newlines=True, bufsize=0)
    # send ssh commands to stdin
    for command in list_of_remote_commands:
        ssh.stdin.write(command + "\n")
    ssh.stdin.close()
    output_dict = {'stdin': list(ssh.stdin), 'stdout': list(ssh.stdout), 'stderr': list(ssh.stderr)}
    return output_dict
Whilst I'm still getting to grips with the subprocess module, I'd read quite a bit about Popen and no one ever mentioned closing it (SSH Connection with Python 3.0, Proper way to close all files after subprocess Popen and communicate, https://docs.python.org/2/library/subprocess.html), so I assumed that wasn't a problem.
However, when testing this out in ipython outside of a function, I noticed that the variable ssh still seemed active. I tried closing ssh.stdin, ssh.stdout and ssh.stderr, and even ssh.close(), ssh.terminate() and ssh.kill(), but nothing seemed to close it. I thought perhaps it didn't matter, but my function will be called many times over months or even years, so I don't want it to spawn a new process every time it is run, otherwise I'm quickly going to hit my maximum process limit. So I used ssh.pid to find the PID, looked it up with ps aux | grep PID, and the process was still there even after doing all of the above.
I also tried:
with subprocess.Popen(["ssh", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, bufsize=0) as ssh:
instead of:
ssh = subprocess.Popen(["ssh", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,universal_newlines=True, bufsize=0)
I also remember solving a similar problem a while back using ssh -T but even:
ssh = subprocess.Popen(["ssh", "-T", self.ssh_connection_string], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,universal_newlines=True, bufsize=0)
Didn't work.
I'm sure I would have found something about closing Popen if I needed to but then why is the process still open on my computer - can anyone help me understand what's going on here?
In your case, you have a deadlock here:
output_dict = {'stdin': list(ssh.stdin), 'stdout': list(ssh.stdout), 'stderr': list(ssh.stderr)}
Mostly because list(ssh.stdin) blocks forever: trying to read the standard input of a process doesn't work. (There's also an extra risk of deadlock because you redirected both standard output and standard error to different pipes without using threading to consume them.)
You mean to use ssh.communicate, passing the whole input as an argument. Simply do:
command_input = "".join(["{}\n".format(x) for x in list_of_remote_commands])
output,error = ssh.communicate(command_input) # may need .encode() for python 3
return_code = ssh.wait()
then
output_dict = {'stdin': list_of_remote_commands, 'stdout': output.splitlines(), 'stderr': error.splitlines()}
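Put together, the fixed flow looks like the sketch below. A local sh stands in for ["ssh", connection_string], and the function name is invented here, so the sketch stays self-contained and runnable:

```python
import subprocess

def run_remote_commands(argv, commands):
    """Send a list of commands through one process via communicate() (sketch)."""
    proc = subprocess.Popen(
        argv,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    command_input = "".join("{}\n".format(c) for c in commands)
    # communicate() writes the input, closes stdin, and drains both pipes,
    # so neither side can deadlock on a full pipe buffer.
    output, error = proc.communicate(command_input)
    return {
        "stdin": commands,
        "stdout": output.splitlines(),
        "stderr": error.splitlines(),
        "returncode": proc.returncode,
    }

# A local shell in place of ssh, purely to keep the sketch runnable:
result = run_remote_commands(["sh"], ["echo one", "echo two"])
```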
I may add that in this particular ssh case, using the paramiko module is better (python paramiko ssh) and avoids using subprocess completely.

Subprocess remote command execution in python

I am trying to execute remote command using subprocess:
import subprocess
x=subprocess.Popen(['ssh','15.24.13.14', ' ps -ef | grep -i upgrade | wc -l'],stdout=subprocess.PIPE)
y=x.stdout.read()
print y
print '\n'
z=int(y)
print z
I need to get number of processes runing with 'upgrade' in their name. But for some reason, script is not executed well. I get message:
"Warning: Permanently added '15.24.13.14' (RSA) to the list of known hosts."
And then nothing happens.
Where is the problem?
The problem is that if you are connecting to the given host via ssh for the first time, it asks you to add this host to the known hosts list, and the user has to confirm this by pressing 'y'. Since you didn't, it hangs and does nothing.
You should either:
turn off host verification: ssh -o "StrictHostKeyChecking no" user@host
send 'y' to the ssh input, or
add this host manually to the known hosts, or
change the method of performing remote calls.
Because you didn't specify any stderr for subprocess.Popen, the standard error is printed directly to your display. This is why you will always see the Warning: Permanently added '<hostname>' (ECDSA) to the list of known hosts. message unless you redirect stderr to a subprocess.PIPE (or /dev/null).
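To see this stderr behaviour in isolation, here is a small self-contained sketch that uses the Python interpreter itself as the noisy child (the warning text is made up):

```python
import subprocess
import sys

# A child that, like ssh, writes its warning to stderr rather than stdout.
child = [sys.executable, "-c",
         "import sys; sys.stderr.write('Warning: some noise\\n')"]

# Without stderr=PIPE the warning would go straight to the display;
# with it, the warning is captured and stdout stays empty.
p = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
```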
Also, to avoid known-hosts file issues, here is a little trick (be careful with it, it's kind of dangerous):
from subprocess import Popen, PIPE
p = Popen(['ssh', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', hostname, 'ps aux | grep -i upgrade | wc -l'], stdout=PIPE, stderr=PIPE)
result = int(p.communicate()[0][:-1]) # don't forget there's the \n at the end.
Why is it dangerous? Because in the case of a MITM attack, you have no knowledge base of the remote, so you would consider the attacker to be your remote destination. Be careful about over-using this feature.

Pseudo terminal will not be allocated error - ssh - sudo - websocket - subprocess

I basically want to create a web page through which a unix terminal at the server side can be reached and commands can be sent to and their results can be received from the terminal.
For this, I have a WSGIServer. When a connection is opened, I execute the following:
def opened(self):
    self.p = Popen(["bash", "-i"], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    self.p.stdout = Unbuffered(self.p.stdout)
    self.t = Thread(target=self.listen_stdout)
    self.t.daemon = True
    self.t.start()
When a message comes to the server from the client, it is handled in the following function, which only redirects the incoming message to the stdin of subprocess p, which is an interactive bash:
def received_message(self, message):
    print(message.data, file=self.p.stdin)
Then the output of the bash is read in the following function within a separate thread t, which only sends the output to the client.
def listen_stdout(self):
    while True:
        c = self.p.stdout.read(1)
        self.send(c)
In such a system, I am able to send any command (ls, cd, mkdir, etc.) to the bash working at the server side and receive its output. However, when I try to run ssh xxx@xxx, the error pseudo-terminal will not be allocated because stdin is not a terminal is shown.
Also, in a similar way, when I run sudo ..., the prompt for the password is not sent to the client; instead it appears on the terminal of the server script.
I am aware of expect; however, only for such sudo and ssh usage, I do not want to mess my code up with expect. Instead, I am looking for a general solution that can fake sudo and ssh and redirect prompt's to the client.
Is there any way to solve this? Ideas are appreciated, thanks.
I found the solution. What I needed was to create a pseudo-terminal and, on the slave side of the tty, call setsid() to make the process the leader of a new session, then run the commands there.
Details are here:
http://www.rkoucha.fr/tech_corner/pty_pdip.html
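A minimal sketch of that idea with the standard pty module is below; sh -i stands in here for the real bash/ssh/sudo child, and the commands sent to it are arbitrary:

```python
import os
import pty
import subprocess

# Run an interactive child on the slave end of a pseudo-terminal so that
# programs which insist on a tty (ssh, sudo) see one. start_new_session=True
# makes the child call setsid(), so the pty becomes its controlling terminal.
master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(
    ["sh", "-i"],
    stdin=slave_fd,
    stdout=slave_fd,
    stderr=slave_fd,
    start_new_session=True,
)
os.close(slave_fd)          # the parent keeps only the master end

# Everything written to the master appears as terminal input to the child.
os.write(master_fd, b"echo hello from pty\nexit\n")

chunks = []
while True:
    try:
        data = os.read(master_fd, 1024)
    except OSError:         # EIO on Linux once the slave side is closed
        break
    if not data:
        break
    chunks.append(data)
proc.wait()
os.close(master_fd)
transcript = b"".join(chunks).decode(errors="replace")
```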

Subprocess Popen not capturing wget --spider command result

My understanding of capturing the output of a subprocess command as a string was to set stdout=subprocess.PIPE and use command.communicate() to capture result, error.
For example, typing the following:
command = subprocess.Popen(["nmcli", "con"], stdout=subprocess.PIPE)
res, err = command.communicate()
produces no output to the terminal and stores all my connection information as a byte literal in the variable res. Simple.
It falls apart for me here though:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = subprocess.Popen(["wget", "--spider", url], stdout=subprocess.PIPE)
This prints the output of the command to the terminal, then pauses execution until a keystroke is input by user. Subsequently running command.communicate() returns an empty bytes literal, b''.
Particularly odd to me is the pause in execution as issuing the command in bash just prints the command result and directly returns to the prompt.
All my searches just find Q&A about how to capture subprocess results in general, not anything about certain commands having to be captured in a different manner or anything particular about wget and subprocess.
Additional note, I have been able to use the wget command with subprocess to download files (no --spider option) without issue.
Any help greatly appreciated, this one has me stumped.
wget writes its report to stderr, so because you are not piping stderr you see that output on the terminal when you run the command, while stdout stays empty:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = Popen(["wget", "--spider", url],stdout=PIPE,stderr=PIPE)
out,err = command.communicate()
print("This is stdout: {}".format(out))
print("This is stderr: {}".format(err))
This is stdout: b''
This is stderr: b'Spider mode enabled. Check if remote file exists.\n--2015-02-09 18:00:28-- http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent\nResolving torrent.ubuntu.com (torrent.ubuntu.com)... 91.189.95.21\nConnecting to torrent.ubuntu.com (torrent.ubuntu.com)|91.189.95.21|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37429 (37K) [application/x-bittorrent]\nRemote file exists.\n\n'
I've never been asked anything by wget before, but some processes (e.g. ssh) grab the terminal device (tty) directly to ask for a password, bypassing the process pipes you've set up.
To automate cases like this, you need to fake a terminal instead of a normal pipe. There are recipes out there using termios and the like, but my suggestion would be to use the module "pexpect", which is written to do exactly that.

Python subprocesses don't output properly?

I don't think I'm understanding python subprocess properly at all but here's a simple example to illustrate a point I'm confused about:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
print lookup_client.poll()
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
The output comes back as
None
None
None
and never completes. Why is this? Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
Ultimately, I'm running into a problem in a much larger program that does something similar using netcat and another program running locally instead of two versions of nc. Either way, I haven't been able to write to or read from them properly. However, when I run them in the python console everything runs as I would expect. All this has me very frustrated. Let me know if you have any insights!
EDIT: I'm running this on Ubuntu Linux 12.04, when I man nc, I get the BSD General Commands manual so I'm assuming this is BSD netcat.
The problem here is that you're sending SIGINT to the process. If you just close the stdin, nc will close its socket and quit, which is what you want.
It sounds like you're actually using nc for the client (although not the server) in your real program, which means you have two easy fixes:
Instead of lookup_client.send_signal(subprocess.signal.SIGINT), just do lookup_client.stdin.close(). nc will see this as an EOF on its input, and exit normally, at which point your server will also exit.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
When I run this, the most common output is:
None
None
magic
Lookup server terminated properly
Occasionally the second None is a 0 instead, and/or it comes after magic instead of before, but otherwise, it's always all four lines. (I'm running on OS X.)
For this simple case (although maybe not your real case), just use communicate instead of trying to do it manually.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.communicate("magic\n")
lookup_server.wait()
print "Lookup server terminated properly"
Meanwhile:
Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
As the docs say:
On Unix with shell=True… If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself.
So, subprocess.Popen(["nc", "-l", "5050"], shell=True) does /bin/sh -c 'nc' -l 5050, and sh doesn't know what to do with those arguments.
You probably do want to use an array of args, but then you have to get rid of shell=True, which is a good idea anyway, because the shell isn't helping you here.
One more thing:
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
This may print either -2 or None, depending on whether the client has finished responding to the SIGINT and been killed before you poll it. If you want to actually get that -2, you have to call wait rather than poll (or do something else, like loop until poll returns non-None).
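That difference between poll and wait can be sketched with a throwaway child (the sleep lengths here are arbitrary):

```python
import signal
import subprocess
import sys
import time

# A child that would run for a long time if left alone.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
time.sleep(1)                    # give the interpreter time to start up
proc.send_signal(signal.SIGINT)  # asynchronous: delivery time is not guaranteed

maybe = proc.poll()  # may still be None: the child hasn't necessarily died yet
code = proc.wait()   # blocks until the final exit status is available
```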
Finally, why didn't your original code work? Well, sending SIGINT is asynchronous; there's no guarantee as to when it might take effect. For one example of what could go wrong, it could take effect before the client even opens the socket, in which case the server is still sitting around waiting for a client that never shows up.
You can throw in a time.sleep(5) before the signal call to test this—but obviously that's not a real fix, or even an acceptable hack; it's only useful for testing the problem. What you need to do is not kill the client until it's done everything you want it to do. For complex cases, you'll need to build some mechanism to do that (e.g., reading its stdout), while for simple cases, communicate is already everything you need (and there's no reason to kill the child in the first place).
Your invocation of nc is wrong; here is what happens if I invoke it the same way from the command line:
# Server window
[vyktor@grepfruit ~]$ nc -l 5050
# Client window
[vyktor@grepfruit ~]$ nc localhost 5050
[vyktor@grepfruit ~]$ echo $?
1
Which means failure (the 1 in $?).
Once you use -p:
-p, --local-port=NUM local port number
NC starts listening, so:
# Server window
[vyktor@grepfruit ~]$ nc -l -p 5050
# Keeps hanging
# Client window
[vyktor@grepfruit ~]$ echo Hi | nc localhost 5050
# Keeps hanging
Once you add -c to client invocation:
-c, --close close connection on EOF from stdin
You'll end up with this:
# Client window
[vyktor@grepfruit ~]$ echo Hi | nc localhost 5050 -c
[vyktor@grepfruit ~]$
# Server window
[vyktor@grepfruit ~]$ nc -l -p 5050
Hi
[vyktor@grepfruit ~]$
So you need this piece of Python code:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l -p 5050", shell=True)
lookup_client = subprocess.Popen("nc -c localhost 5050", shell=True,
                                 stdin=subprocess.PIPE)
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()  # either close stdin
lookup_client.send_signal(subprocess.signal.SIGINT)  # or send SIGINT to kill the client
lookup_server.wait()
print "Lookup server terminated properly"
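The EOF-on-closed-stdin behaviour that both fixes rely on can be demonstrated without nc at all; in this sketch a small Python child stands in for the client and reads its stdin until EOF:

```python
import subprocess
import sys

# A child that, like nc with -c, reads stdin until EOF, then exits.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]
proc = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
proc.stdin.write("magic\n")
proc.stdin.close()            # EOF: the child's read() returns and it exits
out = proc.stdout.read()
proc.wait()
```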
