How can I implement port forwarding in a Paramiko server? - python

A "direct-tcpip" request (commonly known as port-forwarding) occurs when you run SSH as ssh user#host -L <local port>:<remote host>:<remote port> and then try to connect over the local port.
I'm trying to implement direct-tcpip on a custom SSH server. Paramiko offers the check_channel_direct_tcpip_request method in the ServerInterface class to check whether the "direct-tcpip" request should be allowed, which can be implemented as follows:
class Server(paramiko.ServerInterface):
    # ...
    def check_channel_direct_tcpip_request(self, chanid, origin, destination):
        return paramiko.OPEN_SUCCEEDED
However, when I use the aforementioned SSH command, and connect over the local port, nothing happens, probably because I need to implement the connection handling myself.
Reading the documentation, it also appears that the channel is only opened after OPEN_SUCCEEDED has been returned.
How can I handle the direct-tcpip request after returning OPEN_SUCCEEDED for the request?

You do indeed need to set up your own connection handler. This is a lengthy answer explaining the steps I took - some of it you will not need if your server code already works. The whole running server example is here: https://controlc.com/25439153
I used the Paramiko example server code from https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a basis and grafted some socket code onto it. This does not have any error handling, thread-related niceties or anything else "proper" for that matter, but it allows you to use the port forwarder.
It also has a lot of things you do not need, as I did not want to start tidying up a dummy example. Apologies for that.
To start with, we need the forwarder tools. This creates a thread to run the "tunnel" forwarder. It also answers your question about where you get your channel: you accept() it from the transport, but you need to do that in the forwarder thread. As you stated in your OP, it is not there yet in the check_channel_direct_tcpip_request() function, but it will eventually become available to the thread.
def tunnel(sock, chan, chunk_size=1024):
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(chunk_size)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(chunk_size)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()
class ForwardClient(threading.Thread):
    daemon = True

    def __init__(self, address, transport, chanid):
        threading.Thread.__init__(self)
        self.socket = socket.create_connection(address)
        self.transport = transport
        self.chanid = chanid

    def run(self):
        while True:
            chan = self.transport.accept(10)
            if chan is None:
                continue
            print("Got new channel (id: %i)." % chan.get_id())
            if chan.get_id() == self.chanid:
                break
        peer = self.socket.getpeername()
        try:
            tunnel(self.socket, chan)
        except:
            pass
Back to the example server code. Your server class needs to have transport as a parameter, unlike in the example code:
class Server(paramiko.ServerInterface):
    # 'data' is the output of base64.b64encode(key)
    # (using the "user_rsa_key" files)
    data = (
        b"AAAAB3NzaC1yc2EAAAABIwAAAIEAyO4it3fHlmGZWJaGrfeHOVY7RWO3P9M7hp"
        b"fAu7jJ2d7eothvfeuoRFtJwhUmZDluRdFyhFY/hFAh76PJKGAusIqIQKlkJxMC"
        b"KDqIexkgHAfID/6mqvmnSJf0b5W8v5h2pI/stOSwTQ+pxVhwJ9ctYDhRSlF0iT"
        b"UWT10hcuO4Ks8="
    )
    good_pub_key = paramiko.RSAKey(data=decodebytes(data))

    def __init__(self, transport):
        self.transport = transport
        self.event = threading.Event()
Then you will override the relevant method and create the forwarder there:
def check_channel_direct_tcpip_request(self, chanid, origin, destination):
    print(chanid, origin, destination)
    f = ForwardClient(destination, self.transport, chanid)
    f.start()
    return paramiko.OPEN_SUCCEEDED
You need to add the transport parameter to the creation of the server class:
t.add_server_key(host_key)
server = Server(t)
This example server requires you to have an RSA private key named test_rsa.key in the directory. Create any RSA key there; you do not actually need it, but I did not bother to strip its use out of the code.
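If you do not have a spare key around, a minimal sketch for generating one with Paramiko itself (ssh-keygen works just as well) and saving it under the expected name:
    import paramiko

    # Generate a throwaway 2048-bit RSA key and write it where the demo server looks for it.
    key = paramiko.RSAKey.generate(2048)
    key.write_private_key_file("test_rsa.key")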
You can then run your server (runs on port 2200) and issue
ssh -p 2200 -L 2300:www.google.com:80 robey@localhost
(password is foo)
Now when you try
telnet localhost 2300
and type something there, you will get a response from Google.
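If you prefer to test from Python instead of telnet, a minimal sketch (assuming the server above is running and forwarding local port 2300 to www.google.com:80) would be:
    import socket

    # Connect through the forwarded local port and issue a bare HTTP request.
    s = socket.create_connection(("localhost", 2300))
    s.sendall(b"GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))
    s.close()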

Related

How to get IP address and port of newly accepted connection in Python asyncio server?

I'm using the asyncio library in Python 3.8
https://docs.python.org/3/library/asyncio.html
I am creating a server, and in the "newly accepted connection" callback function, I want to find out the remote IP address and port of the new client.
The arguments to the callback function are one instance each of StreamReader and StreamWriter used to read and write from the client. Is there a straightforward way to find the IP address and port of the streams? Note that I want to do this for both SSL and non-SSL connections.
Here I create the server:
async def create_server(self, new_client_cb, host, port):
    srvsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srvsocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srvsocket.bind((host, port))
    srvsocket.listen(5)
    return await asyncio.start_server(new_client_cb, sock=srvsocket, start_serving=False)
I pass in the callback function, which adheres to the documentation and accepts an instance of a StreamReader and a StreamWriter.
Here is said callback function. It's part of a class, hence the leading self argument.
async def _new_client(self, client_r, client_w):
    try:
        self.logger.debug("New client on incoming proxy")
        dests_r = {}
        dests_w = {}
        for addr in self.config['addrlist']:
            host, port = addr.split(':')
            host = socket.gethostbyname(host)
            self.logger.debug(f"Connecting to {addr}...")
            r, w = await self.protocol.open_connection(host, port)
            self.logger.debug(f"Connected to {addr}")
            dests_r[addr] = r
            dests_w[addr] = w
        done, pending = await asyncio.wait(
            [self._tunnel(list(dests_r.values()), [client_w]), self._tunnel([client_r], list(dests_w.values()))],
            return_when=asyncio.FIRST_EXCEPTION
        )
        for result in done:
            if result.exception():
                raise result.exception()
    except Exception as e:
        self.logger.error(f"Caught exception: {str(e)}")
        traceback.print_exc()
There's a lot going on in that function related to other aspects of my application.
I think my question ultimately boils down to: how do I find out the remote address and port associated with the new client, given these inputs, the StreamReader and StreamWriter? I'm looking into asyncio's Transport classes (https://docs.python.org/3/library/asyncio-protocol.html), but perhaps others can point me in the right direction.
With regard to asyncio's Transport classes, I can see that they allow you to query "extra" information via the get_extra_info(str) function, e.g.:
client_r._transport.get_extra_info('socket')
Okay, this works for non-encrypted (non-SSL) traffic. But I can't query the socket on an encrypted transport. I can only get the SSL object:
https://docs.python.org/3/library/ssl.html#ssl.SSLObject
This object provides an attribute "server_hostname" which will give me the hostname/IP that was used to connect, so at this point I just need the port.
OK, I was able to figure it out eventually.
I really just needed to pass a different key to get_extra_info.
Both SSL and non-SSL transports support the "peername" key.
So I modified my code to the following:
client_r._transport.get_extra_info('peername')
client_w._transport.get_extra_info('peername')
A separate issue I was running into is that I was querying the 'peername' key after the Stream had been closed, and so I was getting None back.
More information on get_extra_info can be found in the asyncio documentation.
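As a side note, you do not need to reach into the private _transport attribute: the StreamWriter exposes get_extra_info() directly. A minimal sketch of a callback using it (the name handle_client is made up):
    async def handle_client(reader, writer):
        # 'peername' works for both SSL and plain TCP transports,
        # as long as the stream has not been closed yet.
        host, port = writer.get_extra_info('peername')[:2]
        print(f"New client from {host}:{port}")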

Can't receive data from socket

I'm making a client-server program, and there is a problem with the client part.
The problem is that receiving data blocks forever. I've tested this particular class, listed below, in a Python interpreter. I successfully (maybe not) connected to Google, but then the program stopped in recvData() at data = self.socket.recv(1024).
class client():
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.socket = self.connect()
        self.command = commands()

    def connect(self):
        '''
        Connect to a remote host.
        '''
        try:
            import socket
            return socket.create_connection((self.host, self.port))
        except socket.error:
            print(":: Failed to connect to a remote port : ")

    def sendCommand(self, comm):
        '''
        Send command to remote host
        Returns server output
        '''
        comman = comm.encode()
        # for case in switch(comman):
        #     if case(self.command.RETRV_FILES_LIST.encode()):
        #         self.socket.send(b'1')
        #         return self.recvData()
        #     if case():
        #         print(":: Got wrong command")
        if (comman == b'1'):
            self.socket.send(b'1')
            return self.recvData()

    def recvData(self):
        '''
        Receives all the data
        '''
        i = 0
        total_data = []
        while(True):
            data = self.socket.recv(1024)
            if not data: break
            total_data.append(data)
            i += 1
            if i > 9:
                break
        return total_data
About the commented part: I thought the problem was in the Case implementation, so I used a plain if statement instead. But that wasn't it.
Your problem is that self.socket.recv(1024) only returns an empty string when the socket has been shut down on the server side and all data has been received. The way you coded your client, it has no idea that the full message has been received and waits for more. How you deal with the problem depends very much on the protocol used by the server.
Consider a web server. It sends a line-delimited header including a content-length parameter telling the client exactly how many bytes it should read. The client scans for newlines until the header is complete and then uses that value to do recv(exact_size) (if large, it can read chunks instead) so that the recv won't block when the last byte comes in.
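As an illustration, a minimal sketch of such a helper that reads an exact, known number of bytes (recv_exact is a made-up name, not part of the socket module) could look like this:
    def recv_exact(sock, nbytes, chunk_size=1024):
        # Keep calling recv() until exactly nbytes have arrived or the peer closes early.
        chunks = []
        remaining = nbytes
        while remaining > 0:
            data = sock.recv(min(chunk_size, remaining))
            if not data:
                raise ConnectionError("peer closed before the full message arrived")
            chunks.append(data)
            remaining -= len(data)
        return b''.join(chunks)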
Even then, there are decisions to make. The client knows how large the web page is but may want to send partial data to the caller so it can start painting the page before all the data is received. Of course, the caller needs to know that this is what happens - there is a protocol or set of rules for the API itself.
You need to define how the client knows a message is complete and what exactly it passes back to its caller. A great way to deal with the problem is to let some other protocol such as zeromq (http://zeromq.org/) do the work for you. A simple Python client/server can be implemented with xmlrpc. And there are many other ways.
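For example, a rough sketch of the xmlrpc route (the function name list_files and port 8000 are made up for illustration) - message framing and parsing are handled for you:
    # server side
    from xmlrpc.server import SimpleXMLRPCServer

    def list_files():
        # Hypothetical command handler; replace with your own logic.
        return ["a.txt", "b.txt"]

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(list_files)
    server.serve_forever()

    # client side (run in another process):
    # import xmlrpc.client
    # proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    # print(proxy.list_files())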
You said you are implementing a client/server program, but then you mentioned "connected to Google" and telnet... These are all very different things, and a single client strategy won't work with all of them.

python asyncore server send data to only one sock

I need to send data to only one connection. How can I do that?
server:
import asyncore, socket, threading

class EchoHandler(asyncore.dispatcher_with_send):
    def __init__(self,sock):
        asyncore.dispatcher.__init__(self,sock=sock);
        self.out_buffer = ''

    def handle_read(self):
        datos = self.recv(1024);
        if datos:
            print(datos);
            self.sock[0].send("signal");

class Server(asyncore.dispatcher):
    def __init__(self,host='',port=6666):
        asyncore.dispatcher.__init__(self);
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM);
        self.set_reuse_addr();
        self.bind((host,port));
        self.listen(1);

    def handle_accept(self):
        self.sock,self.addr = self.accept();
        if self.addr:
            print self.addr[0];
        handler = EchoHandler(self.sock);

    def handle_close(self):
        self.close();

cliente = Server();
asyncore.loop()
This line is an example that fails, but I want to send data to socket zero:
self.sock[0].send("probando");
For example, if I have 5 sockets, I want to choose which one to send the data to.
Explanation
You tried to get sock from a list and execute its send method. This causes an error, because EchoHandler neither has a sock attribute nor is it a list of sockets. The right approach is to get the instance of EchoHandler you want (based on, e.g., IP address, or slots assigned by some user-defined protocol) and then use its send method - here (with dispatcher_with_send) it is also better to use the send buffer for that than to call send() directly.
An EchoHandler instance is created on every accepted connection - from then on it is an established channel for communication with the given host. The Server listens for not-yet-established connections, while the EchoHandlers use the socks (given by Server in handle_accept) for established ones, so there are as many EchoHandler instances as connections.
Solution
You need to make a list of connections (EchoHandler instances; we'll use the buffer, not the socket's send() directly) and give them the opportunity to delete their entries on close:
class Server(asyncore.dispatcher):
    def __init__(self, host='', port=6666):
        ...
        self.connections = []

    def handle_accept(self):
        ...
        handler = EchoHandler(self.sock, self);
        self.connections.append(handler)
        ...

    def remove_channel(self, sock):
        if sock in self.connections:
            self.connections.remove(sock)

class EchoHandler(asyncore.dispatcher_with_send):
    def __init__(self, sock, server):
        ...
        self.server = server

    def handle_read(self):
        datos = self.recv(1024);
        if datos:
            print(datos);
            self.out_buffer += 'I echo you: ' + datos

    def handle_close(self):
        self.server.remove_channel(self)
        self.close()
EchoHandler is now aware of the server instance and can remove its entry from the list. This echo example is now fully functional, and with a working connection list we can proceed to asynchronous sending.
At this point you can already use the list as you wanted - cliente.connections[0].out_buffer += 'I am data' will do the work - but you'd probably want better control over this. If yes, read on.
'For whom, by me'
In order to send data asynchronously, we need to separate asyncore from our control thread, in which we'll enter what to send and to whom.
class ServerThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True  # if thread is a daemon, it'll be killed when main program exits
        self.cliente = Server()
        self.start()

    def run(self):
        print 'Starting server thread...'
        asyncore.loop()

thread = ServerThread()

while True:
    msg = raw_input('Enter IP and message divided by semicolon: ')
    if msg == 'exit':
        break
    ip, data = msg.split('; ')
    for sock in thread.cliente.connections:
        if sock.addr[0] == ip:
            sock.out_buffer += data
            break
This will work and wait for destination IP and data. Remember to have client connected.
As I said, you can use anything to indicate which socket is which. It can be a class with fields for eg. IP and username, so you could send data only to peers whose usernames start with 'D'.
But...
This solution is a bit rough and needs better knowledge of the asyncore module if you want to send data more nicely (here it has some delay due to how select() works) and make good use of this socket wrapper.
Here and here are some resources.
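As a side note on the delay mentioned above: asyncore.loop() waits inside select() for up to 30 seconds by default when nothing is happening, so data appended to out_buffer from the control thread may sit there until the next pass. A rough workaround (not a proper fix) is to lower the loop timeout, e.g. the run() method above could become:
    def run(self):
        print 'Starting server thread...'
        # Wake up at least once per second so freshly buffered data is sent sooner.
        asyncore.loop(timeout=1)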
Syntax note
Although your code will now work, it has some not-so-nice things. Semicolons at the ends of statements don't cause errors, but making nearly every variable a class attribute can lead to them. For example here:
def handle_accept(self):
    self.sock,self.addr = self.accept();
    if self.addr:
        print self.addr[0];
    handler = EchoHandler(self.sock);
self.sock and self.addr might be used in that class for something else (e.g. socket-related things, addresses), and overriding them could cause trouble. Methods used for requests should never save the state of previous actions.
I hope Python will be good enough for you to stay with it!
Edit: sock.addr[0] can be used instead of sock.socket.getpeername()[0] but it requires self.addr not to be modified, so handle_accept() should look like this:
def handle_accept(self):
    sock, addr = self.accept()
    if addr:
        print addr[0]
    handler = EchoHandler(sock, self)
    self.connections.append(handler)

Python Socket connection class

I'm trying to create a small program that will log information output from a device via TCP.
Basically the device just streams data out, which I want to capture and dump into a database for dealing with later.
But the device reboots, so I need to be able to reconnect when the socket closes without any human interference.
So this is what I have so far:
import socket, time, logging, sys, smtplib  # Import socket module

logging.basicConfig(filename='Tcplogger.log',level=logging.DEBUG,format='%(asctime)s : %(levelname)s : %(message)s')
logging.info('|--------------------------------------|')
logging.info('|--------------- TCP Logger Starting---|')
logging.info('|--------------------------------------|')

host = '127.0.0.01'     # host or IP address
port = 12345            # output port
retrytime = 1           # reconnect time
reconnectattemps = 10   # Number of times to try and reconnect

class TCPLogger:
    def __init__(self):
        logging.debug('****Trying connection****')
        print('****Trying connection****')
        self.initConnection()

    def initConnection(self):
        s = socket.socket()
        try:
            s.connect((host, port))
            logging.debug('****Connected****')
        except IOError as e:
            while 1:
                reconnectcount = 0;
                logging.error(format(e.errno)+' : '+format(e.strerror))
                while 1:
                    reconnectcount = reconnectcount + 1
                    logging.error('Retrying connection to Mitel attempt : '+str(reconnectcount))
                    try:
                        s.connect((host, port))
                        connected = True
                        logging.debug('****Connected****')
                    except IOError as e:
                        connected = False
                        logging.error(format(e.errno)+' : '+format(e.strerror))
                        if reconnectcount == reconnectattemps:
                            logging.error('******####### Max Reconnect attempts reached logger will Terminate ######******')
                            sys.exit("could Not connect")
                        time.sleep(retrytime)
                    if connected == True:
                        break
                break
        while 1:
            s.recv(1034)

LOGGER = TCPLogger()
This all works fine on startup: if I try to connect and it's not there, it will retry the number of times set by reconnectattemps.
But here is my issue:
while 1:
    s.recv(1034)
When this fails I want to try to reconnect. I could of course type out or just copy my connection part again, but what I want to be able to do is call a function that handles the connection and retries, and hands me back the connection object.
For example, like this:
class tcpclient:
    # set some vars
    # host, port etc....

    def initconnection():
        # connect to socket and retry if needed
        # RETURN SOCKET

    def dealwithdata():
        initconnection()
        while 1:
            try:
                s.recv
                # do stuff here, copy to db
            except:
                # log error
                initconnection()
I think this is possible, but I'm really not getting how the class/method system works in Python, so I think I'm missing something here.
FYI, just in case you didn't notice, I'm very new to Python. Any other comments on what I already have are welcome too :)
Thanks
Aj
Recommendation
For this use case I would recommend something higher-level than sockets. Why? Handling all these exceptions and errors yourself can be irritating when you just want to retrieve or send data and maintain the connection.
Of course you can achieve what you want with your plain solution, but you'd mess with the code a bit more, methinks. Anyway, it would look similar to the class amustafa wrote, with socket error handling moved into a close/reconnect method, etc.
Example
I made an example of an easier solution using the asyncore module:
import asyncore
import socket
from time import sleep

class Client(asyncore.dispatcher_with_send):
    def __init__(self, host, port, tries_max=5, tries_delay=2):
        asyncore.dispatcher.__init__(self)
        self.host, self.port = host, port
        self.tries_max = tries_max
        self.tries_done = 0
        self.tries_delay = tries_delay
        self.end = False        # Flag that indicates whether socket should reconnect or quit.
        self.out_buffer = ''    # Buffer for sending.
        self.reconnect()        # Initial connection.

    def reconnect(self):
        if self.tries_done == self.tries_max:
            self.end = True
            return
        print 'Trying connecting in {} sec...'.format(self.tries_delay)
        sleep(self.tries_delay)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            self.connect((self.host, self.port))
        except socket.error:
            pass
        if not self.connected:
            self.tries_done += 1
            print 'Could not connect for {} time(s).'.format(self.tries_done)

    def handle_connect(self):
        self.tries_done = 0
        print 'We connected and can get the stuff done!'

    def handle_read(self):
        data = self.recv(1024)
        if not data:
            return
        # Check for terminator. Can be any action instead of this clause.
        if 'END' in data:
            self.end = True  # Everything went good. Shutdown.
        else:
            print data  # Store to DB or other thing.

    def handle_close(self):
        print 'Connection closed.'
        self.close()
        if not self.end:
            self.reconnect()

Client('localhost', 6666)
asyncore.loop(timeout=1)
The reconnect() method is more or less the core of your case - it's called whenever a connection needs to be made: when the class initializes or the connection breaks.
handle_read() handles any received data; here you log it or something.
You can even send data using the buffer (self.out_buffer += 'message'), which remains untouched after reconnection, so the class will resume sending when connected again.
Setting self.end to True tells the class to quit when possible.
asyncore takes care of exceptions and calls handle_close() when such events occur, which is a convenient way of dealing with connection failures.
You should look at the Python documentation to understand how classes and methods work. The biggest difference between Python methods and methods in most other languages is the addition of the "self" parameter. self represents the instance that a method is called on and is automatically passed in by Python. So:
class TCPClient():
    def __init__(self, host, port, retryAttempts=10):
        # this is the constructor that takes in host and port. retryAttempts is given
        # a default value but can also be fed in.
        self.host = host
        self.port = port
        self.retryAttempts = retryAttempts
        self.socket = None

    def connect(self, attempt=0):
        if attempt < self.retryAttempts:
            # put connecting code here
            if connectionFailed:
                self.connect(attempt+1)

    def disconnectSocket(self):
        # perform all breakdown operations
        ...
        self.socket = None

    def sendDataToDB(self, data):
        # send data to db
        ...

    def readData(self):
        # read data here
        while True:
            if self.socket is None:
                self.connect()
            ...
Just make sure you properly disconnect the socket and set it to None.
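Building on that skeleton, a minimal sketch of readData() with reconnect-on-failure (assuming connect() actually assigns self.socket and the logging module is imported) might look like this:
    def readData(self):
        while True:
            if self.socket is None:
                self.connect()
            try:
                data = self.socket.recv(1024)
                if not data:
                    # An empty result means the server closed the connection.
                    raise IOError("connection closed by peer")
                self.sendDataToDB(data)
            except (IOError, OSError) as e:
                logging.error("Receive failed: %s, reconnecting...", e)
                self.disconnectSocket()  # drop the dead socket; the loop will reconnect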

Python TCP socket doesn't close?

Maybe someone here will have a response for this thing which is just driving me insane.
To make it simple, I'm making a kind of proxy. Whenever it receives something, it forwards everything to a server, and sends back the response. So there is one socket always listening on port 4557 for clients, and for each incoming connection, there is a new socket created on a random port to connect to the server port 4556.
Clients <==> Proxy <==> Server
Also, there is another socket which is instantiated and listens for requests coming from the server, to be forwarded to the corresponding client.
Here is an example:
Client A connects to proxy on port 4557
Proxy creates a socket to Server on port 4556
Along with that, it creates a socket listening on port 40100
Client sends stuff, forwarded to Server
Client disconnects. Close client connection and socket to server
Some time later, Server sends stuff to proxy on port 40100
Everything's forwarded to Client A (port 40100 corresponding to Client A)
And so on..
So far in my tests, I use a simple Python script for sending a single TCP packet to the proxy, along with a dump server showing received data and echoing it back.
So the issue is that when a connection to the proxy is closed, the connection to the Server should also be closed with "sock.close()". However it just seems to be completely ignored. The socket remains as ESTABLISHED.
About the code now.
A few notes.
DTN and Node are respectively Server and Clients.
runCallback is called in a loop until thread dies.
finalCallback is called when the thread is dying.
Associations between remote hosts (Client), proxy ports (to Server) and proxies are kept in the dictionaries: TCPProxyHostRegister (RemoteHost => Proxy), TCPProxyPortRegister (Port => Proxy), TCPPortToHost (Port => RemoteHost).
The first class is TCPListenerThread.
It just listens on a specific port, instantiates proxies (one for each Client=>Server couple and Server=>Client couple) and forwards them connections.
class TCPListenerThread(StoppableThread):
    def __init__(self, tcp_port):
        StoppableThread.__init__(self)
        self.tcp_port = tcp_port
        self.sock = socket.socket( socket.AF_INET,      # Internet
                                   socket.SOCK_STREAM ) # tcp
        self.sock.bind( (LOCAL_ADDRESS, self.tcp_port) )
        self.sock.listen(1)

    def runCallback(self):
        print "Listen on "+str(self.tcp_port)+".."
        conn, addr = self.sock.accept()

        if isFromDTN(addr):
            tcpProxy = getProxyFromPort(tcp_port)
            if not tcpProxy:
                tcpProxy = TCPProxy(host, True)
        else:
            host = addr[0]
            tcpProxy = getProxyFromHost(host)
            if not tcpProxy:
                tcpProxy = TCPProxy(host, False)

        tcpProxy.handle(conn)

    def finalCallback(self):
        self.sock.close()
Now comes the TCP Proxy:
It associates a remote host (Client) with a port connecting to Server.
If it's a connection coming from a new Client, it will create a new listener (see above) for the Server and create a socket ready to forward everything to Server.
class TCPProxy():
    def __init__(self, remote, isFromDTN):
        # remote = port for Server or remote host for Client
        self.isFromDTN = isFromDTN
        self.conn = None

        # add itself to proxy registries
        # If listening from a node
        if not isFromDTN:
            # Set node remote host
            self.remoteHost = remote
            TCPProxyHostRegister[self.remoteHost] = self
            # Set port to DTN interface + listener
            self.portToDTN = getNewTCPPort()
            TCPPortToHost[self.portToDTN] = self.remoteHost
            newTCPListenerThread(self.portToDTN)
        # Or from DTN
        else:
            self.portToDTN = remote
            TCPProxyPortRegister[self.portToDTN] = self
            self.remoteHost = getRemoteHostFromPortTCP(self.portToDTN)

    def handle(self, conn):
        print "New connection!"
        # shouldn't happen, but eh
        if self.conn != None:
            self.closeConnections()
        self.conn = conn

        # init socket with remote
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        #self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if self.isFromDTN:
            self.sock.connect((self.remoteHost, 4556))  # TODO: handle dynamic port..
        else:
            self.sock.connect((DTN_Address, DTN_TCPPort))

        # handle connection in a thread
        self.handlerThread = newTCPHandlerThread(self)
        # handle reply in a thread
        self.replyThread = newTCPReplyThread(self)

    def closeConnections(self):
        try:
            if self.conn != None:
                print "Close connections!"
                self.sock.close()
                self.conn.close()
                self.conn = None
                self.handlerThread.kill()
                self.replyThread.kill()
        except Exception, err:
            print str(err)
            #pass

    def forward(self, data):
        print "TCP forwarding data: "+data
        self.sock.send(data)

    def forwardBack(self, data):
        print "TCP forwarding data back: "+data
        self.conn.send(data)
In this proxy class, I instantiate two classes, TCPHandlerThread and TCPReplyThread. They are responsible for forwarding to Server, and forwarding back to Client, respectively.
class TCPHandlerThread(StoppableThread):
    def __init__(self, proxy):
        StoppableThread.__init__(self)
        self.proxy = proxy

    def runCallback(self):
        test = False
        while 1:
            data = self.proxy.conn.recv(BUFFER_SIZE)
            if test:
                self.proxy.sock.close()
            test = True
            if not data:
                break
            print "TCP received data:", data
            self.proxy.forward(data)
        self.kill()

    def finalCallback(self):
        self.proxy.closeConnections()

class TCPReplyThread(StoppableThread):
    def __init__(self, proxy):
        StoppableThread.__init__(self)
        self.proxy = proxy

    def runCallback(self):
        while 1:
            data = self.proxy.sock.recv(BUFFER_SIZE)
            if not data:
                break
            print "TCP received back data: "+data
            self.proxy.forwardBack(data)
        self.kill()

    def finalCallback(self):
        self.proxy.closeConnections()
You see that whenever a connection is closed, the thread dies and the other connection (Client/Server to proxy or Proxy to Server/Client) should be closed in Proxy.closeConnections()
I noticed that when closeConnections() is called while the thread is sitting in "data = self.proxy.conn.recv(BUFFER_SIZE)", it goes well, but when it's called even right after that statement, it goes wrong.
I wiresharked TCP, and the proxy doesn't send any "bye signal". The socket state doesn't go to TIME_WAIT or whatever, it just remains ESTABLISHED.
Also, I tested it on Windows and Ubuntu.
On Windows it goes exactly as I explained
On Ubuntu, it usually (but not always) works well for 2 connections, and the third time I connect with the same client in exactly the same way to the proxy, it goes wrong again, exactly as explained.
Here are the three files I'm using so that you can have a look at the whole code. I'm sorry the proxy file might not be really easy to read. It was SUPPOSED to be a quick dev.
http://hognerud.net/stackoverflow/
Thanks in advance..
It's surely something stupid. Please don't hit me too hard when you see it :(
First, I'm sorry that I currently don't have the time to actually run and test your code.
But the idea came to my mind that your problem might actually have something to do with using blocking mode vs. non-blocking mode on the socket. In that case you should check out the "socket" module help in the Python documentation, especially socket.setblocking().
My guess is that the proxy.conn.recv() call only returns once BUFFER_SIZE bytes have actually been received by the socket. Because of this the thread is blocked until enough data has been received, and therefore the socket doesn't get closed.
As I said first, this is currently just a guess, so please don't vote me down if it doesn't solve the problem...
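To illustrate that suggestion, here is a minimal sketch of using a timeout so a recv() cannot keep a thread blocked indefinitely (the address and numbers are arbitrary):
    import socket

    conn = socket.create_connection(("localhost", 4556))
    conn.settimeout(5.0)           # recv() raises socket.timeout after 5 seconds of silence
    # conn.setblocking(False)      # alternative: non-blocking mode, recv() fails immediately if no data

    try:
        data = conn.recv(1024)     # blocks until data arrives, the peer closes, or the timeout fires
    except socket.timeout:
        data = b''                 # decide here whether to retry or tear the connection down
    conn.close()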
