How can I create multiple client sockets in Python?
For example, I have a list of X server IPs and I want to create X client sockets:
IP_SERVERS = ['127.0.0.1', '127.0.0.2', '127.0.0.3']
How can I do that without multiple threads?
Note:
I want to try to connect to all of these IP_SERVERS without waiting for the first client socket
to connect to the first server.
Thank you!
import socket
servers = []  # add server (host, port) tuples here
class Clients:
    def __init__(self):
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    def connect(self, address):
        self.socket.connect(address)
    def send(self, data):
        self.socket.send(data)
    def close(self):
        self.socket.close()
for address in servers:
    client = Clients()
    client.connect(address)
    client.send(b'abcd')
    client.close()
Something like that? This is general code just to give the idea or as an example; adapt it to your needs.
Threads would work better, since the loop above runs sequentially, waiting for each socket to connect and send its data before moving on to the next.
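Since the question asks to avoid threads: below is a minimal sketch of a non-blocking approach using the selectors module (Python 3.4+). The port number SERVER_PORT is made up here, and this is only an outline rather than a drop-in solution; all connects are started at once and serviced as each one finishes.
import selectors
import socket
IP_SERVERS = ['127.0.0.1', '127.0.0.2', '127.0.0.3']
SERVER_PORT = 9000  # assumed port, adjust to your servers
sel = selectors.DefaultSelector()
for ip in IP_SERVERS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    sock.connect_ex((ip, SERVER_PORT))  # returns immediately instead of waiting
    sel.register(sock, selectors.EVENT_WRITE)  # writable means the connect attempt finished
pending = len(IP_SERVERS)
while pending:
    events = sel.select(timeout=5)
    if not events:
        break  # remaining sockets never finished connecting
    for key, _ in events:
        sock = key.fileobj
        # SO_ERROR is 0 only if the non-blocking connect succeeded
        if sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
            sock.send(b'abcd')
        sel.unregister(sock)
        sock.close()
        pending -= 1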
I'm trying to make a Python server where multiple clients can connect, but I've run into a problem. I've tried everything that I found on the internet.
I'm running this on a laptop with Windows 7 and an i3 processor.
This is the file called tcp.py:
import socket
def make_server (ip,port):
try:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((ip, port))
server.listen(1)
return server
except Exception as ex:
print(ex)
return None
def accept(server):
conn, addr = server.accept()
return conn
def make_client():
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
return client
def client_connect(client,ip,port):
client.connect((ip,port))
def sendall(conn,mess):
conn.send(str(mess).encode("utf-8"))
def rec(conn,rate):
mess = conn.recv(rate).decode("utf-8")
return mess
def close(client):
client.close()
This is the server:
from multiprocessing import Process
from random import randint
import tcp
import sys
def start(sip, sport):
print("Making sob server...")
print("id= {}".format(sport))
sserver = tcp.make_server(sip, sport)
print("Sub Server Started!")
sconn = tcp.accept(sserver)
tcp.sendall(sconn, "connected!!")
while True:
try:
tcp.sendall(sconn, randint(0, 100))
except Exception as ex:
print("")
print("From server {} error:".format(port))
print(ex)
print("")
break
ip = "192.168.0.102"
port = 8000
subport = 9000
server = tcp.make_server(ip, port)
if server is None:
sys.exit(0)
print("Started!")
while True:
print("Wating for new connection!")
con = tcp.accept(server)
print("Connected!")
subport = subport + 1
tcp.sendall(con, subport)
print("New Port Sent!")
print("New Port = {}".format(subport))
subs = Process(target=start, args=(ip, subport))
subs.start()
subs.join()
This is the client:
import tcp
import time
nport = 0
ip = "192.168.0.102"
port = 8000
client = tcp.make_client()
tcp.client_connect(client,ip,port)
nport = tcp.rec(client,1024)
print(nport)
tcp.close(client)
nport = int(nport)
time.sleep(1)
print(nport)
client = tcp.make_client()
tcp.client_connect(client,ip,nport)
while True:
mess = tcp.rec(client, 1024)
if(mess):
print(mess)
The error is:
[WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Feel free to change anything you want.
If you need any more info, just ask.
You are creating a socket in the client with tcp.make_client. You are then using that socket to connect to the server via tcp.client_connect. Presumably you successfully receive the new port number back from the server. But then you are trying to re-use the same socket to connect to those ports.
This is the proximate cause of your error: A socket can only be used for a single TCP connection. If you want to create a new connection, you must first create a new socket.
That being said, if you are simply trying to create a server that will accept multiple connections, you're making it way too complicated. The server can receive any number of connections on its single listening port, as long as a different address/port combination is used by each client.
One way to structure this in a server is something like this:
# Create and bind listening socket
lsock = socket.socket()
lsock.bind(('', port))
lsock.listen(1)
while True:
csock, addr = lsock.accept()
print("Got connection from {}".format(addr))
# Start sub-process passing it the newly accepted socket as argument
subs = Process(target=start, args=(csock, ))
subs.start()
# Close our handle to the new socket (it will remain open in the
# sub-process which will use it to talk to the client)
csock.close()
# NOTE: do not call subs.join here unless you want the parent to *block*
# waiting for the sub-process to finish (and if so, what is the point in
# creating a sub-process?)
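For completeness, here is a minimal sketch of what the sub-process side could look like with this structure. The start function below is an assumption modeled on the question's code (it reuses randint to push data over the accepted socket); it is not the only way to write the child.
from random import randint
def start(csock):
    # The child owns the accepted socket and talks to its client directly.
    csock.sendall("connected!!".encode("utf-8"))
    while True:
        try:
            csock.sendall(str(randint(0, 100)).encode("utf-8"))
        except Exception as ex:
            print("Client went away: {}".format(ex))
            break
    csock.close()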
There are several other ways to do it as well: you can create multiple threads to handle multiple connections, or you can handle all connections in a single thread by using select or with asynchronous I/O.
The client is typically much simpler -- as it usually only cares about its own one connection -- and doesn't care which way the server is implemented:
sock = socket.socket()
sock.connect((ip, port))
while True:
sock.send(...)
sock.recv(...)
If the client does wish to connect to the same server again, it simply creates a second socket and calls its connect method with the same server IP and port.
Usually, the client never needs to specify its own port, only the server's port. It simply calls connect, and the client-side operating system chooses an unused port for it. So the first time the client creates a socket and connects it (to the server's listening port), the client-side OS may choose port 50001. The next time it creates and connects a socket, it may get 50002, and so on. (The exact port numbers chosen depend on the operating system implementation and other factors, such as what other programs are running and creating connections.)
So, given client IP 192.168.0.101 and server IP 192.168.0.102, and assuming the server is listening on port 8000, this would result in these two connections:
(192.168.0.101/50001) ====> (192.168.0.102/8000)
(192.168.0.101/50002) ====> (192.168.0.102/8000)
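A quick way to see this in practice (a small sketch, assuming a server is reachable at the ip and port used above): create two sockets, connect both, and print the local address the OS picked for each.
import socket
ip, port = "192.168.0.102", 8000  # server address from the question
a = socket.socket()
a.connect((ip, port))
b = socket.socket()
b.connect((ip, port))
print(a.getsockname())  # e.g. ('192.168.0.101', 50001)
print(b.getsockname())  # e.g. ('192.168.0.101', 50002)
a.close()
b.close()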
I'm working with:
Django 1.11
Python Sockets
I have a Socket server like this:
class SocketServer(threading.Thread):
def __init__(self, ip="127.0.0.1", port=5000, _buffer=1024):
super(SocketServer, self).__init__()
self.IP = ip
self.PORT = port
self.RECV_BUFFER = _buffer # Advisable to keep it as a power of 2
self.CONNECTION_LIST = [] # list of socket clients
self.MESSAGE_QUEUES = {} # List of message queue by socket
self.OUTPUTS = []
self.SERVER = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# this has no effect, why ?
self.SERVER.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.SERVER.bind((self.IP, self.PORT))
self.SERVER.listen(10)
# Add server socket to the list of readable connections
self.CONNECTION_LIST.append(self.SERVER)
self.ROOM = Room.objects.create(port=port, ip=ip, type_room=type_room)
def read_sockets(self, read_sockets):
''' ... '''
def write_sockets(self, write_sockets):
''' ... '''
def error_sockets(self, error_sockets):
''' ... '''
def run(self):
while 1:
# Get the list sockets which are ready to be read through select
read_sockets, write_sockets, error_sockets = select.select(self.CONNECTION_LIST, self.OUTPUTS, [])
# Read sockets
self.read_sockets(read_sockets)
self.write_sockets(write_sockets)
self.error_sockets(error_sockets)
self.SERVER.close()
I can run this SocketServer like this anywhere on Django (custom_command, a view, celery...):
from socket_server import SocketServer
socket_server = SocketServer()
socket_server.start()
# And the code continues while the socket server is running
# I would like to save socket_server instance anywhere to access
# Later from anywhere or trigger a signal to finish it
As I say above, I would like to know (if possible) where any of you would save the instance of the server so it can be accessed from different parts of the Django project.
UPDATE
I tried using memcached on Django but when I try to store the SocketServer instance on memcached I get this error:
PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
The answer is yes, and it's simple:
serverlst = dict()
for x in range(0,5):
tmp = SocketServer("",5000+x) # Note that you should change the port, because each port can only be listened on by one server socket
tmp.start()
serverlst[5000+x]=tmp
Or you can access a server socket's thread with:
serverlst[5000].getName() # serverlst[5000] is the SocketServer thread for port 5000; getName() returns its thread name
When you print serverlst after the loop, you can see this:
{5000: <SocketServer(Thread-5, started 139741574940416)>, 5001: <SocketServer(Thread-6, started 139741583333120)>, 5002: <SocketServer(Thread-7, started 139741658834688)>, 5003: <SocketServer(Thread-8, started 139741667227392)>, 5004: <SocketServer(Thread-9, started 139741233895168)>}
Or you can add them to a list() instead.
Update
Sorry, I didn't see this part:
where any of you would save the instance of the server to access it from different parts of the Django project
I'd say it depends on the way you develop, and I think it's not a good idea. It's better to use a bridge between the threads and the Django project, like Redis, which allows you to keep data and access it from the entire system; you can use the Python Redis client for that.
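A sketch of that idea, assuming a local Redis instance and the redis Python package (the key names here are made up for illustration): store only plain data about each running server, never the SocketServer object itself, since thread and lock objects can't be pickled.
import json
import redis
r = redis.Redis(host='localhost', port=6379)
def register_server(ip, port):
    # Keep only plain, serializable data about the running server.
    r.hset('socket_servers', port, json.dumps({'ip': ip, 'port': port}))
def list_servers():
    return {int(p): json.loads(v) for p, v in r.hgetall('socket_servers').items()}
def request_stop(port):
    # The SocketServer's run() loop would poll this key and exit when it is set.
    r.set('socket_server:{}:stop'.format(port), 1)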
I need to send data to only one connection; how can I do that?
server:
import asyncore, socket, threading
class EchoHandler(asyncore.dispatcher_with_send):
def __init__(self,sock):
asyncore.dispatcher.__init__(self,sock=sock);
self.out_buffer = ''
def handle_read(self):
datos = self.recv(1024);
if datos:
print(datos);
self.sock[0].send("signal");
class Server(asyncore.dispatcher):
def __init__(self,host='',port=6666):
asyncore.dispatcher.__init__(self);
self.create_socket(socket.AF_INET, socket.SOCK_STREAM);
self.set_reuse_addr();
self.bind((host,port));
self.listen(1);
def handle_accept(self):
self.sock,self.addr = self.accept();
if self.addr:
print self.addr[0];
handler = EchoHandler(self.sock);
def handle_close(self):
self.close();
cliente = Server();
asyncore.loop()
This line is an example that fails, but I want to send data to socket zero:
self.sock[0].send("probando");
For example, if I have 5 sockets, I want to choose which one to send the data to.
Explanation
You tried to get sock from a list and execute its send method. This causes an error, because EchoHandler neither has a sock attribute nor is it a list of sockets. The right approach is to get the instance of EchoHandler you want (based on, e.g., IP address, or slots assigned by some user-defined protocol) and then use its send method - here (with dispatcher_with_send) it's also better to use the special output buffer for that than send.
An EchoHandler instance is created on every accepted connection - from then on it is an established channel for communication with the given host. Server listens for new, not-yet-established connections, while EchoHandlers use socks (given by Server in handle_accept) for established ones, so there are as many EchoHandler instances as connections.
Solution
You need to make a list of connections (EchoHandler instances; we'll use the buffer, not the socket's send() directly) and give them the opportunity to delete their entries on close:
class Server(asyncore.dispatcher):
def __init__(self, host='', port=6666):
...
self.connections = []
def handle_accept(self):
...
handler = EchoHandler(self.sock, self);
self.connections.append(self.sock)
...
def remove_channel(self, sock):
if sock in self.connections:
self.connections.remove(sock)
class EchoHandler(asyncore.dispatcher_with_send):
def __init__(self, sock, server):
...
self.server = server
def handle_read(self):
datos = self.recv(1024);
if datos:
print(datos);
self.out_buffer += 'I echo you: ' + datos
def handle_close(self):
self.server.remove_channel(self)
self.close()
EchoHandler is now aware of the server instance and can remove its socket from the list. This echo example is now fully functional, and with a working socket list we can proceed to asynchronous sending.
But at this point you can already use this list as you wanted - cliente.connections[0].out_buffer += 'I am data' will do the work - though you'd probably want better control over this. If so, go ahead.
'For whom, by me'
In order to send data asynchronously, we need to separate asyncore from our control thread, in which we'll enter what to send and to whom.
class ServerThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True # if thread is a daemon, it'll be killed when main program exits
self.cliente = Server()
self.start()
def run(self):
print 'Starting server thread...'
asyncore.loop()
thread = ServerThread()
while True:
msg = raw_input('Enter IP and message divided by semicolon: ')
if msg == 'exit':
break
ip, data = msg.split('; ')
for sock in thread.cliente.connections:
if sock.addr[0] == ip:
sock.out_buffer += data
break
This will work, waiting for a destination IP and data. Remember to have a client connected.
As I said, you can use anything to indicate which socket is which. It can be a class with fields for e.g. IP and username, so you could send data only to peers whose usernames start with 'D', as in the sketch below.
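For instance, a rough sketch assuming each EchoHandler has been given a username attribute by your own protocol (that attribute is made up here and does not exist in the code above):
for handler in thread.cliente.connections:
    # getattr with a default avoids errors for peers that never identified themselves
    if getattr(handler, 'username', '').startswith('D'):
        handler.out_buffer += 'Message for users whose name starts with D'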
But...
This solution is a bit rough and needs better knowledge of the asyncore module if you want to send data nicely (here it has some delay due to how select() works) and make good use of this socket wrapper.
Here and here are some resources.
Syntax note
Although your code will now work, it has a few unpleasant things in it. Semicolons at the ends of statements don't cause errors, but making nearly every variable a class attribute can lead to them. For example here:
def handle_accept(self):
self.sock,self.addr = self.accept();
if self.addr:
print self.addr[0];
handler = EchoHandler(self.sock);
self.sock and self.addr might be used in that class for something else (e.g. socket-related things; addresses) and overriding them could cause trouble. Methods used for requests should never save the state of previous actions.
I hope Python will be good enough for you to stay with it!
Edit: sock.addr[0] can be used instead of sock.socket.getpeername()[0] but it requires self.addr not to be modified, so handle_accept() should look like this:
def handle_accept(self):
sock, addr = self.accept()
if addr:
print addr[0]
handler = EchoHandler(sock, self)
self.connections.append(handler)
Maybe someone here will have a response for this thing which is just driving me insane.
To make it simple, I'm making a kind of proxy. Whenever it receives something, it forwards everything to a server, and sends back the response. So there is one socket always listening on port 4557 for clients, and for each incoming connection, there is a new socket created on a random port to connect to the server port 4556.
Clients <==> Proxy <==> Server
Also, there is another socket which is instantiated and listens for requests coming from the server, to be forwarded to the corresponding client.
Here is an example:
Client A connects to proxy on port 4557
Proxy creates a socket to Server on port 4556
Along with that, it creates a socket listening on port 40100
Client sends stuff, forwarded to Server
Client disconnects. Close client connection and socket to server
Some time later, Server sends stuff to proxy on port 40100
Everything's forwarded to Client A (port 40100 corresponding to Client A)
And so on..
So far in my tests, I use a simple python script for sending a unique tcp packet to the proxy, along with a dump server showing received data and echoing back.
So the issue is that when a connection to the proxy is closed, the connection to the Server should also be closed with "sock.close()". However, it just seems to be completely ignored. The socket remains as ESTABLISHED.
About the code now.
A few notes.
DTN and Node are respectively Server and Clients.
runCallback is called in a loop until thread dies.
finalCallback is called when the thread is dying.
Associations between remote hosts (Client), proxy ports (to Server) and proxies are kept in the dictionaries: TCPProxyHostRegister (RemoteHost => Proxy), TCPProxyPortRegister (Port => Proxy), TCPPortToHost (Port => RemoteHost).
The first class is TCPListenerThread.
It just listens on a specific port, instantiates proxies (one for each Client=>Server pair and Server=>Client pair) and forwards connections to them.
class TCPListenerThread(StoppableThread):
def __init__(self, tcp_port):
StoppableThread.__init__(self)
self.tcp_port = tcp_port
self.sock = socket.socket( socket.AF_INET, # Internet
socket.SOCK_STREAM ) # tcp
self.sock.bind( (LOCAL_ADDRESS, self.tcp_port) )
self.sock.listen(1)
def runCallback(self):
print "Listen on "+str(self.tcp_port)+".."
conn, addr = self.sock.accept()
if isFromDTN(addr):
tcpProxy = getProxyFromPort(self.tcp_port)
if not tcpProxy:
tcpProxy = TCPProxy(self.tcp_port, True)
else:
host = addr[0]
tcpProxy = getProxyFromHost(host)
if not tcpProxy:
tcpProxy = TCPProxy(host, False)
tcpProxy.handle(conn)
def finalCallback(self):
self.sock.close()
Now comes the TCP Proxy:
It associates a remote host (Client) with a port connecting to Server.
If it's a connection coming from a new Client, it will create a new listener (see above) for the Server and create a socket ready to forward everything to Server.
class TCPProxy():
def __init__(self, remote, isFromDTN):
#remote = port for Server or Remote host for Client
self.isFromDTN = isFromDTN
self.conn = None
#add itself to proxy registries
#If listening from a node
if not isFromDTN:
#Set node remote host
self.remoteHost = remote
TCPProxyHostRegister[self.remoteHost] = self
#Set port to DTN interface + listener
self.portToDTN = getNewTCPPort()
TCPPortToHost[self.portToDTN] = self.remoteHost
newTCPListenerThread(self.portToDTN)
#Or from DTN
else:
self.portToDTN = remote
TCPProxyPortRegister[self.portToDTN] = self
self.remoteHost = getRemoteHostFromPortTCP(self.portToDTN)
def handle(self, conn):
print "New connection!"
#shouldn't happen, but eh
if self.conn != None:
self.closeConnections()
self.conn = conn
#init socket with remote
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if self.isFromDTN:
self.sock.connect((self.remoteHost, 4556)) #TODO: handle dynamic port..
else:
self.sock.connect((DTN_Address, DTN_TCPPort))
#handle connection in a thread
self.handlerThread = newTCPHandlerThread(self)
#handle reply in a therad
self.replyThread = newTCPReplyThread(self)
def closeConnections(self):
try:
if self.conn != None:
print "Close connections!"
self.sock.close()
self.conn.close()
self.conn = None
self.handlerThread.kill()
self.replyThread.kill()
except Exception, err:
print str(err)
#pass
def forward(self, data):
print "TCP forwarding data: "+data
self.sock.send(data)
def forwardBack(self, data):
print "TCP forwarding data back: "+data
self.conn.send(data)
In this proxy class, I instantiate two classes, TCPHandlerThread and TCPReplyThread. They are responsible for forwarding to Server, and forwarding back to Client, respectively.
class TCPHandlerThread(StoppableThread):
def __init__(self, proxy):
StoppableThread.__init__(self)
self.proxy = proxy
def runCallback(self):
test = False
while 1:
data = self.proxy.conn.recv(BUFFER_SIZE)
if test:
self.proxy.sock.close()
test = True
if not data:
break
print "TCP received data:", data
self.proxy.forward(data)
self.kill()
def finalCallback(self):
self.proxy.closeConnections()
class TCPReplyThread(StoppableThread):
def __init__(self, proxy):
StoppableThread.__init__(self)
self.proxy = proxy
def runCallback(self):
while 1:
data = self.proxy.sock.recv(BUFFER_SIZE)
if not data:
break
print "TCP received back data: "+data
self.proxy.forwardBack(data)
self.kill()
def finalCallback(self):
self.proxy.closeConnections()
You see that whenever a connection is closed, the thread dies and the other connection (Client/Server to proxy or Proxy to Server/Client) should be closed in Proxy.closeConnections()
I noticed that when closeConnections() is called during "data = self.proxy.conn.recv(BUFFER_SIZE)", it goes well, but when it's called even right after that statement, it goes wrong.
I wiresharked TCP, and the proxy doesn't send any "bye signal". The socket state doesn't go to TIME_WAIT or whatever, it just remains ESTABLISHED.
Also, I tested it on Windows and Ubuntu.
On Windows it goes exactly as I explained.
On Ubuntu, it usually (but not always) works well for the first 2 connections, and the third time I connect with the same client in exactly the same way to the proxy, it goes wrong again exactly as explained.
Here are the three files I'm using so that you can have a look at the whole code. I'm sorry the proxy file might not be really easy to read; it was SUPPOSED to be a quick bit of development.
http://hognerud.net/stackoverflow/
Thanks in advance..
It's surely something stupid. Please don't hit me too hard when you see it :(
First, I'm sorry that I currently don't have the time to actually run and test your code.
But the idea came to my mind that your problem might actually have something to do with using blocking mode vs. non-blocking mode on the socket. In that case you should check out the "socket" module help in the Python documentation, especially socket.setblocking().
My guess is that the proxy.conn.recv() call only returns once some data (or a disconnect) actually arrives on the socket. Because of this the thread stays blocked in recv(), and therefore the socket doesn't get closed.
As I said first, this is currently just a guess, so please don't vote me down if it doesn't solve the problem...
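If you want to experiment in that direction, here is a rough sketch of a receive loop that uses a socket timeout so a blocked recv() can't keep the handler thread (and therefore the sockets) alive forever. The function name and parameters are made up for illustration; should_stop would be whatever your StoppableThread uses as its stop flag.
import socket
def pump(conn, forward, should_stop, buffer_size=1024, timeout=1.0):
    # With a timeout set, recv() raises socket.timeout periodically instead of
    # blocking forever, so the loop gets a chance to re-check the stop flag.
    conn.settimeout(timeout)
    while not should_stop():
        try:
            data = conn.recv(buffer_size)
        except socket.timeout:
            continue  # nothing arrived, loop and check again
        if not data:
            break     # peer closed the connection
        forward(data)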
I'm trying to write a simple load balancer. It works OK until one of the servers (BalanceServer) doesn't close the connection; then...
The client (ReverseProxy) disconnects but the connection with the BalanceServer stays open.
I tried to add a callback (#3) to ReverseProxy.connectionLost to close the connection with one of the servers, the same way I close the connection when the server disconnects (clientLoseConnection), but at that time the ServerWriter is None and I cannot terminate it at #1 and #2.
How can I ensure that all connections are closed when one of the sides disconnects? I guess some kind of timeout would also be nice here for when both the client and one of the servers hang, but how can I add it so that it works on both connections?
from twisted.internet.protocol import Protocol, Factory, ClientCreator
from twisted.internet import reactor, defer
from collections import namedtuple
BalanceServer = namedtuple('BalanceServer', 'host port')
SERVER_LIST = [BalanceServer('127.0.0.1', 8000), BalanceServer('127.0.0.1', 8001)]
def getServer(servers):
while True:
for server in servers:
yield server
# this writes to one of balance servers and responds to client with callback 'clientWrite'
class ServerWriter(Protocol):
def sendData(self, data):
self.transport.write(data)
def dataReceived(self, data):
self.clientWrite(data)
def connectionLost( self, reason ):
self.clientLoseConnection()
# callback for reading data from client to send it to server and get response to client again
def transferData(serverWriter, clientWrite, clientLoseConnection, data):
if serverWriter:
serverWriter.clientWrite = clientWrite
serverWriter.clientLoseConnection = clientLoseConnection
serverWriter.sendData(data)
def closeConnection(serverWriter):
if serverWriter: #1 this is null
#2 So connection is not closed and hangs there, till BalanceServer close it
serverWriter.transport.loseConnection()
# accepts clients
class ReverseProxy(Protocol):
def connectionMade(self):
server = self.factory.getServer()
self.serverWriter = ClientCreator(reactor, ServerWriter)
self.client = self.serverWriter.connectTCP( server.host, server.port )
def dataReceived(self, data):
self.client.addCallback(transferData, self.transport.write,
self.transport.loseConnection, data )
def connectionLost(self, reason):
self.client.addCallback(closeConnection) #3 adding close doesn't work
class ReverseProxyFactory(Factory):
protocol = ReverseProxy
def __init__(self, serverGenerator):
self.getServer = serverGenerator
plainFactory = ReverseProxyFactory( getServer(SERVER_LIST).next )
reactor.listenTCP( 7777, plainFactory )
reactor.run()
You may want to look at twisted.protocols.portforward for an example of hooking up two connections and then disconnecting them. Or just use txloadbalancer and don't even write your own code.
However, loseConnection will never forcibly terminate the connection if there is never any traffic going over it. So if you don't have an application-level ping or any data going over your connections, they may still never shut down. This is a long-standing bug in Twisted. Actually, the longest-standing bug. Perhaps you'd like to help work on the fix :).
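If you do need a timeout in the meantime, one option (my suggestion, not part of the original answer) is twisted.protocols.policies.TimeoutMixin, which drops a connection after a period of inactivity. Here is a sketch applied to the ServerWriter from the question, with an arbitrary 30-second idle limit:
from twisted.internet.protocol import Protocol
from twisted.protocols.policies import TimeoutMixin
class ServerWriter(Protocol, TimeoutMixin):
    IDLE_TIMEOUT = 30  # seconds without traffic before giving up
    def connectionMade(self):
        self.setTimeout(self.IDLE_TIMEOUT)
    def sendData(self, data):
        self.transport.write(data)
    def dataReceived(self, data):
        self.resetTimeout()  # any traffic counts as activity
        self.clientWrite(data)
    def timeoutConnection(self):
        self.transport.abortConnection()  # forcibly drop the idle connection
    def connectionLost(self, reason):
        self.setTimeout(None)  # cancel the pending timeout
        self.clientLoseConnection()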