import socket
backlog = 1  # length of the queue of pending (not yet accepted) connections
sk_1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk_2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local = {"port":1433}
internet = {"port":9999}
sk_1.bind(('', internet["port"]))
sk_1.listen(backlog)
sk_2.bind(('', local["port"]))
sk_2.listen(backlog)
Basically, I have this code. I am trying to listen on two ports, 1433 and 9999, but this doesn't seem to work.
How can I listen on two ports within the same Python script?
The fancy-pants way to do this with the Python standard library would be to use SocketServer with the ThreadingMixIn -- although the select suggestion is probably the more efficient one.
Even though we only define one ThreadedTCPRequestHandler, you can easily repurpose it so that each listener has its own unique handler, and it should be fairly trivial to wrap the server/thread creation into a single method if that's the kind of thing you like.
#!/usr/bin/python
import threading
import time
import SocketServer

class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024).strip()
        print "%s wrote: " % self.client_address[0]
        print self.data
        self.request.send(self.data.upper())

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

if __name__ == "__main__":
    HOST = ''
    PORT_A = 9999
    PORT_B = 9876

    server_A = ThreadedTCPServer((HOST, PORT_A), ThreadedTCPRequestHandler)
    server_B = ThreadedTCPServer((HOST, PORT_B), ThreadedTCPRequestHandler)

    server_A_thread = threading.Thread(target=server_A.serve_forever)
    server_B_thread = threading.Thread(target=server_B.serve_forever)
    server_A_thread.setDaemon(True)
    server_B_thread.setDaemon(True)
    server_A_thread.start()
    server_B_thread.start()

    while 1:
        time.sleep(1)
The code so far is fine, as far as it goes (except that a backlog of 1 seems unduly strict). The problem, of course, comes when you try to accept a connection on either listening socket, since accept is normally a blocking call (and "polling" by trying to accept with short timeouts on each socket alternately would burn machine cycles to no good purpose).
select to the rescue!-) select.select (or, on the better OSs, select.poll, or even select.epoll or select.kqueue... but good old select.select works everywhere!-) will let you know which socket is ready and when, so you can accept appropriately. Along the same lines, asyncore and asynchat provide a bit more organization (and the third-party framework Twisted, of course, adds a lot of such "asynchronous" functionality).
Alternatively, you can devote separate threads to servicing the two listening sockets, but in this case, if the different sockets' functionality needs to affect the same shared data structures, coordination (locking &c) may become ticklish. I would certainly recommend trying the async approach first -- it's actually simpler, as well as offering potential for substantially better performance!-)
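To make the select route concrete, here is a minimal sketch of how the two ports from the question could be multiplexed in a single loop; the echo at the end is just a placeholder for real handling:
import select
import socket

backlog = 5
listeners = []
for port in (9999, 1433):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', port))
    s.listen(backlog)
    listeners.append(s)

sockets = list(listeners)  # listening sockets plus accepted connections
while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s in listeners:
            conn, addr = s.accept()  # a new client on one of the two ports
            sockets.append(conn)
        else:
            data = s.recv(1024)
            if not data:  # client closed its end
                sockets.remove(s)
                s.close()
            else:
                s.send(data)  # placeholder: echo the data back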
I have an application which communicates on a specific port, and I would like to listen to all UDP traffic which has this specific port as a source or destination.
Naively I try to do something like:
import socket
UDP_IP = "0.0.0.0"
UDP_PORT = my_port
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
sock.bind((UDP_IP, UDP_PORT))
while True:
    data, addr = sock.recvfrom(4096)
    print("received message:", data)
however this does not work because the application in question is already bound to the port, so I get an error if I try to bind to it in my code.
My next attempt was to use scapy with something like:
from scapy.all import *
from threading import Thread
import queue

conf.use_pcap = True

pending_pkts = queue.Queue()

def callback(pkt):
    pending_pkts.put(pkt)

def worker():
    while True:
        pkt = pending_pkts.get()
        # I do some stuff with the packet here
        pending_pkts.task_done()

t = Thread(target=worker)
t.daemon = True
t.start()

sniff(prn=callback, filter="udp and port my_port")
The idea here was that I would stuff the packets into my queue and then do the relatively costly processing on a separate thread. While this does somewhat work, I miss something like 50% of the packets I am interested in, which is unacceptable for the project. I have seen other people running into this issue and have tried everything suggested (using a very specific filter, using pcap, using multiple threads so that the costly processing doesn't hold things up), but evidently this is still not fast enough, since I miss so many packets.
I would greatly appreciate it if someone could point me in the right direction as to how I can get ideally 100% packet capture (it doesn't need to be real-time; I can accept a couple of seconds of delay in processing if it means I get everything) on a UDP port which is already in use. Ideally I would like to stick with Python, but I would be willing to try something in C++ if someone knows of a solution.
Thanks for your time :)
I'm new to Python and threading, so please bear with me. I'm trying to write a two-player game in Python. Data is sent over TCP/IP (client-server architecture). The server runs three threads: one communicates with the first player, a second with the second player, and a third receives the data sent by the clients from the other two threads. That data is used to check whether the game is over, and all of this works fine. The problems start now. When the game is over, I want to send different data to the clients. So thread 3 needs to send data to the clients, but the other two threads are still running and still hold the connections to the clients. I don't really know how to do this. I tried to send information through a Queue from the third thread to the other two, telling them to close their connections. This is the thread class code:
class myThread(threading.Thread):
    def __init__(self, threadID, name, conn, conn2, kto, wartosc,
                 wybor, kolejkaZadan, gracz1, gracz2):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.conn = conn
        self.conn2 = conn2
        self.kto = kto
        self.wartosc = wartosc
        self.wybor = wybor
        self.kolejkaZadan = kolejkaZadan
        self.gracz1 = gracz1
        self.gracz2 = gracz2

    def run(self):
        if self.wybor == None:
            toClient(self.conn, self.conn2, self.kto, self.wartosc, self.gracz1)
        else:
            while True:
                data, kolejkaZwrotna = self.kolejkaZadan.get()  # receive data from the two other threads
                time.sleep(10)
                dataKolejne, kolejkaZwrotna = self.kolejkaZadan.get()  # receive data from the two other threads
                if data is dataKolejne:  # if the game is over
                    tworzenieXmla(self.gracz1, self.gracz2)
                    odczytywanieXmla('itemGracza1', gracz1Otrzymane)
                    plik = open('Marcin.xml', 'rb')
                    czyZamknacConnection = True
                    kolejkaZwrotna.put(czyZamknacConnection)  # tell the two other threads to close their connections
                    while True:
                        czescXmla = plik.read(10000)
                        #self.conn2.send(czescXmla)
And this is my send/receive function, which is executed by the two other threads:
def toClient(conn, conn2, kto, wartosc, gracz):
    wordsBackup = None
    kolejkaZwrotna = queue.Queue()
    while True:
        data = conn.recv(BUFFER_SIZE)
        if not data:
            break
        if kolejkaZwrotna.get() is True:  # receive from thread 3
            conn2.close()
            print('closed')
            break
        if len(data) > 7:
            print('WARNING', data)
        words = str(data.decode()).split()
        #print(words[0], words[1])
        if kto == 1:
            conn2.send(data)
        if kto == 2:
            conn2.send(data)
        kolejkaZadan.put(words[2], kolejkaZwrotna)  # send to thread 3
        xmlTablicaDoZapisu(str(int(words[0])), str(int(words[1])), str(int(words[2])), gracz)
Generally there is no error and we can play, but there is only one player on each computer, so I think the server doesn't send the data. I would appreciate any help.
A fix for your current situation would be to change all those connection variables into an array of connections which you could iterate over. You might want to build some container classes that define their behavior, since not all clients are the same (the server client and the player clients). That way you aren't limited by the number of variables you've declared and the threads available.
Then once a new client connects, you simply add it to the array and your iterator will take care of the rest.
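A rough sketch of that idea, with a plain list of client sockets and a broadcast helper (the names and the port are made up for illustration; the broadcast stands in for your game logic):
import socket
import threading

clients = []  # sockets of all currently connected players
clients_lock = threading.Lock()

def broadcast(message):
    # send the same message to every connected client
    with clients_lock:
        for c in clients:
            c.sendall(message)

def handle_client(conn):
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            broadcast(data)  # placeholder for the game-specific handling
    finally:
        with clients_lock:
            clients.remove(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 5000))  # port chosen arbitrarily
server.listen(5)

while True:
    conn, addr = server.accept()
    with clients_lock:
        clients.append(conn)
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()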
This is a common problem with TCP/IP, though, in that you always have to keep open connections to n clients, which not only takes up resources but, since TCP/IP is an ordered, queued protocol, can also set the entire game back if any client has a slow connection. In practice your game will always be as laggy as the player with the worst connection.
You have a couple of options.
You can have one thread that is always open and handles connections - your supervisor thread. It holds an array of the open connections' data and dispenses actions to the other threads. This isn't the best option, since you'll quickly encounter race conditions, such as two threads trying to use the same data.
You can switch over to UDP, which will leave your threads wide open since there's no persistent connection. You'd then need to send states to each client, and once they ACK them you can get rid of the data. The majority of games implement UDP nowadays, even turn-based ones.
Beej's guide is probably the most extensive resource on the internet about UDP/TCP and socket programming.
http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html
And there's also Gaffer on Games which is a fantastic resource as well.
http://gafferongames.com/networking-for-game-programmers/udp-vs-tcp/
I am completely new to Twisted, but it looks very promising for my project.
I would like to write a Python Twisted application which reads a record from a text file every x seconds and at the same time listens on a TCP port (acting as a TCP server). If no clients are connected to the TCP server, the records are just discarded. If one or more clients are connected, the records are sent to all of them (all clients receive the same line of the text file).
Can Twisted make this possible with a reasonable number of lines of code?
Could anybody suggest an example to start with?
Thanks
C
Twisted's documentation includes information about how to run a TCP server. It also includes information about how to perform work based on the passage of time. This should cover most of what you need to know.
Jean-Paul,
thanks for your answer.
Below is what I put together. The program sends strings with time stamps to one or more clients connected to the server. Reading synchronously from a file is very simple in this scenario, so I just use a fixed string with the time stamp.
My next step is to substitute the datetime.datetime.now() call with a call to a web service. Basically, what I would like to create is a kind of proxy that is:
a client of the web service, invoking it every x seconds to get the data
a TCP server for a set of clients, streaming data continuously, or rather as soon as a new data chunk is available (as the example below does)
The questions are:
Can you point me to an example of a similar system?
How can I combine the runEverySecond() method call with an asynchronous call to the web service using TCPClient capability of Twisted?
Thanks
C
from twisted.internet import protocol, reactor
from twisted.internet import task
import datetime

class Stream(protocol.Protocol):
    def __init__(self, f):
        self.factory = f

    def connectionMade(self):
        self.start = True

    def forward(self, data):
        if self.start:
            self.transport.write(data)

class StreamFactory(protocol.Factory):
    def __init__(self):
        self.connections = []

    def buildProtocol(self, addr):
        s = Stream(self)
        self.connections.append(s)
        return s

    def runEverySecond(self):
        for c in self.connections:
            c.forward(str(datetime.datetime.now()))

f = StreamFactory()
l = task.LoopingCall(f.runEverySecond)
l.start(1.0)  # call every second

reactor.listenTCP(8000, f)
reactor.run()
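Regarding the second question, one possible way to combine the periodic call with a non-blocking request to the web service is sketched below. It assumes the service is reachable over plain HTTP and uses twisted.web.client.Agent; the URL and the 5-second interval are only placeholders:
from twisted.internet import protocol, reactor, task
from twisted.web.client import Agent, readBody

class Stream(protocol.Protocol):
    def connectionMade(self):
        self.factory.connections.append(self)

    def connectionLost(self, reason):
        self.factory.connections.remove(self)

class StreamFactory(protocol.Factory):
    protocol = Stream

    def __init__(self):
        self.connections = []

factory = StreamFactory()
agent = Agent(reactor)

def broadcast(body):
    # forward the web service response to every connected client
    for c in factory.connections:
        c.transport.write(body)

def pollService():
    # placeholder URL standing in for the real web service
    d = agent.request(b'GET', b'http://example.com/data')
    d.addCallback(readBody)
    d.addCallback(broadcast)
    d.addErrback(lambda failure: None)  # ignore failures so the loop keeps running
    return d

loop = task.LoopingCall(pollService)
loop.start(5.0)  # poll the web service every 5 seconds

reactor.listenTCP(8000, factory)
reactor.run()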
I know that it is not possible to run multiple loops at the same time in Python.
Anyhow, what I need to achieve is this: I have one loop that reads loads of sensor data every 0.25 seconds.
At the same time I have signal devices running in parallel that need to send signals every 3 seconds.
My question is: what is the best practice to achieve this?
Does it make sense to write two scripts and run them in parallel?
Does it make sense to use threading?
Is there any other possibility in order to make this work?
I would be grateful for code samples.
Thank you!
Edit:
Both loops are absolutely independent.
So, let's say script 1 is running and reading the sensor data; when one of the sensors reads a value < 300, it should start script 2, which will send the signals. When the sensor data gets > 300 again, it should stop script 2.
"Python multiple loops at the same time. I know that it is not possible [...]" - this looks really funny.
It is possible to run two loops at the same time, exactly as you described. And both ways make sense, depending on what you actually need and want. If the tasks are completely independent, you should run them as two scripts. If you need the two loops to accomplish one task and it makes sense for them to be in one file, you can use multiprocessing.
Tested with Python 2.7.5+ and 3.3.2+.
Here is a minimal example:
from multiprocessing import Process
import time

def f(name):
    print('hello', name)
    time.sleep(10)

def d(name):
    print('test2', name)
    time.sleep(10)

if __name__ == '__main__':
    p1 = Process(target=f, args=('bob',))
    p2 = Process(target=d, args=('alice',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
The script runs for 10 seconds and both strings are printed right away, which means everything works.
time python3 ./process.py
hello bob
test2 alice
real 0m10.073s
user 0m0.040s
sys 0m0.016s
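If the two loops do need to coordinate, as in the edit where a sensor reading below 300 should start the signalling and a reading above 300 should stop it, a shared multiprocessing.Event is one simple way to do that. The sketch below is only an illustration; read_sensor and the intervals stand in for the real sensor and signal code:
from multiprocessing import Event, Process
import random
import time

def read_sensor():
    # stand-in for the real sensor read
    return random.randint(0, 600)

def sensor_loop(signal_on):
    while True:
        value = read_sensor()
        if value < 300:
            signal_on.set()    # tell the signal process to run
        else:
            signal_on.clear()  # tell the signal process to pause
        time.sleep(0.25)

def signal_loop(signal_on):
    while True:
        if signal_on.is_set():
            print('sending signal')
        time.sleep(3)

if __name__ == '__main__':
    signal_on = Event()
    p1 = Process(target=sensor_loop, args=(signal_on,))
    p2 = Process(target=signal_loop, args=(signal_on,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()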
It is also possible to run multiple scripts (some as .pyw for convenience) and have them exchange information over UDP sockets. Note that 127.0.0.1 is the address to use to send to your own machine under any circumstance. As for the port, just make sure no other program uses the port you pick - and that means any program that uses ports, even basic router settings.
Sample (send)
import os
from socket import *
host = "127.0.0.1"  # send to this machine, as noted above
port = 9000
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
data = "Random Text"
send = data.encode("ascii")
UDPSock.sendto(send, addr)
UDPSock.close()
Sample (Receive)
import os
from socket import *
host = ""
port = 9000
addr = (host, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
UDPSock.bind(addr)
(data, addr) = UDPSock.recvfrom(1024)  # 1024 is the max number of bytes to receive
data = data.decode('ascii')
UDPSock.close()
You can use these to run separate loops in two separate programs and tell each other what to do.
I'm designing a Python program that will talk to two other processes at the same time through sockets. One of the processes is a C daemon, so that socket will be alive all the time - no problem there. The other process is a PHP web page, so that socket isn't established all the time. Most of the time, the socket is listen()ing on a port.
If both sockets were alive all the time, a simple select() call could be used to monitor input from both. But in my situation this is not possible. How can I achieve this easily?
Thanks,
You can use select() in this case, even in a single-threaded single-process program with only blocking sockets. Here's how you would accept incoming connections with select():
daemonSocket = socket.socket()
...
phpListenSocket = socket.socket()
phpListenSocket.bind(...)
phpListenSocket.listen(...)
phpSocket = None
while True:
    rlist = ...
    rready, wready, eready = select(rlist, [], [])
    if phpListenSocket in rready:
        phpSocket, remoteAddr = phpListenSocket.accept()
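For completeness, a slightly fuller sketch of the same loop, assuming the daemon connection is already established; the addresses, ports, and buffer sizes are placeholders:
import select
import socket

daemonSocket = socket.socket()
daemonSocket.connect(('127.0.0.1', 4000))  # placeholder address of the C daemon

phpListenSocket = socket.socket()
phpListenSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
phpListenSocket.bind(('', 5000))  # placeholder port for the PHP side
phpListenSocket.listen(1)

phpSocket = None

while True:
    # only watch the PHP data socket once a connection actually exists
    rlist = [daemonSocket, phpListenSocket]
    if phpSocket is not None:
        rlist.append(phpSocket)

    rready, wready, eready = select.select(rlist, [], [])

    if phpListenSocket in rready:
        phpSocket, remoteAddr = phpListenSocket.accept()
    if daemonSocket in rready:
        daemonData = daemonSocket.recv(4096)
        # handle data from the daemon here
    if phpSocket is not None and phpSocket in rready:
        phpData = phpSocket.recv(4096)
        if not phpData:  # the PHP side closed; go back to just listening
            phpSocket.close()
            phpSocket = None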