I have a Flask application running with gevent-socketio that I create this way:
server = SocketIOServer(('localhost', 2345), app, resource='socket.io')
gevent.spawn(send_queued_messages_loop, server)
server.serve_forever()
I launch send_queued_messages_loop in a gevent greenlet that keeps polling a gevent.Queue where my program stores data to send to the connected socket.io clients.
I have tried different approaches to stop the server (such as calling sys.exit), either from the socket.io handler (when the client sends a socket.io message) or from a normal route (when the client makes a request to /shutdown), but in every case sys.exit seems to fail because of the presence of greenlets.
I tried calling gevent.shutdown() first, but this does not seem to change anything.
What would be the proper way to shutdown the server?
Instead of using serve_forever() create a gevent.event.Event and wait for it. To actually initiate shutdown, trigger the event using its set() method:
import gevent
from gevent.event import Event

stopper = Event()

server = SocketIOServer(('localhost', 2345), app, resource='socket.io')
server.start()
gevent.spawn(send_queued_messages_loop)

try:
    stopper.wait()
except KeyboardInterrupt:
    print
No matter where you want to terminate your process from, all you need to do is call stopper.set().
The try..except is not strictly necessary, but I prefer not to get a stack trace on a clean CTRL-C exit.
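For example (a rough sketch, assuming a Flask route named /shutdown and the stopper and server objects above), any handler can trigger a clean exit by setting the event; the main greenlet then returns from stopper.wait() and can stop the server:

@app.route('/shutdown')
def shutdown():
    # Setting the event wakes up stopper.wait() in the main greenlet.
    stopper.set()
    return 'Shutting down'

# Back in the main greenlet, after stopper.wait() returns:
server.stop()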
I am trying to detect a failed connection when using the Twisted endpoint connect() function. What is odd is that the following works under Windows and gives the expected result, but on a Linux/Mac OS system I never see the print statement from the errback. Is my code incorrect, or does Twisted on Windows work differently from the rest?
class Gateway():
    def __init__(self):
        from twisted.internet.endpoints import TCP4ClientEndpoint

        endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8000)
        authInterfaceFactory = AuthInterfaceFactory(self.__authMsgProcessor)
        d = endpoint.connect(authInterfaceFactory)
        d.addErrback(self.ConnFailed)
        print("WAITING...")

    def ConnFailed(self, msg):
        print("[DEBUG] Errback : {0}".format(msg))
Windows Result
WAITING...
[DEBUG] Errback : [Failure instance: Traceback (failure with no frames): : Connection was refused by other side: 10061: No connection could be made because the target machine actively refused it..]
I created a standalone client that uses endpoint connect() and its errback fired immediately, but when I use the same code in my setup, where the reactor runs in a thread, it doesn't:
self.__networkThread = threading.Thread(target=reactor.run,
                                        kwargs={"installSignalHandlers": False})
self.__networkThread.start()

from twisted.internet.endpoints import TCP4ClientEndpoint

endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8000)
d = endpoint.connect(authInterfaceFactory)
d.addErrback(self.ConnFailed)
d.addCallback(self.ConnOK)
Is the logic incorrect when running a reactor in a thread (I have to, as I want it started at the beginning)?
You can't run the reactor in one thread and use Twisted APIs in another. Apart from a couple of APIs dedicated specifically to interacting with threads, you must call all Twisted APIs from a single thread.
"I want it started at the beginning" doesn't sound like a reason to use threads. Many, many Twisted-using programs start the reactor "at the beginning" without threads.
(Also please take this as an excellent example of the need for complete examples.)
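If the reactor really does have to live in a background thread, a minimal sketch (assuming the authInterfaceFactory instance from the question; the callback names are illustrative) is to hand the connection attempt to the reactor with reactor.callFromThread, one of the few thread-safe Twisted APIs:

from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ClientEndpoint

def conn_ok(protocol):
    print("[DEBUG] Connected: {0}".format(protocol))

def conn_failed(failure):
    print("[DEBUG] Errback : {0}".format(failure))

def connect():
    # Runs inside the reactor thread, so endpoint APIs are safe to use here.
    endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8000)
    d = endpoint.connect(authInterfaceFactory)  # factory instance as in the question
    d.addCallback(conn_ok)
    d.addErrback(conn_failed)

# From the non-reactor thread, schedule the call instead of invoking it directly:
reactor.callFromThread(connect)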
I have a websocket server using Autobahn that runs some code in onMessage.
During this code an exception happens; I wrap it in a try/except clause so I can log the error, and it logs correctly. However, the connection to the (JavaScript) client disconnects.
The problem is that if I load a new page, it does not reconnect to the websocket server. I tried putting the server startup code in a while loop, but the client still does not connect successfully.
if __name__ == '__main__':
    import sys

    from twisted.python import log
    from twisted.internet import reactor, task, threads

    logging.basicConfig(filename='/var/log/myerror.log', level=logging.ERROR)
    log.startLogging(sys.stdout)

    factory = WebSocketServerFactory(u"ws://127.0.0.1:9000", debug=False)
    factory.protocol = MyServerProtocol
    factory.setProtocolOptions(maxConnections=50)

    reactor.listenTCP(9000, factory)
    reactor.run()
Does anyone know how to make it so even if the code in 'onMessage' has an exception, the websocket server will continue to accept connections and give other clients the opportunity to run?
Found the problem. I had placed reactor.callFromThread(reactor.stop) elsewhere in the code and it was killing the reactor.
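For the original question itself, here is a minimal sketch (not the poster's code; self.handle is a hypothetical application method) of guarding onMessage so a failing handler only logs the error; an exception in one protocol instance does not stop the listening socket from accepting new connections:

import logging
from autobahn.twisted.websocket import WebSocketServerProtocol

class MyServerProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        try:
            result = self.handle(payload)  # hypothetical application logic
            self.sendMessage(result, isBinary)
        except Exception:
            logging.exception("onMessage failed")  # log it and keep serving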
I'm using Python 3.4 and Flask.
I want to send a message to the clients when a comment has been written.
Since I am using Flask, I have to run the websocket server in another thread.
In the server thread, len(server.clients) returns the correct number of clients.
But when I call article_comments from the main thread, len(server.clients) returns 0, so no clients receive the message.
How can I solve this problem?
server = WebsocketServer(5001)

def server_thread():
    server.run_forever()

Thread(target=server_thread).start()

def article_comments():
    server.send_message_to_all("Hello World!")
I have a web server running on Django.
Users can create events that are postponed in time.
These events must be recorded in a queue and processed on another server.
Initially I thought to use Twisted, something like:
# client - django server
factory = pb.PBClientFactory()
reactor.connectTCP(server_ip, server_port, factory)
d = factory.login(credentials.UsernamePassword(login, paswd))
d.addCallbacks(self.good_connected, self.bad_connected)
d.addCallback(self.add_to_queue)
reactor.run()

def add_to_queue(self, p):
    p.callRemote("pickup", data)

# server - twisted server
def perspective_pickup(self, data):
    reactor.callLater(timeout, self.pickup_from_queue)
But now I have big doubts about this approach. Should I not use Twisted at all, or connect it with Django differently?
Running Twisted inside of Django is not a good idea anyway. So try Celery instead, or run an HTTP server with Twisted and use urllib on the Django side to send data to the Twisted server.
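A minimal sketch of the Celery route, assuming a Redis broker and an illustrative task name (process_event): the Django view just schedules the task with a countdown instead of talking to a Twisted server.

from celery import Celery

celery_app = Celery('events', broker='redis://localhost:6379/0')

@celery_app.task
def process_event(data):
    # Handle the postponed event on the worker machine.
    ...

# In the Django view: run the event `timeout` seconds from now.
# process_event.apply_async(args=[data], countdown=timeout)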
I'm using python2.6 with HTTPServer and the ThreadingMixIn, which will handle each request in a separate thread. I'm also using HTTP1.1 persistent connections ('Connection: keep-alive'), so neither the server or client will close a connection after a request.
Here's roughly what the request handler looks like:
request, client_address = sock.accept()
rfile = request.makefile('rb', rbufsize)
wfile = request.makefile('wb', wbufsize)

global server_stopping
while not server_stopping:
    request_line = rfile.readline()  # 'GET / HTTP/1.1'
    # etc - parse the full request, write the server response to wfile, etc.

wfile.close()
rfile.close()
request.close()
The problem is that if I stop the server, there will still be a few threads waiting on rfile.readline().
I would put a select([rfile, closefile], [], []) above the readline() and write to closefile when I want to shut down the server, but I don't think that would work on Windows because select only works with sockets there.
My other idea is to keep track of all the running requests and call rfile.close() on each of them, but then I get broken pipe errors.
Ideas?
You're almost there: the correct approach is to call rfile.close(), catch the resulting broken pipe errors, and exit your loop when that happens.
If you set daemon_threads to True in your HTTPServer subclass, the activity of the threads will not prevent the server from exiting.
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True
You could work around the Windows problem by making closefile a socket, too. After all, since it's presumably something opened by your main thread, it's up to you whether to open it as a socket or a file ;-)
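A rough sketch of that idea (the names here are illustrative, not from the original code): build a loopback socket pair by hand so each handler thread can select() on both the request socket and a shutdown socket, which works on Windows as well.

import select
import socket

def make_socketpair():
    # Emulate socket.socketpair() with a loopback TCP connection (works on Windows).
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(listener.getsockname())
    server_end, _ = listener.accept()
    listener.close()
    return server_end, client

close_wait, close_notify = make_socketpair()

# In each handler thread, wait on both the request socket and the shutdown socket:
#     readable, _, _ = select.select([request, close_wait], [], [])
#     if close_wait in readable:
#         break  # server is shutting down; close the connection and exit the loop
#
# To shut down, send a byte from the main thread:
#     close_notify.send('x')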