I am working on a Flask-SocketIO application that integrates with zmq. The basic premise of the app is that a zmq message is received by the Flask-SocketIO web server, then that zmq message is converted to a SocketIO message which is sent along to the client (browser). I have the app working, but not exactly the way I want it to.
The zmq event listener needs to run in a separate process from the main server process. I was able to use both Redis and RabbitMQ as a message queue for facilitating SocketIO emits from non-server processes. OK, great. So what's the problem?
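(For reference, the Redis variant that already works for me is just the standard Flask-SocketIO external-process pattern; the event name and namespace below are made up:)

from flask_socketio import SocketIO

# in the non-server process: connect only to the message queue
socketio = SocketIO(message_queue='redis://localhost:6379/0')
socketio.emit('zmq_event', {'data': 'payload'}, namespace='/events')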
The problem is that I'd really like to use zmq as a message queue instead of Redis or RabbitMQ since I'm already integrating with zmq in my app. So I read in the Flask-SocketIO docs that Kombu is the mechanism for supporting other types of message queues. OK, cool. But then I notice that zmq as a transport has been removed from the latest release of Kombu. And as far as I can tell it was only experimental when it was a transport option, like in Kombu 3.0.37.
My first approach was to just try zmq as a message transport via Kombu 3.0.37, but that is not working. I'm still trying to determine exactly why. My best guess right now, after looking at the source code a bit, is that multiple processes are trying to bind a PULL socket to the same port, which just doesn't work, even in a simple independent example. And that makes sense. On that front, my next step is to manually create a PULL socket independently of the zmq transport code, and somehow pass that in as a kind of singleton for the zmq transport code to use.
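The clash I suspect is easy to reproduce with plain pyzmq (the port number is arbitrary):

import zmq

ctx = zmq.Context()
a = ctx.socket(zmq.PULL)
a.bind('tcp://127.0.0.1:5555')

b = ctx.socket(zmq.PULL)
b.bind('tcp://127.0.0.1:5555')  # zmq.error.ZMQError: Address already in use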
Another approach I'm working on is getting zmq to work as a transport in a "Hello World" Kombu example. I swapped the connection string in the sample code for zmq+tcp://localhost. This is where it became clear that I don't understand how to use zmq as a message queue transport. If I run the publisher code, I can send a message; but when I run the consumer code a few seconds later, it says the queue is empty. That makes me think that for zmq to work as a transport, I may need an external zmq message broker of some sort, which I'm assuming I would need to put together on my own. But I haven't yet got my head around how that would work.
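Concretely, this is roughly what I ran, sketched with Kombu's SimpleQueue API (the queue name is arbitrary; the connection string is the one mentioned above):

from kombu import Connection

# publisher
with Connection('zmq+tcp://localhost') as conn:
    queue = conn.SimpleQueue('hello')
    queue.put('Hello World!')

# consumer, run a few seconds later in a separate process
with Connection('zmq+tcp://localhost') as conn:
    queue = conn.SimpleQueue('hello')
    message = queue.get(block=True, timeout=5)  # this is where I see an empty queue
    message.ack()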
Any suggestions? Is this a waste of time to try to pursue zmq as a transport, or should I continue down this path? Miguel Grinberg (author of Flask-SocketIO) has graciously provided some direction, but I wanted to branch out a little to see if anyone else had some thoughts on the matter.
Recently I've been doing a lot of testing around different ways of serving our Django application. I've settled on uwsgi as it seems to fit our needs pretty well.
I've recently discovered that uwsgi also supports WebSockets and started looking into it and found some examples: https://github.com/unbit/uwsgi/blob/master/tests/
After running the example (websockets_chat.py) and taking a look through uwsgi's documentation for their websockets implementation, it appears as though you can only send broadcast, or global, messages.
Has anyone managed to find a way to transmit a message to a particular user or does uwsgi not support that level of communication yet?
Cheers
There is nothing like broadcast or global messages in the websocket spec. It only "upgrades" an HTTP connection to a lower-level one; what you do with that connection is up to you. The examples show integration with Redis as a message exchange, but you are free to make other uses of it.
For your specific case you will need to build a shared list of connected users and implement routing. Remember, you cannot rely on the node.js way of doing things: node is based on a single-threaded setup, so everything there is way simpler. In uWSGI a websocket connection can happen in a thread, a process, or a coroutine, so exchanging data between them is the key.
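As a rough illustration of that routing idea, with Redis as the exchange (the per-user channel naming and the query-string user identification are made up for the sketch):

import uwsgi
import redis

def application(env, start_response):
    # complete the websocket handshake (uWSGI >= 1.9)
    uwsgi.websocket_handshake(env['HTTP_SEC_WEBSOCKET_KEY'],
                              env.get('HTTP_ORIGIN', ''))
    user_id = env['QUERY_STRING']          # hypothetical user identification
    r = redis.StrictRedis()
    pubsub = r.pubsub()
    pubsub.subscribe('user:%s' % user_id)  # one channel per connected user
    while True:
        msg = uwsgi.websocket_recv_nb()    # non-blocking, so we can poll redis too
        if msg:
            # route "target:body" payloads to the target user's channel
            target, _, body = msg.decode().partition(':')
            r.publish('user:%s' % target, body)
        item = pubsub.get_message(timeout=0.1)
        if item and item['type'] == 'message':
            uwsgi.websocket_send(item['data'])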
I'm trying to do something seemingly simple, but I'm not sure how to go about it...
I have an application that publishes messages over a ZMQ PUB socket. I'd like to write a small server that subscribes to these updates and broadcasts them to all connected websocket clients.
This seems simple on the surface, but I don't know how to trigger an event in the twisted reactor when data is received through the subscription. I know ZMQ has an ioloop as well, but it is not the same as twisted...
What's the best way to go about this? I don't have to use twisted; some other Python framework will do if it can deal with this better (rpclib?).
If you want to use ØMQ with Twisted, you can use txZMQ. The documentation on the linked page has a couple of basic examples which ought to get you started.
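A minimal subscriber sketch (the endpoint address is an assumption):

from twisted.internet import reactor
from txzmq import ZmqEndpoint, ZmqFactory, ZmqSubConnection

factory = ZmqFactory()
endpoint = ZmqEndpoint('connect', 'tcp://127.0.0.1:5556')
subscription = ZmqSubConnection(factory, endpoint)
subscription.subscribe(b'')                # empty prefix = receive everything

def got_message(message, tag):
    # runs inside the twisted reactor; broadcast to your websocket clients here
    print('received %r (tag %r)' % (message, tag))

subscription.gotMessage = got_message
reactor.run()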
I'm trying to design a system that will process large amounts of data and send updates to the client about its progress. I'd like to use nginx (which, thankfully, just started supporting websockets) and uwsgi for the web server, and I'm passing messages through the system with zeromq. Ideally the solution could be written in Python, but I'm also open to a Nodejs or even a Go solution.
Here is the flow that I'd like to achieve:
Client visits a website and requests that a large amount of data be processed.
The server farms out the processing to another process/server [the worker] via zeromq, and replies to the client request explaining that processing has begun, including information about how to set up a websocket with the server.
The client sets up the websocket connection and waits for updates.
When the processing is done, the worker sends a "processing done!" message to the websocket process via zeromq, and the websocket process pushes the message down to the client.
Is what I describe possible? I guess I was thinking that I could run uwsgi in emperor mode so that it can handle one process (port) for the webserver and another for the websocket process. I'm just not sure if I can find a way to both receive zeromq messages and manage websocket connections all from the same process. Maybe I have to initiate the final websocket push from the worker?
Any help/correct-direction-pointing/potential-solutions would be much appreciated. Any sample or snippet of an nginx config file with websockets properly routed would be appreciated as well.
Thanks!
Sure, that should be possible. You might want to look at zerogw.
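To flesh that out a little: whichever server you pick, the key is one process that multiplexes both jobs. Here is a minimal sketch with gevent, gevent-websocket, and pyzmq's green variant (ports and endpoints are assumptions, and this stands in for, rather than reproduces, a zerogw or uwsgi setup):

import gevent
from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler
import zmq.green as zmq   # gevent-friendly pyzmq

clients = set()           # currently connected websockets

def zmq_listener():
    # subscribe to the workers' "processing done!" messages
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:5556')   # assumed worker endpoint
    sub.setsockopt(zmq.SUBSCRIBE, b'')
    while True:
        msg = sub.recv()
        for ws in list(clients):
            ws.send(msg.decode())

def app(environ, start_response):
    ws = environ.get('wsgi.websocket')
    if ws is None:
        start_response('400 Bad Request', [('Content-Type', 'text/plain')])
        return [b'websocket connections only']
    clients.add(ws)
    try:
        while ws.receive() is not None:
            pass              # keep the connection open
    finally:
        clients.discard(ws)
    return []

gevent.spawn(zmq_listener)
WSGIServer(('0.0.0.0', 8080), app, handler_class=WebSocketHandler).serve_forever()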
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. The server should first send data to all the clients specifying a sending_interval, and then all the clients should keep sending their data with that time interval between sends. How can I do this with Python socket programming? (I.e., I want multiple-client-to-single-server connectivity, with each client sending data at the time gap specified by the server.) I'd be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a control channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default configuration or by the control channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
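A minimal pyzmq sketch of that layout (ports and the plain-float interval message are arbitrary choices):

import time
import zmq

def server():
    ctx = zmq.Context()
    control = ctx.socket(zmq.PUB)      # broadcasts config to all clients
    control.bind('tcp://*:5556')
    collector = ctx.socket(zmq.SUB)    # listens to every client's data
    collector.bind('tcp://*:5557')
    collector.setsockopt(zmq.SUBSCRIBE, b'')
    time.sleep(1)                      # dodge the slow-joiner problem
    control.send(b'2.0')               # sending_interval in seconds
    while True:
        print(collector.recv())

def client(name):
    ctx = zmq.Context()
    control = ctx.socket(zmq.SUB)      # receives the interval from the server
    control.connect('tcp://localhost:5556')
    control.setsockopt(zmq.SUBSCRIBE, b'')
    data = ctx.socket(zmq.PUB)         # publishes this client's readings
    data.connect('tcp://localhost:5557')
    interval = float(control.recv())   # block until the server speaks
    while True:
        data.send(('%s: reading' % name).encode())
        time.sleep(interval)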
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors, because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher: if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut-and-paste, "golly, it works!" experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
Multi-client to single-server socket programming can be achieved with multithreading. I have implemented both setups:
Single Client and Single Server
Multiclient and Single Server
See my GitHub repo for both: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading means executing multiple threads concurrently within a single process; here it lets the server handle each client connection in its own thread.
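A bare-bones sketch of the idea (one thread per client; the echo handler is a placeholder for your real protocol):

import socket
import threading

def handle(conn, addr):
    # each client connection gets its own thread running this function
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:               # client disconnected
                break
            conn.sendall(data)         # echo back as a placeholder

def serve(host='127.0.0.1', port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle, args=(conn, addr), daemon=True).start()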
To understand it well, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
The fun part of websockets is sending essentially unsolicited content from the server to the browser, right?
Well, I'm using django-websocket by Gregor Müllegger. It's a really wonderful early crack at making websockets work in Django.
I have accomplished "hello world". The way this works: when a request is a websocket request, a websocket object is attached to the request object. Thus, in the view handling the websocket, I can do something like:
request.websocket.send('We are the knights who say ni!')
That works fine. I get the message back in the browser like a charm.
But what if I want to do that without issuing a request from the browser at all?
OK, so first I save the websocket in the session dictionary:
request.session['websocket'] = request.websocket
Then, in a shell, I go and grab the session by session key. Sure enough, there's a websocket object in the session dictionary. Happy!
However, when I try to do:
>>> session.get_decoded()['websocket'].send('With a herring!')
I get:
Traceback (most recent call last):
File "<console>", line 1, in <module>
error: [Errno 9] Bad file descriptor
Sad. :-(
OK, so I don't know much of anything about sockets, but I know enough to sniff around in a debugger, and lo and behold, I see that the socket in my debugger (which is tied to the genuine websocket from the request) has fd=6, while the one that I grabbed from the session-saved websocket has fd=-1.
Can a socket-oriented person help me sort this stuff out?
I'm the author of django-websocket. I'm not a real expert on websockets and networking, but I think I have a decent understanding of what's going on. Sorry for going into great detail; even if most of the answer isn't specific to your question, it might help you at some other point. :-)
How websockets work
Let me explain briefly what a websocket is. A websocket starts as something that looks very much like a plain HTTP request, established from the browser. Through an HTTP header it indicates that it wants to "upgrade" the protocol from HTTP to a websocket. If the server supports websockets, it agrees to the handshake, and both server and client then know that they will use the established TCP socket, formerly used for the HTTP request, as a connection to exchange websocket messages.
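To make this concrete, here is the example handshake from RFC 6455; the client asks to upgrade, and the server answers with a 101 instead of a normal response:

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=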
Besides sending and waiting for messages, both sides of course also have the ability to close the connection at any time.
How django-websocket abuses Python's WSGI request environment to hijack the socket
Now let's get into the details of how django-websocket implements the "upgrading" of the HTTP request in a django request-response cycle.
Django usually uses the WSGI specification to talk to a web server such as Apache or gunicorn. This specification was designed with only the very limited communication model of HTTP in mind: the application gets an HTTP request (only incoming data) and returns the response (only outgoing data). This makes it tricky to force django into the concept of a websocket, where bidirectional communication is allowed.
What I do in django-websocket to achieve this is to dig very deeply into the internals of WSGI and django's request object to retrieve the underlying socket. That TCP socket is then used directly to upgrade the HTTP request to a websocket instance.
Now to your original question ...
I hope the above makes it obvious that when a websocket is established, there is no point in returning an HttpResponse. This is why you usually don't return anything in a view that is handled by django-websocket.
However, I wanted to stick close to the concept of a view that holds the logic and returns data based on the input. This is why all of your websocket handling should happen inside the view.
After you return from the view, the websocket is automatically closed. This is done for a reason: we don't want to keep the socket open for an undefined amount of time and rely on the client (the browser) to close it.
This is why you cannot access a websocket with django-websocket outside of your view. The file descriptor is then of course set to -1, indicating that it's already closed.
Disclaimer
I explained above that I dig into the environment surrounding django to get access, in a very hackish way, to the underlying socket. This is very fragile and also not supposed to work, since WSGI was not designed for it! I also explained that the websocket is closed after the view ends. However, after the websocket has closed down (AND closed the TCP socket), django's WSGI implementation still tries to send an HTTP response: it doesn't know about websockets and thinks it is in a normal HTTP request-response cycle. But the socket is already closed, so the send will fail. This usually causes an exception in django.
This didn't affect my testing with the development server. The browser will never notice (you know... the socket is already closed ;-), but raising an unhandled error on every request is not a good concept. It may leak memory, it doesn't shut down database connections correctly, and many other things will break at some point if you use django-websocket for more than experimenting.
This is why I would really advise you not to use websockets with django yet. It doesn't work by design. Django, and especially WSGI, would need a total overhaul to solve these problems (see this discussion of websockets and WSGI). Until then, I would suggest using something like eventlet. Eventlet has a working websocket implementation (I borrowed some code from eventlet for the initial version of django-websocket), and since it's just plain Python code you can import your models and everything else from django. The only drawback is that you need a second web server running just to handle the websockets.
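For example, a minimal standalone eventlet websocket server looks roughly like this (the echo logic is just a placeholder; inside the handler you could import and query your django models):

import eventlet
from eventlet import wsgi, websocket

@websocket.WebSocketWSGI
def handle(ws):
    # ws.wait() returns None once the client has disconnected
    while True:
        msg = ws.wait()
        if msg is None:
            break
        ws.send('echo: ' + msg)

wsgi.server(eventlet.listen(('0.0.0.0', 7000)), handle)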
As Gregor Müllegger pointed out, websockets can't be properly handled by WSGI, because that protocol was never designed to handle such a feature.
uWSGI, since version 1.9.11, can handle websockets out of the box. Here uWSGI communicates with the application using raw HTTP rather than the WSGI protocol, so a server written that way can handle the protocol internals and keep the connection open over a long period. Having long-lived connections handled by a Django view is not a good idea either, because they would then block a worker thread, which is a limited resource.
The main purpose of websockets is to have the server push messages to the client in an asynchronous way. This can be a Django view triggered by other browsers (e.g. chat clients, multiplayer games), or an event triggered by, say, django-celery (e.g. sports results). It therefore is fundamental for these Django services to use a message queue for pushing messages to the client.
To handle this in a scalable way, I wrote django-websocket-redis, a Django module which can keep all those long-lived websocket connections open in one single thread/process, using Redis as the backend message queue.
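Publishing from any Django code then boils down to a few lines (sketched from memory of the django-websocket-redis API; check its docs for the exact signatures):

from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage

# broadcast to every client subscribed to the 'foobar' facility
redis_publisher = RedisPublisher(facility='foobar', broadcast=True)
redis_publisher.publish_message(RedisMessage('Hello World'))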
You could give stargate a bash: http://boothead.github.com/stargate/ and http://pypi.python.org/pypi/stargate/.
It's built on top of pyramid and eventlet (I also contributed a fair bit of the websocket support and tests to eventlet). The big advantage of pyramid for this sort of thing is that it has the concept of a resource that the URL maps to, rather than just the result of a callable. So you end up with a graph of persistent resources that maps to your URL structure, and websocket connections are simply routed to and connected with those resources.
So you end up only needing to do two things. To receive messages:
class YourView(WebSocketView):
    def handler(self, websocket):
        # register this connection on the resource, then consume messages
        self.request.context.add_listener(websocket)
        while True:
            msg = websocket.wait()
            # do something with the message
And to send a message to all the clients connected to a resource:

resource.send(some_other_message)
Here resource is an instance of stargate.resource.WebSocketAwareContext (as is self.request.context above), and the send method sends the message to all clients connected via the add_listener method.
To publish a message to all of the connected clients, you just call send(message) on the resource node.
I'm hopefully going to write up a little example app in the next week or two to demonstrate this a little better.
Feel free to ping me on github if you want some help with it.
request.websocket probably gets closed when you return from the request handler (the view). The simple solution is to keep the handler alive by not returning from the view. If your server is not multi-threaded, you won't be able to accept any other simultaneous requests, though.
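Something along these lines (using the request.websocket API from the question; the sleep loop is only an illustration):

import time

def my_view(request):
    ws = request.websocket
    ws.send('We are the knights who say ni!')
    while True:            # never return, so django-websocket keeps the socket open
        time.sleep(10)
        ws.send('ni!')     # unsolicited server-to-browser push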