I set up a Django + nginx + uwsgi server. In my Django application, I want to send several HTTP requests in parallel. I created multiple threads and sent each request in its own thread. However, when I checked the timestamp at which each request was sent, I saw that all the requests went out sequentially.
Could anyone tell me how I can send the HTTP requests in parallel?
I'm guessing this is due to Python threads and the GIL. If you use multiprocessing instead of threads, you should be able to send them truly in parallel.
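A minimal sketch of that idea using a process pool (the requests library and the URLs are just placeholders); note that for purely I/O-bound calls a thread pool often works too, since the GIL is released while waiting on the socket:

    from multiprocessing import Pool

    import requests  # any HTTP client works; requests is assumed here

    URLS = [
        "https://example.com/a",  # illustrative endpoints
        "https://example.com/b",
        "https://example.com/c",
    ]

    def fetch(url):
        # Each call runs in its own worker process, so the requests go out concurrently.
        return url, requests.get(url, timeout=10).status_code

    if __name__ == "__main__":
        with Pool(processes=len(URLS)) as pool:
            for url, status in pool.map(fetch, URLS):
                print(url, status)

Inside a Django view you would normally hand work like this to a task queue rather than fork per request, but the sketch shows the mechanism.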
I have a question about how a gRPC server handles multiple requests in parallel. I have a gRPC server, the server provides an endpoint to handle client requests, and there are multiple clients sending requests to that same endpoint.
When different clients send multiple requests to the server at the same time, how does the server handle the requests it receives simultaneously? Will each request be handled by its own thread, or will the requests be queued and handled one by one?
Thanks!
HTTP/2 connections have a limit on the maximum number of concurrent streams open on a connection at one time. By default, most servers set this limit to 100 concurrent streams.
A gRPC channel uses a single HTTP/2 connection, and concurrent calls are multiplexed on that connection. When the number of active calls reaches the connection stream limit, additional calls are queued in the client. Queued calls wait for active calls to complete before they are sent. Applications with high load, or long running streaming gRPC calls, could see performance issues caused by calls queuing because of this limit.
But this problem has its own solution. For example, in .NET we can set the following option when creating the GrpcChannel:
SocketsHttpHandler.EnableMultipleHttp2Connections = true
which means that when the concurrent stream limit is reached, the channel creates additional HTTP/2 connections.
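On the server side, the answer depends on the implementation. As an illustration (not part of the original answer), in Python's grpcio each incoming RPC is dispatched to a worker thread from the pool you pass to grpc.server, calls beyond the pool size wait for a free thread, and calls beyond maximum_concurrent_rpcs are refused with RESOURCE_EXHAUSTED. A minimal sketch, with the generated servicer module left as an assumption:

    from concurrent import futures

    import grpc

    # my_service_pb2_grpc is assumed to be generated from your .proto file.
    # import my_service_pb2_grpc

    def serve():
        server = grpc.server(
            futures.ThreadPoolExecutor(max_workers=10),  # up to 10 RPCs handled in parallel
            maximum_concurrent_rpcs=100,                 # beyond this, new RPCs are refused
        )
        # my_service_pb2_grpc.add_MyServiceServicer_to_server(MyServiceServicer(), server)
        server.add_insecure_port("[::]:50051")
        server.start()
        server.wait_for_termination()

    if __name__ == "__main__":
        serve()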
I want to implement long polling in Python using cyclone or tornado, with the scalability of the service in mind from the beginning. Clients might stay connected to this service for hours. My concept:
Client HTTP requests will be processed by multiple tornado/cyclone handler threads behind an NGINX proxy (serving as a load balancer). There will be multiple data queues: one for all unprocessed requests from all clients, and the remaining queues holding responses specific to each connected client, previously generated by worker processes. When a request reaches a tornado/cyclone handler thread, its data will be pushed onto the worker queue and processed by the workers (which connect to the database etc.). Meanwhile, the tornado/cyclone handler thread will look into the client-specific queue and send a response with data back to the client (if there is something waiting in the queue). Please see the diagram.
Simple diagram: https://i.stack.imgur.com/9ZxcA.png
I am considering a queue system because some requests might be pretty heavy on the database, and some requests might create notifications and messages for other clients. Is this the way to go towards a scalable server, or is it just overkill?
After doing some research I have decided to go with tornado WebSockets connected to ZeroMQ, inspired by this answer: Scaling WebSockets with a Message Queue.
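A minimal sketch of that setup, assuming Tornado 6.x and pyzmq with its asyncio integration; the endpoint, port, and message format are illustrative:

    import asyncio

    import tornado.web
    import tornado.websocket
    import zmq
    import zmq.asyncio

    clients = set()  # currently connected websocket clients

    class PushHandler(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)

        def on_close(self):
            clients.discard(self)

    async def relay_from_zmq():
        # Workers PUBlish results; this coroutine fans them out to connected clients.
        ctx = zmq.asyncio.Context()
        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://127.0.0.1:5556")       # illustrative worker endpoint
        sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything
        while True:
            msg = await sub.recv_string()
            for client in list(clients):
                client.write_message(msg)

    async def main():
        app = tornado.web.Application([(r"/ws", PushHandler)])
        app.listen(8888)
        relay = asyncio.create_task(relay_from_zmq())  # keep a reference to the task
        await asyncio.Event().wait()                   # run forever

    if __name__ == "__main__":
        asyncio.run(main())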
I currently have a program that does work on a large set of data. At one point in the process it sends the data to a server for more work to be done, then periodically polls for the completed data, sleeping if it is not ready and repeating until it fetches the data, after which it continues to work locally.
Instead of me polling repeatedly until the external server has finished, the external server can send a simple HTTP POST to an address I designate once the work has finished.
So I assume I need Flask running at an address that can receive the notification, but I'm unsure of the best way to incorporate Flask into the original program. I am thinking of just splitting my program into two parts.
part1.py
does work --> send to external server
part1 ends
flask server.py
receives data --> spawns part2.py with received data
The original program uses multiprocessing pools to offset waiting for the server responses, but if I use Flask, can I just repeatedly spawn new instances of part2 to work on the data as it is received?
Am I doing this all completely wrong? I've just put this together with some googling and feel out of my depth.
You can use a broker with a message queue, e.g. Celery + Redis or RabbitMQ. Then, when the other server finishes doing whatever it has to do with the data, it can produce an event and the first server will receive a notification.
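A rough sketch of that shape, assuming Celery with a Redis broker; the /callback route and the part2 task name just mirror the split described above:

    from celery import Celery
    from flask import Flask, request

    app = Flask(__name__)
    celery = Celery(__name__, broker="redis://localhost:6379/0")  # RabbitMQ works too

    @celery.task
    def part2(payload):
        # The work previously done by part2.py goes here, one task per dataset.
        ...

    @app.route("/callback", methods=["POST"])
    def callback():
        # The external server POSTs the finished data here; hand it straight to a worker
        # instead of spawning a new process per notification.
        part2.delay(request.get_json())
        return "", 204

The Flask app and a worker process (started with something like celery -A yourmodule worker) then run side by side, and each notification becomes one queued task rather than a newly spawned part2 process.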
I have a Flask application with many long-running asynchronous tasks (~hours). It's important that the state of these tasks is communicated to the client.
I use Celery to manage the background task queue, and I'm currently trying to broadcast updates to the client from each background thread via SocketIO. Is this possible? Is there a better-suited strategy for achieving what I would like?
You did not say, but I assume you plan on using Flask-SocketIO to handle the server-side SocketIO and not the official Node.js server, correct?
What you want to do can be done, but with the current version of Flask-SocketIO, the problem is that the process that hosts the Flask and Flask-SocketIO server owns the socket connections with the clients, so it is the only process that can communicate with them. At this time, Flask-SocketIO does not offer any help in sending data to clients from other processes such as Celery workers; this part you have to implement yourself. Specifically for Celery, you can have your long running tasks expose progress information that the server process can pick up and send to the clients.
I am currently working on improvements to Flask-SocketIO that will enable any process to send messages to connected clients using a Redis pub/sub backend for communication to the Flask-SocketIO server. Once this work is completed you will be able to write data to any client transparently from your Celery process.
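For reference, in later Flask-SocketIO releases this landed as the message_queue option: the server and any external process point at the same Redis (or RabbitMQ) instance, and the external process can emit directly. A rough sketch, with the Redis URL and event name purely illustrative:

    # server.py -- the process that owns the websocket connections
    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app, message_queue="redis://localhost:6379/0")

    # worker.py -- e.g. inside a Celery task, in a different process
    from flask_socketio import SocketIO

    queue_only = SocketIO(message_queue="redis://localhost:6379/0")  # write-only client
    queue_only.emit("task_progress", {"percent": 42})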
You also ask if there is another alternative. You should consider having the client poll the server for status. If the updates do not need to be very frequent, this option is going to be much easier to implement: the client asks the server for the status of a given task, and the server in turn asks the Celery task. I showed this approach in my Flask+Celery blog article.
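A minimal sketch of that polling approach, assuming the Celery task reports progress via update_state; the route and meta fields are illustrative:

    from celery import Celery
    from celery.result import AsyncResult
    from flask import Flask, jsonify

    app = Flask(__name__)
    celery = Celery(__name__, broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/0")

    @app.route("/status/<task_id>")
    def task_status(task_id):
        # The client polls this URL; on the worker side the task calls
        # self.update_state(state="PROGRESS", meta={"percent": ...}) as it goes.
        result = AsyncResult(task_id, app=celery)
        payload = {"state": result.state}
        if isinstance(result.info, dict):
            payload.update(result.info)
        return jsonify(payload)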
I was able to solve this by creating an endpoint on the Flask server. See my answer here for details.
I'm trying to design a system that will process large amounts of data and send updates to the client about its progress. I'd like to use nginx (which, thankfully, just started supporting websockets) and uwsgi for the web server, and I'm passing messages through the system with zeromq. Ideally the solution could be written in Python, but I'm also open to a Node.js or even a Go solution.
Here is the flow that I'd like to achieve:
Client visits a website and requests that a large amount of data be processed.
The server farms out the processing to another process/server [the worker] via zeromq, and replies to the client request explaining that processing has begun, including information about how to set up a websocket with the server.
The client sets up the websocket connection and waits for updates.
When the processing is done, the worker sends a "processing done!" message to the websocket process via zeromq, and the websocket process pushes the message down to the client.
Is what I describe possible? I guess I was thinking that I could run uwsgi in emperor mode so that it can handle one process (port) for the webserver and another for the websocket process. I'm just not sure if I can find a way to both receive zeromq messages and manage websocket connections from the same process. Maybe I have to initiate the final websocket push from the worker?
Any help/correct-direction-pointing/potential-solutions would be much appreciated. Any sample or snippet of an nginx config file with websockets properly routed would be appreciated as well.
Thanks!
Sure, that should be possible. You might want to look at zerogw.
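Regarding the nginx config asked about above, a minimal sketch of a location block that proxies WebSocket upgrades (supported since nginx 1.3.13); the path, upstream port, and timeout are illustrative:

    location /ws/ {
        proxy_pass http://127.0.0.1:8080;        # the process holding the websocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                # keep long-lived connections open
    }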