Sharing websocket objects between tornado processes - python

I start the tornado server with multiple processes:
server.bind(8000)
server.start(0)
Assuming I have a 4-processor system, this should create 4 processes. For any client that connects I open a websocket (WS) connection. I want to be able to access websocket objects across processes because I may want to broadcast a message from client A on process 1 to client B on process 2. I have a mongo server, and the solution I thought of was to pickle the WS for client A, store it in mongo, then have process 2 retrieve and unpickle it and use the WS. However, I don't believe pickled objects can be shared between processes.
Can someone suggest the best way to share WS between tornado processes on a multi process system?
Thanks

Live connections cannot be pickled and stored in a database. Each connection is owned by the process that first accepted it, so instead of passing connections around, you pass messages to the process that is handling a particular client.
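For illustration, here is a minimal sketch of that message-passing pattern, assuming a Redis pub/sub channel as the inter-process broker (Redis, the "broadcast" channel name, and the handler names are my additions, not part of the original question). Each forked tornado process keeps a set of the websockets it owns; a broadcast is published once and every process relays it to its local clients.

import threading

import redis
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket

r = redis.Redis()      # assumed broker; connects lazily, so safe to create pre-fork
local_clients = set()  # websockets owned by THIS process only


class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        local_clients.add(self)

    def on_close(self):
        local_clients.discard(self)

    def on_message(self, message):
        # Publish once; every tornado process receives and relays it.
        r.publish("broadcast", message)


def relay(io_loop):
    # Background thread: forward Redis messages to this process's sockets.
    pubsub = r.pubsub()
    pubsub.subscribe("broadcast")
    for item in pubsub.listen():
        if item["type"] == "message":
            data = item["data"].decode()
            # Hop back onto the IOLoop thread before touching websockets.
            io_loop.add_callback(
                lambda d=data: [c.write_message(d) for c in local_clients])


if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws", WSHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8000)
    server.start(0)  # fork one process per CPU, as in the question
    loop = tornado.ioloop.IOLoop.current()
    threading.Thread(target=relay, args=(loop,), daemon=True).start()
    loop.start()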

Related

Best solution for frequently accessing a database in Python for a network application

I have a network server written in Python that clients connect to. It spawns a new thread for every new client connection, which lives while the client is connected.
While a client is connected, the server keeps querying the database for any updates about that client made from the admin portal.
I was opening a database connection for every thread and leaving it connected while the client was connected, but this becomes a problem when 1-2k clients are connected and the database is holding 1-2k active connections.
I then changed it to close the database connection and reconnect on demand, but now with 2-3k clients the server is making a lot of connects and disconnects to the database.
I tried the MySQL connection pool, but its maximum pool size of 32 is not a solution for me.
Does anyone have another idea or solution?
The problem of having too many clients connected at the same time is not something you can resolve with code alone. When your app gets bigger, you must run multiple instances of the same Python server on different machines, fronted by a load balancer. The load balancer acts like a forwarder: your client connects to it, and it forwards the data to one of the instances of your Python server.
If you want to learn more about load balancing, here are some links:
https://iri-playbook.readthedocs.io/en/feat-docker/loadbalancer.html
https://www.nginx.com/resources/glossary/load-balancing/
Now for the database: instead of creating a connection for every client, you could use a single database connection and share it between threads.
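A minimal sketch of that shared-connection idea, assuming the mysql-connector-python driver (credentials, table, and column names are placeholders): DB-API connections are not thread-safe, so a lock serializes access to the single connection.

import threading

import mysql.connector

_conn = mysql.connector.connect(user="app", password="secret",  # placeholder credentials
                                host="localhost", database="appdb")
_lock = threading.Lock()


def fetch_client_updates(client_id):
    # Called from every client-handler thread; one query runs at a time.
    with _lock:
        cur = _conn.cursor()
        cur.execute("SELECT * FROM updates WHERE client_id = %s", (client_id,))
        rows = cur.fetchall()
        cur.close()
    return rows

This trades connection count for concurrency: all threads queue behind one socket. A middle ground is a small pool of a few such locked connections.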

Python gRPC long polling

I have a server which must notify some clients across a gRPC connection.
Clients connect to the server without a timeout and wait for messages indefinitely. The server notifies clients whenever a new record is added to the database.
How can I manage the server for better performance with multithreading? Maybe I should use a monitor, and whenever a record is added, notify the gRPC server side to retrieve the data from the database and send it to the clients?
What do you think?
Thanks
We have better plans for later, but today the best solution might be to implement something that presents the interface of concurrent.futures.Executor but gives you better efficiency.
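For context, the interface being referred to is the executor that grpc.server() already accepts, so a custom, more efficient Executor can be dropped in later without touching the servicer. A rough sketch (notifier_pb2_grpc, the Subscribe RPC, and watch_new_records are hypothetical names, not from the question):

from concurrent import futures

import grpc

import notifier_pb2_grpc  # hypothetical module generated from notifier.proto


class NotifierServicer(notifier_pb2_grpc.NotifierServicer):
    def Subscribe(self, request, context):
        # Server-streaming RPC: yield a message whenever a record is added.
        for record in watch_new_records():  # hypothetical DB-watch generator
            yield record


def serve():
    # Any object presenting the concurrent.futures.Executor interface can
    # replace ThreadPoolExecutor here once something more efficient exists.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    notifier_pb2_grpc.add_NotifierServicer_to_server(NotifierServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()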

Limit number of connections to a rabbit queue?

I use pika-0.10.0 with rabbitmq-3.6.6 broker on ubuntu-16.04. I designed a Request/Reply service. There is a single Request queue where all clients push their requests. Each client creates a unique Reply queue: the server pushes replies targeting this client to this unique queue. My API can be seen as two messages: init and run.
init messages contain big images, so init is a big and slow request. run messages are lighter, and the server reuses the previously sent images. The server can serve multiple clients. Usually client#1 sends init, then run multiple times. If client#2 comes in and sends init, it will replace client#1's images on the server, and any further run issued by client#1 would use the wrong images. So I am asking:
is it possible to limit the number of connections to a queue? E.g. the server serves one client at a time.
another option would be: the server binds images to a client, saves them, and reuses them when that client runs. This requires more work and will hurt performance if two or more clients' requests are closely interleaved.
sending the images in each run request is not an option; it would be too slow.
I think you have a problem in your design. Logically, each run corresponds to a certain init, so they have to be connected. I'd put a correlation-id field into the init and run events. When the server receives a run, it checks whether a corresponding init was processed and uses the result of that init.
Speaking of performance:
You can make init a worker queue and have multiple processing servers listen to it; there is an example in the RabbitMQ docs.
Then, when an init request comes in, one of the available servers will pick it up and store your images along with the correlation ID. If you have multiple init requests at the same time, no problem: they will be processed eventually (or simultaneously, if servers are free).
The server that did the processing then sends a reply message to the client queue saying the init work is done, along with the name of the queue where the run request has to be published.
When ready, the client sends its run request to the correct queue, as in the sketch below.
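A rough sketch of the client side of this flow, assuming pika's BlockingConnection (queue names and payloads are illustrative): both messages carry the same correlation_id, so the server can store the images under that id and look them up on run instead of trusting whichever init arrived last.

import uuid

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="init")
ch.queue_declare(queue="run")

corr_id = str(uuid.uuid4())
props = pika.BasicProperties(correlation_id=corr_id,
                             reply_to="client-reply-queue")  # assumed reply queue

# init: the big, slow request carrying the images
image_bytes = open("scan.png", "rb").read()  # placeholder image payload
ch.basic_publish(exchange="", routing_key="init", properties=props,
                 body=image_bytes)

# run: the light request; the same correlation_id tells the server
# which stored images to reuse
ch.basic_publish(exchange="", routing_key="run", properties=props,
                 body=b'{"op": "run"}')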
To directly answer the question:
there is a common misconception that you publish to a queue. In RabbitMQ you publish to an exchange, which takes care of routing your messages to a number of queues. So your question really becomes: can I limit the number of publishing connections to an exchange? I'm pretty sure there is no way of doing so on the broker side.
Even if there were a way of limiting the number of connections, imagine this situation:
Client1 comes in and pushes its init request.
Client1 holds its connection, waiting to push run.
Client1 fails or a network partition occurs; its connection gets dropped.
Client2 comes in and pushes its init request.
Client2 fails.
Client1 comes back up, pushes its run, and gets Client2's images.
A connection is a transient thing and cannot be relied upon as a transaction mechanism.

inserting data into a database through threads

Hi, I have a client-server architecture.
1. Server script:
- runs and listens on a socket
- on receiving a client connection, a new thread is forked to handle the client's data
- each thread has to accept the data sent by the client and store it in the database
2. Client script:
- runs on a timer every 0.02 seconds and sends data to the server through the socket
Now when I run both scripts, the database gets locked frequently.
Please let me know how I should handle this.
If you need to see the scripts, let me know.
Your question tags indicate that you are using SQLite. SQLite is not really designed for concurrent writes to the same database; its locks are per database file. This means that your threads are not running in parallel but waiting for an exclusive lock on the entire database, which effectively serializes them.
If you need concurrent writes, you should switch to a client-server database that offers finer-grained locking of writes, such as PostgreSQL.
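If switching databases is not an option, one common workaround (my addition, not part of the answer above) is to funnel all writes through a single dedicated writer thread, so the per-file lock is never contended. A minimal sketch with placeholder table and column names:

import queue
import sqlite3
import threading

write_q = queue.Queue()


def writer():
    # The connection lives only in this thread, which sqlite3 requires.
    conn = sqlite3.connect("data.db")
    conn.execute("CREATE TABLE IF NOT EXISTS samples (client TEXT, value REAL)")
    while True:
        row = write_q.get()
        if row is None:          # sentinel to shut down cleanly
            break
        conn.execute("INSERT INTO samples VALUES (?, ?)", row)
        conn.commit()
    conn.close()


threading.Thread(target=writer, daemon=True).start()

# Any client-handler thread enqueues instead of touching the database:
write_q.put(("client-1", 3.14))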

pymongo connection pooling and client requests

I know pymongo is thread-safe and has a built-in connection pool.
In a web app that I am working on, I am creating a new connection instance on every request.
My understanding is that since pymongo manages the connection pool, creating a new connection on each request isn't the wrong approach: at the end of the request the connection instance is reclaimed and becomes available to subsequent requests.
Am I correct here, or should I just create a single instance to use across multiple requests?
The "wrong approach" depends upon the architecture of your application. With pymongo being thread-safe and automatic connection pooling, the actual use of a single shared connection, or multiple connections, is going to "work". But the results will depend on what you expect the behavior to be. The documentation comments on both cases.
If your application is threaded, from the docs, each thread accessing a connection will get its own socket. So whether you create a single shared connection, or request a new one, it comes down to whether your requests are threaded or not.
When using gevent, you can have a socket per greenlet. This means you don't have to have a true thread per request. The requests can be async, and still get their own socket.
In a nutshell:
If your webapp requests are threaded, it doesn't matter which way you access a new connection; the result will be the same (socket per thread).
If your webapp is async via gevent, it doesn't matter which way you access a new connection; the result will be the same (socket per greenlet).
If your webapp is async but NOT via gevent, then you have to take into account the documentation's notes on the best suggested workflow.
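As a minimal sketch of the single-instance option (database and collection names are assumptions): create one MongoClient per process at import time and reuse it from every request handler; the built-in pool hands each thread or greenlet its own socket.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # created once per process
db = client["appdb"]                               # assumed database name


def handle_request(user_id):
    # Safe to call from many threads: the client's internal pool manages
    # socket checkout and checkin per thread.
    return db.users.find_one({"_id": user_id})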
