Hi, I have a client-server architecture.
1. Server script:
- runs and listens on a socket.
- on receiving a client connection, a new thread is forked to handle the client data
- each thread has to accept the data sent by the client and store it in the database
2. Client script:
- runs on a timer every 0.02 seconds and sends data to the server through the socket
Now when I run both scripts, the database gets locked frequently.
Please let me know how I should handle this.
If you need to see the scripts, let me know.
Your question tags indicate that you are using SQLite. The SQLite database is not really designed for concurrent operation on the same database; its locks are per database file. This means that your threads are not running in parallel, but waiting for an exclusive lock on the entire database, which effectively serializes them.
If you need concurrent writes, you should switch to a client-server database that offers finer-grained locking of writes, such as PostgreSQL.
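If you want to stay on SQLite for now, one common workaround is to funnel all writes through a single dedicated writer thread so the per-client threads never contend for the database lock. Here is a minimal sketch of that idea (the table and column names are placeholders, not taken from your scripts):

```python
import queue
import sqlite3
import threading

write_queue = queue.Queue()

def writer():
    conn = sqlite3.connect("data.db")
    conn.execute("PRAGMA journal_mode=WAL")   # reduces reader/writer blocking
    while True:
        payload = write_queue.get()           # blocks until a client thread enqueues data
        conn.execute("INSERT INTO readings (value) VALUES (?)", (payload,))
        conn.commit()

threading.Thread(target=writer, daemon=True).start()

# In each per-client thread, instead of touching the database directly:
# write_queue.put(client_data)
```

This keeps exactly one writer on the database file, so the "database is locked" errors disappear at the cost of writes being queued.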
I am writing a React app with a Flask backend; I want it to be able to receive data through a serial port, process the data, and graph it. Currently I am using the backend to send certain pieces of data (current time and available ports) to React. I want to set up a background Python thread that will run continuously to read data from the serial port using pyserial, process it, and send it to React, but I'm not sure what the best way to accomplish this is. My initial search brought me to Celery; however, I'm not sure if it's a good option for a continuous task. Any help is much appreciated!
The problem is that reading from a serial port is normally done in a blocking way. That means you do not poll periodically, but instead you open the port once and then read all the time, waiting for new data to come.
What you need is a separate thread. This is a part of the program that runs in parallel to your normal web server. Then you need some sort of database to communicate between that thread and the web server. If you want your data to persist between device and server restarts, you should install a real database like Postgres. If not, you can simply use an array in your application's memory.
In the thread, read from the serial port and write the values to the database/array.
In your REST endpoint, you output the last X values.
Then your client can poll against this endpoint.
(If you want to do it really fancy, you can use a more event-driven approach, but this would be more complicated to implement)
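A rough sketch of the thread-plus-endpoint idea, assuming pyserial and an in-memory deque (the port name, baud rate, and endpoint path are placeholders, not from your setup):

```python
import threading
from collections import deque
from flask import Flask, jsonify
import serial  # pyserial

app = Flask(__name__)
latest = deque(maxlen=1000)  # shared buffer between the reader thread and Flask

def read_serial():
    port = serial.Serial("/dev/ttyUSB0", 9600)   # blocking read loop
    while True:
        line = port.readline().decode(errors="ignore").strip()
        latest.append(line)                      # process/parse as needed here

@app.route("/data")
def data():
    return jsonify(list(latest)[-50:])           # last X values for React to poll

threading.Thread(target=read_serial, daemon=True).start()
app.run()
```

Your React app can then poll `/data` on an interval and feed the result straight into the graph.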
I am developing a kind of chat app.
The server side uses Python and PostgreSQL; the clients are iOS (Xcode) and Android (Java), with web planned for the next phase.
The server program runs continuously on Ubuntu Linux, and I create a thread for every client connection (the server program is written in Python). I haven't decided how the DB operations should be handled.
Should I create one general DB connection and use it for every client's DB operations (insert, update, delete, etc.)? In that case, I guess I will run into locking issues in the future (e.g. when I try to get the chat message list while another user is inserting).
If I create a DB connection whenever a client connects to my server, there may be too many connections, which could cause performance issues in the future.
If I create a DB connection before each DB operation, there will be a lot of connection open and close overhead.
What's your opinion? What's the best way?
The best way would be to maintain a pool of database connections in the server side.
For each request, use an available connection from the pool to do database operations and release it back to the pool once you're done.
This way you will not be creating new db connections for each request, which would be a costly operation.
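A minimal sketch of this with psycopg2's built-in pool (the connection details, table, and query are placeholders, not from your app):

```python
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(
    minconn=1, maxconn=20,
    dbname="chat", user="app", password="secret", host="localhost",
)

def save_message(sender, text):
    conn = db_pool.getconn()          # borrow a connection from the pool
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO messages (sender, body) VALUES (%s, %s)",
                (sender, text),
            )
        conn.commit()
    finally:
        db_pool.putconn(conn)         # always return it, even on error
```

Each client-handling thread calls `save_message()` (or a similar helper) and never holds a connection longer than a single operation.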
I have a server which must notify some clients across a gRPC connection.
Clients connect to the server without a timeout and wait for messages. The server should notify clients whenever a new record is added to the database.
How can I manage the server for better performance with multithreading? Maybe I should use a monitor, and when a record is added, notify the gRPC server side to retrieve the data from the database and send it to the clients?
What do you think?
Thanks
We have better plans for later, but today the best solution might be to implement something that presents the interface of concurrent.futures.Executor but gives you better efficiency.
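For illustration only, a minimal sketch of what "presents the Executor interface" could look like: subclass ThreadPoolExecutor, put your own scheduling logic in `submit()`, and hand that object to `grpc.server()`. The class name and worker count below are my own placeholders, not part of the answer.

```python
from concurrent import futures
import grpc

class TunedExecutor(futures.ThreadPoolExecutor):
    """Keeps the Executor interface that grpc.server() expects, while
    allowing custom dispatch logic (prioritization, batching, ...)."""
    def submit(self, fn, *args, **kwargs):
        # Custom scheduling could go here before delegating to the pool.
        return super().submit(fn, *args, **kwargs)

server = grpc.server(TunedExecutor(max_workers=16))
# servicer registration, server.add_insecure_port(...), and server.start()
# are omitted here; they follow the usual gRPC Python pattern.
```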
I start the tornado server with multiple processes:
server.bind(8000)
server.start(0)
Assuming I have a 4-processor system, this should create 4 processes. For any client that connects I start a websocket (WS) connection. I want to be able to access websocket objects between processes because I may want to broadcast a message from client A on process 1 to client B on process 2. I have a mongo server, and the solution I thought of was to pickle the WS for client 1, store it in mongo, then have process 2 retrieve and unpickle it and use the WS. However, I don't believe pickled objects can be shared between processes.
Can someone suggest the best way to share WS between tornado processes on a multi process system?
Thanks
Live connections cannot be pickled and stored in a database. Each connection is owned by the process that first accepted it; rather than passing connections around, you pass messages to the process that is handling a particular client.
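A minimal sketch of the message-passing approach, assuming Redis pub/sub as the inter-process channel (Redis is my assumption here, not part of the question). Each Tornado process keeps only its own live connections and forwards broadcast messages to them:

```python
import threading
import redis  # assumes the redis-py package is installed
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket

LOCAL_CLIENTS = set()  # websockets accepted by *this* process only

class ChatSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        LOCAL_CLIENTS.add(self)

    def on_close(self):
        LOCAL_CLIENTS.discard(self)

    def on_message(self, message):
        # Publish to every process; each one delivers to its own clients.
        redis.Redis().publish("broadcast", message)

def redis_listener(ioloop):
    pubsub = redis.Redis().pubsub()
    pubsub.subscribe("broadcast")
    for item in pubsub.listen():
        if item["type"] == "message":
            data = item["data"]
            # Hop back onto the IOLoop thread before touching websockets.
            ioloop.add_callback(
                lambda d=data: [c.write_message(d) for c in LOCAL_CLIENTS]
            )

if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws", ChatSocket)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8000)
    server.start(0)  # fork one process per CPU, as in the question
    loop = tornado.ioloop.IOLoop.current()
    threading.Thread(target=redis_listener, args=(loop,), daemon=True).start()
    loop.start()
```

Any broker (Redis, RabbitMQ, a ZeroMQ bus) works; the key point is that only the owning process ever calls `write_message()` on a connection.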
I'm trying to design a system that will process large amounts of data and send updates to the client about its progress. I'd like to use nginx (which, thankfully, just started supporting websockets) and uwsgi for the web server, and I'm passing messages through the system with zeromq. Ideally the solution could be written in Python, but I'm also open to a Nodejs or even a Go solution.
Here is the flow that I'd like to achieve:
1. Client visits a website and requests that a large amount of data be processed.
2. The server farms out the processing to another process/server [the worker] via zeromq, and replies to the client request explaining that processing has begun, including information about how to set up a websocket with the server.
3. The client sets up the websocket connection and waits for updates.
4. When the processing is done, the worker sends a "processing done!" message to the websocket process via zeromq, and the websocket process pushes the message down to the client.
Is what I describe possible? I guess I was thinking that I could run uwsgi in emperor mode so that it can handle one process (port) for the web server and another for the websocket process. I'm just not sure if I can find a way to both receive zeromq messages and manage websocket connections from the same process. Maybe I have to initiate the final websocket push from the worker?
Any help/correct-direction-pointing/potential-solutions would be much appreciated. Any sample or snippet of an nginx config file with websockets properly routed would be appreciated as well.
Thanks!
Sure, that should be possible. You might want to look at zerogw.
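If you want to see how a single process can both receive zeromq messages and manage websocket connections, here is a rough sketch using pyzmq's asyncio support together with the third-party `websockets` package (both are my assumptions for illustration; zerogw itself works differently, and the port numbers are placeholders):

```python
import asyncio
import zmq
import zmq.asyncio
import websockets  # assumes a recent version with single-argument handlers

CLIENTS = set()  # currently connected websocket clients

async def ws_handler(websocket):
    CLIENTS.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CLIENTS.discard(websocket)

async def zmq_listener():
    ctx = zmq.asyncio.Context()
    sock = ctx.socket(zmq.PULL)
    sock.bind("tcp://127.0.0.1:5555")  # the worker PUSHes "processing done!" here
    while True:
        msg = await sock.recv_string()
        # Push the worker's message down to every connected client.
        await asyncio.gather(*(ws.send(msg) for ws in CLIENTS))

async def main():
    async with websockets.serve(ws_handler, "0.0.0.0", 8765):
        await zmq_listener()

asyncio.run(main())
```

nginx then proxies `/ws` to port 8765 with the `Upgrade`/`Connection` headers set, while uwsgi keeps serving the regular HTTP requests on its own port.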