I am developing a kind of chat app. The server side is Python and PostgreSQL; the client side is iOS (Xcode) and Android (Java), with web coming in the next phase. The server program runs continuously on Ubuntu Linux, and I create a thread for every client connection (the server is written in Python). I haven't decided how the DB operations should be handled.
Should I create one general DB connection and use that connection for every client's DB operation (insert, update, delete, etc.)? In that case I expect to hit lock/contention issues in the future, e.g. when one user fetches the chat message list while another user is inserting.
If I create a DB connection when each client connects to my server, won't there be too many connections, giving me performance issues in the future?
If I create a DB connection before each DB operation, there will be a huge number of connection open and close operations.
What's your opinion? What's the best way?
The best way is to maintain a pool of database connections on the server side. For each request, take an available connection from the pool, do the database operations, and release it back to the pool once you're done. That way you are not creating a new DB connection for each request, which would be a costly operation.
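Since the question mentions Python and PostgreSQL, here is a minimal sketch using psycopg2's ThreadedConnectionPool (the table, credentials, and pool sizes are placeholder assumptions to adapt):

import psycopg2.pool

db_pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=1, maxconn=20,
    host="localhost", dbname="chat", user="chat_app", password="secret")

def save_message(sender_id, text):
    conn = db_pool.getconn()  # borrow a connection from the pool
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO messages (sender_id, body) VALUES (%s, %s)",
                (sender_id, text))
        conn.commit()
    finally:
        db_pool.putconn(conn)  # always return it, even on error

Each thread borrows a connection only for the duration of one operation, so a pool much smaller than the number of connected clients is usually enough.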
Related
I have a long-running Python executable. It opens an Oracle connection using cx_Oracle at startup. After the connection has been idle for more than 45-60 minutes, it gets this error. Any idea, or is special setup required in cx_Oracle?
Instead of leaving a connection unused in your application, consider closing it when it isn't needed, and then reopening when it is needed. Using a connection pool would be recommended, since pools can handle some underlying failures such as yours and will give you a usable connection.
At application initialization, start the pool once:
import cx_Oracle

pool = cx_Oracle.SessionPool("username", pw,
                             "localhost/orclpdb1", min=0, max=4, increment=1)
Then later get the connection and hold it only when you need it:
with pool.acquire() as connection:
    cursor = connection.cursor()
    for result in cursor.execute(
            """select sys_context('userenv','sid') from dual"""):
        print(result)
The end of the with block will release the connection back to the pool; it won't be closed. The next time acquire() is called, the pool can check whether the connection is still usable. If it isn't, it will give you a new one. Because of these checks, the pool is useful even if you only have one connection.
See my blog post Always Use Connection Pools — and How, most of which applies to cx_Oracle.
But if you don't want to change your code, then try setting the Oracle Network parameter EXPIRE_TIME as shown in the cx_Oracle documentation. This can be set in various places. In C-based Oracle clients like cx_Oracle:
With 18c client libraries, it can be added as (EXPIRE_TIME=n) to the DESCRIPTION section of a connect descriptor.
With 19c client libraries, it can additionally be used via Easy Connect: host/service?expire_time=n.
With 21c client libraries, it can additionally be set in a client-side sqlnet.ora file.
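For example, with the connect-descriptor form it might look like this (host, port, and service name are placeholder assumptions; EXPIRE_TIME is in minutes, and pw is the password variable from the pool example above):

import cx_Oracle

dsn = ("(DESCRIPTION=(EXPIRE_TIME=10)"
       "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))"
       "(CONNECT_DATA=(SERVICE_NAME=orclpdb1)))")
connection = cx_Oracle.connect("username", pw, dsn)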
This may not always help, depending on what is closing the connection. Fundamentally you should fix the root cause, which could be a firewall timeout, or a DBA-imposed user resource limit or DB idle-time limit.
I'm working on a Python application with an SQL Server database using pyodbc, and I need to open multiple connections from the application's side to the database.
I learnt that the maximum number of connections allowed on an instance of SQL Server is 32,767. My understanding is that this is the maximum the DB instance can handle, i.e. across all simultaneous users combined.
Is there a limit on how many connections one client can open towards the same database instance? Is it also 32,767? If so, where and how is this limit configured?
Taking an educated guess here: there is no connection-count limit on the client side towards the same DB instance, and there is a limit of 32,767 on the server side, but the client would most likely run out of other resources well before it gets close to this figure.
I was using one connection, one cursor, and threading to insert multiple records, but kept getting a "connection is busy" error. This was resolved by adding MARS_Connection=yes to the pyodbc connection string, thanks to this MS documentation.
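For reference, a sketch of such a connection string (driver name, server, and credentials are placeholder assumptions for your environment):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.example.com;DATABASE=mydb;"
    "UID=appuser;PWD=secret;"
    "MARS_Connection=yes")  # enable Multiple Active Result Sets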
Related:
How costly is opening and closing of a DB connection?
Can I use multiple cursors on one connection with pyodbc and MS SQL Server?
I have a network server written in Python that clients connect to. It makes a new thread for every new client connection, which lives while the client is connected.
While a client is connected, the server keeps making DB queries to pick up any updates to that client made from the admin portal.
I was opening a DB connection for every thread and leaving it open while the client was connected, but this becomes a problem when 1-2k clients are connected and the DB has 1-2k active connections.
I then changed it to close the DB connection and reconnect on demand, but now, with 2-3k clients, the server is making a lot of connects and disconnects with the DB.
I tried the MySQL connection pool, but its maximum pool size of 32 is not a solution for me.
Does anyone have any other idea or solution?
The problem of having too many clients connected at the same time is not something you can solve with code alone. When your app gets bigger, you must run multiple instances of the same Python server on different machines and put a load balancer in front of them. The load balancer acts as a forwarder: your clients connect to it, and it forwards their traffic to one of your Python server instances.
If you want to learn more about load balancing, here are some links:
https://iri-playbook.readthedocs.io/en/feat-docker/loadbalancer.html
https://www.nginx.com/resources/glossary/load-balancing/
Now for the database: instead of creating a connection for every client, you could use a single database connection and share it between threads.
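Note that a single MySQL connection is not safe to use from several threads at once, so sharing one means serializing access to it yourself. A minimal sketch with mysql-connector-python (credentials, table, and column names are placeholder assumptions):

import threading
import mysql.connector

db_lock = threading.Lock()
db = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="app")

def fetch_updates(client_id):
    # Only one thread may use the shared connection at a time;
    # writes would also need db.commit() inside the lock.
    with db_lock:
        cur = db.cursor()
        try:
            cur.execute(
                "SELECT status FROM clients WHERE id = %s", (client_id,))
            return cur.fetchall()
        finally:
            cur.close()

The trade-off is that every query now waits on the lock, so this suits light per-client traffic; under heavy load the shared connection itself becomes the bottleneck.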
I have a server that must notify some clients over a gRPC connection.
Clients connect to the server without a timeout and wait for messages; the server notifies the clients when a new record is added to the database.
How can I manage the server for better performance with multithreading? Maybe I should use a monitor, so that when a record is added, the server-side gRPC code is notified to retrieve the data from the database and send it to the clients?
What do you think?
Thanks
We have some better plans for later, but today the best solution might be to implement something that presents the interface of concurrent.futures.Executor but gives you better efficiency.
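As an illustration of what "presents the interface" means (a sketch only, not the improvement hinted at above), the minimum a custom executor must provide is submit(); it can then be handed to grpc.server() in place of a ThreadPoolExecutor:

import grpc
from concurrent.futures import Executor, Future

class InlineExecutor(Executor):
    # Runs each callable synchronously on the calling thread. This avoids
    # thread hand-off overhead, but it blocks gRPC's own threads while a
    # handler runs, so it only suits very short handlers.
    def submit(self, fn, *args, **kwargs):
        future = Future()
        try:
            future.set_result(fn(*args, **kwargs))
        except Exception as exc:
            future.set_exception(exc)
        return future

server = grpc.server(InlineExecutor())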
Hi, I have a client-server architecture.
1. Server script:
- runs and listens on a socket
- on receiving a client connection, forks a new thread to handle that client's data
- each thread accepts the data sent by the client and stores it in the database
2. Client script:
- runs on a 0.02-second timer and sends data to the server through the socket
When I run both scripts, the database gets locked frequently.
Please let me know how I should handle this.
If you need to see the scripts, let me know.
Your question tags indicate that you are using SQLite. SQLite is not really designed for concurrent operation on the same database; its locks are per database file. This means that your threads are not running in parallel but are waiting for an exclusive lock on the entire database, which effectively serializes them.
If you need concurrent writes, you should switch to a client-server database that offers finer-grained locking of writes, such as PostgreSQL.
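If switching databases isn't possible right away, two SQLite settings often reduce "database is locked" errors (a mitigation sketch, not a fix for the fundamental serialization): a longer busy timeout, and WAL journal mode so readers no longer queue behind the writer:

import sqlite3

# Each thread should open its own connection to the same file.
conn = sqlite3.connect("data.db", timeout=30)  # wait up to 30 s for a lock instead of failing fast
conn.execute("PRAGMA journal_mode=WAL")  # readers proceed alongside the single writer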