Shared Cassandra Session loses connection and app must be restarted - python

I am working on a Flask web service which uses Cassandra as a datastore.
The web service is set up to use a shared session for Cassandra which, I have read, automatically handles connection pooling.
When I deploy the application, everything works fine for a while, but eventually the session loses all C* hosts in the cluster and refuses to attempt to reconnect. It simply errors out with the message: Unable to complete the operation against any hosts.
What can I do to either have the session automatically attempt to reconnect to the cluster or detect that the session is broken so I can shut it down and create a new session?

You should not need to create a new session. Assuming you are using the datastax python-driver, the driver maintains a 'control connection' which subscribes to node up/down events. If the control connection is lost, it will reconnect to another host in the cluster. It would be useful to turn on debug logging, which will reveal why the nodes in your cluster are being marked down.
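As a sketch of what turning on debug logging might look like (the DataStax python-driver logs under the "cassandra" namespace, e.g. cassandra.cluster and cassandra.pool):

```python
import logging

# Route log output to stderr at DEBUG level so host up/down events and
# reconnection attempts from the driver become visible.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# "cassandra" is the top-level logger namespace used by the DataStax
# python-driver; narrowing to it avoids debug noise from other libraries.
logging.getLogger("cassandra").setLevel(logging.DEBUG)
```

With this in place, watch the logs for messages explaining why each host is marked down (heartbeat failures, connection timeouts, etc.).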

Related

python server executing DB/Binance connections every time the system is accessed

I am using Python and Flask as part of a server. When the server starts up, it connects to an Oracle database and to the Binance Crypto Exchange server.
The server starts in either TEST or PRODUCTION mode. To determine the mode at startup, I take an input variable and use it to decide whether to connect to the PROD configuration (which actually executes trades) or to the TEST system (which is more like a sandbox).
Whenever I make a call to the server (ex: http://<myservername.com>:80/), it seems as though the connection code is executed on each call. So, if I hit http://<myservername.com>:80/ 7 times, the code that connects to the database (and the code that connects to the Binance server) is EXECUTED SEVEN times.
Question: Is there a place where one can put the connection code so that it is executed ONCE when the server is started up?
I saw the following:
https://damyan.blog/post/flask-series-structure/
How to execute a block of code only once in flask?
Flask at first run: Do not use the development server in a production environment
and tried using the solution in #2
    @app.before_first_request
    def do_something_only_once():
The code was changed so it had the following below (connection to the Binance server is not shown):
    @app.before_first_request
    def do_something_only_once():
        system_access = input(" Enter the system access to use \n-> ")
        if system_access.upper() == "TEST":
            global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_TEST
            print(" connected to TEST database")
        if system_access.upper() == "PROD":
            global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_PROD
            print(" connected to PRODUCTION database")
When starting the server up, I never get an opportunity to enter "TEST" (in order to connect to the "TEST" database). In fact, the code under

    @app.before_first_request
    def do_something_only_once():

is never executed at all.
Question: How can one fix the code so that, when the server is started, the code responsible for connecting to the Oracle DB server and to the Binance server is executed ONCE, and not on every access via http://<myservername.com>:80/ ?
Any help, hints or advice would be greatly appreciated
TIA
@Christopher Jones
Thanks for the response.
What I was hoping to do was to have this Flask server implemented as a Docker process. The idea is to start several of these processes at once, with the group of Docker processes managed by some kind of Dispatcher. When an http://myservername.com:80/ request is executed, the connection would first go to the Dispatcher, which would forward it to a Docker process that was "free" for usage. My thought was that Docker Swarm (or something under Kubernetes) might work in this fashion(?): one process gets one connection to the DB, and the Dispatcher is responsible for distributing work.
I come from an ERP background. The existence of the Oracle connection pool was known, but the decision was made to move most of the work to the OS process level (in that if one ran "ps -ef | grep <process_name>" they would see all of the processes that the "dispatcher" would forward work to). So I was looking for something similar - old habits die hard ...
Most Flask apps will be called by more than one user so a connection pool is important. See How to use Python Flask with Oracle Database.
You can open a connection pool at startup:
    if __name__ == '__main__':
        # Start a pool of connections
        pool = start_pool()
        ...

(where start_pool() calls cx_Oracle.SessionPool() - see the link for the full example)
Then your routes borrow a connection as needed from the pool:
    connection = pool.acquire()
    cursor = connection.cursor()
    cursor.execute("select username from demo where id = :idbv", [id])
    r = cursor.fetchone()
    return (r[0] if r else "Unknown user id")
Even if you only need one connection, a pool of one connection can be useful because it gives you some Oracle high availability features that holding a standalone connection open for the duration of the application won't give.
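On the input() problem specifically: before_first_request handlers run just before the first request is handled, not at server startup, and production WSGI servers generally don't give the worker an interactive terminal. A minimal sketch of an alternative, reading the mode from a hypothetical SYSTEM_ACCESS environment variable at import time instead of prompting:

```python
import os

# Read the mode once, at import time, so it runs exactly once per process,
# before any request, and needs no interactive terminal. SYSTEM_ACCESS is a
# hypothetical variable name chosen for this example; defaulting to TEST is
# a deliberately safe fallback.
SYSTEM_ACCESS = os.environ.get("SYSTEM_ACCESS", "TEST").upper()

if SYSTEM_ACCESS not in ("TEST", "PROD"):
    raise ValueError("SYSTEM_ACCESS must be TEST or PROD")

# The same value can then select the DB configuration at startup,
# e.g. connect to the sandbox when SYSTEM_ACCESS == "TEST".
```

You would start the server with, say, SYSTEM_ACCESS=PROD as part of the launch command, and the connection code at module level (or in an application factory) then runs once per worker process rather than once per request.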

OPC UA zombie connection

I recently started working with Yaskawa's OPC UA Server provided on its robot's controller.
I'm connecting to the server via Python's OPCUA library. Everything works well, but when my code crashes or I close the terminal without disconnecting from the server, I cannot connect to it again.
I receive an error from library, saying:
The server has reached its maximum number of sessions.
And the only way to solve this is to restart the controller by turning it off and on again.
The server's documentation says that the maximum number of sessions is 2.
Is there a way to clear the connection to the server without restarting the machine?
The server keeps track of the client session and doesn't know that your client crashed.
But the client can define a short enough SessionTimeout, after which the server can remove the crashed session.
The server may have some custom configuration where you can define the maximum number of sessions it supports. Two sessions is very limited, but if the hardware is constrained, that may be the best you can get. See the product documentation about that.

Best solution for frequently access database in python for network application

I have a network server written in Python to which clients connect. It makes a new thread for every new client connection, which lives while the client is connected.
While a client is connected, the server keeps running DB queries to pick up any updates to that client made from the admin portal.
I was opening a DB connection for every thread and keeping it open while the client was connected, but this becomes a problem when 1-2k clients are connected and the database has 1-2k active connections.
I then changed it to close the DB connection and reconnect on demand, but now with 2-3k clients the server makes a lot of connects and disconnects with the DB.
I tried the MySQL DB pool, but its maximum pool size of 32 means that is not a solution for me.
Anyone have any other idea or solution?
The problem of having too many clients connected at the same time is not something you can resolve with code alone. When your app gets bigger you must run multiple instances of the same Python server on different machines and put a Load Balancer in front of them. The Load Balancer acts like a forwarder: your client connects to it, and the Load Balancer forwards the data to one of the instances of your Python server.
If you want to learn more about load balancing, here are some links:
https://iri-playbook.readthedocs.io/en/feat-docker/loadbalancer.html
https://www.nginx.com/resources/glossary/load-balancing/
Now for the database: instead of creating a connection for every client, you could use a single database connection and share it between threads.
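Sharing one connection across threads only works if access to it is serialized; a minimal sketch using a lock, with sqlite3 standing in for the real database (the table and fetch_status helper are invented for the example):

```python
import sqlite3
import threading

# One shared connection guarded by a lock. check_same_thread=False lets
# the sqlite3 connection cross threads; the lock serializes access so
# queries from different client threads don't interleave.
conn = sqlite3.connect(":memory:", check_same_thread=False)
db_lock = threading.Lock()

conn.execute("CREATE TABLE clients (id INTEGER, status TEXT)")
conn.execute("INSERT INTO clients VALUES (1, 'active')")
conn.commit()

def fetch_status(client_id):
    # Only one thread at a time touches the shared connection.
    with db_lock:
        row = conn.execute(
            "SELECT status FROM clients WHERE id = ?", (client_id,)
        ).fetchone()
    return row[0] if row else None

# Simulate several client-handler threads polling concurrently.
threads = [threading.Thread(target=fetch_status, args=(1,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The trade-off is that the lock makes all database work sequential, so under heavy load a small pool of connections (rather than exactly one) usually scales better.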

Hold oracle DB connection between bash and python

I have a bash script which calls a Python script to create an Oracle DB connection using cx_Oracle. I want to use the same connection object from the bash script later as well, but whenever the Python script ends, the connection object is lost.
Can anyone suggest how to hold on to the connection object for further use in bash, or whether we can pass the connection object between Python and bash?
You should reconsider your architecture and use some kind of service or web app that remains running.
Connections are made up of (i) a cx_Oracle data structure, (ii) a network connection to the database, and (iii) a database server process.
Once the Python process is closed, then all three are closed by default. So you lose all state like the statement cache, and any session settings like NLS date format. If you enable Database Resident Connection Pooling (DRCP) - see the manual - then the database server process will remain available for re-use which saves some overhead, however the next process will still have to re-authenticate.
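If a full web app is more than you need, the "service that remains running" idea can be sketched as a single long-lived Python worker that bash starts once and pipes statements into; sqlite3 stands in for cx_Oracle here, and serve() is a hypothetical helper name invented for the example:

```python
import io
import sqlite3

def serve(stream, conn):
    """Read SQL statements line by line from a stream, run each one on the
    single long-lived connection, and collect the results. The connection
    stays open across all statements, preserving session state."""
    results = []
    for line in stream:
        line = line.strip()
        if not line or line == "quit":
            break
        results.append(conn.execute(line).fetchall())
    return results

# In the real setup, bash would start this process once and write
# statements to its stdin; a StringIO stands in for stdin here.
conn = sqlite3.connect(":memory:")
output = serve(io.StringIO("select 41 + 1\nquit\n"), conn)
```

The point is that the connection lives as long as the worker process, so bash scripts talk to the worker (via a pipe, socket, or HTTP) instead of spawning a new Python process, and thus a new connection, for every statement.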

should i open db connection for every thread?

I am developing a kind of chat app.
The server side is Python and PostgreSQL; the client side is Xcode (iOS) and Android (Java), with Web as the next phase.
The server program, written in Python, is always running on Ubuntu Linux, and I create a thread for every client connection. I haven't decided how the DB operations should work:

1. Should I create one general DB connection and use it for every client's DB operations (insert, update, delete, etc.)? In that case I guess I would run into lock issues in the future (e.g. when I try to get the chat message list while another user is inserting).
2. Should I create a DB connection when each client connects to my server? In that case there would be too many connections, which would give me performance issues in the future.
3. Should I create a DB connection before each DB operation? Then there would be a huge number of connection open and close operations.

What's your opinion? What's the best way?
The best way would be to maintain a pool of database connections in the server side.
For each request, use the available connection from the pool to do database operations and release it back to the pool once you're done.
This way you will not be creating new db connections for each request, which would be a costly operation.
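A pool along these lines can be sketched with the standard library; sqlite3 in-memory connections stand in for PostgreSQL ones, and the class and method names are invented for this example (real code might use psycopg2's built-in connection pool instead):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal fixed-size connection pool built on a thread-safe queue."""

    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False lets a connection be used by
            # whichever thread borrows it.
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        # Blocks until a connection is free, so the pool size caps
        # concurrent database use.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=2)

# A request handler borrows a connection and always returns it.
conn = pool.acquire()
try:
    value = conn.execute("select 1").fetchone()[0]
finally:
    pool.release(conn)
```

Releasing in a finally block (or a context manager) matters: a connection that is never returned is lost to the pool for the life of the process.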
