Hold Oracle DB connection between bash and Python

I have a bash script that calls a Python script to create an Oracle DB connection using cx_Oracle. I want to reuse that same connection object from the bash script later as well, but whenever the Python script ends, the connection object is lost.
Can anyone help me keep the connection object alive for further use from bash, or can the connection object be passed between Python and bash?

You should reconsider your architecture and use some kind of service or web app that remains running.
Connections are made up of (i) a cx_Oracle data structure, (ii) a network connection to the database, and (iii) a database server process.
Once the Python process exits, all three are closed by default. So you lose all state, such as the statement cache and any session settings like the NLS date format. If you enable Database Resident Connection Pooling (DRCP) - see the manual - then the database server process will remain available for re-use, which saves some overhead; however, the next process will still have to re-authenticate.
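One way to follow that advice is a small, long-running Python process that owns the connection, with bash calling it over HTTP. A minimal sketch, assuming a local-only service; the credentials, DSN, and response shape are placeholders, not a production setup:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import cx_Oracle

# The long-running process owns the connection, so its session state
# survives between bash invocations. Credentials and DSN are placeholders.
connection = cx_Oracle.connect("username", "password", "localhost/orclpdb1")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        cursor = connection.cursor()
        cursor.execute("select sys_context('userenv','sid') from dual")
        body = json.dumps({"sid": cursor.fetchone()[0]}).encode()
        cursor.close()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()

The bash script then becomes a curl call, e.g. curl http://127.0.0.1:8000/, and the connection and its session state persist between calls.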

Related

Python long idle connection in cx_Oracle getting: DPI-1080: connection was closed by ORA-3113

I have a long-running Python executable.
It opens an Oracle connection using cx_Oracle at start.
After more than 45-60 minutes of idle time, the connection gets this error.
Any idea, or is special setup required in cx_Oracle?
Instead of leaving a connection unused in your application, consider closing it when it isn't needed, and then reopening when it is needed. Using a connection pool would be recommended, since pools can handle some underlying failures such as yours and will give you a usable connection.
At application initialization start the pool once:
import cx_Oracle

pool = cx_Oracle.SessionPool("username", pw,
                             "localhost/orclpdb1", min=0, max=4, increment=1)
Then later get the connection and hold it only when you need it:
with pool.acquire() as connection:
    cursor = connection.cursor()
    for result in cursor.execute(
            """select sys_context('userenv','sid') from dual"""):
        print(result)
The end of the with block will release the connection back to the pool. It won't be closed. The next time acquire() is called, the pool can check whether the connection is still usable. If it isn't, it will give you a new one. Because of these checks, the pool is useful even if you only have one connection.
See my blog post Always Use Connection Pools — and How, most of which applies to cx_Oracle.
But if you don't want to change your code, then try setting an Oracle Network parameter EXPIRE_TIME as shown in the cx_Oracle documentation. This can be set in various places. In C-based Oracle clients like cx_Oracle:
With 18c client libraries it can be added as (EXPIRE_TIME=n) to the DESCRIPTION section of a connect descriptor
With 19c client libraries it can additionally be used via Easy Connect: host/service?expire_time=n.
With 21c client libraries it can additionally be used in a client-side sqlnet.ora file
This may not always help, depending on what is closing the connection.
Fundamentally you should fix the root cause, which could be a firewall timeout, or a DBA-imposed user resource or DB idle time limit.
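For illustration, a minimal sketch of the Easy Connect form with 19c or later client libraries; the host, service name, and credentials are placeholders, and expire_time is in minutes:

import cx_Oracle

# expire_time=2 asks the Oracle client libraries to send a keepalive
# probe every 2 minutes of inactivity.
connection = cx_Oracle.connect(
    "username", "password", "dbhost.example.com/orclpdb1?expire_time=2")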

How does the Postgres server know to keep a database connection open

I wonder how the Postgres server decides to close a DB connection if I forget to close it on the Python side.
Does the Postgres server send a ping to the client? From my understanding, this is not possible.
PostgreSQL indeed does something like that, although it is not a ping.
PostgreSQL uses a TCP feature called keepalive. Once enabled for a socket, the operating system kernel will regularly send keepalive messages to the other party (the peer), and if it doesn't get an answer after a couple of tries, it closes the connection.
The default timeouts for keepalive are pretty long, in the vicinity of two hours. You can configure the settings in PostgreSQL, see the documentation for details.
The default values and possible values vary according to the operating system used.
There is a similar feature available for the client side, but it is less useful and not enabled by default.
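If you do want keepalives on the client side, libpq exposes them as connection parameters. A minimal sketch using psycopg2; the DSN and timing values are illustrative:

import psycopg2

# keepalives_* values are in seconds; the DSN is a placeholder.
conn = psycopg2.connect(
    "dbname=mydb user=me host=dbhost.example.com",
    keepalives=1,            # enable TCP keepalive on the socket
    keepalives_idle=60,      # first probe after 60 seconds of idleness
    keepalives_interval=10,  # retry every 10 seconds
    keepalives_count=3,      # give up after 3 unanswered probes
)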
When your script quits, your connection will close and the server will clean it up accordingly. Likewise, in garbage-collected languages like Python, when you stop using the connection and it falls out of scope, it will often be closed and cleaned up for you.
It is possible to write code that never releases these resources properly and just perpetually creates new handles. That can be problematic if nothing on the server side kills connections after some period of idle time. Postgres doesn't do this by default, though it can be configured to; MySQL does.
In short, Postgres will keep a database connection open until you kill it, either explicitly, such as via a close call, or implicitly, such as the handle falling out of scope and being deleted by the garbage collector.
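Since relying on the garbage collector is fragile, closing explicitly is safer. A minimal sketch using psycopg2 with a placeholder DSN:

import contextlib
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
with contextlib.closing(conn):   # guarantees conn.close() on exit
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
# the server backend for this connection is now free to shut down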

Is it a good idea to use a database connection directly through a Python CGI script?

Is it a good idea to use a database connection directly from a Python CGI script that will be called from our client side?
If it is, then every time our server-side script is called it will try to connect to the database.
So if we have 100,000 clients calling our server script, each call will connect to the database and do its job (which may be similar for all clients)?
If it's not, then how should we tackle such situations?
Thanks

Python MySQL database connection error

I am trying to access a remote database from one Linux server on another, connected via LAN, but it is not working. After some time it raises this error:
_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")
The error is random and can be raised at any time.
I create a new db object in every method, and I close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem?
This issue is due to too many pending requests on the remote database. In this situation MySQL closes the connection to the running script.
To work around it, put
time.sleep(sec)  # sec is the number of seconds to pause the script
between batches of requests. This avoids the issue without moving the database to the local server or doing any other administrative task on MySQL.
My solution was to batch more queries into a single commit when they were insert queries.
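A minimal sketch of that batching approach, assuming MySQLdb and a hypothetical events table; the host, credentials, and values are illustrative:

import MySQLdb

db = MySQLdb.connect(host="192.168.0.101", user="me", passwd="secret", db="mydb")
cursor = db.cursor()
rows = [("a", 1), ("b", 2), ("c", 3)]
cursor.executemany(
    "INSERT INTO events (name, value) VALUES (%s, %s)", rows)
db.commit()  # one commit for the whole batch instead of one per insert
cursor.close()
db.close()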

Shared Cassandra Session loses connection and app must be restarted

I am working on a Flask web service which uses Cassandra as a datastore.
The web service is set up to use a shared session for Cassandra which, I have read, automatically handles connection pooling.
When I deploy the application, everything works fine for a while, but after a certain amount of time the session loses all C* hosts in the cluster and refuses to attempt to reconnect. It simply errors out with the message: Unable to complete the operation against any hosts.
What can I do to either have the session automatically attempt to reconnect to the cluster or detect that the session is broken so I can shut it down and create a new session?
You should not need to create a new session. Assuming you are using the datastax python-driver, the driver maintains a 'control connection' which subscribes to node up/down events. If the control connection is lost, it will reconnect to another host in the cluster. It would be useful to turn on debug logging, which will reveal why the nodes in your cluster are being marked down.
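A minimal sketch of turning on that debug logging, assuming the DataStax python-driver's standard "cassandra" logger name:

import logging

logging.basicConfig()  # send log records to stderr
# Show driver internals, including node up/down events and reconnect attempts.
logging.getLogger("cassandra").setLevel(logging.DEBUG)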
