Python server executing DB/Binance connections every time the system is accessed

I am using Python and Flask as part of a server. When the server starts up, it connects to an Oracle database and to the Binance crypto exchange server.
The server starts in either TEST or PRODUCTION mode. In order to determine which mode to use at startup, I take an input variable and then use it to decide whether to connect to the PROD configuration (which would actually execute trades) or the TEST system (which is more like a sandbox).
Whenever I make a call to the server (ex: http://<myservername.com>:80/ ), it seems as though the connection code is executed with each call. So, if I type in http://<myservername.com>:80/ 7 times, the code that connects to the database (and the code that connects to the Binance server) is EXECUTED SEVEN times.
Question: Is there a place where one can put the connection code so that it is executed ONCE when the server is started up?
I saw the following:
https://damyan.blog/post/flask-series-structure/
How to execute a block of code only once in flask?
Flask at first run: Do not use the development server in a production environment
and tried using the solution in #2
#app.before_first_request
def do_something_only_once():
The code was changed to the following (the connection to the Binance server is not shown):
#app.before_first_request
def do_something_only_once():
    system_access = input(" Enter the system access to use \n-> ")
    if system_access.upper() == "TEST":
        global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_TEST
        print(" connected to TEST database")
    if system_access.upper() == "PROD":
        global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_PROD
        print(" connected to PRODUCTION database")
When starting the server up, I never get an opportunity to enter "TEST" (in order to connect to the "TEST" database). In fact, the code under:
#app.before_first_request
def do_something_only_once():
is never executed at all.
Question: How can one fix the code so that, when the server is started, the code responsible for connecting to the Oracle DB server and to the Binance server is executed only ONCE, and not every time the server is accessed via http://<myservername.com>:80/ ?
Any help, hints or advice would be greatly appreciated
TIA
@Christopher Jones
Thanks for the response.
What I was hoping to do was to have this Flask server implemented as a Docker process. The idea is to start several of these processes at one time. The group of Docker processes would then be managed by some kind of dispatcher. When an http://myservername.com:80/ request was executed, the connection information would first go to the dispatcher, which would forward it to a Docker process that was "free" for usage. My thought was that Docker Swarm (or something under Kubernetes) might work in this fashion(?): one process gets one connection to the DB, and the dispatcher would be responsible for distributing work.
I came from an ERP background. The existence of the Oracle connection pool was known, but we elected to move most of the work to the OS process level (in that if one ran "ps -ef | grep <process_name>" they would see all of the processes that the "dispatcher" would forward work to). So I was looking for something similar - old habits die hard ...

Most Flask apps will be called by more than one user so a connection pool is important. See How to use Python Flask with Oracle Database.
You can open a connection pool at startup:
if __name__ == '__main__':
    # Start a pool of connections
    pool = start_pool()
    ...
(where start_pool() calls cx_Oracle.SessionPool() - see the link for the full example)
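For reference, here is a minimal sketch of what such a start_pool() might look like, assuming cx_Oracle; the user, password, connect string and pool sizes are placeholders rather than the linked article's exact values:

import cx_Oracle

def start_pool():
    # Create the session pool once at startup; threaded=True lets the pool be
    # shared safely across Flask's worker threads
    pool = cx_Oracle.SessionPool(user="demo", password="demo_pw",
                                 dsn="localhost/orclpdb1",
                                 min=2, max=5, increment=1,
                                 threaded=True,
                                 getmode=cx_Oracle.SPOOL_ATTRVAL_WAIT)
    return pool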
Then your routes borrow a connection as needed from the pool:
    connection = pool.acquire()
    cursor = connection.cursor()
    cursor.execute("select username from demo where id = :idbv", [id])
    r = cursor.fetchone()
    return (r[0] if r else "Unknown user id")
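Put together, a route using the pool might look like the hedged sketch below; the route path and the explicit release are illustrative and not necessarily the tutorial's exact code:

@app.route('/user/<int:id>')
def show_username(id):
    connection = pool.acquire()
    try:
        cursor = connection.cursor()
        cursor.execute("select username from demo where id = :idbv", [id])
        r = cursor.fetchone()
        return (r[0] if r else "Unknown user id")
    finally:
        # hand the connection back to the pool instead of closing it
        pool.release(connection)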
Even if you only need one connection, a pool of one connection can be useful because it gives some Oracle high availability features that holding open a standalone connection for the duration of the application won't give.
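As a sketch (parameter values are placeholders), a fixed pool of one connection can be created like this:

# min == max with increment=0 gives a pool that always holds exactly one connection
pool = cx_Oracle.SessionPool(user="username", password=pw,
                             dsn="localhost/orclpdb1",
                             min=1, max=1, increment=0)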

Related

Python long idle connection in cx_Oracle getting: DPI-1080: connection was closed by ORA-3113

I have a long-running Python executable.
It opens an Oracle connection using cx_Oracle on start.
After more than 45-60 minutes of idle connection, it gets this error.
Any idea, or is special setup required in cx_Oracle?
Instead of leaving a connection unused in your application, consider closing it when it isn't needed, and then reopening when it is needed. Using a connection pool would be recommended, since pools can handle some underlying failures such as yours and will give you a usable connection.
At application initialization start the pool once:
pool = cx_Oracle.SessionPool("username", pw,
                             "localhost/orclpdb1", min=0, max=4, increment=1)
Then later get the connection and hold it only when you need it:
with pool.acquire() as connection:
    cursor = connection.cursor()
    for result in cursor.execute(
            """select sys_context('userenv','sid') from dual"""):
        print(result)
The end of the with block will release the connection back to the pool. It won't be closed. The next time acquire() is called the pool can check whether the connection is still usable. If it isn't, it will give you a new one. Because of these checks, the pool is useful even if you only have one connection.
See my blog post Always Use Connection Pools — and How, most of which applies to cx_Oracle.
But if you don't want to change your code, then try setting an Oracle Network parameter EXPIRE_TIME as shown in the cx_Oracle documentation. This can be set in various places. In C-based Oracle clients like cx_Oracle:
With 18c client libraries it can be added as (EXPIRE_TIME=n) to the DESCRIPTION section of a connect descriptor
With 19c client libraries it can additionally be used via Easy Connect: host/service?expire_time=n (see the sketch after this list)
With 21c client libraries it can additionally be used in a client-side sqlnet.ora file
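As an illustration of the Easy Connect form (assuming a 19c or later client; the host, service name and two-minute interval are placeholders):

# ask the client to send a keep-alive probe every 2 minutes so the idle
# connection is not dropped by intermediate network devices
connection = cx_Oracle.connect("username", pw,
                               "dbhost.example.com/orclpdb1?expire_time=2")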
This may not always help, depending what is closing the connection.
Fundamentally you should/could fix the root cause, which could be a firewall timeout, or a DBA-imposed user resource or DB idle time limit.

Hold oracle DB connection between bash and python

I have a bash script which calls a Python script to create the Oracle DB connection using cx_Oracle. I want to use the same connection object from the bash script later as well. But whenever the Python script ends, the connection object is lost.
Can anyone help me hold on to the connection object for further use in bash, or can we pass the connection object back and forth between Python and bash?
You should reconsider your architecture and use some kind of service or web app that remains running.
Connections are made up of (i) a cx_Oracle data structure (ii) a network connection to the database (iii) a database server process.
Once the Python process is closed, then all three are closed by default. So you lose all state like the statement cache, and any session settings like NLS date format. If you enable Database Resident Connection Pooling (DRCP) - see the manual - then the database server process will remain available for re-use which saves some overhead, however the next process will still have to re-authenticate.
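For illustration, a DRCP connection in cx_Oracle might look like the sketch below; the host, the ":pooled" service suffix and the connection class are placeholders and assume DRCP has been started on the database:

import cx_Oracle

# the ":pooled" suffix requests a DRCP pooled server; cclass and purity control
# how much session state is reused between uses of that server process
connection = cx_Oracle.connect("username", pw,
                               "dbhost.example.com/orclpdb1:pooled",
                               cclass="MYAPP",
                               purity=cx_Oracle.ATTR_PURITY_SELF)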

How to start clients from server itself in python?

I'm developing an automation framework with little manual intervention.
There is one server and 3 client machines.
What the server does is send a command to each client, one by one, get the output of that command, and store it in a file.
But to establish the connection I have to manually start the clients on the different machines from the command line. Is there a way for the server itself to send a signal or something to start a client, send the command, store the output, and then start the next client, and so on, in Python?
Edited.
After the suggestion below, I used the spur module:
import spur

ss = spur.SshShell(hostname="172.16.6.58",
                   username='username',
                   password='some_password',
                   shell_type=spur.ssh.ShellTypes.minimal,
                   missing_host_key=spur.ssh.MissingHostKey.accept)
res = ss.run(['python', 'clientsock.py'])
I'm trying to start the clientsock.py file on one of the client machines (the server is already running on the current machine), but it just hangs there and nothing happens. What am I missing here?

How to call a function with a class object as a parameter in string form

class host_struct(object):
    host_id = dict()
    host_license_id = dict()

def m(a):
    eval(a)

host = host_struct()
m('host.host_id={1:1}')
print host
The above code doesn't work and is a sample of what I am trying to accomplish. I am trying to solve a problem where I need to call a function with a class object as a string, yet in the function manipulate the object as a class.
Here is my problem: I have a connection pooler/broker module that maintains a persistent connection to the server. The server sets an inactivity TTL of 30 minutes on all connections. So every 29 minutes the broker needs to touch the server to maintain the persistent connection. At the same time the connection broker needs to process client requests, which it sends to the server, and when the server responds, send the server's reply to the client.
The communications to the server are via a connection class that has many complex objects. Allowing the client modules to directly manipulate the class would bypass the connection broker entirely, which would result in the server terminating the connection due to the inactivity TTL.
Is this possible? Is there a better way to address this problem?
Here is some additional background. I am opening a connection to VMware vCenter. To initiate the connection, I instantiate the connection class, then call a connection method. Currently in my client programs, I am doing all of this now. However, I am running into a problem with vCenter and need to connect once when I start the program and use the same connection for the entire run. Currently I am opening a connection to vCenter, doing my work, closing the connection, sleeping for a period of time, and then repeating the process. This continual connect/disconnect is causing issues. So I wrote a test to see if I could address the issues by maintaining a persistent connection, and I was successful.
vcenter = VIServer()
vcenter.connect(*config_values)
At this point, the vcenter object is connected to the server. There are several method calls I need to make to query certain objects. Here are 2 examples of the many I use:
vms = vcenter._retrieve_properties_traversal(property_names=vm_objects,obj_type='VirtualMachine')
or
api_version = vcenter.get_api_version()
The first line retrieves specific VM objects from the server and the second gets the API version. I would like to call these methods from the connection broker because it will be the one keeping the connection to vCenter open.
So in my connection broker I would like to pass 'vcenter.get_api_version()' as a string argument and have the connection broker execute api = vcenter.get_api_version().
Does this help to clarify?
Use exec instead of eval. Example:
class host_struct:  # no need for parentheses if not inheriting from something besides object
    host_id = {}  # use of {} is more idiomatic than dict()
    host_license_id = {}

def m(a):
    exec a

host = host_struct()
m('host.host_id.update({1:1})')  # updating will add to the existing dict instead of replacing it
print host.host_id[1]
Running this script produces the expected output of 1.
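As an aside, a hedged alternative for the connection-broker case is to avoid exec/eval strings entirely and dispatch by method name with getattr(); the broker_call helper below is hypothetical, and the vcenter calls just mirror the examples from the question:

def broker_call(conn, method_name, *args, **kwargs):
    # Look up the named method on the live connection object and invoke it
    method = getattr(conn, method_name)
    return method(*args, **kwargs)

api_version = broker_call(vcenter, 'get_api_version')
vms = broker_call(vcenter, '_retrieve_properties_traversal',
                  property_names=vm_objects, obj_type='VirtualMachine')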

Disconnecting from host with Python Fabric when using the API

The website says:
Closing connections: Fabric's connection cache never closes connections itself – it leaves this up to whatever is using it. The fab tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not).
Library users will need to ensure they explicitly close all open connections before their program exits, though we plan to make this easier in the future.
I have searched everywhere, but I can't find out how to disconnect or close the connections. I am looping through my hosts and setting env.host_string. It is working, but hangs when exiting. Any help on how to close? Just to reiterate, I am using the library, not a fabfile.
If you don't want to have to iterate through all open connections, fabric.network.disconnect_all() is what you're looking for. The docstring reads
"""
Disconnect from all currently connected servers.
Used at the end of fab's main loop, and also intended for use by library users.
"""
The main.py for fabric has this:
from fabric.state import commands, connections

for key in connections.keys():
    if state.output.status:
        print "Disconnecting from %s..." % denormalize(key),
    connections[key].close()
fabric.state.connections is a dict whose values are paramiko.SSHClient objects.
So off I go to close those.
You can disconnect from a specific connection, by host name, using the following code snippet (with fabric 1.10.1):
def disconnect(host):
    host = host or fabric.api.env.host_string
    if host and host in fabric.state.connections:
        fabric.state.connections[host].get_transport().close()
from fabric.network import disconnect_all
disconnect_all()
