How to implement a MySQL-client-like REPL in Python

I want to use Fabric and Python's raw_input builtin to implement a MySQL-client-like console.
Everything works for a single statement. But with multiline SQL, Fabric needs some time to process each statement (it SSHes to the remote server and echoes the SQL to the mysql client there), and meanwhile raw_input loses the rest of the pasted lines. In the real mysql console you can paste multiline SQL and every single statement is executed perfectly.
Why not use the mysql client directly? There is a gateway in front of the servers, we want a simple alias to avoid typing the connection parameters every time, and there are some post jobs to run after executing the SQL (posting the output of the execution, etc.).
My guess is that Fabric's run is reading sys.stdin and losing the input, but I have not managed to execute run in an independent thread.
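One workaround (a sketch, not tested against this exact setup) is to buffer lines locally until a statement ends with ";" and to keep Fabric away from stdin entirely. The code below assumes Fabric 2.x, whose run() accepts in_stream=False to stop it reading sys.stdin; the host name, mysql invocation, and post-processing hook are placeholders:

    import shlex
    from fabric import Connection

    def sql_repl(host="gateway.example.com"):   # placeholder gateway host
        conn = Connection(host)
        buf = []
        while True:
            try:
                line = input("mysql> " if not buf else "    -> ")  # raw_input on Python 2
            except EOFError:
                break
            buf.append(line)
            statement = "\n".join(buf).strip()
            if statement.endswith(";"):
                # in_stream=False keeps Fabric from consuming sys.stdin,
                # so the rest of a pasted block stays available to input().
                result = conn.run("mysql -e %s" % shlex.quote(statement),
                                  in_stream=False, warn=True)
                # ... post jobs on result.stdout would go here ...
                buf = []

    if __name__ == "__main__":
        sql_repl()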

Related

Python server executing DB/Binance connections every time the system is accessed

I am using Python and Flask as part of a server. When the server starts up, it connects to an Oracle database and to a Binance crypto exchange server.
The server starts in either TEST or PRODUCTION mode. To determine which mode to use at startup, I take an input variable and use it to decide whether to connect to the PROD configuration (which would actually execute trades) or to the TEST system (which is more like a sandbox).
Whenever I make a call to the server (e.g. http://<myservername.com>:80/), the connection code seems to be executed on each call. So if I request http://<myservername.com>:80/ seven times, the code that connects to the database (and the code that connects to the Binance server) is executed SEVEN times.
Question: Is there a place where one can put the connection code so that it is executed ONCE when the server is started up?
I saw the following:
https://damyan.blog/post/flask-series-structure/
How to execute a block of code only once in flask?
Flask at first run: Do not use the development server in a production environment
and tried using the solution in #2
@app.before_first_request
def do_something_only_once():
The code was changed to the following (connection to the Binance server is not shown):
@app.before_first_request
def do_something_only_once():
    system_access = input(" Enter the system access to use \n-> ")
    if system_access.upper() == "TEST":
        global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_TEST
        print(" connected to TEST database")
    if system_access.upper() == "PROD":
        global_STARTUP_DB_SERVER_MODE = t_system_connect.DBSystemConnection.DB_SERVER_MODE_PROD
        print(" connected to PRODUCTION database")
When starting the server up, I never get an opportunity to enter "TEST" (in order to connect to the "TEST" database). In fact, the code under
@app.before_first_request
def do_something_only_once():
is never executed at all.
Question: How can one fix the code so that, when the server is started, the code responsible for connecting to the Oracle DB server and to the Binance server is executed only ONCE, and not every time the server is accessed via http://<myservername.com>:80/?
Any help, hints or advice would be greatly appreciated
TIA
@Christopher Jones
Thanks for the response.
What I was hoping to do was to have this Flask server implemented as a Docker process. The idea is to start several of these processes at one time. The group of Docker processes would then be managed by some kind of dispatcher. When an http://myservername.com:80/ request was executed, the connection information would first go to the dispatcher, which would forward it to a Docker process that was "free". My thought was that Docker Swarm (or something under Kubernetes) might work in this fashion(?): one process gets one connection to the DB, and the dispatcher is responsible for distributing work.
I come from an ERP background. The existence of the Oracle connection pool was known, but the choice was made to move most of the work to the OS process level (so that if one ran "ps -ef | grep <process_name>" one would see all of the processes that the "dispatcher" forwards work to). So I was looking for something similar - old habits die hard ...
Most Flask apps will be called by more than one user, so a connection pool is important. See How to use Python Flask with Oracle Database.
You can open a connection pool at startup:
if __name__ == '__main__':
    # Start a pool of connections
    pool = start_pool()
    ...
(where start_pool() calls cx_Oracle.SessionPool() - see the link for the full example)
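For illustration, a minimal start_pool() along the lines of that example might look like the sketch below; the credentials, DSN, and pool sizing are placeholders, not values from the question:

    import cx_Oracle

    def start_pool():
        # threaded=True matters because Flask serves requests from
        # multiple threads; all credentials here are placeholders.
        return cx_Oracle.SessionPool(
            user="demo",
            password="secret",
            dsn="dbhost.example.com/orclpdb1",
            min=2, max=5, increment=1,
            threaded=True,
            getmode=cx_Oracle.SPOOL_ATTRVAL_WAIT,
        )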
Then your routes borrow a connection as needed from the pool:
    connection = pool.acquire()
    cursor = connection.cursor()
    cursor.execute("select username from demo where id = :idbv", [id])
    r = cursor.fetchone()
    return (r[0] if r else "Unknown user id")
Even if you only need one connection, a pool of one connection can be useful because it gives some Oracle high availability features that holding open a standalone connection for the duration of the application won't give.

Hold an Oracle DB connection between bash and Python

I have a bash script which calls a Python script to create an Oracle DB connection using cx_Oracle. I want to use the same connection object from the bash script later as well, but whenever the Python script ends, the connection object is lost.
Can anyone help me keep the connection object alive for further use from bash, or can the connection object be passed between Python and bash and vice versa?
You should reconsider your architecture and use some kind of service or web app that remains running.
Connections are made up of (i) a cx_Oracle data structure (ii) a network connection to the database (iii) a database server process.
Once the Python process exits, all three are closed by default. So you lose all state such as the statement cache, and any session settings like the NLS date format. If you enable Database Resident Connection Pooling (DRCP) - see the manual - then the database server process will remain available for reuse, which saves some overhead; however, the next process will still have to re-authenticate.
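For example, with DRCP enabled on the database, each short-lived Python invocation would connect to a pooled server roughly like this (the credentials, service name, and connection class are placeholders):

    import cx_Oracle

    # ":pooled" requests a DRCP pooled server; cclass and purity control
    # how much session state DRCP may reuse between processes.
    conn = cx_Oracle.connect(
        user="scott",
        password="tiger",
        dsn="dbhost.example.com/orclpdb1:pooled",
        cclass="MYAPP",
        purity=cx_Oracle.ATTR_PURITY_SELF,
    )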

How does the Postgres server know to keep a database connection open

I wonder how the Postgres server determines when to close a DB connection if I forgot to close it on the Python side.
Does the Postgres server send a ping to the client? From my understanding, this is not possible.
PostgreSQL indeed does something like that, although it is not a ping.
PostgreSQL uses a TCP feature called keepalive. Once enabled for a socket, the operating system kernel will regularly send keepalive messages to the other party (the peer), and if it doesn't get an answer after a couple of tries, it closes the connection.
The default timeouts for keepalive are pretty long, in the vicinity of two hours. You can configure the settings in PostgreSQL, see the documentation for details.
The default values and possible values vary according to the operating system used.
There is a similar feature available for the client side, but it is less useful and not enabled by default.
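On the client side these show up as libpq connection parameters, so a Python client can enable them itself; a sketch with psycopg2 (host and credentials are placeholders):

    import psycopg2

    # The keepalives* settings are standard libpq connection parameters;
    # psycopg2 passes them through to the underlying socket.
    conn = psycopg2.connect(
        host="dbhost.example.com",
        dbname="mydb",
        user="myuser",
        password="secret",
        keepalives=1,            # enable TCP keepalive on this socket
        keepalives_idle=60,      # idle seconds before the first probe
        keepalives_interval=10,  # seconds between unanswered probes
        keepalives_count=3,      # lost probes before the connection drops
    )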
When your script quits, your connection will close and the server will clean it up accordingly. Likewise, in a garbage-collected language like Python, it's often the case that when you stop using the connection and it falls out of scope, it will be closed and cleaned up.
It is possible to write code that never releases these resources properly and just perpetually creates new handles; that can be problematic if you don't have something server-side that kills them after some period of idle time. Postgres doesn't do this by default, though it can be configured to; MySQL does.
In short, Postgres will keep a database connection open until you kill it, either explicitly (such as via a close call) or implicitly (such as the handle falling out of scope and being deleted by the garbage collector).

Is it a good idea to use a database connection directly through a Python CGI script?

Is it a good idea to use a database connection directly from a Python CGI script that will be called from our client side?
If it is, then every time our server-side script is called it will try to connect to the database.
So basically, if we have 100,000 clients calling our server script, it will connect to the database each time and do its job (which may be similar for all clients)?
If it's not, then how should we tackle such situations?
Thanks

Accessing a router via Python's telnetlib

I'm working on code that uses Python's telnetlib to connect to a router, execute commands, and store the output in a file.
I'm using the read_until('#') function, expecting a router prompt before executing the next command, but my code freezes when I receive '--More--' data from the remote telnet side. I tried using a pattern match to find '--More--', but sometimes the --More-- keyword doesn't arrive all at once.
Any suggestions?
Do I have to send some IAC command to the remote telnet side?
sometimes the --More-- keyword doesn't arrive all at once
Try passing in a timeout.
Example: set timeout to 5 seconds for read_until():
read_until('--More--', 5)
Alternatively, you could use the expect() function to look for either '#' or '--More--' with a timeout:
expect(['#', '--More--'], 5)
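A sketch of the expect() approach that also pages through the --More-- prompt (the host and the command are placeholders, and on Python 3 telnetlib works with bytes):

    import telnetlib

    # Placeholder host; a real script would handle the login first.
    tn = telnetlib.Telnet("192.0.2.1", 23, timeout=10)
    tn.write(b"show running-config\n")

    output = b""
    while True:
        # index is 0 for --More--, 1 for the prompt, -1 on timeout
        index, match, data = tn.expect([b"--More--", b"#"], 5)
        output += data
        if index == 0:
            tn.write(b" ")   # ask the pager for the next page
        else:
            break            # prompt seen (or timed out)

    with open("router-output.txt", "wb") as f:
        f.write(output)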
