How to use a cx_Oracle session pool with Flask gracefully?

I'm new to Python and Flask, and I use Oracle. While working through the Flask tutorial I wrote the code below, but it smells really bad. Please help me with these questions, thanks a lot!
1) Do I need to release the connection back to the pool explicitly?
2) How can I implement pool acquire and release gracefully?
def get_dbpool():
    if not hasattr(g, 'db_pool'):
        g.db_pool = connect_db()
    return g.db_pool

@app.teardown_appcontext
def close_db(error):
    if hasattr(g, 'db_pool'):
        g.db_pool.close()

@app.route('/')
def hello_world():
    db = get_dbpool().acquire()
    cursor = db.cursor()
    sql = ''
    cursor.execute(sql)
    rows = cursor.fetchall()
    cursor.close()
    get_dbpool().release(db)
    return json.jsonify(combines=rows)

There is no need to release the connection to the pool explicitly unless you intend to keep processing for some time and don't need the connection any longer. cx_Oracle automatically releases the connection back to the pool when the connection goes out of scope (function ends), provided that you haven't implemented a circular reference to the connection, of course! In that case you would have to wait until garbage collection executes. Hopefully that answers your questions!
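That said, if you prefer to manage it explicitly, one common pattern is to acquire a connection per request and release it in a teardown handler. A rough sketch (the pool settings, credentials, DSN and query are placeholders, not your real values):
import cx_Oracle
from flask import Flask, g, jsonify

app = Flask(__name__)

# Create one pool per process at startup (connection details are placeholders).
pool = cx_Oracle.SessionPool(user="scott", password="tiger",
                             dsn="localhost/orclpdb1",
                             min=2, max=10, increment=1)

def get_db():
    # Acquire one connection per request and cache it on g.
    if not hasattr(g, 'db'):
        g.db = pool.acquire()
    return g.db

@app.teardown_appcontext
def release_db(error):
    # Release the request's connection back to the pool, even on errors.
    db = g.pop('db', None)
    if db is not None:
        pool.release(db)

@app.route('/')
def hello_world():
    cursor = get_db().cursor()
    try:
        cursor.execute("SELECT 1 FROM dual")
        rows = cursor.fetchall()
    finally:
        cursor.close()
    return jsonify(combines=rows)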

Related

How to avoid the QueuePool limit error using Flask-SQLAlchemy?

I'm developing a webapp using Flask-SQLAlchemy and a Postgres DB. I have a dropdown list on my webpage which is populated from a select on the DB, and after selecting different values a couple of times I get a "sqlalchemy.exc.TimeoutError".
My package's versions are:
Flask-SQLAlchemy==2.5.1
psycopg2-binary==2.8.6
SQLAlchemy==1.4.15
My parameters for the DB connection are set as:
app.config['SQLALCHEMY_POOL_SIZE'] = 20
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 5
app.config['SQLALCHEMY_POOL_RECYCLE'] = 10
The error I'm getting is:
sqlalchemy.exc.TimeoutError: QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)
After changing the value of 'SQLALCHEMY_MAX_OVERFLOW' from 20 to 100, I get the following error after a few value changes on the dropdown list.
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: sorry, too many clients already
Every time a new value is selected from the dropdown list, four queries are triggered to the database and they are used to populate four corresponding tables in my HTML with the results from that query.
I have a 'db.session.commit()' statement after every single query to the DB, but even though I have it, I get this error after a few value changes to my dropdown list.
I know that I should be looking to correctly manage my connection sessions, but I'm struggling with this. I thought about setting the pool timeout to 5s instead of the default 30s, in hopes that sessions would be closed and returned to the pool faster, but it didn't seem to help.
Following a suggestion from @snakecharmerb, I checked the output of:
select * from pg_stat_activity;
I ran the webapp for 10 different values before it showed me an error, which means all of the 20+20 sessions were used and left in an 'idle in transaction' state.
Does anybody have any suggestions on what I should change or look for?
I found a solution to the issue I was facing in another post on Stack Overflow.
When you assign your Flask app to your db variable, on top of indicating which Flask app it should use, you can also pass session options, as below:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app, session_options={'autocommit': True})
The usage of 'autocommit' solved my issue.
Now, as suggested, I'm using:
app.config['SQLALCHEMY_POOL_SIZE'] = 1
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 0
Now everything is working as it should.
The original post which helped me is: Autocommit in Flask-SQLAlchemy
@snakecharmerb, @jorzel, @J_H -> Thanks for the help!
You are leaking connections. A little counterintuitively, you may find you obtain better results with a lower pool limit. A given Python thread only needs a single pooled connection for the simple single-database queries you're doing.
Setting the limit to 1, with 0 overflow, will cause you to notice a leaked connection earlier. This makes it easier to pin the blame on the source code that leaked it. As it stands, you have lots of code, and the error is deferred until after many queries have been issued, making it harder to reason about system behavior.
I will assume you're using sqlalchemy 1.4.29.
To avoid leaking, try using this:
from contextlib import closing
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine(some_url, future=True, pool_size=1, max_overflow=0)
get_session = scoped_session(sessionmaker(bind=engine))
...

with closing(get_session()) as session:
    try:
        sql = """yada yada"""
        rows = session.execute(text(sql)).fetchall()
        session.commit()
        ...
        # Do stuff with result rows.
        ...
    except Exception:
        session.rollback()
I am using flask-restful.
So when I got this error -> QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)
I found out from the logs that my checked-out connections were not closing. I discovered this using logger.info(db_session.get_bind().pool.status()).
def custom_decorator(error_message, db_session):
    def api_decorator(func):
        def api_request(self, *args, **kwargs):
            try:
                response = func(self, *args, **kwargs)
                db_session.commit()
                return response
            except Exception as err:
                db_session.rollback()
                logger.error(error_message.format(err))
                return error_response(
                    message="Internal Server Error",
                    status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
                )
            finally:
                db_session.close()
        return api_request
    return api_decorator
So I created this decorator, which handles closing the db_session automatically. With it, I no longer see any checked-out connections left open.
You can use the decorator on your functions as follows:
@custom_decorator("blah", db_session)
def example():
    "some code"

Python, Tornado, TorMySQL

I have an old, large project based on Python 2.7 and the Tornado framework. To work with MySQL, it originally used Tornado-MySQL with raw SQL queries, and that worked well, but now it must use MySQL 8, and that library is obsolete and unmaintained.
So now I have set up the TorMySQL library – it connects to MySQL Server 8 fine, but I don't fully understand how to use it, and this leads to bugs.
In one of the project's files I wrote this code to access the database:
from tornado import gen
from tornado.gen import Return
from tornado.ioloop import IOLoop
import tormysql
import settings

POOL = tormysql.ConnectionPool(
    max_connections=20,
    idle_seconds=7200,  # timeout time, 0 is no timeout
    wait_connection_timeout=3,
    host='127.0.0.1',
    port=3306,
    user=settings.MYSQL_USER,
    passwd=settings.MYSQL_PASSWORD,
    db='aivanf',
    use_unicode=True,
    charset='utf8mb4')

@gen.coroutine
def executePool(query, params):
    with (yield POOL.Connection()) as conn:
        with conn.cursor() as cursor:
            try:
                yield cursor.execute(query, params)
            except Exception, ex:
                print('Exception!\n{}'.format(ex))
                yield conn.rollback()
                raise Return(None)
            else:
                first = query[:10].lower()
                if 'update' in first or 'insert' in first:
                    yield conn.commit()
                if 'select' in first:
                    raise Return(cursor.fetchall())
                else:
                    raise Return(None)
I use the if's because this single function is called with different types of queries. I know it's ugly, but it works fine. Similar, even simpler code using Tornado-MySQL worked perfectly, but only with MySQL 5.7.
However, some UPDATE / INSERT queries seem to be skipped, and I get these messages:
(1213, u'Deadlock found when trying to get lock; try restarting transaction')
WARNING:root:Connection maybe not release, used time 25.32s {'port': 3306, 'host': '127.0.0.1', 'user': '...', 'database': '...'} <3,2>.
Also, sometimes different clients of the server see different versions of the data – as if they had their own connections with their own uncommitted data.
How to solve the problem?
I suppose the problem is with the pool – maybe I have to close / recreate it? The TorMySQL page also has this line: yield pool.close()
You probably have to conn.commit() even after a SELECT query – otherwise a run of SELECT queries is done within the same transaction as the first.
I think most users are accustomed to "autocommit" by default, but that does not seem to be the default mode for TorMySQL.
(I was confused the same way for my first couple of days of using TorMySQL :)
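A minimal sketch of how that could look in the coroutine from the question (untested; it reuses the POOL and imports defined above) is to commit after SELECTs as well:
@gen.coroutine
def executePool(query, params):
    with (yield POOL.Connection()) as conn:
        with conn.cursor() as cursor:
            try:
                yield cursor.execute(query, params)
            except Exception, ex:
                print('Exception!\n{}'.format(ex))
                yield conn.rollback()
                raise Return(None)
            else:
                first = query[:10].lower()
                if 'select' in first:
                    rows = cursor.fetchall()
                    # Commit even after a SELECT so the next query on this pooled
                    # connection starts a fresh transaction and sees current data.
                    yield conn.commit()
                    raise Return(rows)
                else:
                    yield conn.commit()
                    raise Return(None)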

Executing a Transaction using SQLAlchemy in a multithreaded environment

I'm working on an application which uses Flask, SQLAlchemy and PostgreSQL. I have to write a transaction that executes multiple queries on the database.
def exec_query_1():
    with db.engine.connect() as connection:
        connection.execute(#some-query)

def exec_query_2():
    with db.engine.connect() as connection:
        connection.execute(#some-query)

def exec_query_3():
    with db.engine.connect() as connection:
        connection.execute(#some-query)

def execute_transaction():
    with db.engine.connect() as connection:
        with connection.begin() as transaction:
            exec_query_1()
            exec_query_2()
            exec_query_3()
Given that the application is multithreaded, will this code work as expected?
If yes, how? If no, what would be the right approach to make it work?
The code will not work as expected, even in a single thread. The connections opened in the functions are separate¹ from the connection used in execute_transaction() and have their own transactions. You should arrange your code so that the functions receive the connection with the ongoing transaction as an argument:
def exec_query_1(connection):
    connection.execute(#some-query)

def exec_query_2(connection):
    connection.execute(#some-query)

def exec_query_3(connection):
    connection.execute(#some-query)

def execute_transaction():
    with db.engine.connect() as connection:
        with connection.begin() as transaction:
            exec_query_1(connection)
            exec_query_2(connection)
            exec_query_3(connection)
Remember that connections are not thread-safe, so don't share them between threads. "When do I construct a Session, when do I commit it, and when do I close it?" is a good read, although it is about Session.
¹ May depend on pool configuration.
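If you later switch to the ORM, a thread-local scoped_session is the usual way to give each thread its own Session. A minimal sketch (the database URL and the worker function are placeholders):
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("postgresql://user:pass@localhost/dbname", future=True)
Session = scoped_session(sessionmaker(bind=engine))

def worker():
    # Each thread gets its own Session from the registry; never share it across threads.
    session = Session()
    try:
        session.execute(text("SELECT 1"))
        session.commit()
    finally:
        Session.remove()  # discard the thread-local session when the thread is done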

SQLAlchemy connection hangs

def get_engine():
    engine = create_engine('mysql+mysqlconnector://...my_conn_string...', echo=True)
    return engine

def generic_execute(sql):
    db = get_engine()
    connection = db.connect()
    connection.execute(sql)
The code above executes the query properly, but the process appears to hang indefinitely.
How does one properly "close" or "kill" this connection? Thank you very much!
As you said, the connection needs to be closed, as stated by the documentation.
So after you are done executing the SQL query you need to call:
connection.close()
Also, once you are done with the engine db, you can call db.dispose() to clean everything up.
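Alternatively, a with block closes the connection for you. A small sketch (the connection string is a placeholder, and the engine is created once at module level instead of on every call):
from sqlalchemy import create_engine, text

# Create the engine once, not inside every function call.
engine = create_engine('mysql+mysqlconnector://user:pass@host/dbname')

def generic_execute(sql):
    # The connection is closed (returned to the pool) when the with block exits.
    with engine.connect() as connection:
        result = connection.execute(text(sql))
        return result.fetchall()  # assumes a SELECT-style statement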

Set database connection timeout in Python

I'm creating a RESTful API which needs to access the database. I'm using Restish, Oracle, and SQLAlchemy. However, I'll try to frame my question as generically as possible, without taking Restish or other web APIs into account.
I would like to be able to set a timeout for a connection executing a query. This is to ensure that long running queries are abandoned, and the connection discarded (or recycled). This query timeout can be a global value, meaning, I don't need to change it per query or connection creation.
Given the following code:
import cx_Oracle
import sqlalchemy.pool as pool

conn_pool = pool.manage(cx_Oracle)
conn = conn_pool.connect("username/p4ss@dbname")
conn.ping()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM really_slow_query")
    print cursor.fetchone()
finally:
    cursor.close()
How can I modify the above code to set a query timeout on it?
Will this timeout also apply to connection creation?
This is similar to what java.sql.Statement's setQueryTimeout(int seconds) method does in Java.
Thanks
For the query, you can look at a timer combined with a conn.cancel() call. Something along these lines:
import threading

t = threading.Timer(timeout, conn.cancel)
t.start()
cursor = conn.cursor()
cursor.execute(query)
res = cursor.fetchall()
t.cancel()
On Linux, see /etc/oracle/sqlnet.ora:
sqlnet.outbound_connect_timeout = value
There are also the options tcp.connect_timeout and sqlnet.expire_time. Good luck!
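For illustration only (the values below are placeholders, not recommendations), those sqlnet.ora entries might look like:
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 10   # seconds allowed to establish a connection
TCP.CONNECT_TIMEOUT = 10               # seconds for the underlying TCP connect
SQLNET.EXPIRE_TIME = 10                # minutes between dead-connection probes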
You could look at setting up PROFILEs in Oracle to terminate the queries after a certain number of logical_reads_per_call and/or cpu_per_call
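A rough sketch of such a profile (the limits and names are made up for illustration; CPU_PER_CALL is measured in hundredths of a second and LOGICAL_READS_PER_CALL in database blocks):
CREATE PROFILE query_limits LIMIT
  LOGICAL_READS_PER_CALL 1000000
  CPU_PER_CALL 6000;
ALTER USER app_user PROFILE query_limits;
-- Resource limits are only enforced when the RESOURCE_LIMIT parameter is TRUE:
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;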
Timing Out with the System Alarm
Here's how to use the operating system's alarm signal to do this. It's generic, and works for things other than Oracle.
import signal

class TimeoutExc(Exception):
    """this exception is raised when there's a timeout"""
    def __init__(self):
        Exception.__init__(self)

def alarmhandler(signame, frame):
    """sigalarm handler. raises a Timeout exception"""
    raise TimeoutExc()

nsecs = 5
signal.signal(signal.SIGALRM, alarmhandler)  # set the signal handler function
signal.alarm(nsecs)                          # in 5s, the process receives a SIGALRM
try:
    cx_Oracle.connect(blah blah)  # do your thing, connect, query, etc
    signal.alarm(0)               # if successful, turn off the alarm
except TimeoutExc:
    print "timed out!"            # timed out!!
