Why does there appear to be a significant delay between calling session.close() and the session actually closing?
I'm "using up" connections in a way that doesn't feel right. Is there a better way to do this or a design pattern I'm missing?
Following the guide here I use the following code, copied for completeness:
@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
def run_my_program():
    with session_scope() as session:
        ThingOne().go(session)
        ThingTwo().go(session)
This works great for reliably committing data and avoiding invalid sessions.
The problem is with hitting connection limits.
For example, say I have a page that makes 5 asynchronous calls per visit. If I visit the page and hit refresh in quick succession, it will spawn 5 * number_times_refreshed connections. It will eventually close them, but there is a non-negligible time delay.
The issue was how sessionmaker() was used and where the engine was bound. Specifically, this is the problematic version:
def newSession():
    engine = create_engine(settings.DATABASE_URL)
    Base.metadata.bind = engine
    DBSession = sessionmaker(bind=engine)
    session = DBSession()
    return session
@contextmanager
def session_scope():
    session = newSession()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
This creates a new engine (and with it a new connection pool) on every request. Besides being a poor pattern in itself, it caused confusion about which "connection" was being closed. In this context we have both a "session" and a "DBSession" factory. session.close() does close the session, but it does not touch the engine or its pool.
A better way (Solution):
engine = create_engine(settings.DATABASE_URL)
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)

@contextmanager
def session_scope():
    session = DBSession()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
This creates one engine (and one connection pool) for the lifetime of the app, so sessions check connections out of the shared pool instead of opening fresh ones.
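The module-level pattern above can be sketched end to end; this is a minimal runnable version in which an in-memory SQLite database stands in for settings.DATABASE_URL, so the engine and pool are created exactly once at import time and reused by every `session_scope()` call:

```python
# Sketch of the module-level engine pattern; "sqlite://" is a stand-in
# for settings.DATABASE_URL so the example is self-contained.
from contextlib import contextmanager

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")    # created once, at import time
DBSession = sessionmaker(bind=engine)  # factory reused by every request

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = DBSession()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()  # returns the connection to the shared pool

# Two separate scopes draw from the same engine/pool:
with session_scope() as s:
    s.execute(text("CREATE TABLE t (x INTEGER)"))
    s.execute(text("INSERT INTO t VALUES (1)"))

with session_scope() as s:
    count = s.execute(text("SELECT COUNT(*) FROM t")).scalar()
```

Because the second scope sees the row written by the first, both sessions are demonstrably sharing the one engine created at module import.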
Related
I am using the scoped_session for my APIs from sqlalchemy python
class DATABASE():
    def __init__(self):
        engine = create_engine(
            'mssql+pyodbc:///?odbc_connect=%s' % (
                urllib.parse.quote_plus(
                    'DRIVER={/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so};SERVER=localhost;'
                    'DATABASE=db1;UID=sa;PWD=admin;port=1433;'
                )),
            isolation_level='READ COMMITTED',
            connect_args={'options': '-c lock_timeout=30 -c statement_timeout=30', 'timeout': 40},
            max_overflow=10, pool_size=30, pool_timeout=60)
        session = sessionmaker(bind=engine)
        self.Session = scoped_session(session)

    def calculate(self, book_id):
        session = self.Session
        output = None
        try:
            result = session.query(Book).get(book_id)
            if result:
                output = result.pages
        except:
            session.rollback()
        finally:
            session.close()
        return output
    def generate(self):
        session = self.Session
        try:
            result = session.query(Order).filter(Order.product_name=='book').first()
            pages = self.calculate(result.product_id)
            if not output:
                result.product_details = str(pages)
                session.commit()
        except:
            session.rollback()
        finally:
            session.close()
        return output
database = DATABASE()
database.generate()
Here, the session is not committing. Walking through the code: the generate function calls the calculate function, and there, after the calculations complete, the session is closed. Because of this, the changes made in the generate function are not committed to the database.
If I remove the session.close() from the calculate function, the changes made in generate are committed to the database.
The blogs recommend closing the session after the API finishes accessing the database.
How do I resolve this, and what is the flow of SQLAlchemy sessions?
Thanks
Scoped sessions default to being thread-local, so as long as you are in the same thread, the factory (self.Session in this case) will always return the same session. So calculate and generate are both using the same session, and closing it in calculate will roll back the changes made in generate.
Moreover, scoped sessions should not be closed, they should be removed from the session registry (by calling self.Session.remove()); they will be closed automatically in this case.
You should work out where in your code you will have finished with your session, and remove it there and nowhere else. It will probably be best to commit or rollback in the same place. In the code in the question I'd remove rollback and close from calculate.
The docs on When do I construct a Session, when do I commit it, and when do I close it? and Contextual/Thread-local Sessions should be helpful.
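Applying that advice to the question's code yields the following sketch. The MSSQL connection and the Book/Order models are replaced by a hypothetical in-memory SQLite Book model so the example is self-contained; the essential change is that calculate no longer rolls back or closes the shared scoped session, and teardown happens in exactly one place via Session.remove():

```python
# Sketch of the suggested fix: the thread-local scoped session is torn
# down once, via the registry, instead of inside every helper.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Book(Base):  # hypothetical stand-in for the question's models
    __tablename__ = "book"
    id = Column(Integer, primary_key=True)
    pages = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

def calculate(book_id):
    # Same thread -> Session() returns the same session generate is using,
    # so we must not rollback or close here.
    session = Session()
    result = session.get(Book, book_id)
    return result.pages if result else None

def generate(book_id):
    session = Session()
    try:
        pages = calculate(book_id)
        session.add(Book(id=book_id + 100, pages=pages))
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()  # the single place the session is torn down

# Seed one row, then run the flow:
seed = Session()
seed.add(Book(id=1, pages=250))
seed.commit()
Session.remove()

generate(1)
check = Session()
copied = check.get(Book, 101)
Session.remove()
```

After generate(1) runs, the new row is committed because no helper closed the session out from under it mid-flow.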
All of a sudden, in my Celery application I'm getting lots of:
(psycopg2.errors.InFailedSqlTransaction) current transaction is aborted, commands ignored until end of transaction block
The truth is that I have no idea how to properly setup Celery to work with SQLAlchemy. I have a BaseTask that all tasks inherit from and it looks like this:
from sqlalchemy.orm import scoped_session, sessionmaker

session_factory = sessionmaker(
    autocommit=False,
    autoflush=False,
    bind=create_engine("postgresql://****")
)
Session = scoped_session(session_factory)

class BaseTask(celery.Task):
    def after_return(self, *args: tuple, **kwargs: dict) -> None:
        Session.remove()

    @property
    def session(self) -> OrmSession:
        return Session()
And all of my tasks (bound or not) are either using self.session or {task_func}.session to make the queries. Should I rather use a context manager around my queries, within the tasks, like:
@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

@app.task()
def my_task():
    with session_scope() as session:
        do_a_query(session)
Can someone please explain to me how those sessions work? And guide me towards the correct "Celery use of SQLAlchemy"?
Thank you.
engine = db.create_engine(self.url, convert_unicode=True, pool_size=5, pool_recycle=1800, max_overflow=10)
connection = self.engine.connect()
Session = scoped_session(sessionmaker(bind=self.engine, autocommit=False, autoflush=True))
I initialize my session like this. And
def first():
    with Session() as session:
        second(session)

def second(session):
    session.add(obj)
    third(session)

def third(session):
    session.execute(query)
I use my session like this.
I think one pool connection is assigned per session. So I would expect the above code to work even with pool_size=1, max_overflow=0. But when I set it up like that, it gets stuck and raises an exception:
descriptor '__init__' requires a 'super' object but received a 'tuple'
Why is that? Is more than one pool connection assigned per session, rather than one each?
And when using the session with with, can I skip commit and rollback when an exception occurs?
I'm getting this AttributeError: __enter__ when I try to use a SQLAlchemy session like in this guide.
My code:
Session = scoped_session(sessionmaker(autoflush=True, autocommit=False, bind=engine))

@contextmanager
def session_scope():
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

class SomeClass:
    def __init__(self):
        self.session_scope = session_scope

    def something_with_session(self):
        with self.session_scope as session:  # <-- error
What am I doing wrong? I'm using Python 3.6
You have to call the function to get the context manager:
with self.session_scope() as session:
...
For those used to SQLAlchemy 1.4 way of running the session construction / close process via context manager like so:
with Session() as session:
    # Do something
If you are getting AttributeError: __enter__, check that the SQLAlchemy version in your environment is really SQLAlchemy>=1.4. More details in this answer.
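A quick, self-contained way to confirm this (assuming SQLAlchemy is importable in your environment) is to compare the reported version against 1.4 and check that Session objects actually expose the context-manager protocol:

```python
# Session gained __enter__/__exit__ in SQLAlchemy 1.4, so both checks
# below should agree for any installed version.
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

major, minor = (int(p) for p in sqlalchemy.__version__.split(".")[:2])
supports_ctx = (major, minor) >= (1, 4)

Session = sessionmaker(bind=create_engine("sqlite://"))
has_enter = hasattr(Session(), "__enter__")
```

If `has_enter` is False, `with Session() as session:` will raise the AttributeError: __enter__ from the question, and upgrading to SQLAlchemy>=1.4 is the fix.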
How is the session able to output user objects even after session.close() is executed?
__init__.py
def get_db(request):
    maker = request.registry.dbmaker
    session = maker()

    def cleanup(request):
        if request.exception is not None:
            session.rollback()
        else:
            session.commit()
        session.close()
    request.add_finished_callback(cleanup)
    return session
view.py
@view_config(route_name='get_path', renderer='json')
def get_path(request):
    session = request.db
    session.query(User).all()  # outputs user objects
    session.close()
    session.query(User).all()  # outputs user objects
I even tried session.bind.dispose() (which should dispose of all connections).
When you call session.close(), it releases the resources the session holds, i.e. its transactional/connection state. It does not permanently close the session in the sense that no further queries can be run after you have called it.
From sqlalchemy docs:
Closing
The close() method issues an expunge_all(), and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.
So it would still run the query after you called session.close(). Any uncommitted transactions, as stated above, will be rolled back.
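This behavior is easy to verify in a self-contained sketch; here a hypothetical in-memory SQLite table stands in for the User model, and the same session is queried both before and after close():

```python
# Demonstrates that close() releases resources but the session remains
# usable: the next statement simply begins a new transaction on a
# freshly checked-out connection.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # in-memory stand-in for the app's DB
with engine.begin() as conn:         # commits automatically on exit
    conn.execute(text("CREATE TABLE users (name TEXT)"))
    conn.execute(text("INSERT INTO users VALUES ('alice')"))

session = sessionmaker(bind=engine)()
first = session.execute(text("SELECT name FROM users")).scalar()
session.close()  # returns the connection to the pool
second = session.execute(text("SELECT name FROM users")).scalar()
session.close()
```

Both queries return the committed row, matching the docs quoted above: close() rolls back uncommitted state and releases the connection, but the session object itself can keep serving queries.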