closing session in sqlalchemy - python

I have created a method in a separate Python file. Whenever I have to get any data from the database, I call this method.
Now I am running a for loop where on every iteration a DB call is made to the method below, for example:
def get_method(self, identifier):
    sess = session.get_session()
    result = sess.query(..).filter(.. == ..)
    return list(result)[0]

def get_session():
    engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost/db', echo=True)
    Session = sessionmaker(engine)
    sess = Session()
    return sess
I am getting FATAL: sorry, too many clients already, probably because I am not closing the sess object. But even after closing it, I still get the same issue.
How do I handle this?

You shouldn't be opening a new session inside the for loop. Do that before your loop begins, and close it after you're finished with your transactions. The documentation is helpful here: when to open and close sessions
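A minimal sketch of that pattern (MyModel and identifiers are placeholders, not from the original question). Note also that get_session above builds a new engine, each with its own connection pool, on every call; create_engine is normally called once per process:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgresql+psycopg2://postgres:postgres@localhost/db')  # once per process
Session = sessionmaker(bind=engine)

sess = Session()  # one session for the whole loop
try:
    for identifier in identifiers:
        obj = sess.query(MyModel).filter(MyModel.id == identifier).first()
        # ... work with obj ...
finally:
    sess.close()  # release the connection back to the pool once, after the loop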

Related

scoped_session.close() in sqlalchemy

I am using scoped_session from SQLAlchemy for my Python APIs:
class DATABASE():
    def __init__(self):
        engine = create_engine(
            'mssql+pyodbc:///?odbc_connect=%s' % (
                urllib.parse.quote_plus(
                    'DRIVER={/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so};SERVER=localhost;'
                    'DATABASE=db1;UID=sa;PWD=admin;port=1433;'
                )),
            isolation_level='READ COMMITTED',
            connect_args={'options': '-c lock_timeout=30 -c statement_timeout=30', 'timeout': 40},
            max_overflow=10, pool_size=30, pool_timeout=60)
        session = sessionmaker(bind=engine)
        self.Session = scoped_session(session)

    def calculate(self, book_id):
        session = self.Session
        output = None
        try:
            result = session.query(Book).get(book_id)
            if result:
                output = result.pages
        except:
            session.rollback()
        finally:
            session.close()
        return output

    def generate(self):
        session = self.Session
        output = None
        try:
            result = session.query(Order).filter(Order.product_name == 'book').first()
            pages = self.calculate(result.product_id)
            if not output:
                result.product_details = str(pages)
                session.commit()
        except:
            session.rollback()
        finally:
            session.close()
        return output

database = DATABASE()
database.generate()
Here the session is not committing. Going through the code: the generate function calls the calculate function, and there, after the calculation completes, the session is closed; because of this, the changes made in the generate function are not committed to the database.
If I remove the session.close() from the calculate function, the changes made in generate are committed to the database.
The blogs recommend closing the session after the API has finished accessing the database.
How do I resolve this, and what is the flow of SQLAlchemy?
Thanks
Scoped sessions are thread-local by default, so as long as you are in the same thread, the factory (self.Session in this case) will always return the same session. calculate and generate are therefore using the same session, and closing it in calculate rolls back the pending changes made in generate.
Moreover, scoped sessions should not be closed directly; they should be removed from the session registry by calling self.Session.remove(), which closes them automatically.
You should work out where in your code you are finished with the session, and remove it there and nowhere else. It is probably best to commit or roll back in that same place. In the code in the question I would remove the rollback and close from calculate.
The docs on When do I construct a Session, when do I commit it, and when do I close it? and Contextual/Thread-local Sessions should be helpful.
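A minimal sketch of that suggestion, reusing the Book and Order models from the question (only generate ends the session's life cycle, via Session.remove()):

def calculate(self, book_id):
    # uses the shared thread-local session; no rollback/close here
    session = self.Session
    result = session.query(Book).get(book_id)
    return result.pages if result else None

def generate(self):
    session = self.Session
    try:
        result = session.query(Order).filter(Order.product_name == 'book').first()
        pages = self.calculate(result.product_id)
        result.product_details = str(pages)
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        self.Session.remove()  # removes the session from the registry and closes it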

How does a session work with SQL Alchemy with pool?

self.engine = db.create_engine(self.url, convert_unicode=True, pool_size=5, pool_recycle=1800, max_overflow=10)
connection = self.engine.connect()
Session = scoped_session(sessionmaker(bind=self.engine, autocommit=False, autoflush=True))
I initialize my session like this.
def first():
    with Session() as session:
        second(session)

def second(session):
    session.add(obj)
    third(session)

def third(session):
    session.execute(query)
I use my session like this.
I think one pool connection is assigned to each session, so I expected the code above to work even with pool_size=1, max_overflow=0. But when I set it up like that, it gets stuck and raises an exception like:
descriptor '__init__' requires a 'super' object but received a 'tuple'
Why is that? Is the pool handing out more than one connection per session rather than one apiece?
And when using the session with a with block, can I skip explicit commit and rollback when an exception occurs?
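For reference, a minimal sketch of the two context-manager forms (SQLAlchemy 1.4+, assuming Session is a sessionmaker and obj is some mapped object):

# Form 1: the with block only closes the session on exit; commit is still explicit.
with Session() as session:
    session.add(obj)
    session.commit()

# Form 2: session.begin() adds transactional scope:
# commit on normal exit, rollback if the block raises.
with Session() as session:
    with session.begin():
        session.add(obj)

So with the plain "with Session() as session:" form used in first(), an exception just closes the session and discards the pending transaction; nothing is committed for you unless you also use begin().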

How to keep session active for long time in sqlalchemy?

I have code that runs queries from a query list. These queries are long and take quite a long time to execute. Since I am executing these queries in a loop, the session seems to expire and I get an error telling me that the connection to the server was lost.
I then created the session as well as the engine inside the loop (closing the session and disposing of the engine at the end of each iteration), but I understand that creating a new connection is an expensive operation.
How can I reuse the connection in this case so that I do not have to create the session and engine each time?
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# an Engine, which the Session will use for connection resources
some_engine = create_engine('mysql://user:password@localhost/')

# create a configured "Session" class
Session = sessionmaker(bind=some_engine)

# create a Session
session = Session()

for long_query in long_query_list:
    # work with session
    session.execute(long_query)
    session.commit()
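One common way to handle dropped connections while reusing a single engine (a sketch, not from the original thread) is the pool_pre_ping and pool_recycle options of create_engine:

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# pool_pre_ping tests each pooled connection with a lightweight ping before
# handing it out and transparently replaces it if the server has dropped it;
# pool_recycle proactively retires connections older than the given seconds.
some_engine = create_engine(
    'mysql://user:password@localhost/',
    pool_pre_ping=True,
    pool_recycle=3600,
)
Session = sessionmaker(bind=some_engine)

session = Session()
for long_query in long_query_list:
    session.execute(text(long_query))
    session.commit()  # ends the transaction and returns the connection to the pool
session.close()

Note that the pre-ping only guards against connections that died between checkouts; a connection dropped in the middle of a long-running query will still fail and the statement has to be retried.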

Cascade updates after sqlalchemy "after_update" event

I am trying to cascade updates after a model has been updated:
@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object = target.object
    if object.is_finished():
        object.status = COMPLETED
The issue is that I can't seem to get the object's status committed to the actual DB. I can tell that the status does get set, because I have a listener attached to object.status.
I have tried calling commit() on the object's session, but that results in:
ResourceClosedError: This transaction is closed
Is there a better way to accomplish this? Should I use the connection object directly?
For those who find this later on: I was able to successfully use the connection directly:

@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object_table = Object.__table__
    object = target.object
    if object.is_finished():
        connection.execute(
            object_table.update().
            where(object_table.c.id == object.id).
            values(status=COMPLETED)
        )

If there is a more elegant way, I am all for it.
When after_update fires, the session is in the middle of a flush, so you won't be able to commit any changes with that session. Another way is to create a new session inside the after_update event handler (keep in mind that a separate session uses its own connection and transaction, so it cannot see the uncommitted changes being flushed and may block on rows locked by the flushing transaction):
@event.listens_for(Task, "after_update")
def propagate_status(mapper, connection, target):
    object = target.object
    if object.is_finished():
        session = db.create_scoped_session()  # or any other way of making a new session
        session.query(Object).filter(Object.id == object.id).update({"status": COMPLETED})
        session.commit()

Zombie Connection in SQLAlchemy

DBSession = sessionmaker(bind=self.engine)

def add_person(name):
    s = DBSession()
    s.add(Person(name=name))
    s.commit()
Every time I run add_person(), another connection is created to my PostgreSQL DB.
Looking at:
SELECT count(*) FROM pg_stat_activity;
I see the count going up until I get a "remaining connection slots are reserved for non-replication superuser connections" error.
How do I kill those connections? Am I wrong to open a new session every time I want to add a Person record?
In general, you should keep your session factory (here DBSession) separate from the functions that make changes to the database. So in your case you might try something like this instead:

DBSession = sessionmaker(bind=self.engine)
session = DBSession()  # create your session outside of the functions that modify the database

def add_person(name):
    session.add(Person(name=name))
    session.commit()

Now you will not get new connections every time you add a person to the database.
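An alternative sketch (not from the original answer, SQLAlchemy 1.4+): keep one engine per process and open a short-lived session per operation with sessionmaker.begin(), so each call commits or rolls back and then returns its connection to the pool:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgresql+psycopg2://user:password@localhost/db')  # once per process
DBSession = sessionmaker(bind=engine)

def add_person(name):
    # commits on success, rolls back on exception, then closes the session;
    # the underlying connection goes back to the engine's pool instead of piling up
    with DBSession.begin() as s:
        s.add(Person(name=name))

Person here is the mapped class from the question.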
