Do we need to do session.begin() explicitly?

Specifically, do I need to call begin() after doing a commit or rollback? I saw suggestions that a new session always enters the "begin" state, but I was wondering about auto-committed transactions that happen when a session is begun.
When must I issue a begin()? Do multiple begin() calls in the same session behave the same as in a MySQL terminal?
I have cases like these (see the comments):
--1 A method that runs transactions in a loop:
for ...:  # each iteration deserves its own transaction
    session.begin()
    for ...:
        session.execute("insert into ...")
    session.commit()
--2 A function that calls another function in the same session:
def f1():  # can be run standalone too
    session = Session()
    session.begin()
    # ...do stuff
    session.commit()

def f2():
    session = Session()
    session.begin()
    a = session.execute("select ...")
    if stuff_not_fine():
        session.rollback()  # kill the current transaction
        f1()
        session.begin()  # continue where it left off
        a = session.execute("select ...")
    # ...do the rest of the stuff

A SQLAlchemy Session is also a context manager, so you can do

session = Session()
with session:
    # do stuff

To roll back, you can raise an exception inside the block: if one is raised, the context manager rolls back the transaction. You should remember to catch the exception outside the block, however.
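A minimal sketch of that pattern, assuming SQLAlchemy 1.4+ and an in-memory SQLite database (the table `t` and the `RuntimeError` here are made up for illustration):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# Hypothetical in-memory database, just for illustration.
engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

with Session() as session:
    session.execute(text("CREATE TABLE t (x INTEGER)"))
    session.commit()

# `with session.begin():` commits on a clean exit and rolls back if an
# exception escapes the block; the exception still propagates, so it
# must be caught outside the block.
try:
    with Session() as session, session.begin():
        session.execute(text("INSERT INTO t VALUES (1)"))
        raise RuntimeError("something went wrong")  # forces a rollback
except RuntimeError:
    pass

with Session() as session:
    count = session.execute(text("SELECT count(*) FROM t")).scalar()
    # count == 0: the insert was rolled back
```

Note that a plain `with session:` block only closes the session on exit; it is the `session.begin()` context manager that provides the commit-or-rollback behaviour.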


scoped_session.close() in sqlalchemy

I am using scoped_session for my APIs with SQLAlchemy in Python.
class DATABASE():
    def __init__(self):
        engine = create_engine(
            'mssql+pyodbc:///?odbc_connect=%s' % (
                urllib.parse.quote_plus(
                    'DRIVER={/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so};SERVER=localhost;'
                    'DATABASE=db1;UID=sa;PWD=admin;port=1433;'
                )),
            isolation_level='READ COMMITTED',
            connect_args={'options': '-c lock_timeout=30 -c statement_timeout=30', 'timeout': 40},
            max_overflow=10, pool_size=30, pool_timeout=60)
        session = sessionmaker(bind=engine)
        self.Session = scoped_session(session)

    def calculate(self, book_id):
        session = self.Session
        output = None
        try:
            result = session.query(Book).get(book_id)
            if result:
                output = result.pages
        except:
            session.rollback()
        finally:
            session.close()
        return output

    def generate(self):
        session = self.Session
        output = None
        try:
            result = session.query(Order).filter(Order.product_name == 'book').first()
            pages = self.calculate(result.product_id)
            if not output:
                result.product_details = str(pages)
                session.commit()
        except:
            session.rollback()
        finally:
            session.close()
        return output

database = DATABASE()
database.generate()
Here, the session is not committing. Stepping through the code: the generate function calls the calculate function, and there, after the calculation is completed, the session is closed. Because of this, the changes made in the generate function are not committed to the database.
If I remove the session.close() from the calculate function, the changes made in generate are committed to the database.
The blogs recommend closing the session after the API has finished accessing the database.
How do I resolve this, and what is the flow of SQLAlchemy?
Thanks
Scoped sessions default to being thread-local, so as long as you are in the same thread, the factory (self.Session in this case) will always return the same session. So calculate and generate are both using the same session, and closing it in calculate will roll back the changes made in generate.
Moreover, scoped sessions should not be closed directly; they should be removed from the session registry (by calling self.Session.remove()), which closes them automatically.
You should work out where in your code you will have finished with your session, and remove it there and nowhere else. It will probably be best to commit or rollback in the same place. In the code in the question I'd remove rollback and close from calculate.
The docs on When do I construct a Session, when do I commit it, and when do I close it? and Contextual/Thread-local Sessions should be helpful.
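A runnable sketch of that fix, adapted to an in-memory SQLite database with raw SQL in place of the original Book/Order models (the book table, its columns, and the values are all made up for illustration): the inner helper never closes or removes the session; only the outermost unit of work does.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE book (id INTEGER PRIMARY KEY, pages INTEGER)"))
    conn.execute(text("INSERT INTO book VALUES (1, 100)"))

# Within one thread the registry hands back the same session object,
# which is why an inner helper that closes it disrupts the caller.
same = Session() is Session()  # True

def calculate(book_id):
    # Inner helper: reads via the shared session, never closes/removes it.
    session = Session()
    row = session.execute(
        text("SELECT pages FROM book WHERE id = :id"), {"id": book_id}
    ).first()
    return row[0] if row else None

def generate():
    # Outermost unit of work: commit/rollback and remove happen only here.
    session = Session()
    try:
        pages = calculate(1)
        session.execute(
            text("UPDATE book SET pages = :p WHERE id = 1"), {"p": pages + 1}
        )
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()  # closes the session and clears the registry

generate()
with engine.connect() as conn:
    pages_now = conn.execute(text("SELECT pages FROM book WHERE id = 1")).scalar()
    # pages_now == 101: the update survived, because only generate() ended the session
```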

How does a session work with SQL Alchemy with pool?

engine = db.create_engine(self.url, convert_unicode=True, pool_size=5, pool_recycle=1800, max_overflow=10)
connection = self.engine.connect()
Session = scoped_session(sessionmaker(bind=self.engine, autocommit=False, autoflush=True))
I initialize my session like this. And
def first():
    with Session() as session:
        second(session)

def second(session):
    session.add(obj)
    third(session)

def third(session):
    session.execute(query)
I use my session like this.
I think one pool connection is assigned to each session, so I thought the code above would work even with pool_size=1, max_overflow=0. But when I set it up like that, it gets stuck and raises an exception like:
descriptor '__init__' requires a 'super' object but received a 'tuple'
Why is that? Is the pool assigned more than one connection per session, rather than one each?
Also, when using the session with a with block, can I skip commit and rollback when an exception occurs?

sqlalchemy session.close() not closing connection

How is the session able to output user objects even after session.close() is executed?
__init__.py
def get_db(request):
    maker = request.registry.dbmaker
    session = maker()

    def cleanup(request):
        if request.exception is not None:
            session.rollback()
        else:
            session.commit()
        session.close()

    request.add_finished_callback(cleanup)
    return session
view.py
@view_config(route_name='get_path', renderer='json')
def get_path(request):
    session = request.db
    session.query(User).all()  # outputs user objects
    session.close()
    session.query(User).all()  # still outputs user objects
I even tried session.bind.dispose() (which should dispose of all connections).
When you run session.close() it releases all the resources it might have used, i.e. transactional/connection resources. It does not physically close the session in the sense that no further queries can be run after you have called it.
From sqlalchemy docs:
Closing
The close() method issues an expunge_all(), and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.
So it would still run the query after you called session.close(). Any uncommitted transactions, as stated above, will be rolled back.
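A small sketch of that behaviour, assuming SQLAlchemy 1.4+ and an in-memory SQLite database (the users table and its rows are made up): close() rolls back the pending work and releases the connection, but the very next statement simply begins a new transaction on a freshly checked-out connection.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (name TEXT)"))
    conn.execute(text("INSERT INTO users VALUES ('alice')"))

session = Session()
session.execute(text("INSERT INTO users VALUES ('bob')"))  # not committed
session.close()  # releases the connection; the pending INSERT is rolled back

# The session is still usable after close(): this query starts a new
# transaction rather than raising an error.
names = [row[0] for row in session.execute(text("SELECT name FROM users"))]
# names == ['alice']: 'bob' was rolled back, but the query still runs
```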

Is safe to pass session as parameter into inner function to avoid zombie sessions and pool overflow?

I am fairly new to SQLAlchemy and I wonder what the right style is for writing code with sessions and for splitting SQLAlchemy queries across a couple of functions, while avoiding zombie sessions in case of any exception (to avoid overflowing the pool and making the server unresponsive).
So, my question is: is it OK to create the session in one function and pass it into another as a parameter, calling flush in the inner function and commit (with finally) in the outer one? Is this safe, or is there a better way?
For example
class Fetcher(object):
    def main(self, name):
        try:
            session = Session()
            user = session.query(UserModel).filter(UserModel.name.like(name)).first()
            if user and user.active:
                relatives = self._fetch_relatives(session, user.id)
                user.active = utc_time()
            session.commit()
        except Exception as e:
            print(e)
            session.rollback()
        finally:
            session.close()

    def _fetch_relatives(self, session, id):
        relatives = []
        try:
            for r in session.query(RelativesModel).filter(RelativesModel.relative_id == id).all():
                relatives.append({'name': r.name, 'age': r.age})
                r.readed = utc_time()
            session.flush()
        except Exception as e:
            print(e)
            session.rollback()
        finally:
            session.close()
        return relatives
The best approach is to have just one outermost transactional scope for an entire operation. Where you demarcate this scope is often something that depends on how the application works, and there are some thoughts on this here.
For the example given, having just one outermost scope would probably look like the following. Seeing that your object is called a "fetcher", I'd assume a typical use case in your application has to fetch more than one thing, so it's best to keep the scope of transactions and sessions outside the scope of the objects that work with specific parts of the database:
class Fetcher(object):
    def main(self, session, name):
        user = session.query(UserModel).filter(UserModel.name.like(name)).first()
        if user and user.active:
            relatives = self._fetch_relatives(session, user.id)
            user.active = utc_time()

    def _fetch_relatives(self, session, id):
        relatives = []
        for r in session.query(RelativesModel).filter(RelativesModel.relative_id == id).all():
            relatives.append({'name': r.name, 'age': r.age})
            r.readed = utc_time()
        session.flush()
        return relatives

def run_my_program():
    session = Session()
    try:
        f1 = Fetcher()
        f1.main(session, "somename")
        # work with other fetchers, etc.
        session.commit()
    except Exception as e:
        session.rollback()
    finally:
        session.close()

should SQLAlchemy session.begin_nested() be committed with transaction.commit() when using pyramid_tm?

I'm developing a Pyramid application and am currently in the process of moving from SQLite to PostgreSQL. I've found that PostgreSQL's more restrictive transaction management is giving me a bad time.
I am using pyramid_tm because I find it convenient. Most of my problems occur during my async calls. What I have is views that serve up dynamic forms. The idea is: if we got an id that corresponds to a database row, we edit the existing row; otherwise, we add a new one.
@view_config(route_name='contact_person_form',
             renderer='../templates/ajax/contact_person_form.pt',
             permission='view',
             request_method='POST')
def contact_person_form(request):
    try:
        contact_person_id = request.params['contact_person']
        DBSession.begin_nested()
        contact_person = DBSession.query(ContactPerson).filter(ContactPerson.id == contact_person_id).one()
        transaction.commit()
    except (NoResultFound, DataError):
        DBSession.rollback()
        contact_person = ContactPerson(name='', email='', phone='')
    return dict(contact_person=contact_person)
I need to begin a nested transaction because otherwise my lazy request method, which is registered with config.add_request_method(get_user, 'user', reify=True) and called when rendering my view,

def get_user(request):
    userid = unauthenticated_userid(request)
    if userid is not None:
        user = DBSession.query(Employee).filter(Employee.id == userid).first()
        return user

complains that the transaction has been interrupted and the SELECT on employees will be skipped.
I have two questions:
Is it okay to do transaction.commit() on a session.begin_nested() nested transaction? I don't know exactly where SQLAlchemy ends and pyramid_tm begins. If I try to commit the session I get an exception that says I can only commit using the transaction manager. On the other hand, DBSession.rollback() works fine.
Does handling this like
try:
    # do something with the db
except:
    # oops, let's do something else
make sense? I have the feeling this is 'pythonic', but I'm not sure if this underlying transaction thing calls for non-pythonic means.
Calling transaction.commit() in your code is committing the session and causing your contact_person object to be expired when you try to use it later after the commit. Similarly if your user object is touched on both sides of the commit you'll have problems.
As you said, if there is an exception (NoResultFound) then your session is now invalidated. What you're looking for is a savepoint, which transaction supports, but not directly through begin_nested. Rather, you can use transaction.savepoint() combined with DBSession.flush() to handle errors.
The logic here is that flush executes the SQL on the database, raising any errors and allowing you to rollback the savepoint. After the rollback, the session is recovered and you can go on your merry way. Nothing has been committed yet, leaving that job for pyramid_tm at the end of the request.
try:
    sp = transaction.savepoint()
    contact_person = DBSession.query(ContactPerson)\
        .filter(ContactPerson.id == contact_person_id)\
        .one()
    DBSession.flush()
except (NoResultFound, DataError):
    sp.rollback()
    contact_person = ContactPerson(name='', email='', phone='')
return dict(contact_person=contact_person)
