I need to create a new database connection (session) to avoid an unexpected commit from a MySQL procedure inside my Django transaction. How do I set this up in Django?
I have tried duplicating the database configuration in the settings file. It worked for me, but it does not seem like a good solution. See my code for more detail.
@classmethod
def get_sequence_no(cls, name='', enable_transaction=False):
    """
    Return the sequence no for the given key name.
    """
    if enable_transaction:
        valobj = cls.objects.using('sequence_no').raw("call sp_get_next_id('%s')" % name)
    else:
        valobj = cls.objects.raw("call sp_get_next_id('%s')" % name)
    return valobj[0].current_val
Does anyone know how to use a custom database connection to call the procedure?
If you have a look at the django.db module, you can see that django.db.connection is a proxy for django.db.connections[DEFAULT_DB_ALIAS] and django.db.connections is an instance of django.db.utils.ConnectionHandler.
Putting this together, you should be able to get a new connection like this:
from django.db import connections
from django.db.utils import DEFAULT_DB_ALIAS, load_backend

def create_connection(alias=DEFAULT_DB_ALIAS):
    connections.ensure_defaults(alias)
    connections.prepare_test_settings(alias)
    db = connections.databases[alias]
    backend = load_backend(db['ENGINE'])
    return backend.DatabaseWrapper(db, alias)
Note that this function will open a new connection every time it is called and you are responsible for closing it. Also, the APIs it uses are probably considered internal and might change without notice.
To close the connection, it should be enough to call .close() on the object returned by the create_connection function:
conn = create_connection()
# do some stuff
conn.close()
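If the goal is to run the stored procedure from the question on that separate connection, a minimal sketch could look like this (my sketch, not tested against the original schema; it assumes the create_connection helper above, and it binds name as a query parameter instead of interpolating it into the SQL string, which also avoids SQL injection):

from contextlib import closing

def get_sequence_no(name):
    # A dedicated connection, so the procedure's implicit commit
    # cannot affect the transaction on the default connection.
    with closing(create_connection()) as conn:
        with conn.cursor() as cursor:
            # Parameter binding instead of string interpolation.
            cursor.execute("call sp_get_next_id(%s)", [name])
            row = cursor.fetchone()
    return row[0]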
With modern Django you can just do:
from django.db import connections
new_connection = connections.create_connection('default')
Related
I'm developing a webapp using Flask-SQLAlchemy and a Postgres DB. I have a dropdown list in my webpage which is populated from a select to the DB, and after selecting different values a couple of times I get a sqlalchemy.exc.TimeoutError.
My package versions are:
Flask-SQLAlchemy==2.5.1
psycopg2-binary==2.8.6
SQLAlchemy==1.4.15
My parameters for the DB connection are set as:
app.config['SQLALCHEMY_POOL_SIZE'] = 20
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 5
app.config['SQLALCHEMY_POOL_RECYCLE'] = 10
The error I'm getting is:
sqlalchemy.exc.TimeoutError: QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)
After changing the value of 'SQLALCHEMY_MAX_OVERFLOW' from 20 to 100, I get the following error after some value changes on the dropdown list:
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL: sorry, too many clients already
Every time a new value is selected from the dropdown list, four queries are sent to the database; their results populate four corresponding tables in my HTML.
I have a db.session.commit() statement after every single query to the DB, but even so, I get this error after a few value changes to my dropdown list.
I know that I should be looking to correctly manage my connection sessions, but I'm struggling with this. I thought about setting the pool timeout to 5s instead of the default 30s, in hopes that sessions would be closed and returned to the pool faster, but it didn't seem to help.
As a suggestion from @snakecharmerb, I checked the output of:
select * from pg_stat_activity;
I ran the webapp for 10 different values before it showed me an error, which means all of the 20+20 sessions were used and left in an 'idle in transaction' state.
Does anybody have any suggestions on what I should change or look for?
I found a solution to the issue I was facing in another post from Stack Overflow.
When you assign your Flask app to your db variable, on top of indicating which Flask app it should use, you can also pass session options, as below:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app, session_options={'autocommit': True})
The usage of 'autocommit' solved my issue.
Now, as suggested, I'm using:
app.config['SQLALCHEMY_POOL_SIZE'] = 1
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 0
Now everything is working as it should.
The original post which helped me is: Autocommit in Flask-SQLAlchemy
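For what it's worth, a non-autocommit alternative would have been to end each unit of work explicitly. This is a sketch on my part, not code from the original post (fetch_dropdown_rows is a made-up name, and db is the SQLAlchemy object from above):

from sqlalchemy import text

def fetch_dropdown_rows(sql):
    try:
        rows = db.session.execute(text(sql)).fetchall()
        db.session.commit()    # end the transaction instead of leaving it idle
        return rows
    except Exception:
        db.session.rollback()  # recover the session on error
        raise
    finally:
        db.session.close()     # return the connection to the pool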
@snakecharmerb, @jorzel, @J_H -> Thanks for the help!
You are leaking connections.
A little counterintuitively, you may find you obtain better results with a lower pool limit: a given Python thread only needs a single pooled connection for the simple single-database queries you're doing. Setting the limit to 1, with 0 overflow, will cause you to notice a leaked connection earlier, which makes it easier to pin the blame on the source code that leaked it. As it stands, you have lots of code, and the error is deferred until after many queries have been issued, making it harder to reason about system behavior.
I will assume you're using SQLAlchemy 1.4.29.
To avoid leaking, try using this:
from contextlib import closing
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine(some_url, future=True, pool_size=1, max_overflow=0)
get_session = scoped_session(sessionmaker(bind=engine))
...

with closing(get_session()) as session:
    try:
        sql = """yada yada"""
        rows = session.execute(text(sql)).fetchall()
        session.commit()
        ...
        # Do stuff with result rows.
        ...
    except Exception:
        session.rollback()
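On SQLAlchemy 1.4+ the same commit-or-rollback logic can also be written with context managers; Session.begin() commits on success and rolls back on any exception (a sketch under that assumption, reusing the engine and text import above):

from sqlalchemy.orm import Session

with Session(engine) as session:
    with session.begin():
        rows = session.execute(text("yada yada")).fetchall()
# The session is closed and its connection returned to the pool here.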
I am using Flask-RESTful.
So when I got this error -> QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)
I found out in the logs that my checked-out connections were not closing. I discovered this using logger.info(db_session.get_bind().pool.status()).
def custom_decorator(error_message, db_session):
    def api_decorator(func):
        def api_request(self, *args, **kwargs):
            try:
                response = func(self, *args, **kwargs)  # forward the arguments
                db_session.commit()
                return response
            except Exception as err:
                db_session.rollback()
                logger.error(error_message.format(err))
                return error_response(
                    message="Internal Server Error",
                    status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
                )
            finally:
                db_session.close()  # always return the connection to the pool
        return api_request
    return api_decorator
So I had to create this decorator, which handles the db_session closing automatically. Using it, I am no longer seeing any leftover checked-out connections.
You can use the decorator on your functions as follows:
@custom_decorator("blah", db_session)
def example():
    "some code"
I have the following setup, in which session.query() in SQLAlchemy returns stale data:
Web application running on Flask with Gunicorn + supervisor.
One of the services is composed in this way:
app.py:
@app.route('/api/generatepoinvoice', methods=["POST"])
@auth.login_required
def generate_po_invoice():
    try:
        po_id = request.json['po_id']
        email = request.json['email']
        return jsonify(response=POInvoiceGenerator.get_invoice(po_id, email))
    except Exception as ex:
        app.logger.error("generate_po_invoice(): " + ex.message)
In another folder I have the database-related stuff:
DatabaseModels (folder)
|-->Model.py
|-->Connection.py
This is what is contained in the Connection.py file:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine(DB_BASE_URI, isolation_level="READ COMMITTED")
Session = scoped_session(sessionmaker(bind=engine))
session = Session()
Base = declarative_base()
And this is an extract of the Model.py file:
from DatabaseModels.Connection import Base
from sqlalchemy import Column, String, etc...

class Po(Base):
    __tablename__ = 'PLC_PO'
    id = Column("POId", Integer, primary_key=True)
    code = Column("POCode", String(50))
    etc...
Then I have another file, POInvoiceGenerator.py, which contains the call to the database for fetching some data:
import DatabaseModels.Connection as connection
import DatabaseModels.Model as model

def get_invoice(po_code, email):
    try:
        po_code = po_code.strip()
        connection.session.expire_all()
        po = connection.session.query(model.Po).filter(model.Po.code == po_code).first()
    except Exception as ex:
        logger.error("get_invoice(): " + ex.message)
In subsequent user calls to this service, I sometimes start to get errors like "could not find data in the db for that specific code", as if the data were stale.
My first approach was to add isolation_level="READ COMMITTED" to the engine declaration and then to create a scoped session, but the stale reads kept happening.
Does anyone have any idea whether my setup is wrong (the session and the model are reused among multiple methods and files)?
Thanks in advance.
Even if the solution pointed out by @TonyMountax seems valid and made me discover something I didn't know about SQLAlchemy, in the end I opted for something different.
I figured out that the connection established by SQLAlchemy was long-lived, since it was taken from a pool of connections every time; somehow this was causing the data to be stale.
I added a NullPool to my code:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.pool import NullPool
engine = create_engine(DB_URI, isolation_level="READ COMMITTED", poolclass=NullPool)
Session = scoped_session(sessionmaker(bind=engine))
session = Session()
and then I'm calling session.close() after every query that I make:
session.query("some query..")
session.close()
This will cause SQLAlchemy to create a new connection every time and fetch fresh data from the db.
I hope this is the correct way to use it and that it might be useful to someone else.
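A more conventional fix (my suggestion, not part of the original setup) would be to keep the pool but remove the scoped session at the end of each request, so the next request starts a fresh transaction and sees freshly committed data:

from DatabaseModels.Connection import Session  # the scoped_session defined above

@app.teardown_appcontext
def remove_session(exc=None):
    # Closes the request's session and returns its connection to the pool;
    # the next request gets a new transaction and therefore fresh data.
    Session.remove()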
The way you instantiate your database connections means that they are reused for the next request, and they have some state left from the previous request. SQLAlchemy uses a concept of sessions to interact with the database, so that your data does not abruptly change in a single request even if you happen to perform the same query twice. This makes sense when you are using the ORM query features. For instance, if you were to query len(User.friendlist) twice during the same session, but a friend request was accepted during the request, then it will still show the same number in both locations.
To fix this, you must set up the session on first request, then you must tear it down when the request is finished. To do so is not trivial, but there is a well-established project that does it already: Flask-SQLAlchemy. It's from Pallets, the people behind Flask itself and Jinja2.
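A minimal sketch of what that looks like with Flask-SQLAlchemy, reusing the Po model and DB_BASE_URI from the question:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_BASE_URI
db = SQLAlchemy(app)  # sessions are created per request and removed on teardown

class Po(db.Model):
    __tablename__ = 'PLC_PO'
    id = db.Column('POId', db.Integer, primary_key=True)
    code = db.Column('POCode', db.String(50))

# Inside a view, db.session is scoped to the current request:
# po = db.session.query(Po).filter(Po.code == po_code).first()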
Is it possible to access an in-memory SQLite database from different threads?
In the following sample code I create a SQLite database in memory and create a table. When I then switch to a different execution context, which I think I have to do when I move to a different thread, the created table isn't there anymore. If I opened a file-based SQLite database instead, the table would be there.
Can I achieve the same behavior for an in-memory database?
from peewee import *

db = SqliteDatabase(':memory:')

class BaseModel(Model):
    class Meta:
        database = db

class Names(BaseModel):
    name = CharField(unique=True)

print(Names.table_exists())  # this returns False
Names.create_table()
print(Names.table_exists())  # this returns True

print(id(db.get_conn()))  # Our main thread's connection.
with db.execution_context():
    print(Names.table_exists())  # in another context: False for :memory:, True for a file *.db
    print(id(db.get_conn()))  # A separate connection.
print(id(db.get_conn()))  # Back to the original connection.
Working!!
cacheDB = SqliteDatabase('file:cachedb?mode=memory&cache=shared')
Links:
http://charlesleifer.com/blog/managing-database-connections-with-peewee/
https://groups.google.com/forum/#!topic/peewee-orm/78invrt3xyo
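To make the cross-thread behavior concrete, here is a small sketch of my own (assuming a recent peewee, where execution_context() was replaced by connection_context(), and adding uri=True so sqlite3 parses the file: name as a URI; I'm assuming peewee passes extra keyword arguments through to sqlite3.connect()):

import threading
from peewee import SqliteDatabase, Model, CharField

db = SqliteDatabase('file:cachedb?mode=memory&cache=shared', uri=True)

class Names(Model):
    name = CharField(unique=True)
    class Meta:
        database = db

db.connect()  # keep one connection open so the shared in-memory db persists
Names.create_table()

def worker():
    # A separate thread gets its own connection, but with shared cache
    # the table created above should be visible here too.
    with db.connection_context():
        print(Names.table_exists())  # expected: True

t = threading.Thread(target=worker)
t.start()
t.join()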
DBSession = sessionmaker(bind=self.engine)

def add_person(name):
    s = DBSession()
    s.add(Person(name=name))
    s.commit()
Every time I run add_person(), another connection is created to my PostgreSQL DB.
Looking at:
SELECT count(*) FROM pg_stat_activity;
I see the count going up until I get a "Remaining connection slots are reserved for non-replication superuser connections" error.
How do I kill those connections? Am I wrong to open a new session every time I want to add a Person record?
In general, you should keep your Session object (here DBSession) separate from any functions that make changes to the database. So in your case you might try something like this instead:
DBSession = sessionmaker(bind=self.engine)
session = DBSession()  # create your session outside of functions that will modify the database

def add_person(name):
    session.add(Person(name=name))
    session.commit()
Now you will not get new connections every time you add a person to the database.
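Alternatively (a sketch, assuming SQLAlchemy 1.4+), you can keep a short-lived session per call as long as it is reliably closed, since a sessionmaker's Session works as a context manager that closes itself on exit:

def add_person(name):
    with DBSession() as s:
        s.add(Person(name=name))
        s.commit()  # the connection goes back to the pool when the block exits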
I'm developing a Pyramid application and am currently in the process of moving from SQLite to PostgreSQL. I've found that PostgreSQL's more restrictive transaction management is giving me a bad time.
I am using pyramid_tm because I find it convenient. Most of my problems occur during my async calls. What I have is views that serve up dynamic forms. The idea is: if we got an id that corresponds to a database row, we edit the existing row; otherwise, we add a new one.
@view_config(route_name='contact_person_form',
             renderer='../templates/ajax/contact_person_form.pt',
             permission='view',
             request_method='POST')
def contact_person_form(request):
    try:
        contact_person_id = request.params['contact_person']
        DBSession.begin_nested()
        contact_person = DBSession.query(ContactPerson).filter(ContactPerson.id == contact_person_id).one()
        transaction.commit()
    except (NoResultFound, DataError):
        DBSession.rollback()
        contact_person = ContactPerson(name='', email='', phone='')
    return dict(contact_person=contact_person)
I need to begin a nested transaction because otherwise my lazy request method, which is registered with config.add_request_method(get_user, 'user', reify=True) and is called when rendering my view,
def get_user(request):
    userid = unauthenticated_userid(request)
    if userid is not None:
        user = DBSession.query(Employee).filter(Employee.id == userid).first()
        return user
complains that the transaction has been interrupted and the SELECT on employees will be skipped.
I have two questions:
Is it okay to do transaction.commit() on a session.begin_nested() nested transaction? I don't know exactly where SQLAlchemy ends and pyramid_tm begins. If I try to commit the session, I get an exception that says I can only commit using the transaction manager. On the other hand, DBSession.rollback() works fine.
Does handling this like
try:
    # do something with db
except:
    # oops, let's do something else
make sense? I have the feeling this is 'pythonic', but I'm not sure whether this underlying transaction thing calls for non-Pythonic means.
Calling transaction.commit() in your code is committing the session and causing your contact_person object to be expired when you try to use it later after the commit. Similarly if your user object is touched on both sides of the commit you'll have problems.
As you said, if there is an exception (NoResultFound) then your session is now invalidated. What you're looking for is a savepoint, which transaction supports, but not directly through begin_nested. Rather, you can use transaction.savepoint() combined with DBSession.flush() to handle errors.
The logic here is that flush executes the SQL on the database, raising any errors and allowing you to rollback the savepoint. After the rollback, the session is recovered and you can go on your merry way. Nothing has been committed yet, leaving that job for pyramid_tm at the end of the request.
try:
    sp = transaction.savepoint()
    contact_person = DBSession.query(ContactPerson)\
        .filter(ContactPerson.id == contact_person_id)\
        .one()
    DBSession.flush()
except (NoResultFound, DataError):
    sp.rollback()
    contact_person = ContactPerson(name='', email='', phone='')
return dict(contact_person=contact_person)