DBSession = sessionmaker(bind=self.engine)

def add_person(name):
    s = DBSession()
    s.add(Person(name=name))
    s.commit()
Every time I run add_person(), another connection is created to my PostgreSQL database.
Looking at:
SELECT count(*) FROM pg_stat_activity;
I see the count going up until I get a "remaining connection slots are reserved for non-replication superuser connections" error.
How do I kill those connections? Am I wrong to open a new session every time I want to add a Person record?
In general, you should keep your Session object (here DBSession) separate from any functions that make changes to the database. So in your case you might try something like this instead:
DBSession = sessionmaker(bind=self.engine)
session = DBSession() # create your session outside of functions that will modify the database

def add_person(name):
    session.add(Person(name=name))
    session.commit()
Now you will not get new connections every time you add a person to the database.
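If you do want a short-lived session per operation instead, the key is to close each session so its connection is returned to the pool rather than leaked. A minimal sketch, assuming the engine and Person model from the question:

from sqlalchemy.orm import sessionmaker

DBSession = sessionmaker(bind=engine)

def add_person(name):
    session = DBSession()
    try:
        session.add(Person(name=name))
        session.commit()
    finally:
        session.close()  # returns the connection to the pool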
engine = db.create_engine(self.url, convert_unicode=True, pool_size=5, pool_recycle=1800, max_overflow=10)
connection = self.engine.connect()
Session = scoped_session(sessionmaker(bind=self.engine, autocommit=False, autoflush=True))
I initialize my session like this. And:
def first():
    with Session() as session:
        second(session)

def second(session):
    session.add(obj)
    third(session)

def third(session):
    session.execute(query)
I use my session like this.
I thought the pool assigns one connection per session, so I expected the code above to work even with pool_size=1, max_overflow=0. But when I configure it that way, it gets stuck and raises an exception like:
descriptor '__init__' requires a 'super' object but received a 'tuple'
Why is that? Does a session check out more than one connection from the pool, rather than just one?
Also, when using the session in a with block, can I skip calling commit and rollback manually when an exception occurs?
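For reference, in SQLAlchemy 1.4+ a plain with Session() as session: block only closes the session on exit (rolling back anything uncommitted); it does not commit for you. If you want commit-on-success and rollback-on-exception handled automatically, sessionmaker offers a begin() context manager. A minimal sketch (engine and obj stand in for your own objects):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)

# begin() opens a session, commits if the block finishes normally,
# rolls back if it raises, and closes the session either way
with Session.begin() as session:
    session.add(obj)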
My application does not update the database - all queries are SELECT statements. I'm struggling with how best to handle direct changes to the database (i.e. opening MySQL Workbench and changing data there). Without a session.commit(), my Flask application returns stale data.
My current solution is to have a session.commit() as the first line of each Flask endpoint, but I feel this is the incorrect way of handling it.
Session creation at start of app:
engine = db.create_engine('mysql+pymysql://...')
connection = engine.connect()
metadata = db.MetaData()
Base = declarative_base()
Session = sessionmaker(autoflush=True)
Session.configure(bind=engine)
session = Session()
Call session.expire_all() to mark all session data as expired. Then, when you try to access something, it will be fetched from the database.
session.expire(obj) does the same, but for a single object only.
session.refresh(some_object) expires and reloads all of the object's data.
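For example, a minimal sketch (assuming app is the Flask instance and session is the one created above) that expires cached state before every request, so each one re-reads the database:

@app.before_request
def expire_session():
    # force the next query/attribute access to hit the database
    session.expire_all()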
A nice article about this can be found here: https://www.michaelcho.me/article/sqlalchemy-commit-flush-expire-refresh-merge-whats-the-difference
I have code that runs queries from a query list. These queries are long and take quite a long time to execute. Since I am executing them in a loop, the session seems to expire and I get an error telling me that the connection to the server was lost.
I then tried creating the session as well as the engine inside the loop (closing the session and disposing of the engine at the end of each iteration), but I understand that creating a new connection is an expensive operation.
How can I re-use the connection in this case, so that I do not have to create the session and engine each time?
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# an Engine, which the Session will use for connection
# resources
some_engine = create_engine('mysql://user:password@localhost/')

# create a configured "Session" class
Session = sessionmaker(bind=some_engine)

# create a Session
session = Session()

for long_query in long_query_list:
    # work with the session
    session.execute(long_query)
    session.commit()
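If the connection still drops during the long-running queries, you can additionally let the pool verify connections before use. A minimal sketch, assuming SQLAlchemy 1.2+ (which introduced pool_pre_ping):

some_engine = create_engine(
    'mysql://user:password@localhost/',
    pool_pre_ping=True,  # ping each connection before handing it out
    pool_recycle=3600,   # recycle connections older than an hour
)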
I have a class that creates a MongoClient inside:
db = MongoDB ('mydb' , 'config')
I am successfully able to connect to the 'mydb' database and the 'config' collection - but after querying the collection I no longer need this connection to the database. I then proceed to create a connection to another database and collection:
db = MongoDB ('mapping' , 'box_details')
In such a case, how can I close the previous connection to the database? Or will it be closed automatically when the app exits?
I'd recommend opening the connection using pymongo.MongoClient, which will return a mongo_client object. mongo_client has an instance method close that allows you to close the connection manually.
Please see the documentation for mongo_client.
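A minimal sketch (the URI and database/collection names are placeholders for your own):

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')
config = client['mydb']['config']
# ... query the collection ...
client.close()  # explicitly release the connection

# MongoClient also works as a context manager and closes itself on exit:
with MongoClient('mongodb://localhost:27017/') as client:
    box_details = client['mapping']['box_details']
    # ... query the collection ...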
I tried to totally separate Flask and SQLAlchemy using this method, but Flask still seems to be able to detect my database and start a new transaction at the beginning of each request.
The db.py file creates a new session and defines a simple model of a table:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String
engine = create_engine("mysql://web:kingtezdu@localhost/web_unique")
print("creating new session")
db_session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
# define model of 'persons' table
class Person(Base):
    __tablename__ = "persons"
    name = Column(String(30), primary_key=True)

    def __repr__(self):
        return "Person(\"{0.name}\")".format(self)
# create table
Base.metadata.create_all(bind=engine)
And app.py, a simple Flask application using the SQLAlchemy session and model:
from flask import Flask, escape

app = Flask(__name__)

# importing new session
from db import db_session, Person

# registering for app teardown to remove session
@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()

@app.route("/query")
def query():
    # query all persons in the database
    all_persons = Person.query.all()
    print all_persons
    return "" # we use the console output

if __name__ == "__main__":
    app.run(debug=True)
Let's run this:
$ python app.py
creating new session
* Running on http://127.0.0.1:5000/
* Restarting with reloader
creating new session
Weirdly enough, it runs db.py twice (the reloader starts a child process), but we'll just ignore this. Let's access the webpage /query:
[]
127.0.0.1 - - [23/Dec/2015 18:20:14] "GET /query HTTP/1.1" 200 -
We can see that our request was answered, though we only use the console output. There is no Person in the database yet, so let's add one:
mysql> INSERT INTO persons (name) VALUES ("Marie");
Query OK, 1 row affected (0.11 sec)
Marie is part of the database now so we reload the webpage:
[Person("Marie")]
127.0.0.1 - - [23/Dec/2015 18:24:48] "GET /query HTTP/1.1" 200 -
As you can see, the session already knows about Marie. Flask didn't create a new session, which means a new transaction was started. Contrast this with the plain Python example below to see the difference.
My question is: how is Flask able to start a new transaction at the beginning of each request? Flask shouldn't know about the database, but it seems to be able to change something about its behaviour.
In case you don't know what a SQLAlchemy transaction is, read this paragraph extracted from Managing Transactions:
When the transactional state is completed after a rollback or commit,
the Session releases all Transaction and Connection resources, and
goes back to the “begin” state, which will again invoke new Connection
and Transaction objects as new requests to emit SQL statements are
received.
So a transaction is ended by a commit, and ending it causes a new connection to be set up, which in turn makes the session read the database again. In practice this means that you have to commit when you want to see changes made to the database:
First, in interactive Python mode:
>>> from db import db_session, Person
creating new session
>>> Person.query.all()
[]
Switch over to MySQL and insert a new Person:
mysql> INSERT INTO persons (name) VALUES ("Paul");
Query OK, 1 row affected (0.03 sec)
Finally try to load Paul into our session:
>>> Person.query.all()
[]
>>> db_session.commit()
>>> Person.query.all()
[Person("Paul")]
I think the issue here is that scoped_session somewhat hides what happens to the actual sessions in use. When your teardown handler
# registering for app teardown to remove session
@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
runs at the end of each request, it calls db_session.remove(), which disposes of the session used in that particular request along with any transaction context. See http://docs.sqlalchemy.org/en/latest/orm/contextual.html for the details, particularly:
The scoped_session.remove() method first calls Session.close() on the
current Session, which has the effect of releasing any
connection/transactional resources owned by the Session first, then
discarding the Session itself. “Releasing” here means that connections
are returned to their connection pool and any transactional state is
rolled back, ultimately using the rollback() method of the underlying
DBAPI connection.
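To see the same effect without Flask, you can call remove() yourself between queries. A minimal sketch reusing db.py from above (the INSERT is assumed to happen from another client in between):

from db import db_session, Person

print(Person.query.all())  # opens a transaction; results reflect its snapshot
db_session.remove()        # rolls back and discards the current session
# ... INSERT a row from the MySQL console here ...
print(Person.query.all())  # a fresh session/transaction sees the new row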