I have a web application that communicates with a database using SQLAlchemy. The connection is made using certificates and keys. The problem is that the certificates change every now and then while the application is still online (which means the SQLAlchemy session was initiated with the old certificates). Hence, when the certificates have changed, I get a psycopg2.OperationalError saying that the connection was closed unexpectedly.
To fix the problem I must restart the app, so that the session loads the new certificates when it is initiated.
My session is created once when the app starts, and then I use the same session for all database actions.
My question is how can I check if the session is still valid before using it?
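One common approach, sketched below under some assumptions (DATABASE_URI is a stand-in for the real connection string): SQLAlchemy's pool_pre_ping option tests each connection as it is checked out of the pool and transparently reconnects if it has gone stale, rather than requiring the session to be validated by hand.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Stand-in for the real connection string, which would carry the
# sslcert/sslkey parameters the application already uses.
DATABASE_URI = "postgresql+psycopg2://user@host/dbname"

# pool_pre_ping issues a lightweight ping on every checkout; dead
# connections are discarded and reopened, and the reconnect reads the
# certificate files as they exist at that moment.
engine = create_engine(DATABASE_URI, pool_pre_ping=True)
Session = sessionmaker(bind=engine)
Whether a reconnect actually picks up the rotated certificates depends on how the SSL parameters are supplied; if they are read from files at connect time, a fresh connection should see the new ones.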
Many thanks
Related
In Tiangolo's FastAPI docs, it states that you can create a persistent database connection by using a dependency:
https://fastapi.tiangolo.com/tutorial/sql-databases/#create-a-dependency
However, in the Async database docs, the database is connected to at app startup:
https://fastapi.tiangolo.com/advanced/async-sql-databases/#connect-and-disconnect
This same pattern is followed in the encode/databases docs
https://www.encode.io/databases/connections_and_transactions/
Which is the right pattern? It seems to me that with dependencies, one database connection would be created per API call, while connecting to the database during startup would establish one database connection per worker. If this is correct, connecting to the database on startup would be far superior.
What the difference between the two and is one better?
I won't go into the details of each database library. Let me just say that most modern tools use a connection pool, either explicitly or implicitly, hiding it behind abstractions such as the Session in your first link.
In all of your examples, the connection pool is created when the application starts. Creating a session is not a heavy operation that establishes a new connection; a connection is simply fetched from the pool, and when the session is closed, the connection is returned to the pool.
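As a minimal sketch of that pooled pattern with SQLAlchemy (the URL and the SessionLocal/get_db names are illustrative):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# The engine - and the connection pool it owns - is created once, at startup.
engine = create_engine("postgresql://user@host/dbname", pool_size=5)
SessionLocal = sessionmaker(bind=engine)

# FastAPI-style dependency: each request gets its own session, but the
# session only borrows an already-open connection from the shared pool.
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()  # returns the connection to the pool; does not tear it down
So the per-request dependency and the connect-on-startup pattern end up close in cost: the expensive part, opening connections, happens once either way.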
I have a Python Flask web site running on PythonAnywhere. It runs fine for some time and then I start getting "User 'felipeavl' has exceeded the 'max_user_connections' resource (current value: 3)".
I am using SQLAlchemy and setting pool_recycle as advised in PythonAnywhere forums :
engine = create_engine(SQLALCHEMY_DATABASE_URI, pool_recycle=280)
I am also closing the session in all my Flask methods, although SQLAlchemy was supposed to manage the connections, if I am not wrong:
def listarEmissores():
    session = DBSession()
    emissores = session.query(Emissor).all()
    session.close()
    return render_template('listar_emissores.html', emissores=emissores)
In my local MySQL database everything runs fine. Am I missing any other configuration?
You may have to email support at PythonAnywhere to get them to reset the connections on their end (as I had to). Otherwise, you will see I had the same issue (and my solution) at the bottom of the following page:
How do I fix this Gets server error, which is causing display issues?
Consider a web site that persists the activity of its users on disk, without requiring them to log in/authenticate.
This would enable the user to return and find all their activity intact, even if the server had been relaunched.
Because the user session
from flask import session
session['foo'] = 'bar'
is an ordinary dict, I'm assuming it gets wiped when the server is stopped and relaunched. It is hence not persistent if the user's two visits cross a server relaunch.
To do so using Flask, we'd use a database session
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
db.session.add(..)
db.session.commit()
and, since users are not logged-in, we'd distinguish between distinct users through their user sessions.
What is a unique ID that can be extracted from the user session to persist in a database session? The idea is that when a user returns, the cookie in their browser would identify them uniquely, which in turn would mean that the identifier we'd hash from the user session would remain intact.
From the terminology, I assume you are using Flask -
So, db.session does not refer to the "browsing" session, with data related to a single visitor - rather, it refers to a database connection session that is unique to that web view cycle (and not to the browsing session) - these are different things.
And as such, the code above does persist your objects in the DB in a permanent, unique way. If you can't see them when restarting the Flask application, it must be because you are using a transient database. Just adjust your configuration to use a permanent, on-disk database instead of either an in-memory SQLite instance or a database inside a Docker container that is re-created every time.
The "session" you are thinking of as the "browsing" session does not exist out of the box in Flask - you either roll your own or use one of the plug-ins such as https://pythonhosted.org/Flask-Session/ - this is what corresponds to PHP's "session".
I've built a small python REST service using Flask, with Flask-SQLAlchemy used for talking to the MySQL DB.
If I connect directly to the MySQL server everything is good, no problems at all. If I use HAproxy (which handles HA/failover, though in this dev environment there is only one DB server) then I constantly get "MySQL server has gone away" errors if the application doesn't talk to the DB frequently enough.
My HAproxy client timeout is set to 50 seconds, so what I think is happening is it cuts the stream, but the application isn't aware and tries to make use of an invalid connection.
Is there a setting I should be using when using services like HAproxy?
Also, it doesn't seem to reconnect automatically; if I issue a request manually I get "Can't reconnect until invalid transaction is rolled back", which is odd since it is just a select() call I'm making, so I don't think it's a commit() I'm missing - or should I be calling commit() after every ORM-based query?
Just to tidy up this question with an answer I'll post what I (think I) did to solve the issues.
Problem 1: HAproxy
Either increase the HAproxy client timeout value (globally, or in the frontend definition) to a value longer than what MySQL is set to reset on (see this interesting and related SF question)
Or set SQLALCHEMY_POOL_RECYCLE = 30 (30 in my case was less than the HAproxy client timeout) in Flask's app.config so that when the DB is initialised it pulls in that setting and recycles connections before HAproxy cuts them itself (see the sketch after this list). Similar to this issue on SO.
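A minimal sketch of that recycle setting, assuming Flask-SQLAlchemy's SQLALCHEMY_POOL_RECYCLE config key (the database URI is illustrative):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user@host/dbname'  # illustrative
# Recycle pooled connections after 30 seconds, i.e. before the 50-second
# HAproxy client timeout silently drops them.
app.config['SQLALCHEMY_POOL_RECYCLE'] = 30

db = SQLAlchemy(app)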
Problem 2: Can't reconnect until invalid transaction is rolled back
I believe I fixed this by tweaking the way the DB is initialised and imported across various modules. I basically now have a module that simply has:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Then in my main application factory I simply:
from common.database import db
db.init_app(app)
Also, since I wanted to load table structures automatically, I initialised the metadata binds within the app context. I think it was this that cleanly handled the commit() issue/error I was getting, as I believe the database sessions are now being correctly terminated after each request.
with app.app_context():
    # Setup DB binding
    db.metadata.bind = db.engine
I am currently working on a new web application that needs to execute an SQL statement before giving a session to the application itself.
In detail: I am running a PostgreSQL database server with multiple schemas and I need to execute a SET search_path statement before the application uses the session. I am also using the ZopeTransactionExtension to have transactions automatically handled at the request level.
To ensure the execution of the SQL statement, there seem to be two possible ways:
Executing the statement at the Engine/Connection level via SQLAlchemy events (from Multi-tenancy with SQLAlchemy)
Executing the statement at the session level (from SQLAlchemy support of Postgres Schemas)
Since I am using a scoped session and want to keep my transactions intact, I wonder which of these ways will possibly disturb transaction management.
For example, does the Engine hand out a new connection from the Pool on every query? Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
On the other hand, since I am using a scoped session, can I do it the way zzzeek suggested in the second link? That is, is the context preserved and automatically reset once the transaction is over?
Is there possibly a third way that I am missing?
For example, does the Engine hand out a new connection from the Pool on every query?
only if you have autocommit=True, which should not be the case.
Or is it attached to the session for its lifetime, i.e. until the request has been processed and the session & transaction are closed/committed?
It's attached per transaction. But the "search_path" in PostgreSQL is per PostgreSQL session (not to be confused with the SQLAlchemy session) - it basically lasts for the lifespan of the connection itself.
The Session (and the engine, and the pool) these days have a ton of event hooks you can grab onto in order to set up state like this. If you want to stick with the Session, you can try after_begin.
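A minimal sketch of that after_begin hook (my_schema is a placeholder for your schema name):
from sqlalchemy import event, text
from sqlalchemy.orm import Session

@event.listens_for(Session, "after_begin")
def set_search_path(session, transaction, connection):
    # Runs once per transaction, on the connection that transaction owns,
    # so the path is set before the application issues its queries.
    connection.execute(text("SET search_path TO my_schema"))
Listening on the Session class attaches the hook globally; it can also be attached to a specific sessionmaker or scoped session if you want to limit its scope.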