PythonAnywhere Exceeded Max User Connections in Flask app - python

I have a Python Flask website running on PythonAnywhere. It runs fine for a while, and then I start getting "User 'felipeavl' has exceeded the 'max_user_connections' resource (current value: 3)".
I am using SQLAlchemy and setting pool_recycle as advised in the PythonAnywhere forums:
engine = create_engine(SQLALCHEMY_DATABASE_URI, pool_recycle=280)
I am also closing the session in all my Flask view functions, although SQLAlchemy is supposed to manage the connections, if I am not mistaken:
def listarEmissores():
    session = DBSession()
    emissores = session.query(Emissor).all()
    session.close()
    return render_template('listar_emissores.html', emissores=emissores)
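(A try/finally variant guarantees the session is closed even if the query raises; a minimal sketch, reusing the same DBSession factory:)

def listarEmissores():
    session = DBSession()
    try:
        emissores = session.query(Emissor).all()
        return render_template('listar_emissores.html', emissores=emissores)
    finally:
        # runs even on exceptions, so the connection always returns to the pool
        session.close()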
With my local MySQL database everything runs fine. Am I missing any other configuration?

You may have to email support at PythonAnywhere to get them to reset the connections on their end (as I had to). Otherwise, you will see that I had the same issue (and my solution) at the bottom of the following page:
How do I fix this Gets server error, which is causing display issues?

Related

SQLAlchemy raises QueuePool limit of size 10 overflow 10 reached, connection timed out after some time

While using Flask-SQLAlchemy I consistently get the error 'QueuePool limit of size 10 overflow 10 reached, connection timed out' after some time. I tried increasing the connection pool size, but that only deferred the problem.
def create_app(config_name):
    app = Flask(__name__)
    app.config.from_object(config[config_name])
    config[config_name].init_app(app)
    initialize_db(app)

db = SQLAlchemy()

def initialize_db(app):
    db.init_app(app)
SQLALCHEMY_POOL_SIZE = 100
I figured out the problem. The issue was that database connections were sometimes ending up in a lost state, which exhausted the pool after some time.
To fix it, I changed the MySQL server's query timeout configuration to 1 second.
If a query doesn't respond within 1 second, an exception is thrown; I added an except block around the code where that query was invoked (in my case a GET query) and issued a rollback command there.
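Roughly, the handler looked like this (a sketch, not the exact code; Record and session are stand-ins for whatever the GET query used):

try:
    rows = session.query(Record).all()  # the query that can hit the 1-second timeout
except Exception:
    # a timed-out connection is left in an aborted state; rolling back
    # returns it to the pool in a usable state instead of leaking it
    session.rollback()
    raise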
I just had this issue.
My situation
I have not yet launched my website, so I spend almost all of my time interacting with the local version, which runs from PyCharm (i.e. on my computer). I only noticed the error on the local version of the site, and I didn't check whether it would also occur on the PythonAnywhere-hosted production version.
I am using a MySQL database.
IIRC I first noticed the problem when I started running frequent (every 10 seconds) API calls from my front-end JavaScript.
I had debug=False in my app.run(), so it wasn't being caused by debug=True, which is something suggested elsewhere.
How I fixed it
The problem went away when I got rid of threaded=True in my app.run().
So, previously it looked like this:
app.run(debug=False, threaded=True)
and I changed it to look like this:
app.run(debug=False)

Taking mongoengine.connect out of settings.py in Django

Most blog posts and examples on the web about connecting to MongoDB using MongoEngine in Python/Django suggest adding these lines to the app's settings.py file:
from mongoengine import connect
connect('project1', host='localhost')
It works fine in most cases, except one I faced recently:
When the database is down!
Say the DB goes down: the process that supervises the web server (in my case, Supervisord) will stop running the app because of the exception that connect throws. It may retry a few times, but once its timeout is reached it will stop trying.
So even the parts of your app that are not tied to the DB will break as well.
A quick solution is to wrap the connect call in a try/except block:
try:
    connect('project1', host='localhost')
except Exception as e:
    print(e)
but I am looking for a better and cleaner way to handle this.
Unfortunately this is not really possible with mongoengine unless you go with the try-except solution like you did.
You could try connecting with pure pymongo (version 3.0+) using MongoClient and registering the connection manually in the mongoengine.connection._connection_settings dictionary (quite hacky, but it should work). From the pymongo documentation:
Changed in version 3.0: MongoClient is now the one and only client class for a standalone server, mongos, or replica set. It includes the functionality that had been split into MongoReplicaSetClient: it can connect to a replica set, discover all its members, and monitor the set for stepdowns, elections, and reconfigs.
The MongoClient constructor no longer blocks while connecting to the server or servers, and it no longer raises ConnectionFailure if they are unavailable, nor ConfigurationError if the user’s credentials are wrong. Instead, the constructor returns immediately and launches the connection process on background threads.
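A rough sketch of that workaround (hacky and version-dependent: _connection_settings, _connections and _dbs are mongoengine internals and may change between releases):

import pymongo
from mongoengine import connection

# MongoClient (pymongo 3.0+) returns immediately and connects in the
# background, so this cannot crash the app at startup if MongoDB is down
client = pymongo.MongoClient('localhost', 27017)

# register the client under mongoengine's default alias by hand
alias = 'default'
connection._connection_settings[alias] = {'name': 'project1', 'host': 'localhost', 'port': 27017}
connection._connections[alias] = client
connection._dbs[alias] = client['project1']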

Sessions always empty with flask / heroku

I'm having an issue with my application on Heroku where sessions aren't persisting. Specifically, flask's SecureCookieSession object is empty, every time a request is made. Contrast this with running my application on localhost, where the contents of SecureCookieSession persist the way they should.
Also, I'm using flask-login + flask-seasurf, but I'm pretty sure the issue is happening somewhere between flask / gunicorn / heroku.
Here are three questions that describe a similar issue:
Flask sessions not persisting on heroku
Flask session not persisting
Flask-Login and Heroku issues
Except I'm not dealing with AJAX or multiple workers here (it's a single Heroku free dyno, with a single line in the Procfile). I do get the feeling that using server-side sessions with Redis, or switching from Heroku to something like EC2, might solve my problem though.
Also, here's my git repo if it helps https://gitlab.com/collectqt/quirell/tree/develop. And I'm testing session stuff with
def _before_request(self):
    LOG.debug('SESSION REQUEST ' + str(flask.session))

def _after_request(self, response):
    LOG.debug('SESSION RESPONSE ' + str(flask.session))
    return response
I got this solved with some external help, mainly by changing the secret key to a fixed random string I generated once, instead of os.urandom(24). Since os.urandom(24) produces a different key every time the process starts, each restart (and potentially each gunicorn worker) was invalidating the session cookies signed with the previous key.
Changing to server-side Redis sessions helped too, if only by making testing simpler.
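In other words (sketch; the literal key below is a placeholder, generate your own once and keep it out of version control):

# app.secret_key = os.urandom(24)  # new key on every start: invalidates old cookies
app.secret_key = 'replace-with-a-long-fixed-random-string'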
Just in case someone else comes across this question: check the APPLICATION_ROOT configuration variable. I recently deployed a Flask application to a subdirectory behind an nginx reverse proxy, and setting the APPLICATION_ROOT variable broke Flask's session: cookies weren't being set under the correct path because of it.

Flask-SQLAlchemy "MySQL server has gone away" when using HAproxy

I've built a small python REST service using Flask, with Flask-SQLAlchemy used for talking to the MySQL DB.
If I connect directly to the MySQL server, everything is good, no problems at all. If I go through HAproxy (which handles HA/failover, though in this dev environment there is only one DB server), I constantly get MySQL server has gone away errors whenever the application doesn't talk to the DB frequently enough.
My HAproxy client timeout is set to 50 seconds, so what I think is happening is that HAproxy cuts the stream, but the application isn't aware of it and tries to make use of an invalid connection.
Is there a setting I should be using when using services like HAproxy?
Also, it doesn't seem to reconnect automatically; if I then issue a request manually I get Can't reconnect until invalid transaction is rolled back, which is odd since it is just a select() call I'm making, so I don't think there is a commit() I'm missing - or should I be calling commit() after every ORM-based query?
Just to tidy up this question with an answer I'll post what I (think I) did to solve the issues.
Problem 1: HAproxy
Either increase the HAproxy client timeout value (globally, or in the frontend definition) to a value longer than what MySQL is set to reset on (see this interesting and related SF question)
Or set SQLALCHEMY_POOL_RECYCLE = 30 (in my case, 30 was less than the HAproxy client timeout) in Flask's app.config so that when the DB is initialised it pulls in that setting and recycles connections before HAproxy cuts them itself, as sketched below. Similar to this issue on SO.
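For the second option, that is a single config entry set before the DB is initialised (sketch; 30 just needs to be below the HAproxy client timeout, 50 seconds in my case):

app.config['SQLALCHEMY_POOL_RECYCLE'] = 30  # recycle before HAproxy's 50s cut-off
db.init_app(app)  # Flask-SQLAlchemy reads the setting from app.config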
Problem 2: Can't reconnect until invalid transaction is rolled back
I believe I fixed this by tweaking the way the DB is initialised and imported across various modules. I basically now have a module that simply has:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Then in my main application factory I simply:
from common.database import db
db.init_app(app)
Also, since I wanted to load table structures automatically, I initialised the metadata binds within the app context. I think it was this that cleanly handled the commit() issue/error I was getting, as the database sessions now appear to be correctly terminated after each request:
with app.app_context():
    # Set up the DB binding
    db.metadata.bind = db.engine
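If you want that cleanup to be explicit rather than implicit, a teardown hook is one way to guarantee it (a sketch, assuming the Flask-SQLAlchemy db object from above):

@app.teardown_request
def cleanup_session(exc):
    if exc is not None:
        # roll back any aborted transaction so the pooled connection
        # never gets stuck in the "invalid transaction" state
        db.session.rollback()
    db.session.remove()  # return the connection to the pool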

SQLAlchemy "too many clients already" error

I'm fairly new to Python and the Pyramid framework. Recently I was introduced to SQLSoup to take care of my database (Postgres) needs.
dbEngine1 = SqlSoup(settings['sqlalchemy.db1.url'])
users = dbEngine1.users.fetchall()
Everything works great, but after a short period of using the Pyramid app I start getting the error message below. I have to kill Pyramid to release all the idle connections in Postgres (about 50 idle connections before the exception is thrown):
sorry, too many clients already
My question is: how do I close these idle connections? I tried adding a line of code, as shown below, but it does not help.
dbEngine1 = SqlSoup(settings['sqlalchemy.db1.url'])
users = dbEngine1.users.fetchall()
dbEngine1.engine.connect().close()
Any pointer from SQLAlchemy gurus?
It seems like you create dbEngine1 on each request to your Pyramid app.
To use SqlSoup properly in a web app, you must use SQLAlchemy sessions.
See section "Accessing the Session" on this page.
how do I close this idling connection
SqlSoup, like raw SQLAlchemy, uses a connection pool; each connection in the pool sits idle until a query executes. The connection pool must be created only once.
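In practice that means building the SqlSoup instance once at module level (or at app startup) and reusing it in every view; a minimal sketch, assuming the standalone sqlsoup package and a placeholder DSN:

from sqlsoup import SqlSoup

# created once per process: one engine, one connection pool
dbEngine1 = SqlSoup('postgresql://user:password@localhost/mydb')

def list_users(request):
    # reuse the module-level instance instead of constructing a new one per request
    return dbEngine1.users.all()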
