In my program there are multiple asynchronous functions updating data in a database.
There can be cases where the functions are executed in parallel.
My questions are:
First, do I need to create a new connection each time inside a function, or will a single connection held for the whole program work fine?
Second, if I use a single connection, is it necessary to close it at the end?
Also, please recommend a good tool to access the .db file outside the code [just in case], one that won't interrupt the program's connections to the database even if I make some changes manually outside the code.
Note: I'm on Windows.
Thanks!
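To make the question concrete, here is roughly what the two options look like to me (a minimal sketch assuming sqlite3 and asyncio, since it's a .db file; update_a and update_b stand in for my real functions):

import asyncio
import sqlite3

DB_PATH = "data.db"

# Option 1: open a fresh connection inside each function
async def update_a(value):
    conn = sqlite3.connect(DB_PATH)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, a, b)")
        conn.execute("INSERT OR REPLACE INTO items (id, a) VALUES (1, ?)", (value,))
        conn.commit()
    finally:
        conn.close()

# Option 2: one shared connection for the whole program
shared_conn = sqlite3.connect(DB_PATH)

async def update_b(value):
    shared_conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, a, b)")
    shared_conn.execute("INSERT OR REPLACE INTO items (id, b) VALUES (2, ?)", (value,))
    shared_conn.commit()

async def main():
    await asyncio.gather(update_a(1), update_b(2))

asyncio.run(main())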
Recently we have been having issues with our prod servers connecting to Oracle. Intermittently we get "DatabaseError: ORA-12502: TNS:listener received no CONNECT_DATA from client". The issue is completely random, goes away by itself within a second, and it's not a Django problem: we can replicate it with SQL*Plus from the servers.
We opened a ticket with Oracle support, but in the meantime I'm wondering if it's possible to simply retry any DB-related operation when it fails.
The problem is that I can't use try/except blocks in the code to handle this, since it can happen on ANY DB interaction in the entire codebase. I have to do this at a lower level, so that I do it only once. Is there any way to install an error handler or something like that directly at the django.db.backends.oracle level, so that it covers the whole codebase? Basically, all I want to do is this:
try:
    execute_sql()
except DatabaseError as e:
    if "ORA-12502" in str(e):
        time.sleep(1)
        # re-try the same operation
        execute_sql()
    else:
        raise
Is this even possible, or am I out of luck?
Thanks!
I made a Python web app and deployed it to Heroku successfully, and it works well, to an extent.
The problem is that once in a while the worker process throws a "connection reset by peer" error, for which I have to go in and redeploy, only for it to happen again.
This affects the entire web app, since those small glitches cause the program to malfunction and produce inconsistent if not outright wrong information, so I'm trying to figure out whether an exception-handling statement in the following format would work:
def conti():
    # opens the connection to the site
    # performs the operations needed
    # closes the connection
    ...

try:
    conti()
except Exception:
    conti()
How can I make the try statement sort of recursive, so that whenever the error happens the program still continues?
Do I need to put the try statement in a recursive function to handle the error?
Thank you.
My recommendation is to consider a connection pool. If you are on Heroku and using PostgreSQL, you are probably already using psycopg2, which has a pool built in. See psycopg2 and infinite python script
This will avoid either recursion or explicit connection state/error detection in your code.
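For illustration, a minimal sketch of what a pooled conti() could look like (this assumes psycopg2 and Heroku's DATABASE_URL environment variable; the query is just a placeholder for the real operations):

import os
import psycopg2.pool

# One pool for the whole worker process; connections are reused instead of
# being opened and torn down on every call.
pool = psycopg2.pool.SimpleConnectionPool(1, 5, dsn=os.environ["DATABASE_URL"])

def conti():
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # placeholder for the real operations
        conn.commit()
    finally:
        pool.putconn(conn)  # return the connection to the pool instead of closing it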
The situation is the following:
I have an app which uses Angularjs for the front-end and Flask for the back-end.
And I have a route that looks like:
@app.route('/api/route1', methods=['POST'])
def route1():
    result = some_package.long_task()
    result2 = some_package.short_task(result)
    return jsonify(result)
The long_task function executes some bash commands using check_output, uses the database, reads and writes files, and so on. It can take a couple of hours to finish.
Let's say that the user gets tired of waiting and closes the browser window 10 minutes after the process started.
My questions are:
Will both long_task and short_task be executed anyway?
Is it possible that this situation creates any memory leakage?
Is it possible to know that the user closed the browser at some point? (including when I try to return the response: would it be possible to know that the response wasn't delivered?)
Does the answer to this question depend on which server I am using (Tornado, uWSGI, ...)?
Thanks a lot for your answers.
Will both long_task and short_task be executed anyway?
Yes, they will. Flask doesn't know whether the connection was closed by the client.
Is it possible that this situation creates any memory leakage?
Yes, but that depends only on your code in long_task and short_task. Memory leaks can occur even if the connection is never dropped. You can log the difference between allocated memory before and after each request.
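For example, a rough sketch of that kind of logging with Flask request hooks and tracemalloc (assuming the app object from your code):

import tracemalloc
from flask import g

tracemalloc.start()

@app.before_request
def remember_memory_before():
    g.mem_before = tracemalloc.get_traced_memory()[0]

@app.after_request
def log_memory_after(response):
    mem_after = tracemalloc.get_traced_memory()[0]
    app.logger.info("request allocated %d bytes", mem_after - g.mem_before)
    return response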
Is it possible to know that the user closed the browser at some point? (including when I try to return the response: would it be possible to know that the response wasn't delivered?) Does the answer to this question depend on which server I am using (Tornado, uWSGI, ...)?
Simple answer: no. But it can be done with a bit of a hack: stream an empty response while long_task is executing and catch the exception that is raised if the client closes the connection.
You can read about it here: Stop processing Flask route if request aborted
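The idea boils down to something like the following sketch (my adaptation, reusing the names from your question; whether the generator actually gets closed on disconnect depends on the WSGI server, and the disconnect is only noticed at a yield, not in the middle of long_task):

import json
from flask import Response, stream_with_context

@app.route('/api/route1_streaming', methods=['POST'])
def route1_streaming():
    def generate():
        try:
            yield ' '  # padding chunk that keeps the connection open
            result = some_package.long_task()
            result2 = some_package.short_task(result)
            yield json.dumps(result)
        except GeneratorExit:
            # Raised when the server closes the generator because the client
            # went away; at least you know the response was never delivered.
            app.logger.warning("client disconnected before the response was sent")
    return Response(stream_with_context(generate()), mimetype='application/json')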
I'm using Flask-SQLAlchemy 1.0, Flask 0.10, SQLAlchemy 0.8.2, and Python 2.7.5. I'm connecting to MySQL 5.6 with Oracle's MySQL Connector/Python 1.0.12.
When I restart my web server (either Apache2 or Flask's built-in), I receive the exception OperationalError: MySQL Connection not available after MySQL's wait_timeout expires (default 8 hours).
I've found people with similar problems and explicitly set SQLALCHEMY_POOL_RECYCLE = 7200, even though that's Flask-SQLAlchemy's default. When I put a breakpoint here, I see that the teardown function is successfully calling session.remove() after each request. Any ideas?
Update 7/21/2014:
Since this question continues to receive attention, I must add that I did try some of the proposals. Two of my attempts looked like the following:
First:
from contextlib import contextmanager

@contextmanager
def safe_commit():
    try:
        yield
        db.session.commit()
    except:
        db.session.rollback()
        raise
This allowed me to wrap my commit calls like so:
with safe_commit():
    model = Model(prop=value)
    db.session.add(model)
I am 99% certain that I did not miss any db.session.commit calls with this method and I still had problems.
Second:
from functools import wraps

def managed_session():
    def decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            try:
                response = f(*args, **kwargs)
                db.session.commit()
                return response
            except:
                db.session.rollback()
                raise
            finally:
                db.session.close()
        return decorated_function
    return decorator
To further ensure I wasn't missing any commit calls, I made a Flask wrapper that enabled code such as (if I remember correctly):
@managed_session()
def hello(self):
    model = Model(prop=value)
    db.session.add(model)
    return render_template(...
Unfortunately, neither method worked. I also recall trying to issue SELECT(1) calls in an attempt to re-establish the connection, but I don't have that code anymore.
To me, the bottom line is that MySQL/SQLAlchemy has issues. When I migrated to Postgres, I didn't have to worry about my commits. Everything just worked.
I was having this problem and it was driving me nuts. I tried playing with SQLALCHEMY_POOL_RECYCLE but this didn't seem to fix the problem.
I finally found http://docs.sqlalchemy.org/en/latest/orm/session.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it and adapted it for Flask-SQLAlchemy.
Since I started using the following pattern, I haven't seen the problem. The key seems to be always ensuring that a commit() or rollback() is executed. So if there is an if-then-else branch that doesn't flow through commit() (e.g., for a detected error), also do a commit() or rollback() before the redirect, abort, or render_template call.
class DoSomething(MethodView):
    def get(self):
        try:
            # do stuff
            db.session.commit()
            return flask.render_template('sometemplate.html')
        except:
            db.session.rollback()
            raise

app.add_url_rule('/someurl', view_func=DoSomething.as_view('dosomething'), methods=['GET'])
UPDATE 7/22/2014
I discovered that I also had to change SQLALCHEMY_POOL_RECYCLE to be less than the MySQL interactive_timeout. On the GoDaddy server, interactive_timeout was set to 60, so I set SQLALCHEMY_POOL_RECYCLE to 50. I think both the pattern I used and this timeout were necessary to make the problem go away, but at this point I'm not positive. However, I'm pretty sure that when SQLALCHEMY_POOL_RECYCLE was greater than interactive_timeout, I was still getting the operational error.
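In Flask-SQLAlchemy terms that was a single config value (50 seconds here, below the interactive_timeout of 60 mentioned above):

app.config['SQLALCHEMY_POOL_RECYCLE'] = 50  # must stay below MySQL's interactive_timeout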
I ran across the same issue recently: the first request to the MySQL database after a long period of Flask & SQLAlchemy application inactivity (at least 8 hours) results in an unhandled exception, which in turn produces a 500 Internal Server Error: Connection Unavailable. All subsequent requests are just fine.
I managed to narrow the problem down to the MySQL connection by decreasing the @@session.wait_timeout (and @@global.wait_timeout just in case) value to 5 seconds. Then every odd request was just fine, while every second request after a 5-plus-second pause failed. The conclusion was obvious: SQLAlchemy was using a connection that was still open on its side but had timed out on the database end.
Solution
In my case it turned out the solution is spelled out in the SQLAlchemy – MYSQL has gone away blog post:
The first thing to make sure of is [...] the value of pool_recycle should be less than your MYSQLs wait_timeout value.
In the MySQL documentation you can find that wait_timeout defaults to 8 hours (28,800 seconds), while the SQLAlchemy engine's pool_recycle defaults to -1, which means no connection recycling whatsoever. I simply passed a value of 21,600 (6 hours) to the create_engine function and the error is gone.
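For reference, the change was essentially a one-liner (the connection URL is a placeholder):

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+mysqlconnector://user:password@host/dbname",  # placeholder URL
    pool_recycle=21600,  # 6 hours, safely below MySQL's 8-hour wait_timeout
)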
SQLAlchemy provides two ways of handling disconnections; details are in the documentation.
Short version:
Optimistically
Use a try...except block to catch disconnection exceptions. This returns a 500 on the failing request, after which the web application continues as normal. Use this one if disconnections happen infrequently. Note: you'll need to wrap every operation that could fail in such a try...except block.
Pessimistically (the one I'm using)
Basically, perform an extra ping operation (something like SELECT 1) each time a connection is checked out from the pool. If the ping fails, raise DisconnectionError, upon which the pool will attempt to force a new connection to be created (in fact the pool will try three times before officially giving up). This way your application won't see a 500 error. The tradeoff is the extra SQL executed, although according to the docs the overhead is small.
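For completeness, this is roughly the checkout listener from the SQLAlchemy disconnect-handling documentation that implements the pessimistic approach (adapted slightly, so check the doc for the exact recipe):

from sqlalchemy import event, exc
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    # Ping the connection every time it is checked out of the pool.
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        # DisconnectionError tells the pool to discard this connection and
        # retry with a fresh one (up to three attempts before giving up).
        raise exc.DisconnectionError()
    finally:
        cursor.close()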