Web.py has its own database API, web.db. It's possible to use SQLObject instead, but I haven't been able to find documentation describing how to do this properly. I'm especially interested in managing database connections: it would be best to establish a connection at the WSGI entry point and reuse it. The web.py cookbook contains an example of how to do this with SQLAlchemy. I'd be interested to see how to properly do a similar thing using SQLObject.
This is how I currently do it:
class MyPage(object):
    def GET(self):
        ConnectToDatabase()
        ...
        return render.MyPage(...)
This is obviously inefficient, because it establishes a new database connection on each query. I'm sure there's a better way.
As far as I understand the SQLAlchemy example given, a processor is used; that is, a session is created for each connection and committed when the handler completes (or rolled back if an error has occurred).
I don't see any simple way to do what you propose, i.e. open a connection at the WSGI entry point. You will probably need a connection pool to serve multiple clients at the same time. (I have no idea what the requirements are for efficiency, code simplicity and so on, though. Please comment.)
Inserting ConnectToDatabase() calls into each handler is of course ugly. I suggest that you adapt the cookbook example, replacing the SQLAlchemy session with a SQLObject connection, along the lines of the sketch below.
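Here is a minimal sketch of that adaptation, assuming the cookbook's processor pattern (the URI is a placeholder and the processor name is my own; I haven't benchmarked this against the SQLAlchemy version):

import web
from sqlobject import connectionForURI, sqlhub

urls = ('/', 'MyPage')
app = web.application(urls, globals())

# One connection object at the WSGI entry point, reused by every request.
connection = connectionForURI('mysql://user:password@localhost/mydb')

def sqlobject_processor(handler):
    # Mirror the cookbook's SQLAlchemy processor: give the request a
    # thread-local transaction, commit on success, roll back on error.
    sqlhub.threadConnection = connection.transaction()
    try:
        result = handler()
        sqlhub.threadConnection.commit()
        return result
    except web.HTTPError:
        sqlhub.threadConnection.commit()
        raise
    except Exception:
        sqlhub.threadConnection.rollback()
        raise

app.add_processor(sqlobject_processor)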
Related
I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQL Alchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people who have been receiving these errors have huge or corrupted tables, or something to that effect. That is not the case here: the old PHP application still runs fine, the tables all have fewer than about 50,000 rows and fewer than 30 columns, and the Python application runs fine until it has sat idle for about a day.
So, here's hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can point me in the right direction.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# The first connect() call with pool_name and pool_size registers a
# connection pool named 'batman' that later calls can draw from.
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3
)

def connection():
    """Get a connection and a cursor from the pool."""
    db = mysql.connector.connect(pool_name='batman')
    return db, db.cursor()
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
Edit
I have since found a better solution (I was still occasionally running into issues with the pooled connections). There is actually a dedicated library for handling MySQL connections in Flask, and it is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flask import Flask
from flaskext.mysql import MySQL

app = Flask(__name__)
mysql = MySQL()
mysql.init_app(app)
And to get a connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
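Put together, a request handler with this setup might look like the sketch below (the route and table are hypothetical, not from my application):

@app.route('/user/<int:user_id>')
def show_user(user_id):
    # get_db() returns a connection tied to the current app context,
    # so there is no long-lived global connection left to go stale.
    cursor = mysql.get_db().cursor()
    cursor.execute("SELECT name FROM users WHERE id = %s", (user_id,))
    row = cursor.fetchone()
    cursor.close()
    return row[0] if row else ('Not found', 404)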
Background:
This is the situation I am facing and so far my current solution seems rather clunky. I want to improve on it. Right now:
I setup connections to each database in the main function of the Pyramid application:
def main(global_config, **settings):
    a_engine = engine_from_config(settings, 'A.')
    b_engine = engine_from_config(settings, 'B.')
    ASession.configure(bind=a_engine)
    BSession.configure(bind=b_engine)
"ASession" and "BSession" are simply globally defined scoped_session in /models/init.py.
ASession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
I define model base class like so. For example:
ABase = declarative_base()

class user(ABase):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
This somehow already doesn't feel very clean. But now that this model is supposed to be accessed from a different application, I also need to define the engine and connection again in that application. This feels extremely redundant.
Problem Abstracted:
Assume that there are 2 different databases:
A and B
Also assume that you want A and B to be accessible from 2 different applications (e.g.: Pyramid application, Bokeh Server App which uses Tornado) using the same model.
In short, how would one best pattern objects/models/classes/functions to produce clean non-redundant code in Python3?
Initial Thought After The Question Was Posted:
Thinking about this a bit more, I think I want each model to be somehow "self-contained". The model should bring with it methods for initiating connections. In other words, the initiation of db connections should be decoupled from the web application itself.
And it should be done in an instance kind of manner. So that multiple applications can use the same models. Each application would have its own session connection to either DB.
How would the community pattern this? Friday afternoons don't lend themselves to finding answers to these kinds of questions, for me at least.
I have done this. My recommendation below is how I like doing it, but it is not the only way. I would ditch scoped sessions and the transaction manager and make explicit session management objects, with request lifecycle callbacks handling creation, closing, committing, or rolling back your sessions. Basically, scoped sessions are a way to simulate a global by getting the same item for that thread of execution. The other way to do this in Pyramid is to attach things to the registry and the request, because you have those everywhere: you attach shared components to the registry (the ZCA) and per-request objects to the request.
When you have multiple sessions, I've found it much easier to reason about them and keep track of them if they are handled by components that wrap up everything for that engine. So for a case like the one you describe, I've made two different DB engine components that are created on startup, attached to the registry, and have a method for getting a fresh session. If you create these components properly, they should be usable in any application, whether it's Pyramid, Tornado, or your test script. You just make sure each has a constructor with some sane way of passing in settings for setting up the engine, whether it's a settings dict or kwargs.

I then make my data model(s) live in their own Python packages, and it's easy to have any app in the family import the model, instantiate the engine components, and go to town. Note that if you like using the ZCA registry (and I love it, it's a fantastic DI system), there's nothing preventing you from using it in non-Pyramid apps; you just set it up manually in your server start-up code.
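As a rough illustration of such a component (the class name and constructor are my own, not a prescribed API):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

class DBComponent(object):
    """Owns one engine and hands out fresh sessions bound to it."""

    def __init__(self, url, **engine_kwargs):
        self.engine = create_engine(url, **engine_kwargs)
        self.session_factory = sessionmaker(bind=self.engine)

    def make_session(self):
        return self.session_factory()

# In Pyramid's main(), one component per database, attached to the registry:
#     config.registry['db_a'] = DBComponent(settings['A.url'])
#     config.registry['db_b'] = DBComponent(settings['B.url'])
# A Tornado app or a test script can instantiate the same components directly.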
In Pyramid specifically, I make a custom Request class and use the reify decorator to allow other Pyramid code to get the session(s) for that request. The request class has end-of-life callbacks attached to close out the sessions and to do rollbacks or commits. There is a bit more boilerplate, but for me it's cleaner in that I can very easily trace where and when, in code and in time, my session management is happening. It also works well for testing.
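A sketch of that request class, under the same assumptions as the component above (only closing is shown; commit/rollback callbacks follow the same pattern):

from pyramid.decorator import reify
from pyramid.request import Request

class MyRequest(Request):
    @reify
    def db_a(self):
        # reify computes this once and caches it for the request.
        session = self.registry['db_a'].make_session()
        self.add_finished_callback(lambda request: session.close())
        return session

# Registered in main() with: config.set_request_factory(MyRequest)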
That said, there are lots of smart folks in SQLAlchemy/Pyramid land who swear by scoped sessions and the transaction manager, so there are other valid approaches. Hope that helps.
How can I force my engine to reconnect if a query returns an OperationalError, like "user does not have access to the database" or something like that?
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm import sessionmaker

engine = create_engine(url, pool_recycle=3600)
Session = sessionmaker(bind=engine)

try:
    sesh = Session()
    sesh.query(...)
    sesh.close()
except OperationalError:
    # force engine to reconnect here somehow?
    pass
If you catch an error that indicates the connection was closed during an operation, SQLAlchemy automatically reconnects on the next access. However, when a database disconnects, your transaction is gone, so SQLAlchemy requires that you emit rollback() on the Session in order to establish within your application that a new transaction is to take place. You then need to start your whole transaction all over again.
Dealing with that issue has a few angles. You should read through the Dealing with Disconnects section of the documentation which illustrates two ways to work with disconnects. Beyond that, if you truly wanted to pick up your transaction from where you left off, you'd need to "replay" the whole thing back, assuming you've done more than one thing in your transaction. This is best suited by application code that packages what it needs to do in a function that can be called again. Note that a future version of SQLAlchemy may introduce an extension called the Transaction Replay Extension that provides another way of doing this, however it will have lots of caveats, as replaying a lost transaction in a generic way is not a trivial affair.
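One way to package that "replay" idea is a helper that runs the whole unit of work again on a fresh session; this is a sketch of my own, not an official recipe:

from sqlalchemy.exc import OperationalError

def run_transaction(session_factory, work, retries=1):
    """Run work(session) in a transaction, replaying the whole thing
    on a fresh session if the connection is lost mid-flight."""
    for attempt in range(retries + 1):
        session = session_factory()
        try:
            result = work(session)
            session.commit()
            return result
        except OperationalError:
            session.rollback()
            if attempt == retries:
                raise
        finally:
            session.close()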
A lot has happened since this question was first answered.
Taking a pessimistic error-handling approach gives you the most bang for the buck: it is easy to implement and very effective.
Apply pool_pre_ping=True when you create the engine, like this:
engine = create_engine("mysql+pymysql://user:pw#host/db", pool_pre_ping=True)
See more: docs.sqlalchemy.org/en/latest/core/pooling.html#pool-disconnects-pessimistic
Another approach is to deal with errors optimistically, as they happen. In that case you can wrap the execute statement in a try/except and invalidate the connection if an exception is raised. Once the connection is invalidated, you can re-establish it.
See more: docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-optimistic
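A minimal sketch of that optimistic pattern (the helper name is mine; see the linked docs for the canonical version):

from sqlalchemy import create_engine
from sqlalchemy.exc import DBAPIError

engine = create_engine("mysql+pymysql://user:pw@host/db")

def execute_with_retry(statement):
    conn = engine.connect()
    try:
        return conn.execute(statement)
    except DBAPIError as e:
        if e.connection_invalidated:
            # The pool has discarded the dead connection; a new
            # connect() checks out a fresh one, so retry once.
            conn = engine.connect()
            return conn.execute(statement)
        raise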
Both approaches work well in situations where your connection would otherwise time out, e.g. overnight or over a weekend. It also makes it much easier for IT operations to take a database down without worrying too much about downstream applications relying on a restart. However, this is not a silver bullet; it's worth thinking about safe transaction handling (as mentioned by zzzeek) if you deal with very critical transactions.
If you are using SQLAlchemy in Flask, when you run a query like
MyModel.query.all()
and you get errors like
File "./lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 427, in _revalidate_connection
"Can't reconnect until invalid "
StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back
You can recover simply by closing the session and querying again:
MyModel.query.session.close()
MyModel.query.all()
I figured I would make this a different question for the sake of being tidy. It is based on:
SQLAlchemy won't update my database and SQLAlchemy session: how to keep it alive?.
So here's the deal: I have a Pyramid application that's talking to a daemon, which in turn talks to a database.
Now for some reason stuff isn't getting committed to the database when I add it to the database session variable, as in:
DBSession.add(ModelInstance)
Calling flush or commit doesn't make it commit.
This is how I make DBSession:
from sqlalchemy import engine_from_config
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

settings = {
    'sqlalchemy.url': 'blah blah'
}
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
engine = engine_from_config(settings, 'sqlalchemy.')
DBSession.configure(bind=engine)
This seems fine to me because I can query the database fine. ie: this sort of thing works:
DBSession.query(ModelClass).get(id)
This fine gentleman https://stackoverflow.com/users/100297/martijn-pieters suggested the use of the following little bit of code:
import transaction
transaction.commit()
And that worked fine for making sure that my stuff got committed. The only problem is that it somehow renders my DBSession useless: if I want to use the objects that my session is keeping track of, I need to re-instantiate the session and those items. This sucks; it takes up a whole lot of time.
My question is, in short, how can I avoid this?
And in long:
How do I get my DBSession to commit properly without breaking it?
OR
How do I fix my DBSession and associated model instances without the need for lengthly database calls?
AND
Any idea why this is happening? I have successfully constructed a DBSession in the same way inside the Pyramid app I mentioned and it worked totally fine, it committed when I wanted it to and everything.
For details of the errors I encountered, please refer to the two questions I mentioned at the beginning.
In SQLAlchemy, sessions expire the objects they are managing upon commit. This is sane because, after a commit, you have no guarantee in a concurrent world that something else isn't changing the state they are attempting to mirror in the database.
Pyramid recommends the use of a transaction manager that helps you maintain a single transaction per request. It will automatically call transaction.commit() for you after the request is complete. In this way, you don't have to think/worry about objects expiring, and transactions are properly aborted if your code raises an exception.
The way to setup the transaction manager is by installing pyramid_tm and zope.sqlalchemy and then connecting your DBSession to zope.sqlalchemy.ZopeTransactionExtension. Then things will "just work".
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

# ...

def main(global_conf, **settings):
    config = Configurator(...)
    config.include('pyramid_tm')
    # ...
If you need to populate a new object's primary key or ensure that some SQL will execute properly you can use DBSession.flush() to execute the SQL within your transaction without actually committing it. Any errors will be raised there for you to catch and deal with.
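For example (treating the ModelInstance from your snippet as a mapped class, for illustration):

item = ModelInstance(...)
DBSession.add(item)
DBSession.flush()  # emits the INSERT inside the managed transaction
# item's primary key is now populated, but nothing is committed yet;
# pyramid_tm commits (or aborts) when the request completes.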
This basic setup of your sessions is described within the tutorial in the Pyramid documentation:
http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/tutorials/wiki2/basiclayout.html
Update: I realized I kind of answered your question about using the transaction manager in Pyramid, which you are already using successfully. I think that the answer also clearly explains what's going on with the ZopeTransactionExtension, however, and you just need confirmation about committing transactions. You'd be wise to simply use one transaction in your script, which you can create via
import transaction

with transaction.manager:
    ...  # do tons of database stuff
Now if an exception happens the transaction will be aborted, and if not it will be committed.
According to SQLAlchemy's docs:

Another behavior of commit() is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a Query result set, they receive the most recent state. To disable this behavior, configure sessionmaker() with expire_on_commit=False.
What seems like a problem is actually done on purpose: when you commit, all the objects are marked as expired. This is done so that you don't keep using old cached values that may have changed in the database.
The reason it works in Pyramid is that each request has its own transaction and queries objects before working on them. You are trying to use objects from a previous transaction, which might not be a good idea, because they may not be in sync with the database.
To solve your problem, you can either make sure you don't reuse objects after the end of a transaction (maybe your transactions need to include more things), or you can use expire_on_commit=False as advised in the quoted documentation. But if you use the latter, be aware that objects may be outdated.
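A short sketch of the second option, assuming the engine and a mapped ModelClass from your existing setup:

from sqlalchemy.orm import sessionmaker

# expire_on_commit=False keeps attribute values loaded after commit,
# at the risk of reading state that has since changed in the database.
Session = sessionmaker(bind=engine, expire_on_commit=False)

session = Session()
obj = session.query(ModelClass).get(1)
session.commit()
print(obj.name)  # still readable without a round trip to the database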
If I'm reading this correctly, I think I know what is going on...
The downside of the pyramid/zope transaction managers is that they're all or nothing: due to the way they're implemented, you can't both use them and call commit(). I don't remember exactly why, but I once dug into all their code after fighting with this for hours, and there just wasn't a way to commit within the page.
For a variety of reasons I decided not to use the automatic transaction wrapping in my apps. I had a lot of scenarios where I wanted one or more commits, or needed to use savepoints (I use PostgreSQL).
I want to use MySQLdb in Pylons, but can't figure out where to actually connect. It would seem that making the connection in app_globals.py would be convenient:
import MySQLdb

class Globals(object):
    def __init__(self):
        self.db = MySQLdb.connect()
Then controllers can access the db via the globals. However, this seems to lead to problems with 'MySQL has gone away' errors, and crashes with concurrent requests, so I'm thinking that this is bad because of thread safety.
What is the best way to do this? Should each controller open and then close a MySQL connection? Thanks!
Summarizing the comments on your question: you should probably use SQLAlchemy's non-ORM features, such as the SQL Expression Language and the engine API, each of which lets you get some of the benefits of SQLAlchemy without going all the way to full ORM use.
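A hedged sketch of what that looks like (the table name is made up); the engine manages a thread-safe connection pool, so you create it once at application startup rather than one connection per controller:

from sqlalchemy import create_engine, MetaData, Table, select

# Created once at startup; pool_recycle guards against idle
# connections being dropped by the server overnight.
engine = create_engine('mysql://user:password@localhost/mydb',
                       pool_recycle=3600)
metadata = MetaData()
users = Table('users', metadata, autoload=True, autoload_with=engine)

# Each request checks a connection out of the pool and returns it.
with engine.connect() as conn:
    for row in conn.execute(select([users]).where(users.c.id == 42)):
        print(row)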