I've been working on a Flask app for a while, using SQLAlchemy to access a MySQL database. I've finally started looking into writing tests for it (I'm a strong believer in testing, but am new to Flask, SQLAlchemy, and Python for that matter, so I delayed this), and I'm having trouble getting my test structure set up.
My production database isn't using any unusual MySQL features, and in other languages/frameworks I've been able to set up a test framework on an in-memory SQLite database. (For example, I have a Perl app using DBIx::Class that runs against a SQL Server database but has a test suite built on SQLite.) However, with SQLAlchemy I've needed to declare a few MySQL-specific things in my model, and I'm not sure how to get around this. In particular, I use the TINYINT and CHAR types for a few columns, and I seem to have to import these from sqlalchemy.dialects.mysql, since they don't seem to be available as generic types in SQLAlchemy. Thus I'll have a class declaration like:
from sqlalchemy.dialects.mysql import TINYINT  # MySQL-only type

class Item(db.Model):
    # ...
    size = db.Column(TINYINT, db.ForeignKey('size.size_id'), nullable=False)
So even though TINYINT would work fine in raw SQL against either SQLite or MySQL, here it comes from the mysql dialect module.
I don't want to override my entire model class just to cover seemingly trivial things like this. Is there some other solution? I've read what I could about using different databases for testing and production, but this issue hasn't been mentioned. It would be a lot easier to use an in-memory SQLite database for testing than to keep a MySQL test database available for everything.
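For what it's worth, SQLAlchemy's with_variant() is one way to keep such a model portable: declare a generic type and substitute the dialect-specific one only when that dialect is actually in use. A minimal sketch, assuming the db object from the question above (the size_type name is mine, not from the post):

import sqlalchemy as sa
from sqlalchemy.dialects import mysql

# generic SmallInteger everywhere, MySQL's TINYINT when the engine is MySQL
size_type = sa.SmallInteger().with_variant(mysql.TINYINT(), 'mysql')

class Item(db.Model):
    size = db.Column(size_type, db.ForeignKey('size.size_id'), nullable=False)

Against SQLite the column renders as a plain SMALLINT, so an in-memory test database works without touching the rest of the model.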
Related
I was using the SQLAlchemy ORM to connect to an in-memory database when I decided to implement version tracking for the DB schema. To do this I've been following the tutorial on how to set up versioning with SQLAlchemy, but now I'm wondering: is there a way to get my upgrade and downgrade scripts to also update/create my SQLAlchemy ORM classes?
I ask this because I don't know how to write code using only SQLAlchemy Migrate, since a developer might not know of the most recent change made to the database. Currently the developer just has to look at the file containing the class that maps to a table in the DB to know what is available, but from my understanding, using Migrate would not synchronize these classes with the changes applied in an upgrade/downgrade script; that synchronization would need to be done manually. I looked at reflection, but that doesn't seem to require prior knowledge of the table's structure.
I know I must be missing something. I could keep my DB open in HeidiSQL and [ALT + TAB] over every time I want to confirm something in the DB, but this slows me down a lot when I'm used to just using autocomplete on classes as I type. (Note: I'm heavily dyslexic and prone to many spelling mistakes, which is why autocomplete drastically improves my productivity.) Is there a way for the upgrade scripts to create/update/delete the files containing the ORM classes?
e.g.:

class ExtractionEvent(Base):
    __tablename__ = 'ExtractionEvents'
    Id = Column(Integer, primary_key=True, autoincrement=True)
    # ...
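One way to get classes that track the live schema without hand-maintaining the files is runtime reflection via SQLAlchemy's automap extension. A sketch, assuming the database already carries the migrated schema (the URL is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine('sqlite:///app.db')  # placeholder; point at the migrated DB
Base = automap_base()
Base.prepare(engine, reflect=True)  # builds mapped classes from the live tables
                                    # (SQLAlchemy 2.0: Base.prepare(autoload_with=engine))

ExtractionEvent = Base.classes.ExtractionEvents  # class named after the table

Because these classes only exist at runtime, editors can't autocomplete them; a code generator such as sqlacodegen, which writes a models file you can regenerate after each upgrade, may be a better fit for that workflow.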
We have our infrastructure up in AWS, which includes a database.
Our transfer of data occurs in Python using the SQLAlchemy ORM, which we use to mimic the database schema. At this point the schema is very simple, so it's no big deal.
But if the schema changes or grows, then a manual change needs to be made in the code as well each time.
I was wondering: what is the proper way to centralize the schema of the database, so that there is one source of truth for it?
Check out the AWS Glue Schema Registry - this is pretty much what it was made for.
I am a little confused by the topic alluded to in the title.
So, when a Flask app is started, does SQLAlchemy search the SQLALCHEMY_DATABASE_URI for the correct (in my case, MySQL) database? Then, does it create the tables if they do not exist already?
What if the database configured in the SQLALCHEMY_DATABASE_URI variable in the config.py file does not exist?
What if that database exists but only a few of the tables do (i.e., there are more tables defined in the SQLAlchemy code than exist in the actual MySQL database)? Does it erase those tables and then create new tables with the current specs?
And what if those tables do all exist? Do they get erased and re-created?
I am trying to understand how the entire process works so that I (1) don't lose database information when changes are made to the schema, and (2) can write the necessary code to completely manage how and when SQLAlchemy talks to the actual database.
Tables are not created automatically; you need to call the SQLAlchemy.create_all() method explicitly to have it create the tables for you:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy(app)
db.create_all()  # creates missing tables only; existing tables are left untouched
You can do this with a command-line utility, for example, or, if you deploy to a PaaS such as Google App Engine, from a dedicated admin-only view.
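For example, a minimal command using Flask's built-in CLI (a sketch; it assumes the app and db objects above, and the init-db name is arbitrary):

import click

@app.cli.command('init-db')
def init_db():
    """Create all tables defined on the models."""
    db.create_all()
    click.echo('Tables created.')

Running flask init-db then creates the schema once, outside the request cycle.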
The same applies to dropping database tables; use the SQLAlchemy.drop_all() method.
See the Creating and Dropping Tables chapter of the documentation, or take a look at the database chapter of the Flask Mega-Tutorial.
You can also delegate this task to Flask-Migrate or a similar schema-versioning tool. These help you record and edit schema creation and migration steps; the database schema of a real-life project is never static, and you will want to be able to move existing data between versions of the schema. Creating the initial schema is then just the first step.
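Wiring up Flask-Migrate takes only a few lines (a sketch; it assumes the app and db objects from above):

from flask_migrate import Migrate

migrate = Migrate(app, db)

# Then, from the shell:
#   flask db init                  # once, to create the migrations folder
#   flask db migrate -m "initial"  # autogenerate a migration from the models
#   flask db upgrade               # apply it to the database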
I tried a few different things but I could not succeed, so maybe it's just not possible.
When I create a SQLAlchemy engine with
create_engine('mysql://user@host/database')
it looks like the database name is compulsory, whereas it isn't with the SQLite backend.
But since I have to manipulate many different databases on the same server, I would like to avoid creating multiple engines.
Is there a way?
Otherwise, what I could do is create a super-engine that creates all the needed engines and then redirects to the right one depending on the database name requested...
One engine pretty much equals one connectable database; see the SQLAlchemy docs. You can have as many engines as you want, and you can throw away any engines that you no longer need.
Creating one engine for each of your databases is the way I would do it. Then you will be able to access them as needed, instead of creating an engine, destroying it, recreating it, and so on.
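A small cache keyed by database name is enough for that (a sketch; the URL format and credentials are placeholders):

from sqlalchemy import create_engine

_engines = {}

def get_engine(db_name):
    """Lazily create, then reuse, one engine per database."""
    if db_name not in _engines:
        _engines[db_name] = create_engine('mysql://user@host/%s' % db_name)
    return _engines[db_name]

get_engine('sales')   # created on first use
get_engine('sales')   # same engine object, reused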
Are there database testing tools for Python (like SQLUnit)? I want to test the DAL that is built using SQLAlchemy.
Follow the design pattern that Django uses.
1. Create a disposable copy of the database. Use SQLite3 in-memory, for example.
2. Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.
3. Load the test data fixture into the database.
4. Run your unit test case against a database with a known, defined state.
5. Dispose of the database.
If you use SQLite3 in-memory, this procedure can be reasonably fast.
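Put together with unittest, the whole cycle looks roughly like this (a sketch; myapp.models, Base, and Item stand in for your own module, declarative base, and mapped class):

import unittest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from myapp.models import Base, Item  # hypothetical module

class DALTestCase(unittest.TestCase):
    def setUp(self):
        # disposable in-memory database, rebuilt for every test
        self.engine = create_engine('sqlite:///:memory:')
        Base.metadata.create_all(self.engine)
        self.session = sessionmaker(bind=self.engine)()
        # load a known fixture
        self.session.add(Item(name='example'))
        self.session.commit()

    def test_item_count(self):
        self.assertEqual(self.session.query(Item).count(), 1)

    def tearDown(self):
        self.session.close()
        self.engine.dispose()

if __name__ == '__main__':
    unittest.main()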