How do I handle migrations in sqlalchemy without any framework? - python

I have seen sqlalchemy-migrate and Alembic, but I do not want to use those frameworks. How can I write the migration script myself? As I understand it, most migrations revolve around altering or dropping existing tables. Additionally, I use SQLAlchemy mostly at the ORM level rather than the schema/core/engine level.
The reason I wish to do it myself is mostly for learning purposes, and to understand how the Django ORM automatically generates a migration script.

You should just use Alembic to execute raw SQL to start. Then, if you later decide to use more Alembic features, you'll be all set.
For example, after creating a new revision named drop nick, you can execute raw SQL:
op.execute('ALTER TABLE users DROP COLUMN nickname')
This way Alembic handles the version numbers, but you can, or rather have to, do all the SQL manipulations manually.
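If you do want to roll it yourself for learning purposes, the version bookkeeping Alembic provides is the main thing you have to replicate. A minimal sketch of that idea, using stdlib `sqlite3` and an invented `users` table (the table, column names, and `schema_version` convention here are all illustrative assumptions, not a standard):

```python
import sqlite3

# Ordered list of (version, SQL) pairs -- the hand-rolled equivalent of
# Alembic revisions. Table and column names are made up for illustration.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, nickname TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    # Track which migrations have already been applied.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)  # apply the schema change
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # running again is a no-op: applied versions are skipped
```

This is essentially what Django's migration machinery does too, except Django also diffs your model classes against the recorded migration history to *generate* the SQL for you.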

Related

SQLAlchemy update database schema

I'm using SQLAlchemy for a project. I've got a system running with some important data, but I would like to update my schema to add new features. What's the best practice here?
Can the schema be updated without dropping all tables and recreating the database? (Currently I am running into trouble with
manage.py migrate
not working, i.e. not picking up new changes to table fields.)
Thanks a lot in advance,
C

Can peewee drop a database

I am working on a function within a project that should drop staging databases. I already use peewee throughout the project, so it would make things easier not to have to use pymysql. Is it possible? I believe I've seen support for dropping tables, but not a whole database.
Just double-checking.
I did see a GitHub ticket from 2014 regarding this issue, but wanted to see if there is any new information on this as a possibility.
Peewee has no provisions for either creating or dropping databases, nor is it ever likely to support that. Check your database vendor's documentation for the appropriate way to do this.
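What "the vendor's method" means varies by backend. A sketch of the two common cases (the `staging.db` filename is invented; the peewee line is illustrative raw SQL, not a dedicated peewee API):

```python
import os
import sqlite3
import tempfile

# For SQLite, a database is just a file: "dropping" it means deleting the file.
path = os.path.join(tempfile.mkdtemp(), "staging.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")  # ensure the file is written out
conn.close()
assert os.path.exists(path)
os.remove(path)  # drop the database

# For a server-based backend like MySQL, you would instead issue raw SQL over
# an open connection -- peewee does expose Database.execute_sql for this,
# e.g. db.execute_sql("DROP DATABASE staging")  (illustrative only).
```

So even without pymysql, an existing peewee connection can send the `DROP DATABASE` statement as raw SQL; peewee just has no higher-level wrapper for it.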

Importing data from multiple related tables in mySQL to SQLite3 or postgreSQL

I'm updating from an ancient language to Django, and I want to carry the data from the old project over to the new one.
But the old project uses MySQL, and I'm currently using SQLite3 in development. I've read that PostgreSQL is the most capable, so my first question is: is it better to set up PostgreSQL during development, or is the transition from SQLite3 to PostgreSQL easy later?
As for the data in the old project: I am reworking the table structure from the old MySQL schema, since it has many related tables. In Django those relations are handled with ForeignKey and ManyToMany fields, whether the backend is SQLite3 or PostgreSQL.
So I'm thinking about how to transfer the data. It's not really much data, maybe 3,000-5,000 rows.
The problem is that I don't want to keep the same table structure, so a direct import would be a terrible idea. I want the functionality provided by Django on top of SQLite3/PostgreSQL.
One idea I had was to join all the data and create a nested JSON document for each post, and then define which table each piece goes into so the relations are kept.
But this is just my guess, so I'm asking whether there is a proper way to do this?
Thanks!
Better to create the Postgres database first, then write a Python script which reads the data from the MySQL database, transforms it, and inserts it into the Postgres database.
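The shape of that script is a plain read-transform-insert loop. A runnable sketch using two SQLite connections as stand-ins for the MySQL source and Postgres target (with real databases you would swap in pymysql and psycopg2 connections; every table and column name here is invented):

```python
import sqlite3

# Stand-ins for the MySQL source and the Postgres target connections.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

# The old and new schemas deliberately differ: the old table stores the
# author's name inline, the new one uses a proper foreign key.
src.executescript("""
    CREATE TABLE old_posts (id INTEGER, title TEXT, author_name TEXT);
    INSERT INTO old_posts VALUES (1, 'Hello', 'alice'), (2, 'World', 'alice');
""")
dst.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT,
                        author_id INTEGER REFERENCES authors(id));
""")

# Read from the old structure, transform, insert into the new one --
# resolving the string-based relation into a foreign key as we go.
for post_id, title, author_name in src.execute(
        "SELECT id, title, author_name FROM old_posts"):
    dst.execute("INSERT OR IGNORE INTO authors (name) VALUES (?)", (author_name,))
    author_id = dst.execute("SELECT id FROM authors WHERE name = ?",
                            (author_name,)).fetchone()[0]
    dst.execute("INSERT INTO posts VALUES (?, ?, ?)", (post_id, title, author_id))
dst.commit()
```

At 3,000-5,000 rows this is small enough that a naive row-by-row loop like this is perfectly adequate; no nested-JSON intermediate step is needed.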

Flask-SQLAlchemy - When are the tables/databases created and destroyed?

I am a little confused with the topic alluded to in the title.
So, when a Flask app is started, does SQLAlchemy look up the SQLALCHEMY_DATABASE_URI to find the correct (in my case, MySQL) database? Does it then create the tables if they do not exist already?
What if the database configured in the SQLALCHEMY_DATABASE_URI variable in the config.py file does not exist?
What if the database exists, but only some of the tables do (there are more tables defined in the SQLAlchemy code than exist in the actual MySQL database)? Does it erase those tables and create new tables with the current specs?
And what if those tables do all exist? Do they get erased and re-created?
I am trying to understand how the entire process works so that I (1) don't lose database information when changes are made to the schema, and (2) can write the necessary code to completely manage how and when SQLAlchemy talks to the actual database.
Tables are not created automatically; you need to call the SQLAlchemy.create_all() method explicitly to have it create tables for you:
db = SQLAlchemy(app)
db.create_all()
You can do this with a command-line utility, for example, or, if you deploy to a PaaS such as Google App Engine, with a dedicated admin-only view.
The same applies for database table destruction; use the SQLAlchemy.drop_all() method.
See the Creating and Dropping tables chapter of the documentation, or take a look at the database chapter of the Mega Flask Tutorial.
You can also delegate this task to Flask-Migrate or similar schema versioning tools. These help you record and edit schema creation and migration steps; the database schema of real-life projects is never static, and you will want to be able to move existing data between versions of the schema. Creating the initial schema is then just the first step.
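The key property behind the questions above is that create_all() only creates tables that are missing; it never drops, alters, or erases existing tables (so existing data survives, but new columns are not added either). A runnable sketch of that create-if-missing behaviour using plain sqlite3 as a stand-in (SQLAlchemy gets the same effect by inspecting the database before issuing CREATE TABLE; the `users` table here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# First "create_all": the table does not exist yet, so it is created.
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Second "create_all" after the model grew an email column: the existing
# table is left untouched -- no error, no data loss, but also no new column.
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)
```

This is exactly why a migration tool is needed once the schema changes: create-if-missing alone can never bring an existing table up to date.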

Are there database testing tools for python (like sqlunit)?

Are there database testing tools for Python (like SQLUnit)? I want to test the DAL that is built using SQLAlchemy.
Follow the design pattern that Django uses.
1. Create a disposable copy of the database. Use SQLite3 in-memory, for example.
2. Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.
3. Load the test data fixture into the database.
4. Run your unit test case in a database with a known, defined state.
5. Dispose of the database.
If you use SQLite3 in-memory, this procedure can be reasonably fast.
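The steps above can be sketched with the standard library's unittest and sqlite3 (the `users` table and fixture row are invented; with SQLAlchemy you would instead call metadata.create_all() against an in-memory engine in setUp):

```python
import sqlite3
import unittest

class UserDALTest(unittest.TestCase):
    """Each test method gets a fresh, disposable in-memory database."""

    def setUp(self):
        # Steps 1-3: disposable database, schema, and fixture data.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.execute("INSERT INTO users (name) VALUES ('alice')")

    def tearDown(self):
        # Step 5: dispose of the database (in-memory DBs vanish on close).
        self.conn.close()

    def test_user_count(self):
        # Step 4: test against a known, defined state.
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserDALTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because setUp/tearDown run around every test method, each test is isolated: no test can see state left behind by another.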
