I was using the SQLAlchemy ORM to connect to an in-memory database when I decided to add version tracking for the DB schema. To do this I've been following the tutorial on how to set up versioning using SQLAlchemy, but now I'm wondering: is there a way for my upgrade and downgrade scripts to also update/create my SQLAlchemy ORM classes?
I ask this because I now don't know how to write code using only SQLAlchemy Migrate, since a developer might not know about the most recent change made to the database. Currently the developer just has to look at the file containing the class that maps to a table in the DB to know what is available, but from my understanding Migrate would not synchronize these classes with the changes applied in an upgrade/downgrade script. That synchronization would need to be done manually. I looked at reflection, but it doesn't seem to require any prior knowledge of the table's structure.
I know I must be missing something. I could keep my DB open in HeidiSQL and [ALT + TAB] over to it every time my memory wants to confirm something in the DB, but this slows me down a lot compared with just being able to use auto-complete on classes as I type. (Note: I'm heavily dyslexic and prone to many spelling mistakes, which is why auto-complete drastically improves my productivity.) Is there a way for the upgrade scripts to create/update/delete the files containing the ORM classes?
e.g.:
class ExtractionEvent(Base):
    __tablename__ = 'ExtractionEvents'

    Id = Column(Integer, primary_key=True, autoincrement=True)
    ...
Related
I'm currently developing a Python Flask app that will allow users to write paragraphs and store them in a MySQL database. Are there any Python libraries that would give users the benefits of version control? Ideally, users would be able to track edits so that they can revert to previous versions of the text they've written.
If you're using SQLAlchemy, check out sqlalchemy-continuum.
Features:
Does not store updates which don't change anything
Supports Alembic migrations
Can revert an object's data, as well as all of its relations, at a given transaction, even if the object was deleted
Transactions can be queried afterwards using SQLAlchemy query syntax
Querying for changed records at a given transaction
Querying for versions of an entity that modified a given property
Querying for transactions at which entities of a given class changed
History models give access to parent objects' relations at any given point in time
Or check the versioning objects examples in the SQLAlchemy documentation.
I also found this tutorial, Database Content Versioning Using SQLAlchemy, by Googling "sqlalchemy history table"; you may find other solutions the same way.
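As a rough illustration, here is a minimal sketch of how Continuum might be wired up (the Paragraph model and its columns are invented for the example; see the library's documentation for the exact setup):

from sqlalchemy import Column, Integer, Text, orm
from sqlalchemy_continuum import make_versioned

make_versioned(user_cls=None)   # must be called before the models are defined

Base = orm.declarative_base()

class Paragraph(Base):
    __versioned__ = {}           # marks this model for version tracking
    __tablename__ = 'paragraph'

    id = Column(Integer, primary_key=True)
    body = Column(Text)

orm.configure_mappers()          # Continuum builds its history tables here

# After commits you can walk the history, e.g.:
#   paragraph.versions[0].body       # the first saved text
#   paragraph.versions[-2].revert()  # restore an earlier version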
Our current project relies heavily on SQLAlchemy for table creation and data insertion. We would like to switch to TimescaleDB's hypertables, but it seems the recommended way to create hypertables is by executing a create_hypertable command. I need to be able to dynamically create tables, so manually doing this for every table created is not really an option. One way of handling the conversion is to run a Python script sending psycopg2 commands to convert all newly created tables into hypertables, but this seems a little clumsy. Does TimescaleDB offer any integration with SQLAlchemy with regard to creating hypertables?
We currently do not offer any specific integrations with SQLAlchemy (either broadly or specifically for creating hypertables). We are always interested in hearing new feature requests, so if you wanted to post your issue/use case on our GitHub, it would help us keep better track of it for future work.
One thing that might work for your use case is to create an event trigger that executes on table creation. You'd have to check that it's in the correct schema since TimescaleDB creates its own chunk tables dynamically and you don't want to have them converted to hypertables.
See this answer for more info on event triggers:
execute a trigger when I create a table
Here is a practical example of using a SQLAlchemy event listener to create a hypertable:
from sqlalchemy import Column, Integer, DateTime, event, DDL, orm

Base = orm.declarative_base()

class ExampleModel(Base):
    __tablename__ = 'example_model'

    id = Column(Integer, primary_key=True)
    time = Column(DateTime)

# Run create_hypertable right after the table itself is created.
event.listen(
    ExampleModel.__table__,
    'after_create',
    DDL(f"SELECT create_hypertable('{ExampleModel.__tablename__}', 'time');")
)
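The DDL above fires when the table is emitted, so, assuming an engine pointed at a TimescaleDB instance (the connection URL below is just a placeholder), creating the schema converts the table in the same step:

from sqlalchemy import create_engine

# Placeholder connection string; point it at your TimescaleDB instance.
engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')

# Emitting the schema triggers the 'after_create' DDL above, so
# example_model ends up already converted to a hypertable.
Base.metadata.create_all(engine)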
I've been working on a Flask app for a while, using SQLAlchemy to access a MySQL database. I've finally started looking into writing tests for this (I'm a strong believer in testing, but am new to Flask and SQLA and Python for that matter, so delayed this), and am having a problem getting my structure set up.
My production database isn't using any unusual MySQL features, and in other languages/frameworks I've been able to set up a test framework using an in-memory SQLite database. (For example, I have a Perl app using DBIx::Class to run against a SQL Server db but with a test suite built on SQLite.) However, with SQLAlchemy I've needed to declare a few MySQL-specific things in my model, and I'm not sure how to get around this. In particular, I use TINYINT and CHAR types for a few columns, and I seem to have to import these from sqlalchemy.dialects.mysql, since they aren't generic types in SQLA. Thus I'll have a class declaration like:
class Item(db.Model):
    ...
    size = db.Column(TINYINT, db.ForeignKey('size.size_id'), nullable=False)
So even though, with raw SQL, I could use TINYINT against either SQLite or MySQL and it would work fine, here it's coming from the mysql dialect module.
I don't want to override my entire model class in order to cover seemingly trivial things like this. Is there some other solution? I've read what I could about using different databases for testing and production, but this issue hasn't been mentioned. It would be a lot easier to use an in-memory SQLite db for testing, instead of having to have a MySQL test database available for everything.
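For what it's worth, one way to keep a portable model while still getting the MySQL type is SQLAlchemy's with_variant(); a minimal sketch, where TinyIntCompat is an invented name and the Size/Item models are cut down to the essentials:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Integer
from sqlalchemy.dialects import mysql

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
db = SQLAlchemy(app)

# Generic Integer everywhere (including in-memory SQLite tests),
# but emitted as TINYINT when the MySQL dialect is in use.
TinyIntCompat = Integer().with_variant(mysql.TINYINT(), 'mysql')

class Size(db.Model):
    __tablename__ = 'size'

    size_id = db.Column(Integer, primary_key=True)

class Item(db.Model):
    __tablename__ = 'item'

    item_id = db.Column(Integer, primary_key=True)
    size = db.Column(TinyIntCompat, db.ForeignKey('size.size_id'), nullable=False)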
I have seen sqlalchemy-migrate and alembic, but I do not want to use those frameworks. How can I write the migration script myself? Most migrations, as I understand it, revolve around altering/dropping existing tables. Additionally, I use SQLAlchemy mostly at the ORM level rather than the schema/core/engine level.
The reason I wish to do it myself is mostly for learning purposes and to understand how the Django ORM automatically generates a migration script.
You should just use alembic to execute raw SQL to start. Then, if you decide to try more alembic features, you'll be all set.
For example, after creating a new revision named "drop nick", you can execute raw SQL:

op.execute('ALTER TABLE users DROP COLUMN nickname')
This way alembic can handle the version numbers, but you can (or rather, have to) do all the SQL manipulation manually.
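For reference, a complete revision file is only a few lines; here is a rough sketch (the revision identifiers are placeholders that alembic revision generates for you, and the column type in downgrade is an assumption):

"""drop nick"""
from alembic import op

# Placeholder identifiers; `alembic revision -m "drop nick"` fills these in.
revision = 'abc123'
down_revision = None


def upgrade():
    op.execute('ALTER TABLE users DROP COLUMN nickname')


def downgrade():
    # Re-adding the column; VARCHAR(64) is an assumed type for the sketch.
    op.execute('ALTER TABLE users ADD COLUMN nickname VARCHAR(64)')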
I'm designing a bulk import tool from an old system into a new Django based system.
I'd like to retain all of the current IDs of the objects (they are just 5-digit strings); due to the design of the current system there are lots of references between these objects.
To import, I can see two possible techniques: import a known object and carefully recurse through its relationships, making sure to import things in the right order and only setting relationships once I know they exist ...
... or start at item 00001, set the foreign keys to values I know will eventually exist, and just grab everything in order, knowing that once we get to item 99999 all the relationships will exist.
So is there a way to set a foreign key to the ID of an item that doesn't exist yet, but will, even if only for imports?
To add further complexity, not all of these relationships are straightforward foreign keys; some are ManyToMany relationships as well.
To be able to handle any database that Django supports and not have to deal with peculiarities of the backend, I'd export the old database in the format that Django loaddata can read, and then give this exported file to loaddata. This command has no issue importing the type of structure you are talking about.
Creating the file that loaddata will read could be done by writing your own converter that reads the old database and dumps an appropriate file. However, a way which might be easier would be to create a throwaway Django project with models that have the same structure as the tables in the old database, point the Django project to the old database, and use dumpdata to create the file. If table details between the old database and the new database have changed, you'd still have to modify the file but at least some of the conversion work would have already been done.
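For reference, a loaddata fixture is just a list of serialized objects keyed by model label and primary key, with foreign keys given as the related object's pk; a small hand-built sketch (the myapp.item label and field names are invented):

import json

# Each entry names the model ("app_label.model_name"), its primary key,
# and its fields; the "parent" foreign key is just the related pk.
fixture = [
    {"model": "myapp.item", "pk": "00001",
     "fields": {"name": "First item", "parent": "00002"}},
    {"model": "myapp.item", "pk": "00002",
     "fields": {"name": "Second item", "parent": None}},
]

with open('old_data.json', 'w') as f:
    json.dump(fixture, f, indent=2)

# Then: python manage.py loaddata old_data.json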
A more direct way would be to bypass Django completely to do the import in SQL but turn off foreign key constraints for the time of the import. For MySQL this would be done by setting foreign_key_checks to 0 for the time of the import, and then back to 1 when done. For SQLite this would be done by using PRAGMA foreign_keys = OFF; and then ON when done.
PostgreSQL does not allow simply turning off these constraints, but Django creates foreign key constraints as DEFERRABLE INITIALLY DEFERRED, which means the constraint is not checked until the end of a transaction. So starting a transaction, importing, and then committing should work. If something prevents this, you would have to drop the constraint before importing and add it back afterwards.
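As a rough sketch of that last approach on PostgreSQL, using raw SQL through Django's connection (the myapp_item table and its columns are invented for the example):

from django.db import connection, transaction

# Inside a single transaction, Django's DEFERRABLE INITIALLY DEFERRED
# foreign keys are only checked at commit, so insert order does not matter.
with transaction.atomic():
    with connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO myapp_item (id, parent_id) VALUES (%s, %s)",
            ["00001", "00002"],  # parent 00002 may not exist yet
        )
        # ... remaining rows, in whatever order the export provides ...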
Sounds like you need a database migration tool like South, the standard for Django. Worth noting that Django 1.7 Beta 1 was released recently, and it provides built-in migrations.