Alembic Migrations on Multiple Models - python

I am attempting to create a revision with --autogenerate using Alembic for two models, but am receiving a duplicate table keys error. Does a schema need to be specified? If so, how can it be set? The documentation I've read says to use __table_args__ = {'schema': 'somename'}, but that hasn't helped. Any tips or suggestions are greatly appreciated.
My current setup is:
base.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
workspace.py
from sqlalchemy import Column, Integer, String
from base import Base
class WorkspaceModel(Base):
    __tablename__ = 'workspaces'
    id = Column(Integer, primary_key=True)
    name = Column(String)
host.py
from sqlalchemy import Column, Integer, String
from base import Base
class HostModel(Base):
    __tablename__ = 'hosts'
    id = Column(Integer, primary_key=True)
    ip = Column(String)
alembic/env.py
from host import HostModel
from workspace import WorkspaceModel
target_metadata = [HostModel.metadata, WorkspaceModel.metadata]
Error
ValueError: Duplicate table keys across multiple MetaData objects: "hosts", "workspaces"

To make it clear from what @esdotzed and @univerio said, you have to use a single Base.metadata - but still import the individual models.
In the original question, this is what alembic/env.py should look like:
from base import Base
# These two won't be referenced directly, but *have* to be imported to populate `Base.metadata`
from host import HostModel
from workspace import WorkspaceModel
target_metadata = Base.metadata
If you didn't import both models, the autogenerated migration would end up dropping every table in your database - because Base.metadata doesn't know about any model by itself.
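To see why, here is a sketch (hypothetical; not something you would want to run) of the migration autogenerate would produce if the model imports were missing - with an empty Base.metadata, every existing table looks deleted to Alembic:

def upgrade():
    # Autogenerate found 'hosts' and 'workspaces' in the database but not
    # in Base.metadata, so it assumes the models were removed.
    op.drop_table('hosts')
    op.drop_table('workspaces')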

quoting univerio's answer from the comment section:
target_metadata should just be target_metadata = Base.metadata
Using Base.metadata doesn't mean you can remove the imports from host import HostModel and from workspace import WorkspaceModel
It worked for me.

I just want to add to @mgarciaisaia's answer: it works, but when I tried changing, for example, the max length of the username field of the User model and running alembic revision --autogenerate -m "test migration", alembic output a migration file with empty upgrade() and downgrade() functions!
Note: the following operations will erase data from your database, so please back it up beforehand!
In order to pick up the changes made to the original User model, I had to:
Delete the first migration file
Rerun alembic revision --autogenerate -m "update user model" and alembic upgrade head for the changes to appear inside the upgrade() and downgrade() functions of the new migration file, as sketched below.
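For reference, once the models are imported correctly, the regenerated migration for a length change should contain something like the following (a sketch; the users table, username column, and lengths are hypothetical):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Widen the column; existing_type tells Alembic what it is migrating from.
    op.alter_column('users', 'username',
                    existing_type=sa.String(length=50),
                    type_=sa.String(length=80),
                    existing_nullable=False)

def downgrade():
    # Restore the original length.
    op.alter_column('users', 'username',
                    existing_type=sa.String(length=80),
                    type_=sa.String(length=50),
                    existing_nullable=False)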

Related

Is it possible to use SQLAlchemy where the models are defined in different files?

I'm using SQLAlchemy for the first time and trying to define my models / schema. My only experience with an ORM prior to this was with Rails and ActiveRecord.
Anyway, following SQLAlchemy's ORM Quick Start, this is the basic example they use (I have removed a few lines which are not relevant to my question):
from sqlalchemy import ForeignKey
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(30))
    addresses: Mapped[list["Address"]] = relationship(
        back_populates="user", cascade="all, delete-orphan"
    )

class Address(Base):
    __tablename__ = "address"
    id: Mapped[int] = mapped_column(primary_key=True)
    email_address: Mapped[str]
    user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
    user: Mapped["User"] = relationship(back_populates="addresses")
My question is: is it possible to define the two models in separate files (user.py and address.py), each model in its own file, then import them and run a command like the following to create the database:
from sqlalchemy import create_engine
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)
Yes -- what you are asking is both possible and the standard way of organizing SQLAlchemy schemas.
The only gotcha is that you want all of your models to inherit from the same DeclarativeBase. This is essentially what registers them with SQLAlchemy and other plugins/tools like Alembic.
There are a few organization structures you can use, but a simple and common one is something like this:
- project_root
  - models
    + __init__.py
    + user.py
    + address.py
  - utils
  - scripts
In models.__init__, create and export your Base (I'd recommend giving it a slightly more specific name, such as ModelBase). Then in each of your model files, just import ModelBase and cut/paste your existing code.
Some projects enforce a single model per file. Others group related models together.
Since all of your models import and use a single DeclarativeBase, you can call methods like ModelBase.metadata.create_all(). Just keep in mind you will still need to import the actual models somewhere.
An easy way to handle that is to import them inside of models.__init__.py. That way you will essentially load them whenever you import and use ModelBase.
# models/__init__.py
from sqlalchemy.orm import DeclarativeBase

class ModelBase(DeclarativeBase):
    pass

# Imported after ModelBase is defined so the model modules can subclass it;
# importing them here registers them with ModelBase.metadata.
from .user import User
from .address import Address
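Each model file then imports the shared base. A minimal sketch of what models/user.py might look like (the column is illustrative):

# models/user.py
from sqlalchemy.orm import Mapped, mapped_column
from . import ModelBase

class User(ModelBase):
    __tablename__ = "user_account"
    id: Mapped[int] = mapped_column(primary_key=True)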

Alembic operations with multiple schemas

NOTE: This is a question I have already found the answer for, but wanted to share it so it could help other people facing the same problem.
I was trying to perform some alembic operations in my multi-schema postgresql database, such as .add_column or .alter_table (the same question applies to .create_table or .drop_table). For example: op.add_column('table_name', sa.Column('new_column_name', sa.String()))
However, I was getting an error saying, basically, that the table name could not be found. As far as I understand it, this is caused by alembic not recognizing the schema and searching for that table in the public schema. I then tried to specify the schema in the table_name as 'schema_name.table_name', but had no luck.
I came across similar questions Perform alembic upgrade in multiple schemas or Alembic support for multiple Postgres schemas, but didn't find a satisfactory answer.
After searching the alembic documentation, I found that there is actually a schema argument for the different operations. For example:
op.add_column('table_name', sa.Column('column_name', sa.String()), schema='schema_name')
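In context, a complete migration operation would look something like this (a sketch; the table, column, and schema names are illustrative):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # The schema argument tells Alembic to qualify the table name instead
    # of looking the table up in the default (public) schema.
    op.add_column(
        'table_name',
        sa.Column('new_column_name', sa.String(length=50), nullable=True),
        schema='schema_name',
    )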
Alembic will automatically pick up the schema from a table if it is already defined in a declarative SQLAlchemy model.
For example, with the following setup:
# models.py
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    __table_args__ = {"schema": "my_schema"}
# alembic/env.py
from models import Base
target_metadata = Base.metadata
[...]
Running:
alembic revision --autogenerate -m "test"
Would result in a default migration script with a schema specified:
def upgrade_my_db():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('some_table',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=50), nullable=True),
        sa.PrimaryKeyConstraint('id'),
        schema='my_schema'
    )
    # ### end Alembic commands ###

How to recognise a sqlite database created with sqlalchemy using pydal?

I create a very simple database with sqlalchemy as follows:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)
engine = create_engine('sqlite:///sqlalchemy_example.db')
# Create all tables in the engine. This is equivalent to "Create Table"
# statements in raw SQL.
Base.metadata.create_all(engine)
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
# Insert a Person in the person table
new_person = Person(name='new person')
session.add(new_person)
session.commit()
and then I tried to read it using pyDAL.
from pydal import DAL, Field
db = DAL('sqlite://sqlalchemy_example.db', auto_import=True)
db.tables
>> []
db.define_table('person', Field('name'))
>> OperationalError: table "person" already exists
How do I access the table using pyDAL?
thank you
First, do not set auto_import=True, as that is only relevant if pyDAL *.table migration metadata files exist for the tables, which will not be the case here.
Second, pyDAL does not know the table already exists, and because migrations are enabled by default, it attempts to create the table. To prevent this, you can simply disable migrations:
# Applies to all tables.
db = DAL('sqlite://sqlalchemy_example.db', migrate_enabled=False)
or:
# Applies to this table only.
db.define_table('person', Field('name'), migrate=False)
If you would like pyDAL to take over migrations for future changes to this table, then you should run a "fake migration", which will cause pyDAL to generate a *.table migration metadata file for this table without actually running the migration. To do this, temporarily make the following change:
db.define_table('person', Field('name'), fake_migrate=True)
After leaving the above in place for a single request, the *.table file will be generated, and you can remove the fake_migrate=True argument.
Finally, note that pyDAL expects the id field to be an auto-incrementing integer primary key field.
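Putting it all together, reading the SQLAlchemy-created database could look like this (a sketch, assuming sqlalchemy_example.db sits in the current working directory):

from pydal import DAL, Field

# Open the existing SQLite file without letting pyDAL attempt migrations.
db = DAL('sqlite://sqlalchemy_example.db', migrate_enabled=False)
db.define_table('person', Field('name'))

rows = db(db.person).select()
print([row.name for row in rows])  # ['new person']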

How to use Enum with SQLAlchemy and Alembic?

Here's my Post model:
class Post(Base):
    __tablename__ = 'posts'
    # primary key assumed here; SQLAlchemy requires one to map the class
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), nullable=False)
    description = db.Column(db.String(2048), nullable=False)
I'd like to add Enum status to it. So, I've created a new Enum:
import enum
class PostStatus(enum.Enum):
    DRAFT = 'draft'
    APPROVE = 'approve'
    PUBLISHED = 'published'
And added a new field to the model:
class Post(Base):
    ...
    status = db.Column(db.Enum(PostStatus), nullable=False, default=PostStatus.DRAFT.value, server_default=PostStatus.DRAFT.value)
After running FLASK_APP=server.py flask db migrate, the following migration was generated:
def upgrade():
    op.add_column('posts', sa.Column('status', sa.Enum('DRAFT', 'APPROVE', 'PUBLISHED', name='poststatus'), server_default='draft', nullable=False))
After trying to upgrade the DB, I'm getting:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "poststatus" does not exist
LINE 1: ALTER TABLE posts ADD COLUMN status poststatus DEFAULT 'draf...
^
[SQL: "ALTER TABLE posts ADD COLUMN status poststatus DEFAULT 'draft' NOT NULL"]
Why was the type poststatus not created at the DB level automatically? In a similar migration it was.
How do I specify the server_default option properly? I need both ORM-level defaults and DB-level ones, because I'm altering existing rows, so ORM defaults are not applied.
Why are the real values in the DB 'DRAFT', 'APPROVE', 'PUBLISHED' and not 'draft', etc.? I supposed those should be the enum values, not the names.
Thank you in advance.
Why are the real values in the DB 'DRAFT', 'APPROVE', 'PUBLISHED' and not 'draft', etc.? I supposed those should be the enum values, not the names.
As Peter Bašista already mentioned, SQLAlchemy uses the enum names (DRAFT, APPROVE, PUBLISHED) in the database. I assume that was done because enum values ("draft", "approve", ...) can be arbitrary Python objects and are not guaranteed to be unique (unless @unique is used).
However since SQLAlchemy 1.2.3 the Enum class accepts a parameter values_callable which can be used to store enum values in the database:
status = db.Column(
    db.Enum(PostStatus, values_callable=lambda obj: [e.value for e in obj]),
    nullable=False,
    default=PostStatus.DRAFT.value,
    server_default=PostStatus.DRAFT.value
)
Why was the type poststatus not created at the DB level automatically? In a similar migration it was.
I think you are basically hitting a limitation of alembic: it won't handle enums on PostgreSQL correctly in some cases. I suspect the main issue in your case is "Autogenerate doesn't correctly handle postgresql enums" (#278).
I noticed that the type is created correctly if I use alembic.op.create_table so my workaround is basically:
from sqlalchemy import Column, Integer, Enum as SQLEnum
import sqlalchemy as sa
from alembic import op

enum_type = SQLEnum(PostStatus, values_callable=lambda enum: [e.value for e in enum])

# Creating (and immediately dropping) a throwaway table forces Alembic
# to emit CREATE TYPE for the enum.
op.create_table(
    '_dummy',
    sa.Column('id', Integer, primary_key=True),
    sa.Column('status', enum_type)
)
op.drop_table('_dummy')

c_status = Column('status', enum_type, nullable=False)
op.add_column('posts', c_status)
Use the following function example in case you are using PostgreSQL:
from sqlalchemy.dialects import postgresql
from ... import PostStatus
from alembic import op
import sqlalchemy as sa
def upgrade():
    post_status = postgresql.ENUM(PostStatus, name="status")
    post_status.create(op.get_bind(), checkfirst=True)
    op.add_column('posts', sa.Column('status', post_status))

def downgrade():
    post_status = postgresql.ENUM(PostStatus, name="status")
    post_status.drop(op.get_bind())
This and related StackOverflow threads resort to PostgreSQL dialect-specific typing. However, generic support may be easily achieved in an Alembic migration as follows.
First, import the Python enum, the SQLAlchemy Enum, and your SQLAlchemy declarative base wherever you're going to declare your custom SQLAlchemy Enum column type.
import enum
from sqlalchemy import Enum
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Let's take OP's original Python enumerated class:
class PostStatus(enum.Enum):
    DRAFT = 'draft'
    APPROVE = 'approve'
    PUBLISHED = 'published'
Now we create a SQLAlchemy Enum instantiation:
PostStatusType: Enum = Enum(
    PostStatus,
    name="post_status_type",
    create_constraint=True,
    metadata=Base.metadata,
    validate_strings=True,
)
When you run alembic revision --autogenerate -m "Revision Notes" and try to apply the revision with alembic upgrade head, you'll likely get an error about the type not existing. For example:
...
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type "post_status_type" does not exist
LINE 10: post_status post_status_type NOT NULL,
...
To fix this, import your SQLAlchemy Enum class and add the following to your upgrade() and downgrade() functions in the Alembic autogenerated revision script.
from myproject.database import PostStatusType
...
def upgrade() -> None:
    PostStatusType.create(op.get_bind(), checkfirst=True)
    # ... the remainder of the autogen code ...

def downgrade() -> None:
    # ... the autogen code ...
    PostStatusType.drop(op.get_bind(), checkfirst=True)
Finally, be sure to update the auto-generated sa.Column() declaration in the table(s) using the enumerated type to simply reference the SQLAlchemy Enum type instead of using Alembic's attempt to re-declare it. For example in def upgrade() -> None:
op.create_table(
    "my_table",
    sa.Column(
        "post_status",
        PostStatusType,
        nullable=False,
    ),
)
I can only answer the third part of your question.
The documentation for the Enum type in SQLAlchemy states that:
Above, the string names of each element, e.g. “one”, “two”, “three”, are persisted to the database; the values of the Python Enum, here indicated as integers, are not used; the value of each enum can therefore be any kind of Python object whether or not it is persistable.
So it is by SQLAlchemy's design that the Enum names, not the values, are persisted into the database.
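A minimal illustration of the difference, assuming the PostStatus enum from above - the Enum type's .enums attribute shows exactly which strings will be persisted:

from sqlalchemy import Enum

# Default behavior: the enum *names* are what get persisted.
print(Enum(PostStatus).enums)
# ['DRAFT', 'APPROVE', 'PUBLISHED']

# With values_callable: the enum *values* are persisted instead.
print(Enum(PostStatus, values_callable=lambda obj: [e.value for e in obj]).enums)
# ['draft', 'approve', 'published']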

How to set up global connection to database?

I have a problem with setting up a database connection. I want to set up a connection that I can see in all my controllers.
Now I use something like this in my controller:
db = create_engine('mysql://root:password@localhost/python')
metadata = MetaData(db)
email_list = Table('email', metadata, autoload=True)
In development.ini I have:
sqlalchemy.url = mysql://root:password@localhost/python
sqlalchemy.pool_recycle = 3600
How do I set up __init__.py?
I hope you got Pylons working; for anyone else who later reads this question, I'll present some pointers in the right direction.
First of all, you are only creating an engine and a metadata object. While you can use the engine to create connections directly, you would almost always use a Session to manage querying and updating your database.
Pylons automatically sets this up for you by creating an engine from your configuration file, then passing it to yourproject.model.__init__.py:init_model(), which binds it to a scoped_session object.
This scoped_session object is available from yourproject.model.meta and is the object you would use to query your database. For example:
record = meta.Session.query(model.MyTable).filter_by(id=42).first()
Because it is a scoped_session, it automatically creates a Session object and associates it with the current thread if one doesn't already exist. scoped_session passes all actions (.query(), .add(), .delete()) down to the real Session object, giving you a simple way to interact with the database without having to manage the non-thread-safe Session object explicitly.
The scoped_session object (Session) from yourproject.model.meta is automatically associated with a metadata object, created as either yourproject.model.meta:metadata (in Pylons 0.9.7 and below) or yourproject.model.meta:Base.metadata (in Pylons 1.0). Use this metadata object to define your tables. As you can see, in newer versions of Pylons the metadata is associated with a declarative_base() object named Base, which allows you to use SqlAlchemy's declarative style.
Using this from the controller
from yourproject import model
from yourproject.model import Session
class MyController(..):
    def resource(self):
        result = Session.query(model.email_list).\
            filter(model.email_list.c.id == 42).one()
        return str(result)
Use real connections
If you really want to get a connection object, simply use:
from yourproject.model import Session
connection = Session.connection()
result = connection.execute("select 3+4;")
# more connection executions
Session.commit()
This is all good, but what you should really be doing is...
The above doesn't really use much of SqlAlchemy. The power of SqlAlchemy really shines when you start mapping your database tables to Python classes, so anyone looking into using Pylons with a database should take a serious look at what you can do with SqlAlchemy. If SqlAlchemy seems intimidating at first, start out with its declarative approach, which should be enough for almost all Pylons apps.
In your model instead of defining Table constructs, do this:
from sqlalchemy import Column, Integer, Unicode, ForeignKey
from sqlalchemy.orm import relation
from yourproject.model.meta import Base, Session

class User(Base):
    __tablename__ = 'users'
    # primary_key implies nullable=False
    id = Column(Integer, primary_key=True, index=True)
    # nullable defaults to True
    name = Column(Unicode, nullable=False)
    notes = relation("UserNote", backref="user")
    query = Session.query_property()

class UserNote(Base):
    __tablename__ = 'usernotes'
    # primary_key implies nullable=False
    id = Column(Integer, primary_key=True, index=True)
    # ForeignKey targets the *table* name ('users.id'), and positional
    # arguments must come before keywords like index=True
    userid = Column(Integer, ForeignKey("users.id"), index=True)
    # nullable defaults to True
    text = Column(Unicode, nullable=False)
    query = Session.query_property()
Note the query objects. These are smart objects that live on the class and associate your classes with the scoped_session(), Session. This allows you to extract data from your database even more easily.
from sqlalchemy.orm import eagerload

def resource(self):
    user = User.query.filter(User.id == 42).options(eagerload("notes")).one()
    return "\n".join([x.text for x in user.notes])
The 1.0 version of Pylons uses the declarative syntax; see the Pylons documentation for more on this.
In model/__init__.py you can write something like this:
from your_programm.model.meta import Session, Base
from sqlalchemy import *
from sqlalchemy.types import *

def init_model(engine):
    Session.configure(bind=engine)

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    name = Column(String)
...
What you want to do is modify the Globals class in your app_globals.py file to include a .engine (or whatever) attribute. Then, in your controllers, you use from pylons import app_globals and app_globals.engine to access the engine (or metadata, session, scoped_session, etc...).
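A sketch of that approach (the Globals constructor signature follows Pylons 1.0; the sqlalchemy.url config key and the plain engine usage are assumptions):

# app_globals.py
from sqlalchemy import create_engine

class Globals(object):
    def __init__(self, config):
        # Build the engine once at application startup so every
        # controller shares the same connection pool.
        self.engine = create_engine(config['sqlalchemy.url'])

# in a controller
from pylons import app_globals

result = app_globals.engine.execute("select 3+4;")
print(result.scalar())  # 7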
