I followed the documentation from Alembic to auto-generate migrations. My project structure looks like this:
alembic/
    versions/
    env.py
    README
    script.py.mako
data/
    __init__.py
    db.py
    models.py
alembic.ini
app.db
I made changes to env.py, exactly following the documentation:
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool

from alembic import context

from data.models import Base

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
This is my __init__.py:
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from .models import Base
basedir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
db_url = os.environ.get('DATABASE_URL') or \
'sqlite:///' + os.path.join(basedir, 'app.db')
engine = create_engine(db_url, echo=False)
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)
I created a User class in models.py like this:
from sqlalchemy import Column, Sequence, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    first_name = Column(String, nullable=False)
    last_name = Column(String, nullable=False)

    def __repr__(self):
        return "<%s('%s', '%s')>" % (
            self.__class__.__qualname__, self.first_name, self.last_name)
After that, I ran the migration with:
alembic revision --autogenerate -m "Added user table"
However, I got empty upgrade() and downgrade() functions in the migration file:
"""Added user table
Revision ID: 279933caec54
Revises:
Create Date: 2021-05-12 16:21:05.772568
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '279933caec54'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
But when I checked the database, the user table had been created there (I didn't create it myself). How did that happen? How did Alembic generate an empty migration yet still end up with the table created?
I found the answer in this question. The problem described there is not quite the same as mine (Alembic did not create an empty migration file in that case), which is why I asked here (it may look like a duplicate).
As the answer suggested, I commented out Base.metadata.create_all(engine) in __init__.py. It is this line that creates the user table (if it does not exist). So when Alembic inspected my database, the table was already there; since no differences were found, upgrade() and downgrade() were empty.
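A minimal sketch of the fix, using an in-memory SQLite URL and a cut-down User model as stand-ins: move create_all() behind an explicit function so nothing creates tables at import time, and Alembic autogenerate gets to see the pending table itself:

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):  # cut-down version of the question's model
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    first_name = Column(String, nullable=False)

engine = create_engine('sqlite://')  # in-memory stand-in for app.db
Session = sessionmaker(bind=engine)

def init_db():
    """Create tables only when called explicitly, never at import time."""
    Base.metadata.create_all(engine)

# At import time the table does not exist, so autogenerate would see a diff.
table_existed_before = inspect(engine).has_table('user')
init_db()
table_exists_after = inspect(engine).has_table('user')
```

With this layout, running alembic revision --autogenerate against a fresh database produces a non-empty migration, because only Alembic (or an explicit init_db() call) ever creates the table.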
In this situation, don't import your Base from the file that only defines it; import Base from a module where the classes extending it have been registered.
For example, I have a structure like the one below:
I also imported the Base class in alembic/env.py from models.trade.
This way your Base class's metadata will detect your models, and your autogenerated migrations will work fine.
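To illustrate why the import location matters, here is a minimal sketch (with a hypothetical Trade model standing in for models.trade): a declarative Base's metadata only contains tables for model classes that have actually been defined or imported:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Before any model module is imported, the Base knows no tables. If env.py
# imports Base from a file where no models are registered, autogenerate
# sees an empty metadata and emits empty migrations.
tables_before = set(Base.metadata.tables)

class Trade(Base):  # hypothetical model, standing in for models.trade
    __tablename__ = 'trade'
    id = Column(Integer, primary_key=True)
    symbol = Column(String)

tables_after = set(Base.metadata.tables)
```

So env.py must import Base through (or alongside) the modules that define the models, otherwise target_metadata is empty.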
Related
I am working on an application using sqlalchemy, postgres and alembic.
The project structure is as follows:
.
├── alembic.ini
├── main.py
├── migrations
│   ├── env.py
│   ├── README
│   ├── script.py.mako
│   └── versions
├── models
│   ├── base.py
│   ├── datamodel1.py
│   ├── datamodel2.py
│   └── __init__.py
└── requirements.txt

3 directories, 10 files
Where:
the content of models/base.py is :
from sqlalchemy.ext.declarative.api import declarative_base, DeclarativeMeta
Base: DeclarativeMeta = declarative_base()
The content of models/datamodel1.py is :
from models.base import Base
from sqlalchemy.sql.schema import Column
from sqlalchemy.sql.sqltypes import String, Date, Float

class Model1(Base):
    __tablename__ = 'model1_table'

    model1_id = Column(String, primary_key=True)
    col1 = Column(String)
    col2 = Column(String)
The content of models/datamodel2.py is :
from models.base import Base
from sqlalchemy.orm import relationship
from sqlalchemy.sql.sqltypes import String, Integer, Date
from sqlalchemy.sql.schema import Column, ForeignKey

# The many-to-many relationship table
class Model1Model2(Base):
    __tablename__ = 'model1_model2_table'

    id = Column(Integer, primary_key=True)
    # ForeignKey targets must use the table names, not the class names
    model_1_id = Column(String, ForeignKey('model1_table.model1_id'))
    model_2_id = Column(Integer, ForeignKey('model2_table.model2_id'))

class Model2(Base):
    __tablename__ = 'model2_table'

    model2_id = Column(Integer, primary_key=True)
    model2_col1 = Column(String)
    model2_col2 = Column(Date)

    # Many-to-many relationship
    model1_model2 = relationship('Model1', secondary='model1_model2_table', backref='model1_table')
The content of migrations/env.py is :
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
import sys
sys.path.append('./')

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# I added the following lines to replace sqlalchemy.url from the alembic.ini file
# (db_username, db_password, db_host, db_port and db_name are defined elsewhere).
db_string = f'postgresql+psycopg2://{db_username}:{db_password}@{db_host}:{db_port}/{db_name}'
config.set_main_option('sqlalchemy.url', db_string)
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from models.datamodel1 import Model1
from models.datamodel2 import Model2, Model1Model2
from models.base import Base
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
        include_schemas=True,
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schemas=True,
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
As for the alembic.ini file, I didn't make any changes; I just commented out the line:
sqlalchemy.url = driver://user:pass@localhost/dbname
because I assign it in migrations/env.py.
When I make changes and run alembic revision --autogenerate -m 'Add new updates' the migration files are generated correctly and everything works as expected.
But when I run alembic revision --autogenerate -m 'Add new updates' when there are no changes, it shows this in the terminal:
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.ddl.postgresql] Detected sequence named 'model2_table_model2_id_seq' as owned by integer column 'model2_table(model2_id)', assuming SERIAL and omitting
INFO [alembic.ddl.postgresql] Detected sequence named 'model1_model2_table_id_seq' as owned by integer column 'model1_model2_table(id)', assuming SERIAL and omitting
Generating /home/user/projects/dev/project/migrations/versions/45c6fbdbd23c_add_new_updates.py ... done
And it generates an empty migration file that contains:
"""Add new updates
Revision ID: 45c6fbdbd23c
Revises: 5c17014a7c18
Create Date: 2021-12-27 17:11:13.964287
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '45c6fbdbd23c'
down_revision = '5c17014a7c18'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
Is this the expected behavior or it has something to do with my architecture?
How to prevent Alembic from generating those empty migration files when there are no changes?
Is this the expected behavior or it has something to do with my architecture?
This is the expected behavior. The command alembic revision --autogenerate always creates a new migration file; if no changes exist, it creates an empty one.
You can use alembic-autogen-check to check if your migrations is in sync with models.
~ $ alembic-autogen-check
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO: Migrations in sync.
How to prevent Alembic from generating those empty migration files when there are no changes?
Also, alembic-autogen-check returns a zero exit code only if the migrations are in sync with the models, so you can combine the two into one command:
alembic-autogen-check || alembic revision --autogenerate -m 'Add new updates'
But that seems less convenient than using them separately.
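Alternatively, Alembic's cookbook recipe lets --autogenerate skip writing a file when nothing changed. A sketch of what could be added to env.py's online configuration (not runnable on its own; config, connection and target_metadata come from the surrounding env.py):

```python
def process_revision_directives(context, revision, directives):
    # When running with --autogenerate and no operations were detected,
    # clear the directives so no revision file is written at all.
    if getattr(config.cmd_opts, 'autogenerate', False):
        if directives[0].upgrade_ops.is_empty():
            directives[:] = []
            print('No changes detected; skipping empty revision.')

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    process_revision_directives=process_revision_directives,
)
```

This is the same recipe Flask-Migrate ships in its default env.py.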
How do I configure SQLAlchemy not to create tables in Flask-SQLAlchemy?
I only use Flask-SQLAlchemy's Model features, but I don't want it to create a table.
How can I configure this?
I ask because I want to map a SQL view; if I run db.create_all(), the view becomes a table.
Thanks.
This model is a view:
# -*- coding: utf-8 -*-
from settings.dataBase import CRUDMixin, db

class ViewExample(db.Model, CRUDMixin):
    __tablename__ = "view_example"

    id = db.Column(db.Integer, primary_key=True, nullable=False)
    name = db.Column(db.String(50))
    title = db.Column(db.String(100))
    is_albums = db.Column(db.Integer)
    is_attach = db.Column(db.Integer)
    is_spec = db.Column(db.Integer)
    sort_id = db.Column(db.Integer)

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    def __str__(self):
        return 'id : %s' % self.id
I found an example, but I don't know how to use it in Flask-SQLAlchemy.
The Alembic/SQLAlchemy documentation:
https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-emit-create-table-statements-for-views
This is my Flask-Migrate env.py.
I added an include_object function, but it doesn't pick up is_view.
from __future__ import with_statement
import logging
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from flask import current_app
config.set_main_option(
    'sqlalchemy.url',
    current_app.config.get('SQLALCHEMY_DATABASE_URI').replace('%', '%%'))
target_metadata = current_app.extensions['migrate'].db.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def include_object(object, name, type_, reflected, compare_to):
    """
    Exclude views from Alembic's consideration.
    """
    print(object.info)
    return not object.info.get('is_view', False)
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True,
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_object=include_object,  # added
            process_revision_directives=process_revision_directives,
            **current_app.extensions['migrate'].configure_args
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Adding __table_args__ = { "info": dict(is_view=True)} works well:
class ViewExample(db.Model, CRUDMixin):
    __tablename__ = "view_example"
    __table_args__ = {"info": dict(is_view=True)}  # add
Thanks! This question has been solved.
If you are not using Alembic, the answer given here is not complete.
What you need to do is override the MetaData class used by base.Model in the following way:
class ViewAwareMetaData(MetaData):
    def create_all(self, bind=None, tables=None, checkfirst=True):
        # Filter out tables marked as views before emitting CREATE TABLE;
        # fall back to all known tables when none are passed explicitly.
        tables = [
            table for table in (tables or self.tables.values())
            if not table.info.get('is_view', False)
        ]
        super().create_all(bind=bind, tables=tables, checkfirst=checkfirst)
and then use the new metadata when initializing SQLAlchemy:
db = SQLAlchemy(engine_options=engine_options, metadata=ViewAwareMetaData())
You can add __abstract__ = True to any model whose table you don't want created.
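A minimal runnable sketch of that flag (with hypothetical model names): an __abstract__ class contributes its columns to subclasses but gets no table of its own, so create_all() and autogenerate ignore it:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class IdBase(Base):       # hypothetical shared base class
    __abstract__ = True   # no table is generated for this class
    id = Column(Integer, primary_key=True)

class Article(IdBase):    # hypothetical concrete model; inherits the id column
    __tablename__ = 'article'

# Only the concrete model is registered in the metadata.
registered_tables = set(Base.metadata.tables)
```

Note this suits shared base classes; for mapping an existing view you still need the is_view/include_object approach above, since an abstract class cannot be queried directly.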
So I have two different models on two different schemas, one with a foreign key relation to the other. When I run BaseOne.metadata.create_all(engine) and then BaseTwo.metadata.create_all(engine), I get sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column...
BaseOne = declarative_base(metadata=MetaData(schema="a"))
BaseTwo = declarative_base(metadata=MetaData(schema="b"))

class Parent(BaseOne):
    __tablename__ = "parent"

    parent_id = Column(Integer, primary_key=True)
    other_col = Column(String(20))
    children = relationship("Child", backref="parent")

class Child(BaseTwo):
    __tablename__ = "child"

    child_id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("a.parent.parent_id"), nullable=False)

# Where I'm creating them
BaseOne.metadata.create_all(engine)
BaseTwo.metadata.create_all(engine)
I should note I've also tried explicitly declaring the schema via __table_args__. I have also connected to my Postgres instance and verified that the parent table exists with the target column.
The issue was that I used multiple MetaData objects, which cannot see each other's tables. Simplifying to a single declarative base and using __table_args__ to declare the schemas worked. If someone knows how to use multiple MetaData objects and still call .create_all(), feel free to post.
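A sketch of that single-MetaData variant (schemas declared per table via __table_args__; not executed against a real Postgres here, so we only check that the foreign key now resolves within the shared metadata):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String
from sqlalchemy.orm import declarative_base, relationship

# One shared MetaData/Base; each table picks its schema via __table_args__.
Base = declarative_base(metadata=MetaData())

class Parent(Base):
    __tablename__ = 'parent'
    __table_args__ = {'schema': 'a'}
    parent_id = Column(Integer, primary_key=True)
    other_col = Column(String(20))
    children = relationship('Child', backref='parent')

class Child(Base):
    __tablename__ = 'child'
    __table_args__ = {'schema': 'b'}
    child_id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('a.parent.parent_id'), nullable=False)

# Both tables share one MetaData, so the FK dependency is resolvable and
# sorted_tables orders parent before child for create_all().
creation_order = [t.name for t in Base.metadata.sorted_tables]
```

A single Base.metadata.create_all(engine) then creates both tables in dependency order.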
This may be solved by using Alembic to manage table creation. Ensure that all bases are included in the target_metadata list e.g.:
# pylint: skip-file
import os
from logging.config import fileConfig
from alembic import context
from sqlalchemy import engine_from_config
from sqlalchemy import pool
import unimatrix.ext.octet.orm
import gpo.infra.orm
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = [
    unimatrix.ext.octet.orm.metadata,
    gpo.infra.orm.Relation.metadata
]
# Configure SQLAlchemy to use the DB_URI environment variable.
config.set_main_option("sqlalchemy.url", os.environ["DB_URI"])
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Sorry, I wasn't clear before.
Edited:
I am using the default Pyramid application with SQLAlchemy (backend: PostgreSQL), generated using
pcreate -s alchemy
so I have all the settings related to DBSession, pyramid_tm, etc. I have this code:
class User(Base):
    id = Column(Integer, primary_key=True)
    connection = relationship("UserConnection", uselist=False, backref="user")

class UserConnection(Base):
    userid = Column(Integer, ForeignKey('user.id'), primary_key=True)
    friends = Column(JSON)

def add_connection(user, friend):
    with transaction.manager:
        if not user.connection:
            user_conn = UserConnection(userid=user.id, friends=[])
        else:
            user_conn = user.connection
        user_conn.friends.append(friend)
        print(user_conn.friends)
        session.add(user_conn)
When I run add_connection() the first time (user.connection does not exist yet), a new record gets created. But on the next run (when execution goes to the else branch), the record doesn't get updated; on the console I only see ROLLBACK/COMMIT and no other statements.
The print statement there shows the updated result, but the database is not updated.
You should use a transaction per request scope.
zope.sqlalchemy and pyramid_tm can do that for you.
You can use my code:
pyramid_sqlalchemy.py
# -*- coding: utf-8 -*-
""" Pyramid sqlalchemy lib.

Session will be available as dbsession attribute on request.

! Do not close session on your own.
"""
import sqlalchemy
from sqlalchemy.orm import sessionmaker, scoped_session
from zope.sqlalchemy import ZopeTransactionExtension

Session = scoped_session(sessionmaker(
    extension=ZopeTransactionExtension()
))


def includeme(config):
    """ Setup sqlalchemy session connection for pyramid app.

    :param config: Pyramid configuration
    """
    config.include('pyramid_tm')

    # creates database engine from ini settings passed to pyramid wsgi
    engine = sqlalchemy.engine_from_config(
        config.get_settings(), connect_args={
            'charset': 'utf8'
        }
    )
    # scoped session gives us a thread-safe session
    Session.configure(bind=engine)
    # make database session available in every request
    config.add_request_method(
        callable=lambda request: Session, name='dbsession', property=True
    )
Install zope.sqlalchemy and pyramid_tm using pip and call config.include(pyramid_sqlalchemy)
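Separately from transaction scoping: the in-place user_conn.friends.append(friend) mutation in the question is not tracked by a plain JSON column, which can also leave the UPDATE unemitted. A sketch using flag_modified, with a hypothetical cut-down model on in-memory SQLite standing in for PostgreSQL:

```python
from sqlalchemy import JSON, Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class UserConnection(Base):  # cut-down version of the question's model
    __tablename__ = 'user_connection'
    userid = Column(Integer, primary_key=True)
    friends = Column(JSON)

engine = create_engine('sqlite://')  # in-memory stand-in for PostgreSQL
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(UserConnection(userid=1, friends=[]))
session.commit()

conn = session.get(UserConnection, 1)
conn.friends.append('a_friend')   # in-place mutation: not tracked by default
flag_modified(conn, 'friends')    # explicitly mark the column as changed
session.commit()

session.expire_all()
stored = session.get(UserConnection, 1).friends
```

SQLAlchemy's MutableDict/MutableList extensions can make this tracking automatic instead of per-call.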
I have the following models in file listpull/models.py:
from datetime import datetime
from listpull import db

class Job(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    list_type_id = db.Column(db.Integer, db.ForeignKey('list_type.id'),
                             nullable=False)
    list_type = db.relationship('ListType',
                                backref=db.backref('jobs', lazy='dynamic'))
    record_count = db.Column(db.Integer, nullable=False)
    status = db.Column(db.Integer, nullable=False)
    sf_job_id = db.Column(db.Integer, nullable=False)
    created_at = db.Column(db.DateTime, nullable=False)
    compressed_csv = db.Column(db.LargeBinary)

    def __init__(self, list_type, created_at=None):
        self.list_type = list_type
        if created_at is None:
            created_at = datetime.utcnow()
        self.created_at = created_at

    def __repr__(self):
        return '<Job {}>'.format(self.id)

class ListType(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=True, nullable=False)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return '<ListType {}>'.format(self.name)
I call ./run.py init, then ./run.py migrate, then ./run.py upgrade, and I see the migration file generated, but it's empty:
"""empty message
Revision ID: 5048d48b21de
Revises: None
Create Date: 2013-10-11 13:25:43.131937
"""
# revision identifiers, used by Alembic.
revision = '5048d48b21de'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
run.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from listpull import manager
manager.run()
listpull/__init__.py
# -*- coding: utf-8 -*-
# pylint: disable-msg=C0103
""" listpull module """
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.script import Manager
from flask.ext.migrate import Migrate, MigrateCommand
from mom.client import SQLClient
from smartfocus.restclient import RESTClient
app = Flask(__name__)
app.config.from_object('config')
db = SQLAlchemy(app)
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)
mom = SQLClient(app.config['MOM_HOST'],
                app.config['MOM_USER'],
                app.config['MOM_PASSWORD'],
                app.config['MOM_DB'])
sf = RESTClient(app.config['SMARTFOCUS_URL'],
                app.config['SMARTFOCUS_LOGIN'],
                app.config['SMARTFOCUS_PASSWORD'],
                app.config['SMARTFOCUS_KEY'])
import listpull.models
import listpull.views
UPDATE
If I run the shell via ./run.py shell and then do from listpull import * and call db.create_all(), I get the schema:
mark.richman@MBP:~/code/nhs-listpull$ sqlite3 app.db
-- Loading resources from /Users/mark.richman/.sqliterc
SQLite version 3.7.12 2012-04-03 19:43:07
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
CREATE TABLE job (
    id INTEGER NOT NULL,
    list_type_id INTEGER NOT NULL,
    record_count INTEGER NOT NULL,
    status INTEGER NOT NULL,
    sf_job_id INTEGER NOT NULL,
    created_at DATETIME NOT NULL,
    compressed_csv BLOB,
    PRIMARY KEY (id),
    FOREIGN KEY(list_type_id) REFERENCES list_type (id)
);
CREATE TABLE list_type (
    id INTEGER NOT NULL,
    name VARCHAR(80) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (name)
);
sqlite>
Unfortunately, the migrations still do not work.
When you call the migrate command Flask-Migrate (or actually Alembic underneath it) will look at your models.py and compare that to what's actually in your database.
The fact that you've got an empty migration script suggests you have updated your database to match your model through another method that is outside of Flask-Migrate's control, maybe by calling Flask-SQLAlchemy's db.create_all().
If you don't have any valuable data in your database, then open a Python shell and call db.drop_all() to empty it, then try the auto migration again.
UPDATE: I installed your project here and confirmed that migrations are working fine for me:
(venv)[miguel@miguel-linux nhs-listpull]$ ./run.py db init
Creating directory /home/miguel/tmp/mark/nhs-listpull/migrations...done
Creating directory /home/miguel/tmp/mark/nhs-listpull/migrations/versions...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/script.py.mako...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/env.pyc...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/env.py...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/README...done
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/alembic.ini...done
Please edit configuration/connection/logging settings in
'/home/miguel/tmp/mark/nhs-listpull/migrations/alembic.ini' before
proceeding.
(venv)[miguel@miguel-linux nhs-listpull]$ ./run.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate] Detected added table 'list_type'
INFO [alembic.autogenerate] Detected added table 'job'
Generating /home/miguel/tmp/mark/nhs-listpull/migrations/versions/48ff3456cfd3_.py...done
Try a fresh checkout, I think your setup is correct.
Ensure you import the models in the manage.py file (or whichever file holds the migrate instance). You have to import the models there even if you are not explicitly using them; Alembic needs these imports to detect the models and create the tables in the database. For example:
# ... some imports ...
from api.models import User, Bucketlist, BucketlistItem  # Import the models

app = create_app('dev')
manager = Manager(app)
migrate = Migrate(app, db)
manager.add_command('db', MigrateCommand)

# ... some more code here ...

if __name__ == "__main__":
    manager.run()
    db.create_all()
I had the same issue, but a different problem caused it.
The Flask-Migrate workflow consists of two consecutive commands:
flask db migrate
which generates the migration, and
flask db upgrade
which applies the migration. I forgot to run the latter and tried to start the next migration without applying the previous one.
For anyone who comes across this: my problem was having
db.create_all()
in my main Flask application file,
which created the new table without Alembic's knowledge.
Simply comment it out or delete it altogether so it doesn't interfere with future migrations.
Unlike @Miguel's suggestion, instead of dropping the whole database (I had important data in it), I was able to fix it by deleting the new table created by Flask-SQLAlchemy and then running the migration.
This time Alembic detected the new table and created a proper migration script.
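A sketch of that targeted cleanup (hypothetical table name, in-memory SQLite standing in for the real database): drop only the table that the stray db.create_all() made, then re-run the autogenerate migration:

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class NewFeature(Base):  # hypothetical table created by a stray db.create_all()
    __tablename__ = 'new_feature'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')  # stand-in for the real database
Base.metadata.create_all(engine)

# Drop only the offending table; other tables and their data are untouched.
Base.metadata.tables['new_feature'].drop(engine)
table_gone = not inspect(engine).has_table('new_feature')
```

After the drop, flask db migrate sees the table as missing and generates the proper create_table operations.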
I just encountered a similar problem and I'd like to share my solution for anyone else who finds this thread. I had my models in a package, for example models/user.py, and I tried from app.models import *, which did not detect anything on migrate. However, changing the import to from app.models import user worked. That's fine while my project is young, but as I add more models a bulk import would be preferable.
A strange fix that worked for me: delete the database and the migrations folder, then
>>> from app import db
>>> db.create_all()
then run flask db init (or python app.py db init), followed by flask db migrate (or python app.py db migrate). It's odd, but it works for me.