The Problem (Short Description)
A migration file has been deleted, but it was previously applied to the database. I want to revert the changes it made.
The Problem (Long Description)
I had forked a Python package and made some changes to one of its models. I used the fork instead of the official package in my project and applied my new migration to my database.
A couple of days ago, I replaced the fork with the official version of the package and achieved what I wanted in another way, without changing the model anymore. This means that, for the same app, one migration is now missing (the migration that I had created).
However, the changes that this migration made to the database are still present, and the migration name is still listed in the django_migrations table. This has caused weird problems, since the code and the database are not synchronized.
What I have tried
I tried running python manage.py migrate <appname> <previous_migration>, where appname is the app whose migrations I'm having trouble with and previous_migration is the last migration the app had before I added mine.
I tried modifying the database directly by manually reverting the changes that were made by that now deprecated migration.
I tried deleting the table completely and letting the migration re-create it, but apparently I was wrong, since Django thinks that the migrations are applied and doesn't re-apply them.
What I want to achieve
I want to get rid of that migration and revert the changes it made, without causing any problems. I do not care about the data in this table (even in production), so even if the solution involves deleting the affected table, I don't mind.
However, I'm looking for a clean solution that won't complicate things too much. Ideally, I would love a solution that I could apply in the code rather than directly in the database.
Related
I'm working on a codebase which has a lot of migrations. We also have SQL views which are defined in the migrations and later updated in other migrations (whenever changes to the SQL views were required). We have a challenge where, when running unit tests, the migrations are not applied in the same order as they currently appear in the django_migrations table. This causes unit tests to fail.
My thought is that I could probably write a script to rewrite the dependencies of all the migration files so that they follow the same order as django_migrations, thus ensuring that migrations are run in the correct order (a rough starting sketch follows at the end of this post). Then I can look into squashing migrations.
But this is a lengthy process and I'm not sure if this will actually work.
Has anyone come across this issue before? If so, how did you overcome it?
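For reference, a rough starting point for that script might be simply reading the applied order back out of django_migrations via Django's internal MigrationRecorder model (a sketch only, run from python manage.py shell; rewriting each file's dependencies would still be the hard part):

from django.db.migrations.recorder import MigrationRecorder

# Print migrations in the order they were actually applied to this database
for m in MigrationRecorder.Migration.objects.order_by('applied'):
    print(m.applied, m.app, m.name)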
I'm still pretty new to Python as well as Django, so I have a situation I'm not sure how to resolve.
The main issue is that deploying my code to dev fails, while deploying to stage or prod passes.
I worked on an issue where I had to drop some columns in a table in our app.
After making the changes, I deployed to dev and asked for a code review.
In code review, it was suggested I change the name of the migration file to something more descriptive rather than just leaving it 0018_auto_.
I made that change and deployed to dev and stage. Dev failed (when I expected it to succeed) because the new name was seen as an unapplied migration and Django tried to drop columns that no longer exist. In stage, the old name had never been seen, so the columns were dropped for the first time under the new filename.
So stage deploys just fine.
How do I resolve this error on dev so it recognizes this migration already took place?
Thanks!
If you are 100% sure that the only change done for 0018 was the file name, and that it is the last migration for your app, you can do:
python manage.py migrate myapp 0017 --fake # Move to 17 without removing changes done by 18
python manage.py migrate myapp 0018 --fake # 18 is already applied, so just migrate with the new filename
This will keep the changes that are present in 0018, and just apply the changes to the filename.
Django constructs a table named django_migrations that keeps track of the migrations that have been applied. If you later change the name of a migration file, then it will appear as if that migration did not run.
You can rename the migration in the django_migrations table of your development environment with:
UPDATE django_migrations
SET name = '0018_new_name'
WHERE app = 'name_of_the_app'
AND name = '0018_auto_'
where 0018_new_name is the new name of the migration file.
Once the name is updated, Django should consider the new migration file as already applied.
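If you prefer not to run raw SQL, the same rename can be done from python manage.py shell through Django's internal MigrationRecorder model (a sketch only; this is an internal API, and the names below mirror the example above):

from django.db.migrations.recorder import MigrationRecorder

# Rename the recorded migration; same effect as the UPDATE statement above
MigrationRecorder.Migration.objects.filter(
    app='name_of_the_app', name='0018_auto_'
).update(name='0018_new_name')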
All these were great answers and will no doubt help others now and in the future!
I tried all these answers but only one thing worked. The long way!
For full visibility (this worked locally and I understand --fake now in practice, thanks #brian-destura!):
python manage.py migrate myapp 0017 --fake # Move to 17 without removing changes done by 18
python manage.py migrate myapp 0018 --fake # 18 is already applied, so just migrate with the new filename
When deploying through GitLab, though, our Django / Zappa app just didn't want to deploy successfully (even with many different solutions tried).
In the end, the only way to fix the issue was to update the 0018 migration file to add back all those fields, roll back the changes in the models and serializers, and re-deploy to dev.
After that, revert all those changes including the migration file, and re-deploy again to dev.
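For anyone curious, the "add the fields back" migration was roughly of this shape; the dependency, model and field names here are placeholders, not the real ones from our project:

from django.db import migrations, models

class Migration(migrations.Migration):

    # Placeholder dependency; ours pointed at the migration before 0018
    dependencies = [
        ('myapp', '0017_previous_migration'),
    ]

    operations = [
        # Re-add a column that 0018 had dropped (placeholder field)
        migrations.AddField(
            model_name='widget',
            name='legacy_notes',
            field=models.TextField(null=True, blank=True),
        ),
    ]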
There's probably an easier way to do this, but with everyone with more backend knowledge out for the holidays (I'm originally a front-end dev with more backend knowledge/projects over the last year), I was very happy to resolve and release my changes.
Thanks to you guys for some things to try, ideas, and def knowledge I could use to create this, my own long winded solution!
I had my migration files in version control, and they went out of sync between production and development (I know, that was the mistake, but it's happened now).
As a result, I couldn't run migrations in production. I deleted the migrations folder, and then re-ran makemigrations and migrate. makemigrations logs that it is creating all the fields. However, migrate simply says "no migrations to apply", and the extra fields do not appear in the database.
All I've changed is adding nullable fields to a model, so it should be a straightforward migration.
I can drop the whole db and start over, but I'd prefer not to because it takes a long time to re-populate.
Is there a way that I can force Django to find the differences between the DB and the models, and build the correct migrations to add the fields?
I attempted adding nonsense models to try and trigger a refresh. But that hasn't changed anything.
So, I committed and pushed all my code, and then deployed my web application successfully. Then, I added a new model to my 'home' app, which (for a reason I now understand, but that doesn't matter here) created an IntegrityError (django.db.utils.IntegrityError: insert or update on table "foo" violates foreign key constraint "bar"). I ran python manage.py makemigrations and python manage.py migrate, which caused the IntegrityError.
However, even if I remove all of my new model code (so that git status comes up with nothing), the IntegrityError still happens. If I connect to my db via a different Python instance and run select * from django_migrations;, the latest db migration there (0020) is eight migrations away from my latest local home/migrations migration file (0028).
--> My question is: is it safe for me to delete my local 0021-0028 migration files? Will this fix my problem?
If you haven't applied your migrations to the db, it is safe to delete them and recreate them.
Possible reasons why you ran into this error are:
You deleted your model code, but when you run migrate it reads your migration files (which contain information about your deleted model) and tries to apply the migration operations. If you didn't run the makemigrations command after deleting your model, the migration system won't be able to detect your changes and will think that your model is still there.
Even if you ran makemigrations after deleting your model, there will be dependency issues in your migration files, because the new migration files will depend on the old ones (the ones you had problems with).
That's why we can say it is safe to delete them if they haven't been applied, but at the same time you should be careful with your migration dependencies.
This documentation may be useful.
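One quick way to check whether 0021-0028 were ever recorded as applied, before deleting anything, is a sketch like this from python manage.py shell ('home' is the app label from the question):

from django.core.management import call_command

# Prints each migration with [X] if Django recorded it as applied, [ ] if not
call_command('showmigrations', 'home')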
OK, so I crossed my fingers, backed up my local 0021-0028 migration files, and then deleted them. It worked. I think the key is that the migration files were not recorded in the database yet, but I'm not 100% sure. +1 if anyone can answer further for clarification.
I could create tables using the command alembic revision -m 'table_name', then defining the operations in the generated version file, and migrating using alembic upgrade head.
Also, I could create tables in a database by defining a class in models.py (SQLAlchemy).
What is the difference between the two? I'm very confused. Have I messed up the concept?
Also, when I migrate the database using Alembic, why doesn't it form a new class in my models.py? I know the tables have been created because I checked them using a SQLite browser.
I have done all the configurations already. The target for Alembic's database and SQLALCHEMY_DATABASE_URI in config.py are the same .db file.
Yes, you are thinking about it in the wrong way.
Let's say you don't use Alembic or any other migration framework. In that case you create a new database for your application with the following steps:
Write your model classes
Create and configure a brand new database
Run db.create_all(), which looks at your models and creates the corresponding tables in your database.
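As a minimal sketch of that flow with Flask-SQLAlchemy (the database URI and model here are just illustrative):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'  # step 2: a brand new database
db = SQLAlchemy(app)

class User(db.Model):  # step 1: the model class
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))

with app.app_context():
    db.create_all()  # step 3: create the tables from the models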
So now consider the case of an upgrade. For example, let's say you release version 1.0 of your application and now start working on version 2.0, which requires some changes to your database. How can you achieve that? The limitation here is that db.create_all() does not modify tables, it can only create them from scratch. So it goes like this:
Make the necessary changes to your model classes
Now you have two options to transfer those changes to the database:
Option 1: Destroy the database so that you can run db.create_all() again to get the updated tables, maybe backing up and restoring the data so that you don't lose it. Unfortunately SQLAlchemy does not help with the data; you'll have to use database tools for that.
Option 2: Apply the changes manually, directly to the database. This is error-prone, and it would be tedious if the change set is large.
Now consider that you have development and production databases; that means the work needs to be done twice. Also think about how tedious it would be when you have several releases of your application, each with a different database schema, and you need to investigate a bug in one of the older releases, for which you need to recreate the database as it was in that release.
See what the problem is when you don't have a migration framework?
Using Alembic, you have a little bit of extra work when you start, but it pays off because it simplifies your workflow for your upgrades. The creation phase goes like this:
Write your model classes
Create and configure a brand new database
Generate an initial Alembic migration, either manually or automatically. If you go with automatic migrations, Alembic looks at your models and generates the code that applies those to the database.
Run the upgrade command, which runs the migration script, effectively creating the tables in your database.
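For illustration, a generated initial revision looks roughly like this (the revision ids and the table are made up for the sketch); alembic upgrade head then runs its upgrade() function:

from alembic import op
import sqlalchemy as sa

revision = 'a1b2c3d4e5f6'  # illustrative id assigned by Alembic
down_revision = None       # first revision, nothing comes before it

def upgrade():
    op.create_table(
        'user',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(64)),
    )

def downgrade():
    op.drop_table('user')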
Then when you reach the point of doing an upgrade, you do the following:
Make the necessary changes to your model classes
Generate another Alembic migration. If you let Alembic generate this for you, then it compares your model classes against the current schema in your database, and generates the code necessary to make the database match the models.
Run the upgrade command. This applies the changes to the database, without the need to destroy any tables or back up data. You can run this upgrade on all your databases (production, development, etc.).
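A later auto-generated revision for such a model change might look roughly like this (again, names and ids are illustrative); alembic upgrade head applies it, and alembic downgrade -1 reverts it:

from alembic import op
import sqlalchemy as sa

revision = 'b2c3d4e5f6a7'
down_revision = 'a1b2c3d4e5f6'  # points at the initial revision sketched above

def upgrade():
    # Adding a nullable column; no tables destroyed, no data lost
    op.add_column('user', sa.Column('email', sa.String(120), nullable=True))

def downgrade():
    op.drop_column('user', 'email')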
Important things to consider when using Alembic:
The migration scripts become part of your source code, so they need to be committed to source control along with your own files.
If you use the automatic migration generation, you always have to review the generated migrations. Alembic is not always able to determine the exact changes, so it is possible that the generated script needs some manual fine tuning.
Migration scripts have upgrade and downgrade functions. That means that they not only simplify upgrades, but also downgrades. If you need to sync the database to an old release, the downgrade command does it for you without any additional work on your part!
I can add that for Django there are two commands
makemigrations (which creates migration files)
migrate (which translates migrations into SQL and executes it on the database; see the sketch below)
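As a small sketch (the app and migration names are placeholders), the same steps can also be run programmatically via call_command, and sqlmigrate shows the SQL that migrate would execute:

from django.core.management import call_command

call_command('makemigrations', 'myapp')      # writes a new file under myapp/migrations/
call_command('sqlmigrate', 'myapp', '0001')  # prints the SQL for that migration
call_command('migrate', 'myapp')             # applies any unapplied migrations to the database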
I found it's great for one's understanding to switch between a batteries-included framework (Django) and other frameworks like Flask/Falcon.
So using Alembic migrations is very convenient, and it makes it easy to track database changes.