I recently reinstalled my OS and lost some old data from a local Postgres DB, but I backed up all the migration files generated with Alembic. Now I just want to restore the database schema from the migration files, not the data. Is that possible?
It is doable. All migration files have a revision number. Your first migration file has something like:
revision = '22a39a2bf2ed'
down_revision = None
and your second revision file has something like:
revision = '507003430224'
down_revision = '22a39a2bf2ed'
As you can see, the revision files are linked into a chain.
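That chain can be reconstructed in code: each file maps its revision to its down_revision, and walking from the root (down_revision = None) yields the order in which Alembic applies migrations. A minimal sketch using the example's revision ids:

```python
# Each migration file declares `revision` and `down_revision`; mapping one to
# the other reconstructs the chain (ids taken from the example above).
chain = {
    '22a39a2bf2ed': None,            # first migration
    '507003430224': '22a39a2bf2ed',  # second migration
}

def ordered_revisions(chain):
    """Return revision ids in the order Alembic would apply them."""
    child_of = {down: rev for rev, down in chain.items()}
    order, current = [], child_of.get(None)
    while current is not None:
        order.append(current)
        current = child_of.get(current)
    return order

print(ordered_revisions(chain))  # ['22a39a2bf2ed', '507003430224']
```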
The only thing that you need to do is make your first migration file manually, then run
alembic upgrade head
Then you need to replace the content of this file with your previous first migration file. Then open your second migration file and replace its down_revision value with this new revision number.
Now you should be able to run
alembic upgrade head
again, and your database schema should be restored.
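Rewriting a file's down_revision by hand is easy to get wrong; as an illustration, here is a small stdlib sketch that rewrites the assignment in a migration file's source (the relink helper and the new revision id are hypothetical, not part of Alembic):

```python
import re

def relink(migration_source, new_down_revision):
    """Rewrite the down_revision assignment in a migration file's text."""
    return re.sub(
        r"down_revision\s*=\s*(?:'[^']*'|None)",
        "down_revision = '%s'" % new_down_revision,
        migration_source,
        count=1,
    )

# Example: point the second migration at a freshly generated first revision.
src = "revision = '507003430224'\ndown_revision = '22a39a2bf2ed'\n"
print(relink(src, 'abc123'))
```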
I have two issues, both of which are interrelated.
Issue #1
My app has an online Postgres Database that it is using to store data. Because this is a Dockerized app, migrations that I create no longer appear on my local host but are instead stored in the Docker container.
All of the questions I've seen so far do not seem to have issues with making migrations and adding the unique constraint to one of the fields within the table.
I have written shell code to run a Python script that prints the contents of the migrations file to the command prompt window. Using it, I obtained the migrations file that was to be applied, and I added a row to the django_migrations table to record it as applied. I then ran makemigrations and migrate, but it said there were no changes to apply (which leads me to believe that the row I added to the database should only have been created by Django after it had detected the migrations on its own, rather than by me specifying the migrations file manually). The issue is that a new makemigrations still detects the following change:
Migrations for 'mdp':
db4mdp/mdp/migrations/0012_testing.py
- Alter field mdp_name on languages
Despite detecting this apparent 'change', I get the following error,
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "mdp_mdp_mdp_fullname_281e4228_uniq" already exists
I have already checked on my Postgres server using pgAdmin 4, and the constraint has in fact been applied, with the name shown next to "relation" above. So why does Django still detect this as a change to be made? The thing is, if I now remove the new migrations file from my Python directory, the migration will probably run (since the changes have 'apparently' been made in the database), but I won't have the migrations file to keep track of the changes. I don't know if I need to keep the migrations around now that I'm using an online database, though. I will not be rolling back any changes I make, nor will I be making changes too often. This is just a one- or two-time thing, but I want to resolve the error.
Issue #2
The reason I used 'apparently' above is that even though the constraints section in my public schema shows that the constraint has been applied, when I try to create a new entry with a non-unique string in the field I've defined as unique, the creation is allowed anyway.
You should never add anything manually to the django_migrations table; let Django do it. If it is not doing it, no matter what, your code is not production-ready.
I understand that you are doing your development inside Docker. When you do that, you should mount a Docker volume to a local directory. Since you have not mounted one, your migrations will not show up locally.
Refer to the Docker documentation on volumes; it should resolve your issues.
For anyone trying to find an alternate solution to this problem other than mounting volumes due to time constraints, this answer might help, but deosha's is still the correct way to go about it. I fixed the problem by deleting all my tables and the rows in django_migrations corresponding to my specific app (it isn't necessary to delete the auth tables etc., because you won't be deleting the rows corresponding to those in the django_migrations table). Following this, I used the following within the shell script called by my Dockerfile:
python manage.py makemigrations --name testing
python testing_migrations.py
The migration needs to be named for the next step. The second line runs the Python script testing_migrations.py, which contains the following code:
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
migrations_file = os.path.join(BASE_DIR, '<path_to_new_migration_file>')

with open(migrations_file, 'r') as file:
    print(file.read())
Typically the first migration created for your app will be /0001_testing.py (which is why naming it was necessary earlier). Since the contents of this file are visible to the container while it is up and running, you will be able to print them. Then run the migrate command. This adds a row to the django_migrations table that makes it appear to Django that the migration has been applied. However, on your local machine this migrations file doesn't exist, so copy the contents of the file from the print output above, put them into a .py file with the same name, and save it in your app's migrations folder on your local device.
You can follow this method for all successive migrations by repeating the process and incrementing the number in the testing_migrations file as required.
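The incrementing can also be automated: instead of editing testing_migrations.py each time, the script can locate the newest migration file itself. A hedged sketch (latest_migration is a hypothetical helper; it assumes Django's zero-padded 0001_... naming):

```python
import glob
import os

def latest_migration(migrations_dir):
    """Return the path of the highest-numbered migration file, or None.

    Django names migrations 0001_..., 0002_..., so a lexicographic sort on
    the zero-padded prefix gives chronological order.
    """
    files = sorted(glob.glob(os.path.join(migrations_dir, '[0-9]*.py')))
    return files[-1] if files else None
```

You would then open and print whatever path it returns, exactly as in the script above.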
Squashing migrations once you're done making the table will help. If you're doing this all in development and have no requirement to roll back the changes to the database schema, then just put this into production after deleting all the migration files and the rows in your django_migrations table corresponding to your app, as was done initially. Drop your tables, let your first new migrations file re-create them, and then import your data once again.
This is not the recommended method. Use deosha's if you're not on a time crunch.
I want to add Alembic to an existing SQLAlchemy-using project with a working production DB. I can't find the standard way to do a "zero" migration, i.e. the migration that sets up the DB as it is now (for new developers setting up their environment).
Currently I've added imports of the declarative base class and all the models using it to env.py, but the first run of alembic -c alembic.dev.ini revision --autogenerate doesn't create the existing tables.
I also need to "fake" the migration on existing installations, using code. For the Django ORM I know how to make this work, but I can't find the right way to do this with SQLAlchemy/Alembic.
alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with metadata.
If you are introducing alembic/sqlalchemy to an existing database, and you want a migration file that, given an empty, fresh database, would reproduce its current state, follow these steps:
1) Ensure that your metadata is truly in line with your current database (i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations).
2) Create a new temp_db that is empty and point the sqlalchemy.url in alembic.ini to this new temp_db.
3) Run alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one.
4) Remove temp_db and re-point sqlalchemy.url to your existing database.
5) Run alembic stamp head. This tells alembic that the current migration represents the state of the database, so next time you run alembic upgrade head it will begin from this migration.
New installation: applying the migration
Simply run alembic upgrade head against an empty database. This will apply all the migrations (in your case, the initial migration as it's the only one) to the database.
If you want to do this from code rather than from shell, you can do it the following way:
from alembic.config import Config
from alembic import command
alembic_cfg = Config("/path/to/yourapp/alembic.ini")
command.upgrade(alembic_cfg, "head")
Existing installation: faking the migration
SQL way
One way would be running this SQL against the database:
CREATE TABLE IF NOT EXISTS alembic_version (
version_num VARCHAR(32) NOT NULL
);
INSERT INTO alembic_version (version_num) VALUES ('your initial migration version');
The first statement creates the table that alembic uses to track your database/migration state. The second statement basically tells alembic that your database state corresponds to the version of your initial migration, or, in other words, fakes the migration.
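As an illustration, the two statements can be exercised through any DB-API connection; here is a sketch using stdlib sqlite3 as a stand-in for Postgres (the revision id is a placeholder for your initial migration's version):

```python
import sqlite3

# Stand-in for your real connection; the revision id is a placeholder.
conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE IF NOT EXISTS alembic_version ("
    "version_num VARCHAR(32) NOT NULL)"
)
conn.execute(
    "INSERT INTO alembic_version (version_num) VALUES (?)",
    ('22a39a2bf2ed',),
)
conn.commit()
print(conn.execute("SELECT version_num FROM alembic_version").fetchone()[0])
```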
Alembic way
Alembic has a stamp command, which basically does the same thing. It can be called from shell as alembic stamp head, or from code (taken from the cookbook):
from alembic.config import Config
from alembic import command
alembic_cfg = Config("/path/to/yourapp/alembic.ini")
command.stamp(alembic_cfg, "head")
My goal is to add a second database to my current project using Alembic. I have the default alembic folder, and with alembic init alembic_second I created a second folder structure. I modified the env.py in the second folder and the root alembic.ini. When I run
alembic -n 'alembic_second' revision -m "create second" --head=base --version-path=alembic_second/versions --autogenerate
the output is:
postgresql:// (all the correct second database connection stuff)
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
ERROR [alembic.util] Target database is not up to date.
FAILED: Target database is not up to date.
postgresql://(all the correct second database info)
The solution here doesn't work for me because my new versions folder is empty; my problem is that I can't run my FIRST migration on this new database. As you can see in my terminal input, I am specifying the new versions folder.
Also, I put a print statement in my second env.py, and I am successfully seeing that, so it is hitting the correct env.py.
Any ideas on how to get past this error and create my first revision?
Thank you!
It turns out that after I sorted through the differences between the two database structures, the core problem was the same as the one I linked to; I just needed to add all of the proper flags to make it work with two databases.
Running:
alembic -n 'alembic_second' stamp head --version-path=alembic_second/versions
solved my problems.
Everything I found about this via searching was either wrong or incomplete in some way. So, how do I:
delete everything in my postgresql database
delete all my alembic revisions
make it so that my database is 100% like new
This works for me:
1) In the same way you called create_all (e.g. Base.metadata.create_all(engine), or db.create_all() with Flask-SQLAlchemy), call the corresponding drop_all.
2) Delete the migration files generated by alembic.
3) Run create_all and the initial migration generation again.
If you want to do this without dropping your migration files the steps are:
Drop all the tables in the db that are to be recreated.
Truncate the alembic_version table (so it will start from the beginning) - this is where the most recent version is kept.
Then you can run:
alembic upgrade head
and everything will be recreated. I ran into this problem when my migrations got in a weird state during development and I wanted to reset alembic. This worked.
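The truncate step works because alembic_version holds a single row with the current revision; clearing it makes Alembic treat the database as unversioned. A sketch against stdlib sqlite3 for illustration (sqlite has no TRUNCATE, so DELETE is used; the revision id is a placeholder):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('3e96cf22770b')")

# Reset Alembic's bookkeeping so `alembic upgrade head` starts from scratch.
conn.execute("DELETE FROM alembic_version")
print(conn.execute("SELECT COUNT(*) FROM alembic_version").fetchone()[0])  # 0
```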
No idea how to mess with alembic, but for the database, you can just log into the SQL console and use DROP DATABASE foo.
Or were you wanting to clear out all the data, but leave the tables there? If so, Truncating all tables in a postgres database has some good answers.
I'm following this tutorial, and the initial autogenerate is perfect: it creates the migration file with the upgrade and downgrade methods just fine.
So let's say this is the migration file's revision number: 3e96cf22770b. My upgrade statements look like this:
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table('hashtag',
        sa.Column('id', sa.VARCHAR(), autoincrement=False, nullable=False),
        sa.Column('text', sa.VARCHAR(), autoincrement=False, nullable=True),
        sa.PrimaryKeyConstraint('id', name=u'hashtag_pkey')
    )
and my downgrade statement looks like this:
def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('user_access_token')
Now I made a simple modification to my models.py file, this is what it looks like on git:
- verificationCode = Column(String())
+ isVerified = Column(Boolean())
The thing is, I have no idea how to run an autogenerate command that gives me just a delta migration file, i.e. I just want a migration file that replaces one column with another.
I tried setting the current revision to 3e96cf22770b and then running
python migrate.py db revision --autogenerate
but it keeps creating duplicates of the initial migration file (i.e. migrating the entire database schema) rather than just the delta. Ideas?
Alembic automatically generates a migration script (with the --autogenerate flag) by comparing the current DB schema (it actually connects to the DB and fetches the schema) against the new model (in your Python code). So when you want to create a new migration script, make sure your database is at the previous schema (3e96cf22770b in your case).
Not sure how you tried to set the current revision, but you can check it in the alembic_version table in your DB.
You should be able to run:
python migrate.py db migrate
and that should create a new migration file for you. Once you get that you can run:
python migrate.py db upgrade
and that will upgrade your database.
Before you upgrade your db, look at the migration file and see if it's doing what you're wanting to do.