Local and Heroku DB get out of sync while migrating using Alembic - Python

I'm building an app with Flask and Angular, hosted on Heroku. I have a problem with migrating the Heroku PostgreSQL database. I'm using Flask-Migrate, which is a thin wrapper around Alembic. Locally everything is fine, but I get an exception when I run heroku run upgrade, which runs the Alembic upgrade command:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 19aeffe4063d, empty message
Traceback (most recent call last):
File "manage.py", line 13, in <module>
manager.run()
...
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) relation "users" already exists
'\nCREATE TABLE users (\n\tid SERIAL NOT NULL, \n\tusername VARCHAR(32), \n\tpassword_hash VARCHAR(128), \n\tPRIMARY KEY (id)\n)\n\n' {}
Simply put, Alembic is trying to start from the first migration, which creates the database. I tried to set the proper revision explicitly with heroku run python manage.py db upgrade +2 (and also with the revision number), but the exception is the same.
lukas$ heroku run python manage.py db current
Running `python manage.py db current` attached to terminal... up, run.1401
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
Current revision for postgres://...: None
My guess was that the revision is not stored because of Heroku's ephemeral filesystem, but that shouldn't be a problem if I explicitly set the revision, right? :)
How can I set the current revision to the head?
Here is the relevant code:
Procfile:
web: gunicorn server:app
init: python manage.py db init
upgrade: python manage.py db upgrade
models.py:
db = SQLAlchemy()

ROLE_USER = 0
ROLE_ADMIN = 1

class User(db.Model):
    __tablename__ = "users"
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(32), unique=True, index=True)
    email = db.Column(db.String(255), unique=True)
    password_hash = db.Column(db.String(128))
    role = db.Column(db.SmallInteger, default=ROLE_USER)

The init command that you put in the Procfile creates a brand new Alembic repository, which is something you only do once on your development machine. When you deploy to a new machine all you need to do is run the upgrade command to get the database created and updated to the last revision.
Alembic and Flask-Migrate have a command called stamp that can help you fix this problem. With stamp you can tell Alembic to write the revision of your choice to the database, without touching the database itself.
For example, creating a database from scratch when there are a lot of migrations can take a long time if Alembic has to go through all the migrations one by one. Instead, you can create the database with db.create_all() and then run:
$ ./manage.py db stamp HEAD
and with this the database is marked as updated.
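As an illustration, here is a minimal sketch of that create-then-stamp approach done programmatically, assuming app and db are importable from the server module used in the Procfile and that Flask-Migrate has been initialized on the app (both assumptions, since the full manage.py is not shown):

# one-off setup script (hypothetical), run once against a fresh database
from flask_migrate import stamp
from server import app, db  # assumed module layout; adjust to your project

with app.app_context():
    db.create_all()           # create all tables directly from the models
    stamp(revision='head')    # record the latest revision without running the migrations

After that, any later heroku run python manage.py db upgrade starts from the stamped revision instead of trying to create the users table again.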
Also, at some point I favored the idea of putting maintenance commands in the Procfile, but these days I only put services there, so I would only leave the web line there. To upgrade the database I think it is more predictable to run the command explicitly:
$ heroku run python manage.py db upgrade

Related

Django migration is not changing database in AWS Elastic Beanstalk

I have deployed my Django app using AWS Elastic Beanstalk. I added some data and everything was working fine. Then I increased the max_length of one of the CharFields in my model and deployed the app again using the 'eb deploy' command, but it did not increase the field length in the database. If I try to add a subject_id longer than 10 characters, I get this error: django.db.utils.DataError: value too long for type character varying(10)
Model.py:
class Subject(models.Model):
    # subject_id = models.CharField(max_length=10, blank=True)
    subject_id = models.CharField(max_length=50, blank=True)
0001_initial.py:
# Generated by Django 3.0.4 on 2021-02-18 18:54
from django.db import migrations, models
import django.db.models.deletion

class Migration(migrations.Migration):
    initial = True
    dependencies = [
        ('accounts', '0001_initial'),
    ]
    operations = [
        migrations.CreateModel(
            name='Subject',
            fields=[
                ('subject_id', models.CharField(blank=True, max_length=50)),
            ],
        ),
    ]
I have added the migration command in the .ebextensions/django_build.config file. The 0001_initial.py file inside the migrations folder shows the changes, but they are not reflected in the database (AWS RDS PostgreSQL). I have checked the django_migrations table in the PostgreSQL database; it shows that the last migration happened when I first created the instance.
I need to change subject_id field length of Subject model in existing database. Any help will be appreciated.
.ebextensions/django_build.config:
container_commands:
  01_create_log_folder:
    command: "mkdir -p log && chown webapp:webapp -R log"
  02_source_env:
    command: "source /etc/profile.d/sh.local"
    leader_only: true
  03_database_migrations:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py makemigrations --noinput && python3 manage.py migrate --noinput && deactivate"
    leader_only: true
django_migrations table:
select * from django_migrations;
id | app | name | applied
----+-----------------+------------------------------------------+-----------------------
28 | fileupload | 0001_initial | 2021-01-22 11:42:40.726471+00
Since your revised migration has the same name (0001_initial) as the one you've already applied, it is not being executed. You need to either:
Create a new migration (0002_new_migration) that alters the fields you created in the first one (see the sketch after this list)
OR
First rollback the migration you have and then apply the revised one
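For the first option, here is a minimal sketch of what such a migration could look like; the app label and file name are assumptions (in practice, makemigrations would generate an equivalent file for you):

# 0002_alter_subject_id.py (hypothetical file name)
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [
        ('fileupload', '0001_initial'),  # assumed app label; use the app that owns Subject
    ]
    operations = [
        migrations.AlterField(
            model_name='subject',
            name='subject_id',
            field=models.CharField(blank=True, max_length=50),
        ),
    ]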
To rollback the migration you have, you'll need to SSH into the ELB instance:
Login via SSH - check your AWS console for specific instructions
Then run the following to reset the accounts migrations
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
./manage.py migrate accounts zero
When you deploy next, you'll be starting the accounts model from scratch and your new migration will run.
This is no different than when you reverse migrations locally using manage.py and migrate, except you're doing it on your remote instance.

Flask db migrate in Heroku didn't change database schema

I'm trying to migrate my Flask db on Heroku. I performed the migrations in my local app, then committed the changes to GitHub and deployed to Heroku. I then executed heroku run flask db migrate and heroku run flask db upgrade, and based on the logs everything seems to have worked fine, without any errors:
INFO [alembic.autogenerate.compare] Detected added column 'users.active'
INFO [alembic.autogenerate.compare] Detected added column 'users.password'
Generating /app/migrations/versions/a92ff10fdb60_.py ... done
C:\Users\A>heroku run flask db upgrade -a certifit
Running flask db upgrade on ⬢ certifit... up, run.6994 (Free)
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
As seen from logs, the migration should have added two new columns to the users table:
INFO [alembic.autogenerate.compare] Detected added column 'users.active'
INFO [alembic.autogenerate.compare] Detected added column 'users.password'
However, when I run a SQL query the result is the same as prior to the migration:
id | email | username | password_hash | acquirer_id
----+--------------------+----------+-----------------------------------------------------------------------------------------------+-------------
4 | pooostgre#mail.com | fffff | a3990046bdc7d7a861363eab41f5f4ac8a7f574fe314ea | 11
Any ideas what could be the problem?
Thanks
This article helped a lot: https://gist.github.com/mayukh18/2223bc8fc152631205abd7cbf1efdd41/
In short:
1. Changed SQLALCHEMY_DATABASE_URI from os.environ.get('DATABASE_URL') to the actual Heroku DATABASE_URL value. I'm not sure if this made any difference when migrating the database, but I'll mention it here anyway;
2. Ran flask db migrate and flask db upgrade in my local Flask app (previously I had done it in my Heroku environment);
3. Changed SQLALCHEMY_DATABASE_URI back to os.environ.get('DATABASE_URL') (reverting step 1);
4. Pushed the changes to GitHub and deployed the new version to Heroku.
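For reference, a minimal sketch of the config change described in steps 1 and 3 (the layout is an assumption; paste your own DATABASE_URL where indicated):

import os

# Step 1 (temporary, while migrating locally against the Heroku database):
# SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@host:5432/dbname'  # the value of: heroku config:get DATABASE_URL
# Step 3 (restored before pushing and deploying):
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')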

Alembic migrations - script persistence between two deployments

I have a problem running automated migrations with the alembic library (I use the raw alembic library).
This is the setup of the application:
a scheduler (a Python script which calculates something and then stores it in the database), and
a Flask REST API (which uses the data stored in the database by the scheduler to return an adequate response).
I then deploy the app with a script which runs these three commands:
alembic revision --autogenerate
alembic upgrade head
python run_scheduler.py
After the initial deployment, the alembic_version table is created in the PostgreSQL database with an identifier value in the version_num column, and a migration script (let's call it xx.py) is created in alembic/versions/.
When I redeploy the app (running the migrations and the scheduler), I get:
Can't locate revision identified by 'xxxxxxx'
Why? Because the xx.py script no longer exists (the Docker image is built from the source control repo, which does not contain it), and xx is the value stored in the version_num column of the alembic_version table.
How should I approach and solve this problem?
The author's quick fix: drop the alembic_version table with the code below (inside the alembic/env.py script):
target_metadata = Base.metadata # for context
sql.execute('DROP TABLE IF EXISTS alembic_version', engine)
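A less destructive alternative, assuming the generated revision scripts can be committed to the repo the Docker image is built from: run alembic revision --autogenerate only in development, commit the resulting files under alembic/versions/, and have the deploy script run only:
alembic upgrade head
python run_scheduler.py
That way the xx.py script referenced by alembic_version is always present in the image.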

Why won't my database migrate in SQLAlchemy? [duplicate]

I'd like to make a migration for a Flask app. I am using Alembic.
However, I receive the following error.
Target database is not up to date.
Online, I read that it has something to do with this.
http://alembic.zzzcomputing.com/en/latest/cookbook.html#building-an-up-to-date-database-from-scratch
Unfortunately, I don't quite understand how to get the database up to date and where/how I should write the code given in the link.
After creating a migration, either manually or as --autogenerate, you must apply it with alembic upgrade head. If you used db.create_all() from a shell, you can use alembic stamp head to indicate that the current state of the database represents the application of all migrations.
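If you prefer to do the stamp from Python rather than the command line, here is a minimal sketch using Alembic's command API (assuming a standard alembic.ini in the project root):

from alembic import command
from alembic.config import Config

alembic_cfg = Config("alembic.ini")  # path to your Alembic configuration file
command.stamp(alembic_cfg, "head")   # mark the database as current without running migrations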
This worked for me:
$ flask db stamp head
$ flask db migrate
$ flask db upgrade
My situation is like this question. When I execute ./manage.py db migrate -m 'Add relationship', the error occurs like this:
alembic.util.exc.CommandError: Target database is not up to date.
So I checked the status of my migrations:
(venv) ]#./manage.py db heads
d996b44eca57 (head)
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
715f79abbd75
and found that the heads and the current are different!
I fixed it with these steps:
(venv)]#./manage.py db stamp heads
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running stamp_revision 715f79abbd75 -> d996b44eca57
And now the current revision is the same as the head:
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
d996b44eca57 (head)
And now I can run the migration again.
This can be solved in many ways:
1. To fix this error, delete the latest migration file (a Python file), then try to perform the migration afresh.
If the issue still persists, try these commands:
$ flask db stamp head # To set the revision in the database to the head, without performing any migrations. You can change head to the revision you need.
$ flask db migrate # To detect automatically all the changes.
$ flask db upgrade # To apply all the changes.
You can find more info at the documentation https://flask-migrate.readthedocs.io/en/latest/
I had to delete some of my migration files for some reason. Not sure why. But that fixed the problem, kind of.
One issue is that the database ends up getting updated properly, with all the new tables, etc, but the migration files themselves don't show any changes when I use automigrate.
If someone has a better solution, please let me know, as right now my solution is kind of hacky.
I too ran into different heads when I wanted to change one of the fields from string to integer, so first run:
$ flask db stamp head # to make the current revision the same as the head
$ flask db migrate
$ flask db upgrade
It's solved now!
To fix this error, delete the latest migration file (a Python file), then try to perform the migration afresh.
This can also happen if you, like myself, have just started a new project and are using an in-memory SQLite database (sqlite:///:memory:). If you apply a migration on such a database, the next time you want to, say, auto-generate a revision, the database will still be in its original state (empty), so Alembic will complain that the target database is not up to date. The solution is to switch to a persisted database.
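A minimal illustration of that switch (the URIs are just examples):

# In-memory database: lost when the process exits, so Alembic's alembic_version bookkeeping disappears with it.
SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'

# File-backed database: the alembic_version table persists between commands.
SQLALCHEMY_DATABASE_URI = 'sqlite:///app.db'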
I also had the same problem with flask db migrate.
I used
flask db stamp head
and then
flask db migrate
Try dropping all tables before executing the db upgrade command.
To solve this, I dropped (deleted) the tables in the migration and ran these commands:
flask db migrate
and
flask db upgrade

Django on Heroku: relation does not exist

I built a Django 1.9 project locally with sqlite3 as my default database. I have an application named Download which defines the DownloadedSongs table in models.py:
models.py
from __future__ import unicode_literals
from django.db import models

class DownloadedSongs(models.Model):
    song_name = models.CharField(max_length=255)
    song_artist = models.CharField(max_length=255)

    def __str__(self):
        return self.song_name + ' - ' + self.song_artist
Now, in order to deploy my local project to Heroku, I added the following lines at the bottom of my settings.py file:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
My application has a form with a couple of text fields, and on submitting that form, the data gets inserted into the DownloadedSongs table. Now, when I deployed my project on Heroku and tried submitting this form, I got the following error:
Exception Type: ProgrammingError at /download/
Exception Value: relation "Download_downloadedsongs" does not exist
LINE 1: INSERT INTO "Download_downloadedsongs" ("song_name", "song_a...
This is what my requirements.txt file looks like:
beautifulsoup4==4.4.1
cssselect==0.9.1
dj-database-url==0.4.1
dj-static==0.0.6
Django==1.9
django-toolbelt==0.0.1
gunicorn==19.6.0
lxml==3.6.0
psycopg2==2.6.1
requests==2.10.0
static3==0.7.0
Also, I did try to run the following commands as well:
heroku run python manage.py makemigrations
heroku run python manage.py migrate
However, the issue still persists. What seems to be wrong here?
Make sure your local migrations folder and its content are under git version control.
If not, add, commit & push them as follows (assuming you have a migrations folder under <myapp>, and your git remote is called 'heroku'):
git add <myapp>/migrations/*
git commit -m "Fix Heroku deployment"
git push heroku
Wait until the push is successful and you get the local prompt back.
Then log in to Heroku and execute the migrate command there.
To do this in one execution environment, do not launch these as individual heroku commands, but launch a bash shell and execute both commands in there: (do not type the '~$', this represents the Heroku prompt)
heroku run bash
~$ ./manage.py migrate
~$ exit
You must not run makemigrations via heroku run. You must run it locally, and commit the result to git. Then you can deploy that code and run those generated migrations via heroku run python manage.py migrate.
The reason is that heroku run spins up a new dyno each time, with a new filesystem, so any migrations generated in the first command are lost by the time the second command runs. But in any case, migrations are part of your code, and must be in version control.
As Heroku's dynos don't have a filesystem that persists across deploys, a file-based database like SQLite3 isn't going to be suitable. It's a great DB for development/quick prototypes, though. https://stackoverflow.com/a/31395988/784648
So between deploys your entire SQLite database is going to be wiped; you should move to a dedicated database when you deploy to Heroku. Heroku has a free tier for Postgres databases, which I'd recommend if you just want to test deployment to Heroku.
python manage.py makemigrations
python manage.py migrate
python manage.py migrate --run-syncdb
this worked for me.
I know this is old, but I had this issue and found this thread useful.
To sum up, the error can also appear when executing the migrations (which are supposed to create the needed relations in the DB), because recent versions of Django check your urls.py before running the migrations. In my case (and in many others', it seems), loading urls.py meant loading the views, and some views were class-based and had an attribute defined through get_object_or_404:
class CustomView(ParentCustomView):
    phase = get_object_or_404(Phase, code='C')
This is what was evaluated before the migrations actually ran, and it caused the error. I fixed it by turning my view's attribute into a property:
class CustomView(ParentCustomView):
    @property
    def phase(self):
        return get_object_or_404(Phase, code='C')
You'll know quite easily if this is the problem you are encountering, as the Traceback will point you towards the problematic view.
Also this problem might not appear in development because you have migrated before creating the view.
