Alembic migrations - script persistence between two deployments - python

I have a problem running automated migrations with the Alembic library (I use raw Alembic).
This is the setup of the application:
I have a scheduler (a Python script which calculates something and then stores it in the database) and a Flask REST API (which uses the data stored in the database by the scheduler to return an adequate response).
I deploy the app with a script that runs these three commands:
alembic revision --autogenerate
alembic upgrade head
python run_scheduler.py
After the initial deployment, the alembic_version table is created in the PostgreSQL database with an identifier value under the version_num column, and a migration script (let's call this script xx.py) is created in alembic/versions/.
When I redeploy the app (running the migrations and the scheduler again), I get:
"Can't locate revision identified by 'xxxxxxx'"
Why?
Because there is no xx.py script anymore (the Docker image is built from the source control repo, which does not contain it) and xx is the value under the version_num column in the alembic_version table.
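A minimal way to confirm this diagnosis, assuming SQLAlchemy is installed (the connection URL below is a placeholder for the real one):
from sqlalchemy import create_engine, text

# read the revision the database believes it is at, then compare it
# against the revision files actually present in alembic/versions/
engine = create_engine("postgresql://user:password@host/dbname")  # placeholder URL
with engine.connect() as conn:
    print(conn.execute(text("SELECT version_num FROM alembic_version")).scalar())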
How should I approach and solve this problem?

The author's quick fix: drop the alembic_version table from inside the alembic/env.py script, for example with SQLAlchemy:
from sqlalchemy import text  # in addition to the existing imports in env.py
target_metadata = Base.metadata  # for context
with engine.begin() as conn:
    conn.execute(text('DROP TABLE IF EXISTS alembic_version'))  # forget the missing revision

Related

Cannot connect to mysql backend in airflow [airflow remains connected to default sqlite db]

I have changed the configs below to connect Airflow to a MySQL database, but Airflow remains connected to the default SQLite database. Please see below the config I tried for connecting to the MySQL db.
1) config in airflow.cfg:
executor = LocalExecutor
sql_alchemy_conn = mysql+pymysql://root:12345678@localhost:3306/airflow
2) pip install PyMySQL (installed the PyMySQL package)
3) Installed the MySQL server on the Ubuntu machine where Airflow is running
But when I run "airflow db init", it still points to SQLite and no new tables are created in the MySQL db.
-> airflow db init
Output:
DB: sqlite:////root/airflow/airflow.db
[2022-01-29 11:04:32,833] {db.py:684} INFO - Creating tables
...
Initialization done
Note: in the output above it still shows "DB: sqlite:////root/airflow/airflow.db".
You can set configuration options in various ways. From https://airflow.apache.org/docs/apache-airflow/stable/howto/set-config.html (in this order):
set as an environment variable (AIRFLOW__CORE__SQL_ALCHEMY_CONN)
set as a command environment variable (AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD)
set as a secret environment variable (AIRFLOW__CORE__SQL_ALCHEMY_CONN_SECRET)
set in airflow.cfg
command in airflow.cfg
secret key in airflow.cfg
Airflow’s built in defaults
Since you say you've set it in airflow.cfg, I'm guessing AIRFLOW_HOME isn't configured, and it will assume your home directory for that. Are you running your Airflow in a different folder than your home directory? If so, you can configure AIRFLOW_HOME=[your airflow project dir] and it should pick up your airflow.cfg.
You can inspect the value of sql_alchemy_conn plus where it's configured in the Airflow UI -> Admin -> Configurations (requires AIRFLOW__WEBSERVER__EXPOSE_CONFIG=True). Scroll down past the default.cfg and the table shows each configuration's source.
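A quick way to check the same thing from Python (a sketch, assuming Airflow 2.x) is to ask Airflow which value it actually resolved through the precedence order above:
from airflow.configuration import conf

# prints the connection string Airflow ended up with after env vars,
# airflow.cfg and built-in defaults have been applied
print(conf.get("core", "sql_alchemy_conn"))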

Why won't my database migrate in SQLAlchemy? [duplicate]

I'd like to make a migration for a Flask app. I am using Alembic.
However, I receive the following error.
Target database is not up to date.
Online, I read that it has something to do with this.
http://alembic.zzzcomputing.com/en/latest/cookbook.html#building-an-up-to-date-database-from-scratch
Unfortunately, I don't quite understand how to get the database up to date and where/how I should write the code given in the link.
After creating a migration, either manually or as --autogenerate, you must apply it with alembic upgrade head. If you used db.create_all() from a shell, you can use alembic stamp head to indicate that the current state of the database represents the application of all migrations.
This worked for me:
$ flask db stamp head
$ flask db migrate
$ flask db upgrade
My situation is like this question. When I execute "./manage.py db migrate -m 'Add relationship'", the error occurs like this:
"alembic.util.exc.CommandError: Target database is not up to date."
So I checked the status of my migrations:
(venv) ]#./manage.py db heads
d996b44eca57 (head)
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
715f79abbd75
and found that the head and the current revision were different!
I fixed it with these steps:
(venv)]#./manage.py db stamp heads
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running stamp_revision 715f79abbd75 -> d996b44eca57
And now the current revision is the same as the head:
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
d996b44eca57 (head)
And now I can run the migration again.
This can be solved in many ways:
1) To fix this error, delete the latest migration file (a Python file), then try to perform a migration afresh.
2) If the issue still persists, try these commands:
$ flask db stamp head # To set the revision in the database to the head, without performing any migrations. You can change head to the required change you want.
$ flask db migrate # To detect automatically all the changes.
$ flask db upgrade # To apply all the changes.
You can find more info at the documentation https://flask-migrate.readthedocs.io/en/latest/
I had to delete some of my migration files for some reason. Not sure why. But that fixed the problem, kind of.
One issue is that the database ends up getting updated properly, with all the new tables, etc, but the migration files themselves don't show any changes when I use automigrate.
If someone has a better solution, please let me know, as right now my solution is kind of hacky.
I too ran into different heads when I wanted to change one of the fields from string to integer, so I first ran:
$ flask db stamp head # to make the current the same
$ flask db migrate
$ flask db upgrade
It's solved now!
To fix this error, delete the latest migration file (a Python file), then try to perform a migration afresh.
This can also happen if you, like myself, have just started a new project and are using an in-memory SQLite database (sqlite:///:memory:). If you apply a migration to such a database, the next time you want to, say, auto-generate a revision, the database will still be in its original (empty) state, so Alembic will complain that the target database is not up to date. The solution is to switch to a persisted database.
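For example, a minimal config change along those lines (the file name is just an example):
# use a file-backed SQLite database instead of an in-memory one,
# so the alembic_version table survives between runs
SQLALCHEMY_DATABASE_URI = "sqlite:///app.db"      # persisted on disk
# SQLALCHEMY_DATABASE_URI = "sqlite:///:memory:"  # wiped on every process start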
I also had the same problem with flask db migrate.
I used
flask db stamp head
and then
flask db migrate
Try to drop all tables before executing the db upgrade command.
To solve this, I dropped (deleted) the tables in the migration and ran these commands:
flask db migrate
and
flask db upgrade

Django on Heroku: relation does not exist

I built a Django 1.9 project locally with sqlite3 as my default database. I have an application named Download which defines the DownloadedSongs table in models.py:
models.py
from __future__ import unicode_literals
from django.db import models


class DownloadedSongs(models.Model):
    song_name = models.CharField(max_length=255)
    song_artist = models.CharField(max_length=255)

    def __str__(self):
        return self.song_name + ' - ' + self.song_artist
Now, in order to deploy my local project to Heroku, I added the following lines at the bottom of my settings.py file:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
My application has a form with a couple of text fields, and on submitting that form, the data gets inserted into the DownloadedSongs table. Now, when I deployed my project on Heroku and tried submitting this form, I got the following error:
Exception Type: ProgrammingError at /download/
Exception Value: relation "Download_downloadedsongs" does not exist
LINE 1: INSERT INTO "Download_downloadedsongs" ("song_name", "song_a...
This is what my requirements.txt file looks like:
beautifulsoup4==4.4.1
cssselect==0.9.1
dj-database-url==0.4.1
dj-static==0.0.6
Django==1.9
django-toolbelt==0.0.1
gunicorn==19.6.0
lxml==3.6.0
psycopg2==2.6.1
requests==2.10.0
static3==0.7.0
Also, I did try running the following commands:
heroku run python manage.py makemigrations
heroku run python manage.py migrate
However, the issue still persists. What seems to be wrong here?
Make sure your local migrations folder and its contents are under git version control.
If not, add, commit & push them as follows (assuming you have a migrations folder under <myapp>, and your git remote is called 'heroku'):
git add <myapp>/migrations/*
git commit -m "Fix Heroku deployment"
git push heroku
Wait until the push is successful and you get the local prompt back.
Then log in to Heroku and execute migrate there.
To do this in one execution environment, do not launch these as individual heroku commands; instead launch a bash shell and execute both commands in there (do not type the '~$', it represents the Heroku prompt):
heroku run bash
~$ ./manage.py migrate
~$ exit
You must not run makemigrations via heroku run. You must run it locally, and commit the result to git. Then you can deploy that code and run those generated migrations via heroku run python manage.py migrate.
The reason is that heroku run spins up a new dyno each time, with a new filesystem, so any migrations generated in the first command are lost by the time the second command runs. But in any case, migrations are part of your code, and must be in version control.
As Heroku's dynos don't have a filesystem that persists across deploys, a file-based database like SQLite3 isn't going to be suitable. It's a great DB for development/quick prototypes, though. https://stackoverflow.com/a/31395988/784648
So between deploys your entire SQLite database is going to be wiped; you should move to a dedicated database when you deploy to Heroku. Heroku has a free tier for Postgres databases, which I'd recommend if you just want to test a deployment.
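A minimal settings sketch along those lines, assuming dj-database-url is installed (the fallback URI is just an example):
import dj_database_url

# use Heroku's DATABASE_URL (Postgres) when present,
# and fall back to a local SQLite file for development
DATABASES = {
    'default': dj_database_url.config(default='sqlite:///db.sqlite3')
}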
python manage.py makemigrations
python manage.py migrate
python manage.py migrate --run-syncdb
This worked for me.
I know this is old, but I had this issue and found this thread useful.
To sum up, the error can also appear when executing the migrations (which are supposed to create the needed relations in the DB), because recent versions of Django check your urls.py before doing the migrations. In my case - and in many others', it seems - loading urls.py meant loading the views, and some views were class-based and had an attribute defined through get_object_or_404:
class CustomView(ParentCustomView):
    phase = get_object_or_404(Phase, code='C')
This is what was evaluated before the migrations actually ran, and it caused the error. I fixed it by turning my view's attribute into a property:
class CustomView(ParentCustomView):
    @property
    def phase(self):
        return get_object_or_404(Phase, code='C')
You'll know quite easily if this is the problem you are encountering, as the Traceback will point you towards the problematic view.
Also this problem might not appear in development because you have migrated before creating the view.

Run Alembic migrations on Google App Engine

I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL.
On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how do I run the migrations on it?
Whitelist your local machine's IP: https://console.cloud.google.com/sql/instances/INSTANCENAME/access-control/authorization?project=PROJECTNAME
Create an user: https://console.cloud.google.com/sql/instances/INSTANCENAME/access-control/users?project=PROJECTNAME
Assign an external IP address to the instance: https://console.cloud.google.com/sql/instances/INSTANCENAME/access-control/ip?project=PROJECTNAME
Use the following SQLAlchemy connection URI: SQLALCHEMY_DATABASE_URI = 'mysql://user:pw@ip:3306/DBNAME'
Remember to release the IP later as you are charged for every hour it's not used
It's all just code you can run, so you can create an admin endpoint with which to effect an upgrade:
@app.route('/admin/dbupgrade')
def dbupgrade():
    from flask_migrate import upgrade, Migrate
    migrate = Migrate(app, db)
    upgrade(directory=migrate.directory)
    return 'migrated'
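A hypothetical way to trigger it once after a deploy (the URL is a placeholder):
import requests

# one-off call to the upgrade endpoint defined above
print(requests.get("https://your-app.appspot.com/admin/dbupgrade").text)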
(Dropwizard, for instance, caters nicely for such admin things via tasks)
You can whitelist the ip of your local machine for the Google Cloud SQL instance, then you run the script on your local machine.

Local and heroku db get out of sync while migrating using alembic

I'm building an app with Flask and Angular hosted on Heroku. I have a problem migrating the Heroku PostgreSQL database. I'm using Flask-Migrate, which is a thin wrapper around Alembic. Locally everything is fine. I get an exception when I run heroku run upgrade, which runs the alembic upgrade command.
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 19aeffe4063d, empty message
Traceback (most recent call last):
File "manage.py", line 13, in <module>
manager.run()
...
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) relation "users" already exists
'\nCREATE TABLE users (\n\tid SERIAL NOT NULL, \n\tusername VARCHAR(32), \n\tpassword_hash VARCHAR(128), \n\tPRIMARY KEY (id)\n)\n\n' {}
Simply put, Alembic is trying to run from the first migration, which creates the db. I tried to explicitly set the proper revision with heroku run python manage.py db upgrade +2 or with a revision number, but the exception is the same.
lukas$ heroku run python manage.py db current
Running `python manage.py db current` attached to terminal... up, run.1401
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
Current revision for postgres://...: None
My guess was that due to Heroku's ephemeral filesystem the revision is not stored, but that shouldn't be a problem if I explicitly set the revision, right? :)
How can I set the current revision to the head?
Here is relevant code:
Procfile:
web: gunicorn server:app
init: python manage.py db init
upgrade: python manage.py db upgrade
models.py
db = SQLAlchemy()

ROLE_USER = 0
ROLE_ADMIN = 1


class User(db.Model):
    __tablename__ = "users"
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(32), unique=True, index=True)
    email = db.Column(db.String(255), unique=True)
    password_hash = db.Column(db.String(128))
    role = db.Column(db.SmallInteger, default=ROLE_USER)
The init command that you put in the Procfile creates a brand new Alembic repository, which is something you only do once on your development machine. When you deploy to a new machine all you need to do is run the upgrade command to get the database created and updated to the last revision.
Alembic and Flask-Migrate have a command called stamp that can help you fix this problem. With stamp you can tell Alembic to write the revision of your choice to the database, without touching the database itself.
For example, creating a database from scratch when there are a lot of migrations can take a long time if Alembic has to go through all the migrations one by one. Instead, you can create the database with db.create_all() and then run:
$ ./manage.py db stamp head
and with this the database is marked as updated.
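Done from a Python shell instead of the command line, a sketch of the same idea could look like this (assuming a Flask-Migrate setup; the server import path is hypothetical, loosely based on the Procfile above):
from flask_migrate import stamp
from server import app, db  # hypothetical module exposing the Flask app and the SQLAlchemy db

with app.app_context():
    db.create_all()          # build the full schema straight from the models
    stamp(revision='head')   # record the latest migration as applied, without running it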
Also, at some point I favored the idea of putting maintenance commands in the Procfile, but these days I only put services there, so I would only leave the web line there. To upgrade the database I think it is more predictable to run the command explicitly:
$ heroku run python manage.py db upgrade
