flask-migrate: can't upgrade database because "table does not exist" - python

I am working on a Flask application using SQLAlchemy with a Postgres database, and I am migrating my database with Flask-Migrate.
I had to change the name of one of my tables, and when trying to migrate I got this error:
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table category_announcement_date because other objects depend on it
DETAIL: constraint announcement_dates_id_fkey on table announcement_dates depends on table category_announcement_date
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: '\nDROP TABLE category_announcement_date']
I did not know how to tell Flask-Migrate about this issue, so I had the great idea of doing it manually: I went into psql and dropped the table with CASCADE, as the error message suggested. That all worked fine, but now I can't finish the migration. When running the upgrade I get
python manage.py db upgrade
...
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) table "category_announcement_date" does not exist
which is presumably because I just dropped the table manually.
Does anybody know how I can get out of this mess?
thanks carl

OK, I noticed that deleting the version files and repeating the migrate does the trick.
cheers
fl
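For anyone in the same mess: rather than deleting version files, you can usually edit the autogenerated revision so the drop cascades, and, if the table is already gone, tell Alembic the database matches the latest revision. This is a sketch; op.execute and the stamp command are standard Alembic/Flask-Migrate features, but the revision body shown here is assumed:

# in the generated migration file, replace the plain drop_table call
from alembic import op

def upgrade():
    # CASCADE drops the dependent foreign-key constraint as well
    op.execute('DROP TABLE category_announcement_date CASCADE')

If the manual drop was the only change left to apply, python manage.py db stamp head marks the database as current without re-running the DDL.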

Related

SQLAlchemy: update database schema

I'm using SQLAlchemy for a project. I've got a system running with some important data, but I would like to update my schema to build new features. What's the best practice here?
Can the schema be updated without dropping all tables and recreating the database? (Currently I am running into trouble with
manage.py migrate
not working, i.e. not picking up new changes to table fields.)
Thanks a lot in advance,
C
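If migrations are not picking up field changes, the usual first check is that every model module is imported before the migration command runs, since autogenerate can only diff tables it sees on the metadata. Assuming a Flask-Migrate setup like the one in the question above (a sketch; the manage.py wiring follows flask-migrate's docs), the in-place upgrade cycle is:

python manage.py db migrate -m "add new fields"   # autogenerate a revision from model changes
python manage.py db upgrade                       # apply it as incremental ALTERs

Applied this way, schema changes do not drop tables, so existing data stays put.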

Heroku Database Migration Error (django.db.utils.DataError)

I made a website using Django and hosted it on Heroku. Everything was working fine until I tried to alter one of the columns in the database. I changed the column from max_length=1000 to max_length=60, forgetting that I had already entered values longer than 60 characters, so I got a django.db.utils.DataError saying the value is too long for 60.
To solve this, I followed one of the solutions on Stack Overflow: remove all the migration files from the migrations folder except __init__.py and then run a fresh migration. That worked like a charm, and I shortened the offending values so they fit the new limit.
But now, when I try to create a new column in the database, I get django.db.utils.ProgrammingError: column "column_name" of relation "table" does not exist.
Oddly, when I run my app on the local server it works fine and even lets me create the new column, but when I migrate against the Heroku database it raises django.db.utils.ProgrammingError.
Make a backup of your server data, clear all the data in your database, delete all the migration files just as you read, and then run makemigrations and migrate again.
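A less destructive alternative to wiping the migrations: shrink the data before shrinking the column, inside one migration. This is a sketch with hypothetical app, model, and field names; RunPython and AlterField are standard Django migration operations:

from django.db import migrations, models

def truncate_titles(apps, schema_editor):
    # use the historical model, not a direct import
    Post = apps.get_model('blog', 'Post')  # hypothetical app and model
    for post in Post.objects.all():
        if len(post.title) > 60:
            post.title = post.title[:60]
            post.save(update_fields=['title'])

class Migration(migrations.Migration):
    dependencies = [('blog', '0002_previous')]  # hypothetical
    operations = [
        # first make every row fit the new limit...
        migrations.RunPython(truncate_titles, migrations.RunPython.noop),
        # ...then narrowing the column is safe
        migrations.AlterField(
            model_name='post',
            name='title',
            field=models.CharField(max_length=60),
        ),
    ]

Running heroku run python manage.py migrate then never hits the DataError, because no oversized value remains when the ALTER executes.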

Django ProgrammingError must appear in the GROUP BY clause or be used in an aggregate function

Given any basic model, say:
from django.db import models
from django.utils.translation import ugettext_lazy as _

class Post(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    title = models.CharField(_('Title'), max_length=100)
    content = models.TextField(_('Content html'), max_length=65000)
    # SET_NULL requires null=True on the field
    author = models.ForeignKey('user.User', on_delete=models.SET_NULL, null=True)
A query like Post.objects.annotate(Count('id')) (or any field, any annotate()) fails with the following error:
ProgrammingError: column "post.created" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "post"."id", "post"."created", "post"."ti...
Using django 1.11.16, and postgres 9.4.19.
As I read in another Stack Overflow question, I tried different Django and Postgres versions: Django 2.0 and 2.1.2 with Postgres 9.5 give the same error. Reading around, I've seen this might be a problem at the SQL level, but I'm only hitting it on one server running Ubuntu 18.04 (bionic). Running the query locally on Ubuntu 16.04, with Django 1.11.16 or any version above and Postgres 9.4 or above, works fine. So the problem may be related to some low-level libraries. I'm not running complex queries; any simple annotate() with Django 1.11+ fails on Ubuntu 18.04 with Postgres 9.4 or 9.5.
[UPDATE]
This might be useful if you find yourself in this scenario with no clue what's going on: verify that the table in question has its indexes and constraints in place. My problem turned out to be that the posts table had no PRIMARY KEY definition, nor any other constraint, because of a failed pg_restore that restored all the data but only some of the schema definitions (yes, you read that right: some schema definitions were simply missing, no idea why). That matters here because PostgreSQL only allows the non-grouped columns Django selects when they are functionally dependent on the GROUP BY column, which requires the primary key. Instead of trying to debug what happened with pg_restore in the first place, I ran an initial python manage.py migrate on an empty DB so the schema was created correctly this time, verified it (psql -d <db_name> -c '\d posts'), and then ran pg_restore again with the --data-only and --disable-triggers flags. With that, the schema and data were properly restored and the query worked.
That error message appears because PostgreSQL will not guess what to do with non-grouped columns when there is an aggregate function in the query. This is one of the cases where the Django ORM hand-waves too much and lets us shoot ourselves in the foot.
I ran a test on one of my projects that uses Django 2.1, and on another that uses 1.11, with Model.objects.annotate(Count('id')), and had no issues. If you post the full QuerySet, people will be able to help you further.
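One quick diagnostic here: print the SQL Django actually generates and compare the SELECT list with the GROUP BY clause; str(queryset.query) exposes it (a sketch using the Post model above):

from django.db.models import Count

qs = Post.objects.annotate(Count('id'))
print(str(qs.query))  # shows which columns Django puts in GROUP BY

Django groups such a query by the primary key and lets Postgres infer the other columns from it, which is why a table restored without its PRIMARY KEY (as in the update above) produces exactly this error.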
I ran into this problem and solved it:
you should append .order_by("xxx") to your ORM query.
before:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution'))
after:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution')).order_by('-counts')
It worked for me; I hope it helps you too.
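The likely reason the extra order_by() helps: when a model declares a default ordering in its Meta, Django adds the ordering column to the GROUP BY of a values().annotate() query, which changes the grouping. Calling order_by() explicitly replaces that default, even with no arguments (a sketch against the Post model above):

from django.db.models import Count

counts = (
    Post.objects
    .values('author')         # GROUP BY author only
    .annotate(n=Count('id'))  # COUNT(id) per author
    .order_by()               # clear any default ordering so it cannot leak into GROUP BY
)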

Clear postgresql and alembic and start over from scratch

Everything I found about this via searching was either wrong or incomplete in some way. So, how do I:
delete everything in my postgresql database
delete all my alembic revisions
make it so that my database is 100% like new
This works for me:
1) Drop all the tables the same way you created them: wherever you call create_all (presumably db.create_all or metadata.create_all rather than the session), call drop_all.
2) Delete the migration files generated by Alembic.
3) Run create_all and the initial migration generation again.
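Spelled out for Flask-SQLAlchemy, where create_all and drop_all live on the db object rather than the session (a minimal sketch; the application module name is assumed):

from myapp import db  # hypothetical module exposing the Flask-SQLAlchemy handle

db.drop_all()    # drop every table known to the metadata
db.create_all()  # recreate them from the current models

With plain SQLAlchemy the equivalents are Base.metadata.drop_all(engine) and Base.metadata.create_all(engine).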
If you want to do this without dropping your migration files, the steps are:
Drop all the tables in the db that are to be recreated.
Truncate the alembic_version table (so it will start from the beginning) - this is where the most recent version is kept.
Then you can run:
alembic upgrade head
and everything will be recreated. I ran into this problem when my migrations got in a weird state during development and I wanted to reset alembic. This worked.
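Both steps can also be scripted from Python so the reset is repeatable (a sketch; the connection URL and table names are assumptions):

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://localhost/mydb')  # hypothetical URL
with engine.begin() as conn:
    conn.execute(text('DROP TABLE IF EXISTS announcement_dates'))  # repeat per app table
    conn.execute(text('TRUNCATE alembic_version'))  # forget the recorded revision

With alembic_version emptied, alembic upgrade head replays every revision from the start.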
No idea how to mess with alembic, but for the database, you can just log into the SQL console and use DROP DATABASE foo.
Or were you wanting to clear out all the data, but leave the tables there? If so, Truncating all tables in a postgres database has some good answers.

Issue migrating in Django with South (Mysql)

I just had an issue where South seems to be confused between two migrations.
I had a table:
class VulnFuzz(models.Model):
    fuzzVector = models.CharField(max_length=200)
    FkToVulnTbl = models.ForeignKey(Vuln)
I wanted to change that table to something else:
class VulnFizz(models.Model):
    fizzVector = models.CharField(max_length=200)
    FkToVulnTbl = models.ForeignKey(Vuln)
The problem is, when I ran:
python manage.py schemamigration Scanner --auto
it then says to migrate, so I do, using:
python manage.py migrate Scanner
It says:
Migrating forwards to 0041_auto__del_field_vulnfizz_FkToVulnTbl.
> Scanner:0032_auto__chg_field_vulnfuzz_FkToVulnTbl__del_index_vulnfuzz_FkToVulnTbl
FATAL ERROR - The following SQL query failed: DESCRIBE `Scanner_vulnfuzz`
The error was: (1146, "Table 'vulnawarefinal.scanner_vulnfuzz' doesn't exist")
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = CREATE INDEX `Scanner_vulnfuzz_30a95dc2` ON `Scanner_vulnfuzz` (`FkToVulnTbl_id`);
Before changing the tables, I had already tried running the suggested statement:
CREATE INDEX `Scanner_vulnfuzz_30a95dc2` ON `Scanner_vulnfuzz` (`FkToVulnTbl_id`)
But that didn't fix it.
I am at a loss now. How should I go about fixing this, or should I redo the whole DB?
Thank you
You're at migration 31 and you want to migrate to 41. The error occurs at migration 32, so the problem is not with the code change you describe and its associated migration 41.
Migration 32 expects table Scanner_vulnfuzz to be there but it's not. In other words: your database is not in the state that South expects it to be.
You may be able to recover by migrating back to the migration that creates the table and then running all the migrations again. Otherwise you may have to recreate the database and run syncdb and migrate from the beginning.
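Concretely, that recovery looks like this with South (a sketch; 0031 is assumed to be the last migration that matches the real database state):

python manage.py migrate Scanner 0031   # walk back to the last known-good migration
python manage.py migrate Scanner        # then run forward again to 0041

If the backward run fails because the schema no longer matches, South's --fake flag marks migrations as applied or unapplied without executing their SQL, which lets you realign South's records with reality before migrating forward.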
