I just had an issue where South seems to be confused between two migrations.
Last (4) migrations:
I had a table:
class VulnFuzz(models.Model):
    fuzzVector = models.CharField(max_length=200)
    FkToVulnTbl = models.ForeignKey(Vuln)
I wanted to change that table to something else:
class VulnFizz(models.Model):
    fizzVector = models.CharField(max_length=200)
    FkToVulnTbl = models.ForeignKey(Vuln)
The problem is, when I ran:
python manage.py schemamigration Scanner --auto
it then tells me to migrate, so I run:
python manage.py migrate Scanner
It says:
Migrating forwards to 0041_auto__del_field_vulnfizz_FkToVulnTbl.
> Scanner:0032_auto__chg_field_vulnfuzz_FkToVulnTbl__del_index_vulnfuzz_FkToVulnTbl
FATAL ERROR - The following SQL query failed: DESCRIBE `Scanner_vulnfuzz`
The error was: (1146, "Table 'vulnawarefinal.scanner_vulnfuzz' doesn't exist")
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = CREATE INDEX `Scanner_vulnfuzz_30a95dc2` ON `Scanner_vulnfuzz` (`FkToVulnTbl_id`);
I had already tried running the suggested statement before changing the tables:
CREATE INDEX `Scanner_vulnfuzz_30a95dc2` ON `Scanner_vulnfuzz` (`FkToVulnTbl_id`)
But that didn't fix it.
I am at a loss now; how should I go about fixing this? Or should I redo the whole DB?
Thank you
You're at migration 31 and you want to migrate to 41. The error occurs at migration 32, so the problem is not with the code change you describe or its associated migration 41.
Migration 32 expects the table Scanner_vulnfuzz to be there, but it's not. In other words: your database is not in the state that South expects it to be in.
You may be able to recover by migrating back to the migration that creates the table and then running all the migrations forwards again. Otherwise you may have to recreate the database and run syncdb and migrate again from the beginning.
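South accepts a target migration number, so the roll-back-and-retry route might look like the following sketch (0031 as the roll-back target is an assumption; pick the migration that actually created Scanner_vulnfuzz):

```shell
# Roll back to a migration from before the table went missing
# (0031 is a guess based on the output above):
python manage.py migrate Scanner 0031
# Then migrate forwards again:
python manage.py migrate Scanner
```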
Related
I made a website using Django and hosted it on Heroku. Everything was working fine until I tried to alter one of the columns in the database. The thing is, I altered the column from max_length=1000 to max_length=60, but I forgot that I had already entered values longer than 60, so I got a django.db.utils.DataError saying the value is too long for 60.
To solve this, I followed one of the solutions available on StackOverflow: remove all the migration files from the migrations folder except __init__.py and then run a fresh migration. I tried that, it worked like a charm, and I changed the values so they fit the max limit.
But now, when I try to create a new column in the database, it says: django.db.utils.ProgrammingError: column "column_name" of relation "table" does not exist
Actually, when I run my app on the local server it works fine and even lets me create a new column in my database, but when I try to migrate to the Heroku database it shows django.db.utils.ProgrammingError.
Make a backup of your server data. Clean all the data in your database. Delete all the migration files, just as you read, and then run makemigrations and migrate again.
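If the data matters, an alternative to wiping everything is to clip the oversized values down to the new limit before re-running the migration. The clipping itself is plain slicing, sketched here outside any Django model (the limit of 60 comes from the question; `clip_rows` is a hypothetical helper):

```python
def clip_rows(values, limit=60):
    """Clip each string to the new max_length; return (clipped, how many changed)."""
    clipped = [v if len(v) <= limit else v[:limit] for v in values]
    return clipped, sum(1 for old, new in zip(values, clipped) if old != new)

rows = ["short", "x" * 80]           # one value exceeds the new max_length=60
clipped, changed = clip_rows(rows)
print(changed)                       # 1
print(max(len(v) for v in clipped))  # 60
```

In a real project this loop would run over the model's rows (for example inside a RunPython data migration) before the ALTER that shrinks the column.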
I have two issues both of which are inter-related
Issue #1
My app has an online Postgres Database that it is using to store data. Because this is a Dockerized app, migrations that I create no longer appear on my local host but are instead stored in the Docker container.
All of the questions I've seen so far do not seem to have issues with making migrations and adding the unique constraint to one of the fields within the table.
I wrote shell code to run a Python script that prints the contents of the migrations file to the command prompt window. I was able to obtain the migrations file that was to be applied, and I added a row to the django_migrations table to record it. I then ran makemigrations and migrate, but it said there were no changes to apply (which leads me to believe that the row I added to the database should only have been created automatically by Django after it detected the migrations on its own, instead of me specifying the migrations file and asking it to make the changes). The issue is that the new migrations still detect the following change:
Migrations for 'mdp':
db4mdp/mdp/migrations/0012_testing.py
- Alter field mdp_name on languages
Despite detecting this apparent 'change', I get the following error,
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "mdp_mdp_mdp_fullname_281e4228_uniq" already exists
I have already checked on my Postgres server using pgAdmin 4 whether the constraint has actually been applied, and it has, with the name next to "relation" as specified above. So why does Django still detect this as a change to be made? The thing is, if I now remove the new migrations file I created in my Python directory, it will probably run (since the changes have 'apparently' been made in the database), but I won't have the migrations file to keep track of the changes. I don't know if I need to keep the migrations around now that I'm using an online database, though. I will not be rolling back any changes I make, nor will I be making changes too often. This is just a one/two-time thing, but I want to resolve the error.
Issue #2
The reason I used 'apparently' in my issue above is that even though the constraints section in my public schema shows that the constraints have been applied, for some reason, when I try to create a new entry in my table with a non-unique string in the field I've defined as unique, it allows its creation anyway.
You never add anything manually to the django_migrations table. Let Django do it. If it is not doing it, no matter what, your code is not production ready.
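For completeness: when a migration's changes are already present in the database, the supported way to record that fact (instead of inserting into django_migrations by hand) is Django's --fake flag, e.g.:

```shell
# Mark migration 0012 of the mdp app as applied without running its SQL
# (app label and number taken from the question):
python manage.py migrate mdp 0012 --fake
```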
I understand that you are doing your development inside Docker. When you do that, you should mount your Docker volume to a local volume. Since you have not mounted it, your migrations will not show up locally.
Refer to Volumes in the Docker documentation. It should resolve your issues.
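As a sketch of such a mount, a docker-compose service can bind the project directory into the container so that migration files generated inside it also appear locally (service name and container path are assumptions):

```yaml
services:
  web:
    build: .
    volumes:
      - .:/app   # mount the project dir; /app must match the image's WORKDIR
```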
For anyone trying to find an alternate solution to this problem other than mounting volumes due to time constraints, this answer might help, but deosha's is still the correct way to go about it. I fixed the problem by deleting all my tables and the rows corresponding to my specific app's migrations (it isn't necessary to delete the auth tables etc., because you won't be deleting the rows corresponding to those in the django_migrations table). Following this, I used the following in the shell script called by my Dockerfile:
python manage.py makemigrations --name testing
python testing_migrations.py
It needs to be named for the next step. After this line, I ran the Python script testing_migrations, which contains the following code:
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
migrations_file = os.path.join(BASE_DIR, '<path_to_new_migration_file>')

with open(migrations_file, 'r') as file:
    print(file.read())
Typically the first migration created for the app will be /0001_testing.py (which is why naming it was necessary earlier). Since the contents of this file are visible to the container while it is up and running, you will be able to print them. Then run the migrate command. This creates a row in the django_migrations table that makes it appear to Django that the migration has been applied. However, on your local machine this migrations file doesn't exist. So copy the contents of the file from the print statement above, put them into a .py file with the same name as mentioned above, and save it in the migrations folder of the app on your local device.
You can follow this method for all successive migrations by repeating the process and incrementing the number in the testing_migrations file as required.
Squashing the migrations once you're done making the table will help. If you're doing this all in development and have no requirement to roll back the changes to the database schema, then just put this into production after deleting all the migration files and the rows in your django_migrations table corresponding to your app, as was done initially. Drop your tables, let your first new migrations file re-create them, and then import your data once again.
This is not the recommended method. Use deosha's if you're not on a time crunch.
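For reference, the squashing mentioned above is a built-in management command; assuming the app is mdp and migrations 0001 through 0012 exist, it would look like:

```shell
# Collapse migrations 0001-0012 of mdp into a single migration:
python manage.py squashmigrations mdp 0001 0012
```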
Given any basic model, say:
class Post(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    title = models.CharField(_('Title'), max_length=100)
    content = models.TextField(_('Content html'), max_length=65000)
    # SET_NULL requires the field to be nullable:
    author = models.ForeignKey('user.User', null=True, on_delete=models.SET_NULL)
A query like Post.objects.annotate(Count('id')) (or any field, any annotate()) fails with the following error:
ProgrammingError: column "post.created" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "post"."id", "post"."created", "post"."ti...
Using django 1.11.16, and postgres 9.4.19.
As I read here in another StackOverflow question, I tried different Django and Postgres versions; using Django 2.0 or 2.1.2 with Postgres 9.5, same error! Reading around, I've seen that it might be a problem related to SQL, but I'm having this problem only on one server running Ubuntu 18.04 (bionic). Running the query locally on Ubuntu 16.04, with Django 1.11.16 or any version above that and Postgres 9.4 or above, works fine on my local system. So the problem might actually be related to some low-level libraries, perhaps. I'm not running complex queries; any simple annotate() with Django 1.11+ fails on Ubuntu 18.04 with Postgres 9.4 or 9.5.
[UPDATE]
Might be useful if you find yourself in this scenario with no clue what's going on: verify that the table in question has its indexes created. My problem turned out to be that the posts table had no PRIMARY KEY definition nor any other constraint, after a failed pg_restore which restored all the data but only some of the schema definitions (yes, you read that right, some schema definitions were missing, no idea why). Instead of trying to debug what happened with pg_restore in the first place, I ran an initial python manage.py migrate on an empty DB so the schema was correctly created this time, verified it (psql -d <db_name> -c '\d posts'), then ran pg_restore again with the --data-only and --disable-triggers flags. So I finally got the schema and data properly restored, and the query worked.
That error message appears because PostgreSQL will not make a guess as to what to do with non-grouped columns when there's an aggregate function in the query. This is one of the cases where the Django ORM handwaves too much and allows us to shoot ourselves in the foot.
I ran a test on my projects, one using Django 2.1 and another using 1.11, with Model.objects.annotate(Count('id')) and had no issues. If you post the full QuerySet, people will be able to help you further.
I met this problem and solved it:
you should add .order_by(...) to the end of your query! (An explicit order_by() overrides any default Meta.ordering on the model, whose field would otherwise be pulled into the GROUP BY clause.)
before:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution'))
after:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution')).order_by('-counts')
It worked for me; I hope it can help you too.
I have 6 models in two files (benef and petition), in addition to the user information, with some interdependencies. When I changed the models there was an error, so I thought of starting afresh and want to drop all the tables.
I ran sqlflush and sqlclear with the following results.
The sqlflush result is:
BEGIN;
DELETE FROM "django_admin_log";
DELETE FROM "auth_permission";
DELETE FROM "auth_group";
DELETE FROM "auth_group_permissions";
DELETE FROM "django_session";
DELETE FROM "auth_user_groups";
DELETE FROM "auth_user_user_permissions";
DELETE FROM "benef_beneficiary_information";
DELETE FROM "petition_employer";
DELETE FROM "petition_job";
DELETE FROM "nashvegas_migration";
DELETE FROM "benef_beneficiary";
DELETE FROM "auth_user";
DELETE FROM "benef_beneficiaryname";
DELETE FROM "petition_petition";
DELETE FROM "django_content_type";
COMMIT;
Finished "C:\pyProjs\immiFile\manage.py sqlflush" execution.
The sqlclear benef result is:
BEGIN;
DROP TABLE "benef_beneficiary_information";
DROP TABLE "benef_beneficiary";
DROP TABLE "benef_beneficiaryname";
COMMIT;
Finished "C:\pyProjs\immiFile\manage.py sqlclear benef" execution.
The sqlclear petition result is:
BEGIN;
DROP TABLE "petition_petition";
DROP TABLE "petition_job";
DROP TABLE "petition_employer";
COMMIT;
Finished "C:\pyProjs\immiFile\manage.py sqlclear petition" execution.
But then when I run the project and go to the admin, I still see the old tables, and when I click on them I get the error related to the field, which was originally caused by the model change. The data is not relevant.
OperationalError at /admin/benef/beneficiary/
no such column: benef_beneficiary.last_edited_by_id
I want to start afresh. What is the solution?
I am using Django 1.8 and Python 2.7
From the documentation:
sqlflush: Prints the SQL statements that would be executed for the flush command.
It just prints the statements that would be executed; it doesn't touch the database (the same goes for sqlclear). You need to use flush instead.
Also note this from the documentation on flush:
Removes all data from the database and re-executes any
post-synchronization handlers. The table of which migrations have been
applied is not cleared.
If you would rather start from an empty database and re-run all
migrations, you should drop and recreate the database and then run
migrate instead.
I want to start afresh. What is the solution?
The sure fire way, which will delete everything (including your data):
Shut down the application (if you are using runserver, make sure it's stopped)
Delete the database from the database server. If you are using sqlite, simply delete the db file.
Delete all migration directories created under your app(s).
Type ./manage.py makemigrations
Type ./manage.py migrate
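For a default SQLite setup, those steps might look like the following Unix-shell sketch (the question's paths suggest Windows, where del and rmdir are the equivalents; benef and petition are the apps named above, and db.sqlite3 is Django's default file name):

```shell
rm db.sqlite3                                # delete the SQLite database file
rm -rf benef/migrations petition/migrations  # delete the apps' migration dirs
python manage.py makemigrations benef petition
python manage.py migrate
```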
I am working on a Flask application using SQLAlchemy with a Postgres database. I am migrating my database with Flask-Migrate.
I had to change the name of one of my tables in the database, and when trying to migrate (flask-migrate) I got an error:
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table category_announcement_date because other objects depend on it
DETAIL: constraint announcement_dates_id_fkey on table announcement_dates depends on table category_announcement_date
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: '\nDROP TABLE category_announcement_date']
I did not know how to tell flask-migrate about this issue, so I came up with the great idea of doing it manually: I went into psql and dropped the table with CASCADE, as suggested by the error message. That all worked fine, but now I can't finish the migration. When running the upgrade I get:
python manage.py db upgrade
...
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) table "category_announcement_date" does not exist
which is probably because I just dropped the table manually?
Does anybody know how I can get out of this mess?
thanks carl
OK, I noticed that deleting the version files and repeating the migration does the trick.
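With Flask-Migrate's default layout, that amounts to something like the following (using the manage.py entry point from the question; paths are Flask-Migrate defaults):

```shell
# Remove the generated revision files, then regenerate and apply:
rm migrations/versions/*.py
python manage.py db migrate
python manage.py db upgrade
```

If Alembic then complains about a revision it can't locate, the alembic_version table in the database may also need clearing before migrating again.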
cheers
fl