I have made a website using Django and hosted it on Heroku. Everything was working fine until I tried to alter one of the columns in the database. I changed the column from max_length=1000 to max_length=60, but I forgot that I had already entered values longer than 60 characters, so I got a django.db.utils.DataError saying the value is too long for 60.
To solve this, I followed one of the solutions on Stack Overflow: remove all the migration files from the migrations folder except __init__.py and then run a fresh migration. I tried that, it worked like a charm, and I shortened the offending values so they fit the new max limit.
But now, when I try to create a new column in the database, it fails with django.db.utils.ProgrammingError: column "column_name" of relation "table" does not exist.
When I run the app on the local server it works fine and even lets me create a new column in my database, but when I try to migrate to the Heroku database it throws the django.db.utils.ProgrammingError.
Make a backup of your server data, clean all the data in your database, delete all the migration files (except __init__.py) just as you read, and then run makemigrations and migrate again.
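A rough sketch of those steps with Django's built-in commands (backup.json and your_app are placeholder names, and --fake-initial is only needed if the tables themselves still exist):
python manage.py dumpdata > backup.json
python manage.py flush
rm your_app/migrations/0*.py
python manage.py makemigrations
python manage.py migrate --fake-initial
If you later restore the backup with loaddata, trim any values that no longer fit the new max_length first.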
I have a Python script that creates the tables and specifies the values that should be added to them. I use the following two commands to do the migration.
python manage.py makemigrations
python manage.py migrate
This is what happens:
When I use the above commands initially (when no tables exist yet), all the tables are created with the correct values in them.
After doing the first migration (creating the tables and adding values), if I add a new column to the table in the script, and then run the above commands, the new column is added successfully to the table.
However, after doing the first migration (creating the tables and adding values), if I change the values to be added to the table (in the script) and then run the above commands, I get the output "No changes detected", and the values are not updated either.
How can I achieve the third step mentioned above? I am a newbie to Django, so please help me out.
Both commands only detect schema changes, such as changing a column name. If you only change the values inside a column, nothing shows up in the terminal. You can check the values inside the table via the admin login.
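If you want changed values to be applied through the migration system rather than by re-running your script, you can write a data migration by hand. A minimal sketch, assuming an app called myapp with a Book model (all names here are placeholders):
from django.db import migrations


def update_values(apps, schema_editor):
    # Use the historical model, not a direct import from your models module.
    Book = apps.get_model('myapp', 'Book')
    Book.objects.filter(title='Old title').update(title='New title')


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(update_values, migrations.RunPython.noop),
    ]
You can create the empty file for this with python manage.py makemigrations myapp --empty and then fill in the RunPython operation; migrate will run it in order with the schema migrations.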
I added my models and created a migration file using makemigrations.
I want the initial data to be inserted into the database at the same time as migrate runs.
However, no matter how much I edit the migrations file, I get an error, because the table does not exist in the database before migrate has run.
Help me...
This will help https://docs.djangoproject.com/en/2.2/topics/migrations/#data-migrations
This is also a nice blog post showing how to create a data migration, similar to how you create a database migration:
https://simpleisbetterthancomplex.com/tutorial/2017/09/26/how-to-create-django-data-migrations.html
You might want to look into Django data migrations:
https://docs.djangoproject.com/en/2.2/topics/migrations/#data-migrations
In the operations, run your actual table creation before running the data initialization. Please share a code example if you run into problems.
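A minimal sketch of what that can look like, assuming the table is created by 0001_initial in an app called myapp and you want a couple of starter rows (all names are placeholders):
from django.db import migrations


def load_initial_data(apps, schema_editor):
    # The table already exists at this point because 0001_initial runs first.
    Category = apps.get_model('myapp', 'Category')
    Category.objects.bulk_create([
        Category(name='Default'),
        Category(name='Archive'),
    ])


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_initial_data, migrations.RunPython.noop),
    ]
Because the data migration declares a dependency on the schema migration, migrate creates the table before trying to insert into it, which avoids the "no table" error.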
I have two issues, both of which are interrelated.
Issue #1
My app has an online Postgres database that it uses to store data. Because this is a Dockerized app, the migrations I create no longer appear on my local machine; they are stored inside the Docker container instead.
All of the questions I've seen so far do not seem to have issues with making migrations and adding the unique constraint to one of the fields within the table.
I have written shell code to run a Python script that prints the contents of the migrations file in the command prompt window. I was able to obtain the migrations file that was to be applied and added a row to the django_migrations table to record it. I then ran makemigrations and migrate, but it said there were no changes to apply (which leads me to believe that the row I added to the database should only have been created by Django after it had detected the migrations on its own, instead of me specifying the migrations file and asking it to make the changes). The issue is that the new makemigrations still detects the following change:
Migrations for 'mdp':
db4mdp/mdp/migrations/0012_testing.py
- Alter field mdp_name on languages
Despite detecting this apparent 'change', I get the following error:
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "mdp_mdp_mdp_fullname_281e4228_uniq" already exists
I have already checked on my Postgres server using pgAdmin 4 whether the constraint has actually been applied, and it has, with the name next to "relation" as shown above. So why does Django still detect this as a change to be made? The thing is, if I now remove the new migrations file I created in my Python directory, the migration will probably run (since the changes have 'apparently' already been made in the database), but then I won't have a migrations file to keep track of the changes. I'm not sure I even need to keep the migrations around now that I'm using an online database, though. I will not be rolling back any changes, nor will I be making changes too often. This is just a one- or two-time thing, but I want to resolve the error.
Issue #2
The reason I used 'apparently' in the issue above is that even though the constraints section in my public schema shows that the constraints have been applied, when I try to create a new entry in my table with a non-unique string in the field I've defined as unique, it allows the creation anyway.
You never add anything manually to the django_migrations table. Let Django do it. If it is not doing that, then no matter what, your code is not production ready.
I understand that you are doing your development inside Docker. When you do that, you mount your Docker volume to a local volume. Since you have not mounted it, your migrations will not show up locally.
Refer to Volumes. It should resolve your issues.
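For example, with plain docker run the mount looks roughly like this (the image name and the /app path are placeholders for your own setup):
docker run -v $(pwd):/app your-image-name python manage.py makemigrations
With docker-compose, the equivalent is a volumes: entry such as .:/app on the web service. Because the container then writes into the mounted directory, the generated migration files show up on your local machine as well.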
For anyone trying to find an alternative solution to this problem other than mounting volumes (due to time constraints), this answer might help, but #deosha's is still the correct way to go about it. I fixed the problem by deleting all my tables and the rows in django_migrations corresponding to my specific app (it isn't necessary to delete the auth tables etc., because you won't be deleting the rows corresponding to those in the django_migrations table). After this I used the following within the shell script called by my Dockerfile:
python manage.py makemigrations --name testing
python testing_migrations.py
The migration needs to be named for the next step. After the makemigrations line I ran the Python script testing_migrations.py, which contains the following code:
import os

# Resolve the project root relative to this script's location.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
migrations_file = os.path.join(BASE_DIR, '<path_to_new_migration_file>')

# Print the generated migration so its contents show up in the container output.
with open(migrations_file, 'r') as file:
    print(file.read())
Typically the first migration created for the app you have made will be /0001_testing.py (which is why naming it was necessary earlier). Since the contents of this file are visible to the container while it is up and running, you will be able to print the contents of the file. Then run the migrate command. This creates a row in the django_migrations table that makes it appear to Django that the migration has been applied. However, on your local machine this migrations file doesn't exist, so copy the contents of the file from the print output above, put them into a .py file with the same name, and save it in your app's migrations folder on your local machine.
You can follow this method for all successive migrations by repeating the process and incrementing the number in the testing_migrations file as required.
Squashing the migrations once you're done making the table will help. If you're doing this all in development and have no requirement to roll back the changes to the database schema, then just put this into production after deleting all the migration files and the rows in your django_migrations table corresponding to your app, as was done initially. Drop your tables, let your first new migrations file re-create them, and then import your data once again.
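If you do want to keep the history but compress it, Django's squashmigrations command can fold the series into one file; for instance, to squash everything in the mdp app up to the 0012_testing migration mentioned above (treat the exact names as an example):
python manage.py squashmigrations mdp 0012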
This is not the recommended method. Use deosha's if you're not on a time crunch.
I am working on a Flask application using SQLAlchemy with a Postgres database. I am migrating my database with Flask-Migrate.
I had to change the name of one of my tables in the database, and when trying to migrate (with Flask-Migrate) I got an error:
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table category_announcement_date because other objects depend on it
DETAIL: constraint announcement_dates_id_fkey on table announcement_dates depends on table category_announcement_date
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: '\nDROP TABLE category_announcement_date']
I did not know how to tell Flask-Migrate about this issue, so I came up with the great idea of doing it manually: I went into psql and dropped the table with the CASCADE option, as suggested by the error message. That all worked fine, but now I can't finish the migration. When running the upgrade I get:
python manage.py db upgrade
...
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) table "category_announcement_date" does not exist
which is probably because I just dropped the table manually?
Does anybody know how I can get out of this mess?
thanks carl
OK, I noticed that deleting the version files and repeating the migration does the trick.
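For reference, that sequence is roughly the following (assuming the default layout where Flask-Migrate keeps revision files in migrations/versions/, and the same manage.py entry point as above):
rm migrations/versions/*.py
python manage.py db migrate
python manage.py db upgrade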
cheers
fl
Everything I found about this via searching was either wrong or incomplete in some way. So, how do I:
delete everything in my postgresql database
delete all my alembic revisions
make it so that my database is 100% like new
This works for me:
1) Using the same object you called create_all on (not the session), call drop_all.
2) Delete the migration files generated by Alembic.
3) Run create_all and the initial migration generation again.
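A minimal sketch of steps 1 and 3, assuming a Flask-SQLAlchemy setup where app and db are importable from your application package (the import path is a placeholder):
from myapp import app, db  # placeholder import path

with app.app_context():
    db.drop_all()    # step 1: drop every table the models know about
    db.create_all()  # step 3: recreate them from the current models
Step 2, deleting the files under the Alembic versions directory, is done by hand in between.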
If you want to do this without dropping your migration files the steps are:
Drop all the tables in the db that are to be recreated.
Truncate the alembic_version table (so it will start from the beginning) - this is where the most recent version is kept.
Then you can run:
alembic upgrade head
and everything will be recreated. I ran into this problem when my migrations got in a weird state during development and I wanted to reset alembic. This worked.
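Concretely, the drop and truncate steps can be done from psql like this (the database and table names are placeholders):
psql -d yourdb -c "DROP TABLE your_table CASCADE;"
psql -d yourdb -c "TRUNCATE alembic_version;"
alembic upgrade head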
No idea how to mess with Alembic, but for the database you can just log into the SQL console and use DROP DATABASE foo.
Or were you wanting to clear out all the data but leave the tables there? If so, "Truncating all tables in a Postgres database" has some good answers.