So I know there are already a ton of questions by people who changed a model and then failed to apply the migration to their database. However, in my case, I know for a fact that the migration was applied, as I can see the new table data.
Basically, I installed django-cms, and then I added a field to the djangocms_column plugin's models.py to allow me to add a Bootstrap class name to my columns (e.g. col-md-4, col-md-6, etc.).
if hasattr(settings, "COLUMN_CLASS_CHOICES"):
    CLASS_CHOICES = settings.COLUMN_CLASS_CHOICES
else:
    CLASS_CHOICES = (
        ('col-md-1', _("col-md-1")),
        ('col-md-2', _("col-md-2")),
        ('col-md-3', _('col-md-3')),
        ('col-md-4', _("col-md-4")),
        ('col-md-5', _('col-md-5')),
        ('col-md-6', _("col-md-6")),
        ('col-md-7', _('col-md-7')),
        ('col-md-8', _('col-md-8')),
        ('col-md-9', _('col-md-9')),
        ('col-md-10', _('col-md-10')),
        ('col-md-11', _('col-md-11')),
        ('col-md-12', _('col-md-12')),
        ('', _('none')),
    )
...
@python_2_unicode_compatible
class Column(CMSPlugin):
    """
    A Column for the MultiColumns Plugin
    """
    width = models.CharField(_("width"), choices=WIDTH_CHOICES, default=WIDTH_CHOICES[0][0], max_length=50)

    # This is the new field:
    bs_class = models.CharField(_("bs_class"), choices=CLASS_CHOICES, default=CLASS_CHOICES[0][0], max_length=50)

    def __str__(self):
        return u"%s" % self.get_width_display()
I then ran ./manage.py makemigrations and then ./manage.py migrate, and now the table looks like this:
sqlite> select * from djangocms_column_column;
cmsplugin_ptr_id bs_class width
---------------- ---------- ----------
3 col-md-1 33%
5 col-md-1 33%
7 col-md-1 33%
19 col-md-1 33%
21 col-md-1 33%
23 col-md-1 33%
Yet when I try to access the test server, I still get the following error:
OperationalError at /en/
no such column: djangocms_column_column.bs_class
Request Method: GET
Request URL: http://localhost:8000/en/
Django Version: 1.7.10
Exception Type: OperationalError
Exception Value:
no such column: djangocms_column_column.bs_class
And, yes, I've tried deleting the database and running ./manage.py migrate, but the site still displays the same error. Is there a special migration procedure one must use to modify plugins installed in the ./env/lib/python2.7/site-packages folder?
So I actually figured out what was causing this behavior. In designing my gulp tasks, I restructured the project folder, putting all of my django-created files inside of a src subdirectory.
I did this thinking it'd be easier to watch my app files for changes this way without unintentionally triggering my watch tasks when gulpfile.js or files in bower_components were modified. (Ultimately, it didn't matter, since my globs were more specific than just the django project root.)
This wouldn't have been a problem except that settings.DATABASES['default']['NAME'] was the relative path project.db. As a result, when I ran ./manage.py migrate from within the /src directory, it performed the migrations on /src/project.db. And when I ran src/manage.py migrate from the parent directory, the migrations were performed on /project.db. The djangocms app itself was using the latter, while I'd been performing all of my migrations on the former.
So the lessons here are:
Make sure your sqlite file is specified using an absolute path (a minimal sketch of the settings change follows below).
When you encounter seemingly inexplicable migration issues, check to make sure you don't have multiple .db files floating around in your workspace.
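For illustration, here's a minimal sketch of the settings change, assuming the usual BASE_DIR-style pattern (BASE_DIR is my own helper variable here; the question only shows the relative project.db name):

# settings.py -- resolve the sqlite file relative to the project root rather
# than the current working directory (sketch; BASE_DIR is an assumed helper)
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'project.db'),
    }
}

This way, running migrate from /src or from the parent directory touches the same file.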
Have you tried deleting the migrations in the migrations folder inside the app?
Related
I have made a few changes in the code, but nothing that changes the model or adds new things to the database.
When I run it on my computer it works fine, but when I build the Docker image and run it, I get this error:
Traceback (most recent call last):
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
psycopg2.errors.UndefinedColumn: column c.relispartition does not exist
LINE 3: CASE WHEN c.relispartition THEN 'p' WHEN c.relki...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/manage.py", line 35, in <module>
main()
File "/app/manage.py", line 31, in main
execute_from_command_line(sys.argv)
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/base.py", line 373, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/base.py", line 417, in execute
output = self.handle(*args, **options)
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/base.py", line 90, in wrapped
res = handle_func(*args, **kwargs)
File "/home/go/.local/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 92, in handle
executor = MigrationExecutor(connection, self.migration_progress_callback)
File "/home/go/.local/lib/python3.9/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/go/.local/lib/python3.9/site-packages/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/home/go/.local/lib/python3.9/site-packages/django/db/migrations/loader.py", line 223, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/go/.local/lib/python3.9/site-packages/django/db/migrations/recorder.py", line 77, in applied_migrations
if self.has_table():
File "/home/go/.local/lib/python3.9/site-packages/django/db/migrations/recorder.py", line 56, in has_table
tables = self.connection.introspection.table_names(cursor)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/base/introspection.py", line 52, in table_names
return get_names(cursor)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/base/introspection.py", line 47, in get_names
return sorted(ti.name for ti in self.get_table_list(cursor)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/postgresql/introspection.py", line 49, in get_table_list
cursor.execute("""
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/go/.local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/go/.local/lib/python3.9/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
django.db.utils.ProgrammingError: column c.relispartition does not exist
LINE 3: CASE WHEN c.relispartition THEN 'p' WHEN c.relki...
^
For me this is strange because I have no control over what this table does.
When I run makemigrations and migrate it tells me that nothing has changed.
If I run the Docker image against a local Postgres database it works and there is no such error; the error only happens with the production database.
I have tried the solutions to similar problems here on stackoverflow but nothing worked. It seems like the only solution is to create a new database...
I am testing Django for the first time to create my own to-do list.
All has been working fine so far, until I synchronise my sqlite3 database with
python manage.py syncdb
I have managed to debug all the errors so far. The error I can't seem to debug is
TypeError: CASCADE() missing 4 required positional arguments: 'collector', 'field', 'sub_objs', and 'using'
Here is the model code:
class Item(models.Model):
    worktasks = models.CharField(max_length=250)
    focus = models.CharField(max_length=250)
    # ...
    todo_list = models.ForeignKey('Todo', on_delete=models.CASCADE())

    def __str__(self):
        return self.worktasks + '-' + self.lessons
I've tried removing the brackets "()" after CASCADE which resulted in the output
Unknown command: 'syncdb'
I am working in PyCharm, with Python 3.7.
Your fix of removing the brackets is correct, but this is only half of the problem. The second half is that you're trying to use a command that doesn't exist: syncdb is no longer present in newer Django versions (it was removed in Django 1.9). Instead, you should use the migrations system. Take a look at this documentation page.
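As a rough sketch, the corrected field and the commands that replace syncdb would look something like this (reusing the Todo model from the question):

# models.py -- pass the CASCADE callable itself, without calling it
todo_list = models.ForeignKey('Todo', on_delete=models.CASCADE)

# Then, instead of syncdb, run:
#   python manage.py makemigrations
#   python manage.py migrate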
I'm using Alembic as migration tool and I'm launching the following pseudo script on an already updated database (no revision entries for Alembic, the database schema is just up to date).
revision = '1067fd2d11c8'
down_revision = None

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('box', sa.Column('has_data', sa.Boolean, server_default='0'))


def downgrade():
    pass
It gives me the following error only when PostgreSQL is the backend (it's all good with MySQL):
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [root] (ProgrammingError) ERREUR: la colonne « has_data » de la relation « box » existe déjà
Last line means the column has_data already exists.
I want to check that the column exists before op.add_column.
We ran into the same issue: we had to accommodate an edge case where a column added in a revision might already exist in the schema. Silencing the error is not an option, as that will roll back the current transaction (unless using sqlite), and the version table will not be updated. Checking for column existence seems optimal here. Here's our solution (same idea as in the accepted answer, but updated for 2022):
from alembic import op
from sqlalchemy import inspect


def column_exists(table_name, column_name):
    bind = op.get_context().bind
    insp = inspect(bind)
    columns = insp.get_columns(table_name)
    return any(c["name"] == column_name for c in columns)
This is called from a revision file, so the context accessed via op.get_context() has been configured (presumably in your env.py) and the bind exists.
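For illustration, a hypothetical upgrade() using this helper might look like the following (reusing the box table and has_data column from the question, and assuming sqlalchemy is imported as sa in the revision file):

def upgrade():
    # Hypothetical usage: only add the column when it is not already present.
    if not column_exists('box', 'has_data'):
        op.add_column('box', sa.Column('has_data', sa.Boolean, server_default='0'))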
The easiest answer is not to try to do this. Instead, make your Alembic migrations represent the full layout of the database. Then any migrations you make will be based off the changes to the existing database.
To make a starting migration if you already have a database, temporarily point at an empty database and run alembic revision --autogenerate -m "base". Then, point back at the actual database and run alembic stamp head to say that the current state of the database is represented by the latest migration, without actually running it.
If you don't want to do that for some reason, you can choose not to use --autogenerate and instead generate empty revisions that you fill in with the operations you want. Alembic won't stop you from doing this; it's just much less convenient.
I am, unfortunately, in a situation where we have multiple versions with different schemas that all need to migrate to a single codebase. There are no migrations anywhere yet and no versions tagged in any db. So the first migration will have these conditional checks. After the first migration, everything will be in a known state and I can avoid such hacks.
So I added this in my migration (credit belongs to http://www.derstappen-it.de/tech-blog/sqlalchemie-alembic-check-if-table-has-column):
from alembic import op
from sqlalchemy import engine_from_config
from sqlalchemy.engine import reflection


def _table_has_column(table, column):
    config = op.get_context().config
    engine = engine_from_config(
        config.get_section(config.config_ini_section), prefix='sqlalchemy.')
    insp = reflection.Inspector.from_engine(engine)
    has_column = False
    for col in insp.get_columns(table):
        # Compare names exactly so that e.g. 'mycol' does not match 'mycol2'.
        if column != col['name']:
            continue
        has_column = True
    return has_column
My upgrade function has the following checks (note that I have a batch flag set that adds the with op.batch_alter_table line, which probably isn't in most setups):
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('mytable', schema=None) as batch_op:
        if not _table_has_column('mytable', 'mycol'):
            batch_op.add_column(sa.Column('mycol', sa.Integer(), nullable=True))
        if not _table_has_column('mytable', 'mycol2'):
            batch_op.add_column(sa.Column('mycol2', sa.Integer(), nullable=True))
I have a process_view middleware that sets the module and view names into the request like this:
class ViewName(object):
    def process_view(self, request, view_func, view_args, view_kwargs):
        request.module_name = view_func.__module__
        request.view_name = view_func.__name__
I use the names together as a key for session-based pagination.
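For context, the key is built roughly like this (a hypothetical sketch; the exact key format isn't shown here):

def pagination_key(request):
    # Hypothetical helper: combine the two names set by the middleware into a
    # per-view session key for pagination state.
    return "pagination:%s.%s" % (request.module_name, request.view_name)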
But as of yesterday, for reasons I am unable to discover, view_func.__module__ now returns 'cp.models', which is a model file in one of my apps.
I've gone back one commit at a time trying to find the cause. The issue is still there even after reverting code to more than a month ago.
I see only two Python packages that changed recently on the server; my app uses neither, and the updates were more than a month ago:
cat /var/log/dpkg.log*|grep "upgrade" |grep python
2013-01-25 03:41:05 upgrade python-problem-report 2.0.1-0ubuntu15.1 2.0.1-0ubuntu17.1
2013-01-25 03:41:06 upgrade python-apport 2.0.1-0ubuntu15.1 2.0.1-0ubuntu17.1
I also tried rearranging my middleware list, but it didn't help.
Any idea what else might be causing this issue?
I can't run "./manage.py evolve --hint --execute" for my GeoDjango project. It exits with the following error:
File "/home/viktor/.virtualenvs/senv/lib/python2.6/site-packages/django_evolution-0.6.7-py2.6.egg/django_evolution/db/__init__.py", line 18, in __init__
module = __import__('.'.join(module_name),{},{},[''])
ImportError: No module named django.contrib.gis.db.backends.postgis
Here's the mutation hint:
from django_evolution.mutations import AddField, DeleteField
from django.contrib.gis.db.models.fields import PointField
MUTATIONS = [
    AddField('Geodata', 'position_real', PointField, initial=<<USER VALUE REQUIRED>>),
    AddField('Geodata', 'position', PointField, initial=<<USER VALUE REQUIRED>>),
    DeleteField('Geodata', 'real_lat'),
    DeleteField('Geodata', 'lat'),
    DeleteField('Geodata', 'lng'),
    DeleteField('Geodata', 'real_lng')
]
#----------------------
Trial evolution successful.
However, the web app itself runs fine using the django.contrib.gis.db.backends.postgis database engine. It seems to be a django-evolution problem only.
Any ideas how I can make evolution work?
Thanks.
I solved this simply by adding DATABASE_ENGINE = "postgresql" in my settings.py.
Looking at the code which uses this variable in site-packages/django_evolution-0.6.0-py2.7.egg/tests/utils.py (your path will probably be slightly different), utils.py will, depending on the database, call one of the files in django_evolution-0.6.0-py2.7.egg/django_evolution/db. Looking then at the postgresql.py file in the db directory, it appears to be a short script which runs a few basic tests against your database implementation. Since the postgis backend would not likely be significantly different from out-of-the-box postgres, I believe it is safe to use postgresql as your DATABASE_ENGINE value.
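For clarity, the change amounts to a single line in settings.py (a sketch; this is the old-style setting that django_evolution's utils.py reads, as described above):

# settings.py -- tell django-evolution which db module to use for its checks
DATABASE_ENGINE = "postgresql"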