I'm using Alembic as a migration tool and I'm launching the following pseudo-script on an already updated database (no revision entries for Alembic; the database schema is just up to date).
revision = '1067fd2d11c8'
down_revision = None

from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('box', sa.Column('has_data', sa.Boolean, server_default='0'))

def downgrade():
    pass
It gives me the following error, but only with PostgreSQL as the backend (it's all good with MySQL):
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [root] (ProgrammingError) ERREUR: la colonne « has_data » de la relation « box » existe déjà
The last line means the column has_data already exists.
I want to check whether the column exists before calling op.add_column.
We ran into the same issue: we had to accommodate an edge case where a column added in a revision might already exist in the schema. Silencing the error is not an option, as that would roll back the current transaction (unless you're using sqlite), and the version table would not be updated. Checking for column existence seems optimal here. Here's our solution (same idea as in the accepted answer, but updated for 2022):
from alembic import op
from sqlalchemy import inspect

def column_exists(table_name, column_name):
    bind = op.get_context().bind
    insp = inspect(bind)
    columns = insp.get_columns(table_name)
    return any(c["name"] == column_name for c in columns)
This is called from a revision file, so the context accessed via op.get_context() has been configured (presumably in your env.py), and the bind exists.
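For instance (a sketch reusing the box/has_data names from the question), the helper can guard the add_column call inside upgrade():

import sqlalchemy as sa

def upgrade():
    # Skip the add if the column is already present in the live schema
    if not column_exists('box', 'has_data'):
        op.add_column('box', sa.Column('has_data', sa.Boolean, server_default='0'))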
The easiest answer is not to try to do this. Instead, make your Alembic migrations represent the full layout of the database. Then any migrations you make will be based off the changes to the existing database.
To make a starting migration if you already have a database, temporarily point at an empty database and run alembic revision --autogenerate -m "base". Then, point back at the actual database and run alembic stamp head to say that the current state of the database is represented by the latest migration, without actually running it.
If you don't want to do that for some reason, you can choose not to use --autogenerate and instead generate empty revisions that you fill in with the operations you want. Alembic won't stop you from doing this, it's just much less convenient.
I am, unfortunately, in a situation where we have multiple versions with different schemas that all need to migrate to a single codebase. There are no migrations anywhere yet and no versions tagged in any db. So the first migration will have these conditional checks. After the first migration, everything will be in a known state and I can avoid such hacks.
So I added this in my migration (credit belongs to http://www.derstappen-it.de/tech-blog/sqlalchemie-alembic-check-if-table-has-column):
from alembic import op
from sqlalchemy import engine_from_config
from sqlalchemy.engine import reflection

def _table_has_column(table, column):
    config = op.get_context().config
    engine = engine_from_config(
        config.get_section(config.config_ini_section), prefix='sqlalchemy.')
    insp = reflection.Inspector.from_engine(engine)
    has_column = False
    for col in insp.get_columns(table):
        # compare names exactly rather than with a substring check
        if col['name'] != column:
            continue
        has_column = True
    return has_column
My upgrade function has the following checks (note that I have a batch flag set that adds the with op.batch_alter_table line, which probably isn't in most setups):
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('mytable', schema=None) as batch_op:
        if not _table_has_column('mytable', 'mycol'):
            batch_op.add_column(sa.Column('mycol', sa.Integer(), nullable=True))
        if not _table_has_column('mytable', 'mycol2'):
            batch_op.add_column(sa.Column('mycol2', sa.Integer(), nullable=True))
I'm trying to test a theory that prefetching in browsers is causing unexplained deletes within my django app.
Here's my delete method in my views.py:
def delete(request, part_id=None):
    obj = epe.objects.get(id=part_id)
    obj.delete()
    logger.error('Someone deleted record: ' + str(part_id))
    return HttpResponseRedirect(reverse('epe_home'))
And how I use the url in my template:
<td><input class="btn btn-danger" type="button" value="Delete" /></td>
You can see I'm logging when this method is activated, but I have still had unexplained deletes without any logs from the logger, which makes me wonder whether the unexplained deletes are caused by my method at all.
The only logs I have of the deletes are from MySQL logs like these:
6798 Connect
user#hostname on dbname
6798 Query
SET NAMES utf8
6798 Query
set autocommit=0
6798 Query
set autocommit=1
6798 Query
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
6798 Query
SELECT `Epe_epe`.`id`, `Epe_epe`.`epe_type`, `Epe_epe`.`epe_type2_id`, `Epe_epe`.`epe_date`, `Epe_epe`.`epe_ani`, `Epe_epe`.`epe_ani2_id`, `Epe_epe`.`epe_apn`, `Epe_epe`.`epe_apn2_id`, `Epe_epe`.`epe_weight`, `Epe_epe`.`epe_drug_type1`, `Epe_epe`.`epe_drug1`, `Epe_epe`.`epe_dose1`, `Epe_epe`.`epe_amount1`, `Epe_epe`.`epe_route1`, `Epe_epe`.`epe_time1`, `Epe_epe`.`epe_drug_type2`, `Epe_epe`.`epe_drug2`, `Epe_epe`.`epe_dose2`, `Epe_epe`.`epe_amount2`, `Epe_epe`.`epe_route2`, `Epe_epe`.`epe_time2`, `Epe_epe`.`epe_drug_type3`, `Epe_epe`.`epe_drug3`, `Epe_epe`.`epe_dose3`, `Epe_epe`.`epe_amount3`, `Epe_epe`.`epe_route3`, `Epe_epe`.`epe_time3`, `Epe_epe`.`epe_drug_type4`, `Epe_epe`.`epe_drug4`, `Epe_epe`.`epe_dose4`, `Epe_epe`.`epe_amount4`, `Epe_epe`.`epe_route4`, `Epe_epe`.`epe_time4`, `Epe_epe`.`epe_drug_type5`, `Epe_epe`.`epe_drug5`, `Epe_epe`.`epe_dose5`, `Epe_epe`.`epe_amount5`, `Epe_epe`.`epe_route5`, `Epe_epe`.`epe_time5`, `Epe_epe`.`epe_drug_type6`, `Epe_epe`.`epe_drug6`, `Epe_epe`.`epe_dose6`, `Epe_epe`.`epe_amount6`, `Epe_epe`.`epe_route6`, `Epe_epe`.`epe_time6`, `Epe_epe`.`epe_iso_start`, `Epe_epe`.`epe_iso_end`, `Epe_epe`.`epe_o2_end`, `Epe_epe`.`epe_start1`, `Epe_epe`.`epe_start2`, `Epe_epe`.`epe_start3`, `Epe_epe`.`epe_start4`, `Epe_epe`.`epe_start5`, `Epe_epe`.`epe_start6`, `Epe_epe`.`epe_start7`, `Epe_epe`.`epe_start8`, `Epe_epe`.`epe_hr1`, `Epe_epe`.`epe_hr2`, `Epe_epe`.`epe_hr3`, `Epe_epe`.`epe_hr4`, `Epe_epe`.`epe_hr5`, `Epe_epe`.`epe_hr6`, `Epe_epe`.`epe_hr7`, `Epe_epe`.`epe_hr8`, `Epe_epe`.`epe_spo2_1`, `Epe_epe`.`epe_spo2_2`, `Epe_epe`.`epe_spo2_3`, `Epe_epe`.`epe_spo2_4`, `Epe_epe`.`epe_spo2_5`, `Epe_epe`.`epe_spo2_6`, `Epe_epe`.`epe_spo2_7`, `Epe_epe`.`epe_spo2_8`, `Epe_epe`.`epe_temp1`, `Epe_epe`.`epe_temp2`, `Epe_epe`.`epe_temp3`, `Epe_epe`.`epe_temp4`, `Epe_epe`.`epe_temp5`, `Epe_epe`.`epe_temp6`, `Epe_epe`.`epe_temp7`, `Epe_epe`.`epe_temp8`, `Epe_epe`.`epe_etco2_1`, `Epe_epe`.`epe_etco2_2`, `Epe_epe`.`epe_etco2_3`, `Epe_epe`.`epe_etco2_4`, `Epe_epe`.`epe_etco2_5`, `Epe_epe`.`epe_etco2_6`, `Epe_epe`.`epe_etco2_7`, `Epe_epe`.`epe_etco2_8`, `Epe_epe`.`epe_rr1`, `Epe_epe`.`epe_rr2`, `Epe_epe`.`epe_rr3`, `Epe_epe`.`epe_rr4`, `Epe_epe`.`epe_rr5`, `Epe_epe`.`epe_rr6`, `Epe_epe`.`epe_rr7`, `Epe_epe`.`epe_rr8`, `Epe_epe`.`epe_comment` FROM `Epe_epe` WHERE `Epe_epe`.`id` = 1508
6798 Query
set autocommit=0
6798 Query
DELETE FROM `Epe_epe` WHERE `Epe_epe`.`id` IN (1508)
6798 Query
commit
6798 Query
set autocommit=1
6798 Quit
The delete method above is the only place where I allow deletes within my app. I understand that having a destructive action behind a GET request could cause the unexplained deletes I'm seeing, so I'm trying to figure out if that's the case here. Is there any other place they could come from?
I'm not sure whether this is relevant, but I'm still using the dev server that ships with Django. I allow multiple users on multiple machines to access my app to help with the debugging process. Could not using a production-level server somehow cause unexplained deletes?
Django class-based generic views allow object deletion (see the docs), so if you are using them you get a delete method for free in every such view, even though you don't define (override) it.
The approach you have taken seems correct. However, I have another approach for tracking down the deletes: write a custom manager for the Django model and override delete(), putting a logger there. The added advantage of this approach is that if any other Django app, or anyone else, tries to delete a row from this model, your custom delete() will be used.
You can follow this article on how to implement delete() via a manager, or override the delete function of the queryset.
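A rough sketch of that idea (using Epe here as a stand-in for the epe model from the view above, together with Python's standard logging):

import logging
from django.db import models

logger = logging.getLogger(__name__)

class LoggingQuerySet(models.QuerySet):
    def delete(self, *args, **kwargs):
        # Bulk deletes (admin actions, other apps) pass through here
        logger.error('Deleting records: %s', list(self.values_list('pk', flat=True)))
        return super(LoggingQuerySet, self).delete(*args, **kwargs)

class Epe(models.Model):
    # ... existing fields ...
    objects = LoggingQuerySet.as_manager()

    def delete(self, *args, **kwargs):
        # Single-instance deletes (e.g. the view above) pass through here
        logger.error('Deleting record: %s', self.pk)
        return super(Epe, self).delete(*args, **kwargs)

This catches deletes issued through the ORM regardless of which view or app triggers them; deletes issued as raw SQL still bypass it.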
So I know there are already a ton of questions by people who changed a model and then failed to apply the migration to their database. However, in my case, I know for a fact that the migration was applied, as I can see the new table data.
Basically, I installed django-cms, and then I added a field to the djangocms_column plugin's models.py to allow me to add a Bootstrap class name to my columns (e.g. col-md-4, col-md-6, etc.).
if hasattr(settings, "COLUMN_CLASS_CHOICES"):
    CLASS_CHOICES = settings.COLUMN_CLASS_CHOICES
else:
    CLASS_CHOICES = (
        ('col-md-1', _("col-md-1")),
        ('col-md-2', _("col-md-2")),
        ('col-md-3', _('col-md-3')),
        ('col-md-4', _("col-md-4")),
        ('col-md-5', _('col-md-5')),
        ('col-md-6', _("col-md-6")),
        ('col-md-7', _('col-md-7')),
        ('col-md-8', _('col-md-8')),
        ('col-md-9', _('col-md-9')),
        ('col-md-10', _('col-md-10')),
        ('col-md-11', _('col-md-11')),
        ('col-md-12', _('col-md-12')),
        ('', _('none')),
    )

...

@python_2_unicode_compatible
class Column(CMSPlugin):
    """
    A Column for the MultiColumns Plugin
    """
    width = models.CharField(_("width"), choices=WIDTH_CHOICES, default=WIDTH_CHOICES[0][0], max_length=50)
    # This is the new field:
    bs_class = models.CharField(_("bs_class"), choices=CLASS_CHOICES, default=CLASS_CHOICES[0][0], max_length=50)

    def __str__(self):
        return u"%s" % self.get_width_display()
I then ran ./manage.py makemigrations and then ./manage.py migrate, and now the table looks like this:
sqlite> select * from djangocms_column_column;
cmsplugin_ptr_id bs_class width
---------------- ---------- ----------
3 col-md-1 33%
5 col-md-1 33%
7 col-md-1 33%
19 col-md-1 33%
21 col-md-1 33%
23 col-md-1 33%
Yet when I try to access the test server, I still get the following error:
OperationalError at /en/
no such column: djangocms_column_column.bs_class
Request Method: GET
Request URL: http://localhost:8000/en/
Django Version: 1.7.10
Exception Type: OperationalError
Exception Value:
no such column: djangocms_column_column.bs_class
And, yes, I've tried deleting the database and running ./manage.py migrate, but the site still displays the same error. Is there a special migration procedure one must use to modify plugins installed in the ./env/lib/python2.7/site-packages folder?
So I actually figured out what was causing this behavior. In designing my gulp tasks, I restructured the project folder, putting all of my django-created files inside of a src subdirectory.
I did this thinking it'd be easier to watch my app files for changes this way without unintentionally triggering my watch tasks when gulpfile.js or files in bower_components were modified. (Ultimately, it didn't matter, since my globs were more specific than just the django project root.)
This wouldn't have been a problem except that settings.DATABASES['default']['NAME'] was the relative path project.db. As a result, when I ran ./manage.py migrate from within the /src directory, it performed the migrations on /src/project.db. And when I ran src/manage.py migrate from the parent directory, the migrations were performed on /project.db. The djangocms app itself was using the latter, while I'd been performing all of my migrations on the former.
So the lessons here are:
Make sure your sqlite file is specified using an absolute path (see the sketch below).
When you encounter seemingly inexplicable migration issues, check to make sure you don't have multiple .db files floating around in your workspace.
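For the first lesson, here is a minimal sketch of an absolute path in settings.py (BASE_DIR as generated by startproject, project.db matching the file name above):

import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        # Anchored to the project root, so migrations hit the same file
        # no matter which directory manage.py is invoked from.
        'NAME': os.path.join(BASE_DIR, 'project.db'),
    }
}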
Have you tried deleting the migrations in the migrations folder inside the app?
I am attempting to delete 300,000+ spam comments from a Django site that is using the Zinnia blogging app. Zinnia includes a command for deleting spam called, appropriately, spam_cleanup, but running this command spews thousands of the following error before being terminated by the OS.
OperationalError: (1040, 'Too many connections')
The code for the spam_cleanup command is as follows:
class Command(NoArgsCommand):
    """
    Command object for removing comments
    marked as non-public and removed.
    """
    help = "Delete the entries's comments marked as non-public and removed."

    def handle_noargs(self, **options):
        verbosity = int(options.get('verbosity', 1))
        content_type = ContentType.objects.get_for_model(Entry)
        spams = comments.get_model().objects.filter(
            is_public=False, is_removed=True,
            content_type=content_type)
        spams_count = spams.count()
        spams.delete()
        if verbosity:
            print('%i spam comments deleted.' % spams_count)
My initial thought was just to break the query down to only delete, say, 80 items at a time using the limit property, but Django tells me that I can't do that on delete:
AssertionError: Cannot use 'limit' or 'offset' with delete.
It's not reasonable to increase the max connections on MySQL to 300,000, right? I also read that Django emulates cascade on delete but does not set it at the DB level so a raw SQL query could orphan all the relations. I am lost as to how to perform this delete properly, please help!
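One way to stay within that restriction, sketched here under the assumption that the spams queryset from the command above is available, is to collect the primary keys once and delete them in fixed-size pk__in batches:

BATCH_SIZE = 1000

# Materialise the ids first so each DELETE only touches a small slice
spam_ids = list(spams.values_list('pk', flat=True))
for start in range(0, len(spam_ids), BATCH_SIZE):
    batch = spam_ids[start:start + BATCH_SIZE]
    comments.get_model().objects.filter(pk__in=batch).delete()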
Is it possible to create concurrent indexes for DB table through alembic script?
I'm using a Postgres DB and I am able to create concurrent table indexes through an SQL command at the postgres prompt (CREATE INDEX CONCURRENTLY index_name ON table_name (column);).
But I couldn't find a way to do the same through a DB migration (Alembic) script. If we create a normal (non-concurrent) index, it will lock the DB table against writes, so those queries can't run in parallel. So I just want to know how to create a concurrent index through an Alembic (DB migration) script.
Alembic supports PostgreSQL concurrent index creation. CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so the COMMIT below first closes the transaction that Alembic opens for the migration:
def upgrade():
    op.execute('COMMIT')
    op.create_index('ix_1', 't1', ['col1'], postgresql_concurrently=True)
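As a side note (not from this answer), more recent Alembic releases expose an autocommit block that can replace the raw COMMIT above; roughly:

def upgrade():
    # CREATE INDEX CONCURRENTLY cannot run inside a transaction, so step
    # outside Alembic's transactional DDL first (available in Alembic 1.1+)
    with op.get_context().autocommit_block():
        op.create_index('ix_1', 't1', ['col1'], postgresql_concurrently=True)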
I'm not using Postgres and I am not able to test it, but it should be possible.
According to:
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html
Concurrent indexes are allowed in the PostgreSQL dialect as of SQLAlchemy 0.9.9.
However, a migration script like this should work with older versions (direct SQL creation):
from alembic import op, context
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.sql import text

# ---------- COMMONS
# Base objects for SQL operations are:
# - use op = INSERT, UPDATE, DELETE
# - use connection = SELECT (and also INSERT, UPDATE, DELETE, but this object has a lot of logic)
metadata = MetaData()
connection = context.get_bind()

tbl = Table('test', metadata, Column('data', Integer), Column("unique_key", String))

# If you want to define an index on the currently loaded schema:
# idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

def upgrade():
    ...
    queryc = \
        """
        CREATE INDEX CONCURRENTLY test_idx1 ON test (data, unique_key);
        """
    # it should be possible to create an index here (direct SQL):
    connection.execute(text(queryc))
    ...
While concurrent index creation is allowed in PostgreSQL, Alembic itself does not support concurrent operations: only one migration process should be running at a time.
The new version of SQLite has the ability to enforce Foreign Key constraints, but for the sake of backwards-compatibility, you have to turn it on for each database connection separately!
sqlite> PRAGMA foreign_keys = ON;
I am using SQLAlchemy -- how can I make sure this always gets turned on?
What I have tried is this:
engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True)
engine.execute('pragma foreign_keys=on')
...but it is not working! What am I missing?
EDIT:
I think my real problem is that I have more than one version of SQLite installed, and Python is not using the latest one!
>>> import sqlite3
>>> print sqlite3.sqlite_version
3.3.4
But I just downloaded 3.6.23 and put the exe in my project directory!
How can I figure out which .exe it's using, and change it?
For recent versions (SQLAlchemy ~0.7) the SQLAlchemy homepage says:
PoolListener is deprecated. Please refer to PoolEvents.
Then the example by CarlS becomes:
engine = create_engine(database_url)

def _fk_pragma_on_connect(dbapi_con, con_record):
    dbapi_con.execute('pragma foreign_keys=ON')

from sqlalchemy import event
event.listen(engine, 'connect', _fk_pragma_on_connect)
Building on the answers from conny and shadowmatter, here's code that will check if you are using SQLite3 before emitting the PRAGMA statement:
from sqlalchemy import event
from sqlalchemy.engine import Engine
from sqlite3 import Connection as SQLite3Connection

@event.listens_for(Engine, "connect")
def _set_sqlite_pragma(dbapi_connection, connection_record):
    if isinstance(dbapi_connection, SQLite3Connection):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON;")
        cursor.close()
I now have this working:
Download the latest sqlite and pysqlite2 builds as described above; make sure the correct versions are being used at runtime by Python.
import sqlite3
import pysqlite2
print sqlite3.sqlite_version # should be 3.6.23.1
print pysqlite2.__path__ # eg C:\\Python26\\lib\\site-packages\\pysqlite2
Next add a PoolListener:
from sqlalchemy.interfaces import PoolListener

class ForeignKeysListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        db_cursor = dbapi_con.execute('pragma foreign_keys=ON')

engine = create_engine(database_url, listeners=[ForeignKeysListener()])
Then be careful how you test whether foreign keys are working: I had some confusion here. When using the sqlalchemy ORM to add() things, my import code was implicitly handling the relation hookups, so it could never fail. Adding nullable=False to some ForeignKey() statements helped me here.
The way I test sqlalchemy sqlite foreign key support is enabled is to do a manual insert from a declarative ORM class:
# example
ins = Coverage.__table__.insert().values(id=99,
                                         description='Wrong',
                                         area=42.0,
                                         wall_id=99,  # invalid fkey id
                                         type_id=99)  # invalid fkey_id
session.execute(ins)
Here wall_id and type_id are both ForeignKey()s, and sqlite now correctly throws an exception when trying to hook up invalid fkeys. So it works! If you remove the listener, then sqlalchemy will happily add invalid entries.
I believe the main problem may be multiple sqlite3.dll's (or .so) lying around.
As a simpler approach, if your session creation is centralised behind a Python helper function (rather than exposing the SQLA engine directly), you can just issue session.execute('pragma foreign_keys=on') before returning the freshly created session.
You only need the pool listener approach if arbitrary parts of your application may create SQLA sessions against the database.
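A minimal sketch of such a helper (the engine URL and factory names here are illustrative):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///app.db')
Session = sessionmaker(bind=engine)

def make_session():
    session = Session()
    # Enable FK enforcement on the connection this session checks out
    session.execute('pragma foreign_keys=on')
    return session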
From the SQLite dialect page:
SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, however by default these constraints have no effect on the operation of the table.
Constraint checking on SQLite has three prerequisites:
At least version 3.6.19 of SQLite must be in use
The SQLite library must be compiled without the SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER symbols enabled.
The PRAGMA foreign_keys = ON statement must be emitted on all connections before use.
SQLAlchemy allows for the PRAGMA statement to be emitted automatically for new connections through the usage of events:
from sqlalchemy.engine import Engine
from sqlalchemy import event

@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()
One-liner version of conny's answer:
from sqlalchemy import event
event.listen(engine, 'connect', lambda c, _: c.execute('pragma foreign_keys=on'))
I had the same problem before (scripts with foreign key constraints were going through, but the actual constraints were not enforced by the sqlite engine); I got it solved by:
downloading, building and installing the latest version of sqlite from here: sqlite-sqlite-amalgamation; before this I had sqlite 3.6.16 on my Ubuntu machine, which didn't support foreign keys yet; it should be 3.6.19 or higher to have them working.
installing the latest version of pysqlite from here: pysqlite-2.6.0
after that I started getting exceptions whenever a foreign key constraint failed
hope this helps, regards
If you need to execute something for setup on every connection, use a PoolListener.
Enforce Foreign Key constraints for sqlite when using Flask + SQLAlchemy.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # module-level extension object, as in the linked gist

def create_app(config: str=None):
    app = Flask(__name__, instance_relative_config=True)
    if config is None:
        app.config.from_pyfile('dev.py')
    else:
        logger.debug('Using %s as configuration', config)
        app.config.from_pyfile(config)
    db.init_app(app)

    # Ensure FOREIGN KEY for sqlite3
    if 'sqlite' in app.config['SQLALCHEMY_DATABASE_URI']:
        def _fk_pragma_on_connect(dbapi_con, con_record):  # noqa
            dbapi_con.execute('pragma foreign_keys=ON')

        with app.app_context():
            from sqlalchemy import event
            event.listen(db.engine, 'connect', _fk_pragma_on_connect)

    return app
Source:
https://gist.github.com/asyd/a7aadcf07a66035ac15d284aef10d458