I'm trying to shard my database into two: one for my main objects, another for logs. Right now, my code looks something like this:
engine = create_engine('postgresql+psycopg2://postgres:password@localhost:5432/main')
engine2 = create_engine('postgresql+psycopg2://postgres:password@localhost:5432/logs')
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
binds = {'thing': engine,
         'log': engine2}
DBSession.configure(binds=binds)
Base = declarative_base(bind=engine)
Base2 = declarative_base(bind=engine2)
class Thing(Base):
...
class Log(Base2):
...
I have more tables using both Base and Base2, as well as inherited objects. I've also tried the following:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension(), bind=engine))
DBSession2 = scoped_session(sessionmaker(extension=ZopeTransactionExtension(), bind=engine2))
However, with either approach, and even when working only with objects from Base (not Base2), I get the following error when querying:
return DBSession.query(cls).filter(func.lower(cls.name) == name.lower()).first()
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/scoping.py", line 113, in do
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/session.py", line 969, in query
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/query.py", line 107, in __init__
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/query.py", line 116, in _set_entities
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/query.py", line 131, in _setup_aliasizers
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/util.py", line 550, in _entity_info
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/mapper.py", line 2861, in configure_mappers
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/mapper.py", line 1166, in _post_configure_properties
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/interfaces.py", line 128, in init
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/properties.py", line 913, in do_init
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/orm/properties.py", line 969, in _process_dependent_arguments
File "build/bdist.macosx-10.7-intel/egg/sqlalchemy/ext/declarative.py", line 1346, in return_cls
File "<string>", line 1, in <module>
AttributeError: 'Table' object has no attribute 'id'
Of course my table has an 'id' attribute; the same code works as long as I only have one DBSession and one Base. What am I doing wrong?
I found the problem: I simply had to instantiate DBSession().
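The fix, in context: here is a runnable sketch of the two-engine setup, using in-memory SQLite engines as stand-ins for the two PostgreSQL databases and omitting ZopeTransactionExtension. The binds here are keyed by the declarative base classes, which SQLAlchemy accepts alongside mapped classes and Table objects.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

main_engine = create_engine("sqlite://")  # stand-in for the main database
log_engine = create_engine("sqlite://")   # stand-in for the logs database

Base = declarative_base()
Base2 = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Log(Base2):
    __tablename__ = "log"
    id = Column(Integer, primary_key=True)
    message = Column(String)

Base.metadata.create_all(main_engine)
Base2.metadata.create_all(log_engine)

DBSession = scoped_session(sessionmaker())
# Route each declarative base's classes to its own engine.
DBSession.configure(binds={Base: main_engine, Base2: log_engine})

session = DBSession()  # the missing step: actually create the session
session.add_all([Thing(name="widget"), Log(message="created widget")])
session.commit()
```

Each object is flushed to the engine its base class is bound to, so Thing rows land in the main database and Log rows in the logs database.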
In my Flask application, I have a model class called Well with a method getProdDF that pulls production data from a model class called ProductionData and turns it into a pandas DataFrame using pd.read_sql. On my local machine it works fine like this:
class Well(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    production = db.relationship('ProductionData', backref='well', lazy='dynamic')

    def getProdDF(self):
        data = self.production
        df = pd.read_sql(data.statement, data.session.bind)
        return df

class ProductionData(ProdData, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    well_id = db.Column(db.Integer, db.ForeignKey('well.id'))
    oil = db.Column(db.Float)
    # etc.
but when I moved the code to a server and tried to run it there, it gives me this error:
File "C:\Users\usr\Code\ProjName\app\models.py", line 517, in getProdDF
df = pd.read_sql(data.statement, data.session.bind)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 564, in read_sql
return pandas_sql.read_query(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2078, in read_query
cursor = self.execute(*args)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2016, in execute
cur = self.con.cursor()
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'cursor'
Does anyone know what I'm doing wrong? The database file (app.db) was copied over; I can connect to the database and get data from it on the command line, but I get the same error when I run the Well model's getProdDF method:
>>> from app import app, db
>>> from app.models import Well
>>> app.app_context().push()
>>> w = Well.query.first()
>>> w.getProdDF()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\usr\Code\ProjName\app\models.py", line 517, in getProdDF
df = pd.read_sql(data.statement, data.session.bind)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 564, in read_sql
return pandas_sql.read_query(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2078, in read_query
cursor = self.execute(*args)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2016, in execute
cur = self.con.cursor()
^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'cursor'
Thanks a lot,
Alex
I'm trying to reflect a table, while loading only specific columns using include_columns. However, omitting a column that has a ForeignKeyConstraint causes an exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3825, in _set_parent
ColumnCollectionConstraint._set_parent(self, table)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3414, in _set_parent
ColumnCollectionMixin._set_parent(self, table)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3371, in _set_parent
for col in self._col_expressions(table):
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3365, in _col_expressions
return [
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3366, in <listcomp>
table.c[col] if isinstance(col, util.string_types) else col
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/base.py", line 1213, in __getitem__
return self._index[key]
KeyError: 'DiscountDefinition_FK'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../script.py", line 62, in <module>
POSBillItem = Table('POSBillItem', metadata, include_columns=POSBillItem_columns, autoload_with=engine)
File "<string>", line 2, in __new__
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/util/deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 607, in __new__
metadata._remove_table(name, schema)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 602, in __new__
table._init(name, metadata, *args, **kw)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 677, in _init
self._autoload(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 712, in _autoload
conn_insp.reflect_table(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/engine/reflection.py", line 795, in reflect_table
self._reflect_fk(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/engine/reflection.py", line 997, in _reflect_fk
table.append_constraint(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 913, in append_constraint
constraint._set_parent_with_dispatch(self)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/base.py", line 1046, in _set_parent_with_dispatch
self._set_parent(parent, **kw)
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/sql/schema.py", line 3827, in _set_parent
util.raise_(
File "/usr/local/lib/python3.9/dist-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
sqlalchemy.exc.ArgumentError: Can't create ForeignKeyConstraint on table 'POSBillItem': no column named 'DiscountDefinition_FK' is present.
from sqlalchemy import create_engine, engine
from sqlalchemy import MetaData, Table
conn_string = engine.URL.create(
"mssql+pyodbc",
username,
password,
host,
port,
db,
query={"driver": "ODBC Driver 18 for SQL Server", "TrustServerCertificate": "YES"}
)
engine = create_engine(conn_string)
metadata = MetaData()
POSBillItem_columns = [
    "PosBill_FK",
    "Quantity",
    "UnitPrice",
    # "DiscountDefinition_FK",
    # "originID"
]
POSBillItem = Table('POSBillItem', metadata, include_columns=POSBillItem_columns, autoload_with=engine)
If I uncomment "DiscountDefinition_FK", the exception no longer occurs, though I get a warning:
.../script.py:62: SAWarning: Omitting index key for (originID), key covers omitted columns.
POSBillItem = Table('POSBillItem', metadata, include_columns=POSBillItem_columns, autoload_with=engine)
If I uncomment "originID" as well, the warning goes away too.
Is there a way to avoid loading these columns without adding them manually?
The include_columns option is not, in fact, honored by foreign key reflection; in a discussion on GitHub this was confirmed to be a bug.
As for the warnings on index keys, they will probably be phased out in the future, but right now the options are either including those columns or suppressing the warning with the warnings library.
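A sketch of the suppression route (the reflect_quietly helper name is mine, not part of SQLAlchemy): filter only SAWarning around the reflection call, leaving all other warnings intact.

```python
import warnings

from sqlalchemy.exc import SAWarning

def reflect_quietly(reflect):
    """Run a zero-argument reflection callable with SAWarning suppressed."""
    with warnings.catch_warnings():
        # Only SAWarning is ignored; other warning categories still surface.
        warnings.simplefilter("ignore", category=SAWarning)
        return reflect()

# Usage against a real engine (names from the question):
# POSBillItem = reflect_quietly(lambda: Table(
#     'POSBillItem', metadata,
#     include_columns=POSBillItem_columns, autoload_with=engine))
```

Scoping the filter to the reflection call avoids hiding SAWarnings that other parts of the application might legitimately need to see.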
Update: fixed in SQLAlchemy 1.4.38.
Based on my model:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship
Base = declarative_base()
class Session(Base):
__tablename__ = 'sessions'
id = Column(Integer, primary_key=True)
token = Column(String(200))
user_id = Column(Integer, ForeignKey('app_users.id'))
user = relationship('model.user.User', back_populates='sessions')
I want to instantiate a new session through:
session = Session(token='test-token-123')
But I get:
AttributeError: mapper
The full stacktrace:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.5/site-packages/falcon/api.py", line 227, in __call__
responder(req, resp, **params)
File "./app_user/register.py", line 13, in on_post
session = Session(token='test-token-123')
File "<string>", line 2, in __init__
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/instrumentation.py", line 347, in _new_state_if_none
state = self._state_constructor(instance, self)
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 764, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/instrumentation.py", line 177, in _state_constructor
self.dispatch.first_init(self, self.class_)
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/event/attr.py", line 256, in __call__
fn(*args, **kw)
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 2976, in _event_on_first_init
configure_mappers()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 2872, in configure_mappers
mapper._post_configure_properties()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 1765, in _post_configure_properties
prop.init()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/interfaces.py", line 184, in init
self.do_init()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/relationships.py", line 1653, in do_init
self._process_dependent_arguments()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/relationships.py", line 1710, in _process_dependent_arguments
self.target = self.mapper.mapped_table
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 850, in __getattr__
return self._fallback_getattr(key)
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 828, in _fallback_getattr
raise AttributeError(key)
I have no idea where this error is coming from and I cannot really debug it. Could anybody help me with this issue?
Thanks and greetings!
Looking at the traceback, you can see these lines:
...
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/relationships.py", line 1653, in do_init
self._process_dependent_arguments()
File "/home/ubuntu/.local/lib/python3.5/site-packages/sqlalchemy/orm/relationships.py", line 1710, in _process_dependent_arguments
self.target = self.mapper.mapped_table
...
which narrows your problem down quite a bit. The relationship
user = relationship('model.user.User', back_populates='sessions')
uses a Python-evaluable string as its argument, the use of which is further explained in "Configuring Relationships":
Relationships to other classes are done in the usual way, with the added feature that the class specified to relationship() may be a string name. The “class registry” associated with Base is used at mapper compilation time to resolve the name into the actual class object, which is expected to have been defined once the mapper configuration is used
If you have not imported the model.user module anywhere before you try to instantiate a Session object for the first time, the name resolution fails because the class User has not been created yet and does not exist in the registry. In other words, for name resolution to work, all classes must have been defined, which means their class bodies must have been executed.
And if you actually have imported the model.user module, check your other models and verify that their related model classes have been defined. Using your models for the first time triggers mapper compilation/configuration, so the source of the error could be another model as well.
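To make the registry point concrete, here is a single-file sketch (both classes live in one module purely for illustration, with the relationship target given as a plain class name): the string only resolves once the User class body has executed.

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class AppSession(Base):
    __tablename__ = 'sessions'
    id = Column(Integer, primary_key=True)
    token = Column(String(200))
    user_id = Column(Integer, ForeignKey('app_users.id'))
    # 'User' is resolved lazily against Base's class registry at first use.
    user = relationship('User', back_populates='sessions')

# If this class body had not run yet (e.g. its module was never imported),
# instantiating AppSession would fail during mapper configuration,
# just as in the question.
class User(Base):
    __tablename__ = 'app_users'
    id = Column(Integer, primary_key=True)
    sessions = relationship('AppSession', back_populates='user')

session = AppSession(token='test-token-123')  # mappers configure successfully
```

Had the User class body not executed before the first AppSession(), the lazy name lookup would raise during mapper configuration.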
I have seen some questions about using SQLAlchemy on App Engine to connect to Google Cloud SQL. But I'm not sure if it is possible to develop using a local MySQL database and the existing SQLAlchemy dialect. On my first attempt, I added SQLAlchemy 0.8.0 to the app and defined a schema:
from sqlalchemy import create_engine, Column, Integer, Table
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
foo_table = Table('foo', Base.metadata,
Column('id', Integer, primary_key=True, autoincrement=True),
)
And when I tried to create the tables on the development server using:
url = 'mysql+gaerdbms:///%s?instance=%s' % ('database_name', 'instance_name')
engine = create_engine(url)
Base.metadata.create_all(engine)
...I got an error DBAPIError: (ImportError) No module named pwd None None, which means that SQLAlchemy is importing a module that is blacklisted by the development server.
Am I doing something wrong? Or, if not, what should I do to use SQLAlchemy on the development server? Or maybe the first question is: can I use SQLAlchemy's gaerdbms dialect to develop against a local MySQL database using the dev server?
Edit: this error doesn't happen only when trying to create tables. I created the tables manually and tried to query them, and the same error occurred.
The full traceback is:
Traceback (most recent call last):
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "[...]/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "[...]/webapp/admin.py", line 12, in get
db.Base.metadata.create_all(engine)
File "[...]/webapp/sqlalchemy/schema.py", line 2784, in create_all
tables=tables)
File "[...]/webapp/sqlalchemy/engine/base.py", line 1486, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "[...]/webapp/sqlalchemy/engine/base.py", line 1479, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "[...]/webapp/sqlalchemy/engine/base.py", line 1669, in contextual_connect
self.pool.connect(),
File "[...]/webapp/sqlalchemy/pool.py", line 272, in connect
return _ConnectionFairy(self).checkout()
File "[...]/webapp/sqlalchemy/pool.py", line 425, in __init__
rec = self._connection_record = pool._do_get()
File "[...]/webapp/sqlalchemy/pool.py", line 855, in _do_get
return self._create_connection()
File "[...]/webapp/sqlalchemy/pool.py", line 225, in _create_connection
return _ConnectionRecord(self)
File "[...]/webapp/sqlalchemy/pool.py", line 318, in __init__
self.connection = self.__connect()
File "[...]/webapp/sqlalchemy/pool.py", line 368, in __connect
connection = self.__pool._creator()
File "[...]/webapp/sqlalchemy/engine/strategies.py", line 80, in connect
return dialect.connect(*cargs, **cparams)
File "[...]/webapp/sqlalchemy/engine/default.py", line 279, in connect
return self.dbapi.connect(*cargs, **cparams)
File "[...]/google_appengine/google/storage/speckle/python/api/rdbms_googleapi.py", line 183, in __init__
super(GoogleApiConnection, self).__init__(*args, **kwargs)
File "[...]/google_appengine/google/storage/speckle/python/api/rdbms.py", line 810, in __init__
self.OpenConnection()
File "[...]/google_appengine/google/storage/speckle/python/api/rdbms.py", line 832, in OpenConnection
self.SetupClient()
File "[...]/google_appengine/google/storage/speckle/python/api/rdbms_googleapi.py", line 193, in SetupClient
self._client = RdbmsGoogleApiClient(**kwargs)
File "[...]/google_appengine/google/storage/speckle/python/api/rdbms_googleapi.py", line 106, in __init__
rdbms.OAUTH_CREDENTIALS_PATH)
File "/usr/lib/python2.7/posixpath.py", line 259, in expanduser
import pwd
File "[...]/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 822, in load_module
raise ImportError('No module named %s' % fullname)
DBAPIError: (ImportError) No module named pwd None None
I found a workaround. As it stands, SQLAlchemy's gaerdbms dialect can't connect to a local database, but with the dialect below it can. Follow the instructions from this answer, but use this dialect instead:
# mysql/gaerdbms.py
# Copyright (C) 2005-2013 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
.. dialect:: mysql+gaerdbms
:name: Google Cloud SQL
:dbapi: rdbms
:connectstring: mysql+gaerdbms:///<dbname>?instance=<instancename>
:url: https://developers.google.com/appengine/docs/python/cloud-sql/developers-guide
This dialect is based primarily on the :mod:`.mysql.mysqldb` dialect with minimal
changes.
.. versionadded:: 0.7.8
Pooling
-------
Google App Engine connections appear to be randomly recycled,
so the dialect does not pool connections. The :class:`.NullPool`
implementation is installed within the :class:`.Engine` by
default.
"""
import os
import re
from sqlalchemy.dialects.mysql.mysqldb import MySQLDialect_mysqldb
from sqlalchemy.pool import NullPool
class MySQLDialect_gaerdbms(MySQLDialect_mysqldb):
@classmethod
def dbapi(cls):
# from django:
# http://code.google.com/p/googleappengine/source/
# browse/trunk/python/google/storage/speckle/
# python/django/backend/base.py#118
# see also [ticket:2649]
# see also https://stackoverflow.com/q/14224679/34549
if is_production():
# Production mode.
from google.storage.speckle.python.api import rdbms_apiproxy
return rdbms_apiproxy
elif is_remote_mode():
# Development mode with remote database.
from google.storage.speckle.python.api import rdbms_googleapi
return rdbms_googleapi
else:
# Development mode with local database.
from google.appengine.api import rdbms_mysqldb
return rdbms_mysqldb
@classmethod
def get_pool_class(cls, url):
# Cloud SQL connections die at any moment
return NullPool
def create_connect_args(self, url):
opts = url.translate_connect_args()
if is_production() or is_remote_mode():
# 'dsn' and 'instance' are because we are skipping
# the traditional google.api.rdbms wrapper.
# they are not needed in local mode; 'dsn' even causes an error.
opts['dsn'] = ''
opts['instance'] = url.query['instance']
return [], opts
def _extract_error_code(self, exception):
match = re.compile(r"^(\d+):|^\((\d+),").match(str(exception))
# The rdbms api will wrap then re-raise some types of errors
# making this regex return no matches.
code = match.group(1) or match.group(2) if match else None
if code:
return int(code)
dialect = MySQLDialect_gaerdbms
def is_production():
return os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine')
def is_remote_mode():
return os.getenv('SETTINGS_MODE') == 'prod'
This dialect uses a local database by default when running on the development server. To use remote access to Google Cloud SQL during development, a variable must be set in the environment, following the pattern used by Django:
os.environ['SETTINGS_MODE'] = 'prod'
I have two tables, testInstance and bugzilla that are associated by a third one, bzCheck, like this:
class Instance(Base):
__tablename__ = "testInstance"
id = Column(Integer, primary_key=True)
bz_checks = relation(BZCheck, backref="instance")
class BZCheck(Base):
__tablename__ = "bzCheck"
instance_id = Column(Integer, ForeignKey("testInstance.id"), primary_key=True)
bz_id = Column(Integer, ForeignKey("bugzilla.id"), primary_key=True)
status = Column(String, nullable=False)
bug = relation(Bugzilla, backref="checks")
class Bugzilla(Base):
__tablename__ = "bugzilla"
id = Column(Integer, primary_key=True)
The backend is a PostgreSQL server; I'm using SQLAlchemy 0.5.
If I create the Instance, Bugzilla and BZCheck objects, then do
bzcheck.bug = bugzilla
instance.bz_checks.append(bzcheck)
and then add and commit them, everything is fine.
But now, let's assume I have an existing instance and an existing bugzilla and want to associate them:
instance = session.query(Instance).filter(Instance.id == 31).one()
bugzilla = session.query(Bugzilla).filter(Bugzilla.id == 19876).one()
check = BZCheck(status="OK")
check.bug = bugzilla
instance.bz_checks.append(check)
It fails:
In [6]: instance.bz_checks.append(check)
2012-01-09 18:43:50,713 INFO sqlalchemy.engine.base.Engine.0x...3bd0 select nextval('"bzCheck_instance_id_seq"')
2012-01-09 18:43:50,713 INFO sqlalchemy.engine.base.Engine.0x...3bd0 None
2012-01-09 18:43:50,713 INFO sqlalchemy.engine.base.Engine.0x...3bd0 ROLLBACK
It tries to get a new ID from a nonexistent sequence instead of using the foreign key "testInstance.id", and I don't understand why.
I have had similar problems when trying to modify objects after committing them. I must have missed something fundamental, but what?
The part you're missing here is the stack trace. Always look at the stack trace: what is critical here is that this is an autoflush, triggered by the access of instance.bz_checks:
Traceback (most recent call last):
File "test.py", line 44, in <module>
instance.bz_checks.append(check)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 168, in __get__
return self.impl.get(instance_state(instance),dict_)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 453, in get
value = self.callable_(state, passive)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 563, in _load_for_state
result = q.all()
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 1983, in all
return list(self)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2092, in __iter__
self.session._autoflush()
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 973, in _autoflush
self.flush()
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1547, in flush
self._flush(objects)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1616, in _flush
flush_context.execute()
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py", line 328, in execute
rec.execute(self)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py", line 472, in execute
uow
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/mapper.py", line 2291, in _save_obj
execute(statement, params)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1405, in execute
params)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1538, in _execute_clauseelement
compiled_sql, distilled_params
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1646, in _execute_context
context)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1639, in _execute_context
context)
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/default.py", line 330, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (IntegrityError) null value in column "instance_id" violates not-null constraint
'INSERT INTO "bzCheck" (bz_id, status) VALUES (%(bz_id)s, %(status)s) RETURNING "bzCheck".instance_id' {'status': 'OK', 'bz_id': 19876}
You can see this because the line of code is:
instance.bz_checks.append(check)
then autoflush:
self.session._autoflush()
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 973, in _autoflush
Three solutions:
a. Temporarily disable autoflush (see http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisableAutoflush).
b. Ensure that the BZCheck association object is always created with its full state before accessing any collections:
BZCheck(bug=bugzilla, instance=instance)
(This is usually a good idea for association objects: they represent the association between two endpoints, so it's most appropriate that they be instantiated with that state.)
c. Change the cascade rules so that the operation check.bug = somebug doesn't actually place check into the session just yet. You can do this with cascade_backrefs, described at http://www.sqlalchemy.org/docs/orm/session.html#controlling-cascade-on-backrefs (but you'd need to be on 0.6 or 0.7).
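Option (b) can be sketched end to end. This is a self-contained version of the models above against an in-memory SQLite database (using the modern relationship()/declarative API in place of the 0.5-era relation()), creating the association object fully populated before any collection access:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Instance(Base):
    __tablename__ = "testInstance"
    id = Column(Integer, primary_key=True)
    bz_checks = relationship("BZCheck", backref="instance")

class BZCheck(Base):
    __tablename__ = "bzCheck"
    instance_id = Column(Integer, ForeignKey("testInstance.id"), primary_key=True)
    bz_id = Column(Integer, ForeignKey("bugzilla.id"), primary_key=True)
    status = Column(String, nullable=False)
    bug = relationship("Bugzilla", backref="checks")

class Bugzilla(Base):
    __tablename__ = "bugzilla"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Instance(id=31), Bugzilla(id=19876)])
session.commit()

instance = session.get(Instance, 31)
bugzilla = session.get(Bugzilla, 19876)

# The association object gets both endpoints up front, so the autoflush
# triggered by any collection access never sees a half-built row.
check = BZCheck(status="OK", bug=bugzilla, instance=instance)
session.commit()
```

Because both foreign-key relationships are populated at construction time, the flush emits a complete INSERT for the bzCheck row instead of one with a NULL instance_id.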