I am extracting a table row and the corresponding rows from all referenced tables via SQLAlchemy.
Given the following object structure:
class DNAExtractionProtocol(Base):
    __tablename__ = 'dna_extraction_protocols'
    id = Column(Integer, primary_key=True)
    code = Column(String, unique=True)
    name = Column(String)
    sample_mass = Column(Float)
    mass_unit_id = Column(String, ForeignKey('measurement_units.id'))
    mass_unit = relationship("MeasurementUnit", foreign_keys=[mass_unit_id])
    digestion_buffer_id = Column(String, ForeignKey("solutions.id"))
    digestion_buffer = relationship("Solution", foreign_keys=[digestion_buffer_id])
    digestion_buffer_volume = Column(Float)
    digestion_id = Column(Integer, ForeignKey("incubations.id"))
    digestion = relationship("Incubation", foreign_keys=[digestion_id])
    lysis_buffer_id = Column(String, ForeignKey("solutions.id"))
    lysis_buffer = relationship("Solution", foreign_keys=[lysis_buffer_id])
    lysis_buffer_volume = Column(Float)
    lysis_id = Column(Integer, ForeignKey("incubations.id"))
    lysis = relationship("Incubation", foreign_keys=[lysis_id])
    proteinase_id = Column(String, ForeignKey("solutions.id"))
    proteinase = relationship("Solution", foreign_keys=[proteinase_id])
    proteinase_volume = Column(Float)
    inactivation_id = Column(Integer, ForeignKey("incubations.id"))
    inactivation = relationship("Incubation", foreign_keys=[inactivation_id])
    cooling_id = Column(Integer, ForeignKey("incubations.id"))
    cooling = relationship("Incubation", foreign_keys=[cooling_id])
    centrifugation_id = Column(Integer, ForeignKey("incubations.id"))
    centrifugation = relationship("Incubation", foreign_keys=[centrifugation_id])
    volume_unit_id = Column(String, ForeignKey('measurement_units.id'))
    volume_unit = relationship("MeasurementUnit", foreign_keys=[volume_unit_id])
I am using:
sql_query = (session.query(DNAExtractionProtocol)
             .options(Load(DNAExtractionProtocol).joinedload("*"))
             .filter(DNAExtractionProtocol.code == code))
for item in sql_query:
    pass
mystring = str(sql_query)
mydf = pd.read_sql_query(mystring, engine, params=[code])
print(mydf.columns)
This gives me:
Index([u'dna_extraction_protocols_id', u'dna_extraction_protocols_code',
u'dna_extraction_protocols_name',
u'dna_extraction_protocols_sample_mass',
u'dna_extraction_protocols_mass_unit_id',
u'dna_extraction_protocols_digestion_buffer_id',
u'dna_extraction_protocols_digestion_buffer_volume',
u'dna_extraction_protocols_digestion_id',
u'dna_extraction_protocols_lysis_buffer_id',
u'dna_extraction_protocols_lysis_buffer_volume',
u'dna_extraction_protocols_lysis_id',
u'dna_extraction_protocols_proteinase_id',
u'dna_extraction_protocols_proteinase_volume',
u'dna_extraction_protocols_inactivation_id',
u'dna_extraction_protocols_cooling_id',
u'dna_extraction_protocols_centrifugation_id',
u'dna_extraction_protocols_volume_unit_id', u'measurement_units_1_id',
u'measurement_units_1_code', u'measurement_units_1_long_name',
u'measurement_units_1_siunitx', u'solutions_1_id', u'solutions_1_code',
u'solutions_1_name', u'solutions_1_supplier',
u'solutions_1_supplier_id', u'incubations_1_id', u'incubations_1_speed',
u'incubations_1_duration', u'incubations_1_temperature',
u'incubations_1_movement', u'incubations_1_speed_unit_id',
u'incubations_1_duration_unit_id', u'incubations_1_temperature_unit_id',
u'solutions_2_id', u'solutions_2_code', u'solutions_2_name',
u'solutions_2_supplier', u'solutions_2_supplier_id',
u'incubations_2_id', u'incubations_2_speed', u'incubations_2_duration',
u'incubations_2_temperature', u'incubations_2_movement',
u'incubations_2_speed_unit_id', u'incubations_2_duration_unit_id',
u'incubations_2_temperature_unit_id', u'solutions_3_id',
u'solutions_3_code', u'solutions_3_name', u'solutions_3_supplier',
u'solutions_3_supplier_id', u'incubations_3_id', u'incubations_3_speed',
u'incubations_3_duration', u'incubations_3_temperature',
u'incubations_3_movement', u'incubations_3_speed_unit_id',
u'incubations_3_duration_unit_id', u'incubations_3_temperature_unit_id',
u'incubations_4_id', u'incubations_4_speed', u'incubations_4_duration',
u'incubations_4_temperature', u'incubations_4_movement',
u'incubations_4_speed_unit_id', u'incubations_4_duration_unit_id',
u'incubations_4_temperature_unit_id', u'incubations_5_id',
u'incubations_5_speed', u'incubations_5_duration',
u'incubations_5_temperature', u'incubations_5_movement',
u'incubations_5_speed_unit_id', u'incubations_5_duration_unit_id',
u'incubations_5_temperature_unit_id', u'measurement_units_2_id',
u'measurement_units_2_code', u'measurement_units_2_long_name',
u'measurement_units_2_siunitx', u'dna_extractions_1_id',
u'dna_extractions_1_code', u'dna_extractions_1_protocol_id',
u'dna_extractions_1_source_id'],
dtype='object')
This indeed contains all the columns I want, but the naming does not help me select them.
Is it possible to preserve the key names from the original table in this dataframe? E.g. instead of measurement_units_1_code I would like to have mass_unit_code.
This is not what joinedload is supposed to be used for. You want to do an explicit join in this case:
session.query(DNAExtractionProtocol.id.label("id"),
              ...,
              MeasurementUnit.id.label("mass_unit_id"),
              ...) \
    .join(DNAExtractionProtocol.mass_unit) \
    .join(DNAExtractionProtocol.digestion_buffer) \
    ... \
    .filter(...)
If you don't want to type out all those names, you can inspect the DNAExtractionProtocol class to find all relationships and dynamically construct the query and labels. An example:
from sqlalchemy import inspect
from sqlalchemy.orm import aliased

cols = []
joins = []

insp = inspect(DNAExtractionProtocol)
for name, col in insp.columns.items():
    cols.append(col.label(name))
for name, rel in insp.relationships.items():
    alias = aliased(rel.mapper.class_, name=name)
    for col_name, col in inspect(rel.mapper).columns.items():
        aliased_col = getattr(alias, col.key)
        cols.append(aliased_col.label("{}_{}".format(name, col_name)))
    joins.append((alias, rel.class_attribute))

query = session.query(*cols).select_from(DNAExtractionProtocol)
for join in joins:
    query = query.join(*join)
EDIT: Depending on your data structure you might need to use outerjoin instead of join on the last line.
You'll probably need to tweak this to your liking. For example, this doesn't take into account potential naming conflicts, e.g. for mass_unit_id, is it DNAExtractionProtocol.mass_unit_id or is it MeasurementUnit.id?
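If those conflicts matter for your data, one small tweak (a sketch reusing insp and cols from the example above) is to prefix the protocol's own columns with the table name as well, so the two namespaces cannot collide:

for name, col in insp.columns.items():
    # yields e.g. dna_extraction_protocols_mass_unit_id, which can no
    # longer clash with the relationship-derived mass_unit_id label
    cols.append(col.label("{}_{}".format(DNAExtractionProtocol.__tablename__, name)))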
In addition, you'll probably want to execute sql_query.statement instead of str(sql_query). str(sql_query) is for printing purposes, not for execution. I believe you don't need to pass params=[code] if you use sql_query.statement because code will already have been bound to the appropriate parameter in the query.
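A minimal sketch of that last point, reusing sql_query, engine, and pd from the question:

# sql_query.statement carries the bound value of `code`,
# so no separate params argument is needed.
mydf = pd.read_sql_query(sql_query.statement, engine)
print(mydf.columns)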
I'm using SQLAlchemy 1.4 to build my database models (PostgreSQL).
I've established relationships between my models, which I follow using the different SQLAlchemy capabilities. When doing so, the fields of the related models get aliases that don't work for me.
Here's an example of one of my models:
from datetime import datetime

from sqlalchemy import Column, DateTime, ForeignKey, Integer, func
from sqlalchemy.orm import relationship


class Process(declarative_model()):
    """Process database table class.

    Process model. It contains all the information about one process
    iteration. This is the process of capturing an image with all the
    provided cameras, preprocessing the images, and making a prediction
    for them, as well as computing the results.
    """

    id: int = Column(Integer, primary_key=True, index=True, autoincrement=True)
    """Model primary key."""
    petition_id: int = Column(Integer, ForeignKey("petition.id", ondelete="CASCADE"))
    """Foreign key to the related petition."""
    petition: "Petition" = relationship("Petition", backref="processes", lazy="joined")
    """Related petition object."""
    camera_id: int = Column(Integer, ForeignKey("camera.id", ondelete="CASCADE"))
    """Foreign key to the related camera."""
    camera: "Camera" = relationship("Camera", backref="processes", lazy="joined")
    """Related camera object."""
    n: int = Column(Integer, comment="Iteration number for the given petition.")
    """Iteration number for the given petition."""
    image: "Image" = relationship(
        "Image", back_populates="process", uselist=False, lazy="joined"
    )
    """Related image object."""
    datetime_init: datetime = Column(DateTime(timezone=True), server_default=func.now())
    """Datetime when the process started."""
    datetime_end: datetime = Column(DateTime(timezone=True), nullable=True)
    """Datetime when the process finished if so."""
The model works perfectly and joins the data by default as expected, so far so good.
My problem comes when I make a query and extract the results through query.all() or through pd.read_sql(query.statement, db).
Reading the documentation, I expected aliases for my fields like "{table_name}.{field}", but instead I'm getting ones like "{field}_{counter}". Here's an example of a query.statement for my model:
SELECT process.id, process.petition_id, process.camera_id, process.n, process.datetime_init, process.datetime_end, asset_quality_1.id AS id_2, asset_quality_1.code AS code_1, asset_quality_1.name AS name_1, asset_quality_1.active AS active_1, asset_quality_1.stock_quality_id, pit_door_1.id AS id_3, pit_door_1.code AS code_2, petition_1.id AS id_4, petition_1.user_id, petition_1.user_code, petition_1.load_code, petition_1.provider_code, petition_1.origin_code, petition_1.asset_quality_initial_id, petition_1.pit_door_id, petition_1.datetime_init AS datetime_init_1, petition_1.datetime_end AS datetime_end_1, mask_1.id AS id_5, mask_1.camera_id AS camera_id_1, mask_1.prefix_path, mask_1.position, mask_1.format, camera_1.id AS id_6, camera_1.code AS code_3, camera_1.pit_door_id AS pit_door_id_1, camera_1.position AS position_1, image_1.id AS id_7, image_1.prefix_path AS prefix_path_1, image_1.format AS format_1, image_1.process_id
FROM process LEFT OUTER JOIN petition AS petition_1 ON petition_1.id = process.petition_id LEFT OUTER JOIN asset_quality AS asset_quality_1 ON asset_quality_1.id = petition_1.asset_quality_initial_id LEFT OUTER JOIN stock_quality AS stock_quality_1 ON stock_quality_1.id = asset_quality_1.stock_quality_id LEFT OUTER JOIN pit_door AS pit_door_1 ON pit_door_1.id = petition_1.pit_door_id LEFT OUTER JOIN camera AS camera_1 ON camera_1.id = process.camera_id LEFT OUTER JOIN mask AS mask_1 ON camera_1.id = mask_1.camera_id LEFT OUTER JOIN image AS image_1 ON process.id = image_1.process_id
Does anybody know how I can change this behavior and make it alias the fields like "{table_name}_{field}"?
SQLAlchemy uses label styles to configure how columns are labelled in SQL statements. The default in 1.4.x is LABEL_STYLE_DISAMBIGUATE_ONLY, which will add a "counter" for columns with the same name in a query. LABEL_STYLE_TABLENAME_PLUS_COL is closer to what you want.
Default:
from sqlalchemy import LABEL_STYLE_DISAMBIGUATE_ONLY, LABEL_STYLE_TABLENAME_PLUS_COL

q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_DISAMBIGUATE_ONLY)
print(q)
gives
SELECT t1.id, t1.child_id, t2.id AS id_1
FROM t1 JOIN t2 ON t2.id = t1.child_id
whereas
q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
print(q)
generates
SELECT t1.id AS t1_id, t1.child_id AS t1_child_id, t2.id AS t2_id
FROM t1 JOIN t2 ON t2.id = t1.child_id
If you want to enforce a style for all ORM queries you could subclass Session:

class MySession(orm.Session):
    _label_style = LABEL_STYLE_TABLENAME_PLUS_COL

and use this class for your sessions, or pass it to a sessionmaker, if you use one:

Session = orm.sessionmaker(engine, class_=MySession)
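For a single query, e.g. one fed to pandas, the style can instead be set on the query itself; a sketch assuming the Process model and the session and db objects from the question:

query = session.query(Process).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
df = pd.read_sql(query.statement, db)  # columns now follow the {table}_{field} pattern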
Alternatively, you can rename individual fields by selecting labelled column expressions in the query rather than whole entities (note that Column and relationship do not accept a label argument themselves).
For example, to give a custom name to the process.petition_id field, you can use:

session.query(Process.petition_id.label('process_petition_id'), ...)

And for fields coming from the petition relationship, label the Petition columns the same way:

session.query(..., Petition.id.label('petition_id')).join(Process.petition)

With this, the fields are aliased to process_petition_id and petition_id respectively.
I have two classes:
Trade
TradeItem
One Trade has multiple TradeItems. It's a simple 1-to-many relationship.
Trade
class Trade(Base):
    __tablename__ = 'trade'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    trade_items = relationship('TradeItem', back_populates='trade')
TradeItem
class TradeItem(Base):
    __tablename__ = 'trade_item'
    id = Column(Integer, primary_key=True)
    location = Column(String)
    trade_id = Column(Integer, ForeignKey('trade.id'))
    trade = relationship('Trade', back_populates='trade_items')
Filtering
session.query(Trade).filter(Trade.name == trade_name and TradeItem.location == location)
I want to get the Trade object by name and only those TradeItems whose location matches the given location name.
So if there is a trade object Trade(name='Trade1') and it has 3 items:
TradeItem(location='loc1'), TradeItem(location='loc2'), TradeItem(location='loc2')
then the query should return only my target trade items for the trade:
session.query(Trade).filter(Trade.name == 'Trade1' and TradeItem.location == 'loc1').first()
I expect the query to return the Trade named 'Trade1', but with only one TradeItem populated: the one with location 'loc1'.
But when I get the trade object, it has all of the TradeItems; the filter condition is completely ignored.
You can use the contains_eager option to load only the child rows selected by the filter. Note that contains_eager does not add the JOIN itself, so the join must be stated explicitly in the query.
Note that to create a WHERE ... AND ... filter you can use sqlalchemy.and_, or chain filter or filter_by methods, but using Python's logical and will not work as expected.
from sqlalchemy import orm, and_

trade = (session.query(Trade)
         .join(Trade.trade_items)  # explicit join required by contains_eager
         .filter(and_(Trade.name == 'Trade1', TradeItem.location == 'loc1'))
         .options(orm.contains_eager(Trade.trade_items))
         .first())
The query generates this SQL:
SELECT trade_item.id AS trade_item_id, trade_item.location AS trade_item_location, trade_item.trade_id AS trade_item_trade_id, trade.id AS trade_id, trade.name AS trade_name
FROM trade
JOIN trade_item ON trade.id = trade_item.trade_id
WHERE trade.name = 'trade1' AND trade_item.location = 'loc1'
LIMIT 1 OFFSET 0
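One caveat: if the Trade is already present in the session with its full collection loaded, the filtered collection may not overwrite it; adding Query.populate_existing() forces a refresh. A sketch of the same query with that option:

trade = (session.query(Trade)
         .join(Trade.trade_items)
         .filter(and_(Trade.name == 'Trade1', TradeItem.location == 'loc1'))
         .options(orm.contains_eager(Trade.trade_items))
         .populate_existing()
         .first())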
I am writing a booking system that uses SQLite, and one of the CSV files has time as a variable. I need it stored as a time type because I do operations on the time, but I get this error message:
SQLite Time type only accepts Python time objects as input.
How do I get around this?
My code is below.
class Flight(Base):
    __tablename__ = 'flights'
    id = Column(Integer, primary_key=True)
    planeid = Column(Integer, ForeignKey('planes.id'))
    leave = Column(Time)
    arrive = Column(Time)
    date = Column(Date)
    passengers = Column(Integer)
    destination = Column(String)
    bookings = relationship("Booking", back_populates='flights')
    plane = relationship("Plane", back_populates='flight')

...

if session.query(Flight).count() == 0:
    with open("flights.csv", "r") as flights_file:
        lines = flights_file.readlines()
        for line in lines:
            _, planeid, leave, arrive, date, passengers, destination = line.rstrip().split(",")
            new_flight = Flight(planeid=planeid,
                                leave=leave,
                                arrive=arrive,
                                date=date,
                                passengers=passengers,
                                destination=destination)
            objects_to_add.append(new_flight)
    session.add_all(objects_to_add)
    session.commit()
Are you importing Time from sqlalchemy? Please check that first:

from sqlalchemy import Column, Time

arrive = Column(Time)

Note also that the values read from the CSV are plain strings; the Time and Date column types only accept Python time and date objects, so the strings have to be parsed before constructing each Flight.
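A minimal sketch of that conversion, assuming flights.csv stores times as HH:MM and dates as YYYY-MM-DD (hypothetical formats; adjust the strptime patterns to your file):

from datetime import datetime

# Parse the CSV strings into real time/date objects before building the model.
new_flight = Flight(planeid=int(planeid),
                    leave=datetime.strptime(leave, "%H:%M").time(),
                    arrive=datetime.strptime(arrive, "%H:%M").time(),
                    date=datetime.strptime(date, "%Y-%m-%d").date(),
                    passengers=int(passengers),
                    destination=destination)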
Given a model like this:
class MyModel(Base):
    __tablename__ = 'my_model'
    id = Column(Integer, nullable=False, primary_key=True, index=True)
    value = Column(Numeric, doc='value')
    batch = Column(Integer, Sequence('my_model_batch_seq'), doc='Batch ID of the update')
I want to issue a batch insert that adds all the new objects with the same batch ID. The code below increments the sequence for each object, which is not what I'm looking for.
objects = [
    MyModel(
        value=x,
    ) for x in range(10)
]
db.bulk_save_objects(objects)
If I've understood you correctly, you could first select the next value explicitly:
# Note that this may fail if you haven't configured a bind on
# your Session.
batch = db.query(func.nextval('my_model_batch_seq')).scalar()
and then just pass it along:
objects = [
    MyModel(
        value=x,
        batch=batch,
    ) for x in range(10)
]
db.bulk_save_objects(objects)
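If you'd rather not hard-code nextval (a PostgreSQL-specific function), the Sequence object named in the model can generate the same expression; a sketch under that assumption:

from sqlalchemy import Sequence

batch = db.query(Sequence('my_model_batch_seq').next_value()).scalar()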
I'm using a modified version of the versioning code example that comes with SQLAlchemy to record a user id and date on changes. However, I also want to modify it so deletes are done by setting an is_deleted-type flag instead of running an actual SQL DELETE. My problem is that I'm not sure how to capture the delete and replace it with an update.
Here's what I have so far:
''' http://docs.sqlalchemy.org/en/rel_0_8/orm/examples.html?highlight=versioning#versioned-objects '''
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import mapper, class_mapper, attributes, object_mapper, scoping
from sqlalchemy.orm.session import Session
from sqlalchemy.orm.exc import UnmappedClassError, UnmappedColumnError
from sqlalchemy import Table, Column, ForeignKeyConstraint, DateTime, String, Boolean
from sqlalchemy import event
from sqlalchemy.orm.properties import RelationshipProperty
from datetime import datetime
from sqlalchemy.schema import ForeignKey
from sqlalchemy.sql.expression import false
def col_references_table(col, table):
    for fk in col.foreign_keys:
        if fk.references(table):
            return True
    return False
def _history_mapper(local_mapper):
    cls = local_mapper.class_

    # set the "active_history" flag
    # on column-mapped attributes so that the old version
    # of the info is always loaded (currently sets it on all attributes)
    for prop in local_mapper.iterate_properties:
        getattr(local_mapper.class_, prop.key).impl.active_history = True

    super_mapper = local_mapper.inherits
    super_history_mapper = getattr(cls, '__history_mapper__', None)

    polymorphic_on = None
    super_fks = []

    if not super_mapper or local_mapper.local_table is not super_mapper.local_table:
        cols = []
        for column in local_mapper.local_table.c:
            if column.name.startswith('version_'):
                continue

            col = column.copy()
            col.unique = False

            if super_mapper and col_references_table(column, super_mapper.local_table):
                super_fks.append((col.key, list(super_history_mapper.local_table.primary_key)[0]))

            cols.append(col)

            if column is local_mapper.polymorphic_on:
                polymorphic_on = col

        if super_mapper:
            super_fks.append(('version_datetime', super_history_mapper.base_mapper.local_table.c.version_datetime))
            super_fks.append(('version_userid', super_history_mapper.base_mapper.local_table.c.version_userid))
            super_fks.append(('version_deleted', super_history_mapper.base_mapper.local_table.c.version_deleted))
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}}))
        else:
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}}))

        if super_fks:
            cols.append(ForeignKeyConstraint(*zip(*super_fks)))

        table = Table(local_mapper.local_table.name + '_history',
                      local_mapper.local_table.metadata,
                      *cols)
    else:
        # single table inheritance. take any additional columns that may have
        # been added and add them to the history table.
        for column in local_mapper.local_table.c:
            if column.key not in super_history_mapper.local_table.c:
                col = column.copy()
                col.unique = False
                super_history_mapper.local_table.append_column(col)
        table = None

    if super_history_mapper:
        bases = (super_history_mapper.class_,)
    else:
        bases = local_mapper.base_mapper.class_.__bases__
    versioned_cls = type.__new__(type, "%sHistory" % cls.__name__, bases, {})

    m = mapper(
        versioned_cls,
        table,
        inherits=super_history_mapper,
        polymorphic_on=polymorphic_on,
        polymorphic_identity=local_mapper.polymorphic_identity
    )
    cls.__history_mapper__ = m

    if not super_history_mapper:
        local_mapper.local_table.append_column(
            Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=False, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_datetime", local_mapper.local_table.c.version_datetime)
        local_mapper.local_table.append_column(
            Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_userid", local_mapper.local_table.c.version_userid)
        local_mapper.local_table.append_column(
            Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_deleted", local_mapper.local_table.c.version_deleted)
class Versioned(object):
    @declared_attr
    def __mapper_cls__(cls):
        def map(cls, *arg, **kw):
            mp = mapper(cls, *arg, **kw)
            _history_mapper(mp)
            return mp
        return map
def versioned_objects(iter):
    for obj in iter:
        if hasattr(obj, '__history_mapper__'):
            yield obj
def create_version(obj, session, deleted=False):
    obj_mapper = object_mapper(obj)
    history_mapper = obj.__history_mapper__
    history_cls = history_mapper.class_

    obj_state = attributes.instance_state(obj)

    attr = {}

    obj_changed = False

    for om, hm in zip(obj_mapper.iterate_to_root(), history_mapper.iterate_to_root()):
        if hm.single:
            continue

        for hist_col in hm.local_table.c:
            if hist_col.key.startswith('version_'):
                continue

            obj_col = om.local_table.c[hist_col.key]

            # get the value of the
            # attribute based on the MapperProperty related to the
            # mapped column. this will allow usage of MapperProperties
            # that have a different keyname than that of the mapped column.
            try:
                prop = obj_mapper.get_property_by_column(obj_col)
            except UnmappedColumnError:
                # in the case of single table inheritance, there may be
                # columns on the mapped table intended for the subclass only.
                # the "unmapped" status of the subclass column on the
                # base class is a feature of the declarative module as of sqla 0.5.2.
                continue

            # expired object attributes and also deferred cols might not be in the
            # dict. force it to load no matter what by using getattr().
            if prop.key not in obj_state.dict:
                getattr(obj, prop.key)

            a, u, d = attributes.get_history(obj, prop.key)

            if d:
                attr[hist_col.key] = d[0]
                obj_changed = True
            elif u:
                attr[hist_col.key] = u[0]
            else:
                # if the attribute had no value.
                attr[hist_col.key] = a[0]
                obj_changed = True

    if not obj_changed:
        # not changed, but we have relationships. OK
        # check those too
        for prop in obj_mapper.iterate_properties:
            if isinstance(prop, RelationshipProperty) and \
                    attributes.get_history(obj, prop.key).has_changes():
                obj_changed = True
                break

    if not obj_changed and not deleted:
        return

    attr['version_datetime'] = obj.version_datetime
    attr['version_userid'] = obj.version_userid
    attr['version_deleted'] = obj.version_deleted
    hist = history_cls()
    for key, value in attr.items():
        setattr(hist, key, value)
    session.add(hist)
    obj.version_datetime = datetime.now()
    obj.version_userid = getattr(session, 'userid', None)
    obj.version_deleted = deleted
def versioned_session(session):
    @event.listens_for(session, 'before_flush')
    def before_flush(session, flush_context, instances):
        for obj in versioned_objects(session.deleted):
            create_version(obj, session, deleted=True)
        for obj in versioned_objects(session.dirty):
            create_version(obj, session)
def add_userid_to_session(userid, session):
    if isinstance(session, scoping.scoped_session):
        thread_local_session = session.registry()
        thread_local_session.userid = userid
    elif isinstance(session, Session):
        session.userid = userid
    else:
        raise TypeError("Not sure how to add the userid into session of type {}".format(type(session)))
And here's how I'm using it (all non-essential parts have been cut out):
Base = declarative_base()

class User(Versioned, Base):
    __tablename__ = 'user'
    login = Column(String(60), primary_key=True, nullable=False)
    groups = association_proxy('user_to_groups', 'group',
                               creator=lambda group: UserToGroup(group_name=group.name))

    def __init__(self, login, groups=None):
        self.login = login
        if groups:
            for group in groups:
                self.groups.append(group)

class Group(Versioned, Base):
    __tablename__ = 'group'
    name = Column(String(100), primary_key=True, nullable=False)
    description = Column(String(100), nullable=True)
    users = association_proxy('group_to_user', 'user',
                              creator=lambda user: UserToGroup(user_login=user.login))

    def __eq__(self, other):
        return self.name == other.name

class UserToGroup(Versioned, Base):
    __tablename__ = 'user_to_group'
    user_login = Column(String(60), ForeignKey(User.login), primary_key=True)
    group_name = Column(String(100), ForeignKey(Group.name), primary_key=True)
    user = relationship(User, backref=backref('user_to_groups', cascade='all, delete-orphan'))
    group = relationship(Group, backref=backref('group_to_user', cascade='all, delete-orphan'))

session.configure(bind=engine)
add_userid_to_session("test", session.registry())
versioned_session(session)

user = session.query(User).filter(User.login == 'test').one()
user.groups.remove(Group(name="g:admin"))
Before running that code the database currently has one user called 'test' and two groups that the user is attached to called 'g:admin' and 'g:superadmin'.
What it currently does is: copy the existing user_to_group entry for the 'test' => 'g:admin' mapping to the history table, then delete the entry from user_to_group.
What I'd like it to do is copy the value to the history table and then update the entry in user_to_group to have version_deleted set to true.
I'm thinking the way to do that is to snatch the entry out of session.deleted (that's why I changed the order from the original code), modify it, and put it into session.dirty. I'm just not sure what the safest way of doing this is.
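A rough sketch of that idea inside the flush hook, under two loud assumptions: that session.add() on an instance taken from session.deleted cancels the pending DELETE (worth verifying against your SQLAlchemy version), and that version_deleted is the flag to flip:

def before_flush(session, flush_context, instances):
    for obj in list(versioned_objects(session.deleted)):
        create_version(obj, session, deleted=True)
        obj.version_deleted = True
        session.add(obj)  # assumption: reverts the DELETE so the flag change flushes as an UPDATE
    for obj in versioned_objects(session.dirty):
        create_version(obj, session)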
Another issue (which will likely require another question) is how to detect relationships which are covered in another table, as currently the system makes a copy of the 'user' row into the history table and then updates the version information despite no real changes being made to the row.
EDIT: I've decided to do things a bit differently, but still have a problem. Instead of having a "deleted" flag in the live tables, I actually delete the content and record another history item indicating when the deletion occurred. If I'm deleting an object directly, this works correctly. If I delete an object off of a relationship, I'm not able to do it properly: a DELETE gets issued to the relationship table to remove the link, but I can't seem to figure out how to detect that deletion in the create_version method.
For example, if I do:
group = session.query(Group).filter(Group.name == 'g:admin').one()
group.users.remove(group.users[0])
No objects are placed in session.deleted. I can detect some sort of deletion via attributes.get_history(obj, prop.key), but it seems to indicate a deletion of a UserToGroup object from Group (which I want to detect and record a history item on), and then also a deletion of a Group from the UserToGroup object (which I don't want to act on, because the actual Group is not being deleted).
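For what it's worth, a hedged sketch of walking those collection changes in before_flush; the pieces (object_mapper, RelationshipProperty, attributes.get_history) are all used elsewhere in the module, but deciding which side of the User/Group pair to record is left open:

def before_flush(session, flush_context, instances):
    for obj in versioned_objects(session.dirty):
        for prop in object_mapper(obj).iterate_properties:
            if not isinstance(prop, RelationshipProperty):
                continue
            hist = attributes.get_history(obj, prop.key)
            for removed in hist.deleted:
                # `removed` is e.g. the UserToGroup instance dropped from
                # Group.group_to_user; only record a history entry when the
                # link row itself is going away, not when a parent is merely
                # detached from the link object.
                pass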