SQLAlchemy logging of changes with date and user - python

This is very similar to another question that's over 3 years old: What's a good general way to log SQLAlchemy transactions, complete with authenticated user, etc.?
I'm working on an application where I'd like to log all changes to particular tables. There's currently a really good "recipe" that does versioning, but I need to modify it to instead record a datetime when the change occurred and a user id of who made the change. I took the history_meta.py example that's packaged with SQLAlchemy and made it record times instead of version numbers, but I'm having trouble figuring out how to pass in a user id.
The question I referenced above suggests including the user id in the session object. That makes a lot of sense, but I'm not sure how to do that. I've tried something simple like session.userid = authenticated_userid(request) but in history_meta.py that attribute doesn't seem to be on the session object any more.
I'm doing all of this in the Pyramid framework and the session object that I'm using is defined as DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension())). In a view I do session = DBSession() and then proceed to use session. (I'm not really sure if that's necessary, but that's what's going on)
Here's my modified history_meta.py in case someone might find it useful:
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import mapper, class_mapper, attributes, object_mapper
from sqlalchemy.orm.exc import UnmappedClassError, UnmappedColumnError
from sqlalchemy import Table, Column, ForeignKeyConstraint, DateTime
from sqlalchemy import event
from sqlalchemy.orm.properties import RelationshipProperty
from datetime import datetime


def col_references_table(col, table):
    for fk in col.foreign_keys:
        if fk.references(table):
            return True
    return False


def _history_mapper(local_mapper):
    cls = local_mapper.class_

    # set the "active_history" flag
    # on column-mapped attributes so that the old version
    # of the info is always loaded (currently sets it on all attributes)
    for prop in local_mapper.iterate_properties:
        getattr(local_mapper.class_, prop.key).impl.active_history = True

    super_mapper = local_mapper.inherits
    super_history_mapper = getattr(cls, '__history_mapper__', None)

    polymorphic_on = None
    super_fks = []

    if not super_mapper or local_mapper.local_table is not super_mapper.local_table:
        cols = []
        for column in local_mapper.local_table.c:
            if column.name == 'version_datetime':
                continue

            col = column.copy()
            col.unique = False

            if super_mapper and col_references_table(column, super_mapper.local_table):
                super_fks.append((col.key, list(super_history_mapper.local_table.primary_key)[0]))

            cols.append(col)

            if column is local_mapper.polymorphic_on:
                polymorphic_on = col

        if super_mapper:
            super_fks.append(('version_datetime', super_history_mapper.base_mapper.local_table.c.version_datetime))
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True))
        else:
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True))

        if super_fks:
            cols.append(ForeignKeyConstraint(*zip(*super_fks)))

        table = Table(local_mapper.local_table.name + '_history', local_mapper.local_table.metadata,
                      *cols)
    else:
        # single table inheritance. take any additional columns that may have
        # been added and add them to the history table.
        for column in local_mapper.local_table.c:
            if column.key not in super_history_mapper.local_table.c:
                col = column.copy()
                col.unique = False
                super_history_mapper.local_table.append_column(col)
        table = None

    if super_history_mapper:
        bases = (super_history_mapper.class_,)
    else:
        bases = local_mapper.base_mapper.class_.__bases__
    versioned_cls = type.__new__(type, "%sHistory" % cls.__name__, bases, {})

    m = mapper(
        versioned_cls,
        table,
        inherits=super_history_mapper,
        polymorphic_on=polymorphic_on,
        polymorphic_identity=local_mapper.polymorphic_identity
    )
    cls.__history_mapper__ = m

    if not super_history_mapper:
        local_mapper.local_table.append_column(
            Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=False)
        )
        local_mapper.add_property("version_datetime", local_mapper.local_table.c.version_datetime)


class Versioned(object):
    @declared_attr
    def __mapper_cls__(cls):
        def map(cls, *arg, **kw):
            mp = mapper(cls, *arg, **kw)
            _history_mapper(mp)
            return mp
        return map


def versioned_objects(iter):
    for obj in iter:
        if hasattr(obj, '__history_mapper__'):
            yield obj


def create_version(obj, session, deleted=False):
    obj_mapper = object_mapper(obj)
    history_mapper = obj.__history_mapper__
    history_cls = history_mapper.class_

    obj_state = attributes.instance_state(obj)

    attr = {}
    obj_changed = False

    for om, hm in zip(obj_mapper.iterate_to_root(), history_mapper.iterate_to_root()):
        if hm.single:
            continue

        for hist_col in hm.local_table.c:
            if hist_col.key == 'version_datetime':
                continue

            obj_col = om.local_table.c[hist_col.key]

            # get the value of the
            # attribute based on the MapperProperty related to the
            # mapped column. this will allow usage of MapperProperties
            # that have a different keyname than that of the mapped column.
            try:
                prop = obj_mapper.get_property_by_column(obj_col)
            except UnmappedColumnError:
                # in the case of single table inheritance, there may be
                # columns on the mapped table intended for the subclass only.
                # the "unmapped" status of the subclass column on the
                # base class is a feature of the declarative module as of sqla 0.5.2.
                continue

            # expired object attributes and also deferred cols might not be in the
            # dict. force it to load no matter what by using getattr().
            if prop.key not in obj_state.dict:
                getattr(obj, prop.key)

            a, u, d = attributes.get_history(obj, prop.key)

            if d:
                attr[hist_col.key] = d[0]
                obj_changed = True
            elif u:
                attr[hist_col.key] = u[0]
            else:
                # if the attribute had no value.
                attr[hist_col.key] = a[0]
                obj_changed = True

    if not obj_changed:
        # not changed, but we have relationships. OK
        # check those too
        for prop in obj_mapper.iterate_properties:
            if isinstance(prop, RelationshipProperty) and \
                    attributes.get_history(obj, prop.key).has_changes():
                obj_changed = True
                break

    if not obj_changed and not deleted:
        return

    attr['version_datetime'] = obj.version_datetime
    hist = history_cls()
    for key, value in attr.items():
        setattr(hist, key, value)
    session.add(hist)
    obj.version_datetime = datetime.now()


def versioned_session(session):
    @event.listens_for(session, 'before_flush')
    def before_flush(session, flush_context, instances):
        for obj in versioned_objects(session.dirty):
            create_version(obj, session)
        for obj in versioned_objects(session.deleted):
            create_version(obj, session, deleted=True)
UPDATE:
Okay, it seems that in the before_flush() method the session I get is of type sqlalchemy.orm.session.Session where the session I attached the user_id to was sqlalchemy.orm.scoping.scoped_session. So, at some point an object layer is stripped off. Is it safe to assign the user_id to the Session within the scoped_session? Can I be sure that it won't be there for other requests?

Old question, but still very relevant.
You should avoid trying to place web session information on the database session. It combines unrelated concerns, and each has its own lifecycle (and they don't match). Here's an approach I use in Flask with SQLAlchemy (not Flask-SQLAlchemy, but that should work too). I've tried to comment where Pyramid would be different.
from flask import has_request_context  # how to check if in a Flask request context
from sqlalchemy import inspect
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.attributes import get_history
from sqlalchemy.event import listen

from YOUR_SESSION_MANAGER import get_user  # this would be something in Pyramid
from my_project import models  # where your models are defined


def get_object_changes(obj):
    """Given a model instance, returns dict of pending
    changes waiting for database flush/commit.

    e.g. {
        'some_field': {
            'before': *SOME-VALUE*,
            'after': *SOME-VALUE*
        },
        ...
    }
    """
    inspection = inspect(obj)
    changes = {}
    for attr in class_mapper(obj.__class__).column_attrs:
        if getattr(inspection.attrs, attr.key).history.has_changes():
            if get_history(obj, attr.key)[2]:
                before = get_history(obj, attr.key)[2].pop()
                after = getattr(obj, attr.key)
                if before != after:
                    if before or after:
                        changes[attr.key] = {'before': before, 'after': after}
    return changes


def my_model_change_listener(mapper, connection, target):
    changes = get_object_changes(target)
    changes.pop("modify_ts", None)  # remove fields you don't want to track

    user_id = None
    if has_request_context():
        # call your function to get the active user and extract the id
        user_id = getattr(get_user(), 'id', None)

    if user_id is None:
        # what do you want to do if the user can't be determined?
        pass

    # You now have the model instance (target), the user_id who is logged in,
    # and a dictionary of changes.
    # Either do something "quick" with it here or call an async task (e.g.
    # Celery) to do something with the information that may take longer
    # than you want the request to take.


# add the listener
listen(models.MyModel, 'after_update', my_model_change_listener)
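The pending-change inspection this answer relies on can be seen in isolation. A minimal self-contained sketch (the `Account` model and column names are invented for illustration) showing the same attribute-history API that `get_object_changes` walks over:

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Account(Base):  # hypothetical model, for illustration only
    __tablename__ = "account"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Account(name="alice"))
session.commit()

acct = session.query(Account).first()
acct.name = "bob"  # pending change, not yet flushed

# the history API: deleted holds the old value, the attribute holds the new
hist = inspect(acct).attrs.name.history
assert hist.has_changes()
print({"name": {"before": hist.deleted[0], "after": acct.name}})
# → {'name': {'before': 'alice', 'after': 'bob'}}
```

Because the history is read before the flush, hooking this into `before_flush` (or `after_update`, as above) sees both the old and new values.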

After a bunch of fiddling I seem to be able to set values on the session object within the scoped_session by doing the following:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
session = DBSession()
inner_session = session.registry()
inner_session.user_id = "test"
versioned_session(session)
Now the session object being passed around in history_meta.py has a user_id attribute on it which I set. I'm a little concerned about whether this is the right way of doing this as the object in the registry is a thread-local one and the threads are being re-used for different http requests.
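An alternative that avoids ad-hoc attributes on the thread-local Session is the `Session.info` dictionary, which SQLAlchemy provides for exactly this kind of per-session user data. A minimal sketch (the `user_id` key is arbitrary; it assumes the scoped session is `remove()`d at the end of each request, as transaction-management integration normally arranges):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
DBSession = scoped_session(sessionmaker(bind=engine))

session = DBSession()
session.info["user_id"] = "test"   # per-request user data lives on the Session

# DBSession() and DBSession.registry() hand back the very same Session
# object, so anything stored in .info is visible to flush-time event
# handlers that receive the plain Session:
assert DBSession.registry() is session
assert DBSession().info["user_id"] == "test"

DBSession.remove()                 # request teardown
# a later request gets a fresh Session with an empty .info dict,
# so stale user ids cannot leak across re-used threads:
assert "user_id" not in DBSession().info
```

This addresses the thread-reuse concern: the value disappears together with the Session, rather than lingering on a thread-local object.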

I ran into this old question recently. My requirement is to log all changes to a set of tables.
I'll post the code I ended up with here in case anyone finds it useful. It has some limitations, especially around deletes, but works for my purposes. The code supports logging audit records for selected tables to either a log file, or an audit table in the db.
from app import db
import datetime
from flask import current_app, g
# your own session user goes here
# you'll need an id and an email in that model
from flask_user import current_user as user
import importlib
import logging
from sqlalchemy import event, inspect
from sqlalchemy.orm.attributes import get_history
from sqlalchemy.orm import ColumnProperty, class_mapper
from uuid import uuid4


class AuditManager(object):
    config = {'storage': 'log',
              # define class for Audit model for your project, if saving audit records in db
              'auditModel': 'app.models.user_models.Audit'}

    def __init__(self, app):
        if 'AUDIT_CONFIG' in app.config:
            app.before_request(self.before_request_handler)
            self.config.update(app.config['AUDIT_CONFIG'])
            event.listen(
                db.session,
                'after_flush',
                self.db_after_flush
            )
            event.listen(
                db.session,
                'before_flush',
                self.db_before_flush
            )
            event.listen(
                db.session,
                'after_bulk_delete',
                self.db_after_bulk_delete
            )
            if self.config['storage'] == 'log':
                self.logger = logging.getLogger(__name__)
            elif self.config['storage'] == 'db':
                # load the Audit model class at runtime, so that log file users don't need to define it
                module_name, class_name = self.config['auditModel'].rsplit(".", 1)
                self.AuditModel = getattr(importlib.import_module(module_name), class_name)

    # Create a global request id.
    # Use this to group transactions together.
    def before_request_handler(self):
        g.request_id = uuid4()

    def db_after_flush(self, session, flush_context):
        for instance in session.new:
            if instance.__tablename__ in self.config['tables']:
                # record the inserts for this table
                data = {}
                auditFields = getattr(instance.__class__, 'Meta', None)
                auditFields = getattr(auditFields,
                                      'auditFields',  # prefer to list auditable fields explicitly in the model's Meta class
                                      self.get_fields(instance))  # or derive them otherwise
                for attr in auditFields:
                    data[attr] = str(getattr(instance, attr, 'not set'))  # make every value a string in audit
                self.log_it(session, 'insert', instance, data)

    def db_before_flush(self, session, flush_context, instances):
        for instance in session.dirty:
            # record the changes for this table
            if instance.__tablename__ in self.config['tables']:
                inspection = inspect(instance)
                data = {}
                auditFields = getattr(instance.__class__, 'Meta', None)
                auditFields = getattr(auditFields, 'auditFields', self.get_fields(instance))
                for attr in auditFields:
                    if getattr(inspection.attrs, attr).history.has_changes():  # we only log the new data
                        data[attr] = str(getattr(instance, attr, 'not set'))
                self.log_it(session, 'change', instance, data)
        for instance in session.deleted:
            # Record the deletes for this table.
            # For this to be triggered, you must use the session-based delete
            # object construct, e.g.: session.delete(query.first())
            if instance.__tablename__ in self.config['tables']:
                data = {}
                auditFields = getattr(instance.__class__, 'Meta', None)
                auditFields = getattr(auditFields, 'auditFields', self.get_fields(instance))
                for attr in auditFields:
                    data[attr] = str(getattr(instance, attr, 'not set'))
                self.log_it(session, 'delete', instance, data)

    def db_after_bulk_delete(self, delete_context):
        instance = delete_context.query.column_descriptions[0]['type']  # only works for single table deletes
        if delete_context.result.returns_rows:
            # Not sure exactly how after_bulk_delete is expected to work, since
            # context.result is empty, as delete statements return no results.
            for row in delete_context.result:
                data = {}
                auditFields = getattr(instance.__class__, 'Meta', None)
                auditFields = getattr(auditFields, 'auditFields', self.get_fields(instance))
                for attr in auditFields:
                    data[attr] = str(getattr(row, attr, 'not set'))  # make every value a string in audit
                self.log_it(delete_context.session, 'delete', instance, data)
        else:
            # audit what we can when we don't have individual rows to look at
            self.log_it(delete_context.session, 'delete', instance,
                        {"rowcount": delete_context.result.rowcount})

    def log_it(self, session, action, instance, data):
        if self.config['storage'] == 'log':
            self.logger.info(
                "request_id: %s, table: %s, action: %s, user id: %s, user email: %s, date: %s, data: %s"
                % (getattr(g, 'request_id', None), instance.__tablename__, action,
                   getattr(user, 'id', None), getattr(user, 'email', None),
                   datetime.datetime.now(), data))
        elif self.config['storage'] == 'db':
            audit = self.AuditModel(request_id=str(getattr(g, 'request_id', None)),
                                    table=str(instance.__tablename__),
                                    action=action,
                                    user_id=getattr(user, 'id', None),
                                    user_email=getattr(user, 'email', None),
                                    date=datetime.datetime.now(),
                                    data=data)
            session.add(audit)

    def get_fields(self, instance):
        fields = []
        for attr in class_mapper(instance.__class__).column_attrs:
            fields.append(attr.key)
        return fields
Suggested Model, if you want to store audit records in the database.
class Audit(db.Model):
    __tablename__ = 'audit'
    id = db.Column(db.Integer, primary_key=True)
    request_id = db.Column(db.Unicode(50), nullable=True, index=True, server_default=u'')
    table = db.Column(db.Unicode(50), nullable=False, index=True, server_default=u'')
    action = db.Column(db.Unicode(20), nullable=False, server_default=u'')
    user_id = db.Column(db.Integer, db.ForeignKey('user.id', ondelete='SET NULL'), nullable=True)
    user_email = db.Column(db.Unicode(255), nullable=False, server_default=u'')
    date = db.Column(db.DateTime, default=db.func.now())
    data = db.Column(JSON)  # e.g. from sqlalchemy.dialects.postgresql import JSON
In settings:
AUDIT_CONFIG = {
    "tables": ['user', 'order', 'batch']
}
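Stripped of the Flask wiring, the core of the approach is just a `before_flush` listener that snapshots changed columns. A minimal framework-free sketch (the `Item` table, its fields, and the in-memory `audit_log` list are invented for illustration):

```python
from sqlalchemy import Column, Integer, String, create_engine, event, inspect
from sqlalchemy.orm import class_mapper, declarative_base, sessionmaker

Base = declarative_base()
audit_log = []  # stand-in for the log file / audit table

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

@event.listens_for(session, "before_flush")
def audit_before_flush(session, flush_context, instances):
    # snapshot only the columns whose attribute history reports a change
    for instance in session.dirty:
        insp = inspect(instance)
        data = {attr.key: str(getattr(instance, attr.key))
                for attr in class_mapper(instance.__class__).column_attrs
                if getattr(insp.attrs, attr.key).history.has_changes()}
        if data:
            audit_log.append(("change", instance.__tablename__, data))

session.add(Item(name="widget"))
session.commit()                      # insert: session.dirty is empty, nothing logged
item = session.query(Item).first()
item.name = "gadget"
session.commit()                      # update: the change is captured pre-flush
print(audit_log)  # → [('change', 'item', {'name': 'gadget'})]
```

The full class above layers the same idea with `after_flush` for inserts (so generated primary keys are available) and `session.deleted` for deletes.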

Related

Add updated_at column in postgres table using sqlalchemy

Short Description: I want to add insertion_date and last_update columns to a table in a Postgresql database using SQLalchemy ORM. I followed the documentation and the examples on SO, but it didn't work for me.
Here is my example:
base.py
from sqlalchemy.ext.declarative.api import declarative_base
Base = declarative_base()
movie_model.py
from __future__ import annotations
import datetime
from sqlalchemy import func
from base import Base
from sqlalchemy.sql.schema import Column
from typing import Optional, Dict, Union
from sqlalchemy.sql.sqltypes import String, Date, DateTime, Integer


class Movie(Base):
    __tablename__ = 'movies'
    movie_id = Column(String, primary_key=True)
    movie_name = Column(String)
    release_date = Column(Date)
    movie_revenue = Column(Integer)
    last_update = Column(DateTime, default=datetime.datetime.now, onupdate=datetime.datetime.now)
    insertion_date = Column(DateTime, server_default=func.now())
    # __mapper_args__ = {"eager_defaults": True}

    @staticmethod
    def from_dict(movie_dict: Dict[str, Union[str, int, datetime.date]]) -> Optional[Movie]:
        if not movie_dict:
            return None
        return Movie(
            movie_id=movie_dict.get('movie_id'),
            movie_name=movie_dict.get('movie_name'),
            release_date=movie_dict.get('release_date'),
            movie_revenue=movie_dict.get('movie_revenue')
        )
db_manager.py
from movie_model import Movie
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.inspection import inspect
from typing import List


class DBManager:
    def __init__(self, session):
        self.session = session
        self.columns = Movie.__table__.columns.keys()
        self.primary_key = inspect(Movie).primary_key[0]
        self.primary_key_name = self.primary_key.name

    def add_many_on_conflict_do_update(self, movies: List[Movie]) -> None:
        # Used self.columns[:-2] to exclude the last two columns (last_update and
        # insertion_date) from the explicit insertion, so the ORM handles them on
        # its own as mentioned here:
        # https://docs.sqlalchemy.org/en/14/core/metadata.html#sqlalchemy.schema.Column.params.onupdate
        statement = insert(Movie.__table__).values(
            [{attr: getattr(movie, attr) for attr in self.columns[:-2]} for movie in movies])
        statement = statement.on_conflict_do_update(
            index_elements=[self.primary_key_name],
            set_={attr: getattr(statement.excluded, attr) for attr in self.columns[:-2]}
        )
        self.session.execute(statement)
        self.session.commit()

    def get_all(self) -> List[Movie]:
        return self.session.query(Movie).all()
main.py
from movie_model import Movie
from db_manager import DBManager
import datetime
from base import Base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session


def main():
    db_string = "postgresql+psycopg2://db_user:password@localhost:5432/db_name"
    engine = create_engine(db_string, echo=False)
    session = scoped_session(sessionmaker(bind=engine))
    Base.metadata.create_all(engine, checkfirst=True)
    movie1 = {
        'movie_id': '1',
        'movie_name': 'movie1',
        'release_date': datetime.datetime.strptime('2010-09-23', '%Y-%m-%d').date(),
        'movie_revenue': 1000
    }
    movie2 = {
        'movie_id': '2',
        'movie_name': 'movie2',
        'release_date': datetime.datetime.strptime('2010-09-23', '%Y-%m-%d').date(),
        'movie_revenue': 2000
    }
    movie3 = {
        'movie_id': '3',
        'movie_name': 'movie3',
        'release_date': datetime.datetime.strptime('2010-09-24', '%Y-%m-%d').date(),
        'movie_revenue': 3000
    }
    movie4 = {
        'movie_id': '4',
        'movie_name': 'movie4',
        'release_date': datetime.datetime.strptime('2010-09-24', '%Y-%m-%d').date(),
        'movie_revenue': 4000
    }
    movies = [movie1, movie2, movie3, movie4]
    movies_models = []
    for movie in movies:
        movies_models.append(Movie.from_dict(movie))
    db_manager = DBManager(session=session)
    db_manager.add_many_on_conflict_do_update(movies_models)
    retrieved_movies = db_manager.get_all()
    for movie in retrieved_movies:
        print(movie.last_update)


if __name__ == '__main__':
    main()
Packages:
psycopg2-binary==2.9.1
SQLAlchemy==1.3.24
Postgres version: 12.
When I run python main.py, the insertion_date is inserted properly even if I run it many times, adding a new movie each time. But last_update is only inserted the first time I add a movie, while it is expected to be updated whenever an update is applied to an already inserted movie.
I tried replacing self.columns[:-2] with self.columns[:-1] or with self.columns in the insertion statement and/or in the set_ parameter. But that ended up with the last_update column being null, or being updated every time I run main.py even when there are no updates/insertions to apply.
I also replaced onupdate with server_onupdate and datetime.datetime.now with func.now() in movie_model.py, but nothing worked.
I even added __mapper_args__ = {"eager_defaults": True} to Movie() in movie_model.py as described here, but it still didn't work.
I checked the following questions on SO but nothing worked too:
- Is possible to create Column in SQLAlchemy which is going to be automatically populated with time when it inserted/updated last time?
- onupdate not overridinig current datetime value
Any suggestions to make it work?
Edit for clarification:
What I am looking for is to make the last_update column get updated with the time of the transaction whenever a row is updated (using on_conflict_do_update).
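Worth knowing here: `Column.onupdate` applies to ORM flushes and plain `update()` statements; the SQLAlchemy docs note that the `set_` clause of `insert(...).on_conflict_do_update()` does not consult Python-side `onupdate` defaults, so the usual fix is to put the timestamp into `set_` yourself. A sketch of the adjusted statement (compiled only, no database needed; table trimmed to the relevant columns):

```python
from sqlalchemy import Column, DateTime, MetaData, String, Table, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
movies = Table(
    "movies", metadata,
    Column("movie_id", String, primary_key=True),
    Column("movie_name", String),
    Column("last_update", DateTime),
)

stmt = insert(movies).values(movie_id="1", movie_name="movie1")
stmt = stmt.on_conflict_do_update(
    index_elements=["movie_id"],
    set_={
        "movie_name": stmt.excluded.movie_name,
        # onupdate is not consulted by ON CONFLICT DO UPDATE,
        # so set the timestamp explicitly on the update branch:
        "last_update": func.now(),
    },
)
print(stmt.compile(dialect=postgresql.dialect()))
```

With `func.now()` in `set_`, `last_update` is rewritten only when the conflict branch actually fires, which matches the behaviour asked for above.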

Setting default for QuerySelectFields as part of FieldList with Flask-wtf

In a flask-wtf form, I can't seem to set the default of a QuerySelectField when it's embedded in a FieldList.
(I'm passing the instance in the database that I want to be the default, rather than the scalar.)
Code is as follows:-
Model:
class Jobs_crewing_roles(db.Model):
    __tablename__ = 'jobs_crewing_roles'
    sort_by = 'roles_description'
    roles_abbreviation = db.Column(db.String(1), primary_key=True, nullable=False)
    roles_description = db.Column(db.Text)
    roles_short_description = db.Column(db.String(10))
    roles_crewing = db.relationship('Jobs_crewing', backref='role')
Form:
def CrewingRoles():
    return lambda: Jobs_crewing_roles.query.all()


class CrewingRoleEntryForm(FlaskForm):
    role = QuerySelectField('Role', query_factory=CrewingRoles(), get_label='roles_description')


class ReviewFlightForm(FlaskForm):
    t_job_crewing_roles = FieldList(FormField(CrewingRoleEntryForm), min_entries=0)
    t_job_review_submit = SubmitField('Save Changes')
Helper function:
job.review_form = ReviewFlightForm()
roles_items = []
for p in ast.literal_eval(form.t_job_crewing_roles_dict.data):
    select_entry = CrewingRoleEntryForm()
    select_entry.role.id = "role_" + str(p['id'])
    select_entry.role.label = p['id']
    select_entry.role.default = Jobs_crewing_roles.query.filter(
        Jobs_crewing_roles.roles_abbreviation == p['role']).first()
    roles_items.append(select_entry)
job.review_form.t_jobs_crewing_roles = roles_items
(form.t_job_crewing_roles_dict.data is passed from a previous form, a stringified list of dictionaries in the form {'id': (person id), 'role': (role abbreviation in Jobs_crewing_roles table)})
The rest of the helper function is working as I get the correct id and label. But the QuerySelectField default isn't being set no matter what I try!
Edited to add:
The Flask debugger seems to think it's been set correctly, but it's not being selected in the form HTML...?:
[console ready]
>>> review_form.t_jobs_crewing_roles[0].role.default
<Job Crewing Roles Record: Engineer>
>>>

SQLAlchemy nested model creation one-liner

I'm looking to create a new object from q2, which fails because the Question class is expecting options to be a list of Option objects, and it's receiving a list of dicts instead.
So, unpacking obviously fails with a nested model.
What is the best approach to handle this? Is there something that's equivalent to the elegance of the **dict for a nested model?
main.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

import models.base
from models.question import Question
from models.option import Option

engine = create_engine('sqlite:///:memory:')
models.base.Base.metadata.create_all(engine, checkfirst=True)
Session = sessionmaker(bind=engine)
session = Session()


def create_question(q):
    # The following hard coding works:
    # q = Question(text='test text',
    #              frequency='test frequency',
    #              options=[Option(text='test option')]
    #              )
    question = Question(**q)
    session.add(question)
    session.commit()


q1 = {
    'text': 'test text',
    'frequency': 'test frequency'
}

q2 = {
    'text': 'test text',
    'frequency': 'test frequency',
    'options': [
        {'text': 'test option 123'},
    ]
}

create_question(q1)
# create_question(q2)  # FAILS
base.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
question.py
from sqlalchemy import *
from sqlalchemy.orm import relationship
from .base import Base


class Question(Base):
    __tablename__ = 'questions'
    id = Column(Integer, primary_key=True)
    text = Column(String(120), nullable=False)
    frequency = Column(String(20), nullable=False)
    active = Column(Boolean(), default=True, nullable=False)
    options = relationship('Option', back_populates='question')

    def __repr__(self):
        return "<Question(id={0}, text={1}, frequency={2}, active={3})>".format(
            self.id, self.text, self.frequency, self.active)
option.py
from sqlalchemy import *
from sqlalchemy.orm import relationship
from .base import Base


class Option(Base):
    __tablename__ = 'options'
    id = Column(Integer, primary_key=True)
    question_id = Column(Integer, ForeignKey('questions.id'))
    text = Column(String(20), nullable=False)
    question = relationship('Question', back_populates='options')

    def __repr__(self):
        return "<Option(id={0}, question_id={1}, text={2})>".format(
            self.id, self.question_id, self.text)
I liked the answer provided by @Abdou, but wanted to see if I couldn't make it a bit more generic.
I ended up coming up with the following, which should handle any nested model.
from sqlalchemy import event, inspect

@event.listens_for(Question, 'init')
@event.listens_for(Option, 'init')
def received_init(target, args, kwargs):
    for rel in inspect(target.__class__).relationships:
        rel_cls = rel.mapper.class_
        if rel.key in kwargs:
            kwargs[rel.key] = [rel_cls(**c) for c in kwargs[rel.key]]
Listens for the init event of any specified models, checks for relationships that match the kwargs passed in, and then converts those to the matching class of the relationship.
If anyone knows how to set this up so it can work on all models instead of specifying them, I would appreciate it.
Given that you need to create an Option object every time there is an options key in the dictionary passed to the create_question function, you should use dictionary comprehension to create your options before passing the result to the Question instantiator. I would rewrite the function as follows:
def create_question(q):
    # The following hard coding works:
    # q = Question(text='test text',
    #              frequency='test frequency',
    #              options=[Option(text='test option')]
    #              )
    q = dict((k, [Option(**x) for x in v]) if k == 'options' else (k, v) for k, v in q.items())
    print(q)
    question = Question(**q)
    session.add(question)
    session.commit()
The dictionary comprehension part basically checks if there is an options key in the given dictionary; and if there is one, then it creates Option objects with the values. Otherwise, it carries on as normal.
The above function generated the following:
# {'text': 'test text', 'frequency': 'test frequency'}
# {'text': 'test text', 'frequency': 'test frequency', 'options': [<Option(id=None, question_id=None, text=test option 123)>]}
I hope this helps.
For SQLAlchemy objects you can simply use Model.__dict__
Building on @Searle's answer, this avoids needing to directly list all models in the decorators, and also provides handling for when uselist=False (e.g. 1:1, many:1 relationships):
from sqlalchemy import event, inspect  # plain sqlalchemy inspect replaces Flask-SQLAlchemy's db.inspect
from sqlalchemy.orm import Mapper

@event.listens_for(Mapper, 'init')
def received_init(target, args, kwargs):
    """Allow initializing nested relationships with dict only"""
    for rel in inspect(target).mapper.relationships:
        if rel.key in kwargs:
            if rel.uselist:
                kwargs[rel.key] = [rel.mapper.class_(**c) for c in kwargs[rel.key]]
            else:
                kwargs[rel.key] = rel.mapper.class_(**kwargs[rel.key])
Possible further improvements:
add handling for if kwargs[rel.key] is a model instance (right now this fails if you pass a model instance for relationships instead of a dict)
allow relationships to be specified as None (right now requires empty lists or dicts)
source: SQLAlchemy "event.listen" for all models
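Without Flask-SQLAlchemy, the same Mapper-wide listener can be exercised with plain SQLAlchemy. A self-contained sketch (the models are trimmed to the essentials of the question above):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, event, inspect
from sqlalchemy.orm import Mapper, declarative_base, relationship

Base = declarative_base()

class Question(Base):
    __tablename__ = "questions"
    id = Column(Integer, primary_key=True)
    text = Column(String(120), nullable=False)
    options = relationship("Option", back_populates="question")

class Option(Base):
    __tablename__ = "options"
    id = Column(Integer, primary_key=True)
    question_id = Column(Integer, ForeignKey("questions.id"))
    text = Column(String(20), nullable=False)
    question = relationship("Question", back_populates="options")

@event.listens_for(Mapper, "init")
def received_init(target, args, kwargs):
    """Allow initializing nested relationships with plain dicts."""
    # mutating kwargs in place here changes what __init__ receives
    for rel in inspect(target.__class__).relationships:
        if rel.key in kwargs:
            if rel.uselist:
                kwargs[rel.key] = [rel.mapper.class_(**c) for c in kwargs[rel.key]]
            else:
                kwargs[rel.key] = rel.mapper.class_(**kwargs[rel.key])

# a nested dict now becomes a proper Option instance
q = Question(text="test text", options=[{"text": "test option 123"}])
assert isinstance(q.options[0], Option)
```

Because `Option(**c)` fires the same listener, arbitrarily nested relationship dicts are converted recursively.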

How to use make_transient() to duplicate an SQLAlchemy mapped object?

I know the question how to duplicate or copy a SQLAlchemy mapped object was asked a lot of times. The answer always depends on the needs or how "duplicate" or "copy" is interpreted.
This is a specialized version of the question because I got the tip to use make_transient() for that.
But I have some problems with that. I don't really know how to handle the primary key (PK) here. In my use cases the PK is always autogenerated by SQLA (or the DB in background). But this doesn't happen with a new duplicated object.
The code is a little bit pseudo.
import sqlalchemy as sa
import sqlalchemy.orm as sao
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.session import make_transient
from sqlalchemy.orm.util import has_identity

_engine = sa.create_engine('postgres://...')
_session = sao.sessionmaker(bind=_engine)()
_Base = declarative_base()


class MachineData(_Base):
    __tablename__ = 'Machine'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)


class TUnitData(_Base):
    __tablename__ = 'TUnit'
    _oid = sa.Column('oid', sa.Integer, primary_key=True)
    _machine_fk = sa.Column('machine', sa.Integer, sa.ForeignKey('Machine.oid'))
    _machine = sao.relationship("MachineData")

    def __str__(self):
        return '{}.{}: oid={}(hasIdentity={}) machine={}(fk={})' \
            .format(type(self), id(self),
                    self._oid, has_identity(self),
                    self._machine, self._machine_fk)
if __name__ == '__main__':
    # any query resulting in one persistent object
    obj = GetOneMachineDataFromDatabase()

    # there is a valid 'oid', has_identity == True
    print(obj)

    # should i call expunge() first?
    # remove the association with any session
    # and remove its "identity key"
    make_transient(obj)

    # 'oid' is still there but has_identity == False
    print(obj)

    # THIS causes an error because the 'oid' still exists
    # and is not newly auto-generated (which is what I understood
    # should happen)
    _session.add(obj)
    _session.commit()
After making an object instance transient you have to remove its object id. Without an object id you can add it to the database again, which will generate a new object id for it.
if __name__ == '__main__':
    # the persistent object with an identity in the database
    obj = GetOneMachineDataFromDatabase()

    # make it transient
    make_transient(obj)

    # remove the identity / object-id
    obj._oid = None

    # adding the object again generates a new identity / object-id
    _session.add(obj)

    # this includes a flush() and creates a new primary key
    _session.commit()
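Wrapped up end to end, the detach-and-reset recipe can be run against SQLite. A sketch with the model reduced to one data column (names here are illustrative, not the asker's real schema):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, make_transient, sessionmaker

Base = declarative_base()

class Machine(Base):
    __tablename__ = "machine"
    oid = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Machine(name="press"))
session.commit()

obj = session.query(Machine).first()
make_transient(obj)   # drop the session association and the identity key
obj.oid = None        # clear the PK so the database generates a fresh one
session.add(obj)      # re-adding now performs an INSERT, not an UPDATE
session.commit()

print(session.query(Machine).count())  # → 2
```

The already-loaded column values (here `name`) survive `make_transient()`, so the second row is a duplicate of the first with a new primary key.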

SQLAlchemy audit logging; how to handle deletes?

I'm using a modified version of the versioning code example that comes with SQLAlchemy to record a user id and date on changes. However, I also want to modify it so deletes are done by setting an is_deleted flag instead of running an actual SQL DELETE. My problem is I'm not sure how to capture the delete and replace it with an update.
Here's what I have so far:
''' http://docs.sqlalchemy.org/en/rel_0_8/orm/examples.html?highlight=versioning#versioned-objects '''
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import mapper, class_mapper, attributes, object_mapper, scoping
from sqlalchemy.orm.session import Session
from sqlalchemy.orm.exc import UnmappedClassError, UnmappedColumnError
from sqlalchemy import Table, Column, ForeignKeyConstraint, DateTime, String, Boolean
from sqlalchemy import event
from sqlalchemy.orm.properties import RelationshipProperty
from datetime import datetime
from sqlalchemy.schema import ForeignKey
from sqlalchemy.sql.expression import false
def col_references_table(col, table):
for fk in col.foreign_keys:
if fk.references(table):
return True
return False
def _history_mapper(local_mapper):
    cls = local_mapper.class_

    # set the "active_history" flag on column-mapped attributes so that the
    # old version of the info is always loaded
    # (currently sets it on all attributes)
    for prop in local_mapper.iterate_properties:
        getattr(local_mapper.class_, prop.key).impl.active_history = True

    super_mapper = local_mapper.inherits
    super_history_mapper = getattr(cls, '__history_mapper__', None)

    polymorphic_on = None
    super_fks = []
    if not super_mapper or local_mapper.local_table is not super_mapper.local_table:
        cols = []
        for column in local_mapper.local_table.c:
            if column.name.startswith('version_'):
                continue
            col = column.copy()
            col.unique = False
            if super_mapper and col_references_table(column, super_mapper.local_table):
                super_fks.append((col.key, list(super_history_mapper.local_table.primary_key)[0]))
            cols.append(col)
            if column is local_mapper.polymorphic_on:
                polymorphic_on = col

        if super_mapper:
            super_fks.append(('version_datetime', super_history_mapper.base_mapper.local_table.c.version_datetime))
            super_fks.append(('version_userid', super_history_mapper.base_mapper.local_table.c.version_userid))
            super_fks.append(('version_deleted', super_history_mapper.base_mapper.local_table.c.version_deleted))
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}}))
        else:
            cols.append(Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}}))
            cols.append(Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}}))

        if super_fks:
            cols.append(ForeignKeyConstraint(*zip(*super_fks)))

        table = Table(local_mapper.local_table.name + '_history', local_mapper.local_table.metadata,
            *cols
        )
    else:
        # single table inheritance. take any additional columns that may have
        # been added and add them to the history table.
        for column in local_mapper.local_table.c:
            if column.key not in super_history_mapper.local_table.c:
                col = column.copy()
                col.unique = False
                super_history_mapper.local_table.append_column(col)
        table = None

    if super_history_mapper:
        bases = (super_history_mapper.class_,)
    else:
        bases = local_mapper.base_mapper.class_.__bases__
    versioned_cls = type.__new__(type, "%sHistory" % cls.__name__, bases, {})

    m = mapper(
        versioned_cls,
        table,
        inherits=super_history_mapper,
        polymorphic_on=polymorphic_on,
        polymorphic_identity=local_mapper.polymorphic_identity
    )
    cls.__history_mapper__ = m

    if not super_history_mapper:
        local_mapper.local_table.append_column(
            Column('version_datetime', DateTime, default=datetime.now, nullable=False, primary_key=False, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_datetime", local_mapper.local_table.c.version_datetime)
        local_mapper.local_table.append_column(
            Column('version_userid', String(60), ForeignKey("user.login"), nullable=True, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_userid", local_mapper.local_table.c.version_userid)
        local_mapper.local_table.append_column(
            Column('version_deleted', Boolean, server_default=false(), nullable=False, info={'colanderalchemy': {'exclude': True}})
        )
        local_mapper.add_property("version_deleted", local_mapper.local_table.c.version_deleted)
class Versioned(object):
    @declared_attr
    def __mapper_cls__(cls):
        def map(cls, *arg, **kw):
            mp = mapper(cls, *arg, **kw)
            _history_mapper(mp)
            return mp
        return map


def versioned_objects(iter):
    for obj in iter:
        if hasattr(obj, '__history_mapper__'):
            yield obj


def create_version(obj, session, deleted=False):
    obj_mapper = object_mapper(obj)
    history_mapper = obj.__history_mapper__
    history_cls = history_mapper.class_

    obj_state = attributes.instance_state(obj)

    attr = {}

    obj_changed = False

    for om, hm in zip(obj_mapper.iterate_to_root(), history_mapper.iterate_to_root()):
        if hm.single:
            continue

        for hist_col in hm.local_table.c:
            if hist_col.key.startswith('version_'):
                continue

            obj_col = om.local_table.c[hist_col.key]

            # get the value of the attribute based on the MapperProperty
            # related to the mapped column. this will allow usage of
            # MapperProperties that have a different keyname than that of
            # the mapped column.
            try:
                prop = obj_mapper.get_property_by_column(obj_col)
            except UnmappedColumnError:
                # in the case of single table inheritance, there may be
                # columns on the mapped table intended for the subclass only.
                # the "unmapped" status of the subclass column on the
                # base class is a feature of the declarative module as of sqla 0.5.2.
                continue

            # expired object attributes and also deferred cols might not be in the
            # dict. force it to load no matter what by using getattr().
            if prop.key not in obj_state.dict:
                getattr(obj, prop.key)

            a, u, d = attributes.get_history(obj, prop.key)

            if d:
                attr[hist_col.key] = d[0]
                obj_changed = True
            elif u:
                attr[hist_col.key] = u[0]
            else:
                # if the attribute had no value.
                attr[hist_col.key] = a[0]
                obj_changed = True

    if not obj_changed:
        # not changed, but we have relationships. OK
        # check those too
        for prop in obj_mapper.iterate_properties:
            if isinstance(prop, RelationshipProperty) and \
                    attributes.get_history(obj, prop.key).has_changes():
                obj_changed = True
                break

    if not obj_changed and not deleted:
        return

    attr['version_datetime'] = obj.version_datetime
    attr['version_userid'] = obj.version_userid
    attr['version_deleted'] = obj.version_deleted
    hist = history_cls()
    for key, value in attr.items():
        setattr(hist, key, value)
    session.add(hist)
    obj.version_datetime = datetime.now()
    obj.version_userid = getattr(session, 'userid', None)
    obj.version_deleted = deleted


def versioned_session(session):
    @event.listens_for(session, 'before_flush')
    def before_flush(session, flush_context, instances):
        for obj in versioned_objects(session.deleted):
            create_version(obj, session, deleted=True)
        for obj in versioned_objects(session.dirty):
            create_version(obj, session)


def add_userid_to_session(userid, session):
    if isinstance(session, scoping.scoped_session):
        thread_local_session = session.registry()
        thread_local_session.userid = userid
    elif isinstance(session, Session):
        session.userid = userid
    else:
        raise TypeError("Not sure how to add the userid into session of type {}".format(type(session)))
And here's how I'm using it (all non-essential parts have been cut out):
Base = declarative_base()

class User(Versioned, Base):
    __tablename__ = 'user'
    login = Column(String(60), primary_key=True, nullable=False)
    groups = association_proxy('user_to_groups', 'group', creator=lambda group: UserToGroup(group_name=group.name))

    def __init__(self, login, groups=None):
        self.login = login
        if groups:
            for group in groups:
                self.groups.append(group)

class Group(Versioned, Base):
    __tablename__ = 'group'
    name = Column(String(100), primary_key=True, nullable=False)
    description = Column(String(100), nullable=True)
    users = association_proxy('group_to_user', 'user', creator=lambda user: UserToGroup(user_login=user.login))

    def __eq__(self, other):
        return self.name == other.name

class UserToGroup(Versioned, Base):
    __tablename__ = 'user_to_group'
    user_login = Column(String(60), ForeignKey(User.login), primary_key=True)
    group_name = Column(String(100), ForeignKey(Group.name), primary_key=True)
    user = relationship(User, backref=backref('user_to_groups', cascade='all, delete-orphan'))
    group = relationship(Group, backref=backref('group_to_user', cascade='all, delete-orphan'))

session.configure(bind=engine)
add_userid_to_session("test", session.registry())
versioned_session(session)

user = session.query(User).filter(User.login == 'test').one()
user.groups.remove(Group(name="g:admin"))
Before running that code the database currently has one user called 'test' and two groups that the user is attached to called 'g:admin' and 'g:superadmin'.
What it currently does is copy the existing user_to_group entry for the 'test' => 'g:admin' mapping into the history table, then delete the entry from user_to_group.
What I'd like it to do is copy the value to the history table and then update the entry in user_to_group to have version_deleted set to True.
I'm thinking the way to do that is to snatch the entry out of session.deleted (that's why I changed the order from the original code), modify it, and put it into session.dirty. I'm just not sure what the safest way of doing that is.
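That move-from-deleted-to-dirty idea can be tried out in isolation. The following standalone sketch (the `Widget` model and `SessionFactory` are invented names, not part of the code above; assumes SQLAlchemy 1.4+) relies on the fact that calling session.add() on an object pending deletion, from inside a before_flush hook, cancels the pending DELETE, so the flag change goes out as an UPDATE instead:

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Widget(Base):  # hypothetical model, not one of the classes above
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    is_deleted = Column(Boolean, default=False, nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

@event.listens_for(SessionFactory, 'before_flush')
def soft_delete(session, flush_context, instances):
    # iterate over a copy: session.add() mutates session.deleted
    for obj in list(session.deleted):
        if isinstance(obj, Widget):
            obj.is_deleted = True
            # re-adding a pending-delete object cancels the DELETE;
            # the modified attribute is then flushed as an UPDATE
            session.add(obj)

session = SessionFactory()
session.add(Widget(name='w1'))
session.commit()

session.delete(session.query(Widget).first())
session.commit()

survivors = session.query(Widget).all()
```

After the delete-then-commit, the row is still present with is_deleted set, rather than gone.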
Another issue (which will likely require another question) is how to detect relationships which are covered in another table as currently the system makes a copy of the 'user' row into the history table and then updates the version information despite no real changes being made to the row.
EDIT: I've decided to do things a bit differently, but I still have a problem. Instead of keeping a "deleted" flag in the live tables, I actually delete the content and record another history entry indicating when the deletion occurred. If I delete an object directly, this works correctly. If I delete an object off of a relationship, I can't do it properly: a DELETE gets issued to the relationship table to remove the link, but I can't figure out how to detect that deletion in the create_version method.
For example, if I do:
group = session.query(Group).filter(Group.name=='g:admin').one()
group.users.remove(group.users[0])
No objects are placed in session.deleted. I can detect some sort of deletion via attributes.get_history(obj, prop.key), but it indicates both a deletion of a UserToGroup object from Group (which I want to detect and record a history item for) and a deletion of a Group from the UserToGroup object (which I don't want to act on, because the actual Group is not being deleted).
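That asymmetry can be explored in a standalone sketch (the `Parent`/`Child` models are invented stand-ins for Group/UserToGroup, not the classes above; assumes SQLAlchemy 1.4+). It walks the relationships of each dirty object inside before_flush and records removals only from collection-side (uselist) relationships, which filters out the mirror-image scalar-side change produced by the backref:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import (attributes, declarative_base, object_mapper,
                            relationship, sessionmaker)

Base = declarative_base()

class Parent(Base):  # invented stand-in for Group
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', backref='parent',
                            cascade='all, delete-orphan')

class Child(Base):  # invented stand-in for UserToGroup
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

removed_log = []

@event.listens_for(SessionFactory, 'before_flush')
def log_collection_removals(session, flush_context, instances):
    for obj in session.dirty:
        for prop in object_mapper(obj).relationships:
            if not prop.uselist:
                # skip the scalar side: the backref mirrors the removal as
                # "Parent removed from Child.parent", which we ignore
                continue
            for removed in attributes.get_history(obj, prop.key).deleted:
                removed_log.append((type(obj).__name__, prop.key, removed))

session = SessionFactory()
parent = Parent(children=[Child(), Child()])
session.add(parent)
session.commit()

parent.children.remove(parent.children[0])
session.commit()  # the orphaned Child row is deleted during this flush
```

Only the collection-side removal ('Parent', 'children', the removed Child) lands in removed_log; the orphaned child is deleted during the flush itself, which is why it never shows up in session.deleted inside before_flush.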
