Is there a way to perform validation on an object after (or as) the properties are set but before the session is committed?
For instance, I have a domain model Device that has a mac property. I would like to ensure that the mac property contains a valid and sanitized mac value before it is added to or updated in the database.
It looks like the Pythonic approach is to expose most things as plain attributes or properties (and SQLAlchemy follows that convention). If I had coded this in PHP or Java, I would probably have opted for getter/setter methods to protect the data and give me the flexibility to handle this in the domain model itself.
public function mac()
{
    return $this->mac;
}

public function setMac($mac)
{
    return $this->mac = $this->sanitizeAndValidateMac($mac);
}

public function sanitizeAndValidateMac($mac)
{
    if ( ! preg_match(self::$VALID_MAC_REGEX, $mac) ) {
        throw new InvalidMacException($mac);
    }
    return strtolower($mac);
}
What is a Pythonic way to handle this type of situation using SQLAlchemy?
(While I'm aware that validation should also be handled elsewhere (i.e., in the web framework), I would like to figure out how to handle some of these domain-specific validation rules, as they are bound to come up frequently.)
UPDATE
I know that I could use property to do this under normal circumstances. The key part is that I am using SQLAlchemy with these classes. I do not understand exactly how SQLAlchemy performs its magic, but I suspect that creating and overriding these properties on my own could lead to unstable and/or unpredictable results.
You can add data validation inside your SQLAlchemy classes using the @validates() decorator.
From the docs - Simple Validators:
An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different.
from sqlalchemy.orm import validates

class EmailAddress(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        # you can use assertions, such as
        # assert '@' in address
        # or raise an exception:
        if '@' not in address:
            raise ValueError('Email address must contain an @ sign.')
        return address
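Applied to the Device / mac case from the question, a minimal sketch (the table/column details and the MAC regex are assumptions, and Base is the usual declarative base):

import re

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import validates

MAC_RE = re.compile(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$')   # assumed colon-separated format

class Device(Base):
    __tablename__ = 'device'

    id = Column(Integer, primary_key=True)
    mac = Column(String(17))

    @validates('mac')
    def validate_mac(self, key, value):
        value = value.strip().lower()          # sanitize first
        if not MAC_RE.match(value):
            raise ValueError('Invalid MAC address: %r' % value)
        return value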
Yes. This can be done nicely using a MapperExtension.
# uses SQLAlchemy mapper hooks to run model-specific validators before update and insert
import sqlalchemy.orm.interfaces

class ValidationExtension(sqlalchemy.orm.interfaces.MapperExtension):

    def before_update(self, mapper, connection, instance):
        """Not every instance passed here is actually updated in the db, see
        http://www.sqlalchemy.org/docs/reference/orm/interfaces.html?highlight=mapperextension#sqlalchemy.orm.interfaces.MapperExtension.before_update"""
        instance.validate()
        return sqlalchemy.orm.interfaces.MapperExtension.before_update(self, mapper, connection, instance)

    def before_insert(self, mapper, connection, instance):
        instance.validate()
        return sqlalchemy.orm.interfaces.MapperExtension.before_insert(self, mapper, connection, instance)

sqlalchemy.orm.mapper(model, table, extension=ValidationExtension(), **mapper_args)
You may want to check the before_update reference, because not every instance passed to it is actually updated in the database.
"It looks like the Pythonic approach is to do most things as properties"
It varies, but that's close.
"If I had coded this in PHP or Java, I would probably have opted to create getter/setter methods..."
Good. That's Pythonic enough. Your getter and setter functions are bound up in a property; that's pretty good.
What's the question?
Are you asking how to spell property?
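If so, a minimal sketch of the MAC example as a property (plain Python, independent of SQLAlchemy; sanitize_and_validate_mac is assumed to be a port of the question's helper method):

class Device(object):
    @property
    def mac(self):
        return self._mac

    @mac.setter
    def mac(self, value):
        # delegate to the same sanitize-and-validate helper as in the question
        self._mac = self.sanitize_and_validate_mac(value)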
However, "transparent validation" -- if I read your example code correctly -- may not really be all that good an idea.
Your model and your validation should probably be kept separate. It's common to have multiple validations for a single model. For some users, fields are optional, fixed or not used; this leads to multiple validations.
You'll be happier following the Django design pattern of using a Form for validation, separate from the model.
I've been trying to implement an 'abstract' schema class that will automatically convert values in CamelCase (serialized) to snake_case (deserialized).
import marshmallow

class CamelCaseSchema(marshmallow.Schema):

    @marshmallow.pre_load
    def camel_to_snake(self, data):
        return {
            utils.CaseConverter.camel_to_snake(key): value
            for key, value in data.items()
        }

    @marshmallow.post_dump
    def snake_to_camel(self, data):
        return {
            utils.CaseConverter.snake_to_camel(key): value
            for key, value in data.items()
        }
While using something like this works nicely, it does not achieve everything applying load_from and dump_to to a field does. Namely, it fails to provide correct field names when there's an issue with deserialization. For instance, I get:
{'something_id': [u'Not a valid integer.']} instead of {'somethingId': [u'Not a valid integer.']}.
While I can post-process these emitted errors, this seems like an unnecessary coupling that I wish to avoid if I'm to make the use of schema fully transparent.
Any ideas? I tried tackling the metaclasses involved, but the complexity was a bit overwhelming and everything seemed exceptionally ugly.
You're using marshmallow 2. Marshmallow 3 is now out and I recommend using it. My answer will apply to marshmallow 3.
In marshmallow 3, load_from / dump_to have been replaced by a single attribute: data_key.
You'd need to alter data_key in each field when instantiating the schema. This will happen after field instantiation but I don't think it matters.
You want to do that ASAP when the schema is instantiated to avoid inconsistency issues. The right moment to do that would be in the middle of Schema._init_fields, before the data_key attributes are checked for consistency. But duplicating this method would be a pity. Besides, due to the nature of the camel/snake case conversion the consistency checks can be applied before the conversion anyway.
And since _init_fields is private API, I'd recommend doing the modification at the end of __init__.
from marshmallow import Schema

class CamelCaseSchema(Schema):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        for field_name, field in self.fields.items():
            field.data_key = utils.CaseConverter.snake_to_camel(field_name)
I didn't try that but I think it should work.
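For instance, with a hypothetical field (the field names here are invented, and utils.CaseConverter is assumed to exist as in the question):

from marshmallow import fields

class SomethingSchema(CamelCaseSchema):
    something_id = fields.Integer()

schema = SomethingSchema()
schema.dump({'something_id': 42})     # -> {'somethingId': 42}
schema.load({'somethingId': 'oops'})  # raises ValidationError keyed by 'somethingId'

Since data_key is set on every field, marshmallow 3 reports load errors under the external (camelCase) name, which is what the question was after.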
I have an IPv4Manage model, and in it a vlanedipv4network field:
class IPv4Manage(models.Model):
    ...
    vlanedipv4network = models.ForeignKey(
        to=VlanedIPv4Network, related_name="ipv4s", on_delete=models.xxx, null=True)
As we know, for the on_delete parameter we generally pass one of the models.* callables, such as models.CASCADE.
Is it possible to pass a custom function there instead? I want to run some other logic at that point.
The choices for on_delete can be found in django/db/models/deletion.py
For example, models.SET_NULL is implemented as:
def SET_NULL(collector, field, sub_objs, using):
    collector.add_field_update(field, None, sub_objs)
And models.CASCADE (which is slightly more complicated) is implemented as:
def CASCADE(collector, field, sub_objs, using):
    collector.collect(sub_objs, source=field.remote_field.model,
                      source_attr=field.name, nullable=field.null)
    if field.null and not connections[using].features.can_defer_constraint_checks:
        collector.add_field_update(field, None, sub_objs)
So, if you figure out what those arguments are, you should be able to define your own function to pass as the on_delete argument for model fields. collector is most likely an instance of Collector (defined in the same file; I'm not sure exactly what it's for), field is most likely the model field being deleted, sub_objs is likely the set of instances that relate to the object through that field, and using denotes the database being used.
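For instance, here is a hedged sketch of a custom callable that mimics SET_NULL but runs extra logic first; the signature mirrors the built-ins above, and log_removal is a hypothetical helper:

def SET_NULL_AND_LOG(collector, field, sub_objs, using):
    for obj in sub_objs:
        log_removal(obj)                      # hypothetical side effect
    # then do exactly what models.SET_NULL does
    collector.add_field_update(field, None, sub_objs)

class IPv4Manage(models.Model):
    vlanedipv4network = models.ForeignKey(
        to=VlanedIPv4Network, related_name="ipv4s",
        on_delete=SET_NULL_AND_LOG, null=True)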
There are also alternatives for custom deletion logic, in case overriding on_delete is a bit overkill for you.
The post_delete and pre_delete signals allow you to define custom logic that runs after or before an instance is deleted:
from django.db.models.signals import post_delete

def delete_ipv4manage(sender, instance, using, **kwargs):
    print('{instance} was deleted'.format(instance=str(instance)))

post_delete.connect(delete_ipv4manage, sender=IPv4Manage)
And lastly, you can override the delete() method of the Model/QuerySet; however, be aware of the caveats with bulk deletes when using this method:
Overridden model methods are not called on bulk operations
Note that the delete() method for an object is not necessarily called when deleting objects in bulk using a QuerySet or as a result of a cascading delete. To ensure customized delete logic gets executed, you can use pre_delete and/or post_delete signals.
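A hedged sketch of the instance-level override (per the quote above, bulk QuerySet.delete() will still bypass it):

class IPv4Manage(models.Model):
    # ... fields as in the question ...

    def delete(self, *args, **kwargs):
        # custom logic before the row is removed
        print('{} is about to be deleted'.format(self))
        return super().delete(*args, **kwargs)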
Another useful solution is to use models.SET(), where you can pass a function (deleted_guest in the example below):
guest = models.ForeignKey('Guest', on_delete=models.SET(deleted_guest))
and the function deleted_guest is
DELETED_GUEST_EMAIL = 'deleted-guest@introtravel.com'

def deleted_guest():
    """Used for setting the guest field of a booking when the guest is deleted."""
    from intro.models import Guest
    from django.conf import settings
    deleted_guest, created = Guest.objects.get_or_create(
        first_name='Deleted',
        last_name='Guest',
        country=settings.COUNTRIES_FIRST[0],
        email=DELETED_GUEST_EMAIL,
        gender='M')
    return deleted_guest
You can't pass any parameters, and you have to be careful with circular imports. In my case I am just setting a filler record, so the parent model has a predefined guest to represent one that has been deleted. With the new GDPR rules we need to be able to delete guest information.
CASCADE and PROTECT etc are in fact functions, so you should be able to inject your own logic there. However, it will take a certain amount of inspection of the code to figure out exactly how to get the effect you're looking for.
Depending what you want to do it might be relatively easy, for example the PROTECT function just raises an exception:
def PROTECT(collector, field, sub_objs, using):
    raise ProtectedError(
        "Cannot delete some instances of model '%s' because they are "
        "referenced through a protected foreign key: '%s.%s'" % (
            field.remote_field.model.__name__, sub_objs[0].__class__.__name__, field.name
        ),
        sub_objs
    )
However if you want something more complex you'd have to understand what the collector is doing, which is certainly discoverable.
See the source for django.db.models.deletion to get started.
There is nothing stopping you from adding your own logic. However, you need to consider multiple factors including compatibility with the database that you are using.
For most use cases, the out of the box logic is good enough if your database design is correct. Please check out your available options here https://docs.djangoproject.com/en/2.0/ref/models/fields/#django.db.models.ForeignKey.on_delete.
I'm working on a project in SQLAlchemy. I've got a Command class which has custom serialization/deserialization methods called toBinArray() and fromBinArray(bytes). I use them for TCP communication (I don't want to use pickle because my functions produce smaller outputs).
Command has several subclasses, let's call them CommandGet, CommandSet, etc. They have additional methods and attributes, and they redefine the serialization methods to keep track of their own attributes. I keep all of them in one table using the polymorphic_identity mechanism.
The problem is that there are a lot of subclasses and each has different attributes. I previously wrote a mapping for every one of them, but that way the table ends up with a huge number of columns.
I would like to write a mechanism that serializes every instance (using self.toBinArray()) into the attribute self._bin_array (stored in a Binary column) before every write to the DB, and loads the attributes (using self.fromBinArray(value)) after every load of an instance from the DB.
I have already found the answer to part of my question: I can call self.fromBinArray(self._bin_array) in a function decorated with @orm.reconstructor. It is inherited by every Command subclass and executes the proper inherited version of fromBinArray(). My question is how to automate serialization when writing to the DB (I know I can set self._bin_array manually, but that's very troublesome)?
P.S. Part of my code, my main class:
class Command(Base):
    __tablename__ = "commands"

    dbid = Column(Integer, Sequence("commands_seq"), primary_key=True)
    cmd_id = Column(SmallInteger)
    instance_dbid = Column(Integer, ForeignKey("instances.dbid"))
    type = Column(String(20))
    _bin_array = Column(Binary)

    __mapper_args__ = {
        "polymorphic_on": type,
        "polymorphic_identity": "Command",
    }

    @orm.reconstructor
    def init_on_load(self):
        self.fromBinArray(self._bin_array)

    def fromBinArray(self, b):
        (...)

    def toBinArray(self):
        (...)
EDIT: I've found a solution (below, in my answer), but are there any other solutions? Maybe some shortcut to attach the event-listening function inside the class body?
It turns out the solution was simpler than I expected: you need to use an event listener for the before_insert (and/or before_update) event. I found information (source) that
reconstructor() is a shortcut into a larger system of “instance level”
events, which can be subscribed to using the event API - see
InstanceEvents for the full API description of these events.
And that gave me the clue:
from sqlalchemy import event

@event.listens_for(Command, 'before_insert', propagate=True)
def serialize_before_insert(mapper, connection, target):
    print("serialize_before_insert")
    target._bin_array = target.toBinArray()
You can also use the event.listen() function to "bind" an event listener to the class, but I personally prefer the decorator way. It's very important to add propagate=True in the declaration so subclasses inherit the listener!
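If you also want updates re-serialized (the before_update event mentioned above), the same pattern should work:

@event.listens_for(Command, 'before_update', propagate=True)
def serialize_before_update(mapper, connection, target):
    target._bin_array = target.toBinArray()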
I'm working on a website based on Flask and Flask-SQLAlchemy with MySQL. I have a handful of feeds; each feed stores a few pieces of data, but each also needs its own update() function.
At first, I used MySQL-python (with raw SQL) to store the data, and feeds lived in a plugin system where each feed overrode an update() function to import its data in its own way.
Now I have switched to Flask-SQLAlchemy and added a Feed model to the database, since the SQLAlchemy ORM helps, but I'm stuck on how to handle the update() function. The options I see are:
Keep the plugin system in parallel with the database model, but I think that's impractical/ineffective.
Extend the model class; I'm not sure if that's possible, e.g. FeedOne(Feed) would represent the item(name="one") feed only.
Make a single update() function handle all feeds, by using if self.name == "" statements.
Added some code bits.
Feed model:
class Feed(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    datapieces = db.relationship('Datapiece', backref='feed', lazy='dynamic')
update() function
def update(self):
    data = parsedata(self.data_source)
    for item in data.items:
        new_datapiece = Datapiece(feed=self.id, name=item.name, value=item.value)
        db.session.add(new_datapiece)
    db.session.commit()
What I hope to achieve in option 2 is:
for feed in Feed.query.all():
feed.update()
And every feed will use its own class update().
Extending the class and adding an .update() method is exactly how option 2 is supposed to work.
I don't see any problem with it (and I'm using that style of coding with Flask/SQLAlchemy all the time).
And if you (can) omit the dynamic lazy attribute you could also do a thing like:
self.datapieces.append(new_datapiece)
in your Feed's update function.
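A minimal sketch of option 2 using single-table inheritance, assuming the existing name column can double as the polymorphic discriminator (the plugin-specific parsing is elided):

class Feed(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    datapieces = db.relationship('Datapiece', backref='feed', lazy='dynamic')

    __mapper_args__ = {'polymorphic_on': name, 'polymorphic_identity': 'feed'}

    def update(self):
        raise NotImplementedError

class FeedOne(Feed):
    __mapper_args__ = {'polymorphic_identity': 'one'}

    def update(self):
        # feed-specific import logic for the "one" feed goes here
        pass

# Feed.query.all() then yields FeedOne instances for rows whose name is 'one',
# so feed.update() dispatches to the subclass implementation.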
I have a complex network of objects being spawned from a sqlite database using sqlalchemy ORM mappings. I have quite a few deeply nested loops:
for parent in owner.collection:
    for child in parent.collection:
        for foo in child.collection:
            # do lots of calcs with foo.property
            ...
My profiling is showing me that the sqlalchemy instrumentation is taking a lot of time in this use case.
The thing is: I don't ever change the object model (mapped properties) at runtime, so once they are loaded I don't NEED the instrumentation, or indeed any sqlalchemy overhead at all. After much research, I'm thinking I might have to clone a 'pure python' set of objects from my already loaded 'instrumented objects', but that would be a pain.
Performance is really crucial here (it's a simulator), so maybe writing those layers as C extensions using sqlite api directly would be best. Any thoughts?
If you reference a single attribute of a single instance lots of times, a simple trick is to store it in a local variable.
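For example, something along these lines (names follow the pseudocode above):

for foo in child.collection:
    p = foo.property               # one instrumented lookup instead of one per use
    # ... lots of calcs using p instead of foo.property ...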
If you want a way to create cheap pure python clones, share the dict object with the original object:
class CheapClone(object):
    def __init__(self, original):
        self.__dict__ = original.__dict__
Creating a copy like this costs about half of the instrumented attribute access and attribute lookups are as fast as normal.
There might also be a way to have the mapper create instances of an uninstrumented class instead of the instrumented one. If I have some time, I might take a look how deeply ingrained is the assumption that populated instances are of the same type as the instrumented class.
Found a quick and dirty way that seems to at least somewhat work on 0.5.8 and 0.6. I didn't test it with inheritance or other features that might interact badly. Also, this touches some non-public APIs, so beware of breakage when changing versions.
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.attributes import ClassManager, instrumentation_registry

class ReadonlyClassManager(ClassManager):
    """Enables configuring a mapper to return instances of uninstrumented
    classes instead. To use, add a readonly_type attribute referencing the
    desired class to use instead of the instrumented one."""

    def __init__(self, class_):
        ClassManager.__init__(self, class_)
        self.readonly_version = getattr(class_, 'readonly_type', None)
        if self.readonly_version:
            # default instantiation logic doesn't know to install finders
            # for our alternate class
            instrumentation_registry._dict_finders[self.readonly_version] = self.dict_getter()
            instrumentation_registry._state_finders[self.readonly_version] = self.state_getter()

    def new_instance(self, state=None):
        if self.readonly_version:
            instance = self.readonly_version.__new__(self.readonly_version)
            self.setup_instance(instance, state)
            return instance
        return ClassManager.new_instance(self, state)

Base = declarative_base()
Base.__sa_instrumentation_manager__ = ReadonlyClassManager
Usage example:
class ReadonlyFoo(object):
    pass

class Foo(Base, ReadonlyFoo):
    __tablename__ = 'foo'

    id = Column(Integer, primary_key=True)
    name = Column(String(32))

    readonly_type = ReadonlyFoo

assert type(session.query(Foo).first()) is ReadonlyFoo
You should be able to disable lazy loading on the relationships in question and sqlalchemy will fetch them all in a single query.
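For example, a hedged sketch using per-query eager loading (class and relationship names follow the pseudocode above, so adjust them to your models; on older SQLAlchemy versions the string-based joinedload forms apply instead):

from sqlalchemy.orm import joinedload

owners = (
    session.query(Owner)
    .options(
        joinedload(Owner.collection)
        .joinedload(Parent.collection)
        .joinedload(Child.collection)
    )
    .all()
)

Alternatively, the eager loading can be configured on the relationships themselves, e.g. relationship(..., lazy='joined').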
Try using a single query with JOINs instead of the python loops.