Python 3.x - Data Annotation Validation Equivalent

I am trying to find a way to implement the same concept as .NET's data annotation validation in Python. It would look something like the following:
class MyClass:
    @property
    def Message(self):
        return self._message

    @Message.setter
    @MaxValue(233)
    def Message(self, value):
        self._message = value
I have tried different approaches but without success. I would like to get access to the "value" argument in order to apply a specific validation to it.

Does this help you?
This way you can write Python decorators that take additional arguments:
def MaxValue(maxValue):
    def wrapFunction(function):
        def replacedMaxValueFunction(self, value):
            assert value <= maxValue
            return function(self, value)
        replacedMaxValueFunction.__name__ = function.__name__
        return replacedMaxValueFunction
    return wrapFunction
I do not know whether it matches the C# behaviour exactly, but hopefully it does the checking you desire. So now you can do this:
>>> @MaxValue(123)
def f(self, value):
    print(value)

>>> f(1, 123)
123
>>> f(1, 124)
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    f(1, 124)
  File "<pyshell#1>", line 4, in replacedMaxValueFunction
    assert value <= maxValue
AssertionError
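Tying this back to the property from the question, a minimal sketch (reusing the MaxValue decorator defined above; the class and attribute names are just the question's placeholders) might look like this:
class MyClass:
    def __init__(self):
        self._message = 0

    @property
    def Message(self):
        return self._message

    @Message.setter
    @MaxValue(233)            # MaxValue is the decorator defined above
    def Message(self, value):
        self._message = value

obj = MyClass()
obj.Message = 100             # passes the check
obj.Message = 300             # raises AssertionError from MaxValue
The setter decorator is applied last, so property assignment goes through the wrapped, validating function before the attribute is actually set.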

Related

Python: Overload Instance Method and Class Method/ Access Instance Variable from Class Method

I was wondering what the best way to implement the following design would be in Python:
class Executor:
    def __init__(self):
        self.val = 5

    def action(self):
        self.action(self.val)

    @classmethod
    def action(cls, val):
        print(f"Val is: {val}")
I want to be able to access the method both as an instance method that uses the value the object was initialised with, and as a class method that uses a passed-in value. Here is an example of how I would like to call it, and what actually happens with the code above:
>>> Executor.action(3)
Val is: 3
>>> Executor().action()
Traceback (most recent call last):
  File "<input>", line 1, in <module>
TypeError: action() missing 1 required positional argument: 'val'
I was thinking about trying to use keyword arguments, but I can't seem to get that to work either. Here is my code so far:
class Executor:
    def __init__(self):
        self.val = 5

    @classmethod
    def action(cls, val=None):
        if val is None:
            # This doesn't work; cls does not have attribute 'val'.
            if hasattr(cls, "val"):
                print(f"Val from instance: {cls.val}")
            else:
                raise ValueError("Called as class method and val not passed in.")
        else:
            print(f"Val passed in: {val}")
>>> Executor.action(3)
Val passed in: 3
>>> Executor().action()
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "<input>", line 13, in action
ValueError: Called as class method and val not passed in.
But the class instance does not have the val available for access.
The only other thing I can think of is using Hungarian notation, but it's not ideal as it's a bit messy, and it means there's multiple method names.
class Executor:
def __init__(self):
self.val = 5
def action_instance(self):
self.action_class(self.val)
#classmethod
def action_class(cls, val):
print(f"Val is: {val}")
>>> Executor.action_class(3)
Val is: 3
>>> Executor().action_instance()
Val is: 5
Any advice on a solid, clean approach would be greatly appreciated!
Cheers.
What you want to do looks strange to me; I am not sure you need it. Python cannot overload methods by signature or argument type (although there is functools.singledispatch), so by defining action a second time you are actually replacing the first definition of the method.
The observable behaviour can be achieved with:
class Executor:
    @classmethod
    def action(cls, val=5):
        return val

print(Executor().action())
print(Executor.action(3))
Outputs:
5
3
But again, check first that you really need something like that, because it breaks one of the expectations of Python coders and of the Python data model: calling a method through the class is equivalent to calling it through the instance, provided you pass the instance to the class's method.
obj = Executor()
obj.action() # this is the same as Executor.action(obj)
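If the class-versus-instance dispatch is genuinely needed, one possible approach, not suggested by the answer above, is a small custom descriptor (the name hybridmethod and the helper functions are hypothetical) that binds a different implementation depending on how the attribute is looked up:
import types

class hybridmethod:
    """Sketch of a descriptor: one implementation for class access,
    another for instance access."""
    def __init__(self, cls_func, inst_func):
        self.cls_func = cls_func
        self.inst_func = inst_func

    def __get__(self, obj, objtype=None):
        if obj is None:
            # looked up on the class: bind the class-level implementation
            return types.MethodType(self.cls_func, objtype)
        # looked up on an instance: bind the instance-level implementation
        return types.MethodType(self.inst_func, obj)

def _action_for_class(cls, val):
    print(f"Val is: {val}")

def _action_for_instance(self):
    print(f"Val is: {self.val}")

class Executor:
    def __init__(self):
        self.val = 5

    action = hybridmethod(_action_for_class, _action_for_instance)

Executor.action(3)      # Val is: 3
Executor().action()     # Val is: 5
The trade-off is exactly the one the answer warns about: the same attribute no longer behaves the same way when accessed through the class and through an instance.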

How to restrict access of property and methods to Enum members only?

I have an unorthodox Enum that I plan to use in my code, but I've run into a problem: I need my property to throw an error when the Enum is used incorrectly. However, instead of raising the exception, it printed the property object's address.
How I want my code to work:
When the user writes Enum.MEMBER.text, return Enum.MEMBER's alt text.
When the user writes Enum.text, throw an error.
Here's the code snippet:
from enum import Enum

class MyEnum(Enum):
    @property
    def text(self):
        if isinstance(self._value_, MyCapsule):
            return self._value_.text
        raise Exception('You are not using an Enum!')

    @property
    def value(self):
        if isinstance(self._value_, MyCapsule):
            return self._value_.value
        raise Exception('You are not using an Enum!')
class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

    @property
    def text(self):
        return self._text_

    @property
    def value(self):
        return self._value_
class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')

A = CustomData.ONE
B = CustomData
print(A.text, A.value, sep=' | ')
print(B.text, B.value, sep=' | ')
The output is:
One | 1
<property object at 0x0000016CA56DF0E8> | <property object at 0x0000016CA56DF278>
What I expected was:
One | 1
Unexpected exception at ....
Is there a solution to this problem, or I shouldn't write my Enum this way to begin with?
A custom descriptor will do the trick:
class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass
Then change your base Enum to use it:
class MyEnum(Enum):
    @property_only
    def text(self):
        return self._value_.text

    @property_only
    def value(self):
        return self._value_.value

class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')
and in use:
>>> CustomData.text
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
While that solves the "access-only-from-enum" problem, you still have a lot of indirection when you want to access text and value:
>>> CustomData.ONE.value._value_
1
>>> CustomData.ONE.value._text_
'One'
The solution is to incorporate MyCapsule directly into CustomData:
from enum import Enum

class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass

class CustomData(Enum):
    #
    ONE = 1, 'One', 'Lorem'
    TWO = 2, 'Two', 'Ipsum'
    TRI = 3, 'Tri', 'Loipsum'
    #
    def __new__(cls, value, text, more_data):
        member = object.__new__(cls)
        member._value_ = value
        member._text_ = text
        # ignoring more_data for now...
        return member
    #
    @property_only
    def text(self):
        return self._text_
    #
    @property_only
    def value(self):
        return self._value_
and in use:
>>> CustomData.ONE
<CustomData.ONE: 1>
>>> CustomData.ONE.value
1
>>> CustomData.ONE.text
'One'
>>> CustomData.ONE.name
'ONE'
>>> CustomData.text
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
>>> CustomData.value
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
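If more_data should also be kept around, one possible extension of the __new__ above (the attribute name more_data is just an assumption, not something the answer defines) is to store it on the member as well:
def __new__(cls, value, text, more_data):
    member = object.__new__(cls)
    member._value_ = value
    member._text_ = text
    member.more_data = more_data   # hypothetical extra attribute
    return member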
Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.

Value never below zero

Is it possible to assign a numeric value to a variable in such a way that it is limited to a certain range? More specifically, I want a variable that can never go below zero; if that were about to happen, an exception should be raised instead.
Imaginary example:
>>> var = AlwaysPositive(0)
>>> print(var)
0
>>> var += 3
>>> print(var)
3
>>> var -= 4
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AlwaysPositiveError: dropping AlwaysPositive integer below zero
The reason I ask is that I am debugging a game I am writing. Where humans implicitly understand that you can never have -1 cards in your hand, a computer does not. I can write functions that check all the values used in the game, call them at multiple points throughout the script, and see whether any weird values appear. But I was wondering whether there is an easier way to do this.
Sub-classing int is probably the best way to do this if you really need to, but the implementations shown so far are naive. I would do:
class NegativeValueError(ValueError):
    pass

class PositiveInteger(int):
    def __new__(cls, value, base=10):
        if isinstance(value, str):
            inst = int.__new__(cls, value, base)
        else:
            inst = int.__new__(cls, value)
        if inst < 0:
            raise NegativeValueError()
        return inst

    def __repr__(self):
        return "PositiveInteger({})".format(int.__repr__(self))

    def __add__(self, other):
        return PositiveInteger(int.__add__(self, other))

    # ... implement other numeric type methods (__sub__, __mul__, etc.)
This allows you to construct a PositiveInteger just like a regular int:
>>> PositiveInteger("FFF", 16)
PositiveInteger(4095)
>>> PositiveInteger(5)
PositiveInteger(5)
>>> PositiveInteger(-5)
Traceback (most recent call last):
  File "<pyshell#24>", line 1, in <module>
    PositiveInteger(-5)
  File "<pyshell#17>", line 8, in __new__
    raise NegativeValueError()
NegativeValueError
See e.g. the datamodel docs on numeric type emulation for details of the methods you will need to implement. Note that you don't need to explicitly check for negative numbers in most of those methods, as when you return PositiveInteger(...) the __new__ will do it for you. In use:
>>> i = PositiveInteger(5)
>>> i + 3
PositiveInteger(8)
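For example, a __sub__ written in the same style (a sketch following the pattern described above, not part of the original answer) lets __new__ do the range check on the result:
def __sub__(self, other):
    # PositiveInteger.__new__ raises NegativeValueError if the result is negative
    return PositiveInteger(int.__sub__(self, other))
With that in place, PositiveInteger(5) - 7 raises NegativeValueError instead of silently returning -2.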
Alternatively, if these non-negative integers will be attributes of a class, you could enforce positive values using the descriptor protocol, e.g.:
class PositiveIntegerAttribute(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, typ=None):
        return getattr(obj, self.name)

    def __set__(self, obj, val):
        if not isinstance(val, int):
            raise TypeError()
        if val < 0:
            raise NegativeValueError()
        setattr(obj, self.name, val)

    def __delete__(self, obj):
        delattr(obj, self.name)
You can then use this as follows:
>>> class Test(object):
    foo = PositiveIntegerAttribute('_foo')

>>> t = Test()
>>> t.foo = 1
>>> t.foo = -1
Traceback (most recent call last):
  File "<pyshell#34>", line 1, in <module>
    t.foo = -1
  File "<pyshell#28>", line 13, in __set__
    raise NegativeValueError()
NegativeValueError
>>> t.foo += 3
>>> t.foo
4
>>> t.foo -= 5
Traceback (most recent call last):
  File "<pyshell#37>", line 1, in <module>
    t.foo -= 5
  File "<pyshell#28>", line 13, in __set__
    raise NegativeValueError()
NegativeValueError
You can subclass int to make your own data type and give it magic methods that overload the operators you need.
class AlwayspositiveError(ValueError):
    pass

class Alwayspositive(int):
    # int.__new__ already handles construction, so no __init__ override is needed

    def __neg__(self):
        raise AlwayspositiveError()

    def __sub__(self, other):
        result = super().__sub__(other)
        if result < 0:
            raise AlwayspositiveError()
        return result
And so on. It is quite a lot of work and debugging to make such a class safe, but it allows you to debug your code with very few changes between debug and release mode.
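A quick check of the behaviour (a hypothetical session, assuming the class above with only __neg__ and __sub__ overloaded):
>>> x = Alwayspositive(3)
>>> x - 1
2
>>> -x
Traceback (most recent call last):
  ...
AlwayspositiveError
>>> x - 5
Traceback (most recent call last):
  ...
AlwayspositiveError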

Decorator to invoke instance method

I have a class A with a method do_something(self, a, b, c) and another instance method, can_do_something(self, a, b, c), that validates the input and checks permissions.
This is a common pattern in my code and I want to write a decorator that accepts a validation function name and perform the test.
def validate_input(validation_fn_name):
    def validation_decorator(func):
        def validate_input_action(self, *args):
            error = getattr(self, validation_fn_name)(*args)
            if not error == True:
                raise error
            else:
                return func(*args)
        return validate_input_action
    return validation_decorator
Invoking the function as follows:
@validate_input('can_do_something')
def do_something(self, a, b, c):
    return a + b + c
The problem is that I'm not sure how to maintain self throughout the decorated function. I've used the validation function's name with getattr so it can be run in the context of the instance, but I cannot do the same for func(*args).
So what is the proper way to achieve this?
Thanks.
EDIT
So, following @André Laszlo's answer, I realized that self is just the first argument, so there is no need to use getattr at all; I can just pass on *args.
def validate_input(validation_fn):
    def validation_decorator(func):
        def validate_input_action(*args):
            error = validation_fn(*args)
            if not error == True:
                raise error
            else:
                return func(*args)
        return validate_input_action
    return validation_decorator
Much more elegant, and it supports static methods as well. Adding a static method to @André Laszlo's example shows the decorator working:
class Foo(object):
    @staticmethod
    def validate_baz(a, b, c):
        if a > b:
            return ValueError('a gt b')

    @staticmethod
    @validate_input(Foo.validate_baz)
    def baz(a, b, c):
        print(a, b, c)
>>> Foo.baz(1, 2, 3)
1 2 3
>>> Foo.baz(2, 1, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in validate_input_action
ValueError: a gt b
But when I try to do the same thing in a Django model:
from django.db import models
from django.conf import settings
settings.configure()

class Dummy(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=10)

    def can_say_name(self):
        if self.name is None:
            return Exception('Does not have a name')

    @validate_input(can_say_name)
    def say_name(self):
        print(self.name)

    @staticmethod
    def can_create_dummy(name):
        if name == 'noname':
            return Exception('No name is not a name !')

    @staticmethod
    @validate_input(Dummy.can_create_dummy)
    def create_dummy(name):
        return Dummy.objects.create(name=name)
I get the following:
NameError: name 'Dummy' is not defined
So what is the difference between a Django model and a plain object in relation to this issue?
I think this does what you want:
def validate_input(validation_fn_name):
    def validation_decorator(func):
        def validate_input_action(self, *args):
            error = getattr(self, validation_fn_name)(*args)
            if error is not None:
                raise error
            else:
                arglist = [self] + list(args)
                return func(*arglist)
        return validate_input_action
    return validation_decorator
class Foo(object):
    def validate_length(self, arg1):
        if len(arg1) < 3:
            return ValueError('%r is too short' % arg1)

    @validate_input('validate_length')
    def bar(self, arg1):
        print("Arg1 is %r" % arg1)

if __name__ == "__main__":
    f = Foo()
    f.bar('hello')
    f.bar('')
Output is:
Arg1 is 'hello'
Traceback (most recent call last):
  File "validator.py", line 27, in <module>
    f.bar('')
  File "validator.py", line 6, in validate_input_action
    raise error
ValueError: '' is too short
Updated answer
The error (NameError: name 'Dummy' is not defined) occurs because the Dummy class is not defined yet when the validate_input decorator gets Dummy as an argument. I guess this could have been implemented differently, but for now that's the way Python works. The easiest solution that I see is to stick to using getattr, which will work because it looks up the method at run time.
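Applied to the Dummy model from the question, the name-based form might look like this for the instance method (a sketch only; validate_input here is the getattr-based version at the top of this answer, and the staticmethod case still needs a different approach, since there is no self to look the name up on):
class Dummy(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=10)

    def can_say_name(self):
        if self.name is None:
            return Exception('Does not have a name')

    @validate_input('can_say_name')   # resolved via getattr(self, ...) at call time
    def say_name(self):
        print(self.name)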

Idiomatic way of specifying default arguments whose presence/non-presence matter

I often see Python code that takes default arguments and has special behaviour when they are not specified.
If, for example, I want behaviour like this:
def getwrap(dict, key, default = ??):
    if ???:  # default is specified
        return dict.get(key, default)
    else:
        return dict[key]
If I were to roll my own, I'd end up with something like:
class Ham:
    __secret = object()

    def Cheese(self, key, default=__secret):
        if default is self.__secret:
            return self.dict[key]
        else:
            return self.dict.get(key, default)
But I don't want to invent something silly when there certainly is a standard. What is the idiomatic way of doing this in Python?
I usually prefer
def getwrap(my_dict, my_key, default=None):
    if default is None:
        return my_dict[my_key]
    else:
        return my_dict.get(my_key, default)
but of course this assumes that None is never a valid default value.
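If None needs to remain a legal default value, a common idiom (a sketch, not part of the answer above; the sentinel name _MISSING is hypothetical) is a private module-level sentinel object:
_MISSING = object()   # hypothetical sentinel

def getwrap(my_dict, my_key, default=_MISSING):
    if default is _MISSING:
        return my_dict[my_key]           # no default given: let KeyError propagate
    return my_dict.get(my_key, default)
This is essentially the same trick as the Ham.__secret attempt in the question, just kept at module level.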
You could do it based on *args and/or **kwargs.
Here's an alternate implementation of getwrap based on *args:
def getwrap(my_dict, my_key, *args):
    if args:
        return my_dict.get(my_key, args[0])
    else:
        return my_dict[my_key]
And here it is in action:
>>> a = {'foo': 1}
>>> getwrap(a, 'foo')
1
>>> getwrap(a, 'bar')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in getwrap
KeyError: 'bar'
>>> getwrap(a, 'bar', 'Nobody expects the Spanish Inquisition!')
'Nobody expects the Spanish Inquisition!'
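The **kwargs variant mentioned above would be similar (a sketch, assuming the caller passes default by keyword):
def getwrap(my_dict, my_key, **kwargs):
    if 'default' in kwargs:
        return my_dict.get(my_key, kwargs['default'])
    return my_dict[my_key]

# e.g. getwrap(a, 'bar', default='fallback') returns 'fallback' when 'bar' is missing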
