I have a class like the following:
class Invoice:
    def __init__(self, invoice_id):
        self.invoice_id = invoice_id
        self._amount = None

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, amount):
        self._amount = amount
In the above example, whenever I try to get the invoice amount without setting its value, I get None, like the following:
invoice = Invoice(invoice_id='asdf234')
invoice.amount
>> None
But in this situation, None is not the correct default value for amount. I should be able to differentiate between the amount being None and the amount not having been set at all. So my question is the following:
How do we handle cases where a class property doesn't have a sensible default value? In the above example, if I remove self._amount = None from __init__, I get an AttributeError for self._amount, and self.amount returns a valid value only after I call invoice.amount = 5. Is this the right way to handle it? It also leads to inconsistency in object state, since the application would be adding instance attributes at runtime.
I've kept amount as a property of Invoice for better readability of Invoice and its attributes. Should class properties only be used when we know their initial / default values?
None is conventionally used for the absence of a value, but sometimes you need it to actually be an allowable value. If you want to be able to distinguish between None and an un-set value, simply define your own singleton for this purpose.
class Undefined:
    __str__ = __repr__ = lambda self: "Undefined"

Undefined = Undefined()
class Invoice:
    def __init__(self, invoice_id):
        self.invoice_id = invoice_id
        self._amount = Undefined

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, amount):
        if amount is Undefined:
            raise ValueError("amount must be an actual value")
        self._amount = amount
Of course, you may now need to test for Undefined in other methods to make sure they're not being used before the instance is properly initialized. A better approach might be to set the attribute during initialization and require its value to be passed in to __init__(). That way, you avoid having an Invoice in an invalid (incompletely initialized) state. Someone could still set _amount to an invalid value, but they'd simply get the trouble they were asking for. We're all adults here.
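A minimal sketch of that required-argument approach (illustrative only; the sample amount value is made up):

class Invoice:
    def __init__(self, invoice_id, amount):
        # amount is mandatory, so an Invoice can never exist half-initialized
        self.invoice_id = invoice_id
        self.amount = amount

invoice = Invoice(invoice_id='asdf234', amount=5)
print(invoice.amount)  # 5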
Don't use a property in this case. Your getters and setters don't do anything except return and set the value. Just use a normal attribute. If you later want to control access, you can add a property without changing the interface of your class!
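A sketch of what that looks like in practice (the negative-amount check is just an invented example of "controlling access later"):

# first version: a plain attribute, no boilerplate
class Invoice:
    def __init__(self, invoice_id, amount):
        self.invoice_id = invoice_id
        self.amount = amount

# later version: same public name, but access is now controlled
class Invoice:
    def __init__(self, invoice_id, amount):
        self.invoice_id = invoice_id
        self.amount = amount  # goes through the setter below

    @property
    def amount(self):
        return self._amount

    @amount.setter
    def amount(self, value):
        if value < 0:
            raise ValueError("amount cannot be negative")
        self._amount = value

# calling code is unchanged in both versions:
# inv = Invoice('asdf234', 5); inv.amount = 10; print(inv.amount)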
As for dealing with non-set values, just create your own object to represent a value that has never been set. It can be as simple as:
>>> NOT_SET = object()
>>> class Invoice:
...     def __init__(self, invoice_id):
...         self.invoice_id = invoice_id
...         self.amount = NOT_SET
...
>>> inv = Invoice(42)
>>> if inv.amount is NOT_SET:
...     inv.amount = 1
...
You could also use an enum if you want better support for typing.
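For example, a sentinel built on enum (a sketch; requires Python 3.6+ for enum.auto(), and the Missing name is arbitrary):

import enum

class Missing(enum.Enum):
    NOT_SET = enum.auto()

class Invoice:
    def __init__(self, invoice_id):
        self.invoice_id = invoice_id
        self.amount = Missing.NOT_SET  # annotate as e.g. Union[int, Missing]

inv = Invoice(42)
if inv.amount is Missing.NOT_SET:
    inv.amount = 1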
You can make a sentinel class and use that to determine whether a value has been set at all.
class AmountNotSet(object):
    pass

class Invoice(object):
    def __init__(self, invoice_id):
        self.invoice_id = invoice_id
        self._amount = AmountNotSet

    # ...etc...
Then you can check whether the invoice is set or not like so:
invoice1 = Invoice(1)
invoice2 = Invoice(2)
invoice2.amount = None
invoice1.amount is AmountNotSet # => True
invoice2.amount is None # => True
Related
I'm currently writing some code for an option pricer, and at the same time I've been experimenting with Python dataclasses. Here I have two classes, Option and Option2, the former written in dataclass syntax and the latter in conventional class syntax.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Option:
    is_american: Optional[bool] = field(default=False)
    is_european: Optional[bool] = not is_american

class Option2:
    def __init__(self, is_american=False):
        self.is_european = not is_american

if __name__ == "__main__":
    eu_option1 = Option()
    print(f"{eu_option1.is_european = }")
    eu_option2 = Option2()
    print(f"{eu_option2.is_european = }")
The output gives
eu_option1.is_european = False
eu_option2.is_european = True
However, something very strange happened. Notice how in the Option2() case, is_american is set to False by default, and hence is_european must be True and it indeed is, so this is expected behaviour.
But in the dataclass Option() case, is_american is also set to False by default. However, for whatever reason, the dataclass did not trigger the is_european: Optional[bool] = not is_american and hence is_european is still False when it is supposed to be True.
What is going on here? Did I use my dataclass incorrectly?
It is likely that the dataclass constructor is struggling with the order of statements. Normally you'd have all the mandatory parameters before any optional ones, for example, and it may not realise at construct time that the value is meant to be False.
There is a built-in mechanism to make sure that fields which depend on other fields are processed in the correct order. What you need to do is flag your derived field with init=False and move its initialization into a __post_init__() method.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Option:
    is_american: Optional[bool] = field(default=False)
    is_european: Optional[bool] = field(init=False)

    def __post_init__(self):
        self.is_european = not self.is_american
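A quick check of the fixed version (same driver code as in the question):

eu_option1 = Option()
print(f"{eu_option1.is_european = }")   # now prints: eu_option1.is_european = True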
Personally I'd get rid of is_european altogether and use a get() to fetch the value if it's called. There's no need to hold the extra value if it's always going to be directly related to another value. Just calculate it on the fly when it's called.
With many languages, you wouldn't access attributes directly; you'd access them through control functions (get, set, etc.) like get_is_american() or get_country(). Python has an excellent way of handling this through decorators. You can use direct access when first setting up a class, then move to managed access without having to change the code that calls the attribute, by using the @property decorator. Examples:
# change the is_american to _is_american to stop direct access

# Get is the default action, therefore does not need to be specified
@property
def is_american(self):
    return self._is_american

@property
def is_european(self):
    return not self._is_american

# Allow value to be set
@is_american.setter
def is_american(self, america_based: bool):
    self._is_american = america_based

@is_european.setter
def is_european(self, europe_based: bool):
    self._is_american = not europe_based
This could then be called as follows:
print(my_object.is_american)
my_object.is_american = False
print(my_object.is_european)
Did you see how flexible that approach is? If you have more countries than US or European, or if you think the process might expand, you can change the storage to a string or an enum and define the return values in the accessor. Example:
# Imagine country is now a string

@property
def is_american(self):
    if self.country == 'US':
        return True
    else:
        return False

@property
def is_european(self):
    if self.country == 'EU':
        return True
    else:
        return False

@property
def country(self):
    return self._country

@country.setter
def country(self, new_country: str):
    self._country = new_country

@is_american.setter
def is_american(self, america_check: bool):
    if america_check:
        self._country = "US"
    else:
        self._country = "EU"

@is_european.setter
def is_european(self, europe_check: bool):
    if europe_check:
        self._country = "EU"
    else:
        self._country = "US"
Notice how, if you already have existing code that calls is_american, none of the accessing code has to be changed even though country is now stored - and available as - a string.
Your problem is:
is_european: Optional[bool] = not is_american
not is_american is evaluated at definition time. At that point, is_american is a Field, and all Fields are truthy. If you want one field defined in terms of another, you'll want to use post-initialization processing to dynamically set the value of is_european after is_american is initialized, or make it an @property that computes its value live from the value of is_american (assuming it's impossible to be both at once).
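For instance, the property variant could look roughly like this (a sketch; it keeps only is_american as stored state, and the derived attribute becomes read-only):

from dataclasses import dataclass

@dataclass
class Option:
    is_american: bool = False

    @property
    def is_european(self) -> bool:
        # computed live, so it can never drift out of sync with is_american
        return not self.is_american

print(Option().is_european)                  # True
print(Option(is_american=True).is_european)  # False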
I have a class with the following requirements:
First instance MUST receive a parameter.
All the following instances have this parameter be optional.
If it is not passed then I will use the parameter of the previous object init.
For that, I need to share a variable between the objects (all objects belong to classes with the same parent).
For example:
import sys

class MyClass:
    shared_variable = None

    def __init__(self, paremeter_optional=None):
        if paremeter_optional is None:  # Parameter optional not given
            if self.shared_variable is None:
                print("Error! First instance must have the parameter")
                sys.exit(-1)
            else:
                paremeter_optional = self.shared_variable  # Use last parameter
        self.shared_variable = paremeter_optional  # Save it for next object
objA = MyClass(3)
objB = MyClass()
Because the shared_variable is not consistent/shared across inits, when running the above code I get the error:
Error! First instance must have the parameter
(After the second init of objB)
Of course, I could use a global variable but I want to avoid it if possible and use some best practices for this.
Update: having initially misunderstood the problem, I would still recommend being explicit, rather than having the class track information that is better tracked outside the class.
class MyClass:
    def __init__(self, parameter):
        ...

objA = MyClass(3)
objB = MyClass(4)
objC = MyClass(5)
objD = MyClass(5)  # Be explicit; don't "remember" what was used for objC
If objC and objD are "related" enough that objD can rely on the initialization of objC, and you want to be DRY, use something like
objC, objD = [MyClass(5) for _ in range(2)]
Original answer:
I wouldn't make this something you set from an instance at all; it's a class attribute, and so should be set at the class level only.
class MyClass:
    shared_variable = None

    def __init__(self):
        if self.shared_variable is None:
            raise RuntimeError("shared_variable must be set before instantiating")
        ...

MyClass.shared_variable = 3
objA = MyClass()
objB = MyClass()
Assigning a value to self.shared_variable creates an instance attribute, so the value is not shared among instances.
Instead, assign the value explicitly to the class attribute by referencing the instance's class object.
Change:
self.shared_variable = paremeter_optional
to:
self.__class__.shared_variable = paremeter_optional
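Putting that fix into the original example (a sketch; the per-instance parameter attribute is added here just to show that the value carries over):

class MyClass:
    shared_variable = None

    def __init__(self, paremeter_optional=None):
        if paremeter_optional is None:
            if self.shared_variable is None:
                raise ValueError("first instance must receive the parameter")
            paremeter_optional = self.shared_variable        # reuse last parameter
        self.__class__.shared_variable = paremeter_optional  # class attribute, shared
        self.parameter = paremeter_optional                  # illustrative only

objA = MyClass(3)
objB = MyClass()
print(objB.parameter)  # 3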
I have two related models:
class FirstModel(models.Model):
    base_value = models.FloatField()

class SecondModel(models.Model):
    parent = models.ForeignKey(FirstModel)

    @property
    def parent_value(self):
        return self.parent.base_value

    @property
    def calculate(self):
        return self.parent_value + 1
In general, SecondModel.calculate is mostly used in the context of its related FirstModel. However, I sometimes want to be able to call calculate with a temporary value as its parent_value. Something like this:
foo = SecondModel()
# would look in the database for the related FirstModel and add 1 to its base_value
foo.calculate
foo.parent_value = 10
foo.calculate # should return 11
Obviously you can't do this because parent_value is a read-only property. I also have many different models similar to SecondModel that need to have this kind of capability.
I've thought about and tried several things, but none have quite seemed to work:
1) Writing a Django proxy model - possible, but the number of objects is rather high, so I'd be writing a lot of similar code. Also, there appears to be a bug related to overriding properties: https://code.djangoproject.com/ticket/16176. But it'd look like this:
class ModelProxy(SecondModel):
    class Meta:
        proxy = True

    def __init__(self, temp_value):
        self.parent_value = temp_value
2) Overloading the parent_value property on the instance - like this:
foo = SecondModel()
setattr(foo, 'parent_value', 10)
but you can't do this because properties are members of the class, not the instance, and I only want the temporary value to be set on the instance.
3) Metaclass or class generator? - Seems overly complicated. Also, I am uncertain what would happen if I used a metaclass to dynamically generate classes that are children of models.Model. Would I run into problems with the db tables not being in sync?
4) Rewriting the properties with proper getters and setters? - maybe the solution is to rewrite SecondModel so that the property can be set?
Any suggestions?
I believe a mixin would achieve what you want, and provide a simple and reusable way of supporting temporary values in your calculations. By mixing the example below into each model you want this behaviour on, you can then:
Set a temporary parent value on each model
When calculate is called, it will check whether there is a property parent_value available, and if not it will use the temporary parent value in the calculation.
The code below should achieve what you are looking for - apologies, I haven't been able to test it yet, but it should be about right. Please let me know of any problems that need editing.
class CalculateMixin(object):
    @property
    def temp_parent_value(self):
        return self._temp_parent_value

    @temp_parent_value.setter
    def temp_parent_value(self, value):
        self._temp_parent_value = value

    @property
    def calculate(self):
        parent_value = self.parent_value if self.parent_value else self.temp_parent_value
        return parent_value + 1

class SecondModel(models.Model, CalculateMixin):
    parent = models.ForeignKey(FirstModel)
    # set instance.temp_parent_value to whatever temporary value you desire

    @property
    def parent_value(self):
        return self.parent.base_value
You can use the property setter:
class SecondModel(models.Model):
    _base_value = None
    parent = models.ForeignKey(FirstModel)

    @property
    def parent_value(self):
        if self._base_value is None:
            return self.parent.base_value
        else:
            return self._base_value

    @parent_value.setter
    def parent_value(self, value):
        self._base_value = value

    @property
    def calculate(self):
        return self.parent_value + 1
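With that setter in place, the usage from the question works (a quick illustration; foo is assumed to be an instance that is not yet saved):

foo = SecondModel()
foo.parent_value = 10  # bypasses the FirstModel lookup for this instance
foo.calculate          # returns 11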
I think you can do what you need using the mixin PropertyOverrideMixin shown below: if some property value isn't available, it will instead look for the same property prefixed with temp_. This allows you to provide temporary values that can be used when the real property values can't be looked up.
Below is the mixin, some example models and a unit test to show how this can work. Hopefully this can be adapted for your problem! Finally it is worth mentioning that the properties here can be interchanged with normal object attributes and it should still all work.
from unittest import TestCase

class PropertyOverrideMixin(object):

    def __getattribute__(self, name):
        """
        Override that, if an attribute isn't found on the object, then it instead
        looks for the same attribute prefixed with 'temp_' and tries to return
        that value.
        """
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            temp_name = 'temp_{0}'.format(name)
            return object.__getattribute__(self, temp_name)

class ParentModel(object):
    attribute_1 = 'parent value 1'

class Model(PropertyOverrideMixin):

    # Set our temporary property values
    @property
    def temp_attribute_1(self):
        return 'temporary value 1'

    @property
    def temp_attribute_2(self):
        return 'temporary value 2'

    # Attribute 1 looks up value on its parent
    @property
    def attribute_1(self):
        return self.parent.attribute_1

    # Attribute 2 looks up a value on this object
    @property
    def attribute_2(self):
        return self.some_other_attribute

class PropertyOverrideMixinTest(TestCase):

    def test_attributes(self):
        model = Model()

        # Looking up attributes 1 and 2 returns the temp versions at first
        self.assertEquals('temporary value 1', model.attribute_1)
        self.assertEquals('temporary value 2', model.attribute_2)

        # Now we set the parent, and lookup of attribute 1 works on the parent
        model.parent = ParentModel()
        self.assertEquals('parent value 1', model.attribute_1)

        # now we set attribute_2, so this gets returned and the temporary ignored
        model.some_other_attribute = 'value 2'
        self.assertEquals('value 2', model.attribute_2)
Having a simple Python class like this:
class Spam(object):
    def __init__(self, description, value):
        self.description = description
        self.value = value
I would like to check the following constraints:
"description cannot be empty"
"value must be greater than zero"
Should I:
1. Validate data before creating the Spam object?
2. Check data in the __init__ method?
3. Create an is_valid method on the Spam class and call it with spam.is_valid()?
4. Create an is_valid static method on the Spam class and call it with Spam.is_valid(description, value)?
5. Check data in the setters?
6. Etc.
Could you recommend a well designed / Pythonic / not verbose (for a class with many attributes) / elegant approach?
You can use Python properties to cleanly apply rules to each field separately, and enforce them even when client code tries to change the field:
class Spam(object):
    def __init__(self, description, value):
        self.description = description
        self.value = value

    @property
    def description(self):
        return self._description

    @description.setter
    def description(self, d):
        if not d: raise Exception("description cannot be empty")
        self._description = d

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        if not (v > 0): raise Exception("value must be greater than zero")
        self._value = v
An exception will be thrown on any attempt to violate the rules, even in the __init__ function, in which case object construction will fail.
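For example (a short demonstration of the behaviour just described):

s = Spam("ham", 5)
s.value = 0       # raises Exception: value must be greater than zero
s2 = Spam("", 5)  # construction itself fails: description cannot be empty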
UPDATE: Sometime between 2010 and now, I learned about operator.attrgetter:
import operator

class Spam(object):
    def __init__(self, description, value):
        self.description = description
        self.value = value

    description = property(operator.attrgetter('_description'))

    @description.setter
    def description(self, d):
        if not d: raise Exception("description cannot be empty")
        self._description = d

    value = property(operator.attrgetter('_value'))

    @value.setter
    def value(self, v):
        if not (v > 0): raise Exception("value must be greater than zero")
        self._value = v
If you only want to validate the values when the object is created AND passing in invalid values is considered a programming error then I would use assertions:
class Spam(object):
    def __init__(self, description: str, value: int):
        assert description != ""
        assert value > 0
        self.description = description
        self.value = value
This is about as concise as you are going to get, and clearly documents that these are preconditions for creating the object.
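Usage then looks like this; note that assert statements are stripped when Python runs with the -O flag, so this pattern is for catching programming errors, not for validating untrusted input:

Spam("ham", 5)   # fine
Spam("", 5)      # AssertionError
Spam("ham", 0)   # AssertionError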
Unless you're hellbent on rolling your own, you can simply use formencode. It really shines with many attributes and schemas (just subclass schemas) and has a lot of useful validators builtin. As you can see this is the "validate data before creating spam object" approach.
from formencode import Schema, validators

class SpamSchema(Schema):
    description = validators.String(not_empty=True)
    value = validators.Int(min=0)

class Spam(object):
    def __init__(self, description, value):
        self.description = description
        self.value = value

## how you actually validate depends on your application
def validate_input(cls, schema, **input):
    data = schema.to_python(input)  # validate `input` dict with the schema
    return cls(**data)  # it validated here, else there was an exception

# returns a Spam object
validate_input(Spam, SpamSchema, description='this works', value=5)

# raises an exception with all the invalid fields
validate_input(Spam, SpamSchema, description='', value=-1)
You could do the checks during __init__ too (and make them completely transparent with descriptors|decorators|metaclass), but I'm not a big fan of that. I like a clean barrier between user input and internal objects.
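For completeness, here is roughly what the "transparent" descriptor flavour could look like (a sketch only; the Validated class and its checks are invented for illustration, and __set_name__ requires Python 3.6+):

class Validated:
    """Reusable descriptor that runs a check on every assignment."""
    def __init__(self, check, message):
        self.check = check
        self.message = message

    def __set_name__(self, owner, name):
        self.attr = "_" + name          # where the real value is stored

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.attr)

    def __set__(self, obj, value):
        if not self.check(value):
            raise ValueError(self.message)
        setattr(obj, self.attr, value)

class Spam:
    description = Validated(bool, "description cannot be empty")
    value = Validated(lambda v: v > 0, "value must be greater than zero")

    def __init__(self, description, value):
        self.description = description  # both assignments go through __set__
        self.value = value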
If you only want to validate the values passed to the constructor, you could do:
class Spam(object):
    def __init__(self, description, value):
        if not description or value <= 0:
            raise ValueError
        self.description = description
        self.value = value
This of course will not prevent anyone from doing something like this:
>>> s = Spam('s', 5)
>>> s.value = 0
>>> s.value
0
So, the correct approach depends on what you're trying to accomplish.
You can try pyfields:
from pyfields import field

class Spam(object):
    description = field(validators={"description can not be empty": lambda s: len(s) > 0})
    value = field(validators={"value must be greater than zero": lambda x: x > 0})

s = Spam()
s.description = "hello"
s.description = ""  # <-- raises error, see below
It yields
ValidationError[ValueError]: Error validating [<...>.Spam.description=''].
InvalidValue: description can not be empty.
Function [<lambda>] returned [False] for value ''.
It is compliant with Python 2 and 3.5 (as opposed to pydantic), and validation happens every time the value is changed (not only the first time, as opposed to attrs). It can create the constructor for you, but does not do so by default, as shown above.
Note that you may wish to optionally use mini-lambda instead of plain old lambda functions if you wish the error messages to be even more straightforward (they will display the failing expression).
See pyfields documentation for details (I'm the author by the way ;) )
I'm working on yet another validation library - convtools models (docs / github).
The vision of this library is:
validation first
no implicit type casting
no implicit data losses during type casting - e.g. casting 10.0 to int is fine, 10.1 is not
if there’s a model instance, it is valid.
from collections import namedtuple
from typing import Union

from convtools.contrib.models import ObjectModel, build, validate, validators

# input data to test
SpamTest = namedtuple("SpamTest", ["description", "value"])

class Spam(ObjectModel):
    description: str = validate(validators.Length(min_length=1))
    value: Union[int, float] = validate(validators.Gt(0))

spam, errors = build(Spam, SpamTest("", 0))
"""
>>> In [34]: errors
>>> Out[34]:
>>> {'description': {'__ERRORS': {'min_length': 'length is 0, but should be >= 1'}},
>>>  'value': {'__ERRORS': {'gt': 'should be > 0'}}}
"""

spam, errors = build(Spam, SpamTest("foo", 1))
"""
>>> In [42]: spam
>>> Out[42]: Spam(description='foo', value=1)
>>> In [43]: spam.to_dict()
>>> Out[43]: {'description': 'foo', 'value': 1}
>>> In [44]: spam.description
>>> Out[44]: 'foo'
"""
I am programming simulations of single neurons, so I have to handle a lot of parameters. The idea is to have two classes, one for a single parameter (SingleParameter) and one for a collection of parameters (Collection). I use property() to make accessing a parameter's value easy and the code more readable. This works perfectly for a single parameter, but I don't know how to implement it for the collection, since I want to name the property in Collection after the SingleParameter. Here is an example:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(self):
        return self._v

    def set(self, value):
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')

par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')

# par1 and par2 I can access perfectly via 'p1.v = ...'
# or get its value with 'p1.v'

class Collection(object):
    def __init__(self):
        self.dict = {}

    def __getitem__(self, name):
        return self.dict[name]  # get the whole object
        # to get the value instead:
        # return self.dict[name].v

    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # now comes the part that I don't know how to implement with property():
        # It should be something like
        # self.__dict__[parameter.name] = property(...) ?

col = Collection()
col.add(par1)
col.add(par2)

col['par1']  # gives the whole object

# Now here is what I would like to get:
# col.par1 -> should result like col['par1'].v
# col.par1 = 5 -> should result like col['par1'].v = 5
Other questions that I put to understand property():
Why do managed attributes just work for class attributes and not for instance attributes in python?
How can I assign a new class attribute via __dict__ in python?
Look at built-in functions getattr and setattr. You'll probably be a lot happier.
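A rough sketch of what that could mean for the Collection (reusing the SingleParameter class from the question; note this copies the value onto the collection instance, so the copy can go stale):

class Collection(object):
    def __init__(self):
        self.dict = {}

    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # expose the current value as a plain attribute of this instance
        setattr(self, parameter.name, parameter.v)

col = Collection()
col.add(SingleParameter(name='par1', default_value=10, unit='mV'))
print(col.par1)              # 10
print(getattr(col, 'par1'))  # same thing, looked up by name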
Using the same get/set functions for both classes forces you into an ugly hack with the argument list. Very sketchy, this is how I would do it:
In class SingleParameter, define get and set as usual:
def get(self):
    return self._s

def set(self, value):
    self._s = value
In class Collection, you cannot know the information until you create the property, so you define the metaset/metaget function and particularize them only later with a lambda function:
def metaget(self, par):
    return par.s

def metaset(self, value, par):
    par.s = value

def add(self, par):
    self[par.name] = par
    setattr(Collection, par.name,
            property(
                fget=lambda x: Collection.metaget(x, par),
                fset=lambda x, y: Collection.metaset(x, y, par)))
Properties are meant to dynamically evaluate attributes or to make them read-only. What you need is customizing attribute access. __getattr__ and __setattr__ do that really fine, and there's also __getattribute__ if __getattr__ is not enough.
See Python docs on customizing attribute access for details.
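A minimal sketch of that idea applied to the Collection (illustrative only; the try/except around the dict lookup is an addition so that missing names still raise AttributeError):

class Collection(object):
    def __init__(self):
        # bypass our own __setattr__ while the storage dict does not exist yet
        object.__setattr__(self, 'params', {})

    def add(self, parameter):
        self.params[parameter.name] = parameter

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self.params[name].v
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self.params:
            self.params[name].v = value   # forward to the parameter
        else:
            object.__setattr__(self, name, value)

col = Collection()
col.add(SingleParameter(name='par1', default_value=10, unit='mV'))
col.par1 = 5
print(col.par1)  # 5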
Have you looked at the traits package? It seems that you are reinventing the wheel here with your parameter classes. Traits also has additional features that might be useful for your type of application (incidentally, I know a person who happily uses traits in neural simulations).
Now I implemented a solution with set-/getattr:
class Collection(object):
    ...

    def __setattr__(self, name, value):
        if 'dict' in self.__dict__:
            if name in self.dict:
                self[name].v = value
        else:
            self.__dict__[name] = value

    def __getattr__(self, name):
        return self[name].v
There is one thing I don't quite like: the attributes are not in the __dict__. And if I put them there as well, I would have a copy of the value - which can be dangerous...
Finally I succeeded in implementing the classes with property(). Thanks a lot for the advice. It took me quite a while to work it out, but I can promise you that this exercise helps you understand Python's OOP better.
I implemented it also with __getattr__ and __setattr__, but I still don't know the advantages and disadvantages compared to the property solution. That seems to be worth another question. The property solution seems to be quite clean.
So here is the code:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(*args):
        self = args[0]
        print("get(): ")
        print(args)
        return self._v

    def set(*args):
        print("set(): ")
        print(args)
        self = args[0]
        value = args[-1]
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')

class Collection(dict):
    # inheriting from dict saves the methods: __getitem__ and __init__

    def add(self, par):
        self[par.name] = par
        # Now here comes the tricky part.
        # (Note: this property calls the get() and set() methods with one
        # more argument than the property of SingleParameter)
        setattr(Collection, par.name,
                property(fget=par.get, fset=par.set))

# Applying the classes:
par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')

col = Collection()
col.add(par1)
col.add(par2)

# Setting parameter values:
par1.v = 13
col.par1 = 14

# Getting parameter values:
par1.v
col.par1

# checking identity:
par1.v is col.par1

# to access the whole object:
col['par1']
As I am new I am not sure how to move on:
how to treat follow-up questions (like these):
get() seems to be called twice - why?
OOP design: property vs. __getattr__ & __setattr__ - when should I use which?
is it rude to mark my own answer to my own question as accepted?
is it recommended to rename the title in order to put related questions, or questions elaborated with the same example, into the same context?
I have a class that does something similar, but I did the following in the collection object:
setattr(self, par.name, par.v)