What I tried so far:
from pydantic import BaseModel, validator

class Foo(BaseModel):
    a: int
    b: int
    c: int

    class Config:
        validate_assignment = True

    @validator("b", always=True)
    def validate_b(cls, v, values, field):
        # field - doesn't have current value
        # values - has values of other fields, but not for 'b'
        if values.get("b") == 0:  # imaginary logic with prev value
            return values.get("b") - 1
        return v

f = Foo(a=1, b=0, c=2)
f.b = 3
assert f.b == -1  # fails
I also looked at property setters, but apparently they don't work with pydantic.
This looks like a bug to me, so I opened an issue on GitHub: https://github.com/pydantic/pydantic/issues/4888
The way validation is intended to work is stateless. When you create a model instance, validation is run before the instance is even fully initialized.
You mentioned the relevant sentence from the documentation about the values parameter:
values: a dict containing the name-to-value mapping of any previously-validated fields
If we ignore assignment for a moment, then for your example validator this means the values of fields that were already validated before b will be present in that dictionary, which is only the value of a (because validators run in the order fields are defined). This description is evidently meant for the validators run during initialization, not assignment.
What I would concede is that the documentation leaves way too much room for interpretation as to what should happen when validating assignments. But if we take a look at the source code of BaseModel.__setattr__, we can see the intention very clearly:
def __setattr__(self, name, value):
    ...
    known_field = self.__fields__.get(name, None)
    if known_field:
        # We want to
        # - make sure validators are called without the current value for this field inside `values`
        # - keep other values (e.g. submodels) untouched (using `BaseModel.dict()` will change them into dicts)
        # - keep the order of the fields
        if not known_field.field_info.allow_mutation:
            raise TypeError(f'"{known_field.name}" has allow_mutation set to False and cannot be assigned')
        dict_without_original_value = {k: v for k, v in self.__dict__.items() if k != name}
        value, error_ = known_field.validate(value, dict_without_original_value, loc=name, cls=self.__class__)
        ...
As you can see, it explicitly states in that comment that values should not contain the current value.
We can observe that this is indeed the behavior with the following example:
from pydantic import BaseModel, validator

class Foo(BaseModel):
    a: int
    b: int
    c: int

    class Config:
        validate_assignment = True

    @validator("b")
    def validate_b(cls, v: object, values: dict[str, object]) -> object:
        print(f"{v=}, {values=}")
        return v

if __name__ == "__main__":
    print("initializing...")
    f = Foo(a=1, b=0, c=2)
    print("assigning...")
    f.b = 3
Output:
initializing...
v=0, values={'a': 1}
assigning...
v=3, values={'a': 1, 'c': 2}
Ergo, there is no bug here. This is the intended behavior.
Whether this behavior is justified or sensible may be debatable. If you want to debate this, you can open an issue as a question and ask why it was designed this way and argue for a plausible alternative approach.
In my personal opinion, what is more strange about the current implementation is that values contains anything at all during assignment, since only the one specific value being assigned is what gets validated. The way I understand the intent behind values, it should only be available during initialization. But that is yet another debate.
What is undoubtedly true is that this behavior of validator methods upon assignment should be explicitly documented. This is also something that may be worth mentioning in the aforementioned issue.
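If you do need the previous value during assignment, one possible workaround (a minimal sketch, not an official pydantic mechanism; it relies on pydantic v1 storing field values in self.__dict__) is to intercept the assignment yourself before pydantic's validating __setattr__ runs:

from pydantic import BaseModel

class Foo(BaseModel):
    a: int
    b: int
    c: int

    class Config:
        validate_assignment = True

    def __setattr__(self, name, value):
        # Previous field values live in self.__dict__ (pydantic v1),
        # so the "imaginary logic with prev value" can be applied here
        # before delegating to pydantic's own validating __setattr__.
        if name == "b" and self.__dict__.get("b") == 0:
            value = self.__dict__["b"] - 1
        super().__setattr__(name, value)

f = Foo(a=1, b=0, c=2)
f.b = 3
assert f.b == -1  # now passes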
Related
I'm new to Python, and am sort of surprised I cannot do this.
dictionary = {
    'a': '123',
    'b': dictionary['a'] + '456'
}
I'm wondering what the Pythonic way to do this correctly is, because I feel like I'm not the only one who has tried to do this.
EDIT: Enough people were wondering what I'm doing with this, so here are more details for my use cases. Let's say I want to keep dictionary objects to hold file system paths. The paths are relative to other values in the dictionary. For example, this is what one of my dictionaries may look like:
dictionary = {
    'user': 'sholsapp',
    'home': '/home/' + dictionary['user']
}
It is important that at any point in time I may change dictionary['user'] and have all of the dictionary's values reflect the change. Again, this is an example of what I'm using it for, so I hope that it conveys my goal.
From my own research I think I will need to implement a class to do this.
Have no fear of creating new classes.
You can take advantage of Python's string formatting capabilities and simply do:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item) % self

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/%(user)s',
    'bin': '%(home)s/bin'
})

print dictionary["home"]
print dictionary["bin"]
The nearest I came up with without writing a class:

dictionary = {
    'user': 'gnucom',
    'home': lambda: '/home/' + dictionary['user']
}

print dictionary['home']()
dictionary['user'] = 'tony'
print dictionary['home']()
>>> dictionary = {
...     'a': '123'
... }
>>> dictionary['b'] = dictionary['a'] + '456'
>>> dictionary
{'a': '123', 'b': '123456'}
This works fine, but when you try to use dictionary inside its own literal, it hasn't been defined yet (the literal has to be evaluated first).
But be careful: this assigns to the key 'b' the value referenced by the key 'a' at the time of assignment; it is not going to redo the lookup every time. If that is what you are looking for, it's possible, but it takes more work.
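Continuing the session above to illustrate that snapshot behavior:

>>> dictionary['a'] = '999'
>>> dictionary['b']  # unchanged; no re-lookup happens
'123456'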
What you're describing in your edit is how an INI config file works. Python has a built-in library called ConfigParser (configparser in Python 3) which should work for what you're describing.
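For example, here is a minimal sketch using Python 3's configparser (the section and key names are made up); its default BasicInterpolation expands %(key)s references at lookup time, so updating one key is reflected in the others:

import configparser

config = configparser.ConfigParser()
config.read_string("""
[paths]
user = sholsapp
home = /home/%(user)s
""")

print(config["paths"]["home"])    # /home/sholsapp
config["paths"]["user"] = "tony"  # interpolation happens on access,
print(config["paths"]["home"])    # so this now prints /home/tony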
This is an interesting problem. It seems like Greg has a good solution. But that's no fun ;)
jsbueno has a very elegant solution, but that only applies to strings (as you requested).
The trick to a 'general' self referential dictionary is to use a surrogate object. It takes a few (understatement) lines of code to pull off, but the usage is about what you want:
S = SurrogateDict(AdditionSurrogateDictEntry)
d = S.resolve({'user': 'gnucom',
               'home': '/home/' + S['user'],
               'config': [S['home'] + '/.emacs', S['home'] + '/.bashrc']})
The code to make that happen is not nearly so short. It lives in three classes:
import abc

class SurrogateDictEntry(object):
    __metaclass__ = abc.ABCMeta

    def __init__(self, key):
        """Record the key on the real dictionary that this will resolve
        to a value for.
        """
        self.key = key

    def resolve(self, d):
        """Return the actual value."""
        if hasattr(self, 'op'):
            # Any operation done on self will store its name in self.op.
            # If this is set, resolve it by calling the appropriate method
            # now that we can get self.value out of d.
            self.value = d[self.key]
            return getattr(self, self.op + 'resolve__')()
        else:
            return d[self.key]

    @staticmethod
    def make_op(opname):
        """A convenience method. This will be the form of all op hooks for
        subclasses. The actual logic for the op is in __op__resolve__
        (e.g. __add__resolve__).
        """
        def op(self, other):
            self.stored_value = other
            self.op = opname
            return self
        op.__name__ = opname
        return op
Next comes the concrete class. Simple enough:
class AdditionSurrogateDictEntry(SurrogateDictEntry):

    __add__ = SurrogateDictEntry.make_op('__add__')
    __radd__ = SurrogateDictEntry.make_op('__radd__')

    def __add__resolve__(self):
        return self.value + self.stored_value

    def __radd__resolve__(self):
        return self.stored_value + self.value
Here's the final class:

class SurrogateDict(object):
    def __init__(self, EntryClass):
        self.EntryClass = EntryClass

    def __getitem__(self, key):
        """Record the key and return a surrogate entry."""
        return self.EntryClass(key)

    @staticmethod
    def resolve(d):
        """Resolve the self references in d."""
        stack = [d]
        while stack:
            cur = stack.pop()
            # This just tries to set it to an appropriate iterable.
            it = xrange(len(cur)) if not hasattr(cur, 'keys') else cur.keys()
            for key in it:
                # Just register your class with SurrogateDictEntry and
                # you can pass whatever.
                while isinstance(cur[key], SurrogateDictEntry):
                    cur[key] = cur[key].resolve(d)
                # I'm just going to check for __iter__, but you can add
                # other checks here for items that we should loop over.
                if hasattr(cur[key], '__iter__'):
                    stack.append(cur[key])
        return d
In response to gnucom's question about why I named the classes the way that I did:
The word surrogate is generally associated with standing in for something else, so it seemed appropriate, because that's what the SurrogateDict class does: an instance replaces the 'self' references in a dictionary literal. That being said (other than just being straight-up stupid sometimes), naming is probably one of the hardest things for me about coding. If you (or anyone else) can suggest a better name, I'm all ears.
I'll provide a brief explanation. Throughout, S refers to an instance of SurrogateDict and d is the real dictionary.
1. A reference S[key] triggers S.__getitem__, which constructs a SurrogateDictEntry(key) to be placed in d.
2. When SurrogateDictEntry(key) is constructed, it stores key. This will be the key into d for the value that this SurrogateDictEntry is acting as a surrogate for.
3. After S[key] is returned, it is either entered into d directly, or has some operation(s) performed on it. An operation triggers the relevant __op__ method, which simply stores the value that the operation is performed with and the name of the operation, and then returns itself. We can't actually resolve the operation yet because d hasn't been constructed.
4. After d is constructed, it is passed to S.resolve. This method loops through d, finding any instances of SurrogateDictEntry and replacing them with the result of calling the resolve method on the instance.
5. The SurrogateDictEntry.resolve method receives the now constructed d as an argument and can use the value of key that it stored at construction time to get the value that it is acting as a surrogate for. If an operation was performed on it after creation, the op attribute will have been set with the name of the operation. If the class has an __op__ method, then it has an __op__resolve__ method with the actual logic that would normally be in the __op__ method. So now we have the logic (self.__op__resolve__) and all the necessary values (self.value, self.stored_value) to finally get the real value of d[key]. We return that, which step 4 places in the dictionary.
6. Finally, the SurrogateDict.resolve method returns d with all references resolved.
That's a rough sketch. If you have any more questions, feel free to ask.
If you, just like me, were wondering how to make @jsbueno's snippet work with {}-style substitutions, below is example code (which is probably not very efficient, though):
import string

class MyDict(dict):
    def __init__(self, *args, **kw):
        super(MyDict, self).__init__(*args, **kw)
        self.itemlist = super(MyDict, self).keys()
        self.fmt = string.Formatter()

    def __getitem__(self, item):
        return self.fmt.vformat(dict.__getitem__(self, item), {}, self)

xs = MyDict({
    'user': 'gnucom',
    'home': '/home/{user}',
    'bin': '{home}/bin'
})
>>> xs["home"]
'/home/gnucom'
>>> xs["bin"]
'/home/gnucom/bin'
I tried to make it work with a simple replacement of % self by .format(**self), but it turns out that doesn't work for nested expressions (like 'bin' in the listing above, which references 'home', which has its own reference to 'user') because of the evaluation order: the ** expansion is done before the actual format call, so it isn't delayed like in the original % version.
Write a class, maybe something with properties:
class PathInfo(object):
    def __init__(self, user):
        self.user = user

    @property
    def home(self):
        return '/home/' + self.user

p = PathInfo('thc')
print p.home  # /home/thc
As a sort of extended version of @Tony's answer, you could build a dictionary subclass that calls its values if they are callables:
class CallingDict(dict):
    """Returns the result rather than the value of referenced callables.

    >>> cd = CallingDict({1: "One", 2: "Two", 'fsh': "Fish",
    ...                   "rhyme": lambda d: ' '.join((d[1], d['fsh'],
    ...                                                d[2], d['fsh']))})
    >>> cd["rhyme"]
    'One Fish Two Fish'
    >>> cd[1] = 'Red'
    >>> cd[2] = 'Blue'
    >>> cd["rhyme"]
    'Red Fish Blue Fish'
    """
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it):
            return it(self)
        else:
            return it
Of course this is only usable if you're not actually going to store callables as values. If you need to be able to do that, you could wrap the lambda declaration in a function that adds some attribute to the resulting lambda, and check for it in CallingDict.__getitem__ (see the sketch below). But at that point it's getting complex and long-winded enough that it might just be easier to use a class for your data in the first place.
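Here is a minimal sketch of that attribute-marking idea; the computed helper and the is_computed flag are invented names for illustration, not part of any library:

def computed(fn):
    """Mark a callable so __getitem__ knows to invoke it on lookup."""
    fn.is_computed = True
    return fn

class CallingDict(dict):
    """Variant that only calls values explicitly marked as computed."""
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it) and getattr(it, 'is_computed', False):
            return it(self)
        return it

cd = CallingDict({
    'user': 'gnucom',
    'home': computed(lambda d: '/home/' + d['user']),
    'callback': lambda: 'stored as-is',  # an unmarked callable passes through
})
assert cd['home'] == '/home/gnucom'
assert callable(cd['callback'])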
This is very easy in a lazily evaluated language (Haskell).
Since Python is strictly evaluated, we can use a little trick to turn things lazy:
Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))

d1 = lambda self: lambda: {
    'a': lambda: 3,
    'b': lambda: self()['a']()
}

# fix d1, and evaluate it
d2 = Y(d1)()

# to get a
d2['a']()  # 3
# to get b
d2['b']()  # 3
Syntax-wise this is not very nice: we have to explicitly construct lazy expressions with lambda: ... and explicitly evaluate them with ...(). It's the opposite of the problem in lazy languages, which need strictness annotations; here in Python we end up needing lazy annotations. I think that with some more metaprogramming and some more tricks, the above could be made easier to use.
Note that this is basically how let-rec works in some functional languages.
The jsbueno answer in Python 3:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item).format(self)

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/{0[user]}',
    'bin': '{0[home]}/bin'
})

print(dictionary["home"])
print(dictionary["bin"])
Here we use the Python 3 string formatting with curly braces {} and the .format() method.
Documentation: https://docs.python.org/3/library/string.html
I have a Python module that has a number of simple enums that are defined as follows:
class WordType(Enum):
    ADJ = "Adjective"
    ADV = "Adverb"

class Number(Enum):
    S = "Singular"
    P = "Plural"
Because there are a lot of these enums and I only decide at runtime which enums to query for any given input, I wanted a function that can retrieve the value given the enum-type and the enum-value as strings. I succeeded in doing that as follows:
names = inspect.getmembers(sys.modules[__name__], inspect.isclass)

def get_enum_type(name: str):
    enum_class = [x[1] for x in names if x[0] == name]
    return enum_class[0]

def get_enum_value(object_name: str, value_name: str):
    return get_enum_type(object_name)[value_name]
This works well, but now I'm adding type hinting and I'm struggling with how to define the return types for these methods: I've tried slice and Literal[], both suggested by mypy, but neither checks out (maybe because I don't understand what type parameter I can give to Literal[]).
I am willing to modify the enum definitions, but I'd prefer to keep the dynamic querying as-is. Worst case scenario, I can do # type: ignore or just return -> Any, but I hope there's something better.
As you don't want to type-check for just any Enum, I suggest introducing a base type (say GrammaticalEnum) to mark all your grammatical enums, and grouping them in their own module:
# module grammar_enums
import sys
import inspect
from enum import Enum

class GrammaticalEnum(Enum):
    """use as a base to mark all grammatical enums"""
    pass

class WordType(GrammaticalEnum):
    ADJ = "Adjective"
    ADV = "Adverb"

class Number(GrammaticalEnum):
    S = "Singular"
    P = "Plural"

# keep this statement at the end, as all enums must be known first
grammatical_enums = dict(
    m for m in inspect.getmembers(sys.modules[__name__], inspect.isclass)
    if issubclass(m[1], GrammaticalEnum))

# you might prefer the shorter alternative:
# grammatical_enums = {k: v for (k, v) in globals().items()
#                      if inspect.isclass(v) and issubclass(v, GrammaticalEnum)}
Regarding typing, yakir0 already suggested the right types, but with the common base you can narrow them. If you like, you could even get rid of your functions altogether:
from typing import Type

from grammar_enums import grammatical_enums as g_enums
from grammar_enums import GrammaticalEnum

# just use g_enums instead of get_enum_value, like this:
WordType_ADJ: GrammaticalEnum = g_enums['WordType']['ADJ']

# ...or use your old functions:
# (as your grammatical enums are collected in a dict now,
# you don't need this function any more)
def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return g_enums[name]

def get_enum_value(enum_name: str, value_name: str) -> GrammaticalEnum:
    # return get_enum_type(enum_name)[value_name]
    return g_enums[enum_name][value_name]
You can always run your functions and print the result of the function to get a sense of what it should be. Note that you can use Enum in type hinting like any other class.
For example:
>>> result = get_enum_type('WordType')
... print(result)
... print(type(result))
<enum 'WordType'>
<class 'enum.EnumMeta'>
So you can actually use
get_enum_type(name: str) -> EnumMeta
But you can make it prettier by using Type from typing since EnumMeta is the type of a general Enum.
get_enum_type(name: str) -> Type[Enum]
For a similar process with get_enum_value you get
>>> type(get_enum_value('WordType', 'ADJ'))
<enum 'WordType'>
Obviously you won't always return the type WordType so you can use Enum to generalize the return type.
To sum it all up:
get_enum_type(name: str) -> Type[Enum]
get_enum_value(object_name: str, value_name: str) -> Enum
As I said in a comment, I don't think it's possible to have your dynamic code and have MyPy predict the outputs. For example, I don't think it would be possible to have MyPy know that get_enum_type("WordType") should be a WordType whereas get_enum_type("Number") should be a Number.
As others have said, you could clarify that they'll be Enums. You could add a base type and say that they'll specifically be one of the base types. Part of the problem is that, although you could promise it, MyPy wouldn't be able to confirm it: it can't know that inspect.getmembers(sys.modules[__name__], inspect.isclass) will only produce Enums or GrammaticalEnums in position [1] of its result tuples.
If you're willing to change the implementation of your lookup, then I'd suggest you could make profitable use of __init_subclass__. Something like:
from enum import Enum
from typing import Dict, Type

GRAMMATICAL_ENUM_LOOKUP: Dict[str, Type["GrammaticalEnum"]] = {}

class GrammaticalEnum(Enum):
    def __init_subclass__(cls, **kwargs):
        GRAMMATICAL_ENUM_LOOKUP[cls.__name__] = cls
        super().__init_subclass__(**kwargs)

def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return GRAMMATICAL_ENUM_LOOKUP[name]
This at least has the advantage that MyPy can see what's going on and should be broadly happy with it. It knows that everything in GRAMMATICAL_ENUM_LOOKUP will in fact be a valid GrammaticalEnum subclass, because that's all it ever gets populated with.
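A quick usage sketch under that scheme (WordType is the enum from the question): merely defining the subclass registers it, so the lookup stays in sync automatically.

class WordType(GrammaticalEnum):
    ADJ = "Adjective"
    ADV = "Adverb"

assert get_enum_type("WordType") is WordType
assert get_enum_type("WordType")["ADJ"] is WordType.ADJ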
In this class definition, every parameter occurs three times, which seems to violate the DRY (don't repeat yourself) principle:
class Foo:
    def __init__(self, a=1, b=2.0, c=(3, 4, 5)):
        self.a = int(a)
        self.b = float(b)
        self.c = list(c)
DRY could be applied like this (Python 3):
class Foo:
    def __init__(self, **kwargs):
        defaults = dict(a=1, b=2.0, c=[3, 4, 5])
        for k, v in defaults.items():
            setattr(self, k, type(v)(kwargs[k]) if k in kwargs else v)
        # ...detect illegal keywords here...
However, this breaks IDE autocomplete (tried Spyder and Elpy) and pylint will complain if I try to access the attributes later on.
Is there a clean way to handle this?
Edit: The example has three parameters, but I find myself dealing with this when there are 15 parameters, where I only rarely need to override the defaults; often with more complicated types, where I would need to do
if not isinstance(kwargs['x'], SomeClass):
    raise TypeError('x: must be SomeClass')
self.x = kwargs['x']
for each of them. Moreover, I can't use mutables as default values for keyword arguments.
Principles like DRY are important, but it's worth keeping in mind the rationale for such a principle before blindly applying it. Arguably the biggest advantage of DRY code is maintainability: you only have to modify the code in one place, and you don't risk the subtle bugs that can occur when code is modified in one place and not another. DRY can be antithetical to other common principles like YAGNI and KISS, and choosing the correct balance for your application is important.
In particular, DRY often applies to default values, application logic, and other things that could cause bugs if changed in one place and not another. IMO variable names don't fit in the same way: renaming every occurrence of Foo's instance variable a won't silently break anything just because the parameter name in the initializer wasn't changed as well.
With that in mind, we have a simple test for your code. Are these variables likely to change together, or is the initializer for Foo a layer of abstraction that allows a refactoring of the inputs independently of the class's instance variables?
Change Together: I rather like @chepner's answer, and I'd take it one step further. If your class is anything more than a data transfer object, you can use @chepner's solution as a way to logically group related pieces of data (which admittedly could be unnecessary in your situation; without some context it's difficult to choose an optimal way to introduce such an idea), e.g.
from dataclasses import dataclass

@dataclass
class MyData:
    a: int
    b: float
    c: list

class Foo:
    def __init__(self, my_data):
        self.wrapped = my_data
Change Separately: Then just leave it alone, or KISS as they say.
As a preface, your code
class Foo:
    def __init__(self, a=1, b=2.0, c=(3, 4, 5)):
        self.a = int(a)
        self.b = float(b)
        self.c = list(c)
is, as mentioned in several comments, fine as it is. Code is read far more than it is written, and aside from needing to be careful to avoid typos in the names when first defining this, the intent is perfectly clear. (Though see the end of the answer regarding the default value of c.)
If you are using Python 3.7, you can use a data class to reduce the number of references you make to each variable.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Foo:
    a: int = 1
    b: float = 2.0
    c: List[int] = field(default_factory=lambda: [3, 4, 5])
This doesn't prevent you from violating the type hints (Foo("1") will happily set a = "1" instead of a = 1 or raising an error), but it's typically the responsibility of the caller to provide arguments of the correct type. If you really want to enforce this at run time, you can add a __post_init__ method:
def __post_init__(self):
    self.a = int(self.a)
    self.b = float(self.b)
    self.c = list(self.c)
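For example, a quick sanity check, assuming the __post_init__ above has been added to the dataclass:

f = Foo("1", "2.5", (3, 4, 5))
assert f.a == 1 and f.b == 2.5 and f.c == [3, 4, 5]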
But if you do that, you may as well go back to your original hand-coded __init__ method.
As an aside, the standard idiom for mutable default arguments is
def __init__(self, a=1, b=2.0, c=None):
    ...
    if c is None:
        c = [3, 4, 5]
Your approach has two problems:
It requires that list be run for every instantiation, rather than letting the compiler hard-code [3, 4, 5].
If you were type-hinting the arguments to __init__, your default value doesn't match the intended type. You'd have to write something like
def __init__(self, a: int = 1, b: float = 2.0,
             c: Union[List[int], Tuple[int, int, int]] = (3, 4, 5)):
A default value of None automatically causes a "promotion" of the type to a corresponding optional type. The following are equivalent:
def __init__(self, a: int = 1, b: float = 2.0, c: List[int] = None):
def __init__(self, a: int = 1, b: float = 2.0, c: Optional[List[int]] = None):
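The reason for the idiom is worth spelling out: a default value is evaluated once, at function definition time, so a mutable default is shared across all calls. A quick illustration:

def bad(c=[3, 4, 5]):
    c.append(6)
    return c

print(bad())  # [3, 4, 5, 6]
print(bad())  # [3, 4, 5, 6, 6] -- the same list object every call

def good(c=None):
    if c is None:
        c = [3, 4, 5]
    c.append(6)
    return c

print(good())  # [3, 4, 5, 6]
print(good())  # [3, 4, 5, 6] -- a fresh list each call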
If I have some (string) values from a GET or POST request with the associated Property instances, one IntegerProperty and one TextProperty, say, is there a way to convert the values to the proper (user) types without a long tedious chain of isinstance calls?
I'm looking to reproduce this sort of functionality (all input validation omitted for clarity):
for key, value in self.request.POST.iteritems():
    prop = MyModel._properties[key]
    if isinstance(prop, ndb.IntegerProperty):
        value = int(value)
    elif isinstance(prop, (ndb.TextProperty, ndb.StringProperty)):
        pass  # it's already the right type
    elif ...:
        ...
    else:
        raise RuntimeError("I don't know how to deal with this property: {}"
                           .format(prop))
    setattr(mymodelinstance, key, value)
For example, if there is a way to get the int class from an IntegerProperty and the bool class from a BooleanProperty etc., that would do the job.
The ndb metadata API doesn't really solve this elegantly, as far as I can see; with get_representations_of_kind I can reduce the number of cases, though.
You can use a dict to map from user-defined types to built-in types, using the type of the object as the key and the built-in type as the value. For example:
class IntegerProperty(int):
    pass

class StringProperty(str):
    pass

a, b = IntegerProperty('1'), StringProperty('string')

def to_primitive(obj):
    switch = {IntegerProperty: int, StringProperty: str}
    return switch[type(obj)](obj)

for x in (a, b):
    print(to_primitive(x))
Because the key here is the type of the object rather than an isinstance check, a KeyError will be raised if the exact type is not in the dict. So even when more than one user-defined type maps to a single built-in type, you have to add every user-defined type to the switch dict explicitly. For example:
class TextProperty(StringProperty):
    pass

switch = {IntegerProperty: int, StringProperty: str, TextProperty: str}
Above we have added the new TextProperty to the switch even though TextProperty is a subclass of StringProperty. If you don't want to do that, you have to find the key with an isinstance check. Here's how to do it:
class IntegerProperty(int):
    pass

class StringProperty(str):
    pass

class TextProperty(StringProperty):
    pass

a, b, c = IntegerProperty('1'), StringProperty('string'), TextProperty('text')

def to_primitive(obj):
    switch = {IntegerProperty: int, StringProperty: str}
    # materialize the matches in a list so this also works in Python 3,
    # where filter returns an iterator
    keys = [cls for cls in switch if isinstance(obj, cls)]
    if not keys:
        raise TypeError('Unknown type: {}'.format(repr(obj)))
    return switch[keys[0]](obj)

for x in (a, b, c):
    print(to_primitive(x))
There is a Python package called WTForms that is enormously helpful for this, and generally makes form processing a much more pleasant experience.
Here is a really simple example:
import wtforms as wt

class MyForm(wt.Form):
    # (using the standard TextAreaField; the original snippet's
    # MyTextAreaField was presumably a custom field subclass)
    text = wt.TextAreaField("Enter text")
    n = wt.IntegerField("A number")

f = MyForm(self.request.POST)
f.validate()
print f.text.data
print f.n.data
Calling f.validate() will automatically convert the POST data to the data type specified by the form. So f.text.data will be a string and f.n.data will be an int.
It also gracefully handles invalid data. If a user inputs a letter for the integer field, then f.n.data will be None. You can also specify error messages that are easily incorporated into your web page.
WTForms takes a huge amount of drudgery out of form processing!
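A rough sketch of that invalid-input behavior (WTForms expects a MultiDict-like mapping, so this uses werkzeug's; the exact error text may vary between WTForms versions):

from werkzeug.datastructures import MultiDict
import wtforms as wt

class MyForm(wt.Form):
    n = wt.IntegerField("A number")

f = MyForm(MultiDict([("n", "abc")]))  # a letter where an int is expected
print(f.validate())  # False
print(f.n.data)      # None
print(f.errors)      # e.g. {'n': ['Not a valid integer value.']}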
Python 2.5.4. Fairly new to Python, brand new to decorators as of last night. If I have a class with multiple boolean attributes:
class Foo(object):
    _bool1 = True
    _bool2 = True
    _bool3 = True
    # et cetera

    def __init__(self):
        self._bool1 = True
        self._bool2 = False
        self._bool3 = True
        # et cetera
Is there a way to use a single decorator to check that any setting of any of the boolean attributes must be a boolean, and to return the boolean value for any requested one of these variables?
In other words, as opposed to something like this for each attribute?
def bool1():
    def fget(self):
        return self._bool1
    def fset(self, value):
        if value != True and value != False:
            print "bool1 not a boolean value. exiting"
            exit()
        self._bool1 = value
    return locals()

bool1 = property(**bool1())
# same thing for bool2, bool3, etc...
I have tried to write it as something like this:
def stuff(obj):
    def boolx():
        def fget(self):
            return obj
        def fset(self, value):
            if value != True and value != False:
                print "Non-bool value"  # name of object???
                exit()
            obj = value
        return locals()
    return property(**boolx())

bool1 = stuff(_bool1)
bool2 = stuff(_bool2)
bool3 = stuff(_bool3)
which gives me:
File "C:/PQL/PythonCode_TestCode/Tutorials/Decorators.py", line 28, in stuff
return property(**boolx())
TypeError: 'obj' is an invalid keyword argument for this function
Any pointers on how to do this correctly?
Thanks,
Paul
You can try using a descriptor:
class BooleanDescriptor(object):
    def __init__(self, attr):
        self.attr = attr

    def __get__(self, instance, owner):
        return getattr(instance, self.attr)

    def __set__(self, instance, value):
        if value in (True, False):
            return setattr(instance, self.attr, value)
        else:
            raise TypeError

class Foo(object):
    _bar = False
    bar = BooleanDescriptor('_bar')
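Usage would look roughly like this:

f = Foo()
f.bar = True   # accepted, stored on f._bar
print f.bar    # True
f.bar = "yes"  # raises TypeError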
EDIT:
As S.Lott mentioned, Python favors duck typing over type checking.
Two important things.
First, "class-level" attributes are shared by all instances of the class. Like static in Java. It's not clear from your question if you're really talking about class-level attributes.
Generally, most OO programming is done with instance variables, like this.
class Foo(object):
    def __init__(self):
        self._bool1 = True
        self._bool2 = False
        self._bool3 = True
        # et cetera
Second point. We don't waste a lot of time validating the types of arguments.
If a mysterious "someone" provides wrong type data, our class will crash and that's pretty much the best possible outcome.
Fussing around with type and domain validation is a lot of work to make your class crash in a different place. Ultimately, the exception (TypeError) is the same, so the extra checking turns out to have little practical value.
Indeed, extra domain checking can (and often does) backfire when someone creates an alternate implementation of bool and your class rejects this perfectly valid class that has all the same features as built-in bool.
Do not conflate human-input range checking with Python type checking. Human input (or stuff you read from files or URIs) must be range checked, but not type checked. The piece of the application that reads the external data defines the type; there's no need to check it, and there won't be any mysteries.
The "what if I use the wrong type and my program appears to work but didn't" scenario doesn't actually make any sense. First, find two types that have the same behavior right down the line but produce slightly different results. The only example is int vs. float, and the only time is really matters is around division, and that's taken care of by the two division operators.
If you "accidentally" use a string where a number was required, your program will die. Reliably. Consistently.