Is it possible to assign a numeric value to a variable in such a way that it is limited to a certain range? More specifically, I want a variable that can never go below zero; if that were about to happen, an exception should be raised.
Imaginary example:
>>> var = AlwaysPositive(0)
>>> print var
0
>>> var += 3
>>> print var
3
>>> var -= 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AlwaysPositiveError: dropping AlwaysPositive integer below zero
The reason I ask is that I am debugging a game I am writing. Humans understand implicitly that you can never have -1 cards in your hand, but a computer does not. I could write functions that check all the values used in the game, call them at various points throughout the script, and see if any weird values appear. But I was wondering whether there is an easier way to do this.
Sub-classing int is probably the best way to do this if you really need to, but the implementations shown so far are naive. I would do:
class NegativeValueError(ValueError):
    pass

class PositiveInteger(int):
    def __new__(cls, value, base=10):
        if isinstance(value, basestring):
            inst = int.__new__(cls, value, base)
        else:
            inst = int.__new__(cls, value)
        if inst < 0:
            raise NegativeValueError()
        return inst

    def __repr__(self):
        return "PositiveInteger({})".format(int.__repr__(self))

    def __add__(self, other):
        return PositiveInteger(int.__add__(self, other))

    # ... implement other numeric type methods (__sub__, __mul__, etc.)
This allows you to construct a PositiveInteger just like a regular int:
>>> PositiveInteger("FFF", 16)
PositiveInteger(4095)
>>> PositiveInteger(5)
PositiveInteger(5)
>>> PositiveInteger(-5)
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
PositiveInteger(-5)
File "<pyshell#17>", line 8, in __new__
raise NegativeValueError()
NegativeValueError
See e.g. the datamodel docs on numeric type emulation for details of the methods you will need to implement. Note that you don't need to explicitly check for negative numbers in most of those methods, as when you return PositiveInteger(...) the __new__ will do it for you. In use:
>>> i = PositiveInteger(5)
>>> i + 3
PositiveInteger(8)
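For example, __sub__ can follow the same pattern as __add__ above; this is only a sketch of one more method added to PositiveInteger, and the remaining operators from the datamodel docs follow analogously:

    def __sub__(self, other):
        # int.__sub__ may produce a negative plain int; re-wrapping it in
        # PositiveInteger re-runs the check in __new__ and raises if needed
        return PositiveInteger(int.__sub__(self, other))

>>> PositiveInteger(5) - 2
PositiveInteger(3)
>>> PositiveInteger(5) - 7
Traceback (most recent call last):
  ...
NegativeValueError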
Alternatively, if these non-negative integers will be attributes of a class, you could enforce positive values using the descriptor protocol, e.g.:
class PositiveIntegerAttribute(object):

    def __init__(self, name):
        self.name = name

    def __get__(self, obj, typ=None):
        return getattr(obj, self.name)

    def __set__(self, obj, val):
        if not isinstance(val, (int, long)):
            raise TypeError()
        if val < 0:
            raise NegativeValueError()
        setattr(obj, self.name, val)

    def __delete__(self, obj):
        delattr(obj, self.name)
You can then use this as follows:
>>> class Test(object):
        foo = PositiveIntegerAttribute('_foo')
>>> t = Test()
>>> t.foo = 1
>>> t.foo = -1
Traceback (most recent call last):
File "<pyshell#34>", line 1, in <module>
t.foo = -1
File "<pyshell#28>", line 13, in __set__
raise NegativeValueError()
NegativeValueError
>>> t.foo += 3
>>> t.foo
4
>>> t.foo -= 5
Traceback (most recent call last):
File "<pyshell#37>", line 1, in <module>
t.foo -= 5
File "<pyshell#28>", line 13, in __set__
raise NegativeValueError()
NegativeValueError
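For the card game from the question, the same descriptor could guard a hand-size attribute; Player and cards below are made-up names for illustration:

class Player(object):
    cards = PositiveIntegerAttribute('_cards')

    def __init__(self):
        self.cards = 0  # assignment goes through __set__, so it is validated

    def draw(self, n=1):
        self.cards += n

    def play(self, n=1):
        self.cards -= n  # raises NegativeValueError if the hand would go below zero

>>> p = Player()
>>> p.draw(3)
>>> p.play(5)
Traceback (most recent call last):
  ...
NegativeValueError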
You can subclass int to create your own data type and provide it with a bunch of magic methods that overload the operators you need.
class Alwayspositive(int):
    def __init__(self, *args, **kwargs):
        super(Alwayspositive, self).__init__(*args, **kwargs)

    def __neg__(self):
        raise AlwayspositiveError()

    def __sub__(self, other):
        result = super(Alwayspositive, self).__sub__(other)
        if result < 0:
            raise AlwayspositiveError()
        return result
And so on. It is quite a lot of work and debugging to make such a class safe, but it will let you debug your code with very few changes between debug and release mode.
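For illustration, assuming AlwayspositiveError is defined as a plain Exception subclass, a session might look like this:

>>> class AlwayspositiveError(Exception):
...     pass
...
>>> x = Alwayspositive(3)
>>> x - 1
2
>>> x - 4
Traceback (most recent call last):
  ...
AlwayspositiveError

Note that __sub__ as written returns a plain int, so if you want the protection to survive chained arithmetic you would wrap the result in Alwayspositive(...) before returning it.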
Related
I have an unorthodox Enum that I plan to use in my code, but I've run into a problem: I need my property to throw an error when the Enum is used incorrectly. However, instead of raising the exception, it printed my property's address.
How I want my code to work:
When the user writes Enum.MEMBER.text, return Enum.MEMBER's alternate text.
When the user writes Enum.text, throw an error.
Here's the code snippet:
class MyEnum(Enum):
    @property
    def text(self):
        if isinstance(self._value_, MyCapsule): return self._value_.text
        raise Exception('You are not using an Enum!')
        return None

    @property
    def value(self):
        if isinstance(self._value_, MyCapsule): return self._value_.value
        raise Exception('You are not using an Enum!')
        return None

class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

    @property
    def text(self): return self._text_

    @property
    def value(self): return self._value_

class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')

A = CustomData.ONE
B = CustomData

print(A.text, A.value, sep=' | ')
print(B.text, B.value, sep=' | ')
The output is:
One | 1
<property object at 0x0000016CA56DF0E8> | <property object at 0x0000016CA56DF278>
What I expected was:
One | 1
Unexpected exception at ....
Is there a solution to this problem, or should I not write my Enum this way to begin with?
A custom descriptor will do the trick:
class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass
Then change your base Enum to use it:
class MyEnum(Enum):

    @property_only
    def text(self):
        return self._value_.text

    @property_only
    def value(self):
        return self._value_.value

class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')
and in use:
>>> CustomData.text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
While that solves the "access-only-from-enum" problem, you still have a lot of indirection when you want to access text and value:
>>> CustomData.ONE.value._value_
1
>>> CustomData.ONE.value._text_
'One'
The solution is to incorporate MyCapsule directly into CustomData:
from enum import Enum

class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass

class CustomData(Enum):
    #
    ONE = 1, 'One', 'Lorem'
    TWO = 2, 'Two', 'Ipsum'
    TRI = 3, 'Tri', 'Loipsum'
    #
    def __new__(cls, value, text, more_data):
        member = object.__new__(cls)
        member._value_ = value
        member._text_ = text
        # ignoring more_data for now...
        return member
    #
    @property_only
    def text(self):
        return self._text_
    #
    @property_only
    def value(self):
        return self._value_
and in use:
>>> CustomData.ONE
<CustomData.ONE: 1>
>>> CustomData.ONE.value
1
>>> CustomData.ONE.text
'One'
>>> CustomData.ONE.name
'ONE'
>>> CustomData.text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
>>> CustomData.value
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
While learning the concepts of decorators in Python, I came to wonder whether it is possible to use decorators to simulate a state machine.
Example:
from enum import Enum

class CoffeeState(Enum):
    # defined first so the decorator arguments below can reference it
    Initial = 0
    Grounding = 1
    Heating = 2
    Pumping = 3

class CoffeeMachine(object):

    def __init__(self):
        self.state = CoffeeState.Initial

    # Statemachine(shouldbe, willbe)
    @Statemachine(CoffeeState.Initial, CoffeeState.Grounding)
    def ground_beans(self):
        print("ground_beans")

    @Statemachine(CoffeeState.Grounding, CoffeeState.Heating)
    def heat_water(self):
        print("heat_water")

    @Statemachine(CoffeeState.Heating, CoffeeState.Pumping)
    def pump_water(self):
        print("pump_water")
So all the state machine decorator does is check whether my current state is the required one; if it is, it should call the underlying function and finally advance the state.
How would you implement this?
Sure you can, provided your decorator makes an assumption about where the state is stored:
from functools import wraps

class StateMachineWrongState(Exception):

    def __init__(self, shouldbe, current):
        self.shouldbe = shouldbe
        self.current = current
        super().__init__((shouldbe, current))

def statemachine(shouldbe, willbe):

    def decorator(f):
        @wraps(f)
        def wrapper(self, *args, **kw):
            if self.state != shouldbe:
                raise StateMachineWrongState(shouldbe, self.state)
            try:
                return f(self, *args, **kw)
            finally:
                self.state = willbe
        return wrapper

    return decorator
The decorator expects to get self passed in; i.e. it should be applied to methods in a class. It then expects self to have a state attribute to track the state machine state.
Demo:
>>> cm = CoffeeMachine()
>>> cm.state
<CoffeeState.Initial: 0>
>>> cm.ground_beans()
ground_beans
>>> cm.state
<CoffeeState.Grounding: 1>
>>> cm.ground_beans()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in wrapper
__main__.StateMachineWrongState: (<CoffeeState.Initial: 0>, <CoffeeState.Grounding: 1>)
>>> cm.heat_water()
heat_water
>>> cm.pump_water()
pump_water
>>> cm.state
<CoffeeState.Pumping: 3>
I'm experiencing odd behaviour inside the __new__ method of a Python metaclass. I know the following code works fine:
def create_property(name, _type):

    def getter(self):
        return self.__dict__.get(name)

    def setter(self, val):
        if isinstance(val, _type):
            self.__dict__[name] = val
        else:
            raise ValueError("Type not correct.")

    return property(getter, setter)

class Meta(type):
    def __new__(cls, clsname, bases, clsdict):
        for key, val in clsdict.items():
            if isinstance(val, type):
                clsdict[key] = create_property(key, val)
        return super().__new__(cls, clsname, bases, clsdict)
But when I avoid defining the separate create_property function and instead put its code inside the for loop within __new__, weird things happen. The following is the modified code:
class Meta(type):
    def __new__(meta, name, bases, clsdict):
        for attr, data_type in clsdict.items():
            if not attr.startswith("_"):

                def getter(self):
                    return self.__dict__[attr]

                def setter(self, val):
                    if isinstance(val, data_type):
                        self.__dict__[attr] = val
                    else:
                        raise ValueError(
                            "Attribute '" + attr + "' must be " + str(data_type) + ".")

                clsdict[attr] = property(getter, setter)
        return super().__new__(meta, name, bases, clsdict)
The idea is to be able to create classes that behave like forms, i.e.:
class Company(metaclass=Meta):
    name = str
    stock_value = float
    employees = list

if __name__ == '__main__':
    c = Company()
    c.name = 'Apple'
    c.stock_value = 125.78
    c.employees = ['Tim Cook', 'Kevin Lynch']
    print(c.name, c.stock_value, c.employees, sep=', ')
When it is executed, different errors occur on different runs, such as:
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 13, in setter
if isinstance(val, data_type):
TypeError: isinstance() arg 2 must be a type or tuple of types
Traceback (most recent call last):
File "main.py", line 38, in <module>
c.stock_value = 125.78
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'name' must be <class 'str'>.
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'stock_value' must be <class 'float'>.
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'employees' must be <class 'list'>.
So, what is going on here? What is the difference between defining create_property separately and putting the same code inside the __new__ method?
That's due to how scoping and variable binding work in Python. You define a function in a loop that accesses a local variable; that variable is looked up when the function is executed, not bound when the function is defined:
fcts = []
for x in range(10):
    def f(): print(x)
    fcts.append(f)

for f in fcts: f()  # prints '9' 10 times, as x is 9 after the loop
As you've discovered, you can simply create a closure over the current loop value by using a utility function:
fcts = []

def make_f(x):
    def f(): print(x)
    return f

for x in range(10):
    fcts.append(make_f(x))

for f in fcts: f()  # prints '0' to '9'
Another possibility is to (ab)use a default argument, as those are assigned during function creation:
fcts = []
for x in range(10):
    def f(n=x): print(n)
    fcts.append(f)

for f in fcts: f()  # prints '0' to '9'
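Applied to the metaclass from the question, the default-argument trick would look roughly like this (a sketch; attr and data_type are bound per iteration via the defaults):

class Meta(type):
    def __new__(meta, name, bases, clsdict):
        for attr, data_type in clsdict.items():
            if not attr.startswith("_"):

                def getter(self, attr=attr):
                    return self.__dict__[attr]

                def setter(self, val, attr=attr, data_type=data_type):
                    if isinstance(val, data_type):
                        self.__dict__[attr] = val
                    else:
                        raise ValueError(
                            "Attribute '" + attr + "' must be " + str(data_type) + ".")

                clsdict[attr] = property(getter, setter)
        return super().__new__(meta, name, bases, clsdict)

With each name and type captured at definition time, the Company example from the question sets and prints all three attributes as expected.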
I am trying to find a way to implement the same concept as data annotation validation from .NET in Python. It would be something like the following:
class MyClass:
    @property
    def Message(self):
        return self._message

    @Message.setter
    @MaxValue(233)
    def Message(self, value):
        self._message = value
I have tried different approaches but without success. I would like access to the "value" argument in order to apply specific validation to it.
Does this help you? This way you can write Python decorators that take additional arguments:
def MaxValue(maxValue):
    def wrapFunction(function):
        def replacedMaxValueFunction(self, value):
            assert value <= maxValue
            return function(self, value)
        replacedMaxValueFunction.__name__ = function.__name__
        return replacedMaxValueFunction
    return wrapFunction
So now you can do this (I do not know whether it matches the C# behaviour exactly, but hopefully it does the checking you desire):
>>> @MaxValue(123)
def f(self, value):
    print(value)
>>> f(1, 123)
123
>>> f(1, 124)
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
f(1, 124)
File "<pyshell#1>", line 4, in replacedMaxValueFunction
assert value <= maxValue
AssertionError
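Hooked into the property setter from the question (with @Message.setter above @MaxValue(233), since decorators are applied bottom-up), a quick interactive check might look like this:

>>> obj = MyClass()
>>> obj.Message = 100
>>> obj.Message
100
>>> obj.Message = 500
Traceback (most recent call last):
  ...
AssertionError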
I have a Python class that calculates the number of bits when they have been specified using "Kb", "Mb" or "Gb" notation. I made bits() a @property so it always returns a float (and thus works well with int(BW('foo').bits)).
However, I am having trouble figuring out what to do when an instance of the class itself is converted with int(), as in int(BW('foo')). I have already defined __repr__() to return a string, but that code is clearly not used when the instance is converted to another type.
Is there any way to detect within my class that it is being converted to another type (and thus allow me to handle that case)?
>>> from Models.Network.Bandwidth import BW
>>> BW('98244.2Kb').bits
98244200.0
>>> int(BW('98244.2Kb').bits)
98244200
>>> BW('98244.2Kb')
98244200.0
>>> type(BW('98244.2Kb'))
<class 'Models.Network.Bandwidth.BW'>
>>>
>>> int(BW('98244.2Kb'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string or a number, not 'BW'
>>>
Read the documentation on emulating numeric types:
http://docs.python.org/reference/datamodel.html#emulating-numeric-types
In particular, see __int__() and __float__().
Basically what you want lies within overriding __trunc__ and __float__ in the Models.Network.Bandwidth.BW class:
#!/usr/bin/python

class NumBucket:
    def __init__(self, value):
        self.value = float(value)

    def __repr__(self):
        return str(self.value)

    def bits(self):
        return float(self.value)

    def __trunc__(self):
        return int(self.value)
a = NumBucket(1092)
print a
print int(a)
print int(a.bits())
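Since the linked datamodel section also covers __int__() and __float__(), a fuller sketch would add those two methods to NumBucket (or to the BW class from the question) so that int() and float() conversions work directly rather than through the __trunc__ fallback:

    def __int__(self):
        # used by int(instance)
        return int(self.value)

    def __float__(self):
        # used by float(instance)
        return self.value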