I'm experiencing odd behaviour inside the __new__ method of a Python metaclass. I know the following code works fine:
def create_property(name, _type):
    def getter(self):
        return self.__dict__.get(name)

    def setter(self, val):
        if isinstance(val, _type):
            self.__dict__[name] = val
        else:
            raise ValueError("Type not correct.")

    return property(getter, setter)
class Meta(type):
    def __new__(cls, clsname, bases, clsdict):
        for key, val in clsdict.items():
            if isinstance(val, type):
                clsdict[key] = create_property(key, val)
        return super().__new__(cls, clsname, bases, clsdict)
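For reference, here is how that working version behaves in practice; this is a minimal sketch that repeats the definitions above and adds an illustrative Point class:

```python
def create_property(name, _type):
    def getter(self):
        return self.__dict__.get(name)

    def setter(self, val):
        if isinstance(val, _type):
            self.__dict__[name] = val
        else:
            raise ValueError("Type not correct.")

    return property(getter, setter)


class Meta(type):
    def __new__(cls, clsname, bases, clsdict):
        for key, val in clsdict.items():
            if isinstance(val, type):
                clsdict[key] = create_property(key, val)
        return super().__new__(cls, clsname, bases, clsdict)


# Point is an illustrative class, not from the question
class Point(metaclass=Meta):
    x = int
    y = int


p = Point()
p.x = 3          # accepted: 3 is an int
print(p.x)       # -> 3
```

Because create_property is called once per attribute, each getter/setter closes over its own name and _type, which is exactly the behaviour the inlined version loses.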
But when I avoid defining the separate create_property function and put its code inside the for loop within __new__, weird stuff happens. The following is the modified code:
class Meta(type):
    def __new__(meta, name, bases, clsdict):
        for attr, data_type in clsdict.items():
            if not attr.startswith("_"):
                def getter(self):
                    return self.__dict__[attr]

                def setter(self, val):
                    if isinstance(val, data_type):
                        self.__dict__[attr] = val
                    else:
                        raise ValueError(
                            "Attribute '" + attr + "' must be " + str(data_type) + ".")

                clsdict[attr] = property(getter, setter)
        return super().__new__(meta, name, bases, clsdict)
The idea is to be able to create classes that behave like forms, e.g.:
class Company(metaclass=Meta):
    name = str
    stock_value = float
    employees = list

if __name__ == '__main__':
    c = Company()
    c.name = 'Apple'
    c.stock_value = 125.78
    c.employees = ['Tim Cook', 'Kevin Lynch']
    print(c.name, c.stock_value, c.employees, sep=', ')
When executed, different errors happen on different runs, such as:
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 13, in setter
if isinstance(val, data_type):
TypeError: isinstance() arg 2 must be a type or tuple of types
Traceback (most recent call last):
File "main.py", line 38, in <module>
c.stock_value = 125.78
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'name' must be <class 'str'>.
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'stock_value' must be <class 'float'>.
Traceback (most recent call last):
File "main.py", line 37, in <module>
c.name = 'Apple'
File "main.py", line 17, in setter
"Attribute '" + attr + "' must be " + str(data_type) + ".")
ValueError: Attribute 'employees' must be <class 'list'>.
So, what is going on here? What is the difference between defining create_property separately and inlining it in the __new__ method?
That's due to how scoping and variable binding work in Python. You define a function in a loop which accesses a local variable; but this local variable is looked up during execution of the function, not bound during its definition:
fcts = []
for x in range(10):
    def f(): print(x)
    fcts.append(f)

for f in fcts: f()  # prints '9' 10 times, as x is 9 after the loop
As you've discovered, you can create a closure over the current loop value by using a utility function:
fcts = []
def make_f(x):
    def f(): print(x)
    return f

for x in range(10):
    fcts.append(make_f(x))

for f in fcts: f()  # prints '0' to '9'
Another possibility is to (ab)use a default argument, as those are assigned during function creation:
fcts = []
for x in range(10):
    def f(n=x): print(n)
    fcts.append(f)

for f in fcts: f()  # prints '0' to '9'
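Applied back to the metaclass from the question, the default-argument trick looks like this; a sketch, where the extra isinstance(data_type, type) guard is my addition so that only type-valued attributes become properties:

```python
class Meta(type):
    def __new__(meta, name, bases, clsdict):
        for attr, data_type in clsdict.items():
            if not attr.startswith("_") and isinstance(data_type, type):
                # bind the current loop values as defaults, so every
                # property keeps its own attr / data_type
                def getter(self, attr=attr):
                    return self.__dict__[attr]

                def setter(self, val, attr=attr, data_type=data_type):
                    if isinstance(val, data_type):
                        self.__dict__[attr] = val
                    else:
                        raise ValueError(
                            "Attribute '" + attr + "' must be "
                            + str(data_type) + ".")

                clsdict[attr] = property(getter, setter)
        return super().__new__(meta, name, bases, clsdict)


class Company(metaclass=Meta):
    name = str
    stock_value = float


c = Company()
c.name = 'Apple'
c.stock_value = 125.78
print(c.name, c.stock_value, sep=', ')  # -> Apple, 125.78
```

Each getter/setter now evaluates attr and data_type once, at definition time, so the late-binding problem disappears without a helper function.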
I have an unorthodox Enum that I plan to use in my code, but I've hit a problem: I need my property to throw an error when the Enum is used incorrectly. Instead of throwing the exception, however, it outputs the property object's repr.
How I want my code to work is:
When the user writes Enum.MEMBER.text, return Enum.MEMBER's alt text.
When the user writes Enum.text, throw an error.
Here's the code snippet:
class MyEnum(Enum):
    @property
    def text(self):
        if isinstance(self._value_, MyCapsule):
            return self._value_.text
        raise Exception('You are not using an Enum!')

    @property
    def value(self):
        if isinstance(self._value_, MyCapsule):
            return self._value_.value
        raise Exception('You are not using an Enum!')
class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

    @property
    def text(self):
        return self._text_

    @property
    def value(self):
        return self._value_
class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')

A = CustomData.ONE
B = CustomData
print(A.text, A.value, sep=' | ')
print(B.text, B.value, sep=' | ')
The output is:
One | 1
<property object at 0x0000016CA56DF0E8> | <property object at 0x0000016CA56DF278>
What I expected was:
One | 1
Unexpected exception at ....
Is there a solution to this problem, or should I not write my Enum this way to begin with?
A custom descriptor will do the trick:
class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass
Then change your base Enum to use it:
class MyEnum(Enum):
    @property_only
    def text(self):
        return self._value_.text

    @property_only
    def value(self):
        return self._value_.value
class MyCapsule:
    def __init__(self, value, text, more_data):
        self._value_, self._text_ = (value, text)

class CustomData(MyEnum):
    ONE = MyCapsule(1, 'One', 'Lorem')
    TWO = MyCapsule(2, 'Two', 'Ipsum')
    TRI = MyCapsule(3, 'Tri', 'Loipsum')
and in use:
>>> CustomData.text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
While that solves the "access-only-from-enum" problem, you still have a lot of indirection when you want to access text and value:
>>> CustomData.ONE.value._value_
1
>>> CustomData.ONE.value._text_
'One'
The solution is to incorporate MyCapsule directly into CustomData:
from enum import Enum

class property_only(object):
    #
    def __init__(self, func):
        self.func = func
    #
    def __get__(self, instance, cls):
        if instance is None:
            raise Exception('You are not using an Enum!')
        else:
            return self.func(instance)
    #
    def __set__(self, instance, value):
        # raise error or set value here
        pass

class CustomData(Enum):
    #
    ONE = 1, 'One', 'Lorem'
    TWO = 2, 'Two', 'Ipsum'
    TRI = 3, 'Tri', 'Loipsum'
    #
    def __new__(cls, value, text, more_data):
        member = object.__new__(cls)
        member._value_ = value
        member._text_ = text
        # ignoring more_data for now...
        return member
    #
    @property_only
    def text(self):
        return self._text_
    #
    @property_only
    def value(self):
        return self._value_
and in use:
>>> CustomData.ONE
<CustomData.ONE: 1>
>>> CustomData.ONE.value
1
>>> CustomData.ONE.text
'One'
>>> CustomData.ONE.name
'ONE'
>>> CustomData.text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
>>> CustomData.value
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __get__
Exception: You are not using an Enum!
Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
Let's say I have this class:
class Person:
    def __init__(self, name):
        self.name = name
If I want to instantiate Person I can do:
me = Person("António")
But what if I only want to instantiate Person if name has type str?
I tried this:
class Person:
    def __init__(self, name):
        if type(name) == str:
            self.name = name
But then when I do:
me = Person("António")
print(me.name)
you = Person(1)
print(you.name)
I get this:

António
Traceback (most recent call last):
  ...
AttributeError: 'Person' object has no attribute 'name'
So all that's happening is:
If name is a str, the instance has a .name attribute
If name is not a str, the instance has no .name attribute
But what I actually want, is to stop instantiation all together if name is not an str.
In other words, I want it to be impossible to create an object from the Person class with a non str name.
How can I do that?
You could use a factory that checks the parameters, and returns a Person object if everything is fine, or raises an error. Maybe something like this:
class PersonNameError(Exception):
    pass

class Person:
    def __init__(self):
        self.name = None

def person_from_name(name: str) -> Person:
    """Person factory that checks whether the parameter name is valid;
    returns a Person object if it is, or raises an error without
    creating an instance of Person if not.
    """
    if isinstance(name, str):
        p = Person()
        p.name = name
        return p
    raise PersonNameError('a name must be a string')
p = person_from_name('Antonio')
Whereas:
p = person_from_name(123) # <-- parameter name is not a string
throws an exception:
PersonNameError Traceback (most recent call last)
<ipython-input-41-a23e22774881> in <module>
14
15 p = person_from_name('Antonio')
---> 16 p = person_from_name(123)
<ipython-input-41-a23e22774881> in person_from_name(name)
11 p.name = name
12 return p
---> 13 raise PersonNameError('a name must be a string')
14
15 p = person_from_name('Antonio')
PersonNameError: a name must be a string
How about:

class Person:
    def __init__(self, name):
        if type(name) == str:
            self.name = name
        else:
            raise Exception("name attribute should be a string")
You should use the factory design pattern. You can read more about it here. To put it simply: create a class or method that checks the conditions and returns a new class instance only if those conditions are met.
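A minimal sketch of that idea; the names make_person and PersonNameError are illustrative, echoing the factory answer above:

```python
class PersonNameError(Exception):
    pass


class Person:
    def __init__(self, name):
        self.name = name


def make_person(name):
    # hypothetical factory: validate first, construct only on success
    if not isinstance(name, str):
        raise PersonNameError("name must be a string")
    return Person(name)


me = make_person("António")
print(me.name)       # -> António
```

If validation fails, make_person raises before Person.__init__ ever runs, so no half-built instance exists.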
If you want to modify instantiation behaviour, you can create an alternative constructor using a class method.
class Person:
    def __init__(self, name):
        self.name = name
        print("ok")

    @classmethod
    def create(cls, name):
        if not isinstance(name, str):
            raise ValueError(f"Expected name to be a string, got {type(name)}")
        return cls(name)

me = Person.create("António")
print(me.name)
you = Person.create(1)
print(you.name)
ok prints only once, proving a single instantiation:
ok
António
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 31, in <module>
start(fakepyfile,mainpyfile) File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 30, in start
exec(open(mainpyfile).read(), __main__.__dict__)
File "<string>", line 17, in <module>
File "<string>", line 11, in create
ValueError: Expected name to be a string, got <class 'int'>
[Program finished]
Here, it's an explicit test that's being done.
Overriding __new__ is very rarely needed, and for everyday classes I think it should be avoided; doing so keeps the class implementation simple.
class Test(object):
    print("ok")

    def __new__(cls, x):
        if isinstance(x, str):
            print(x)
            return super().__new__(cls)  # remember to return the instance
        else:
            raise ValueError(f"Expected name to be a string, got {type(x)}")

obj1 = Test("António")
obj2 = Test(1)
ok
António
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 31, in <module>
start(fakepyfile,mainpyfile) File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 30, in start
exec(open(mainpyfile).read(), __main__.__dict__)
File "<string>", line 14, in <module>
File "<string>", line 10, in __new__
ValueError: Expected name to be a string, got <class 'int'>
[Program finished]
Update:
>>> solve([A(x)*A(y) + A(-1), A(x) + A(-2)], x, y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 12, in __mul__
TypeError: unbound method __multiplyFunction__() must be called with A instance
as first argument (got Symbol instance instead)
class A:
    @staticmethod
    def __additionFunction__(a1, a2):
        return a1*a2  # Put what you want instead of this

    def __multiplyFunction__(a1, a2):
        return a1*a2+a1  # Put what you want instead of this

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return self.__class__.__additionFunction__(self.value, other.value)

    def __mul__(self, other):
        return self.__class__.__multiplyFunction__(self.value, other.value)

solve([A(x)*A(y) + A(-1), A(x) + A(-2)], x, y)
Update 2:
>>> ss([x*y + -1, x-2], x, y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: solve instance has no __call__ method
class AA:
    @staticmethod
    def __additionFunction__(a1, a2):
        return a1*a2  # Put what you want instead of this

    def __multiplyFunction__(a1, a2):
        return a1*a2+a1  # Put what you want instead of this

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return self.__class__.__additionFunction__(self.value, other.value)

    def __mul__(self, other):
        return self.__class__.__multiplyFunction__(self.value, other.value)

ss = solve(AA)
ss([x*y + -1, x-2], x, y)
I would like to change the Add operator to a custom function op2 inside the solve function, so that calling solve([x*y - 1, x + 2], x, y) also replaces Add with op2 in the parameters during solving. I get errors because I do not know how to inject op2 into the AST used by solve:
>>> class ChangeAddToMultiply(ast.NodeTransformer, ast2.NodeTransformer):
...     """Wraps all integers in a call to Integer()"""
...     def visit_BinOp(self, node):
...         print(dir(node))
...         print(dir(node.left))
...         if isinstance(node.op, ast.Add):
...             ast.Call(Name(id="op2", ctx=ast2.Load()), [node.left, node.right], [])
...         return node
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ast2' is not defined
>>>
>>> code = inspect.getsourcelines(solve)
>>> tree = ast.parse(code)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\ast.py", line 37, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
TypeError: expected a readable buffer object
>>> tree2 = ast.parse("def op2(a,b): return a*b+a")
>>> tree = ChangeAddToMultiply().visit(tree,tree2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'tree' is not defined
>>> ast.fix_missing_locations(tree)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'tree' is not defined
>>> co = compile(tree, '<ast>', "exec")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'tree' is not defined
>>>
>>> exec(code)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: exec: arg 1 must be a string, file, or code object
>>> exec(co)
Original code:
import ast
from __future__ import division
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
import inspect

def op2(a, b):
    return a*b+a

class ChangeAddToMultiply(ast.NodeTransformer, ast2.NodeTransformer):
    """Wraps all integers in a call to Integer()"""
    def visit_BinOp(self, node):
        print(dir(node))
        print(dir(node.left))
        if isinstance(node.op, ast.Add):
            ast.Call(Name(id="op2", ctx=ast2.Load()), [node.left, node.right], [])
        return node

code = inspect.getsourcelines(solve([x*y - 1, x - 2], x, y))
tree = ast.parse(code)
tree2 = ast.parse("def op2(a,b): return a*b+a")
tree = ChangeAddToMultiply().visit(tree, tree2)
ast.fix_missing_locations(tree)
co = compile(tree, '<ast>', "exec")
exec(code)
exec(co)
I guess __add__ is what you're looking for.
class A:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return self.value*other.value + 4
>>> a = A(3)
>>> b = A(4)
>>> a + b
16
EDIT:
A solution to replace the + operator with an existing function:
class A:
    @staticmethod
    def additionFunction(a1, a2):
        return a1*a2  # Put what you want instead of this

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        return self.__class__.additionFunction(self.value, other.value)
This one is a bit more tricky. That additionFunction is meant to belong to the A class, so it is declared as a staticmethod and reached through the class: self.__class__.additionFunction.
To go further, one could imagine a solution using a meta-class Addable, whose constructor would take an additionFunction as parameter... But it might not be worth it.
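For what it's worth, here is a hedged sketch of what such an Addable metaclass factory might look like; the name Addable and the overall shape are my invention, not an established pattern:

```python
def Addable(addition_function):
    """Build a metaclass that injects __add__ based on the given function.

    addition_function receives the two wrapped .value attributes.
    """
    class Meta(type):
        def __new__(meta, name, bases, namespace):
            def __add__(self, other):
                return addition_function(self.value, other.value)
            namespace['__add__'] = __add__
            return super().__new__(meta, name, bases, namespace)
    return Meta


class A(metaclass=Addable(lambda a, b: a * b)):
    def __init__(self, value):
        self.value = value


print(A(3) + A(4))  # -> 12, i.e. 3 * 4
```

Whether this indirection is worth it over a plain staticmethod is debatable, as the answer itself suggests.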
Is it possible to assign a numeric value to a variable in such a way that it is limited to a certain range? More specifically I want a variable that can never go below zero, because if that was about to happen an exception would be raised.
Imaginary example:
>>> var = AlwaysPositive(0)
>>> print var
0
>>> var += 3
>>> print var
3
>>> var -= 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AlwaysPositiveError: dropping AlwaysPositive integer below zero
The reason I ask is because I am debugging a game I am writing. While humans implicitly understand that you can never have -1 cards in your hand, a computer does not. I can write functions that check all values used in the game, call them at multiple points throughout the script, and see if any weird values appear. But I was wondering whether there is an easier way to do this?
Sub-classing int is probably the best way to do this if you really need to, but the implementations shown so far are naive. I would do:
class NegativeValueError(ValueError):
    pass

class PositiveInteger(int):
    def __new__(cls, value, base=10):
        if isinstance(value, basestring):
            inst = int.__new__(cls, value, base)
        else:
            inst = int.__new__(cls, value)
        if inst < 0:
            raise NegativeValueError()
        return inst

    def __repr__(self):
        return "PositiveInteger({})".format(int.__repr__(self))

    def __add__(self, other):
        return PositiveInteger(int.__add__(self, other))

    # ... implement other numeric type methods (__sub__, __mul__, etc.)
This allows you to construct a PositiveInteger just like a regular int:
>>> PositiveInteger("FFF", 16)
PositiveInteger(4095)
>>> PositiveInteger(5)
PositiveInteger(5)
>>> PositiveInteger(-5)
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
PositiveInteger(-5)
File "<pyshell#17>", line 8, in __new__
raise NegativeValueError()
NegativeValueError
See e.g. the datamodel docs on numeric type emulation for details of the methods you will need to implement. Note that you don't need to explicitly check for negative numbers in most of those methods, as when you return PositiveInteger(...) the __new__ will do it for you. In use:
>>> i = PositiveInteger(5)
>>> i + 3
PositiveInteger(8)
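As a concrete illustration of that point, a __sub__ in the same style can lean on __new__ for the check; this is a sketch in Python 3, where basestring is simply str:

```python
class NegativeValueError(ValueError):
    pass


class PositiveInteger(int):
    def __new__(cls, value, base=10):
        if isinstance(value, str):
            inst = int.__new__(cls, value, base)
        else:
            inst = int.__new__(cls, value)
        if inst < 0:
            raise NegativeValueError()
        return inst

    def __add__(self, other):
        return PositiveInteger(int.__add__(self, other))

    def __sub__(self, other):
        # no explicit check needed: PositiveInteger.__new__ raises
        # NegativeValueError if the result would be negative
        return PositiveInteger(int.__sub__(self, other))


print(PositiveInteger(5) - 3)   # -> 2
```

Subtracting past zero (e.g. PositiveInteger(3) - 5) raises NegativeValueError from inside __new__, so every operator that funnels its result through the constructor gets the guarantee for free.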
Alternatively, if these non-negative integers will be attributes of a class, you could enforce positive values using the descriptor protocol, e.g.:
class PositiveIntegerAttribute(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, typ=None):
        return getattr(obj, self.name)

    def __set__(self, obj, val):
        if not isinstance(val, (int, long)):
            raise TypeError()
        if val < 0:
            raise NegativeValueError()
        setattr(obj, self.name, val)

    def __delete__(self, obj):
        delattr(obj, self.name)
You can then use this as follows:
>>> class Test(object):
        foo = PositiveIntegerAttribute('_foo')
>>> t = Test()
>>> t.foo = 1
>>> t.foo = -1
Traceback (most recent call last):
File "<pyshell#34>", line 1, in <module>
t.foo = -1
File "<pyshell#28>", line 13, in __set__
raise NegativeValueError()
NegativeValueError
>>> t.foo += 3
>>> t.foo
4
>>> t.foo -= 5
Traceback (most recent call last):
File "<pyshell#37>", line 1, in <module>
t.foo -= 5
File "<pyshell#28>", line 13, in __set__
raise NegativeValueError()
NegativeValueError
You can subclass your own data type from int and provide it with a bunch of magic methods overloading the operators you need.
class Alwayspositive(int):
    def __init__(self, *args, **kwargs):
        super(Alwayspositive, self).__init__(*args, **kwargs)

    def __neg__(self):
        raise AlwayspositiveError()

    def __sub__(self, other):
        result = super(Alwayspositive, self).__sub__(other)
        if result < 0:
            raise AlwayspositiveError()
        return result
And so on. It is quite a lot of work and debugging to make such a class safe, but it will let you debug your code with very few changes between debug and release mode.
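For completeness, here is a hedged Python 3 sketch of the same idea; since int is immutable there is nothing to do in __init__, and the checks live in the operator overloads (the exception class is assumed, mirroring the answer above):

```python
class AlwayspositiveError(ValueError):
    pass


class Alwayspositive(int):
    def __neg__(self):
        # negating would leave the positive range
        raise AlwayspositiveError("negation not allowed")

    def __sub__(self, other):
        result = int.__sub__(self, other)
        if result < 0:
            raise AlwayspositiveError("result would drop below zero")
        return Alwayspositive(result)


n = Alwayspositive(3)
print(n - 1)  # -> 2
```

Attempting n - 5 or -n raises AlwayspositiveError instead of silently producing a negative value.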
I want to declare a function dynamically and I want to wrap any access to global variables OR alternatively define which variables are free and wrap any access to free variables.
I'm playing around with code like this:
class D:
    def __init__(self):
        self.d = {}

    def __getitem__(self, k):
        print "D get", k
        return self.d[k]

    def __setitem__(self, k, v):
        print "D set", k, v
        self.d[k] = v

    def __getattr__(self, k):
        print "D attr", k
        raise AttributeError
globalsDict = D()
src = "def foo(): print x"
compiled = compile(src, "<foo>", "exec")
exec compiled in {}, globalsDict
f = globalsDict["foo"]
print(f)
f()
This produces the output:
D set foo <function foo at 0x10f47b758>
D get foo
<function foo at 0x10f47b758>
Traceback (most recent call last):
File "test_eval.py", line 40, in <module>
f()
File "<foo>", line 1, in foo
NameError: global name 'x' is not defined
What I want is to somehow catch the access to x with my dict-like wrapper D. How can I do that?
I don't want to predefine all global variables (in this case x) because I want to be able to load them lazily.
What you're looking for is object proxying.
Here is a recipe for an object proxy which supports pre- and post- call hooks:
http://code.activestate.com/recipes/366254-generic-proxy-object-with-beforeafter-method-hooks/
Create a subclass that doesn't actually load the object until the first time the _pre hook is called. Anything accessing the object will cause the real object to be loaded, and all calls will appear to be handled directly by the real object.
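As a rough sketch of that lazy-loading idea, with illustrative names rather than the recipe's actual API:

```python
class LazyProxy:
    """Defers creating the real object until the first attribute access."""

    def __init__(self, loader):
        self._loader = loader   # zero-argument callable producing the object
        self._obj = None        # not loaded yet

    def __getattr__(self, name):
        # __getattr__ only fires for names not found normally,
        # so _loader/_obj lookups above do not recurse
        if self._obj is None:
            self._obj = self._loader()
        return getattr(self._obj, name)


def expensive_load():
    print("loading...")
    return complex(3, 4)


p = LazyProxy(expensive_load)
print(p.real)  # triggers the load, then prints 3.0
```

The loader runs exactly once; later accesses such as p.imag hit the cached object directly. A production version would also forward dunder methods, which Python looks up on the type rather than the instance.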
Try this out:

class GlobalDict(object):
    def __init__(self, **kwargs):
        self.d = kwargs

    def __getitem__(self, key):
        print 'getting', key
        return self.d[key]

    def __setitem__(self, key, value):
        print 'setting', key, 'to', value
        if hasattr(value, '__globals__'):
            value.__globals__.update(self.d)
        self.d[key] = value
        for v in self.d.values():
            if v is not value:
                if hasattr(v, '__globals__'):
                    v.__globals__.update(self.d)

    def __delitem__(self, key):
        print 'deling', key
        del self.d[key]
        for v in self.d.values():
            if hasattr(v, '__globals__'):
                del v.__globals__[key]
>>> gd = GlobalDict()
>>> src = 'def foo(): print x'
>>> compiled = compile(src, '<foo>', 'exec')
>>> exec compiled in {}, gd
setting foo to <function foo at 0x102223b18>
>>> f = gd['foo']
getting foo
>>> f
<function foo at 0x102223b18>
>>> f() # This one will throw an error
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<foo>", line 1, in foo
NameError: global name 'x' is not defined
>>> gd['x'] = 1
setting x to 1
>>> f()
1
>>> del gd['x'] # removes 'x' from the globals of anything in gd
>>> f() # Will now fail again
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<foo>", line 1, in foo
NameError: global name 'x' is not defined
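As a closing note, the snippets above are Python 2 (exec ... in, print statements). In Python 3, exec() requires its globals to be a real dict, but a dict subclass is accepted, and CPython falls back to the subclass's __missing__ when a global is absent, which gives lazy globals fairly directly. A sketch, where the value 99 is an arbitrary stand-in for a real lazy lookup:

```python
class LazyGlobals(dict):
    """Globals mapping: __missing__ is consulted for absent names."""

    def __missing__(self, key):
        if key == 'x':              # pretend 'x' is expensive to compute
            print('lazily loading', key)
            self[key] = 99
            return 99
        # raising KeyError lets lookup fall through to builtins
        raise KeyError(key)


src = "def foo(): return x"
g = LazyGlobals()
exec(src, g)                        # Python 3 spelling of `exec ... in`
f = g['foo']
print(f())  # first prints 'lazily loading x', then 99
```

After the first call, 'x' is cached in the mapping, so subsequent calls skip __missing__ entirely.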