I'm doing some coding for Maya with PyMel and I'm trying to create some properties in my rig class to wrap some PyMel code. The code for all of the properties was pretty similar, so I figured this would be a good place to use a closure.
import pymel.core as pm
import riggermortis.utils as utils
class RigModule(object):
def __init__(self):
# blah blah irrelevant code here
pass
def createRootProperty(attrName):
def getter(self):
return pm.getAttr(self.root+'.'+attrName)
def setter(self, nodeToLink):
if self.root.hasAttr(attrName):
pm.setAttr(self.root+'.'+attrName, nodeToLink)
else:
utils.linkNodes(self.root, nodeToLink, attrName)
return property(fget=getter,fset=setter)
hookConstraint = createRootProperty('hookConstraint')
unhookTarget = createRootProperty('unhookTarget')
moduleGrp = createRootProperty('moduleGrp')
hookGrp = createRootProperty('hookGrp')
Functionally it works, but Eclipse/PyDev is telling me that my 'createRootProperty' function needs 'self' as its first argument so I'm wondering if what I'm doing is incorrect.
For what you're doing a closure is not really needed except for cleanliness. The linter thinks it's an incorrectly formatted member function, even though it's doing what you want.
You can just move the function out of class scope and the linter will stop complaining -- you can rename the function with a leading underscore so nobody accidentally thinks it is a tool rather than a piece of infrastructure.
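For example, a minimal sketch of that rearrangement (the same PyMel calls as above; only the factory has moved out of the class and gained an underscore):
import pymel.core as pm
import riggermortis.utils as utils

def _createRootProperty(attrName):
    # module-level factory; the linter no longer mistakes it for a method
    def getter(self):
        return pm.getAttr(self.root + '.' + attrName)
    def setter(self, nodeToLink):
        if self.root.hasAttr(attrName):
            pm.setAttr(self.root + '.' + attrName, nodeToLink)
        else:
            utils.linkNodes(self.root, nodeToLink, attrName)
    return property(fget=getter, fset=setter)

class RigModule(object):
    hookConstraint = _createRootProperty('hookConstraint')
    unhookTarget = _createRootProperty('unhookTarget')
    moduleGrp = _createRootProperty('moduleGrp')
    hookGrp = _createRootProperty('hookGrp')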
If you expect to do this a lot, you could automate it with a metaclass that reads a list of names from a class field and creates properties as appropriate. There's a more detailed example of that strategy here, but in essence the metaclass gets a copy of the class dictionary when the class is defined, and it has the opportunity to mess with the definition before the class object is created. You can create a property easily at that step:
def createRootProperty(name):
# this is a dummy, but as long as
# the return from this function
# is a property descriptor you're good
@property
def me(self):
return name, self.__class__
return me
class PropertyMeta(type):
# this gets called when a class using this meta is
# first compiled. It gives you a chance to intervene
# in the class creation process
def __new__(cls, name, bases, properties):
# if the class has a 'PROPS' member, it's a list
# of properties to add
roots = properties.get('PROPS', [])
for r in roots:
properties[r] = createRootProperty(r)
print ("# added property '{}' to {}".format(r, name))
return type.__new__(cls, name, bases, properties)
class RigModule(object):
__metaclass__ = PropertyMeta
PROPS = ['arm', 'head', 'leg']
def __init__(self):
pass
test = RigModule()
print test.arm
class Submodule(RigModule):
# metaclass added properties are inheritable
pass
test2 = Submodule()
print test2.leg
class NewProperties(RigModule):
# they can also be augmented in derived classes
PROPS = ['nose', 'mouth']
print NewProperties().nose
print NewProperties().arm
# added property 'arm' to RigModule
# added property 'head' to RigModule
# added property 'leg' to RigModule
# ('arm', <class '__main__.RigModule'>)
# ('leg', <class '__main__.Submodule'>)
# added property 'nose' to NewProperties
# added property 'mouth' to NewProperties
# ('nose', <class '__main__.NewProperties'>)
# ('arm', <class '__main__.NewProperties'>)
Metaclasses get a bad rep -- sometimes deservedly so -- for adding complexity. Don't use them when a simpler approach will do. But for boilerplate reduction in cases like this they are a great tool.
Is it possible to get the namespace parent, or encapsulating type, of a class?
class base:
class sub:
def __init__(self):
# self is "__main__.extra.sub"
# want to create object of type "__main__.extra" from this
pass
class extra(base):
class sub(base.sub):
pass
o = extra.sub()
The problem in base.sub.__init__ is getting extra from the extra.sub.
The only solutions I can think of at the moment involve having all subclasses of base provide some link to their encapsulating class type, or turning the type of self in base.sub.__init__ into a string and manipulating it into a new type string. Both a bit ugly.
It's clearly possible to go the other way: type(self).sub would give you extra.sub from inside base.sub.__init__ for an extra-type object. But how do I do .. instead of .sub? :)
The real answer is that there is no general way to do this. Python classes are normal objects, but they are created a bit differently. A class does not exist until well after its entire body has been executed. Once a class is created, it can be bound to many different names. The only reference it has to where it was created are the __module__ and __qualname__ attributes, but both of these are mutable.
In practice, it is possible to write your example like this:
class Sub:
def __init__(self):
pass
class Base:
Sub = Sub
Sub.__qualname__ = 'Base.Sub'
class Sub(Sub):
pass
class Extra(Base):
Sub = Sub
Sub.__qualname__ = 'Extra.Sub'
del Sub # Unlink from global namespace
Barring the capitalization, this behaves exactly as your original example. Hopefully this clarifies which code has access to what, and shows that the most robust way to determine the enclosing scope of a class is to explicitly assign it somewhere. You can do this in any number of ways. The trivial way is just to assign it. Going back to your original notation:
class Base:
class Sub:
def __init__(self):
print(self.enclosing)
Base.Sub.enclosing = Base
class Extra(Base):
class Sub(Base.Sub):
pass
Extra.Sub.enclosing = Extra
Notice that since Base does not exist while its body is being executed, the assignment has to happen after the classes are both created. You can bypass this by using a metaclass or a decorator. That will allow you to mess with the namespace before the class object is assigned to a name, making the change more transparent.
class NestedMeta(type):
def __init__(cls, name, bases, namespace):
for name, obj in namespace.items():
if isinstance(obj, type):
obj.enclosing = cls
class Base(metaclass=NestedMeta):
class Sub:
def __init__(self):
print(self.enclosing)
class Extra(Base):
class Sub(Base.Sub):
pass
But this is again somewhat unreliable because not all metaclasses are an instance of type, which takes us back to the first statement in this answer.
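A class decorator can do the same tagging without the metaclass machinery (a sketch; nested is an illustrative name). The trade-off is that the decorator is not inherited, so it must be reapplied to any subclass that defines its own nested classes:
def nested(cls):
    # tag every class defined directly in this body with its enclosing class
    for obj in vars(cls).values():
        if isinstance(obj, type):
            obj.enclosing = cls
    return cls

@nested
class Base:
    class Sub:
        def __init__(self):
            print(self.enclosing)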
In many cases, you can use the __qualname__ and __module__ attributes to get the name of the surrounding class:
import sys
cls = type(o)
getattr(sys.modules[cls.__module__], '.'.join(cls.__qualname__.split('.')[:-1]))
This is a very literal answer to your question. It just shows one way of getting the class in the enclosing scope without addressing the probable design flaws that led to this being necessary in the first place, or any of the many possible corner cases that this would not cover.
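Wrapped up as a function and applied to the question's layout (enclosing_class is an illustrative name; this assumes the classes are defined at module level as above):
import sys

def enclosing_class(o):
    # strip the last component off the qualified name, then look the
    # remainder up in the module where the class was defined
    cls = type(o)
    outer = '.'.join(cls.__qualname__.split('.')[:-1])
    return getattr(sys.modules[cls.__module__], outer)

print(enclosing_class(Extra.Sub()))  # -> <class '__main__.Extra'>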
I have a class Step, from which I want to derive many subclasses. I want every class deriving from Step to be "registered" by a name I choose for it (not the class's name), so I can later call Step.getStepTypeByName().
Something like this, only working :):
class Step(object):
_STEPS_BY_NAME = {}
@staticmethod
def REGISTER(cls, name):
_STEPS_BY_NAME[name] = cls
class Derive1(Step):
REGISTER(Derive1, "CustomDerive1Name")
...
class Derive2(Step):
REGISTER(Derive2, "CustomDerive2Name")
...
Your solution does not work, for three reasons.
The first one is that _STEPS_BY_NAME only exists as an attribute of the Step class, so Step.REGISTER cannot access _STEPS_BY_NAME without a reference to the Step class. IOW you have to make it a classmethod (cf below)
The second one is that you need to explicitly use Step.REGISTER(cls) - the name REGISTER does not exist outside the Step class.
The third reason is that within a class statement's body, the class object has not yet been created nor bound to its name, so you cannot reference the class itself at this point.
IOW, you'd want this instead:
class Step(object):
_STEPS_BY_NAME = {}
# NB : by convention, "ALL_UPPER" names denote pseudo-constants
@classmethod
def register(cls, stepclass, name):
    # here `cls` is the class the method was called on (Step)
    cls._STEPS_BY_NAME[name] = stepclass
class Derive1(Step):
...
Step.register(Derive1, "CustomDerive1Name")
class Derive2(Step):
...
Step.register(Derive2, "CustomDerive2Name")
Now with a minor modification to Step.register you could use it as a class decorator, making things much clearer:
class Step(object):
_STEPS_BY_NAME = {}
@classmethod
def register(cls, name):
def _register(stepclass):
cls._STEPS_BY_NAME[name] = stepclass
return stepclass
return _register
@Step.register("CustomDerive1Name")
class Derive1(Step):
...
@Step.register("CustomDerive2Name")
class Derive2(Step):
...
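To round this out with the lookup the question asked for, you can add one more classmethod to Step next to register (a sketch using the question's getStepTypeByName spelling):
    @classmethod
    def getStepTypeByName(cls, name):
        return cls._STEPS_BY_NAME[name]
so that Step.getStepTypeByName("CustomDerive1Name") returns Derive1 once it has been registered.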
As a last note: unless you have a compelling reason to register your subclasses in the base class itself, it might be better to use module-level variables and functions (a Python module is actually a kind of singleton):
# steps.py
class Step(object):
#....
_STEPS_BY_NAME = {}
def register(name):
def _register(cls):
_STEPS_BY_NAME[name] = cls
return cls
return _register
def get_step_class(name):
return _STEPS_BY_NAME[name]
And in your other modules
import steps
@steps.register("CustomDerive1Name")
class Derive1(steps.Step):
# ...
The point here is to avoid giving too many responsibilities to your Step class. I don't know your concrete use case so I can't tell which design best fits your needs, but I've been using this last one on quite a few projects and it has always worked fine so far.
You are close. Use this
class Step(object):
pass
class Derive1(Step):
pass
class Derive2(Step):
pass
_STEPS_BY_NAME = {
    'foo': Step,
    'bar': Derive1,
    'baz': Derive2,
}
def get_step_by_name(name):
return _STEPS_BY_NAME[name]
Warning: there might be better approaches depending on what you are trying to achieve. Such a mapping from strings to classes is a maintenance nightmare. If you want to rename a class, you have to remember to change it in multiple places. You won't get any autocomplete help from your IDE either.
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. Ok, that's hard to read; code is probably clearer:
class Friendly:
def hello(self):
print 'Hello'
class Person: pass
p = Person()
Person.__bases__ = (Friendly,)
p.hello() # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question: is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
# so that we're modifying the global identifier 'Person'
global Person
# now just redefine the class using type(), specifying that the new
# class should inherit from Friendly and have all attributes from
# our old Person class
Person = type('Person', (Friendly,), dict(Person.__dict__))
def main():
modify_Person_to_be_friendly()
p = Person()
p.hello() # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
oldperson = Person()
modify_Person_to_be_friendly()
p = Person()
p.hello()
# works! But:
oldperson.hello()
# does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass
class A(T):
def __init__(self):
print('Creating instance of {}'.format(self.__class__.__name__))
## ordinary inheritance
class B(A): pass
## dynamically specified inheritance
class C(T): pass
A() # -> Creating instance of A
B() # -> Creating instance of B
C.__bases__ = (A,)
C() # -> Creating instance of C
## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I can't vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
def hello(self):
print 'Hello'
class Person(object): pass
# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly
class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson
p = Person()
Person.__bases__ = (Friendly,)
p.hello() # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class Object
class Object(object): pass
which is just a trivial class derived from the built-in object.
That's it: now new-style classes that derive from Object (instead of directly from object) can have their __bases__ modified without any problem.
In my tests this actually worked very well: all existing instances (created before the inheritance change) of the class and of its derived classes felt the effect of the change, including their __mro__ getting updated.
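Concretely, the shim described above would look like this (a sketch; all the earlier caveats about doing this at all still apply):
class Object(object): pass

class Friendly(Object):
    def hello(self):
        print 'Hello'

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no deallocator error this time
p.hello()  # prints "Hello", even on the pre-existing instance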
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
""" Ensure the named class's bases start with the base class.
:param namespace: The namespace containing the class name.
:param class_name: The name of the class to alter.
:param base_class: The type to be the first base class for the
newly created type.
:return: ``None``.
Call this function after ensuring `base_class` is
available, before using the class named by `class_name`.
"""
existing_class = namespace[class_name]
assert isinstance(existing_class, type)
bases = list(existing_class.__bases__)
if base_class is bases[0]:
# Already bound to a type with the right bases.
return
bases.insert(0, base_class)
new_class_namespace = existing_class.__dict__.copy()
# Type creation will assign the correct ‘__dict__’ attribute.
del new_class_namespace['__dict__']
metaclass = existing_class.__metaclass__
new_class = metaclass(class_name, tuple(bases), new_class_namespace)
namespace[class_name] = new_class
Used like this within the application:
# foo.py
# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
__metaclass__ = type
def __init__(self):
self.frob = "spam"
def __unicode__(self): return "Foo"
# … later …
import bar
ensure_class_bases_begin_with(
namespace=globals(),
class_name=str('Foo'), # `str` type differs on Python 2 vs. 3.
base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py
""" Unit test for `foo` module. """
import unittest
import mock
import foo
import bar
ensure_class_bases_begin_with(
namespace=foo.__dict__,
class_name=str('Foo'), # `str` type differs on Python 2 vs. 3.
base_class=bar.Bar)
class Foo_TestCase(unittest.TestCase):
""" Test cases for `Foo` class. """
def setUp(self):
patcher_unicode = mock.patch.object(
foo.Foo, '__unicode__')
patcher_unicode.start()
self.addCleanup(patcher_unicode.stop)
self.test_instance = foo.Foo()
patcher_frob = mock.patch.object(
self.test_instance, 'frob')
patcher_frob.start()
self.addCleanup(patcher_frob.stop)
def test_instantiate(self):
""" Should create an instance of `Foo`. """
instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
"""Returns a class which behaves identically to the given
Map class, except it gives a default value for unknown keys."""
class DefaultMap(Map):
def __init__(self, default=default_default, **kwargs):
self._default = default
super().__init__(**kwargs)
def __missing__(self, key):
return self._default
return DefaultMap
DefaultDict = make_default(dict, default_default='wug')
d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
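The same factory works unchanged for other mapping types, which is what makes building the subclass at runtime worthwhile; for instance (OrderedDict picked purely as an illustration):
from collections import OrderedDict

DefaultOrderedDict = make_default(OrderedDict, default_default=0)
d = DefaultOrderedDict(a=1)
assert d['a'] == 1
assert d['missing'] == 0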
This method isn't technically inheriting at runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
def __init__(self, f, cls):
self.f = f
self.cls = cls
# 6) this method will pass the self parameter
# (which is the original class object we passed)
# and then it will fill in the rest of the arguments
# using *args and **kwargs
def __call__(self, *args, **kwargs):
# 7) the multiple try / except statements
# are for making sure if an attribute was
# accessed instead of a function, the __call__
# method will just return the attribute
try:
return self.f(self.cls, *args, **kwargs)
except TypeError:
try:
return self.f(*args, **kwargs)
except TypeError:
return self.f
# 1) our base class
class S:
def __init__(self, func):
self.cls = func
def __getattr__(self, item):
# 5) we are wrapping the attribute we get in the Sub class
# so we can implement the __call__ method there
# to be able to pass the parameters in the correct order
return Sub(getattr(self.cls, item), self.cls)
# 2) class we want to inherit from
class L:
def run(self, s):
print("run" + s)
# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L) # 4) in this case, I'm using the class object
s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.
Goal: Make a decorator which can modify the scope that it is used in.
If it worked:
class Blah(): # or perhaps class Blah(ParentClassWhichMakesThisPossible)
def one(self):
pass
@decorated
def two(self):
pass
>>> Blah.decorated
["two"]
Why? I essentially want to write classes which can maintain specific dictionaries of methods, so that I can retrieve lists of available methods of different types on a per class basis. errr.....
I want to do this:
class RuleClass(ParentClass):
@rule
def blah(self):
pass
@rule
def kapow(self):
pass
def shazam(self):
    pass
class OtherRuleClass(ParentClass):
@rule
def foo(self):
pass
def bar(self):
pass
>>> RuleClass.rules.keys()
["blah", "kapow"]
>>> OtherRuleClass.rules.keys()
["foo"]
You can do what you want with a class decorator (in Python 2.6) or a metaclass. The class decorator version:
def rule(f):
f.rule = True
return f
def getRules(cls):
cls.rules = {}
for attr, value in cls.__dict__.iteritems():
if getattr(value, 'rule', False):
cls.rules[attr] = value
return cls
@getRules
class RuleClass:
@rule
def foo(self):
pass
The metaclass version would be:
def rule(f):
f.rule = True
return f
class RuleType(type):
def __init__(self, name, bases, attrs):
self.rules = {}
for attr, value in attrs.iteritems():
if getattr(value, 'rule', False):
self.rules[attr] = value
super(RuleType, self).__init__(name, bases, attrs)
class RuleBase(object):
__metaclass__ = RuleType
class RuleClass(RuleBase):
@rule
def foo(self):
pass
Notice that neither of these does what you ask for (modify the calling namespace), because that's fragile, hard and often impossible. Instead they both post-process the class -- through the class decorator or the metaclass's __init__ method -- by inspecting all the attributes and filling the rules attribute. The difference between the two is that the metaclass solution works in Python 2.5 and earlier (down to 2.2), and that the metaclass is inherited. With the decorator, subclasses each have to apply the decorator individually (if they want to set the rules attribute.)
Neither solution takes inheritance into account: they don't look at the parent class when looking for methods marked as rules, nor do they look at the parent class's rules attribute. It's not hard to extend either one to do that, if that's what you want.
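For instance, the metaclass version becomes inheritance-aware if it folds in the rules collected by the base classes before scanning its own attributes (a sketch):
class RuleType(type):
    def __init__(self, name, bases, attrs):
        self.rules = {}
        # start from the rules any base class already collected
        for base in bases:
            self.rules.update(getattr(base, 'rules', {}))
        for attr, value in attrs.iteritems():
            if getattr(value, 'rule', False):
                self.rules[attr] = value
        super(RuleType, self).__init__(name, bases, attrs)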
Problem is, at the time the decorated decorator is called, there is no object Blah yet: the class object is built after the class body finishes executing. Simplest is to have decorated stash the info "somewhere else", e.g. a function attribute, then a final pass (a class decorator or metaclass) reaps that info into the dictionary you desire.
Class decorators are simpler, but they don't get inherited (so they wouldn't come from a parent class), while metaclasses are inherited -- so if you insist on inheritance, a metaclass it will have to be. Simplest-first, with a class decorator and the "list" variant you have at the start of your Q rather than the "dict" variant you have later:
import inspect
def classdecorator(aclass):
decorated = []
for name, value in inspect.getmembers(aclass, inspect.ismethod):
if hasattr(value, '_decorated'):
decorated.append(name)
del value._decorated
aclass.decorated = decorated
return aclass
def decorated(afun):
afun._decorated = True
return afun
now,
@classdecorator
class Blah(object):
def one(self):
pass
@decorated
def two(self):
pass
gives you the Blah.decorated list you request in the first part of your Q. Building a dict instead, as you request in the second part of your Q, just means changing decorated.append(name) to decorated[name] = value in the code above, and of course initializing decorated in the class decorator to an empty dict rather than an empty list.
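Spelled out, that dict variant is (same structure as the list version, with the tag cleanup dropped for brevity):
import inspect

def classdecorator(aclass):
    decorated = {}
    for name, value in inspect.getmembers(aclass, inspect.ismethod):
        if getattr(value, '_decorated', False):
            decorated[name] = value
    aclass.decorated = decorated
    return aclass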
The metaclass variant would use the metaclass's __init__ to perform essentially the same post-processing after the class body is built -- a metaclass's __init__ gets a dict corresponding to the class body as its last argument (but you'll have to support inheritance yourself by appropriately dealing with any base class's analogous dict or list). So the metaclass approach is only "somewhat" more complex in practice than a class decorator, but conceptually it's felt to be much more difficult by most people. I'll give all the details for the metaclass if you need them, but I'd recommend sticking with the simpler class decorator if feasible.
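If you do want the metaclass, a sketch would be (Python 2 syntax to match the answer; DecoratedMeta is an illustrative name, and inheritance is handled here by folding in the bases' lists):
class DecoratedMeta(type):
    def __init__(cls, name, bases, attrs):
        decorated = []
        # fold in anything the base classes already collected
        for base in bases:
            decorated.extend(getattr(base, 'decorated', []))
        for attr, value in attrs.items():
            if hasattr(value, '_decorated'):
                decorated.append(attr)
        cls.decorated = decorated
        super(DecoratedMeta, cls).__init__(name, bases, attrs)

class Blah(object):
    __metaclass__ = DecoratedMeta
    @decorated
    def two(self):
        pass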
Obj-C (which I have not used for a long time) has something called categories to extend classes. Declare a category with new methods and compile it into your program, and all instances of the class suddenly have the new methods.
Python has mixin possibilities, which I use, but mixins must be baked in at class-definition time: the class has to declare them itself.
Foreseen category use-case: say you have a big class hierarchy that describes different ways of interacting with data, declaring polymorphic ways to get at different attributes. Now a category can help the consumer of these describing classes by implementing a convenient interface to access these methods in one place. (A category method could, for example, try two different methods and return the first defined (non-None) return value.)
Any way to do this in Python?
Illustrative code
I hope this clarifies what I mean. The point is that the Category is like an aggregate interface, that the consumer of AppObj can change in its code.
class AppObj (object):
"""This is the top of a big hierarchy of subclasses that describe different data"""
def get_resource_name(self):
pass
def get_resource_location(self):
pass
# dreaming up class decorator syntax
@category(AppObj)
class AppObjCategory (object):
"""this is a category on AppObj, not a subclass"""
def get_resource(self):
name = self.get_resource_name()
if name:
return library.load_resource_name(name)
else:
return library.load_resource(self.get_resource_location())
Why not just add methods dynamically?
>>> class Foo(object):
...     pass
...
>>> def newmethod(instance):
...     print 'Called:', instance
...
>>> Foo.newmethod = newmethod
>>> f = Foo()
>>> f.newmethod()
Called: <__main__.Foo object at 0xb7c54e0c>
I know Objective-C and this looks just like categories. The only drawback is that you can't do that to built-in or extension types.
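That drawback is easy to demonstrate -- reusing newmethod from above:
>>> dict.newmethod = newmethod
Traceback (most recent call last):
  ...
TypeError: can't set attributes of built-in/extension type 'dict'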
I came up with this implementation of a class decorator. I'm using python2.5 so I haven't actually tested it with decorator syntax (which would be nice), and I'm not sure what it does is really correct. But it looks like this:
pycategories.py
"""
This module implements Obj-C-style categories for classes for Python
Copyright 2009 Ulrik Sverdrup <ulrik.sverdrup@gmail.com>
License: Public domain
"""
def Category(toclass, clobber=False):
"""Return a class decorator that implements the decorated class'
methods as a Category on the class @toclass
if @clobber is not allowed, AttributeError will be raised when
the decorated class already contains the same attribute.
"""
def decorator(cls):
skip = set(("__dict__", "__module__", "__weakref__", "__doc__"))
for attr in cls.__dict__:
if attr in toclass.__dict__:
if attr in skip:
continue
if not clobber:
raise AttributeError("Category cannot override %s" % attr)
setattr(toclass, attr, cls.__dict__[attr])
return cls
return decorator
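Usage would presumably look like this with 2.6+ decorator syntax (a sketch only, since the answer above couldn't test it; AppObj stands in for any existing class):
class AppObj(object):
    def get_resource_name(self):
        return "spam"

@Category(AppObj)
class AppObjCategory(object):
    def get_resource(self):
        # becomes available on AppObj itself once the decorator runs
        return self.get_resource_name().upper()

print AppObj().get_resource()  # prints SPAM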
Python's setattr function makes this easy.
# categories.py
class category(object):
def __init__(self, mainModule, override = True):
self.mainModule = mainModule
self.override = override
def __call__(self, function):
if self.override or function.__name__ not in dir(self.mainModule):
setattr(self.mainModule, function.__name__, function)
# categories_test.py
import this
from categories import category
@category(this)
def all():
print "all things are this"
this.all()
>>> all things are this