Any advice on the proper 'Pythonic' way to write the following function? Do I have to split it into two functions?
def readSomething(fp=None):
    if fp:
        return fp.read(100)
    else:
        with open('default.txt', 'r') as fp:
            return fp.read(100)
I need something like this because the readSomething function may be called from another function that may or may not have the same file open.
For example, it may be called like this in some places:
def doSomethingWithSameFile():
    with open('default.txt') as fp:
        preamble = fp.read(10)
        more_data = readSomething(fp)
        ...
or like this in other places:
def init():
    data = readSomething()
    ...
I'm not sure this is the right solution, but it does what you're asking for:
import contextlib

def readSomething(fp=None):
    with contextlib.ExitStack() as stack:
        if not fp:
            fp = stack.enter_context(open('default.txt'))
        return fp.read(100)
I get the impression that you're going to duplicate this logic in many functions like readSomething(), so I'd recommend putting the ExitStack code into a decorator and wrapping the functions where you need this behavior.
You could also use a decorator. I don't use this kind of code, so the syntax below is almost certainly incomplete, but the general idea stands:
import contextlib
import functools

def fallback_to_default(fn):
    @functools.wraps(fn)
    def new_fn(fp=None, *args, **kwargs):
        with contextlib.ExitStack() as stack:
            if not fp:
                fp = stack.enter_context(open('default.txt'))
            return fn(fp, *args, **kwargs)
    return new_fn

@fallback_to_default
def readSomething(fp=None):
    return fp.read(100)
You could define a custom context manager that does something only if None is passed to it, but it might be overkill:
class ContextOrNone(object):
    def __init__(self, obj, fn, *args, **kwargs):
        if obj is not None:
            self.obj = obj
            self.cleanup = False
        else:
            self.obj = fn(*args, **kwargs)
            self.cleanup = True

    def __enter__(self):
        return self.obj

    def __exit__(self, ex_type, ex_val, traceback):
        if self.cleanup:
            self.obj.__exit__(ex_type, ex_val, traceback)
Or, using contextlib.contextmanager:
from contextlib import contextmanager

@contextmanager
def ContextOrNone(obj, fn, *args, **kwargs):
    was_none = obj is None
    try:
        if was_none:
            obj = fn(*args, **kwargs)
        yield obj
    finally:
        if was_none:
            obj.__exit__(None, None, None)
Once you have this defined, you can define readSomething as:
def readSomething(fp=None):
    with ContextOrNone(fp, open, 'default.txt', 'r') as fp:
        return fp.read(100)
To summarise the issue in plain language:
If you are passed an open file handle, you want to leave it open, because it is the caller's responsibility to close that resource.
If you need to open your own resource, it is your responsibility to close it.
This is the problem with accepting inhomogeneous argument types in Python. You're allowed to do that, but it can make your code a bit ugly sometimes.
Context managers are just syntactic sugar for try/finally:
def readSomething(fp=None):
    close_fp = False
    if fp is None:
        fp = open('default.txt')
        close_fp = True
    try:
        return fp.read(100)
    finally:
        if close_fp:
            fp.close()
To make it any more "pretty" than that, consider changing the interfaces so that you don't have to handle both reading data and doing the resource management within the same function - refactor so that your functions have a single responsibility.
Honestly, the most Pythonic version of your code is probably just what you already have, except slightly cleaned up:
def readSomething(fp=None):
    if fp:
        return fp.read(100)
    with open('default.txt') as fp:
        return fp.read(100)
That preserves your original intent and functionality. It's clear and easy to read. Sure, it has a little repetition. If the repeated portion in your real code is larger than in this simplified example and it bothers you, then lift it out into its own function:
def complicatedStuff(buf, sz):
    # Obviously more code will go here.
    return buf.read(sz)

def readSomething(fp=None):
    if fp:
        return complicatedStuff(fp, 100)
    with open('default.txt') as fp:
        return complicatedStuff(fp, 100)
It's not Pythonic to jump through a lot of hoops just to avoid repeating yourself a little.
This doesn't use with; it returns the close method so the caller can close the file (whether it was the default or the one passed in). But maybe it is still an option.
def readSomething(fp=None):
    if fp is None:
        fp = open('default.txt')
    return (fp.read(100), fp.close)
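The caller is then responsible for invoking the returned close method. A minimal sketch of the calling side (not part of the original answer):

data, close = readSomething()
try:
    print(data)  # use the data however you like
finally:
    close()      # closes whichever file was read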
Related
How can I override the built-in open function such that when I call it like so...

with open(file_path, "r") as f:
    contents = f.read()

...the contents variable is any string I want?
EDIT: To clarify, I want to be able to just provide a string to the open function rather than a file path that will be read.
with open("foobar") as f:
    contents = f.read()
print(contents)
The above should print foobar.
I am aware this defeats the purpose of open, etc., but it is for testing purposes.
You can create your own file-like type and override the builtin open with your own open function.
import builtins
import contextlib

class File(object):
    """
    A basic file-like object.
    """
    def __init__(self, path, *args, **kwargs):
        self._fobj = builtins.open(path, *args, **kwargs)

    def read(self, n_bytes=-1):
        data = self._fobj.read(n_bytes)
        ...
        return data

    def close(self):
        self._fobj.close()

@contextlib.contextmanager
def open(path, *args, **kwargs):
    fobj = File(path, *args, **kwargs)
    try:
        with contextlib.closing(fobj):
            yield fobj
    finally:
        pass
You can add whatever behavior or additional logic needed to adjust the return value of read() inside File.read itself, or override the behavior entirely from a subclass of File.
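For instance, a hypothetical subclass that post-processes whatever the base class read (UppercaseFile is illustrative, not from the original code):

class UppercaseFile(File):
    def read(self, n_bytes=-1):
        # transform the data after the base class reads it
        return super().read(n_bytes).upper()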
Simplified for the particular case in question:
class File(str):
    def read(self):
        return str(self)

@contextlib.contextmanager
def open(string):
    try:
        yield File(string)
    finally:
        pass

with open('foobar') as f:
    print(f.read())
Considering it is for testing purposes and you want to force the open calls to return a specific string, you can use mock_open here.
Let's say I have a module foo that has a function that reads content from a file and counts the number of lines:
# foo.py
def read_and_process_file():
    with open('Pickle Rick') as f:
        contents = f.read()
        print('File has {n} lines'.format(n=len(contents.splitlines())))
Now in your test you can mock the open for this module and make it return any string you want:
from unittest.mock import mock_open, patch
import foo

m = mock_open(read_data='I am some random data\nthat spans over 2 lines')
with patch('foo.open', m):
    foo.read_and_process_file()  # prints 2
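Since m is a regular mock object, you can also assert afterwards how the patched open was called, e.g.:

m.assert_called_once_with('Pickle Rick')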
You can design your own class, since with simply requires an object that defines __enter__ and __exit__ methods; that is all with does.
class my_class:
    def __init__(self, *args):
        print("initializing variable, got args: {}".format(args))

    def __enter__(self):
        print("Inside enter statement!")
        return "arbitrary text"

    def __exit__(self, type, value, traceback):
        print("closing time, you don't have to go home")
        return

with my_class(1, 2, 3) as my_thing:
    print("inside the with block!")
    print("The return from my_class __enter__ is: ", my_thing)

print("Outside with block!")
output when run:
initializing variable, got args: (1, 2, 3)
Inside enter statement!
inside the with block!
The return from my_class __enter__ is: arbitrary text
closing time, you don't have to go home
Outside with block!
More reading here: http://effbot.org/zone/python-with-statement.htm
I would like to be able to write code like this:
with obj.in_batch_mode:
    obj.some_attr = "some_value"
    obj.some_int = 142
    ...
when I want obj to wait to send updates about itself until multiple jobs are completed. I have hooks on __setattr__ that take some time to run, and the changes can be sent together.
I do not want to use code like this, since it increases the risk of forgetting to leave batch_mode (which is what the with keyword is good for):
obj.enter_batch_mode()
obj.some_attr = "some_value"
obj.some_int = 142
...
obj.exit_batch_mode()
I have not been able to figure out how to implement this. Just typing with obj: (and simply implementing with on obj) does not read anywhere near as descriptively.
Generally, a very simple way to implement context managers is to use the contextlib module. Writing a context manager becomes as simple as writing a single-yield generator: the section before the yield replaces the __enter__ method, the object yielded is the return value of __enter__, and the section after the yield replaces the __exit__ method. Any function on your class can be a context manager; it just needs to be decorated as such. For instance, take this simple ConsoleWriter class:
from contextlib import contextmanager
from sys import stdout
from io import StringIO
from functools import partial

class ConsoleWriter:
    def __init__(self, out=stdout, fmt=None):
        self._out = out
        self._fmt = fmt

    @property
    @contextmanager
    def batch(self):
        original_out = self._out
        self._out = StringIO()
        try:
            yield self
        except Exception as e:
            # There was a problem. Ignore batch commands.
            # (do not swallow the exception though)
            raise
        else:
            # no problem
            original_out.write(self._out.getvalue())
        finally:
            self._out = original_out

    @contextmanager
    def verbose(self, fmt="VERBOSE: {!r}"):
        original_fmt = self._fmt
        self._fmt = fmt
        try:
            yield self
        finally:
            # don't care about errors, just restore fmt
            self._fmt = original_fmt

    def __getattr__(self, attr):
        """creates function that writes capitalised attribute three times"""
        return partial(self.write, attr.upper()*3)

    def write(self, arg):
        if self._fmt:
            arg = self._fmt.format(arg)
        print(arg, file=self._out)
Example usage:
writer = ConsoleWriter()

with writer.batch:
    print("begin batch")
    writer.a()
    writer.b()
    with writer.verbose():
        writer.c()
    print("before reentrant block")
    with writer.batch:
        writer.d()
    print("after reentrant block")
    print("end batch -- all data is now flushed")
Outputting:
begin batch
before reentrant block
after reentrant block
end batch -- all data is now flushed
AAA
BBB
VERBOSE: 'CCC'
DDD
If you are after a simple solution and do not need any nested mode changes (e.g. from STD to BATCH to VERBOSE back to BATCH back to STD):
class A(object):
    STD_MODE = 'std'
    BATCH_MODE = 'batch'
    VERBOSE_MODE = 'verb'

    def __init__(self):
        self.mode = self.STD_MODE

    def in_mode(self, mode):
        self.mode = mode
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        self.mode = self.STD_MODE

obj = A()
print(obj.mode)

with obj.in_mode(obj.BATCH_MODE) as x:
    print(x.mode)

print(obj.mode)
outputs
std
batch
std
This builds on Pynchia's answer, but adds support for multiple modes and allows nesting of with statements, even with the same mode multiple times. It scales O(#nested_modes) which is basically O(1).
Just remember to use stacks for data storage related to the modes.
class A():
    _batch_mode = "batch_mode"
    _mode_stack = []

    @property
    def in_batch_mode(self):
        self._mode_stack.append(self._batch_mode)
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        self._mode_stack.pop()
        if self._batch_mode not in self._mode_stack:
            self.apply_edits()
and then I have these checks wherever I need them:
if self._batch_mode not in self._mode_stack:
    self.apply_edits()
It is also possible to use methods for modes:
with x.in_some_mode(my_arg):
just remember to save my_arg in a stack within x, and to clear it from the stack when that mode is popped from the mode stack.
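A minimal sketch of that idea (the names in_some_mode and _arg_stack are illustrative, not from the answer's code):

class B:
    def __init__(self):
        self._mode_stack = []
        self._arg_stack = []  # per-mode arguments, kept parallel to _mode_stack

    def in_some_mode(self, my_arg):
        self._mode_stack.append("some_mode")
        self._arg_stack.append(my_arg)
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        self._mode_stack.pop()
        self._arg_stack.pop()  # clear the saved argument along with its mode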
The code using this object can now be
with obj.in_batch_mode:
    obj.some_property = "some_value"
and there are no problems with nesting, so we can add another with obj.in_some_mode: anywhere without any hard-to-debug errors or having to check every function called to make sure the object's with statements are never nested:
def b(obj):
    with obj.in_batch_mode:
        obj.some_property = "some_value"

x = A()
with x.in_batch_mode:
    x.my_property = "my_value"
    b(x)
Maybe something like this:
Implement a helper class:
class WithHelperObj(object):
    def __init__(self, obj):
        self.obj = obj

    def __enter__(self):
        self.obj.impl_enter_batch()

    def __exit__(self, exc_type, exc_value, traceback):
        self.obj.impl_exit_batch()

class MyObject(object):
    def in_batch_mode(self):
        return WithHelperObj(self)
In the class itself, implement a method rather than a field to use with the with statement:
    def impl_enter_batch(self):
        print('In impl_enter_batch')

    def impl_exit_batch(self):
        print('In impl_exit_batch')

    def doing(self):
        print('doing')
Then use it:
o = MyObject()
with o.in_batch_mode():
    o.doing()
In this question, I defined a context manager that contains a context manager. What is the easiest correct way to accomplish this nesting? I ended up calling self.temporary_file.__enter__() in self.__enter__(). However, in self.__exit__, I am pretty sure I have to call self.temporary_file.__exit__(type_, value, traceback) in a finally block in case an exception is raised. Should I be setting the type_, value, and traceback parameters if something goes wrong in self.__exit__? I checked contextlib, but couldn't find any utilities to help with this.
Original code from question:
import itertools as it
import tempfile

class WriteOnChangeFile:
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        self.temporary_file = tempfile.TemporaryFile('r+')
        self.f = self.temporary_file.__enter__()
        return self.f

    def __exit__(self, type_, value, traceback):
        try:
            try:
                with open(self.filename, 'r') as real_f:
                    self.f.seek(0)
                    overwrite = any(
                        l != real_l
                        for l, real_l in it.zip_longest(self.f, real_f))
            except IOError:
                overwrite = True
            if overwrite:
                with open(self.filename, 'w') as real_f:
                    self.f.seek(0)
                    for l in self.f:
                        real_f.write(l)
        finally:
            self.temporary_file.__exit__(type_, value, traceback)
The easy way to create context managers is with contextlib.contextmanager. Something like this:
import contextlib
import tempfile

@contextlib.contextmanager
def write_on_change_file(filename):
    with tempfile.TemporaryFile('r+') as temporary_file:
        yield temporary_file
        # ... some saving logic that you had in __exit__ ...
Then use with write_on_change_file(...) as f:.
The body of the with statement will be executed “instead of” the yield. Wrap the yield itself in a try block if you want to catch any exceptions that happen in the body.
The temporary file will always be properly closed (when its with block ends).
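For instance, a sketch of that wrapping (the saving logic stays elided, as above):

@contextlib.contextmanager
def write_on_change_file(filename):
    with tempfile.TemporaryFile('r+') as temporary_file:
        try:
            yield temporary_file
        except Exception:
            # any exception raised in the body of the with statement
            # surfaces here; handle or re-raise as appropriate
            raise
        # ... saving logic, reached only if the body completed normally ...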
contextlib.contextmanager works great for functions, but when I need a class as a context manager, I use the following util:
import abc
import contextlib

class ContextManager(metaclass=abc.ABCMeta):
    """Class which can be used as `contextmanager`."""

    def __init__(self):
        self.__cm = None

    @abc.abstractmethod
    @contextlib.contextmanager
    def contextmanager(self):
        raise NotImplementedError('Abstract method')

    def __enter__(self):
        self.__cm = self.contextmanager()
        return self.__cm.__enter__()

    def __exit__(self, exc_type, exc_value, traceback):
        return self.__cm.__exit__(exc_type, exc_value, traceback)
This allows declaring context-manager classes with the generator syntax from @contextlib.contextmanager. It makes it much more natural to nest context managers, without having to manually call __enter__ and __exit__. Example:
class MyClass(ContextManager):
    def __init__(self, filename):
        self._filename = filename

    @contextlib.contextmanager
    def contextmanager(self):
        with tempfile.TemporaryFile() as temp_file:
            yield temp_file
            ...  # Post-processing you previously had in __exit__

with MyClass('filename') as x:
    print(x)
I wish this was in the standard library...
I have some way of building a data structure (out of some file contents, say):
def loadfile(FILE):
    return # some data structure created from the contents of FILE
So I can do things like
puppies = loadfile("puppies.csv") # wait for loadfile to work
kitties = loadfile("kitties.csv") # wait some more
print(len(puppies))
print(puppies[32])
In the above example, I wasted a bunch of time actually reading kitties.csv and creating a data structure that I never used. I'd like to avoid that waste without constantly checking if not kitties whenever I want to do something. I'd like to be able to do
puppies = lazyload("puppies.csv") # instant
kitties = lazyload("kitties.csv") # instant
print(len(puppies)) # wait for loadfile
print(puppies[32])
So if I don't ever try to do anything with kitties, loadfile("kitties.csv") never gets called.
Is there some standard way to do this?
After playing around with it for a bit, I produced the following solution, which appears to work correctly and is quite brief. Are there some alternatives? Are there drawbacks to using this approach that I should keep in mind?
class lazyload:
    def __init__(self, FILE):
        self.FILE = FILE
        self.F = None

    def __getattr__(self, name):
        if not self.F:
            print("loading %s" % self.FILE)
            self.F = loadfile(self.FILE)
        return object.__getattribute__(self.F, name)
What might be even better is if something like this worked:
class lazyload:
    def __init__(self, FILE):
        self.FILE = FILE

    def __getattr__(self, name):
        self = loadfile(self.FILE)  # this never gets called again
                                    # since self is no longer a
                                    # lazyload instance
        return object.__getattribute__(self, name)
But this doesn't work because self is local. It actually ends up calling loadfile every time you do anything.
The csv module in the Python standard library will not load the data until you start iterating over it, so it is in fact lazy.
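For example, a minimal sketch (assuming a kitties.csv file exists; the reader pulls rows from the file only as you iterate):

import csv

with open('kitties.csv', newline='') as f:
    reader = csv.reader(f)    # nothing has been read yet
    first_row = next(reader)  # only now is the first line consumed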
Edit: If you need to read through the whole file to build the datastructure, having a complex Lazy load object that proxies things is overkill. Just do this:
class Lazywrapper(object):
    def __init__(self, filename):
        self.filename = filename
        self._data = None

    def get_data(self):
        if self._data is None:
            self._build_data()
        return self._data

    def _build_data(self):
        # Now open and iterate over the file to build a datastructure, and
        # put that datastructure as self._data
        ...
With the above class you can do this:
puppies = Lazywrapper("puppies.csv") # Instant
kitties = Lazywrapper("kitties.csv") # Instant
print(len(puppies.get_data())) # Wait
print(puppies.get_data()[32]) # instant
Also
allkitties = kitties.get_data() # wait
print(len(allkitties))
print(allkitties[32])
If you have a lot of data and you don't really need all of it, you could also implement a class that reads the file only until it finds the doggie called "Froufrou" and then stops. But at that point it's likely better to stick the data in a database once and for all and access it from there.
If you're really worried about the if statement, you can use a Stateful object.
from collections.abc import MutableMapping

class LazyLoad(MutableMapping):
    def __init__(self, source):
        self.source = source
        self.process = LoadMe(self)
        self.data = None

    def __getitem__(self, key):
        self.process = self.process.load()
        return self.data[key]

    def __setitem__(self, key, value):
        self.process = self.process.load()
        self.data[key] = value

    def __contains__(self, key):
        self.process = self.process.load()
        return key in self.data
This class delegates the work to a process object which is either a LoadMe or a
DoneLoading object. The LoadMe object will actually load. The DoneLoading
will not load.
Note that there are no if-statements.
class LoadMe(object):
    def __init__(self, parent):
        self.parent = parent

    def load(self):
        ## Actually load, setting self.parent.data
        return DoneLoading(self.parent)

class DoneLoading(object):
    def __init__(self, parent):
        self.parent = parent

    def load(self):
        return self
Wouldn't if not self.F lead to another call to __getattr__, putting you into an infinite loop? I think your approach makes sense, but to be on the safe side, I'd make that line into:
if name == "F" and not self.F:
Also, you could make loadfile a method on the class, depending on what you're doing.
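A sketch of that variant, with loadfile folded into the class (the parsing logic itself is elided):

class lazyload:
    def __init__(self, FILE):
        self.FILE = FILE
        self.F = None

    def loadfile(self):
        return ...  # build the data structure from self.FILE

    def __getattr__(self, name):
        if not self.F:
            self.F = self.loadfile()
        return object.__getattribute__(self.F, name)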
Here's a solution that uses a class decorator to defer initialisation until the first time an object is used:
def lazyload(cls):
    original_init = cls.__init__
    original_getattribute = cls.__getattribute__

    def newinit(self, *args, **kwargs):
        # Just cache the arguments for the eventual initialization.
        self._init_args = args
        self._init_kwargs = kwargs
        self.initialized = False
    newinit.__doc__ = original_init.__doc__

    def performinit(self):
        # We call object's __getattribute__ rather than super(...).__getattribute__
        # or original_getattribute so that no custom __getattribute__ implementations
        # can interfere with what we are doing.
        original_init(self,
                      *object.__getattribute__(self, "_init_args"),
                      **object.__getattribute__(self, "_init_kwargs"))
        del self._init_args
        del self._init_kwargs
        self.initialized = True

    def newgetattribute(self, name):
        if not object.__getattribute__(self, "initialized"):
            performinit(self)
        return original_getattribute(self, name)

    if hasattr(cls, "__getitem__"):
        original_getitem = cls.__getitem__
        def newgetitem(self, key):
            if not object.__getattribute__(self, "initialized"):
                performinit(self)
            return original_getitem(self, key)
        newgetitem.__doc__ = original_getitem.__doc__
        cls.__getitem__ = newgetitem

    if hasattr(cls, "__len__"):
        original_len = cls.__len__
        def newlen(self):
            if not object.__getattribute__(self, "initialized"):
                performinit(self)
            return original_len(self)
        newlen.__doc__ = original_len.__doc__
        cls.__len__ = newlen

    cls.__init__ = newinit
    cls.__getattribute__ = newgetattribute
    return cls
@lazyload
class FileLoader(dict):
    def __init__(self, filename):
        self.filename = filename
        print("Performing expensive load operation")
        self[32] = "Felix"
        self[33] = "Eeek"

kittens = FileLoader("kitties.csv")
print("kittens is instance of FileLoader: %s" % isinstance(kittens, FileLoader)) # Well obviously
print(len(kittens)) # Wait
print(kittens[32]) # No wait
print(kittens[33]) # No wait
print(kittens.filename) # Still no wait
print(kittens.filename)
The output:
kittens is instance of FileLoader: True
Performing expensive load operation
2
Felix
Eeek
kitties.csv
kitties.csv
I tried to actually restore the original magic methods after the initialization, but it wasn't working out. It may be necessary to proxy additional magic methods; I didn't investigate every scenario.
Note that kittens.initialized will always return True, because accessing it kicks off the initialization if it hasn't already been performed. Obviously it would be possible to add an exemption for this attribute so that it returns False if no other operation has been performed on the object; alternatively, the check could be changed to the equivalent of a hasattr call and the initialized attribute deleted after the initialization.
Here's a hack that makes the "even better" solution work, but I think it's annoying enough that it's probably better to just use the first solution. The idea is to execute the step self = loadfile(self.FILE) by passing the variable name as an argument:
class lazyload:
    def __init__(self, FILE, var):
        self.FILE = FILE
        self.var = var

    def __getattr__(self, name):
        x = loadfile(self.FILE)
        globals()[self.var] = x
        return object.__getattribute__(x, name)
Then you can do
kitties = lazyload("kitties.csv", "kitties")
   ^                                  ^
    \                                /
      These two better match exactly
After you call any method on kitties (aside from kitties.FILE or kitties.var), it will become completely indistinguishable from what you'd have gotten with kitties = loadfile("kitties.csv"). In particular, it will no longer be an instance of lazyload and kitties.FILE and kitties.var will no longer exist.
If you need to use puppies[32], you also need to define a __getitem__ method, because __getattr__ doesn't catch that behaviour.
I implemented lazy loading for my needs; here is the non-adapted code:
class lazy_mask(object):
    '''Fake object, which is substituted in
    place of masked object'''
    def __init__(self, master, id):
        self.master = master
        self.id = id
        self._result = None
        self.master.add(self)

    def _res(self):
        '''Run lazy job'''
        if not self._result:
            self._result = self.master.get(self.id)
        return self._result

    def __getattribute__(self, name):
        '''proxy all queries to masked object'''
        name = name.replace('_lazy_mask', '')
        # print('attr', name)
        if name in ['_result', '_res', 'master', 'id']:  # don't proxy requests for own properties
            return super(lazy_mask, self).__getattribute__(name)
        else:  # but proxy requests for masked object
            return self._res().__getattribute__(name)

    def __getitem__(self, key):
        '''provide object["key"] access. Else can raise
        TypeError: 'lazy_mask' object is unsubscriptable'''
        return self._res().__getitem__(key)
(master is a registry object that loads data when its get() method is run)
This implementation works OK with isinstance(), str(), and json.dumps().
I'd like to provide the capability for users of one of my modules to extend its capabilities by providing an interface to call a user's function. For example, I want to give users the capability to be notified when an instance of a class is created and given the opportunity to modify the instance before it is used.
The way I've implemented it is to declare a module-level factory function that does the instantiation:
# in mymodule.py
def factory(cls, *args, **kwargs):
    return cls(*args, **kwargs)
Then when I need an instance of a class in mymodule, I do factory(cls, arg1, arg2) rather than cls(arg1, arg2).
To extend it, a programmer would write in another module a function like this:
def myFactory(cls, *args, **kwargs):
    instance = myFactory.chain(cls, *args, **kwargs)
    # do something with the instance here if desired
    return instance
Installation of the above callback looks like this:
myFactory.chain, mymodule.factory = mymodule.factory, myFactory
This seems straightforward enough to me, but I was wondering if you, as a Python programmer, would expect a function to register a callback rather than doing it with an assignment, or if there were other methods you would expect. Does my solution seem workable, idiomatic, and clear to you?
I am looking to keep it as simple as possible; I don't think most applications will actually need to chain more than one user callback, for example (though unlimited chaining comes "for free" with the above pattern). I doubt they will need to remove callbacks or specify priorities or order. Modules like python-callbacks or PyDispatcher seem to me like overkill, especially the latter, but if there are compelling benefits to a programmer working with my module, I'm open to them.
Taking aaronsterling's idea a bit further:
from functools import reduce

class C(object):
    _oncreate = []

    def __new__(cls):
        return reduce(lambda x, y: y(x), cls._oncreate, super(C, cls).__new__(cls))

    @classmethod
    def oncreate(cls, func):
        cls._oncreate.append(func)

c = C()
print(hasattr(c, 'spew'))

@C.oncreate
def spew(obj):
    obj.spew = 42
    return obj

c = C()
print(c.spew)
Combining Aaron's idea of using a decorator and Ignacio's idea of a class that maintains a list of attached callbacks, plus a concept borrowed from C#, I came up with this:
class delegate(object):
    def __init__(self, func):
        self.callbacks = []
        self.basefunc = func

    def __iadd__(self, func):
        if callable(func):
            self.__isub__(func)
            self.callbacks.append(func)
        return self

    def callback(self, func):
        if callable(func):
            self.__isub__(func)
            self.callbacks.append(func)
        return func

    def __isub__(self, func):
        try:
            self.callbacks.remove(func)
        except ValueError:
            pass
        return self

    def __call__(self, *args, **kwargs):
        result = self.basefunc(*args, **kwargs)
        for func in self.callbacks:
            newresult = func(result)
            result = result if newresult is None else newresult
        return result
Decorating a function with @delegate allows other functions to be "attached" to it.

@delegate
def intfactory(num):
    return int(num)
Functions can be added to the delegate with += (and removed with -=). You can also decorate with funcname.callback to add a callback function.
@intfactory.callback
def notify(num):
    print("notify:", num)

def increment(num):
    return num + 1

intfactory += increment
intfactory += lambda num: num * 2

print(intfactory(3))   # outputs 8
Does this feel Pythonic?
I might use a decorator so that the user could just write:
@new_factory
def myFactory(cls, *args, **kwargs):
    instance = myFactory.chain(cls, *args, **kwargs)
    # do something with the instance here if desired
    return instance
Then in your module,
import sys

def new_factory(f):
    mod = sys.modules[__name__]
    f.chain = mod.factory
    mod.factory = f
    return f