I have a large script where I found that a lot of connections to a machine were being left open; the reason was that the destructor of one of the classes was never getting called.
Below is a simplified version of the script that manifests the issue.
I tried searching around and found that it could be because of the GC, and that weakref can help, but in this case it doesn't.
The two cases where I can see the destructor getting called are:
If I create the B_class object without passing an A_class function:
self.b = B_class("AA")
If I make the B_class object local instead of global, i.e. don't use self:
b = B_class("AA", self.myprint)
b.do_something()
Both of these cases would cause further issues in my case. The last resort would be to close/del the objects myself at the end, but I don't want to go that way.
Can anybody suggest a better way out of this and help me understand the issue? Thanks in advance.
import weakref

class A_class:
    def __init__(self, debug_level=1, version=None):
        self.b = B_class("AA", self.myprint)
        self.b.do_something()

    def myprint(self, text):
        print text

class B_class:
    def __init__(self, ip, printfunc=None):
        self.ip = ip
        self.new_ip = ip
        #self.printfunc = printfunc
        self.printfunc = weakref.ref(printfunc)()

    def __del__(self):
        print("##B_Class Destructor called##")

    def do_something(self, timeout=120):
        self.myprint("B_Class ip=%s!!!" % self.new_ip)

    def myprint(self, text):
        if self.printfunc:
            print("ExternalFUNC:%s" % text)
        else:
            print("JustPrint:%s" % text)

def _main():
    a = A_class()

if __name__ == '__main__':
    _main()
You're not using the weakref.ref object properly. You're calling it immediately after it is created, which returns the referred-to object (the function passed in as printfunc).
Normally, you'd want to save the weak reference and only call it when you're about to use the referred-to object (e.g. in myprint). However, that won't work for the bound method self.myprint you're getting passed in as printfunc, since the bound method object doesn't have any other references (every access to a method creates a new object).
If you're using Python 3.4 or later and you know that the object passed in will always be a bound method, you can use the WeakMethod class rather than a regular ref. If you're not sure what kind of callable you're going to get, you might need to do some type checking to see whether WeakMethod is required or not.
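For illustration, here's a minimal sketch of how B_class could store the callback with WeakMethod on Python 3.4+ (the hasattr(printfunc, '__self__') test for spotting bound methods is my own assumption about the callables involved):

import weakref

class B_class:
    def __init__(self, ip, printfunc=None):
        self.ip = ip
        self.new_ip = ip
        if printfunc is None:
            self.printfunc = None
        elif hasattr(printfunc, '__self__'):
            # Bound method: WeakMethod won't keep the owning object alive
            self.printfunc = weakref.WeakMethod(printfunc)
        else:
            # Plain function or other callable
            self.printfunc = weakref.ref(printfunc)

    def myprint(self, text):
        func = self.printfunc() if self.printfunc is not None else None
        if func is not None:
            print("ExternalFUNC:%s" % text)  # or actually call func(text)
        else:
            print("JustPrint:%s" % text)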
Use Python's "with" statement (http://www.python.org/dev/peps/pep-0343/).
It creates a syntactic scope and the __exit__ function which it creates is guaranteed to get called as soon as execution leaves the scope. You can also emulate "__enter__/__exit__" behavior by creating a generator with "contextmanager" decorator from the contextlib module (python 2.6+ or 2.5 using "from __future__ import with_statement" see PEP for examples).
Here's an example from the PEP:
import contextlib

@contextlib.contextmanager
def opening(filename):
    f = open(filename)  # IOError is untouched by GeneratorContext
    try:
        yield f
    finally:
        f.close()  # Ditto for errors here (however unlikely)
and then in your main code, you write
with opening(blahblahblah) as f:
    pass  # use f for something
# here you exited the with scope and f.close() got called
In your case, you'll want to use a different name (connecting or something) instead of "opening", and do the socket connecting/disconnecting inside your context manager.
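For instance, a minimal sketch of such a connecting context manager, assuming a plain TCP socket (the host and port below are placeholders):

import contextlib
import socket

@contextlib.contextmanager
def connecting(host, port):
    sock = socket.create_connection((host, port))
    try:
        yield sock
    finally:
        sock.close()  # guaranteed to run when the with block exits

# usage (placeholder address):
# with connecting("192.168.0.10", 2222) as sock:
#     sock.sendall(b"hello")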
self.printfunc = weakref.ref(printfunc)()
isn't actually using weakrefs to solve your problem; the line is effectively a no-op. You create a weakref with weakref.ref(printfunc), but you follow it with call parens, which converts back from the weakref to a strong ref that you store (and the weakref object promptly disappears). Apparently it's not possible to store a weakref to the bound method itself (because the bound method is its own object, created each time it's referenced on self, not a cached object whose lifetime is tied to self), so you have to get a bit hacky, unbinding the method so you can take weakrefs on the underlying function and instance themselves. Python 3.4 introduced WeakMethod to simplify this, but if you can't use that, then you're stuck doing it by hand.
Try changing it to (on Python 2.7, and you must import inspect):
# Must special case printfunc=None, since None is not weakref-able
if printfunc is None:
    # Nothing provided
    self.printobjref = self.printfuncref = None
elif inspect.ismethod(printfunc) and printfunc.im_self is not None:
    # Handling bound method
    self.printobjref = weakref.ref(printfunc.im_self)
    self.printfuncref = weakref.ref(printfunc.im_func)
else:
    self.printobjref = None
    self.printfuncref = weakref.ref(printfunc)
and change myprint to:
def myprint(self, text):
    if self.printfuncref is not None:
        printfunc = self.printfuncref()
        if printfunc is None:
            self.printfuncref = self.printobjref = None  # Ref died, so clear it to avoid rechecking later
        elif self.printobjref is not None:
            # Bound method not known to have disappeared
            printobj = self.printobjref()
            if printobj is not None:
                print("ExternalFUNC:%s" % text)  # To call it instead of just saying you have it, do printfunc(printobj, text)
                return
            self.printobjref = self.printfuncref = None  # Ref died, so clear it to avoid rechecking later
        else:
            print("ExternalFUNC:%s" % text)  # To call it instead of just saying you have it, do printfunc(text)
            return
    print("JustPrint:%s" % text)
Yeah, it's ugly. You could factor out the ugliness if you like (borrowing the implementation of WeakMethod from Python 3.4's source code would make sense, but the names would have to change: __self__ is im_self in Py2, __func__ is im_func), but it's unpleasant even so. It's definitely not thread safe if the weakrefs could actually go dark, since the checks and clears of the weakref members aren't protected.
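If you do factor it out, a minimal sketch of such a Py2 WeakMethod-alike could look like this (my own illustration, not the stdlib code, and still not thread safe):

import weakref

class WeakMethodPy2(object):
    """Weakly references a bound method by weakly referencing
    its im_self and im_func separately (Python 2 naming)."""
    def __init__(self, method):
        self.objref = weakref.ref(method.im_self)
        self.funcref = weakref.ref(method.im_func)

    def __call__(self):
        obj = self.objref()
        func = self.funcref()
        if obj is None or func is None:
            return None  # the target died
        # Rebind the function to the (still live) instance on demand
        return func.__get__(obj, obj.__class__)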
I have a singleton class and I do not understand why the Python garbage collector is not removing the instance.
I'm using from singleton_decorator import singleton.
An example of my class:
from singleton_decorator import singleton

@singleton
class FilesRetriever:
    def __init__(self, testing_mode: bool = False):
        self.testing_mode = testing_mode
test example:
import gc

def test_singletone(self):
    FilesRetriever(testing_mode=True)
    mode = FilesRetriever().testing_mode
    print("mode 1:" + str(mode))
    mode = FilesRetriever().testing_mode
    print("mode 2:" + str(mode))
    count_before = gc.get_count()
    gc.collect()
    count_after = gc.get_count()
    mode = FilesRetriever().testing_mode
    print("mode 3:" + str(mode))
    print("count_before:" + str(count_before))
    print("count_after:" + str(count_after))
test output:
mode 1:True
mode 2:True
mode 3:True
count_before:(306, 10, 5)
count_after:(0, 0, 0)
I would expect that after the garbage collector runs automatically, or after I run it in my test, the instance of _SingletonWrapper (the class in the decorator implementation) would be removed, because nothing is pointing to it, and that print("mode 3:" + str(mode)) would then print False, since that is the default (the instance having been re-created).
So the code and the garbage collection are working as intended. Look at the code for the singleton decorator you are referring to.
Just because you call gc.collect() and your code isn't holding a reference somewhere doesn't mean other code isn't.
The decorator creates an instance and then stores that instance in a variable inside the decorator. So even though you collected relative to your code, their code is still holding a reference to that instance, and so it doesn't get collected.
This is expected behavior for a singleton, since that is its whole purpose: to store an instance somewhere so it can be retrieved and reused instead of creating a new instance every time. You wouldn't want that instance to be trashed unless you needed to replace it.
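The gist of it is something like this rough sketch (illustrative only; the real singleton_decorator differs in its details):

class _SingletonWrapper(object):
    def __init__(self, cls):
        self.__wrapped__ = cls
        self._instance = None

    def __call__(self, *args, **kwargs):
        # The first call creates the instance; the wrapper then holds
        # a strong reference to it, so gc.collect() can never free it.
        if self._instance is None:
            self._instance = self.__wrapped__(*args, **kwargs)
        return self._instance

def singleton(cls):
    return _SingletonWrapper(cls)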
To answer your comment:
No, you are not getting the instance of _SingletonWrapper. When you write FilesRetriever(), what you're actually doing is invoking the __call__ method of the instance of _SingletonWrapper. When you use @singleton, it returns an instance, not the class object.
Again, just because your code doesn't store it anywhere doesn't mean it isn't stored somewhere else. When you define a class, what you are doing, in a sense, is creating a variable in the global scope of the module that holds the class definition. So in the global scope your code has something like this:
FilesRetriever = (class:
    def __init__(self):
        blahblahblah
So now your class definition is stored in a variable called FilesRetriever.
Now you're using a decorator, so based on the code in the singleton decorator it looks like this:
FilesRetriever = _SingletonWrapper(class: blahblah)
Now your class is wrapped and stored in the variable FilesRetriever.
You invoke _SingletonWrapper.__call__() when you run FilesRetriever().
Because __call__ is an instance method, it can hold a reference to your original class and to the instance of the class you declared, so even if you remove all your references to that class, this code is still holding that reference.
If you truly want to remove all references to your singleton (I'm not sure why you would want to), you need to remove all references to the wrapper as well as to your class. So something like FilesRetriever = None might cause the gc to collect it, but you would lose your original class definition in the process.
In the following code,
# An example class with some variable and a method
class ExampleClass(object):
    def __init__(self):
        self.var = 10

    def dummyPrint(self):
        print('Hello World!')

# Creating instance and printing the init variable
inst_a = ExampleClass()

# This prints --> __init__ variable = 10
print('__init__ variable = %d' % (inst_a.var))

# This prints --> Hello World!
inst_a.dummyPrint()

# Creating a new attribute and printing it.
# This prints --> New variable = 20
inst_a.new_var = 20
print('New variable = %d' % (inst_a.new_var))

# Trying to call a new method, which will raise an error
inst_a.newDummyPrint()
I am able to create a new attribute (new_var) outside the class, using the instance, and it works. Ideally, I was expecting it not to work.
Similarly, I tried creating a new method (newDummyPrint()), which raised AttributeError: 'ExampleClass' object has no attribute 'newDummyPrint', as I expected.
My questions are:
Why did creating a new attribute work?
Why didn't creating a new method work?
As already mentioned in the comments, you are creating the new attribute here:
inst_a.new_var = 20
before reading it on the next line. You're NOT assigning newDummyPrint anywhere, so obviously the attribute resolution mechanism cannot find it and ends up raising an AttributeError. You'd get the very same result if you tried to access any other non-existing attribute, e.g. inst_a.whatever.
Note that since in Python everything is an object (including classes, functions, etc.), there is no real distinction between accessing a "data" attribute and a method - they are all attributes (whether class or instance ones), and the attribute resolution rules are the same. In the case of methods (or any other callable attribute), the call operation happens after the attribute has been resolved.
To dynamically create a new "method", you mainly have two solutions: create it as a class attribute (which will make it available to all other instances of the class), or as an instance attribute (which will - obviously - make it available only on this exact instance).
The first solution is as simple as it can be: define your function and bind it to the class:
# nb: inheriting from `object` for py2 compat
class Foo(object):
    def __init__(self, var):
        self.var = var

def bar(self, x):
    return self.var * x

# testing before:
f = Foo(42)
try:
    print(f.bar(2))
except AttributeError as e:
    print(e)

# now bind the function to the class:
Foo.bar = bar

# and test it:
print(f.bar(2))

# and it's also available on other instances:
f2 = Foo(6)
print(f2.bar(7))
Creating a per-instance method is a (very tiny) bit more involved - you have to manually get the method from the function and bind it to the instance:
def baaz(self):
    return "{}.var = {}".format(self, self.var)

# test before:
try:
    print(f.baaz())
except AttributeError as e:
    print(e)

# now bind the method to the instance
f.baaz = baaz.__get__(f, Foo)

# now `f` has a `baaz` method
print(f.baaz())

# but other Foo instances don't
try:
    print(f2.baaz())
except AttributeError as e:
    print(e)
You'll notice I talked about functions in the first case and methods in the second. A Python "method" is actually just a thin callable wrapper around a function, an instance, and a class, provided by the function type through the descriptor protocol - which is automagically invoked when the attribute is resolved on the class itself (i.e. it is a class attribute implementing the descriptor protocol), but not when it is resolved on the instance. That is why, in the second case, we have to invoke the descriptor protocol manually.
Also note that there are limitations on what's possible here: first, __magic__ methods (all methods named with two leading and two trailing underscores) are only looked up on the class itself, so you cannot define them on a per-instance basis. Second, slots-based types and some builtin or C-coded types do not support dynamic attributes at all. These restrictions are mainly there for performance optimization reasons.
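A quick demonstration of both limitations (a minimal sketch; the class names are made up):

class Slotted(object):
    __slots__ = ('x',)  # no per-instance __dict__

s = Slotted()
try:
    s.y = 1  # dynamic attributes are not supported
except AttributeError as e:
    print(e)

class Plain(object):
    pass

p = Plain()
p.__len__ = lambda: 3  # stored on the instance, but never used by len()
try:
    len(p)  # magic methods are looked up on the class only
except TypeError as e:
    print(e)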
You can create new attributes on the fly when you are using an empty class definition to emulate a Pascal "record" or C "struct". Otherwise, what you are trying to do is not good practice, nor a good pattern for object-oriented programming; there are lots of books you can read about that. Generally speaking, the class definition should tell clearly what an object of that class is and how it behaves: modifying its behavior on the fly (e.g. adding new methods) can lead to unknown results, which make your life impossible when reading that code a month later, and even worse when you are debugging.
There is even an anti-pattern for this, called Ambiguous Viewpoint:
Lack of clarification of the modeling viewpoint leads to problematic ambiguities in object models.
Anyway, if you are playing with Python and you swear you'll never use this code in production, you can write new attributes which store lambda functions, e.g.
c = ExampleClass()
c.newMethod = lambda s1, s2: str(s1) + ' and ' + str(s2)
print(c.newMethod('string1', 'string2'))
# output is: string1 and string2
but this is very ugly, I would never do it.
I wrote something like this today (not unlike the example in the mpl_connect documentation):
class Foo(object):
    def __init__(self): print 'init Foo', self
    def __del__(self): print 'del Foo', self
    def callback(self, event=None): print 'Foo.callback', self, event

from pylab import *

fig = figure()
plot(randn(10))
cid = fig.canvas.mpl_connect('button_press_event', Foo().callback)
show()
This looks reasonable, but it doesn't work -- it's as though matplotlib loses track of the function I've given it. If instead of passing it Foo().callback I pass it lambda e: Foo().callback(e), it works. Similarly, if I say x = Foo() and then pass it x.callback, it works.
My presumption is that the unnamed Foo instance created by Foo() is immediately destroyed after the mpl_connect line -- that matplotlib holding the Foo.callback reference doesn't keep the Foo alive. Is that correct?
In the non-toy code where I encountered this, the solution of x = Foo() didn't work, presumably because show() was elsewhere, so x had gone out of scope.
More generally, Foo().callback is a <bound method Foo.callback of <__main__.Foo object at 0x03B37890>>. My primary surprise is that a bound method doesn't seem to actually keep a reference to its object. Is that correct?
Yes, a bound method references the object - the object is the value of a bound method object's .im_self attribute.
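You can check this directly (Python 2 naming; on Python 3 the attribute is __self__):

class Foo(object):
    def callback(self, event=None):
        pass

f = Foo()
m = f.callback         # a new bound method object
print(m.im_self is f)  # True: the bound method keeps f alive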
So I'm wondering whether matplotlib's mpl_connect() is remembering to increment the reference counts on the arguments passed to it. If not (and this is a common error), then there's nothing to keep the anonymous Foo().callback alive when mpl_connect() returns.
If you have easy access to the source code, take a look at the mpl_connect() implementation? You want to see C code doing Py_INCREF() ;-)
EDIT: This looks relevant, from the docs here:
The canvas retains only weak references to the callbacks. Therefore if a callback is a method of a class instance, you need to retain a reference to that instance. Otherwise the instance will be garbage-collected and the callback will vanish.
So it's your fault - LOL ;-)
Here's the justification from matplotlib.cbook.CallbackRegistry.__doc__:
In practice, one should always disconnect all callbacks when they are no longer needed to avoid dangling references (and thus memory leaks). However, real code in matplotlib rarely does so, and due to its design, it is rather difficult to place this kind of code. To get around this, and prevent this class of memory leaks, we instead store weak references to bound methods only, so when the destination object needs to die, the CallbackRegistry won't keep it alive. The Python stdlib weakref module can not create weak references to bound methods directly, so we need to create a proxy object to handle weak references to bound methods (or regular free functions). This technique was shared by Peter Parente on his "Mindtrove" blog (http://mindtrove.info/articles/python-weak-references/).
It's a shame there isn't an official way to bypass this behavior.
Here's a kludge to get around it, which is dirty but may be OK for non-production test/diagnostic code: attach the Foo to the figure:
fig._dont_forget_this = Foo()
cid = fig.canvas.mpl_connect('button_press_event', fig._dont_forget_this.callback)
This still leaves the question of why lambda e: Foo().callback(e) works. It obviously makes a new Foo on every call, but why doesn't the lambda itself get garbage collected? Is the fact that it works just a case of undefined behavior?
I've gotten myself in trouble a few times now by accidentally (unintentionally) referencing global variables in a function or method definition.
My question is: is there any way to disallow Python from letting me reference a global variable? Or at least warn me that I am referencing one?
x = 123

def myfunc():
    print x  # throw a warning or something!!!
Let me add that the typical situation where this arises for me is using IPython as an interactive shell. I use execfile to execute a script that defines a class. In the interpreter, I access the class variable directly to do something useful, then decide I want to add that as a method in my class. When I was in the interpreter, I was referencing the class variable; however, when it becomes a method, it needs to reference self. Here's an example.
class MyClass:
    a = 1
    b = 2

    def add(self):
        return a + b

m = MyClass()
Now in my interpreter I run execfile('script.py'), inspect my class, and type m.a * m.b. I decide that would be a useful method to have, so I modify my code as follows, with the unintentional copy/paste error:
class MyClass:
    a = 1
    b = 2

    def add(self):
        return a + b

    def mult(self):
        return m.a * m.b  # I really meant this to be self.a * self.b
This of course still executes in IPython, but it can really confuse me since it is now referencing the previously defined global variable!
Maybe someone has a suggestion given my typical IPython workflow.
First, you probably don't want to do this. As Martijn Pieters points out, many things, like top-level functions and classes, are globals.
You could filter this for only non-callable globals. Functions, classes, builtin-function-or-methods that you import from a C extension module, etc. are callable. You might also want to filter out modules (anything you import is a global). That still won't catch cases where you, say, assign a function to another name after the def. You could add some kind of whitelisting for that (which would also allow you to create global "constants" that you can use without warnings). Really, anything you come up with will be a very rough guide at best, not something you want to treat as an absolute warning.
Also, no matter how you do it, detecting implicit global access but not explicit access (via a global statement) is going to be very hard, so hopefully that distinction isn't important.
There is no obvious way to detect all implicit uses of global variables at the source level.
However, it's pretty easy to do with reflection from inside the interpreter.
The documentation for the inspect module has a nice chart that shows you the standard members of various types. Note that some of them have different names in Python 2.x and Python 3.x.
This function will get you a list of all the global names accessed by a bound method, unbound method, function, or code object in both versions:
def get_globals(thing):
    thing = getattr(thing, 'im_func', thing)
    thing = getattr(thing, '__func__', thing)
    thing = getattr(thing, 'func_code', thing)
    thing = getattr(thing, '__code__', thing)
    return thing.co_names
If you want to only handle non-callables, you can filter it:
def get_callable_globals(thing):
    thing = getattr(thing, 'im_func', thing)
    func_globals = getattr(thing, 'func_globals', {})
    thing = getattr(thing, 'func_code', thing)
    return [name for name in thing.co_names
            if callable(func_globals.get(name))]
This isn't perfect (e.g., if a function's globals have a custom builtins replacement, we won't look it up properly), but it's probably good enough.
A simple example of using it:
>>> def foo(myparam):
...     myglobal
...     mylocal = 1
...
>>> print get_globals(foo)
('myglobal',)
And you can pretty easily import a module and recursively walk its callables and call get_globals() on each one, which will work for the major cases (top-level functions, and methods of top-level and nested classes), although it won't work for anything defined dynamically (e.g., functions or classes defined inside functions).
If you only care about CPython, another option is to use the dis module to scan all the bytecode in a module, or .pyc file (or class, or whatever), and log each LOAD_GLOBAL op.
One major advantage of this over the inspect method is that it will find functions that have been compiled, even if they haven't been created yet.
The disadvantage is that there is no way to look up the names' values (how could there be, if some of them haven't even been created yet?), so you can't easily filter out callables. You can try to do something fancy, like connecting LOAD_GLOBAL ops to the corresponding CALL_FUNCTION (and related) ops, but… that's starting to get pretty complicated.
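As a starting point, here is a sketch of the simple version on Python 3.4+ (dis.get_instructions doesn't exist on Python 2, and SOME_GLOBAL is a made-up name):

import dis

def report_global_loads(func):
    # Log every LOAD_GLOBAL in the function's bytecode
    for instr in dis.get_instructions(func.__code__):
        if instr.opname == 'LOAD_GLOBAL':
            print('global load: %s' % instr.argval)

def example(x):
    return x + SOME_GLOBAL  # never defined anywhere

report_global_loads(example)  # -> global load: SOME_GLOBAL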
Finally, if you want to hook things dynamically, you can always replace globals with a wrapper that warns every time you access it. For example:
class GlobalsWrapper(collections.MutableMapping):
    def __init__(self, globaldict):
        self.globaldict = globaldict

    # ... implement at least __setitem__, __delitem__, __iter__, __len__
    # in the obvious way, by delegating to self.globaldict

    def __getitem__(self, key):
        print >>sys.stderr, 'Warning: accessing global "{}"'.format(key)
        return self.globaldict[key]

globals_wrapper = GlobalsWrapper(globals())
Again, you can filter on non-callables pretty easily:
def __getitem__(self, key):
    value = self.globaldict[key]
    if not callable(value):
        print >>sys.stderr, 'Warning: accessing global "{}"'.format(key)
    return value
Obviously for Python 3 you'd need to change the print statement to a print function call.
You can also raise an exception instead of warning pretty easily. Or you might want to consider using the warnings module.
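For instance, the filtered __getitem__ above could use warnings.warn instead of printing (a sketch that works on both Python 2 and 3):

import warnings

def __getitem__(self, key):
    value = self.globaldict[key]
    if not callable(value):
        # stacklevel=2 points the warning at the caller's line
        warnings.warn('accessing global %r' % (key,), stacklevel=2)
    return value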
You can hook this into your code in various different ways. The most obvious one is an import hook that gives each new module a GlobalsWrapper around its normally-built globals. I'm not sure how that would interact with C extension modules, but my guess is that it would either work or be harmlessly ignored, either of which is probably fine. The only problem is that this won't affect your top-level script. If that's important, you can write a wrapper script that execfiles the main script with a GlobalsWrapper, or something like that.
I've been struggling with a similar challenge (especially in Jupyter notebooks) and created a small package to limit the scope of functions.
>>> from localscope import localscope
>>> a = 'hello world'
>>> @localscope
... def print_a():
...     print(a)
Traceback (most recent call last):
  ...
ValueError: `a` is not a permitted global
The @localscope decorator uses Python's disassembler to find all instructions in the decorated function that use a LOAD_GLOBAL (global variable access) or LOAD_DEREF (closure access) opcode. If the variable to be loaded is a builtin function, is explicitly listed as an exception, or satisfies a predicate, the variable is permitted. Otherwise, an exception is raised.
Note that the decorator analyses the code statically; consequently, it does not have access to the values of variables accessed by closure.
I've got a bunch of functions (outside of any class) where I've set attributes on them, like funcname.fields = 'xxx'. I was hoping I could then access these variables from inside the function with self.fields, but of course it tells me:
global name 'self' is not defined
So... what can I do? Is there some magic variable I can access? Like __this__.fields?
A few people have asked "why?". You will probably disagree with my reasoning, but I have a set of functions that all must share the same signature (accept only one argument). For the most part, this one argument is enough to do the required computation. However, in a few limited cases, some additional information is needed. Rather than forcing every function to accept a long list of mostly unused variables, I've decided to just set them on the function so that they can easily be ignored.
Although, it occurs to me now that you could just use **kwargs as the last argument if you don't care about the additional args. Oh well...
Edit: Actually, some of the functions I didn't write, and would rather not modify to accept the extra args. By "passing in" the additional args as attributes, my code can work both with my custom functions that take advantage of the extra args, and with third party code that don't require the extra args.
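For instance, the pattern I mean looks roughly like this (a sketch; all the names are made up):

def simple_handler(arg):
    # third-party style: knows nothing about the extra information
    return arg * 2

def fancy_handler(arg):
    # reads extra information off itself instead of taking more args
    return arg * fancy_handler.scale

fancy_handler.scale = 10  # "passed in" as an attribute by the caller

for handler in (simple_handler, fancy_handler):
    print(handler(3))  # 6, then 30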
Thanks for the speedy answers :)
self isn't a keyword in Python, it's just a normal variable name. When creating instance methods, you can name the first parameter whatever you want; self is just a convention.
You should almost always prefer passing arguments to functions over setting properties for input, but if you must, you can use the function's actual name to access variables within it:
def a():
    if a.foo:
        # blah
        pass

a.foo = False
a()
see python function attributes - uses and abuses for when this comes in handy. :D
def foo():
    print(foo.fields)

foo.fields = [1, 2, 3]
foo()
# [1, 2, 3]
There is nothing wrong with adding attributes to functions. Many memoizers use this to cache results in the function itself.
For example, notice the use of func.cache:
from decorator import decorator

@decorator
def memoize(func, *args, **kw):
    # Author: Michele Simoniato
    # Source: http://pypi.python.org/pypi/decorator
    if not hasattr(func, 'cache'):
        func.cache = {}
    if kw:  # frozenset is used to ensure hashability
        key = args, frozenset(kw.iteritems())
    else:
        key = args
    cache = func.cache  # attribute added by memoize
    if key in cache:
        return cache[key]
    else:
        cache[key] = result = func(*args, **kw)
        return result
You can't do that "function accessing its own attributes" correctly for all situations - see for details here how can python function access its own attributes? - but here is a quick demonstration:
>>> def f(): return f.x
...
>>> f.x = 7
>>> f()
7
>>> g = f
>>> g()
7
>>> del f
>>> g()
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "<interactive input>", line 1, in f
NameError: global name 'f' is not defined
Basically, most methods directly or indirectly rely on accessing the function object through a lookup by name in globals; if the original function name is deleted, this stops working. There are other kludgey ways of accomplishing this, like defining a class or a factory - but thanks to your explanation it is clear you don't really need that.
Just use the keyword catch-all argument mentioned, like so:

def fn1(oneArg):
    pass  # do the deed

def fn2(oneArg, **kw):
    if 'option1' in kw:
        print 'called with option1=', kw['option1']
    # do the rest

fn2(42)
fn2(42, option1='something')
Not sure what you mean in your comment about handling TypeError - that won't arise when using **kw. This approach works very well with some Python system functions - check min(), max(), sort(). Recently sorted(dct, key=dct.get, reverse=True) came in very handy for me in a CodeGolf challenge :)
Example:
>>> def x(): pass
>>> x
<function x at 0x100451050>
>>> x.hello = "World"
>>> x.hello
"World"
You can set attributes on functions, as these are just plain objects, but I've actually never seen something like this in real code.
Plus, self is not a keyword, just another variable name, which happens to be the particular instance of the class. self is passed implicitly, but received explicitly.
If you want globally set parameters for a callable 'thing', you could always create a class and implement the __call__ method.
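A minimal sketch of that idea (the class and parameter names are made up):

class Greeter(object):
    def __init__(self, greeting):
        self.greeting = greeting  # the "globally set" parameter

    def __call__(self, name):
        return "%s, %s!" % (self.greeting, name)

greet = Greeter("Hello")
print(greet("World"))  # Hello, World!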
There is no special way, within a function's body, to refer to the function object whose code is executing. Simplest is just to use funcname.field (with funcname being the function's name within the namespace it's in, which you indicate is the case -- it would be harder otherwise).
This isn't something you should do. I can't think of any way to do what you're asking except some walking around on the call stack and some weird introspection -- which isn't something that should happen in production code.
That said, I think this actually does what you asked:
import inspect

_code_to_func = dict()

def enable_function_self(f):
    _code_to_func[f.func_code] = f
    return f

def get_function_self():
    f = inspect.currentframe()
    code_obj = f.f_back.f_code
    return _code_to_func[code_obj]

@enable_function_self
def foo():
    me = get_function_self()
    print me

foo()
While I agree with the rest that this is probably not good design, the question did intrigue me. Here's my first solution, which I may update once I get decorators working. As it stands, it relies pretty heavily on being able to read the stack, which may not be possible in all implementations (something about sys._getframe() not necessarily being present...).
import sys, inspect

def cute():
    this = sys.modules[__name__].__dict__.get(inspect.stack()[0][3])
    print "My face is..." + this.face

cute.face = "very cute"
cute()
What do you think? :3
You could use the following (hideously ugly) code:
class Generic_Object(object):
    pass

def foo(a1, a2, self=Generic_Object()):
    self.args = (a1, a2)
    print "len(self.args):", len(self.args)
    return None
... as you can see, it would allow you to use "self" as you described. You can't use an object() directly because you can't "monkey-patch"(*) values onto an object() instance. However, normal subclasses of object (such as the Generic_Object I've shown here) can be monkey-patched.
If you wanted to always call your function with a reference to some object as the first argument, that would be possible. You could put the defaulted argument first, followed by *args and optional **kwargs parameters (through which any other arguments or dictionaries of options could be passed during calls to this function).
This is, as I said, hideously ugly. Please don't ever publish any code like this or share it with anyone in the Python community. I'm only showing it here as a sort of strange educational exercise.
An instance method is like a function in Python. However, it exists within the namespace of a class (thus it must be accessed via an instance ... myobject.foo() for example) and it is called with a reference to "self" (analogous to the "this" pointer in C++) as the first argument. Also, there's a method resolution process which causes the interpreter to search the namespace of the instance, then its class, and then each of the parent classes and so on ... up through the inheritance tree.
An unbound function is called with whatever arguments you pass to it. There can't be any sort of automatically prepended object/instance reference in the argument list. Thus, writing a function with an initial argument named "self" is meaningless. (It's legal because Python doesn't place any special meaning on the name "self", but meaningless because callers of your function would have to manually supply some sort of object reference to the argument list, and it's not at all clear what that should be. Just some bizarre "Generic_Object" which then floats around in the global variable space?)
I hope that clarifies things a bit. It sounds like you're suffering from some very fundamental misconceptions about how Python and other object-oriented systems work.
("Monkey-patching" is a term used to describe the direct manipulation of an object's attributes -- or "instance variables" -- by code that is not part of the class hierarchy of which the object is an instance.)
As another alternative, you can make the functions into bound class methods like so:
class _FooImpl(object):
    a = "Hello "

    @classmethod
    def foo(cls, param):
        return cls.a + param

foo = _FooImpl.foo

# later...
print foo("World")  # yes, Hello World

# and if you have to change an attribute:
foo.im_self.a = "Goodbye "
If you want functions to share an attribute namespace, you just make them part of the same class. If not, give each its own class.
What exactly are you hoping "self" would point to, if the function is defined outside of any class? If your function needs some global information to execute properly, you need to send that information to the function in the form of an argument.
If you want your function to be context-aware, you need to declare it within the scope of an object.