If someone writes a class in Python and fails to specify their own __repr__() method, a default one is provided for them. But suppose we want to write a function that reproduces the default __repr__() behavior even when the actual __repr__() for the class has been overloaded. That is, the function should behave like the default __repr__() regardless of whether anyone has overloaded the __repr__() method or not. How might we do it?
class DemoClass:
    def __init__(self):
        self.var = 4

    def __repr__(self):
        return str(self.var)

def true_repr(x):
    # [magic happens here]
    s = "I'm not implemented yet"
    return s

obj = DemoClass()
print(obj.__repr__())
print(true_repr(obj))
Desired Output:
print(obj.__repr__()) prints 4, but print(true_repr(obj)) prints something like:
<__main__.DemoClass object at 0x0000000009F26588>
You can use object.__repr__(obj). This works because the default repr behavior is defined in object.__repr__.
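For example, with the DemoClass from the question:

```python
class DemoClass:
    def __init__(self):
        self.var = 4

    def __repr__(self):
        return str(self.var)

obj = DemoClass()
print(repr(obj))             # "4", from the overridden __repr__
print(object.__repr__(obj))  # the default "<__main__.DemoClass object at 0x...>" form
```

Because object is the root of every class's MRO, object.__repr__ is always available, whatever the class itself overrides.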
Note, the best answer is probably just to use object.__repr__ directly, as the others have pointed out. But one could implement that same functionality roughly as:
>>> def true_repr(x):
...     type_ = type(x)
...     module = type_.__module__
...     qualname = type_.__qualname__
...     return f"<{module}.{qualname} object at {hex(id(x))}>"
...
So, given a class A whose __repr__ has been overridden:

>>> class A:
...     def __repr__(self):
...         return 'hahahahaha'
...
>>> A()
hahahahaha
>>> true_repr(A())
'<__main__.A object at 0x106549208>'
>>>
Typically we can use object.__repr__ for that, but this will do the "object" repr for every item, so:
>>> object.__repr__(4)
'<int object at 0xa6dd20>'
Since an int is an object, but with __repr__ overridden.
If you want to go up one level of overriding, we can use super(..):
>>> super(type(4), 4).__repr__() # going up one level
'<int object at 0xa6dd20>'
For an int this again means that we will print <int object at ...>, but if we, for instance, subclass int, then it would use the __repr__ of int again, like:
class special_int(int):
    def __repr__(self):
        return 'Special int'
Then it will look like:
>>> s = special_int(4)
>>> super(type(s), s).__repr__()
'4'
What we do here is create a proxy object with super(..). super will walk the method resolution order (MRO) of the object and find the first superclass of type(s) that overrides the function. With single inheritance, that is the closest parent that overrides the function; if multiple inheritance is involved, it is more tricky. We thus select the __repr__ of that parent and call it.
This is also a rather unusual application of super, since usually the class (here type(s)) is fixed and does not depend on the type of s itself; otherwise multiple such super(..) calls would result in an infinite loop.
But usually it is a bad idea to break overriding anyway. The reason a programmer overrides a function is to change the behavior. Not respecting this can occasionally yield something useful, but frequently it means the code's contracts are no longer satisfied. For example, a programmer who overrides __eq__ will usually also override __hash__; if you use the hash from another class together with the real __eq__, things will start breaking.
Calling magic (dunder) methods directly is also frequently seen as an antipattern, so you had better avoid that as well.
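To make the contract point concrete, here is a small illustration (with a made-up Point class, not from the question) of how bypassing an overridden __eq__ gives answers that disagree with the class's own notion of equality:

```python
class Point:
    def __init__(self, x):
        self.x = x

    def __eq__(self, other):
        return isinstance(other, Point) and self.x == other.x

    def __hash__(self):
        return hash(self.x)

p, q = Point(1), Point(1)
print(p == q)               # True: the overridden __eq__ compares values
print(hash(p) == hash(q))   # True: __hash__ is consistent with __eq__
print(object.__eq__(p, q))  # NotImplemented: the default comparison is by identity
```

Any code that mixes object.__eq__ with the class's own __hash__ (or vice versa) is operating on two incompatible notions of equality.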
Related
I am a novice in Python, working on extending an older module. So far it had a function that returned a str (the output of a blocking shell command). Now I need that function to also be able to return an object so later operations can be done on it (checking the output of a non-blocking shell command). So the function now returns an instance of my class, which I subclassed from str for backward compatibility. The problem, however, is that when such an object is passed to os.path.isdir, it always returns False, even when the string is a valid path:
import os

class ShellWrap(str):
    def __new__(cls, dummy_str_value, process_handle):
        return str.__new__(cls, "")

    def __init__(self, dummy_str_value, process_handle):
        self._ph = process_handle
        self._output_str = ""

    def wait_for_output(self):
        # for simplicity just do
        self._output_str = "/Users"

    def __str__(self):
        return str(self._output_str)

    def __repr__(self):
        return str(self._output_str)

    def __eq__(self, other):
        if isinstance(other, str):
            return other == str(self._output_str)
        else:
            return super().__eq__(other)
>>> obj = ShellWrap("", None)
>>> obj.wait_for_output()
>>> print(type(obj))
<class '__main__.ShellWrap'>
>>> print(ShellWrap.__mro__)
(<class '__main__.ShellWrap'>, <class 'str'>, <class 'object'>)
>>> print(type(obj._output_str))
<class 'str'>
>>> print(obj)
/Users
>>> print(obj._output_str)
/Users
>>> obj == "/Users"
True
The one that puzzles me is:
>>> print(os.path.isdir(obj))
False    <<-- This one puzzles me
>>> print(os.path.isdir("/Users"))
True
I tried to add PathLike inheritance and implement one more dunder, but to no avail:
class ShellWrap(str, PathLike):
    ...
    def __fspath__(self):
        return self._output_str
It seems there is one more dunder that I failed to implement. But which?
I do see, however, something strange in the debugger. When I put a watch on obj, it says it is of class str, but the value shown by the debugger has no quotes (unlike other 'pure' strs).
Adding quotes manually to the string in the debugger makes it work, but I guess editing the string creates a new object, this time a pure str.
What do I miss?
Edit: after realizing (see the accepted answer) that what I am trying to do is impossible, I decided to challenge the decision to subclass str. Now my class does not inherit from anything; it just implements __str__, __repr__ and __fspath__, and this seems to be enough! Apparently, as long as the str inheritance is there, it takes precedence: the dunders don't get called, and some libraries go straight to the underlying C storage of the str value.
Consider the source of os.path.isdir. When you pass in obj, you’re probably triggering that value error because the string you want to evaluate is an attribute of your string subclass, not the string the subclass is supposed to represent. You’ll have to muck around a bit more in the source for str to find the right member to override.
Edit: one possible way around this is to use __init__ dynamically. That is, get everything you need done to render the path string in __new__, and before you return the instance from that method, set output_str as an attribute. Then, in your __init__, call super().__init__ with self.output_str as the only argument.
What you're trying to do is impossible.
C code working with a string accesses the actual string data managed by the str class, not the methods you're writing. It doesn't care that you attached another string to your object as an attribute, or that you overrode a bunch of methods. It's closer to str.__whatever__(your_obj) than your_obj.__whatever__(), although it doesn't go through method calls at all.
In this case, the relevant C code is the os.stat call that os.path.isdir delegates to, but almost anything that uses strings is going to use something written in C that accesses the str data directly at some point.
You want your object's data to be mutable - wait_for_output is mutative - but you cannot mutate the parts of your object inherited from str, and that's the data that matters.
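A sketch of what this implies in practice (hypothetical ShellStr class, not from the question): since the str data is fixed at creation, the subclass must be constructed only once the real value is known, passing it to __new__ so the C-level string data actually holds the path:

```python
import os

class ShellStr(str):
    """Hypothetical variant: the real value lives in the str data itself;
    extras like the process handle ride along as attributes."""
    def __new__(cls, value, process_handle=None):
        self = str.__new__(cls, value)  # the str data now holds the real path
        self._ph = process_handle
        return self

s = ShellStr(os.getcwd())
print(os.path.isdir(s))   # True: C code sees the real str data
print(isinstance(s, str)) # True: still a str for backward compatibility
```

This means the wrapper cannot be handed out before the shell command has finished; the object has to be created after wait_for_output-style work, not before.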
I want to use new.instancemethod to assign a function (aFunction) from one class (A) to an instance of another class (B). I'm not sure how I can get aFunction to allow itself to be applied to an instance of B - currently I am getting an error because aFunction is expecting to be executed on an instance of A.
[Note: I can't cast instance to A using the __class__ attribute as the class I'm using (B in my example) doesn't support typecasting]
import new

class A(object):
    def aFunction(self):
        pass

# Class which doesn't support typecasting
class B(object):
    pass

b = B()
b.aFunction = new.instancemethod(A.aFunction, b, B.__class__)
b.aFunction()
This is the error:
TypeError: unbound method aFunction() must be called with A instance as first argument (got B instance instead)
new.instancemethod takes a function. A.aFunction is an unbound method. That's not the same thing. (You may want to try adding a bunch of print statements to display all of the things involved—A.aFunction(), A().aFunction, etc.—and their types, to help understanding.)
I'm assuming you don't know how descriptors work. If you don't want to learn all of the gory details, what's actually going on is that your declaration of aFunction within a class definition creates a non-data descriptor out of the function. This is a magic thing that means that, depending on how you access it, you get an unbound method (A.aFunction) or a bound method (A().aFunction).
If you modify aFunction to be a @staticmethod, this will actually work, because for a static method both A.aFunction and A().aFunction are just functions. (I'm not sure that's guaranteed to be true by the standard, but it's hard to see how else anyone would ever implement it.) But if you want "aFunction to allow itself to be applied to an instance of B", a static method won't help you.
If you actually want to get the underlying function, there are a number of ways to do it; I think this is the clearest as far as helping you understand how descriptors work:
f = object.__getattribute__(A, 'aFunction')
On the other hand, the simplest is probably:
f = A.aFunction.im_func
Then, calling new.instancemethod is how you turn that function into a descriptor that can be called as a regular method for instances of class B:
b.aFunction = new.instancemethod(f, b, B)
Printing out a bunch of data makes things a lot clearer:
import new

class A(object):
    def aFunction(self):
        print self, type(self), type(A)

# Class which doesn't support typecasting
class B(object):
    pass

print A.aFunction, type(A.aFunction)
print A().aFunction, type(A().aFunction)
print A.aFunction.im_func, type(A.aFunction.im_func)
print A().aFunction.im_func, type(A().aFunction.im_func)

A.aFunction(A())
A().aFunction()

f = object.__getattribute__(A, 'aFunction')
b = B()
b.aFunction = new.instancemethod(f, b, B)
b.aFunction()
You'll see something like this:
<unbound method A.aFunction> <type 'instancemethod'>
<bound method A.aFunction of <__main__.A object at 0x108b82d10>> <type 'instancemethod'>
<function aFunction at 0x108b62a28> <type 'function'>
<function aFunction at 0x108b62a28> <type 'function'>
<__main__.A object at 0x108b82d10> <class '__main__.A'> <type 'type'>
<__main__.A object at 0x108b82d10> <class '__main__.A'> <type 'type'>
<__main__.B object at 0x108b82d10> <class '__main__.B'> <type 'type'>
The only thing this doesn't show is the magic that creates the bound and unbound methods. For that, you need to look into A.aFunction.im_func.__get__, and at that point you're better off reading the descriptor howto than trying to dig it apart yourself.
One last thing: As brandizzi pointed out, this is something you almost never want to do. Alternatives include: Write aFunction as a free function instead of a method, so you just call b.aFunction(); factor out a function that does the real work, and have A.aFunction and B.aFunction just call it; have B aggregate an A and forward to it; etc.
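(For what it's worth, in Python 3 the new module is gone and unbound methods no longer exist; the equivalent of new.instancemethod is types.MethodType, which binds a plain function to any instance, sketched here:)

```python
import types

class A:
    def aFunction(self):
        # whatever "self" this is bound to, report its class name
        return type(self).__name__

class B:
    pass

b = B()
# In Python 3, A.aFunction is just a plain function, so it can be
# bound directly to an instance of an unrelated class:
b.aFunction = types.MethodType(A.aFunction, b)
print(b.aFunction())  # "B"
```

Note the binding lives on the single instance b, not on the class B, which is usually what you want for this kind of grafting.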
I already posted an answer to the question you asked. But you shouldn't have had to get down to the level of having to understand new.instancemethod to solve your problem. The only reason that happened is because you asked the wrong question. Let's look at what you should have asked, and why.
In PySide.QtGui, I want the list widget items to have methods to set the font and colors, and they don't seem to.
This is what you really want. There may well be an easy way to do this. And if so, that's the answer you want. Also, by starting off with this, you avoid all the comments about "What do you actually want to do?" or "I doubt this is appropriate for whatever you're trying to do" (which often come with downvotes—or, more importantly, with potential answerers just ignoring your question).
Of course I could just write a function that takes a QListWidgetItem and call that function, instead of making it a method, but that won't work for me because __.
I assume there's a reason that won't work for you. But I can't really think of a good one. Whatever line of code said this:
item.setColor(color)
would instead say this:
setItemColor(item, color)
Very simple.
Even if you need to, e.g., pass around a color-setting delegate with the item bound into it, that's almost as easy with a function as with a method. Instead of this:
delegate = item.setColor
it's:
delegate = lambda color: setItemColor(item, color)
So, if there is a good reason you need this to be a method, that's something you really should explain. And if you're lucky, it'll turn out you were wrong, and there's a much simpler way to do what you want than what you were trying.
The obvious way to do this would be to get PySide to let me specify a class or factory function, so I could write a QListWidgetItem subclass, and every list item I ever deal with would be an instance of that subclass. But I can't find any way to do that.
This seems like something that should be a feature of PySide. So maybe it is, in which case you'd want to know. And if it isn't, and neither you nor any of the commenters or answerers can think of a good reason it would be bad design or hard to implement, you should go file a feature request and it might be in the next version. (Not that this helps you if you need to release code next week against the current version, but it's still worth doing.)
Since I couldn't find a way to do that, I tried to find some way to add my setColor method to the QListWidgetItem class, but couldn't think of anything.
Monkey-patching classes is very simple:
QListWidgetItem.setColor = setItemColor
If you didn't know you could do this, that's fine. If people knew that's what you were trying to do, this would be the first thing they'd suggest. (OK, maybe not many Python programmers know much about monkey-patching, but it's still a lot more than the number who know about descriptors or new.instancemethod.) And again, besides being an answer you'd get faster and with less hassle, it's a better answer.
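The technique in general looks like this (stand-in Widget class for illustration, not actual PySide):

```python
class Widget:
    """Stand-in for a class you don't control, e.g. QListWidgetItem."""
    pass

def set_widget_color(widget, color):
    # a free function that does the work
    widget._color = color

# Monkey-patch: every Widget instance now has a setColor method
Widget.setColor = set_widget_color

w = Widget()
w.setColor("red")
print(w._color)  # "red"
```

Because the function is assigned to the class, the normal descriptor machinery binds it on attribute access, exactly as if it had been defined in the class body.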
Even if you did know this, some extension modules won't let you do that. If you tried and it failed, explain what didn't work instead:
PySide wouldn't let me add the method to the class; when I try monkey-patching, __.
Maybe someone knows why it didn't work. If not, at least they know you tried.
So I have to add it to every instance.
Monkey-patching instances looks like this:
myListWidgetItem.setColor = setItemColor
So again, you'd get a quick answer, and a better one.
But maybe you knew that, and tried it, and it didn't work either:
PySide also wouldn't let me add the method to each instance; when I try, __.
So I tried patching out the __class__ of each instance, to make it point to a custom subclass, because that works in PyQt4, but in PySide it __.
This probably won't get you anything useful, because it's just the way PySide works. But it's worth mentioning that you tried.
So, I decided to create that custom subclass, then copy its methods over, like so, but __.
And all the way down here is where you'd put all the stuff you put in your original question. If it really were necessary to solving your problem, the information would be there. But if there were an easy solution, you wouldn't get a bunch of confused answers from people who were just guessing at how new.instancemethod works.
If possible, you can make aFunction work unbound by using the @staticmethod decorator:
class A(object):
    @staticmethod
    def aFunction(b_instance):  # modified to take the appropriate parameter
        pass

class B(object):
    pass

b = B()
b.aFunction = new.instancemethod(A.aFunction, b, B.__class__)
b.aFunction()
NOTE: you need to modify the method to take the appropriate parameter.
I'm amazed that none of these answers gives what seems the simplest solution: having an existing instance's method x replaced by a method x (same name) from another class (e.g. a subclass) by "grafting" it on.
I had this issue in the very same context, i.e. with PyQt5. Specifically, using a QTreeView class, the problem is that I have subclassed QStandardItem (call it MyItem) for all the tree items involved in the tree structure (first column) and, among other things, adding or removing such an item involves some extra stuff, specifically adding a key to a dictionary or removing the key from this dictionary.
Overriding appendRow (and removeRow, insertRow, takeRow, etc.) presents no problem:
class MyItem( QStandardItem ):
    ...
    def appendRow( self, arg ):
        # NB QStandardItem.appendRow can be called either with an individual item
        # as "arg", or with a list of items
        if isinstance( arg, list ):
            for element in arg:
                assert isinstance( element, MyItem )
                self._on_adding_new_item( element )
        elif isinstance( arg, MyItem ):
            self._on_adding_new_item( arg )
        else:
            raise Exception( f'arg {arg}, type {type(arg)}')
        super().appendRow( arg )
... where _on_adding_new_item is a method which populates the dictionary.
The problem arises when you want to add a "level-0" row, i.e. a row whose parent is the "invisible root" of the QTreeView. Naturally you want the items in this new "level-0" row to cause a dictionary entry to be created for each, but how do you get this invisible root item, of class QStandardItem, to do this?
I tried overriding the model's method invisibleRootItem() to deliver not super().invisibleRootItem(), but instead a new MyItem. This didn't seem to work, probably because QStandardItemModel.invisibleRootItem() does things behind the scenes, e.g. setting the item's model() method to return the QStandardItemModel.
The solution was quite simple:
class MyModel( QStandardItemModel ):
    def __init__( self ):
        super().__init__()
        self.root_item = None
        ...

    ...

    def invisibleRootItem( self ):
        # if this method has already been called we don't need to do our modification
        if self.root_item is not None:
            return self.root_item
        self.root_item = super().invisibleRootItem()
        # now "graft" the method from MyItem class on to the root item instance
        def append_row( row ):
            MyItem.appendRow( self.root_item, row )
        self.root_item.appendRow = append_row
        return self.root_item
... this is not quite enough, however: super().appendRow( arg ) in MyItem.appendRow will then raise an exception when called on the root item, because the root item is a plain QStandardItem, not a MyItem, so the zero-argument super() call fails. Instead, MyItem.appendRow is changed to this:
def appendRow( self, arg ):
    if isinstance( arg, list ):
        for element in arg:
            assert isinstance( element, MyItem )
            self._on_adding_new_item( element )
    elif isinstance( arg, MyItem ):
        self._on_adding_new_item( arg )
    else:
        raise Exception( f'arg {arg}, type {type(arg)}')
    QStandardItem.appendRow( self, arg )
Thanks Rohit Jain - this is the answer:
import new

class A(object):
    @staticmethod
    def aFunction(self):
        pass

# Class which doesn't support typecasting
class B(object):
    pass

b = B()
b.aFunction = new.instancemethod(A.aFunction, b, B.__class__)
b.aFunction()
Is there any way to get the original object from a weakproxy pointing to it? E.g. is there an inverse to weakref.proxy()?
A simplified example (Python 2.7):
import weakref

class C(object):
    def __init__(self, other):
        self.other = weakref.proxy(other)

class Other(object):
    pass

others = [Other() for i in xrange(3)]
my_list = [C(others[i % len(others)]) for i in xrange(10)]
I need to get the list of unique other members from my_list. The way I prefer for such tasks is to use a set:
unique_others = {x.other for x in my_list}
Unfortunately this throws TypeError: unhashable type: 'weakproxy'
I have managed to solve the specific problem in an imperative way (slow and dirty):
unique_others = []
for x in my_list:
    if x.other in unique_others:
        continue
    unique_others.append(x.other)
but the general problem noted in the caption is still active.
What if I have only my_list under control, and the others are buried in some lib and someone may delete them at any time, and I want to prevent the deletion by collecting non-weak refs in a list?
Or I may want to get the repr() of the object itself, not <weakproxy at xx to Other at xx>
I guess there should be something like weakref.unproxy that I'm not aware of.
I know this is an old question but I was looking for an answer recently and came up with something. Like others said, there is no documented way to do it and looking at the implementation of weakproxy type confirms that there is no standard way to achieve this.
My solution uses the fact that all Python objects have a set of standard methods (like __repr__) and that bound method objects contain a reference to the instance (in __self__ attribute).
Therefore, by dereferencing the proxy to get the method object, we can get a strong reference to the proxied object from the method object.
Example:
>>> def func():
... pass
...
>>> weakfunc = weakref.proxy(func)
>>> f = weakfunc.__repr__.__self__
>>> f is func
True
Another nice thing is that it will work for strong references as well:
>>> func.__repr__.__self__ is func
True
So there's no need for type checks if either a proxy or a strong reference could be expected.
Edit:
I just noticed that this doesn't work for proxies of classes. This is not universal then.
Basically there is something like weakref.unproxy, but it's just named weakref.ref(x)().
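That is, if you hold a weakref.ref instead of a proxy, calling it gives you back the real object (or None once it's gone):

```python
import weakref

class Other:
    pass

o = Other()
r = weakref.ref(o)
print(r() is o)   # True: calling the ref dereferences it

del o             # on CPython the referent is collected immediately
print(r())        # None once the referent is gone
```

So the price of dereference support is writing r() instead of r everywhere, plus a None check while you're at it.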
The proxy object is only there for delegation and the implementation is rather shaky...
The == operator doesn't work as you would expect:
>>> weakref.proxy(object) == object
False
>>> weakref.proxy(object) == weakref.proxy(object)
True
>>> weakref.proxy(object).__eq__(object)
True
However, I see that you don't want to call weakref.ref objects all the time. A good working proxy with dereference support would be nice.
But at the moment, this is just not possible. If you look into the Python builtin source code, you see that you would need something like PyWeakref_GetObject, but there is simply no call to this method at all (and it raises PyErr_BadInternalCall if the argument is wrong, so it seems to be an internal function). PyWeakref_GET_OBJECT is used much more, but there is no function in weakref.py that exposes it.
So, sorry to disappoint you, but weakref.proxy is just not what most people would want for their use cases. You can, however, make your own proxy implementation. It isn't too hard: just use weakref.ref internally and override __getattr__, __repr__, etc.
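A minimal sketch of such a proxy (DerefProxy is a made-up name; this is an illustration of the idea, not a complete replacement for weakref.proxy):

```python
import weakref

class DerefProxy:
    """Proxy over a weakref.ref with explicit dereference support."""
    def __init__(self, obj):
        object.__setattr__(self, "_ref", weakref.ref(obj))

    def deref(self):
        target = object.__getattribute__(self, "_ref")()
        if target is None:
            raise ReferenceError("referent has been garbage collected")
        return target

    def __getattr__(self, name):
        # only reached when normal lookup fails, so _ref never recurses here
        return getattr(self.deref(), name)

    def __repr__(self):
        return repr(self.deref())

class Thing:
    value = 42

t = Thing()
p = DerefProxy(t)
print(p.value)         # 42, forwarded to the referent
print(p.deref() is t)  # True: the inverse operation the question asks for
```

A full implementation would also need to forward the operator dunders (== , len, etc.), since those are looked up on the type, not the instance.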
On a little side note about how PyCharm is able to produce the normal repr output (because you mentioned that in a comment):
>>> class A(): pass
>>> a = A()
>>> weakref.proxy(a)
<weakproxy at 0x7fcf7885d470 to A at 0x1410990>
>>> weakref.proxy(a).__repr__()
'<__main__.A object at 0x1410990>'
>>> type( weakref.proxy(a))
<type 'weakproxy'>
As you can see, calling the original __repr__ can really help!
weakref.ref is hashable whereas weakref.proxy is not. The API doesn't say anything about how you can actually get a handle on the object a proxy points to. With a weakref.ref, it's easy: you can just call it. As such, you can roll your own proxy-like class... Here's a very basic attempt:
import weakref

class C(object):
    def __init__(self, obj):
        self.object = weakref.ref(obj)

    def __getattr__(self, key):
        # __getattr__ is only called when normal lookup fails, so "object"
        # itself is found in the instance dict and never reaches this point
        obj = object.__getattribute__(self, "object")()  # dereference the weakref
        return getattr(obj, key)

class Other(object):
    pass

others = [Other() for i in range(3)]
my_list = [C(others[i % len(others)]) for i in range(10)]
unique_list = {x.object for x in my_list}
Of course, now unique_list contains refs, not proxies, which is fundamentally different...
I know that this is an old question, but I've been bitten by it (so, there's no real 'unproxy' in the standard library) and wanted to share my solution...
The way I solved it to get the real instance was just to create a property which returns it (although I suggest using weakref.ref instead of weakref.proxy, as code should really check whether the referent is still alive before accessing it, instead of having to catch an exception whenever any attribute is accessed).
Anyways, if you still must use a proxy, the code to get the real instance is:
import weakref

class MyClass(object):
    @property
    def real_self(self):
        return self

instance = MyClass()
proxied = weakref.proxy(instance)
assert proxied.real_self is instance
I'm trying to get the name of all methods in my class.
When testing how the inspect module works, I extracted one of my methods with obj = MyClass.__dict__['mymethodname'].
But now inspect.ismethod(obj) returns False while inspect.isfunction(obj) returns True, and I don't understand why. Is there some strange way of marking methods as methods that I am not aware of? I thought it was just that it is defined in the class and takes self as its first argument.
You are seeing some effects of the behind-the-scenes machinery of Python.
When you write f = MyClass.__dict__['mymethodname'], you get the raw implementation of "mymethodname", which is a plain function. To call it, you need to pass in an additional parameter: a class instance.
When you write f = MyClass.mymethodname (note the absence of parentheses after mymethodname), you get an unbound method of class MyClass, which is an instance of MethodType that wraps the raw function you obtained above. To call it, you also need to pass in a class instance.
When you write f = MyClass().mymethodname (note that I've created an object of class MyClass before taking its method), you get a bound method of an instance of MyClass. You do not need to pass an additional class instance to it, since one is already stored inside it.
To get wrapped method (bound or unbound) by its name given as a string, use getattr, as noted by gnibbler. For example:
unbound_mth = getattr(MyClass, "mymethodname")
or
bound_mth = getattr(an_instance_of_MyClass, "mymethodname")
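In Python 3 (where unbound methods are gone, so class access also yields a plain function), the distinction between the raw function and a bound method still shows up directly in inspect:

```python
import inspect

class MyClass:
    def mymethodname(self):
        return "hi"

raw = MyClass.__dict__['mymethodname']   # plain function, binding bypassed
bound = MyClass().mymethodname           # bound method via attribute access

print(inspect.isfunction(raw), inspect.ismethod(raw))      # True False
print(inspect.isfunction(bound), inspect.ismethod(bound))  # False True
```

Both objects wrap the same code; only the access path decides whether the descriptor machinery produces a method.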
Use the source
def ismethod(object):
    """Return true if the object is an instance method.

    Instance method objects provide these attributes:
        __doc__         documentation string
        __name__        name with which this method was defined
        __func__        function object containing implementation of method
        __self__        instance to which this method is bound"""
    return isinstance(object, types.MethodType)
The first argument being self is just by convention. By accessing the method by name from the class's dict, you are bypassing the binding, so it appears to be a function rather than a method
If you want to access the method by name use
getattr(MyClass, 'mymethodname')
Well, do you mean that obj.mymethod is a method (with implicitly passed self) while Klass.__dict__['mymethod'] is a function?
Basically Klass.__dict__['mymethod'] is the "raw" function, which can be turned to a method by something called descriptors. This means that every function on a class can be both a normal function and a method, depending on how you access them. This is how the class system works in Python and quite normal.
If you want methods, you can't go through __dict__ (which you never should anyway). To get all methods you should do inspect.getmembers(Klass_or_Instance, inspect.ismethod)
You can read the details here, the explanation about this is under "User-defined methods".
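The descriptor machinery can also be invoked by hand: calling __get__ on the raw function is exactly how the binding happens under the hood, as this sketch shows:

```python
class Klass:
    def mymethod(self):
        return "called"

inst = Klass()
raw = Klass.__dict__['mymethod']   # the plain function stored in the class dict
bound = raw.__get__(inst, Klass)   # descriptor protocol: produces a bound method

print(bound())                 # "called"
print(bound.__self__ is inst)  # True: the instance is stored on the method
```

Attribute access (inst.mymethod) does exactly this __get__ call for you, which is why the same object looks like a function in __dict__ but a method on an instance.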
From a comment made on THC4k's answer, it looks like the OP wants to discriminate between built-in methods and methods defined in pure Python code. User-defined methods are of type types.MethodType, but built-in methods are not.
You can get the various types like so:
import inspect
import types

is_user_defined_method = inspect.ismethod

def is_builtin_method(arg):
    return isinstance(arg, (type(str.find), type('foo'.find)))

def is_user_or_builtin_method(arg):
    MethodType = types.MethodType
    return isinstance(arg, (type(str.find), type('foo'.find), MethodType))

class MyDict(dict):
    def puddle(self): pass

for obj in (MyDict, MyDict()):
    for test_func in (is_user_defined_method, is_builtin_method,
                      is_user_or_builtin_method):
        print [attr
               for attr in dir(obj)
               if test_func(getattr(obj, attr)) and attr.startswith('p')]
which prints:
['puddle']
['pop', 'popitem']
['pop', 'popitem', 'puddle']
['puddle']
['pop', 'popitem']
['pop', 'popitem', 'puddle']
You could use dir to get the names of available methods/attributes/etc., then iterate through them to see which ones are methods, like this:

[mthd for mthd in dir(FooClass) if inspect.ismethod(getattr(myFooInstance, mthd))]

I'm expecting there to be a cleaner solution, but this is something you could use if nobody else comes up with one. I'd prefer not to need an instance of the class just to look the attribute up.
I'd like to serialize Python objects to and from the plist format (this can be done with plistlib). My idea was to write a class PlistObject which wraps other objects:
def __init__(self, anObject):
    self.theObject = anObject
and provides a "write" method:
def write(self, pathOrFile):
    plistlib.writeToPlist(self.theObject.__dict__, pathOrFile)
Now it would be nice if the PlistObject behaved just like wrapped object itself, meaning that all attributes and methods are somehow "forwarded" to the wrapped object. I realize that the methods __getattr__ and __setattr__ can be used for complex attribute operations:
def __getattr__(self, name):
    return self.theObject.__getattr__(name)
But then of course I run into the problem that the constructor now produces an infinite recursion, since also self.theObject = anObject tries to access the wrapped object.
How can I avoid this? If the whole idea seems like a bad one, tell me too.
Unless I'm missing something, this will work just fine:
def __getattr__(self, name):
    return getattr(self.theObject, name)
Edit: for those thinking that the lookup of self.theObject will result in an infinite recursive call to __getattr__, let me show you:
>>> class Test:
...     a = "a"
...     def __init__(self):
...         self.b = "b"
...     def __getattr__(self, name):
...         return 'Custom: %s' % name
...
>>> Test.a
'a'
>>> Test().a
'a'
>>> Test().b
'b'
>>> Test().c
'Custom: c'
__getattr__ is only called as a last resort. Since theObject can be found in __dict__, no issues arise.
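Putting it together, a minimal sketch of the wrapper (with a made-up Payload class standing in for the wrapped object, and the plist writing omitted):

```python
class PlistObject:
    def __init__(self, anObject):
        # plain assignment is fine here: __getattr__ is only consulted when
        # normal lookup fails, and theObject lives in the instance __dict__
        self.theObject = anObject

    def __getattr__(self, name):
        return getattr(self.theObject, name)

class Payload:
    def __init__(self):
        self.x = 1

    def double(self):
        return self.x * 2

p = PlistObject(Payload())
print(p.x)         # 1: attribute forwarded to the wrapped object
print(p.double())  # 2: methods forward too, bound to the Payload instance
```

The recursion danger only appears if you override __getattribute__ (or __setattr__) as well, since those intercept every access, not just failed ones.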
But then of course I run into the problem that the constructor now produces an infinite recursion, since also self.theObject = anObject tries to access the wrapped object.
That's why the manual suggests that you do this for all "real" attribute accesses.
theobj = object.__getattribute__(self, "theObject")
I'm glad to see others have been able to help you with the recursive call to __getattr__. Since you've asked for comments on the general approach of serializing to plist, I just wanted to chime in with a few thoughts.
Python's plist implementation handles basic types only, and provides no extension mechanism for you to instruct it on serializing/deserializing complex types. If you define a custom class, for example, writePlist won't be able to help, as you've discovered since you're passing the instance's __dict__ for serialization.
This has a couple of implications:
You won't be able to use this to serialize any object that contains other objects of non-basic type without converting them to a __dict__, and so on recursively for the entire object graph.
If you roll your own object-graph walker to serialize all non-basic objects that can be reached, you'll have to worry about cycles in the graph, where one object holds another in a property, which in turn holds a reference back to the first, and so on.
Given that, you may wish to look at pickle instead, as it can handle all of these and more. If you need the plist format for other reasons, and you're sure you can stick to "simple" object dicts, then you may wish to just use a simple function... trying to have PlistObject mock every possible function of the contained object is an onion with potentially many layers, as you need to handle all the possibilities of the wrapped instance.
Something as simple as this may be more pythonic, and keep the usability of the wrapped object simpler by not wrapping it in the first place:
def to_plist(obj, f_handle):
    writePlist(obj.__dict__, f_handle)
I know that doesn't seem very sexy, but it is a lot more maintainable in my opinion than a wrapper given the severe limits of the plist format, and certainly better than artificially forcing all objects in your application to inherit from a common base class when there's nothing in your business domain that actually indicates those disparate objects are related.