I'm refactoring some code, and I thought I could use a bit of Reflection! So, I have this for now:
def f(self, clazz):
    [...]
    boolean = False
    if hasattr(clazz_instance, 'some_attribute'):
        setattr(clazz_instance, 'some_attribute', True)
        boolean = True
    if boolean:
        result = getattr(clazz_instance, 'another_method')(None, request=request)
        return result['objects']
    sorted_objects = getattr(clazz_instance, 'One_more_method')(request)
    result = getattr(clazz_instance, 'another_method')(sorted_objects, request=request)
    return [...]
My question is about the strings I used to indicate which method I'm looking for on clazz_instance. I'd like to know if there's a better way to do what I did, in a more dynamic fashion. I mean, instead of hard-coding the method names as strings the way I did, it would be really nice if I could look those methods up dynamically.
Could you give some nice ideas? How would you do it?
Thanks in advance!!!
An instance's method is little more than a function object stored in the class's __dict__. That said, your way of finding them is correct, except for one caveat: the class may indeed have an attribute matching your string, but it might not be a function; it could be some other type entirely.
If you depend on this, I recommend refactoring the method-lookup code into a helper:
import types

def instance_has_method(instance, name):
    try:
        attr = getattr(instance, name)
        # Methods retrieved from an instance are bound methods (types.MethodType),
        # not plain functions (types.FunctionType)
        return isinstance(attr, types.MethodType)
    except AttributeError:
        return False
Once you have this helper, your code becomes more concise, because now you can be sure the attribute is indeed a method that can be called.
The code above checks whether the attribute is a bound method. If you want something wider, you can check whether it's callable instead: return hasattr(attr, '__call__') (or simply return callable(attr)).
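To see why the bound-method distinction matters, here is a small self-contained check (the Greeter class is just an invented example):

```python
import types

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
# Attributes looked up on an instance come back as bound methods:
print(isinstance(g.hello, types.MethodType))    # True
print(isinstance(g.hello, types.FunctionType))  # False
# The plain function object lives in the class's __dict__:
print(isinstance(Greeter.__dict__['hello'], types.FunctionType))  # True
```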
In Python, this is basically how you check for attributes on classes. I don't think there's anything wrong with your approach, nor is there a cleverer way to do reflection.
Hope this helps!
It's not easy to understand what you are trying to achieve.
getattr(clazz_instance, 'One_more_method')(request) is just a fancy way to say clazz_instance.One_more_method(request).
Use of getattr is reasonable when you don't know the method name in advance, that is, if your method name is variable.
Also, setattr(clazz_instance, 'some_attribute', True) turns the alleged some_attribute into a scalar attribute of that instance, with the value True. This is probably not what you planned to have.
My best effort to understand your code is this:
def f(clazz):
    # if clazz is capable of better_processing, use it:
    if hasattr(clazz, 'better_processing'):
        return clazz.better_processing(...)
    else:
        return clazz.basic_processing(...)
BTW, what getattr gives you is a callable method that you can use directly, like this:
method = getattr(clazz, 'better_method', clazz.default_method)
# if 'better_method' was available, getattr returned it;
# if not, the default method was returned.
return method(clazz_instance, some_data...)
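A runnable sketch of this getattr-with-default pattern (the class and method names are invented for illustration):

```python
class Processor:
    def default_method(self, data):
        return sorted(data)

class FancyProcessor(Processor):
    def better_method(self, data):
        return sorted(data, reverse=True)

def process(obj, data):
    # Fall back to default_method when better_method is absent
    method = getattr(obj, 'better_method', obj.default_method)
    return method(data)

print(process(Processor(), [3, 1, 2]))       # [1, 2, 3]
print(process(FancyProcessor(), [3, 1, 2]))  # [3, 2, 1]
```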
I have tried looking in the documentation and searching Google, but I am unable to find out the significance of the [clazz] at the end of the method. Could someone help me understand what the [clazz] at the end means? Thanks.
def get_context_setter(context, clazz):
    return {
        int: context.setFieldToInt,
        datetime: context.setFieldToDatetime
    }[clazz]
setFieldToInt and setFieldToDatetime are methods of the context object.
This function returns one of two things. It returns either context.setFieldToInt or context.setFieldToDatetime. It does so by using a dictionary as what would be a switch statement in other programming languages.
It checks whether clazz is a reference to the class int or a reference to the class datetime, and then returns the appropriate method.
It's identical to this code:
def get_context_setter(context, clazz):
    lookup_table = {int: context.setFieldToInt,
                    datetime: context.setFieldToDatetime}
    context_function = lookup_table[clazz]  # figure out which to return
    return context_function
Using a dict instead of a switch statement is pretty popular, see Replacements for switch statement in Python? .
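Here is a self-contained sketch of the same dict-as-switch idea (the Context class and its snake_case method names are mine, invented for the example):

```python
from datetime import datetime

class Context:
    def set_field_to_int(self, value):
        return ('int', int(value))
    def set_field_to_datetime(self, value):
        return ('datetime', value)

def get_context_setter(context, clazz):
    # The class object itself is the dictionary key; [clazz] indexes the dict
    return {
        int: context.set_field_to_int,
        datetime: context.set_field_to_datetime,
    }[clazz]

ctx = Context()
setter = get_context_setter(ctx, int)
print(setter('42'))  # ('int', 42)
```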
More briefly.
The code presented is expecting the class of some object as a parameter poorly named as clazz.
It's then using that class as a dictionary key.
They're essentially trying to accept two different types and call a method on the object type.
class is a keyword in Python.
The author of the code you show chose to use a strange spelling instead of a longer snake_case parameter name like obj_class.
The parameters really should have been named obj and obj_class, or instance and instance_class.
Even better, the class really need not be a separate parameter.
I have an API in Python which can return an object, or None if no object is found. I want to avoid run-time exceptions/crashes, so I want to force the users of my API to do an is not None test.
For example:
x = getObject(...)
if x is not None:
    print x.getName()  # should be o.k.

y = getObject(...)
print y.getName()  # print an error to the log
How can I achieve that?
In comparable C++ code, I can add a flag that is checked when I call getName(); the flag is set only upon comparing the object to NULL.
In Python, however, I am unable to overload the is operator. Is there any other way I can achieve that functionality in Python?
You cannot force the use of if x is not None because the is operator cannot be overridden: it compares the identities (the id() values) of the two objects directly, and you have no way of controlling that behavior.
However, you can force the use of if x != None or if not x == None by overriding the __eq__ and __ne__ methods of your class, respectively.
This is not good practice, however. As #Kevin has noted in the comments, is is the preferred operator to use when comparing to None.
What I would do is write clear and organized documentation for this API, and then clearly warn users that the instantiation could fail and return None. Then, gently nudge users towards good practices by providing an example with the built-in getattr function or an example with the is not None check.
As was already said, you can't override the behavior of is.
To do what you want, you can create a surrogate object that has a getName() method. To let the user check whether the call failed, you can have the object evaluate to False. (This is a standard practice, and I think it is better than making the object compare equal to None via __eq__.) To do this, override __nonzero__() (__bool__() in Python 3) so it returns False.
Example:
class GetObjectFailed(object):
    def __nonzero__(self):  # __bool__ in Python 3
        return False
    def getName(self):
        return "An error has occurred"  # You could put a more specific error message here...
x = getObject(...)
print x.getName()  # prints "An error has occurred"
if x:
    # This is the recommended way of doing things
    # Do something with the object
    x.process()
if x is not None:
    # This will not detect the failure; the surrogate is not None
    x.process()
if x != None:
    # This is possible but not recommended
    x.process()
I apologize in advance if this is a repost.
I'm currently using the following self-invented check to verify that a class instance has an attribute (or method) before trying to access it:
if 'methodName' in dir(myClassInstance): result = myClassInstance.methodName()
I wonder if there is a more common, standardized way of doing the same.
Use hasattr. It returns True if a given object has a given name as an attribute, else False:
if hasattr(myClassInstance, 'methodName'):
    ...  # Whatever you want to do as a result.
Use hasattr(myClassInstance, 'methodName').
Another possibility is to just try accessing it and handle the exception if it's not there:
try:
    myClassInstance.methodName()
except AttributeError:
    pass  # do what you need to do if the method isn't there
How you'll want to handle this depends on why you're doing the check, how common it is for the object to not have that attribute, and what you want to do if it's not there.
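Both styles side by side, as a runnable sketch (the Widget class and method names are made up for the example):

```python
class Widget:
    def render(self):
        return "rendered"

w = Widget()

# LBYL ("look before you leap"): check first with hasattr
if hasattr(w, 'render'):
    print(w.render())  # rendered

# EAFP ("easier to ask forgiveness than permission"): just try it
try:
    w.missingMethod()
except AttributeError:
    print("method isn't there")  # this branch runs
```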
This is kind of an experiment. I'm interested in an API that supports both of these syntaxes:
obj.thing
--> returns default value
obj.thing(2, 'a')
--> returns value derived from *args and **kwargs
"thing" is the same object in both cases; I'd like the calling of thing to be optional (or implicit, if there is no () after it).
I tried overriding __repr__, but that's just the visual representation of the object itself; what is actually returned is an instance of the containing object (here, obj). So, no good.
I'm thinking there would be an attribute on an object that is a callable (I don't care whether it's an instance, a def, or just __call__ on the object) with enough default values:
class CallableDefault(object):
    def __call__(self, num=3, letter="z"):
        return letter * num

class DumbObject(object):
    foo = CallableDefault()

obj = DumbObject()
So, ideally, doing obj.foo alone would return "zzz", but one could also do obj.foo(7, 'a') and get 'aaaaaaa'.
I'm thinking decorators might be the way to do this, but I'm not great with decorators. One could override __getattr__ on the containing class, but that would mean this only works inside a containing class that supports the feature.
What you describe could work, but notice that now the value of the attribute is constrained to be an object of your CallableDefault class. This probably won't be very useful.
I strongly suggest that you don't try to do this. For one thing, you're spending a lot of time trying to trick Python into doing something it doesn't want to do. For another, the users of your API will be confused, because it acts differently from every other Python code they've ever seen.
Write a Python API that works naturally in Python.
What happens when you do either
obj.thing
or
obj.thing(2, 'a')
is that Python goes looking for thing on obj; once it has thing, it either returns it (first case above) or calls it with the parameters (second case). The critical point is that the call does not happen until after the attribute is retrieved, so the containing class has no way of knowing whether the thing it returns will be called or not.
You could add a __call__ method to every type you might use this way, but that way lies madness.
Update
Well, as long as you're comfortable with insanity, you could try something like this:
class CallableStr(str):
    def __call__(self, num, letter):
        return num * letter

class CallableInt(int):
    def __call__(self, num, pow):
        return num ** pow

class Tester(object):
    wierd = CallableStr('zzz')
    big = CallableInt(3)

t = Tester()
print repr(t.wierd)
print repr(t.wierd(7, 'a'))
print repr(t.big)
print repr(t.big(2, 16))
One nice thing about this magic object is that it becomes normal as soon as you use it in a calculation (or call):
print type(t.big), type(t.big + 3), t.big + 3
print type(t.big), type(t.big(2, 3) + 9), t.big(2, 3) + 9
which results in
<class '__main__.CallableInt'> <type 'int'> 6
<class '__main__.CallableInt'> <type 'int'> 17
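For reference, the same trick in Python 3 syntax (names kept from the answer above; output values verified for these inputs):

```python
class CallableStr(str):
    def __call__(self, num, letter):
        return num * letter

class CallableInt(int):
    def __call__(self, num, pow):
        return num ** pow

class Tester:
    wierd = CallableStr('zzz')
    big = CallableInt(3)

t = Tester()
print(repr(t.wierd))          # 'zzz'
print(t.wierd(7, 'a'))        # aaaaaaa
print(t.big(2, 16))           # 65536
print(type(t.big + 3))        # <class 'int'>  (arithmetic returns a plain int)
```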
First, if you guys think the way I'm trying to do things is not Pythonic, feel free to offer alternative suggestions.
I have an object whose functionality needs to change based on outside events. What I originally did was create a new object that inherits from the original (let's call it OrigObject()) and overrides the methods that change (let's call the new object NewObject()). Then I modified both constructors so that they can take in a complete object of the other type and fill in their own values based on the passed-in object. Then, when I need to change functionality, I just execute myObject = NewObject(myObject).
I'm starting to see several problems with that approach now. First of all, other places that reference the object need to be updated to reference the new type as well (the above statement, for example, would only update the local myObject variable). But that's not hard to update, only annoying part is remembering to update it in other places each time I change the object in order to prevent weird program behavior.
Second, I'm noticing scenarios where I need a single method from NewObject(), but the other methods from OrigObject(), and I need to be able to switch the functionality on the fly. It doesn't seem like the best solution anymore to be using inheritance, where I'd need to make M*N different classes (where M is the number of methods the class has that can change, and N is the number of variations for each method) that inherit from OrigObject().
I was thinking of using attribute remapping instead, but I seem to be running into issues with it. For example, say I have something like this:
def hybrid_type2(someobj, a):
    # do something else
    ...

class OrigObject(object):
    ...
    def hybrid_fun(self, a):
        # do something
        ...
    def switch(self, type):
        if type == 1:
            self.hybrid_fun = OrigObject.hybrid_fun
        else:
            self.hybrid_fun = hybrid_type2
The problem is, after doing this and trying to call hybrid_fun after switching it, I get an error saying that hybrid_type2() takes exactly 2 arguments but I'm passing it one. The object no longer passes itself as the first argument to the new function the way it does with its own methods. Is there anything I can do to remedy that?
I tried including hybrid_type2 inside the class as well and then using self.hybrid_fun = self.hybrid_type2 works, but using self.hybrid_fun = OrigObject.hybrid_fun causes a similar error (complaining that the first argument should be of type OrigObject). I know I can instead define OrigObject.hybrid_fun() logic inside OrigObject.hybrid_type1() so I can revert it back the same way I'm setting it (relative to the instance, rather than relative to the class to avoid having object not be the first argument). But I wanted to ask here if there is a cleaner approach I'm not seeing here? Thanks
EDIT:
Thanks guys, I've given points for several of the solutions that worked well. I essentially ended up using a Strategy pattern using types.MethodType(), I've accepted the answer that explained how to do the Strategy pattern in python (the Wikipedia article was more general, and the use of interfaces is not needed in Python).
Use the types module to create an instance method for a particular instance.
e.g.
import types

def strategyA(possible_self):
    pass

instance = OrigObject()
instance.strategy = types.MethodType(strategyA, instance)
instance.strategy()
Note that this only affects this specific instance; no other instances will be affected.
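A self-contained version of this pattern (the Document class and strategy names are invented for the sketch):

```python
import types

class Document:
    def __init__(self, text):
        self.text = text

def upper_strategy(self):
    return self.text.upper()

def lower_strategy(self):
    return self.text.lower()

doc = Document("Hello")
# Bind the function to this one instance as a method
doc.render = types.MethodType(upper_strategy, doc)
print(doc.render())  # HELLO

# Swap the strategy at runtime; only this instance is affected
doc.render = types.MethodType(lower_strategy, doc)
print(doc.render())  # hello
```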
You want the Strategy Pattern.
Read about descriptors in Python. The following code should work:
else:
    self.hybrid_fun = hybrid_type2.__get__(self, OrigObject)
What about defining it like so:
def hybrid_type2(someobj, a):
    # do something else
    ...

def hybrid_type1(someobj, a):
    # do something
    ...

class OrigObject(object):
    def __init__(self):
        ...
        self.run_the_fun = hybrid_type1
        ...
    def hybrid_fun(self, a):
        self.run_the_fun(self, a)
    def type_switch(self, type):
        if type == 1:
            self.run_the_fun = hybrid_type1
        else:
            self.run_the_fun = hybrid_type2
You can change class at runtime:
class OrigObject(object):
    ...
    def hybrid_fun(self, a):
        # do something
        ...
    def switch(self):
        self.__class__ = DerivedObject

class DerivedObject(OrigObject):
    def hybrid_fun(self, a):
        # do the other thing
        ...
    def switch(self):
        self.__class__ = OrigObject
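A runnable demonstration of this class-swapping approach (the method bodies are invented placeholders; the structure follows the snippet above):

```python
class OrigObject:
    def hybrid_fun(self, a):
        return "original: %s" % a
    def switch(self):
        self.__class__ = DerivedObject

class DerivedObject(OrigObject):
    def hybrid_fun(self, a):
        return "derived: %s" % a
    def switch(self):
        self.__class__ = OrigObject

obj = OrigObject()
print(obj.hybrid_fun(1))  # original: 1
obj.switch()              # reassigning __class__ changes method resolution
print(obj.hybrid_fun(1))  # derived: 1
obj.switch()
print(obj.hybrid_fun(1))  # original: 1
```

Note that this swaps every overridden method at once, which is exactly the M*N explosion concern raised in the question; the per-method strategy approaches above are finer-grained.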