Suppose I have a simple class like this:
class Class1(object):
    def __init__(self, property):
        self.property = property

    def method1(self):
        pass
An instance of Class1 can then be used by another class:
class Class2(object):
    def __init__(self, instance_of_class1, other_property):
        self.other_property = other_property
        self.instance_of_class1 = instance_of_class1

    def method1(self):
        # A method that uses self.instance_of_class1.property and self.other_property
        pass
This works. However, I have the feeling that this is not a very common approach, and maybe there are alternatives. Having said that, I tried to refactor my classes to pass simpler objects to Class2, but I found that passing the whole instance as an argument actually simplifies the code significantly. To use this, I have to do the following:
instance_of_class1 = Class1(property=value)
instance_of_class2 = Class2(instance_of_class1, other_property=other_value)
instance_of_class2.method1()
This is very similar to the way some R packages look like. Is there a more "Pythonic" alternative?
There's nothing wrong with doing that, though in this particular example it looks like you could just as easily do
instance_of_class2 = Class2(instance_of_class1.property, other_property=other_value)
But if you find you need to use other properties/methods of Class1 inside of Class2, just go ahead and pass the whole Class1 instance into Class2. This kind of approach is used all the time in Python and OOP in general. Many common design patterns call for a class to take an instance (or several instances) of other classes: Proxy, Facade, Adapter, etc.
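For illustration, here is a minimal sketch of that kind of composition; the Engine and Car names are invented for this example:

class Engine(object):
    def start(self):
        print("engine started")

class Car(object):
    def __init__(self, engine):
        # hold the whole Engine instance, not just one of its values
        self.engine = engine

    def drive(self):
        # delegate the work to the wrapped instance
        self.engine.start()

car = Car(Engine())
car.drive()  # engine started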
Python 3.6
I just found myself programming this type of inheritance structure (below), where a subclass calls methods and attributes of an object the parent expects to have.
In my use case, I'm placing code in class A that would otherwise be ugly in class B.
It's almost like a reverse inheritance call or something, which doesn't seem like a good idea... (PyCharm doesn't seem to like it.)
Can someone please explain what is best practice in this scenario?
Thanks!
class A(object):
    def call_class_c_method(self):
        self.class_c.do_something(self)

class B(A):
    def __init__(self, class_c):
        self.class_c = class_c
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

a = A
c = C
b = B(c)
outputs:
I'm doing something super() useful
There is nothing wrong with implementing a small feature in class A and using it as a base class for B. This pattern is known as a mixin in Python. It makes a lot of sense if you want to re-use A or want to compose B from many such optional features.
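As a rough sketch of composing a class from several such optional features (the mixin names here are invented):

import json

class JSONMixin(object):
    # optional feature: serialize instance attributes to JSON
    def to_json(self):
        return json.dumps(self.__dict__)

class ReprMixin(object):
    # optional feature: a generic debug representation
    def __repr__(self):
        return "%s(%r)" % (type(self).__name__, self.__dict__)

class Model(JSONMixin, ReprMixin):
    def __init__(self, value):
        self.value = value

m = Model(42)
print(m.to_json())  # {"value": 42}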
But make sure your mixin is complete in itself!
The original implementation of class A depends on the derived class to set a member variable. This is a particularly ugly approach. Better to define class_c as a member of A, where it is used:
class A(object):
    def __init__(self, class_c):
        self.class_c = class_c

    def call_class_c_method(self):
        self.class_c.do_something()

class B(A):
    def __init__(self, class_c):
        super().__init__(class_c)
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

c = C()
b = B(c)
I find that reducing things to abstract letters in cases like this makes it harder for me to reason about whether the interaction makes sense.
In effect, you're asking whether it is reasonable for a class (A) to depend on a member that conforms to a given interface (C). The answer is that there are cases where it clearly does.
As an example, consider the model-view-controller pattern in web application design.
You might well have something like
class Controller:
    def get(self, request):
        return self.view.render(self, request)
or similar. Then elsewhere you'd have some code that finds the view and populates self.view in the controller. Typical ways of doing that include routing lookups or having a specific view associated with each controller. While not Python, the Rails web framework does a lot of this.
When we have specific examples, it's a lot easier to reason about whether the abstractions make sense.
In the above example, the controller interface depends on having access to some instance of the view interface to do its work. The controller instance encapsulates an instance that implements that view interface.
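A minimal sketch of that wiring, with an invented View class and render signature:

class View:
    def render(self, controller, request):
        # a real view would build a template from the request
        return "<html>rendered %s</html>" % request

class Controller:
    def __init__(self, view):
        # in a framework, a routing layer would inject this instead
        self.view = view

    def get(self, request):
        return self.view.render(self, request)

controller = Controller(View())
print(controller.get("/home"))  # <html>rendered /home</html>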
Here are some things to consider when evaluating such designs:
Can you clearly articulate the boundaries of each interface/class? That is, can you explain what the controller's job is and what the view's job is?
Does your decision to encapsulate an instance agree with those scopes?
Do the interface and class scopes seem reasonable when you think about future extensibility and about minimizing the scope of code changes?
Not sure if this is a dupe or not. Here it goes.
I need to write some Python code that looks like:
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
and then have a test engine class that scans all these test classes, loads all the test_case functions, and for each invokes get_categories to find out if the test belongs to the group of interest for the specific run.
The problem is that get_categories is not seen as an attribute of test_case, and even if I manually assign it
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
        test_case.get_categories = get_categories
this is only going to happen when test_case first runs, too late for me.
The reason why this function can’t go on the class (or at least why I want it to be also available at the per-function level) is that a TestClass can have multiple test cases.
Since this is an already existing testing infrastructure, and the categories mechanism works (other than the categories-on-function scenario, which is of lesser importance), a rewrite is not in the plans.
Language tricks dearly appreciated.
Nested functions don't become attributes any more than any other assignment.
I suspect your test infrastructure is doing some severely weird things if this isn't supported (and uses old-style classes!), but you could just do this:
class TestClass:
    def test_case(self):
        ...

    def _get_categories(self):
        return [...]

    test_case.get_categories = _get_categories
    del _get_categories
Class bodies are executable code like any other block.
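A small illustration of that point (the Demo class is made up):

class Demo:
    # both statements run once, when the class statement is executed
    print("defining Demo")
    value = 2 * 21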
What you need is nested classes. Functions aren't made to do what you are trying to do, so you have to move up a notch. Function attributes are mainly used as markup, whereas classes can have anything you want.
class TestClass(object):
    class TestCase(object):
        @classmethod
        def get_categories(cls):
            return ['abc', 'efg']
Note that I used @classmethod so that you can use it without instantiating TestCase(); modify this if you prefer to do test_case = TestCase().
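A hypothetical engine loop over such nested classes might look like this; the use of inspect.getmembers and the filtering are my assumptions about the infrastructure:

import inspect

# find nested classes on TestClass and ask each for its categories
for name, obj in inspect.getmembers(TestClass, inspect.isclass):
    if hasattr(obj, 'get_categories'):
        print(name, obj.get_categories())  # TestCase ['abc', 'efg']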
Apologies if this doesn't make sense, I'm not much of an experienced programmer.
Consider the following code:
import mymodule

class MyClass:
    def __init__(self):
        self.classInstance = mymodule.classInstance()
and then ......
from mymodule import classInstance

class MyClass(classInstance):
    def __init__(self):
        pass
If I just wanted to use the one classInstance in MyClass, is it ok to import the specific class from the module and have MyClass inherit this class ?
Are there any best practices, or things I should be thinking about when deciding between these two methods ?
Many thanks
Allow me to propose a different example.
Imagine you have the class Vector.
Now you want a class Point. Point can be defined with a vector, but maybe it has extra functionality that Vector doesn't have.
In this case you derive Point from Vector.
Now you need a Line class.
A Line is not a specialisation of any of the above classes so probably you don't want to derive it from any of them.
However, Line uses points. In this case you might want to start your Line class this way:
class Line(object):
    def __init__(self):
        self.point1 = Point()
        self.point2 = Point()
Where Point will be something like this:
class Point(Vector):
    def __init__(self):
        Vector.__init__(self)
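For the snippet to actually run, a Vector class has to exist first; a minimal stub could look like this:

class Vector(object):
    def __init__(self):
        # placeholder components; a real Vector would take coordinates
        self.x = 0.0
        self.y = 0.0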
So the answer really is: it depends on what you need to do. But when you have a clear idea of what you are coding, then choosing between sub-classing or not becomes obvious.
I hope it helped.
You make it sound like you're trying to "choose between" those two approaches, but they do completely different things. The second one defines a class that inherits from a class (confusingly) called classInstance. The first one defines a class called MyClass (not inheriting from anything except the base object type) that has an instance variable called self.classInstance, which happens to be set to an instance of the classInstance class.
Why are you naming your class classInstance?
I have that strange feeling this is an easy question.
I want to be able to "alias" a class type so I can swap out the implementation at a package level. I don't want to have X amount of import X as bah statements scattered throughout my code...
In other words, how can I do something like the below:
class BaseClass(object):
    def __init__(self): pass
    def mymethod(self): pass
    def mymethod1(self): pass
    def mymethod2(self): pass

class Implementation(BaseClass):
    def __init__(self):
        BaseClass.__init__(self)
Separate package...
# I don't want these scattered throughout the modules;
# I want them in one place where I can change one line and swap implementations.
# (I tried putting it in the package __init__ but no luck.)
import Implementation as BaseClassProxy

class Client(BaseClassProxy):
    def __init__(self):
        BaseClassProxy.__init__(self)
In any file (where this fits best is up to you, probably wherever Implementation was defined):
BaseClassProxy = Implementation
Since classes are first-class objects in Python, you can pretty much bind them to any variable and use them the same way. In this case, you can make an alias for the class.
Just put something like
BaseClassProxy = Implementation
in the module, then do:
from module import BaseClassProxy
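Putting it together, a hypothetical layout that keeps the alias in one place might be:

# implementations.py (hypothetical module)
class BaseClass(object):
    def mymethod(self):
        raise NotImplementedError

class Implementation(BaseClass):
    def mymethod(self):
        print("concrete implementation")

# the single line to edit when swapping implementations
BaseClassProxy = Implementation

# client.py (hypothetical module)
# from implementations import BaseClassProxy
#
# class Client(BaseClassProxy):
#     pass
#
# Client().mymethod()  # concrete implementation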
Here's my idea: Start with a simple object:
class dynamicObject(object):
    pass
And to be able to add pre-written methods to it on the fly:
def someMethod(self):
    pass
So that I can do this:
someObject = dynamicObject()
someObject._someMethod = someMethod
someObject._someMethod()
Problem is, it wants me to specify the self part of _someMethod() so that it looks like this:
someObject._someMethod(someObject)
This seems kind of odd since isn't self implied when a method is "attached" to an object?
I'm new to the Python way of thinking and am trying to get away from the thought processes of languages like C#, so the idea here is to be able to create an object for validation by picking and choosing which validation methods I want to add to it, rather than making some kind of object hierarchy. I figured that Python's "self" idea would work in my favor, as I thought the object would implicitly know to send itself into the method attached to it.
One thing to note: the method is NOT attached to the object in any way (they are in completely different files), so maybe that is the issue? Maybe by defining the method on its own, self is actually the method in question and therefore can't be implied as the object?
Although below I've tried to answer the literal question, I think Muhammad Alkarouri's answer better addresses how the problem should actually be solved.
Add the method to the class, dynamicObject, rather than the object, someObject:
class dynamicObject(object):
    pass

def someMethod(self):
    print('Hi there!')

someObject = dynamicObject()
dynamicObject.someMethod = someMethod
someObject.someMethod()
# Hi there!
When you say someObject.someMethod = someMethod, someObject.__dict__ gets the key-value pair ('someMethod', someMethod).
When you say dynamicObject.someMethod = someMethod, someMethod is added to dynamicObject's __dict__ instead. You need someMethod defined in the class for someObject.someMethod to act like a method call. For more information about this, see Raymond Hettinger's essay on descriptors -- after all, a method is nothing more than a descriptor! -- and Shalabh Chaturvedi's essay on attribute lookup.
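You can watch the descriptor protocol perform that binding yourself; this assumes the dynamicObject and someObject definitions from the snippet above:

# functions are descriptors: __get__ produces the bound method
bound = dynamicObject.__dict__['someMethod'].__get__(someObject, dynamicObject)
bound()  # Hi there!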
There is an alternative way:
import types
someObject.someMethod = types.MethodType(someMethod, someObject)
but this is really an abomination since you are defining 'someMethod' as a key in someObject.__dict__, which is not the right place for methods. In fact, you do not get a class method at all, just a curried function. This is more than a mere technicality. Subclasses of dynamicObject would fail to inherit the someMethod function.
To achieve what you want (create an object for validation by picking and choosing what validation methods I want to add to it), a better way is:
class DynamicObject(object):
    def __init__(self, verify_method=None):
        self.verifier = verify_method

    def verify(self):
        self.verifier(self)

def verify1(self):
    print("verify1")

def verify2(self):
    print("verify2")

obj1 = DynamicObject()
obj1.verifier = verify1
obj2 = DynamicObject(verify2)
# equivalent to
# obj2 = DynamicObject()
# obj2.verifier = verify2

obj1.verify()
obj2.verify()
Why don't you use setattr? I find this way much more explicit.
class dynamicObject(object):
    pass

def method():
    print("Hi")

someObject = dynamicObject()
setattr(someObject, "method", method)
someObject.method()
Sometimes it is annoying to need to write a regular function and add it afterwards when the method is very simple. In that case, lambdas can come to the rescue:
class Square:
    pass

Square.getX = lambda self: self.x
Square.getY = lambda self: self.y
Square.calculateArea = lambda self: self.getX() * self.getY()
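A quick usage example; note that x and y are assumed to be set on the instance before the lambdas are called:

s = Square()
s.x, s.y = 3, 4
print(s.calculateArea())  # 12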
Hope this helps.
If you just want to wrap another class, and not have to deal with assigning a new method to any instance, you can just make the method in question a staticmethod of the class:
class wrapperClass(object):
    @staticmethod
    def foo():
        print("yay!")

obj = wrapperClass()
obj.foo()  # yay!
And you can then give any other class the .foo method with multiple inheritance.
class fooDict(dict, wrapperClass):
    """Normal dict with foo method"""

foo_dict = fooDict()
foo_dict.setdefault('A', 10)
print(foo_dict)  # {'A': 10}
foo_dict.foo()   # yay!