Suppose that I have a module like the one below:
class B:
    def ...
    def ...

class A:
    b: B
    def ...
    def ...
I use class B only as a member variable of class A.
When I try to abstract this module for my business logic, what should I do?
1. One big interface, with abstract methods for both class A and class B.
2. Two interfaces, with abstract methods for class A and class B individually.
3. Both of the above are wrong; there is another way.
Both 1 and 2 are correct approaches, but it completely depends on your application.
I think two interfaces, with abstract methods for class A and class B individually, is the right approach when your two classes do separate work and are completely different from each other.
But, as your code shows, class A holds an instance of class B. If you create a single interface for class A, it can also expose the behaviour that comes from class B. So this approach is good as well, and it keeps your code shorter.
I hope this helps you make your decision. Let me know if any other clarification is required.
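For illustration, here is a minimal sketch of option 2 using Python's abc module; the interface names and methods (BInterface, do_b_things, and so on) are made up for the example:
from abc import ABC, abstractmethod

class BInterface(ABC):
    # abstract contract for the behaviour B provides
    @abstractmethod
    def do_b_things(self): ...

class AInterface(ABC):
    # abstract contract for A, which owns something satisfying BInterface
    @abstractmethod
    def do_a_things(self): ...

class B(BInterface):
    def do_b_things(self):
        print("B at work")

class A(AInterface):
    def __init__(self, b: BInterface):
        self.b = b  # composition: A depends only on the abstract contract

    def do_a_things(self):
        self.b.do_b_things()

A(B()).do_a_things()  # prints: B at work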
Related
Having two Python instances, say a is an instance of class A and b is an instance of class B, is there an easy way to find a common class for both objects? And since everything in Python inherits from object, I would like to find the most 'specialized' common class.
I tried the following code, which seems to work:
def common_class(a, b):
    A = a.__class__
    B = b.__class__
    # mro() is ordered from most derived to most general, so the
    # first class shared by both is the most specialized common one
    for i in A.mro():
        if i in B.mro():
            return i
But I was wondering if there exists an easier way to do it.
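As a quick sanity check of the function above (the class names here are hypothetical):
class Animal: pass
class Cat(Animal): pass
class Dog(Animal): pass

print(common_class(Cat(), Dog()))     # <class '__main__.Animal'>
print(common_class(Cat(), Cat()))     # <class '__main__.Cat'>
print(common_class(Cat(), object()))  # <class 'object'>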
I have implemented inheritance in Python and the code works. I have also done the same thing in another way, and I want to know where inheritance has an advantage over my second way of doing the same thing.
class A:
    def __init__(self, a, b):
        self._a = a
        self.b = b

    def addition(self):
        return self._a + self.b

################################### Inheritance ############
class B(A):
    def __init__(self, a, b):
        super().__init__(a, b)
        print("Protected in B ", self._a)

################################### Doing same work by creating object #################
class D():
    def __init__(self, a, b):
        self.a = a
        self.b = b
        d = A(self.a, self.b)
        print(d.addition())
        print("Protected in D ", d._a)
I'm able to access the protected member of class A both ways. Any member of A can be reached either way, so why is inheritance important? In which scenario will only inheritance work, and not object creation?
I don't believe that your example showcases the power of inheritance.
The 2nd example doesn't look very OOP-like. I rarely see object creation in the constructor unless the object is stored in an instance variable or a static variable.
Not every base class has all the fields that its subclasses have. For example, an Animal class may be very generic, but a subclass like Tiger that extends it adds tiger-specific properties that the Animal class does not have.
To focus back on your question: in the 2nd example you would have to keep the A instance around (e.g. as self.d) to reach the protected variable _a later, or do d = D() and then d.a, which isn't very protected. In D you are creating an object, whereas in the 1st example you are still setting up the blueprint for the class. That might not be what you are looking for; the 2nd example, with its object creation, looks more like a driver class.
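To make "in which scenario will only inheritance work" concrete, here is a small sketch building on the Animal example above: method overriding and isinstance checks only work through inheritance, not through a wrapper like D.
class Animal:
    def speak(self):
        return "..."

    def greet(self):
        # dispatches to whichever speak() the actual class defines
        return "The animal says " + self.speak()

class Tiger(Animal):
    def speak(self):  # overriding requires inheritance
        return "roar"

t = Tiger()
print(t.greet())              # The animal says roar
print(isinstance(t, Animal))  # True; a wrapper class would fail this check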
Assume that I have two classes, A and B. Instances of B are created inside class A. Instance methods of both classes use a common variable, which is initialized in class A, so I have to pass common_var through the __init__ function. Now imagine that I have classes A, B, C, D... and common_var1, var2, var3..., where all the vars have to be passed from class to class; that's terrible:
class A:
    def __init__(self, variable_part):
        self.common_var = "Fixed part" + variable_part
        self.bList = []

    def add_B(self):
        self.bList += [B(self.common_var)]

    def use_common_var(self):
        do_something(self.common_var)

class B:
    def __init__(self, common_var):
        self.common_var = common_var

    def use_common_var(self):
        do_something(self.common_var)
There's an ugly approach using global here:
class A:
    def __init__(self, variable_part):
        global common_var
        common_var = "Fixed part" + variable_part

    def use_common_var(self):
        do_something(common_var)

class B:
    def use_common_var(self):
        do_something(common_var)
But I don't think it's a good idea. Any better ideas?
Update:
The original question is here:
The common_vars are a series of string prefixes, things like "https://{}:{}/rest/v1.5/".format(host, port), "mytest", etc.
and in class A, I use
"https://127.0.0.1:8080/rest/v1.5/interface01" and "mytest_a"
in class B, I use
"https://127.0.0.1:8080/rest/v1.5/interface02" and "mytest_b"
in class C, I may use
"https://127.0.0.1:8080/rest/v1.5/get?name=123" and "mytest_c"
and so on. I use the common variables just to reuse the 'https://{}:{}/rest/v1.5' and "mytest" parts, and none of these A, B, C classes are in an "is-a" relationship. But the core of the problem is that common_var is not common from the very start; it is initialized in one of these classes.
Final Update
I compromised. I added a Helper class to reuse the common values:
class Helper:
    @staticmethod
    def setup(url, prefix):
        Helper.url = url
        Helper.prefix = prefix

# A always initiates first
class A:
    def __init__(self, host, port):
        Helper.setup(
            "https://{0}:{1}/rest/v1.5".format(host, port),
            "test"
        )

    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)

class B:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)

class C:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)
Is this a better way?
If you have four classes that share the same set of four attributes, then you might (or might not) have a use for inheritance, but this really depends on how and why those attributes are used, and on what the real relationship between those classes is. Inheritance is an "is a" relationship (if B inherits from A, then B "is a" A too).
Another solution, if you don't really have an "is a" relationship, is to create a class that groups those four attributes together and to pass an instance of that class where it's needed. Here again, it only makes sense if there is a real semantic relationship between those attributes.
Oh, and yes, using globals is usually not the solution (unless those variables are actually pseudo-constants: set once at startup and never changed by anyone).
To make a long story short: there's no one-size-fits-all answer to your question, the appropriate solution depends on your concrete use case.
EDIT:
Given your added explanation, it looks like the "cargo class" solution (grouping all your "shared" attributes in the same object) might be what you want. Just beware that it will couple all your classes to this class, which might or might not be a problem (with regard to testing and maintainability). If the forces that drive the evolution of your A / B / C / D classes are mostly the same, then you should be good...
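As a minimal sketch of that "cargo class" idea, assuming a dataclass-based config object (ApiConfig, base_url and prefix are made-up names; the values come from the question):
from dataclasses import dataclass

@dataclass
class ApiConfig:
    base_url: str
    prefix: str

class A:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        print(self.config.base_url + "/interface01", self.config.prefix + "_a")

class B:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        print(self.config.base_url + "/interface02", self.config.prefix + "_b")

# built once, where host/port first become known, then passed around explicitly
config = ApiConfig("https://127.0.0.1:8080/rest/v1.5", "mytest")
A(config).use_common_var()
B(config).use_common_var()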
Python 3.6
I just found myself programming this type of inheritance structure (below), where a subclass calls methods and attributes of an object the parent holds.
In my use case I'm placing code in class A that would otherwise be ugly in class B.
Almost like a reverse inheritance call or something, which doesn't seem like a good idea... (PyCharm doesn't seem to like it.)
Can someone please explain what is best practice in this scenario?
Thanks!
class A(object):
    def call_class_c_method(self):
        self.class_c.do_something(self)

class B(A):
    def __init__(self, class_c):
        self.class_c = class_c
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")
a = A
c = C
b = B(c)
outputs:
I'm doing something super() useful
There is nothing wrong with implementing a small feature in class A and using it as a base class for B. This pattern is known as a mixin in Python. It makes a lot of sense if you want to re-use A or want to compose B from many such optional features (see the sketch after the corrected code below).
But make sure your mixin is complete in itself!
The original implementation of class A depends on the derived class to set a member variable, which is a particularly ugly approach. Better to define class_c as a member of A, where it is used:
class A(object):
    def __init__(self, class_c):
        self.class_c = class_c

    def call_class_c_method(self):
        self.class_c.do_something()

class B(A):
    def __init__(self, class_c):
        super().__init__(class_c)
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")
c = C()
b = B(c)
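To illustrate the "compose B from many such optional features" point, here is a tiny sketch with two mixins that are each complete in themselves (LogMixin and GreetMixin are hypothetical names):
class LogMixin:
    # self-contained: everything log() needs lives in this mixin
    def log(self, msg):
        print("[{}] {}".format(type(self).__name__, msg))

class GreetMixin:
    # also self-contained
    def greet(self):
        return "hello from " + type(self).__name__

class B(LogMixin, GreetMixin):
    def run(self):
        self.log(self.greet())

B().run()  # prints: [B] hello from B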
I find that reducing things to abstract letters in cases like this makes it harder for me to reason about whether the interaction makes sense.
In effect, you're asking whether it is reasonable for a class (A) to depend on a member that conforms to a given interface (C). The answer is that there are cases where it clearly does.
As an example, consider the model-view-controller pattern in web application design.
You might well have something like
class Controller:
    def get(self, request):
        return self.view.render(self, request)
or similar. Then elsewhere you'd have some code that finds the view and populates self.view in the controller. Typical ways of doing that include routing lookups or associating a specific view with each controller. While not Python, the Rails web framework does a lot of this.
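For concreteness, a runnable version of that sketch (View, Controller and render are illustrative names, not any particular framework's API):
class View:
    def render(self, controller, request):
        return "{} handled {}".format(type(controller).__name__, request)

class Controller:
    def __init__(self, view):
        self.view = view  # the controller encapsulates a view instance

    def get(self, request):
        return self.view.render(self, request)

print(Controller(View()).get("/home"))  # Controller handled /home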
When we have specific examples, it's a lot easier to reason about whether the abstractions make sense.
In the above example, the controller interface depends on having access to some instance of the view interface to do its work. The controller instance encapsulates an instance that implements that view interface.
Here are some things to consider when evaluating such designs:
Can you clearly articulate the boundaries of each interface/class? That is, can you explain what the controller's job is and what the view's job is?
Does your decision to encapsulate an instance agree with those scopes?
Do the interface and class scopes seem reasonable when you think about future extensibility and about minimizing the scope of code changes?
If a subclass wants to modify the behaviour of inherited methods through static fields, is it thread safe?
More specifically:
class A(object):
    _m = 0
    def do(self):
        print(self._m)

class B(A):
    _m = 1
    def test(self):
        self.do()

class C(A):
    _m = 2
    def test(self):
        self.do()
Is there a risk that, in a multithreaded environment, an instance of class B calling do() would behave as class C is supposed to, or vice versa? I would say yes, but I was wondering whether somebody has actually tested this pattern already.
Note: This is not a question about the pattern itself, which I think should be avoided, but about its consequences, as I found it in reviewing real life code.
First, remember that classes are objects, and static fields (and for that matter, methods) are attributes of said class objects.
So what happens is that self.do() looks up the do method on self and calls do(self). Here self is whatever object the method was called on, and that object references one of the classes A, B, or C as its class, so the lookup of self._m finds the value in the correct class.
Of course, that requires a correction to your code:
class A(object):
    _m = 0
    def do(self):
        if self._m == 0: ...
        elif ...
A bare _m would not work, because Python looks the name up in only two places: names defined in the function, or globals. It won't look in the enclosing class scope the way C++ does. So you have to prefix it with self. so the right one gets used. If you wanted to force the _m defined in class A, you would use A._m instead.
P.S. There are times you need this pattern, particularly with metaclasses, which are kinda-sorta Python's analog to C++'s template metaprogramming and functional algorithms.
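As a quick demonstration of that lookup behaviour (and note that since _m is only ever read here, there is nothing for threads to race on in this example):
class A(object):
    _m = 0
    def do(self):
        print(self._m)

class B(A):
    _m = 1

class C(A):
    _m = 2

A().do()  # 0 -- found on class A
B().do()  # 1 -- self is a B, so B._m shadows A._m
C().do()  # 2 -- likewise for C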