Initialize common variables used in multiple classes - Python

Assume that I have two classes, A and B. Class A creates instances of B. Both classes' instance methods use a common variable, which is initialized in class A, so I have to pass common_var through the __init__ function. Now imagine I have classes A, B, C, D... and common_var1, var2, var3..., where every var has to be passed from class to class. That's terrible:
class A:
    def __init__(self, variable_part):
        self.common_var = "Fixed part" + variable_part
        self.bList = []

    def add_B(self):
        self.bList.append(B(self.common_var))

    def use_common_var(self):
        do_something(self.common_var)

class B:
    def __init__(self, common_var):
        self.common_var = common_var

    def use_common_var(self):
        do_something(self.common_var)
There's an ugly approach using a global here:
class A:
    def __init__(self, variable_part):
        global common_var
        common_var = "Fixed part" + variable_part

    def use_common_var(self):
        do_something(common_var)

class B:
    def use_common_var(self):
        do_something(common_var)
But I don't think it's a good idea. Any better ideas?
Update:
The original problem is this:
The common vars are a series of string prefixes, things like "https://{}:{}/rest/v1.5/".format(host, port), "mytest", etc.
In class A, I use
"https://127.0.0.1:8080/rest/v1.5/interface01" and "mytest_a";
in class B, I use
"https://127.0.0.1:8080/rest/v1.5/interface02" and "mytest_b";
in class C, I may use
"https://127.0.0.1:8080/rest/v1.5/get?name=123" and "mytest_c";
and so on. I use common variables only to reuse the "https://{}:{}/rest/v1.5" and "mytest" parts, and none of these A, B, C classes is in an "is-a" relationship with the others. But the core of this problem is that the common_var is not common from the very start - it is initialized inside one of these classes.
Final Update
I compromised. I added a Helper class to reuse the common values:
class Helper:
    @staticmethod
    def setup(url, prefix):
        Helper.url = url
        Helper.prefix = prefix

# A always initializes first
class A:
    def __init__(self, host, port):
        Helper.setup(
            "https://{0}:{1}/rest/v1.5".format(host, port),
            "test"
        )

    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)

class B:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)

class C:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)
Is this a better way?

If you have four classes that share the same set of four attributes, then you might (or might not) have a use for inheritance - but this really depends on how and why those attributes are used, and on what the real relationship between those classes is. Inheritance is an "is a" relationship (if B inherits from A, then B "is a" A too).
Another solution - if you don't really have an "is a" relationship - is to create a class that groups those four attributes and to pass an instance of this class where it's needed. Here again, it only makes sense if there is a real semantic relationship between those attributes.
Oh, and yes, using globals is usually not the solution (unless those variables are actually pseudo-constants - set once at startup and never changed by anyone).
To make a long story short: there's no one-size-fits-all answer to your question; the appropriate solution depends on your concrete use case.
EDIT:
Given your added explanation, it looks like the "cargo class" solution (grouping all your "shared" attributes in one object) might be what you want. Just beware that it will couple all your classes to this class, which may or may not be a problem (wrt testing and maintainability). If the forces that drive the evolution of your A / B / C / D classes are mostly the same, then you should be good. A minimal sketch follows.
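For illustration, a minimal sketch of that "cargo class" approach, using hypothetical names (Config, the interface paths) that are not in the original code - the shared values are built once and the same instance is passed to each class:

class Config:
    # Groups the values shared by A, B, C... (hypothetical name)
    def __init__(self, host, port, prefix):
        self.base_url = "https://{}:{}/rest/v1.5".format(host, port)
        self.prefix = prefix

class A:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        # e.g. ".../interface01" and "mytest_a"
        print(self.config.base_url + "/interface01", self.config.prefix + "_a")

class B:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        print(self.config.base_url + "/interface02", self.config.prefix + "_b")

config = Config("127.0.0.1", 8080, "mytest")
A(config).use_common_var()
B(config).use_common_var()

This keeps the "common" values in exactly one place while avoiding both globals and class-to-class parameter chains.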

Related

What is the point of an @abstractmethod with a default implementation?

I’ve seen code that declares abstract methods that actually have a non-trivial body.
What is the point of this, since you have to implement it in any concrete class anyway?
Is it just to allow you to do something like this?
def method_a(self):
    super().method_a()
I've used this before in cases where it was possible to have the concrete implementation, but I wanted to force subclass implementers to consider if that implementation is appropriate for them.
One specific example: I was implementing an abstract base class with an abstract factory method so subclasses can define their own __init__ function but have a common interface to create them. It was something like
from abc import ABC, abstractmethod

class Foo(ABC):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    @classmethod
    @abstractmethod
    def from_args(cls, a, b, c) -> "Foo":
        return cls(a, b, c)
Subclasses usually only need a subset of the arguments. When testing, it's cumbersome to access the framework which actually uses this factory method, so it's likely someone will forget to implement the from_args factory function, since it wouldn't come up in their usual testing. Making it an abstractmethod makes it impossible to instantiate the class without first implementing it, so the omission will definitely come up during normal testing. A sketch of a subclass opting into the default is below.
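As a sketch of that idea (Bar is a hypothetical subclass, not from the original post) - the subclass must opt in explicitly, and may simply delegate to the default body if it decides that implementation is appropriate:

from abc import ABC, abstractmethod

class Foo(ABC):
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    @classmethod
    @abstractmethod
    def from_args(cls, a, b, c) -> "Foo":
        return cls(a, b, c)

class Bar(Foo):
    @classmethod
    def from_args(cls, a, b, c) -> "Foo":
        # Consciously reuse the default implementation.
        return super().from_args(a, b, c)

bar = Bar.from_args(1, 2, 3)  # works
# Foo.from_args(1, 2, 3) would raise TypeError: Foo is abstract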

Python - how to abstract the class of a member variable

Suppose that I have a module like the one below:
class B:
    def ...
    def ...

class A:
    b: B
    def ...
    def ...
I use class B only as a member variable of class A.
When I try to abstract this module for my business logic, what should I do?
1. One big interface, which has abstract methods for class A and class B
2. Two interfaces, which have abstract methods for class A and class B individually
3. All of the above are wrong; another way
Both 1 & 2 are correct approaches, but it completely depends on your application.
I think two interfaces, each with abstract methods for class A and class B individually, is the right approach when your two classes do separate work and are completely different from each other.
But, as your code shows, class A holds an instance of class B as a member. If you create a single interface for class A, it will also let you reach class B's methods through that member, so this approach is good too, and it keeps the code shorter.
I hope this helps you make your decision. Let me know if any other clarification is required. A sketch of option 2 is below.
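A minimal sketch of option 2, with hypothetical method names (the original module elides all bodies as "def ..."): each class gets its own small interface, and the concrete A depends only on B's interface:

from abc import ABC, abstractmethod

class AbstractB(ABC):
    @abstractmethod
    def compute(self): ...

class AbstractA(ABC):
    b: AbstractB  # A depends on B's interface, not its implementation

    @abstractmethod
    def run(self): ...

class ConcreteB(AbstractB):
    def compute(self):
        return 42

class ConcreteA(AbstractA):
    def __init__(self, b: AbstractB):
        self.b = b

    def run(self):
        return self.b.compute()

print(ConcreteA(ConcreteB()).run())  # 42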

Importance of inheritance in Python

I have implemented inheritance in Python and the code is working. I have also tried doing the same thing in another way, and I want to know where inheritance has an advantage over my second way of doing the same thing.
class A:
    def __init__(self, a, b):
        self._a = a
        self.b = b

    def addition(self):
        return self._a + self.b

################################### Inheritance ############
class B(A):
    def __init__(self, a, b):
        super().__init__(a, b)
        print("Protected in B ", self._a)

################################### Doing same work by creating object #################
class D():
    def __init__(self, a, b):
        self.a = a
        self.b = b
        d = A(self.a, self.b)
        print(d.addition())
        print("Protected in D ", d._a)
I'm able to access a protected member of class A in both ways. Any member of A can be accessed in both ways, so why is inheritance important? In which scenario will only inheritance work, and NOT object creation?
I don't believe that your example showcases the power of inheritance.
The 2nd example doesn't look very OOP-like. I rarely see object creation in the constructor unless the object is stored as an instance variable or a static variable.
Not every base class will have all the fields that the subclass has. For example, an Animal class may be super generic, but as more classes like Tiger extend it, Tiger will gain more tiger-specific properties (not just the basic ones from the Animal class).
To focus back on your question: in the 2nd example, the A instance is a local variable of the constructor, so to access the protected variable _a later you would have to store it (say, as self.d) and go through d._a, which isn't very protected. You are creating an object within D, whereas in the 1st example you are still setting up the blueprint for the class. That might not be what you are looking to do, as the 2nd example with the object creation looks more like a driver class. A sketch of a scenario where only inheritance works follows.
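To sketch one scenario the question asks about, where object creation alone does not substitute for inheritance (the class names reuse the question's; the overridden addition is made up): a B instance "is an" A, so it can be passed wherever an A is expected and can override A's behaviour, while a D instance cannot:

class A:
    def __init__(self, a, b):
        self._a = a
        self.b = b

    def addition(self):
        return self._a + self.b

class B(A):
    # Overrides A's behaviour; callers expecting an A get it transparently.
    def addition(self):
        return 2 * (self._a + self.b)

def total(items):
    # Accepts any object that "is a" A.
    return sum(item.addition() for item in items)

print(total([A(1, 2), B(1, 2)]))  # 3 + 6 = 9
print(isinstance(B(1, 2), A))     # True; a D instance would be False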

Python - Child Class to call a function from another Child Class

I have a pretty big class that I want to break down into smaller classes that each handle a single part of the whole. So each child takes care of only one aspect of the whole.
Each of these child classes still needs to communicate with the others.
For example, Data Access creates a dictionary that Plotting Controller needs to have access to.
And then Plotting Controller needs to update stuff on Main GUI Controller. But these children have various other inter-communication functions.
How do I achieve this?
I've read Metaclasses, Cooperative Multiple Inheritance, and Wonders of Cooperative Multiple Inheritance, but I cannot figure out how to do this.
The closest I've come is the following code:
class A:
    def __init__(self):
        self.myself = 'ClassA'

    def method_ONE_from_class_A(self, caller):
        print(f"I am method ONE from {self.myself} called by {caller}")
        self.method_ONE_from_class_B(self.myself)

    def method_TWO_from_class_A(self, caller):
        print(f"I am method TWO from {self.myself} called by {caller}")
        self.method_TWO_from_class_B(self.myself)

class B:
    def __init__(self):
        self.me = 'ClassB'

    def method_ONE_from_class_B(self, caller):
        print(f"I am method ONE from {self.me} called by {caller}")
        self.method_TWO_from_class_A(self.me)

    def method_TWO_from_class_B(self, caller):
        print(f"I am method TWO from {self.me} called by {caller}")

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

    def children_start_talking(self):
        self.method_ONE_from_class_A('Big Poppa')

poppa = C()
poppa.children_start_talking()
which results correctly in:
I am method ONE from ClassA called by Big Poppa
I am method ONE from ClassB called by ClassA
I am method TWO from ClassA called by ClassB
I am method TWO from ClassB called by ClassA
But... even though class B and class A correctly call the other child's functions, they don't actually find their declarations. Nor do I "see" them when I'm typing the code, which is both frustrating and worrisome, since I might be doing something wrong.
Is there a good way to achieve this? Or is it an actually bad idea?
EDIT: Python 3.7 if it makes any difference.
Inheritance
When breaking up a class hierarchy like this, the individual "partial" classes, which we call "mixins", will "see" only what is declared directly on them and on their base classes. In your example, when writing class A, it does not know anything about class B - you, as the author, can know that methods from class B will be present, because methods from class A will only be called from class C, which inherits both.
Your programming tools, the IDE included, can't know that. (That you should know better than your programming aid is a side track.) It would work if run, but this is a poor design.
If all methods are to be present directly on a single instance of your final class, all of them have to be "present" in a super-class common to them all - you can even write independent subclasses in different files, and then a single subclass that inherits all of them:
from abc import abstractmethod, ABC

class Base(ABC):
    @abstractmethod
    def method_A_1(self):
        pass

    @abstractmethod
    def method_A_2(self):
        pass

    @abstractmethod
    def method_B_1(self):
        pass

class A(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in bases are called
        # because Python linearizes the base classes on multiple inheritance

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in bases are called
        # because Python linearizes the base classes on multiple inheritance

    def method_B_1(self):
        ...

    ...

class C(A, B):
    pass
(The "ABC" and "abstractmethod" are a bit of sugar - they will work, but this design would work without any of that - thought their presence help whoever is looking at your code to figure out what is going on, and will raise an earlier runtime error if you per mistake create an instance of one of the incomplete base classes)
Composite
This works, but if your methods are actually for wildly different domains, then instead of multiple inheritance you should try the "composite" design pattern. There is no need for multiple inheritance if it does not arise naturally.
In this case, you instantiate objects of the classes that drive the different domains in the __init__ of the shell class, and pass the shell's own instance to those children, which keep a reference to it (in a self.parent attribute, for example). Chances are your IDE still won't know what you are talking about, but you will have a saner design.
class Parent:
    def __init__(self):
        self.a_domain = A(self)
        self.b_domain = B(self)

class A:
    def __init__(self, parent):
        self.parent = parent
        # no need to call any "super...init", this is called
        # as part of the initialization of the parent class

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B:
    def __init__(self, parent):
        self.parent = parent

    def method_B_1(self):
        # need result from the 'A' domain:
        a_value = self.parent.a_domain.method_A_1()
        ...
This example uses only basic language features, but if you decide to go for it in a complex application, you can sophisticate it - there are interface patterns that would allow you to swap the classes used for the different domains in specialized subclasses, and so on. But typically the pattern above is all you need. A quick usage sketch follows.
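A usage sketch under the same assumptions (the method bodies are elided above, so there is no visible output - the point is only the wiring):

parent = Parent()
# The B domain reaches the A domain through the shared parent reference.
parent.b_domain.method_B_1()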

Are static fields used to modify super class behaviour thread safe?

If a subclass wants to modify the behaviour of inherited methods through static fields, is it thread safe?
More specifically:
class A(object):
    _m = 0
    def do(self):
        print(self._m)

class B(A):
    _m = 1
    def test(self):
        self.do()

class C(A):
    _m = 2
    def test(self):
        self.do()
Is there a risk that an instance of class B calling do() would behave as class C is supposed to, or vice-versa, in a multithreading environment? I would say yes, but I was wondering if somebody went through actually testing this pattern already.
Note: This is not a question about the pattern itself, which I think should be avoided, but about its consequences, as I found it in reviewing real life code.
First, remember that classes are objects, and static fields (and for that matter, methods) are attributes of said class objects.
So what happens is that self.do() looks up the do method on self and calls do(self). self is the object the method was called on, and that object references one of the classes A, B, or C as its class. So the lookup will find the value of _m on the correct class; and since none of these class attributes is ever reassigned, there is nothing for threads to race on.
Of course, that requires a correction to your code:
class A(object):
    _m = 0
    def do(self):
        if self._m == 0: ...
        elif ...
Your original code won't work because Python only looks for _m in two places: defined in the function, or as a global. It won't look in class scope the way C++ does. So you have to prefix it with self. so the right one gets used. If you wanted to force it to use the _m defined in class A, you would use A._m instead. A small demonstration of this lookup follows.
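To see that lookup rule in action (a small sketch, not from the original answer): each instance reads _m through its own class, so concurrent calls on B and C instances never see each other's value, as long as neither class reassigns _m at runtime:

import threading

class A(object):
    _m = 0
    def do(self):
        return self._m  # resolved on type(self), not hard-wired to A

class B(A):
    _m = 1

class C(A):
    _m = 2

results = []

def worker(obj):
    # Each thread reads the class attribute through its own instance.
    results.append((type(obj).__name__, obj.do()))

threads = [threading.Thread(target=worker, args=(cls(),)) for cls in (B, C) * 10]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every B reports 1 and every C reports 2, regardless of interleaving.
assert all(value == {"B": 1, "C": 2}[name] for name, value in results)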
P.S. There are times you need this pattern, particularly with metaclasses, which are kinda-sorta Python's analog to C++'s template metaprogramming and functional algorithms.
