I have the following pseudocode:

class A:
    mutex lockForB

class B:
    def __init__(self, A):  # A is an instance of class A
        lock(A.lockForB)
        # ...
        unlock(A.lockForB)
    # the other methods use the same locking
I understand that from an OOP point of view it is a very bad idea to use such a design, but if I create the lock inside class B I will not be able to lock around the creation of B. Is there a better design for this? Thanks in advance.
I have no idea what you're trying to accomplish. It's unlikely that a class-level lock is what you want, but your pseudocode is not far from actual code, so I'll just fill in the blanks. Honestly, without some idea of what you're attempting to synchronize access to, it's going to be a challenge to help you.
import threading

class A:
    lockForB = threading.RLock()

class B:
    def __init__(self):
        with A.lockForB:
            # do init stuff
            ...

    def othermethod(self):
        with A.lockForB:
            # do other stuff
            ...
So that code will work. lockForB is just a class-level attribute on A, so it is shared between all instances of A. However, where I've seen folks use class-level locks like this, it is usually to keep the class that owns the lock from being put into an inconsistent state, not to have two seemingly unrelated classes share one lock.
Without context to help understand what you're attempting to synchronize access to, it's really hard to tell you why this couldn't be written like this:
import threading

class C:
    lock = threading.RLock()

    def __init__(self):
        with self.lock:
            # do stuff
            ...

    def othermethod(self):
        with self.lock:
            # do other stuff
            ...
In general you should put locks only around the boundaries of critical sections or uses of shared resources. Consider what it is you're trying to protect from simultaneous access, and protect it. If, for instance, class A has a Queue into which items are placed and from which they are read, then it's the access to this particular resource you should protect. Since OOP dictates that this kind of resource should only be accessed by the class's methods, only class A should protect it:
from queue import Queue
from threading import Lock

class A(object):
    def __init__(self, *args, **kws):
        # do the initialization
        self._my_queue = Queue()
        self._lock = Lock()

    def do_something(self):
        # do initial calculations
        with self._lock:  # the lock is released even if get() raises
            item = self._my_queue.get()
        # do other things
Henceforth, class B should call class A's methods and it will be thread-safe. If class B has its own critical sections, it's quite alright to use more than a single lock:
from threading import Lock

class B(object):
    def __init__(self, *args, **kws):
        # do the initialization
        self._lock = Lock()
        self.a = A()

    def do_something_with_a(self):
        # initial calculations
        with self._lock:
            # critical section
            result = self.a.do_something()
            # do something with result
        # continue the code
This way each class protects its own critical sections and shared resources and there's no need to break the class interface.
If you need to protect the C'tor of a class, then you either need a global lock for the module, created and initialized outside the scope of the class, or you need to add the lock to the class object itself (like a static member in C++ and Java) rather than to the instance:
from threading import Lock

class B(object):
    # creating the lock at class-definition time avoids the race that a
    # lazy hasattr() check would have if two threads construct B at once
    _lock = Lock()

    def __init__(self, *args, **kws):
        # "with" works with Python 2.6+; for earlier versions use try/finally
        with self.__class__._lock:
            # your initialization
            ...
This will protect your C'tor.
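For completeness, the module-level variant mentioned above would look something like this (a sketch; _ctor_lock is a name picked purely for illustration):

from threading import Lock

_ctor_lock = Lock()  # module-global, so it exists before any B does

class B(object):
    def __init__(self, *args, **kws):
        with _ctor_lock:
            # your initialization
            ...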
I have a pretty big class that I want to break down into smaller classes that each handle a single part of the whole. So each child takes care of only one aspect of the whole.
Each of these child classes still needs to communicate with the others.
For example, Data Access creates a dictionary that Plotting Controller needs to have access to.
And then Plotting Controller needs to update stuff on Main GUI Controller. But these children have various more inter-communication functions.
How do I achieve this?
I've read Metaclasses, Cooperative Multiple Inheritance and Wonders of Cooperative Multiple Inheritance, but I cannot figure out how to do this.
The closest I've come is the following code:
class A:
    def __init__(self):
        self.myself = 'ClassA'

    def method_ONE_from_class_A(self, caller):
        print(f"I am method ONE from {self.myself} called by {caller}")
        self.method_ONE_from_class_B(self.myself)

    def method_TWO_from_class_A(self, caller):
        print(f"I am method TWO from {self.myself} called by {caller}")
        self.method_TWO_from_class_B(self.myself)

class B:
    def __init__(self):
        self.me = 'ClassB'

    def method_ONE_from_class_B(self, caller):
        print(f"I am method ONE from {self.me} called by {caller}")
        self.method_TWO_from_class_A(self.me)

    def method_TWO_from_class_B(self, caller):
        print(f"I am method TWO from {self.me} called by {caller}")

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

    def children_start_talking(self):
        self.method_ONE_from_class_A('Big Poppa')

poppa = C()
poppa.children_start_talking()
which results correctly in:
I am method ONE from ClassA called by Big Poppa
I am method ONE from ClassB called by ClassA
I am method TWO from ClassA called by ClassB
I am method TWO from ClassB called by ClassA
But... even though class B and class A correctly call the other child's functions, my tools don't actually find their declarations, nor do I "see" them while I'm typing the code, which is both frustrating and worrisome: I might be doing something wrong.
Is there a good way to achieve this? Or is it an actually bad idea?
EDIT: Python 3.7 if it makes any difference.
Inheritance
When breaking a class hierarchy up like this, the individual "partial" classes, which we call "mixins", will "see" only what is declared directly on them and on their base classes. In your example, when writing class A, it does not know anything about class B; you, as the author, can know that methods from class B will be present, because methods from class A will only ever be called from class C, which inherits from both.
Your programming tools, the IDE included, can't know that. (That you should know better than your programming aid is a side track.) It would work if run, but it is a poor design.
If all the methods are to be present directly on a single instance of your final class, they all have to be "present" on a common superclass; you can even write the independent subclasses in different files, and then a single subclass that inherits all of them:
from abc import abstractmethod, ABC

class Base(ABC):
    @abstractmethod
    def method_A_1(self):
        pass

    @abstractmethod
    def method_A_2(self):
        pass

    @abstractmethod
    def method_B_1(self):
        pass

class A(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_B_1(self):
        ...

class C(A, B):
    pass
(The "ABC" and "abstractmethod" are a bit of sugar - they will work, but this design would work without any of that - thought their presence help whoever is looking at your code to figure out what is going on, and will raise an earlier runtime error if you per mistake create an instance of one of the incomplete base classes)
Composite
This works, but if your methods are actually for wildly different domains, then instead of multiple inheritance you should try the "composite" design pattern. There is no need for multiple inheritance if it does not arise naturally.
In this case, you instantiate objects of the classes that drive the different domains in the __init__ of the shell class, and pass the shell's own instance to those children, which keep a reference to it (in a self.parent attribute, for example). Chances are your IDE still won't know what you are talking about, but you will have a saner design.
class Parent:
    def __init__(self):
        self.a_domain = A(self)
        self.b_domain = B(self)

class A:
    def __init__(self, parent):
        self.parent = parent
        # no need to call any "super().__init__": this is called
        # as part of the initialization of the parent class

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B:
    def __init__(self, parent):
        self.parent = parent

    def method_B_1(self):
        # need a result from the 'A' domain:
        a_value = self.parent.a_domain.method_A_1()
        ...
This example uses only the basics of the language's features, but if you decide to go for it in a complex application, you can sophisticate it: there are interface patterns that would allow you to swap the classes used for the different domains in specialized subclasses, and so on. But typically the pattern above is what you need.
I'm in a scenario where I want to refactor several classes which have identical and/or similar methods. The number of classes is around 20 and the number of similar methods around 15. All sorts of combinations exist within this space, which is why I'm a bit reluctant to use inheritance to solve this issue (rightfully?).
The code is part of a wrapper around another application that is controlled by a COM API. The wrapper in turn is part of a package that is distributed internally at the company where I work. Therefore the interfaces of the classes have to remain the same (for backwards compatibility).
This example illustrates some very simplified versions of the classes:
class FirstCollectionLike:
    def __init__(self):
        self._collection = list()

    def add(self, arg):
        self._collection.append(arg)

    def remove(self, index):
        del self._collection[index]

class SecondCollectionLike:
    def __init__(self):
        self._collection = list()
        self._resource = some_module.get_resource()

    def start(self):
        some_module.start(self._resource)

    def add(self, arg):
        self._collection.append(arg)

    def remove(self, value):
        self._collection.remove(value)

class SomeOtherClass:
    def __init__(self):
        self._some_attribute = 0
        self._resource = some_module.get_resource()

    def add(self, value):
        self._some_attribute += value

    def start(self):
        some_module.start(self._resource)
Are there any design patterns I could look into that would help me solve this issue?
My initial thought was to create method classes like Add, RemoveByIndex and RemoveByName that implement __call__, like so:
class Add:
    def __init__(self, owner):
        self.owner = owner

    def __call__(self, item):
        self.owner._collection.append(item)

class AddAndInstantiate:
    def __init__(self, owner, type_to_instantiate):
        self.owner = owner
        self.type_to_instantiate = type_to_instantiate

    def __call__(self, name):
        self.owner._collection.append(self.type_to_instantiate(name))
and then assign instances of those classes as instance attributes to their respective owner objects:
class RefactoredClassOne:
    def __init__(self):
        self.add = Add(self)
        self.remove = RemoveByIndex(self)

class RefactoredClassTwo:
    def __init__(self):
        self.add = AddAndInstantiate(self, SomeClass)
        self.remove = RemoveByName(self)
This way I could quite easily add any method I want to a class and provide some arguments to the method class if needed (like the type of the class to instantiate in the example above). The downside is that it is a bit harder to follow what is happening, and the automatic documentation generation we use (Sphinx) does not work when the methods are implemented this way.
Does this seem like a bad approach? What are the alternatives?
First, if your classes are as simple as your example suggests, I'm not sure OOP is the right tool. What your classes are doing is just renaming a couple of basic calls. This is useless abstraction and IMO a bad practice (why force me to look into the SecondCollectionLike.py file to discover that .add() is 1) in fact a wrongly named append and 2) that my collection is in fact a list with a fancy name?).
In that case I'd say that a functional approach might be better, and a workflow such as:
a = SecondCollectionLike()
a.add("x")
a.add("y")
a.remove(0)
a.start()
would be a lot clearer if it looked like
a = list()
a.append("x")
a.append("y")
del a[0]
resource = some_module.get_resource()
some_module.start(resource)
If your classes are in fact more complex and you really want to keep the OOP approach, I think this solution is probably close to what you're looking for.
The idea is to have modules which hold the logic: for example, a _collection_behaviour.py module, which holds add(), remove(), increment() or whatever, and a _runtime.py module, which holds the start(), stop(), etc. logic.
This way you could have classes which exhibit behaviour from these modules:
class MyClass:
    # importing the functions in the class body binds them as methods
    from ._collection_behaviour import add
    from ._collection_behaviour import remove
    from ._runtime import start

    def __init__(self):
        self._collection = list()
But I do not see the point in making these functions classes which implement __call__ if that's all they do.
I am trying to create a set of classes as containers of modular blocks of logic. The idea is to be able to mix and match the classes through inheritance (possibly multiple inheritance) to execute any combination of those pieces of modular logic. Here is the structure I currently have:
class Base:
    methods = {}

    def __init__(self):
        """
        This will create an instance-attribute copy of the combined dict
        of all the methods in every parent class.
        """
        self.methods = {}
        for cls in self.__class__.__mro__:
            # object is the only class that won't have a methods attribute
            if not cls == object:
                self.methods.update(cls.methods)

    def call(self):
        """
        This will execute all the methods in every parent.
        """
        for k, v in self.methods.items():
            v(self)

class ParentA(Base):
    def method1(self):
        print("Parent A called")

    methods = {"method": method1}

class ParentB(Base):
    def method2(self):
        print("Parent B called")

    methods = {"method2": method2}

class Child(ParentA, ParentB):
    def method3(self):
        print("Child called")

    methods = {"method3": method3}
This seems to work as expected, but I was wondering if there is anything I might be missing design-wise, or if there is something I am trying to do that I should not be doing. Any considerations or feedback on the structure are very welcome, as are tips on how I could make this more Pythonic. Thank you all in advance.
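For what it's worth, here is how the structure above behaves when exercised (the output order follows the MRO walk in __init__ and relies on dicts preserving insertion order, which Python 3.7 guarantees):

child = Child()
child.call()
# prints:
#   Child called
#   Parent A called
#   Parent B called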
Is there any way to use monitor-style thread synchronization in a Python class, like Java's synchronized methods, to ensure thread safety and avoid race conditions?
I want a monitor-like synchronization mechanism that allows only one method call at a time in my class or object.
You might want to have a look at Python's threading module. For simple mutual-exclusion functionality you can use a Lock object, and you can do this conveniently with the with statement:
from threading import Lock
...
lock = Lock()
...
with lock:
    # This code will only be executed by one single thread at a time;
    # the lock is released when the thread exits the 'with' block
    ...
See also here for an overview of the different thread synchronization mechanisms in Python.
There is no Python language construct for Java's synchronized (but I guess one could be built using decorators).
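For illustration, a minimal sketch of such a decorator (the names synchronized and _sync_lock are made up here, not a standard library feature); it makes every decorated method of an instance share one reentrant lock, roughly like marking the methods synchronized in Java:

import functools
import threading

def synchronized(method):
    """Run the wrapped method while holding a per-instance RLock."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # lazily attach one RLock per instance; dict.setdefault keeps
        # this race-free under CPython's GIL for a plain instance __dict__
        lock = self.__dict__.setdefault('_sync_lock', threading.RLock())
        with lock:
            return method(self, *args, **kwargs)
    return wrapper

class Counter:
    def __init__(self):
        self.value = 0

    @synchronized
    def increment(self):
        self.value += 1

    @synchronized
    def get(self):
        return self.value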
I built a simple prototype for it; here's a link to the GitHub repository with all the details: https://github.com/m-a-rahal/monitor-sync-python
I used inheritance instead of decorators, but maybe I'll include that option later.
Here's what the 'Monitor' superclass looks like:
import threading

class Monitor(object):
    def __init__(self, lock=None):
        '''initializes _lock; a fresh threading.Lock() is used by default
        (created inside the call, so instances do not share one default lock)'''
        self._lock = lock if lock is not None else threading.Lock()

    def Condition(self):
        '''returns a condition bound to this monitor's lock'''
        return threading.Condition(self._lock)

    init_lock = __init__
Now all you need to do to define your own monitor is to inherit from this class:
class My_Monitor_Class(Monitor):
    def __init__(self):
        self.init_lock()  # just don't forget this line; it creates the monitor's _lock
        cond1 = self.Condition()
        cond2 = self.Condition()
        # you can see I defined some 'Condition' objects as well, very simple syntax
        # these conditions are bound to the lock of the monitor
You can also pass your own lock instead:
class My_Monitor_Class(Monitor):
    def __init__(self, lock):
        self.init_lock(lock)
Check out the threading.Condition() documentation.
Also, you need to protect all the 'public' methods with the monitor's lock, like this:
class My_Monitor_Class(Monitor):
    def method(self):
        with self._lock:
            # your code here
            ...
If you want to use 'private' methods (called inside the monitor), you can either NOT protect them with the _lock (or else the threads will get stuck), or use an RLock for the monitor instead.
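A small sketch of the RLock variant, reusing the Monitor class above: since an RLock is reentrant, a public method that already holds the lock can call a 'private' method that takes it again without deadlocking:

import threading

class My_Monitor_Class(Monitor):
    def __init__(self):
        self.init_lock(threading.RLock())  # reentrant lock instead of the default Lock

    def public_method(self):
        with self._lock:
            self._private_helper()  # re-acquiring the RLock here is safe

    def _private_helper(self):
        with self._lock:
            # manipulate shared state here
            ...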
EXTRA TIP
Sometimes a monitor consists of 'entrance' and 'exit' protocols:
monitor.enter_protocol()
<critical section>
monitor.exit_protocol()
In this case, you can exploit Python's cool with statement :3
Just define the __enter__ and __exit__ methods like this:
class monitor(Monitor):
    def __enter__(self):
        with self._lock:
            # enter_protocol code here
            ...

    def __exit__(self, type, value, traceback):
        with self._lock:
            # exit_protocol code here
            ...
Now all you need to do is wrap the critical section in a with statement on a monitor instance:
with monitor:
<critical section>
class Parent():
    def __init__(self):
        self.child = Child()

class Child():
    def __init__(self):
        # get Parent instance
        self.parent = self.Instantiator()
I know this isn't proper encapsulation but for interest's sake...
Given a "Parent" class that instantiates a "Child" object, is it possible from within Child to return the Parent object that instantiated it? And if no, I'm curious, do any languages support this?
To answer the question: no, there's no way[1] for the child instance to know about any classes which contain references to it. The common[2] way to handle this is:
class Parent(object):
    def __init__(self):
        self.child = Child()
        self.child._parent = self
[1] Of course, this isn't strictly true. As another commenter noted, you can extract the stack frame from the executing code within the __init__ method, and examine the f_locals dictionary for the self variable of the frame before the currently executing one. But this is complicated and prone to error. Highly unrecommended.
[2] A slightly better way to handle this (depending on the specific needs of the program) might be to require the parent to pass itself to the child, like so:
class Parent(object):
    def __init__(self):
        self.child = Child(self)

class Child(object):
    def __init__(self, parent):
        self._parent = parent
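Either way, a quick sanity check shows the wiring works as intended:

p = Parent()
assert p.child._parent is p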
Here's a reasonably-simple metaclass solution to the problem:
import functools

class MetaTrackinits(type):
    being_inited = []

    def __new__(cls, n, b, d):
        clob = type.__new__(cls, n, b, d)
        theinit = getattr(clob, '__init__')

        @functools.wraps(theinit)
        def __init__(self, *a, **k):
            MetaTrackinits.being_inited.append(self)
            try:
                theinit(self, *a, **k)
            finally:
                MetaTrackinits.being_inited.pop()

        setattr(clob, '__init__', __init__)

        def Instantiator(self, where=-2):
            return MetaTrackinits.being_inited[where]

        setattr(clob, 'Instantiator', Instantiator)
        return clob

__metaclass__ = MetaTrackinits  # module-global metaclass (Python 2 idiom)

class Parent():
    def __init__(self):
        self.child = Child()

class Child():
    def __init__(self):
        self.parent = self.Instantiator()

p = Parent()
print p
print p.child.parent
a typical output, depending on the platform, will be something like
<__main__.Parent object at 0xd0750>
<__main__.Parent object at 0xd0750>
You could obtain a similar effect (in 2.6 and later) with a class decorator, but then all classes needing the functionality (both the parent and the child ones) would have to be explicitly decorated; here, they just need to share the same metaclass, which may be less intrusive thanks to the "module-global __metaclass__ setting" idiom (and the fact that metaclasses, differently from class decorations, also get inherited).
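For comparison, a class-decorator version might look roughly like this (a sketch in modern Python 3 syntax; the names track_inits and instantiator are made up for illustration). Note that Parent and Child must each be decorated explicitly:

import functools

_being_inited = []  # stack of instances whose __init__ is currently running

def track_inits(cls):
    original_init = cls.__init__

    @functools.wraps(original_init)
    def __init__(self, *a, **k):
        _being_inited.append(self)
        try:
            original_init(self, *a, **k)
        finally:
            _being_inited.pop()

    cls.__init__ = __init__
    cls.instantiator = lambda self, where=-2: _being_inited[where]
    return cls

@track_inits
class Parent:
    def __init__(self):
        self.child = Child()

@track_inits
class Child:
    def __init__(self):
        self.parent = self.instantiator()

p = Parent()
assert p.child.parent is p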
In fact, the metaclass version is simple enough that I would consider allowing it in production code, if the need for that magical "instantiator" method had a proven business basis (I would never allow, in production code, a hack based on walking the stack frames!). (BTW, the "allowing" part comes from the best practice of mandatory code reviews: code changes don't get into the trunk of the codebase without consensus from the reviewers; this is how typical open-source projects work, and also how we always operate at my employer.)
Here's an example based on some of Chris B.'s suggestions to show how absolutely terrible it would be to inspect the stack:
import sys

class Child(object):
    def __init__(self):
        # To get the parent:
        # 1. Get our current stack frame.
        # 2. Go back one level to our caller (the Parent() constructor).
        # 3. Grab its locals dictionary.
        # 4. Fetch the self instance.
        # 5. Assign it to our parent property.
        self.parent = sys._getframe().f_back.f_locals['self']

class Parent(object):
    def __init__(self):
        self.child = Child()

if __name__ == '__main__':
    p = Parent()
    assert id(p) == id(p.child.parent)
Sure, that'll work, but just never try to refactor it into a separate method, or create a base class from it.
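To see why: sys._getframe().f_back counts stack frames, so (as a sketch, not part of the original answer) moving the lookup into any helper function inserts an extra frame, and the code silently grabs the wrong self:

import sys

def find_parent():
    # f_back is now find_parent's *caller*, Child.__init__, whose
    # 'self' is the Child instance, not the Parent one frame further up
    return sys._getframe().f_back.f_locals['self']

class Child(object):
    def __init__(self):
        self.parent = find_parent()  # silently assigns the Child itself!

class Parent(object):
    def __init__(self):
        self.child = Child()

p = Parent()
assert p.child.parent is p.child  # not p: the refactoring broke it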
You could also try to use the traceback module, just to prove a point. (Don't try this at home, kids!)
This can be done in Python with metaclasses.