I'm trying to implement the builder pattern in Python with some fail safes. In OOP these fail safes would normally be implemented using interfaces, and they would restrict the user from calling some methods before others, or at all, unless the current object can be manipulated with those methods.
How can such fail safes be implemented in Python?
Python's abc module (Abstract Base Classes) allows for such restrictions. Using Python 3:
from abc import ABC, abstractmethod

class AbstractFoo(ABC):
    @abstractmethod
    def foo(self):
        pass

class ConcreteFoo(AbstractFoo):
    pass

if __name__ == "__main__":
    c = ConcreteFoo()
Executing this script will result in the following error:
TypeError: Can't instantiate abstract class ConcreteFoo with abstract methods foo
This error can be resolved by providing an implementation of the foo method:
from abc import ABC, abstractmethod

class AbstractFoo(ABC):
    @abstractmethod
    def foo(self):
        pass

class ConcreteFoo(AbstractFoo):
    def foo(self):
        print('Made it to foo')

if __name__ == "__main__":
    c = ConcreteFoo()
    c.foo()
Executing this script results in the following:
Made it to foo
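To connect this back to the builder-pattern fail safes from the question: the ABC part only forces concrete builders to implement the required steps; restrictions on calling order still need a runtime check. A minimal sketch (the AbstractBuilder/ListBuilder names and the empty-build check are hypothetical, not from the question):

from abc import ABC, abstractmethod

class AbstractBuilder(ABC):
    @abstractmethod
    def add_part(self, part):
        pass

    @abstractmethod
    def build(self):
        pass

class ListBuilder(AbstractBuilder):
    def __init__(self):
        self._parts = []

    def add_part(self, part):
        self._parts.append(part)
        return self  # allow chaining: builder.add_part(...).build()

    def build(self):
        # runtime fail safe: refuse to build before add_part() has been called
        if not self._parts:
            raise ValueError("add_part() must be called before build()")
        return list(self._parts)

if __name__ == "__main__":
    print(ListBuilder().add_part("a").add_part("b").build())  # ['a', 'b']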
I am using the Python Fire module with an abstract parent class and a child class. Not all functions are abstract; some functions do not need to be replicated for each child:
parent class
from abc import ABC, abstractmethod

class Foo(ABC):
    @abstractmethod
    def __init__(self, val=None):
        pass  # some initialisations

    @abstractmethod
    def fun1(self, file=None):
        pass  # some calls

    def fun2(self):
        pass  # non-abstract func... some calls
child class (test.py)
import fire
from foo import Foo

class Child(Foo):
    def __init__(self, val=None):
        super().__init__(val)
        # some initialisations

    def fun1(self, file='path/to/file'):
        pass  # do some stuff

if __name__ == '__main__':
    fire.Fire(Child)
When I run the CLI with python -m test --help, I do not get any COMMANDS, i.e. Fire is not recognising any functions to run. However, it is recognising the parent's global variables and the init flags to set, so why is this happening?
Try passing it the instantiated object instead, like they do here:
fire.Fire(Child())
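A minimal sketch of how the end of test.py would then look, keeping the Child class from the question unchanged:

if __name__ == '__main__':
    # pass an already-constructed instance to Fire instead of the class itself
    fire.Fire(Child())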
In Python 3.7.2 I have two classes that are referencing each other.
I've looked at the question and answers in: Circular (or cyclic) imports in Python
and they don't answer the question of how to keep type hinting and auto-completion when there is a cyclic import. I am asking this question in order to allow PyCharm to auto-complete code.
Class #1 represents an Elasticsearch server and imports class #2 as a member, in order to expose it as internal search capabilities.
Class #2 represents a bunch of search JSON patterns and imports class #1 in order to define the type of the class #1 instance it receives. This allows class #2 to call the GET/POST methods that are defined in class #1.
This looks something like this:
class SimplifiedElasticSearch

from framework.elk.search_patterns import SearchPatterns

class SimplifiedElasticSearch(object):
    ...
    ...
class SearchPatterns

from framework.elk.simplified_elastic_search import SimplifiedElasticSearch

class SearchPatterns(object):
    def __init__(self, es_server: SimplifiedElasticSearch):
        ...
        ...
You can see that both modules import each other, and an instance of class SimplifiedElasticSearch is passed to class SearchPatterns upon __init__.
This results in an import error
ImportError: cannot import name 'SimplifiedElasticSearch' from 'framework.elk.simplified_elastic_search'
To prevent the error, one option is to NOT import class SimplifiedElasticSearch, i.e. remove the line
from framework.elk.simplified_elastic_search import SimplifiedElasticSearch
and still write the code with auto-completion and type hinting for the instance of SimplifiedElasticSearch that I pass to the class SearchPatterns.
How can I keep the auto-completion and type hinting for such a case?
I suggest the following pattern. Using it will allow auto-completion and type hinting to work properly.
simplified_elastic_search.py
import search_patterns

class SimplifiedElasticSearch(object):
    def __init__(self):
        pass

    def print_ses(self):
        print('SimplifiedElasticSearch')

if __name__ == '__main__':
    ses = SimplifiedElasticSearch()
    ses.print_ses()
    sp = search_patterns.SearchPatterns(ses)
    sp.print_sp()
search_patterns.py
import simplified_elastic_search

class SearchPatterns(object):
    def __init__(self, ses):
        self.ses: simplified_elastic_search.SimplifiedElasticSearch = ses

    def print_sp(self):
        print('SearchPatterns-1-----------------')
        self.ses.print_ses()
        print('SearchPatterns-2-----------------')
Note that you cannot import the classes SimplifiedElasticSearch and SearchPatterns using this syntax:
from simplified_elastic_search import SimplifiedElasticSearch
from search_patterns import SearchPatterns
You also cannot declare the type of the ses parameter in the __init__ method of class SearchPatterns, but you can "cast" it this way:
def __init__(self, ses):
    self.ses: simplified_elastic_search.SimplifiedElasticSearch = ses
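An alternative that is not part of this answer, but solves the same problem, is a typing.TYPE_CHECKING guard: the import only happens for type checkers and IDEs, never at runtime, so the circular import disappears while the annotation (written as a string) is kept. A sketch of search_patterns.py using that approach:

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # only evaluated by type checkers / IDEs, never at runtime
    from simplified_elastic_search import SimplifiedElasticSearch

class SearchPatterns(object):
    def __init__(self, ses: 'SimplifiedElasticSearch'):
        self.ses = ses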
I am trying to properly create mocks for a class that has a dependency on a system library. Currently the code makes a connection to a socket for the library when being tested, and I am trying to remove that dependency.
class A:
    SETTING_VARIABLE = "CONFIG_VALUE"

    def __init__(self):
        self.system_connector = library.open(self.SETTING_VARIABLE)
from A import A

class B:
    INSTANCE_OF_A = A()
When class A is instantiated, it uses SETTING_VARIABLE to connect to a system library, which it cannot do during a unit test, so my test suite fails during test collection. The library connector I am using can be configured to run in unit-test mode, but it requires a different configuration to be passed, so in this case SETTING_VARIABLE would need to be set to "TEST_VALUE".
My test class test_B is failing as soon as B is imported when it tries to make a connection to the system library (I have disabled access to the socket for it). How can I set up Python mocking so that I can replace the value of the static variable defined by A?
One thing that I have tried to do from test_B:
from A import A
A.SETTING_VARIABLE = "TEST_VALUE"
This does seem to work; however, is there a cleaner way to do this for unit tests?
Old but gold. Here is how this problem can be solved.
The B class needs to be changed to initialize INSTANCE_OF_A in its constructor. Then you can mock the SETTING_VARIABLE using patch.object.
Complete example with patch.object used as a context manager or as a decorator:
A.py:
class A:
    SETTING_VARIABLE = "CONFIG_VALUE"

    def __init__(self):
        self.system_connector = library.open(self.SETTING_VARIABLE)
B.py:
from A import A

class B:
    def __init__(self):
        self.INSTANCE_OF_A = A()
test_B.py:
import unittest
from unittest.mock import patch

from A import A
from B import B

class BTestCase(unittest.TestCase):
    @patch.object(A, "SETTING_VARIABLE", "TEST_VALUE")
    def test_b_with_decorator(self):
        INSTANCE_OF_B = B()

    def test_b_with_instruction(self):
        with patch.object(A, "SETTING_VARIABLE", "TEST_VALUE"):
            INSTANCE_OF_B = B()
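Compared with assigning A.SETTING_VARIABLE directly from the test module, patch.object has the advantage that the original value is restored automatically when the decorated test or the with block exits, so the change cannot leak into other tests.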
I'm fairly new to Python and currently attempting to write a unit test for a class, but am having some problems with mocking out dependencies. I have two classes, one of which (ClassB) is a dependency of the other (ClassC). The goal is to mock out ClassB and the ArgumentParser class in the test case for ClassC. ClassB looks as follows:
# defined in a.b.b
class ClassB:
    def doStuff(self) -> None:
        # do stuff
        pass

    def doSomethingElse(self) -> None:
        # do something else
        pass
ClassC:
# defined in a.b.c
from .b import ClassB
from argparse import ArgumentParser

class ClassC:
    b: ClassB

    def __init__(self) -> None:
        arguments = self.parseArguments()
        self.b = ClassB()
        self.b.doStuff()

    def close(self) -> None:
        self.b.doSomethingElse()

    def parseArguments(self) -> dict:
        parser = ArgumentParser()
        return parser.parse_args()
And finally, the test case for ClassC:
# inside a.b.test
from unittest import TestCase
from unittest.mock import patch, MagicMock

from a.b.c import ClassC

class ClassCTest(TestCase):
    @patch('a.b.c.ClassB')
    @patch('a.b.c.ArgumentParser')
    def test__init__(self, mock_ArgumentParser, mock_ClassB):
        c = ClassC()
        print(isinstance(c.b, MagicMock))  # outputs False

        # for reference
        print(isinstance(mock_ClassB, MagicMock))  # outputs True
I read in the patch docs that it's important to mock the class in the namespace where it is used, not where it is defined. So that's what I did: I mocked a.b.c.ClassB instead of a.b.b.ClassB (though I have tried both). I also tried importing ClassC inside the test__init__ method body, but this didn't work either.
I prefer not mocking methods of ClassB but rather the entire class to keep the test as isolated as possible.
Environment info:
Python 3.6.1
Any help would be greatly appreciated!
Since I'm new to Python, I didn't know about class attributes. I had a class attribute in ClassC that held ClassB, and an instance attribute set in __init__ that shadowed the class attribute.
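In other words, dropping the class-level b and relying only on the instance attribute set in __init__ lets the patched a.b.c.ClassB be used. A minimal sketch of the corrected class, assuming everything else stays as in the question:

# a.b.c, without the class-level attribute
from .b import ClassB
from argparse import ArgumentParser

class ClassC:
    def __init__(self) -> None:
        arguments = self.parseArguments()
        self.b = ClassB()  # only the instance attribute remains
        self.b.doStuff()

    def close(self) -> None:
        self.b.doSomethingElse()

    def parseArguments(self) -> dict:
        parser = ArgumentParser()
        return parser.parse_args()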
This is in reference to the answer to this question, "Use abc module of python to create abstract classes." (by @alexvassel, and accepted as an answer).
I tried the suggestions, but strangely enough, in spite of following the suggestions to use the abc way, it doesn't work for me. Hence I am posting it as a question here:
Here is my Python code:
from abc import ABCMeta, abstractmethod

class Abstract(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def foo(self):
        print("tst")

a = Abstract()
a.foo()
When I execute this module, here is the output on my console:
pydev debugger: starting (pid: 20388)
tst
as opposed to that accepted answer
>>> TypeError: Can not instantiate abstract class Abstract with abstract methods foo
So what am I doing right or wrong? Why does it work and not fail? I'd appreciate any expert insight into this.
In Python 3 use the metaclass argument when creating the abstract base class:
from abc import ABCMeta, abstractmethod

class Abstract(metaclass=ABCMeta):
    @abstractmethod
    def foo(self):
        print("tst")

a = Abstract()
a.foo()
In Python 2, you must assign the metaclass thusly:
import abc

class ABC(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def foo(self):
        return True

a = ABC()
which raises a TypeError:
Traceback (most recent call last):
File "<pyshell#59>", line 1, in <module>
a = ABC()
TypeError: Can't instantiate abstract class ABC with abstract methods foo
But in Python 3, assigning __metaclass__ as a class attribute does not work as you intend it: the interpreter doesn't consider it an error, it is just a normal attribute like any other, which is why your code does not raise an error. In Python 3, the metaclass is instead passed as a named argument to the class:
import abc

class ABC(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def foo(self):
        return True

a = ABC()
raises the TypeError:
Traceback (most recent call last):
File "main.py", line 11, in
a = ABC()
TypeError: Can't instantiate abstract class ABC with abstract methods foo
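For completeness, Python 3.4+ also provides the abc.ABC helper class, so the same behaviour can be had without spelling out the metaclass (a small sketch, equivalent to the example above):

import abc

class ABC(abc.ABC):  # abc.ABC already carries ABCMeta as its metaclass
    @abc.abstractmethod
    def foo(self):
        return True

a = ABC()  # raises the same TypeError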