mypy reports errors on custom defined builtins - python

I'm using the gettext package to perform translations in a Python application. There's a custom Wrapper class which wraps translatable strings and also installs itself as a custom built-in function:
class Wrapper:
    def __init__(self, message: str):
        self.message = message

    @classmethod
    def install(cls):
        import builtins
        builtins._ = cls

test = _('translate me')
When running mypy over this code I receive the error:
test.py: error: Name "_" is not defined
Is there a way to tell mypy about the custom created built-in function?
In flake8 I was able to configure this with:
[flake8]
builtins = _

If I understand correctly, you need to declare the type of _ so mypy knows it exists:
from typing import Type

_: Type[Wrapper]
test = _('translate me')
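Putting the annotation together with the wrapper class, here is a minimal self-contained sketch: the annotation satisfies mypy, while install() provides _ at runtime through builtins (no value is bound by the annotation itself).

```python
import builtins
from typing import Type


class Wrapper:
    """Minimal stand-in for the translation wrapper from the question."""

    def __init__(self, message: str):
        self.message = message

    @classmethod
    def install(cls):
        # Make the class available everywhere under the built-in name "_"
        builtins._ = cls


# Annotation only: tells mypy that "_" exists and what it is,
# without binding the name at runtime.
_: Type[Wrapper]

Wrapper.install()
test = _('translate me')
print(test.message)  # translate me
```

At runtime the bare annotation does not assign anything, so the name lookup for _ falls through to builtins, where install() has placed the class.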


Attempt to patch a member function yields error AttributeError: 'module' object has no attribute 'object'

I would like to assert from a unit test, TestRunner.test_run, that some deeply nested function Prompt.run_cmd is called with the string argument "unique cmd". My setup basically resembles the following:
# Module application/engine/prompt.py
class Prompt:
    def run_cmd(self, input):
        pass

# Module: application/scheduler/runner.py
class Runner:
    def __init__(self):
        self.prompt = application.engine.prompt.Prompt()

    def run(self):
        self.prompt.run_cmd("unique cmd")

# Module tests/application/scheduler/runner_test.py
class TestRunner(unittest.TestCase):
    ...
    def test_run(self):
        # calls Runner.run
        # Objective: assert that Prompt.run_cmd is called with the argument "unique cmd"
        # Current attempt below:
        with mock.patch(application.engine.prompt, "run_cmd") as mock_run_cmd:
            pass
Unfortunately my attempts to mock Prompt.run_cmd fail with the error message:
AttributeError: 'module' object has no attribute 'object'
If you wanted to patch a concrete instance, you could easily do this using mock.patch.object and wraps (see, for example, this question).
If you want to patch your function for all instances instead, you indeed have to use mock.patch. In this case you can only mock the class itself, as mocking the method would not work (because it is used on instances, not classes), so you cannot use wraps here (at least I don't know a way to do this).
What you could do instead is derive your own class from Prompt and overwrite the method to collect the calls yourself. You could then patch Prompt by your own implementation. Here is a simple example:
class MyPrompt(Prompt):
    calls = []

    def run_cmd(self, input):
        super().run_cmd(input)
        # we just add a string to the call list - this could be more sophisticated
        self.__class__.calls.append(input)

class TestRunner(unittest.TestCase):
    def test_run(self):
        with mock.patch("application.engine.prompt.Prompt", MyPrompt) as mock_prompt:
            runner = Runner()
            runner.run()
            self.assertEqual(["unique cmd"], mock_prompt.calls)
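As a side note: if wraps is not needed and you only want to record the calls, patching the method on the class with mock.patch.object does affect all instances, since the instances look the method up on the class. A self-contained sketch, with Prompt and Runner collapsed into one file for illustration:

```python
import unittest
from unittest import mock


class Prompt:
    def run_cmd(self, input):
        pass


class Runner:
    def __init__(self):
        self.prompt = Prompt()

    def run(self):
        self.prompt.run_cmd("unique cmd")


class TestRunner(unittest.TestCase):
    def test_run(self):
        # Replace run_cmd on the class itself, so every instance sees the mock
        with mock.patch.object(Prompt, "run_cmd") as mock_run_cmd:
            Runner().run()
        mock_run_cmd.assert_called_once_with("unique cmd")
```

The trade-off is that the original run_cmd is not executed at all, which is exactly what wraps would otherwise provide.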

Python: set self variables from another class using cls?

I wonder if it is possible to set the variables of one class from a different class using cls?
The story behind it:
I'm writing tests for different purposes, but noticed that one part of the setup is the same as in an already existing class. So I would like to do the setUp via the existing one.
The original code:
class TestBase(unittest.TestCase):
    def setUp(self):
        self.handler = txt.getHandler(hcu.handler3())
        self.curves = self.handler.curves()
        self.arguments = (getInit())
        self.ac = self.connect2DB(self.arguments)
        self.au = AutoUtils()
This has worked well so far.
Now in my TestClient I'd like to make use of that:
from .testsDB import TestBase as tb

class TestClient(unittest.TestCase):
    def setUp(self):
        tb.setUp()
And modified in the TestBase the setUp to the following:
@classmethod
def setUp(cls):
    cls.handler = txt.getHandler(hcu.handler3())
    cls.graph = cls.handler.curves()
    cls.argv = (getInit())
    cls.ac = cls.connect2DB(cls.arguments)
    cls.au = AutoUtils()
But I'm getting an error as soon as I use one of the attributes in the TestClient class:
def test_Duplicates(self):
    self.testDB = self.ac.connect(self.ac.client, self.arguments[4])
With the error:
In test_Duplicate (Curves.tests_client.TestClient):
Traceback (most recent call last):
  File "/home/qohelet/Curves/tests_client.py", line 49, in test_Duplicate
    self.testDB = self.ac.connect(self.ac.client, self.arguments[4])
AttributeError: 'TestClient' object has no attribute 'ac'
Is what I'm trying actually possible?
EDIT:
After writing this and seeing the answers, I did another review. Yes, indeed, there is a circular issue.
TestBase has the function connect2DB, which is executed in setUp.
If it refers to itself (as in the original), it's fine.
If I replace self with cls, it tries to execute TestClient.connect2DB in setUp - which does not exist. So it would require self again, as putting connect2DB into TestClient is not an option.
How do I solve that?
Surely your new class should just inherit the setUp()?
from .testsDB import TestBase as tb

class TestClient(tb):
    def test_Duplicates(self):
        self.testDB = self.ac.connect(self.ac.client, self.arguments[4])
The whole point of inheritance is that you don't modify what you inherit from. Your new class should just make use of what is supplied. That is why inheritance is sometimes called programming by difference.
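A runnable sketch of the inheritance approach, with the database and handler machinery replaced by hypothetical stand-ins (the original helpers txt, hcu, connect2DB and AutoUtils are not shown in the question):

```python
import unittest


class TestBase(unittest.TestCase):
    def setUp(self):
        # Stand-ins for the handler/DB objects created in the question's setUp
        self.ac = "db-connection"
        self.arguments = ["a", "b", "c", "d", "arg4"]


class TestClient(TestBase):
    # No setUp here: the inherited TestBase.setUp runs before each test,
    # so the attributes it creates are available on self.
    def test_duplicates(self):
        self.assertEqual("arg4", self.arguments[4])
        self.assertEqual("db-connection", self.ac)
```

Because setUp stays an ordinary instance method, connect2DB can keep using self and no classmethod conversion is needed.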

Dependency injection in imported python module

My Python module uses some functions from another module, but I have several implementations of that module's interface. How do I point out which one to use?
Simple example:
A.py:
import B

def say_hi():
    print "Message: " + B.greeting()

main.py:
import A(B=my_B_impl)
A.say_hi()

my_B_impl.py:
def greeting():
    return "Hallo!"

output:
Message: Hallo!
In Python this can be done most elegantly with inheritance:
A.py:
import B

class SayHi(object):
    b = B

    def say_hi(self):
        print "Message: " + self.b.greeting()

my_B_impl.py:
class AlternativeHi(object):
    def greeting(self):
        return "Hallo!"

main.py:
from A import SayHi
from my_B_impl import AlternativeHi

class MyHi(SayHi):
    b = AlternativeHi()

a = MyHi()
a.say_hi()

output:
Message: Hallo!
You can also use the factory pattern to avoid explicitly declaring the classes AlternativeHi and MyHi:
A.py:
from B import greeting

class SayHi(object):
    def __init__(self, *args, **kwargs):
        self.greeting = greeting

    def say_hi(self):
        print "Message: " + self.greeting()

def hi_factory(func):
    class CustomHi(SayHi):
        def __init__(self, *args, **kwargs):
            super(CustomHi, self).__init__(*args, **kwargs)
            self.greeting = func
    return CustomHi
my_B_impl.py:
def greeting():
    return "Hallo!"

main.py:
from A import hi_factory
from my_B_impl import greeting

a = hi_factory(greeting)()
a.say_hi()
What you ask is not directly possible. There is no parameterisation capability built in to Python's module system. If you think about it, it's not clear how such a proposal ought to work: if modules A and B both import module M, but they supply different parameters, which parameter is used when M is imported? Is it imported twice? What would that mean for module-level configuration (as in logging)? It gets worse if a third module C attempts to import M without parameters. Also, the "open-world" idea that you could override any import statement from the outside violates the language-design principle that "the code you wrote is the code that ran".
Other languages have incorporated parameterised modules in a variety of ways (compare Scala's object model, ML's modules and signatures, and - stretching it - C++'s templates), but it's not clear that such a feature would be a good fit for Python. (That said, you could probably hack something resembling parameterised modules using importlib if you were determined and masochistic enough.)
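For the determined, the closest well-known hack does not need importlib at all: registering a replacement module in sys.modules before the dependent module is imported makes any later import of that name resolve to the injected object, because the import system consults sys.modules first. A sketch, where the module name "B" is taken from the question's example:

```python
import sys
import types

# Build a replacement "B" module at runtime (no B.py needs to exist on disk).
fake_b = types.ModuleType("B")
fake_b.greeting = lambda: "Hallo!"

# Register it before anything imports B; subsequent imports hit the
# sys.modules cache and never touch the filesystem.
sys.modules["B"] = fake_b

import B  # resolves to the injected module
print(B.greeting())  # Hallo!
```

This is global, fragile, and order-dependent (it must run before the first real import of B), which illustrates exactly why the dispatch mechanisms below are usually preferable.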
Python does have very powerful and flexible capabilities for dynamic dispatch, however. Python's standard, day-to-day features like functions, classes, parameters and overriding provide the basis for this support.
There are lots of ways to cut the cake on your example of a function whose behaviour is configurable by its client.
A function parameterised by a value:
def say_hi(greeting):
    print("Message: " + greeting)

def main():
    say_hi("Hello")
A class parameterised by a value:
class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting

    def say_hi(self):
        print("Message: " + self.greeting)

def main():
    Greeter("Hello").say_hi()
A class with a virtual method:
class Greeter:
    def say_hi(self):
        print("Message: " + self.msg())

class MyGreeter(Greeter):
    def msg(self):
        return "Hello"
A function parameterised by a function:
def say_hi(greeting):
    print("Message: " + greeting())

def make_greeting():
    return "Hello"

def main():
    say_hi(make_greeting)
There are more options (I'm avoiding the Java-y example of objects invoking other objects) but you get the idea. In each of these cases, the selection of the behaviour (the passing of the parameter, the overriding of the method) is decoupled from the code which uses it and could be put in a different file. The right one to choose depends on your situation (though here's a hint: the right one is always the simplest one that works).
Update: in a comment you mention that you'd like an API which sets up the dependency at the module-level. The main problem with this is that the dependency would be global - modules are singletons, so anyone who imports the module has to use the same implementation of the dependency.
My advice is to provide an object-oriented API with "proper" (per-instance) dependency injection, and provide top-level convenience functions which use a (configurable) "default" set-up of the dependency. Then you have the option of not using the globally-configured version. This is roughly how asyncio does it.
# flexible object with dependency injection
class Greeter:
    def __init__(self, msg):
        self.msg = msg

    def say_hi(self):
        print("Message: " + self.msg)

# set up a default configuration of the object to be used by the high-level API
_default_greeter = Greeter("Hello")

def configure(msg):
    global _default_greeter
    _default_greeter = Greeter(msg)

# delegate to whatever default has been configured
def say_hi():
    _default_greeter.say_hi()

How to define a static utility class correctly in Python

I wanted to write a utility class to read from a config file in python.
import os, ConfigParser

class WebPageTestConfigUtils:
    configParser = ConfigParser.RawConfigParser()
    configFilePath = (os.path.join(os.getcwd(), 'webPageTestConfig.cfg'))

    @staticmethod
    def initializeConfig():
        configParser.read(self.configFilePath)

    @staticmethod
    def getConfigValue(key):
        return configParser.get('WPTConfig', key)

def main():
    WebPageTestConfigUtils.initializeConfig()
    print WebPageTestConfigUtils.getConfigValue('testStatus')

if __name__ == '__main__':
    main()
Upon execution this throws the error:
NameError: global name 'configParser' is not defined
Why is Python not able to recognize the static member?
In general, it is almost always better to use @classmethod over @staticmethod.
Then, configParser is an attribute of the cls argument:
class WebPageTestConfigUtils(object):
    configParser = ConfigParser.RawConfigParser()
    configFilePath = (os.path.join(os.getcwd(), 'webPageTestConfig.cfg'))

    @classmethod
    def initializeConfig(cls):
        cls.configParser.read(cls.configFilePath)

    @classmethod
    def getConfigValue(cls, key):
        return cls.configParser.get('WPTConfig', key)
Also note your usage of self is replaced by cls.
Class and instance attributes do not participate in the variable resolution process within a method. If you want to access them, you need to use ordinary attribute lookup syntax:
WebPageTestConfigUtils.configParser.read(WebPageTestConfigUtils.configFilePath)
That said, you shouldn't be using a class at all for this. You seem to be used to a language where everything has to be in a class. Python doesn't work that way; you should just be using a module with ordinary functions in it.
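A sketch of that module-based approach, using plain functions and a module-level parser instead of a class. The module name wpt_config is hypothetical, and configparser is the Python 3 spelling of Python 2's ConfigParser:

```python
# wpt_config.py - module-level functions instead of a utility class
import os
import configparser

_parser = configparser.RawConfigParser()


def initialize_config(path=None):
    # Default to the same file name the question uses
    _parser.read(path or os.path.join(os.getcwd(), 'webPageTestConfig.cfg'))


def get_config_value(key):
    return _parser.get('WPTConfig', key)
```

Callers then simply do `import wpt_config; wpt_config.initialize_config()`, which gives the same "single shared state" behaviour the class was emulating, because modules are singletons.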
If you want to create a "static" variable in your file, create it before the class definition. In Python, such module-level constants are conventionally given UPPERCASE names.
For your example you can use:
CONFIGPARSER = ConfigParser.RawConfigParser()
CONFIGFILEPATH = (os.path.join(os.getcwd(), 'webPageTestConfig.cfg'))
...
...
@staticmethod
def initializeConfig():
    CONFIGPARSER.read(CONFIGFILEPATH)
...
...

python 2.7 isinstance fails at dynamically imported module class

I'm currently writing a tiny API to support extending module classes. Users should be able to just write their class name in a config and it gets used in our program. The contract is that the class' module has a function called create(**kwargs) that returns an instance of our base module class, and is placed in a special folder. But the isinstance check fails as soon as the import is made dynamically.
modules are placed in lib/services/name
module base class (in lib/services/service)
class Service:
    def __init__(self, **kwargs):
        # some initialization
        pass

example module class (in lib/services/ping)
class PingService(Service):
    def __init__(self, **kwargs):
        Service.__init__(self, **kwargs)
        # uninteresting init

def create(kwargs):
    return PingService(**kwargs)
importing function
import sys
from lib.services.service import Service

def doimport(clazz, modPart, kw, class_check):
    path = "lib/" + modPart
    sys.path.append(path)
    mod = __import__(clazz)
    item = mod.create(kw)
    if class_check(item):
        print "im happy"
    return item
calling code
class_check = lambda service: isinstance(service, Service)
s = doimport("ping", "services", {}, class_check)
print s

from lib.services.ping import create
pingService = create({})
if isinstance(pingService, Service):
    print "why this?"
What am I doing wrong?
Here is a small example zipped up - just extract and run test.py without arguments:
zip example
The problem was in your ping.py file. When importing it dynamically, the line from service import Service loads service.py a second time as a top-level module named service, distinct from lib.services.service. You then get two separate Service class objects, so PingService inherits from one while your isinstance check uses the other, and the check fails. Changing the line to the full path, from lib.services.service import Service, makes both sides refer to the same class. This is also why adding lib/services to sys.path could not make the inheritance work: the same source file imported under two names always produces two different class objects.
Also, I am using imp.load_source, which seems more robust:
import os, imp

def doimport(clazz, modPart, kw, class_check):
    path = os.path.join('lib', modPart, clazz + '.py')
    mod = imp.load_source(clazz, path)
    item = mod.create(kw)
    if class_check(item):
        print "im happy"
    return item
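On Python 3, the imp module is deprecated (and removed in 3.12); the equivalent loader uses importlib. A sketch under the same contract, loading a module from an explicit file path so the base class is only ever imported once under its canonical name:

```python
import importlib.util


def load_module_from_path(name, path):
    # Build a module spec from the file location, create the module
    # object, and execute the file's code in it.
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
```

doimport would then call load_module_from_path(clazz, path) in place of imp.load_source(clazz, path), with no change to the rest of the logic.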
