I have the below context manager inside a class method, which I would like to mock for unit testing.
def load_yaml_config(self) -> dict:
    """
    Load the config based on the arguments provided.

    Returns: dict
        dictionary which will be used for configuring the logger, handlers, etc.
    """
    with open(self.yaml_path, 'r') as config_yaml:
        return yaml.safe_load(config_yaml.read())
How could I achieve it?
EDIT:
As @chepner suggested (I can't accept his/her answer since it was via comment), the best way to go seems to be to use unittest's mock_open functionality.
This way, I can simply go:
import unittest.mock as um

with um.patch('builtins.open', um.mock_open(read_data=YAML_TEST)):
    h = MyClass().load_yaml_config()
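For completeness, here is roughly what that looks like as a full test; the MyClass constructor arguments and the YAML_TEST contents are placeholders, since the original class isn't shown in full:

import unittest
import unittest.mock as um

YAML_TEST = 'stuff: other_stuff'  # placeholder config text

class LoadYamlConfigTest(unittest.TestCase):
    def test_load_yaml_config(self):
        # Patch builtins.open so load_yaml_config never touches the disk.
        with um.patch('builtins.open', um.mock_open(read_data=YAML_TEST)):
            config = MyClass('any/path.yaml').load_yaml_config()
        self.assertEqual(config, {'stuff': 'other_stuff'})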
If you wanted to refactor this code to be able to test the safe_load part without having to actually open a file or patch builtins.open, you could do:
def load_yaml_config(self) -> dict:
    """
    Load the config based on the arguments provided.

    Returns: dict
        dictionary which will be used for configuring the logger, handlers, etc.
    """
    with open(self.yaml_path, 'r') as config_yaml:
        return self._load_yaml_config(config_yaml.read())

def _load_yaml_config(self, yaml_text: str) -> dict:
    return yaml.safe_load(yaml_text)
and then in your test:
TEST_YAML_DATA = """
stuff:
other_stuff
"""
def test_load_yaml_config():
assert WhateverMyClassIs()._load_yaml_config(TEST_YAML_DATA) == {
'stuff': 'other_stuff'
}
Modify to use actual appropriate YAML formatting and the correct expected dict output.
Note that all this is really testing is yaml.safe_load (which should have its own unit tests already) and the fact that your code calls it. Other than a typo in a variable name (which is easier to catch with a linter or static type analyzer), it's hard to imagine what type of bug this test might catch/prevent.
Practically speaking I probably wouldn't bother covering this function in a unit test at all, but would instead try to have some sort of larger integration test (using a real file) that involved loading a config as part of some larger test scenario.
The easy way
Just create the appropriate yaml file in your testing code. But you probably don't want that, since you're making this post.
A hack with mocking
You can override open with your mock in the module scope:
# test_YourClass.py

builtin_open = open

class open:
    def __init__(self, *args, **kwargs):
        pass

    def __enter__(self):
        # Return the mock itself so `with open(...) as f: f.read()` works.
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        pass

    def read(self):
        return 'hardcoded file contents for testing'

# Test here

open = builtin_open
This code is just a general idea, I haven't run it. It might require some additional work, such as parameterizing the mock file contents.
Dependency injection
The "proper" way is to unhardcode open() call in the class and inject your context manager, I suppose. It's up to you. I personally don't like injecting everything just for the purpose of unit testing.
Related
I am playing around with Python metaclasses and trying to write a metaclass that changes or adds methods dynamically for its subclasses.
For example, here is a metaclass whose purpose is to find async methods in the subclass (whose names also end with the string "_async") and add an additional "synchronized" version of each method:
import asyncio

class AsyncClientMetaclass(type):
    @staticmethod
    def async_func_to_sync(func):
        # run_synchronized is the asker's helper that runs a coroutine to
        # completion and returns its result (e.g. via an event loop).
        return lambda *_args, **_kwargs: run_synchronized(func(*_args, **_kwargs))

    def __new__(mcs, *args, **kwargs):
        cls = super().__new__(mcs, *args, **kwargs)
        _, __, d = args
        for key, value in d.items():
            if asyncio.iscoroutinefunction(value) and key.endswith('_async'):
                sync_func_name = key[:-len('_async')]
                if sync_func_name in d:
                    continue
                if isinstance(value, staticmethod):
                    value = value.__func__
                setattr(cls, sync_func_name, mcs.async_func_to_sync(value))
        return cls
# usage
class SleepClient(metaclass=AsyncClientMetaclass):
    async def sleep_async(self, seconds):
        await asyncio.sleep(seconds)
        return f'slept for {seconds} seconds'

c = SleepClient()
res = c.sleep(2)
print(res)  # prints "slept for 2 seconds"
This example works great. The only problem is that the Python linter warns about using the non-async method that the metaclass has created (for the example above, the warning is Unresolved attribute reference 'sleep' for class 'SleepClient').
For now, I am adding pylint: disable comments whenever I use a sync method created by the metaclass, but I am wondering whether there is any way to add a custom linter rule for the metaclass, so the linter will know those methods will be created dynamically.
And do you think there is a better way to achieve this than using a metaclass?
Thanks!
As put by Chepner: no static code analyser can know about these methods - not linters, nor type-annotation checking tools like MyPy - unless you give them a hint.
Maybe there is one way out: static type annotators will consume a parallel ".pyi" stub file, placed side by side with the corresponding ".py" file, that can list class interfaces - and, I may be wrong, but whatever they find there should supersede what the tool "sees" in the actual .py file.
So, you could instrument your metaclass to, aside from generating the actual methods, render their signatures (and the signatures of the "real" methods and attributes of the class) as source code, and record those in the proper ".pyi" file. You will have to run this code once before the linter can find its way - but it is the only workaround I can think of.
In other words, to be clear:
make a mechanism, called by the metaclass, that will check the existence and timestamp of the appropriate ".pyi" file for the classes it is modifying, and generate it if needed. By checking the timestamp, or by generating the file only when some "--build" variable is active, there should be no runtime penalty, and static type checkers (and possibly some linters) should be pleased.
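As a very rough, untested sketch of that mechanism (the write_stub_for name, the stub rendering, and the timestamp logic are all assumptions here, not a finished recipe):

import inspect
import os

def write_stub_for(cls):
    """Render a minimal .pyi stub next to the class's module so static
    tools can see the dynamically generated methods."""
    module = inspect.getmodule(cls)
    if module is None or not getattr(module, '__file__', None):
        return
    py_path = module.__file__
    pyi_path = os.path.splitext(py_path)[0] + '.pyi'
    # Regenerate only when the source is newer than the existing stub.
    if (os.path.exists(pyi_path)
            and os.path.getmtime(pyi_path) >= os.path.getmtime(py_path)):
        return
    lines = ['class %s:' % cls.__name__]
    for name, member in vars(cls).items():
        if callable(member) and not name.startswith('_'):
            try:
                sig = str(inspect.signature(member))
            except (TypeError, ValueError):
                sig = '(...)'
            lines.append('    def %s%s: ...' % (name, sig))
    with open(pyi_path, 'w') as f:
        f.write('\n'.join(lines) + '\n')

# The metaclass would call write_stub_for(cls) at the end of __new__,
# ideally only when a "--build" style flag is active.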
I have a class which decorates some methods using a decorator from another library. Specifically, the class subclasses flask-restful resources, decorates the http methods with httpauth.HTTPBasicAuth().login_required(), and does some sensible defaults on a model service.
On most subclasses I want the decorator applied; therefore I'd rather remove it than add it in the subclasses.
My thought is to have a private method which does the operations and a public method which is decorated. The effects of decoration can be avoided by overriding the public method to call the private one and not decorating this override. Mocked example below.
I am curious to know if there's a better way to do this. Is there a shortcut for 'cancelling decorators' in python that gives this effect?
Or can you recommend a better approach?
Some other questions have suitable answers for this, e.g. Is there a way to get the function a decorator has wrapped?. But my question is about broader design - I am interested in any pythonic way to run the operations in decorated methods without the effects of decoration. E.g. my example is one such way, but there may be others.
def auth_required(fn):
    def new_fn(*args, **kwargs):
        print('Auth required for this resource...')
        fn(*args, **kwargs)
    return new_fn

class Resource:
    name = None

    @auth_required
    def get(self):
        self._get()

    def _get(self):
        print('Getting %s' % self.name)

class Eggs(Resource):
    name = 'Eggs'

class Spam(Resource):
    name = 'Spam'

    def get(self):
        self._get()
        # super(Spam, self)._get()

eggs = Eggs()
spam = Spam()

eggs.get()
# Auth required for this resource...
# Getting Eggs

spam.get()
# Getting Spam
Flask-HTTPAuth uses functools.wraps in the login_required decorator:
def login_required(self, f):
    @wraps(f)
    def decorated(*args, **kwargs):
        ...
From Python 3.2, as this calls update_wrapper, you can access the original function via __wrapped__:
To allow access to the original function for introspection and other purposes (e.g. bypassing a caching decorator such as lru_cache()), this function automatically adds a __wrapped__ attribute to the wrapper that refers to the function being wrapped.
If you're writing your own decorators, as in your example, you can also use @wraps to get the same functionality (as well as keeping the docstrings, etc.).
See also Is there a way to get the function a decorator has wrapped?
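A quick illustration of the idea with a homemade @wraps-based decorator (the names here are invented for the example, not Flask-HTTPAuth's internals):

from functools import wraps

def login_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        print('checking credentials...')
        return f(*args, **kwargs)
    return decorated

@login_required
def get():
    return 'payload'

get()              # runs the auth check, then the original function
get.__wrapped__()  # bypasses the decorator entirely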
Another common option is to have the decorated function keep a copy of the original function that can be accessed:
def auth_required(fn):
    def new_fn(*args, **kwargs):
        print('Auth required for this resource...')
        fn(*args, **kwargs)
    new_fn.original_fn = fn
    return new_fn
Now, for any function that has been decorated, you can access its original_fn attribute to get a handle to the original, un-decorated function.
In that case, you could define some type of dispatcher that either makes plain function calls (when you are happy with the decorator behavior) or makes calls to thing.original_fn when you prefer to avoid the decorator behavior.
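For illustration, one possible shape of such a dispatcher, built on the original_fn convention and the auth_required decorator above (fetch_report is a made-up example function):

def call(fn, *args, skip_decorators=False, **kwargs):
    # Fall back to fn itself if it was never decorated.
    target = getattr(fn, 'original_fn', fn) if skip_decorators else fn
    return target(*args, **kwargs)

@auth_required
def fetch_report():
    print('Fetching report')

call(fetch_report)                        # auth check, then the fetch
call(fetch_report, skip_decorators=True)  # just the fetch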
Your proposed method is also a valid way to structure it, and whether my suggestion is "better" depends on the rest of the code you're dealing with, who needs to read it, and other kinds of trade-offs.
I am curious to know if there's a better way to do this. Is there a shortcut for 'cancelling decorators' in python that gives this effect?
Use the undecorated library. It digs through all the decorators and returns just the original function. The docs should be self-explanatory, basically you just call: undecorated(your_decorated_function)
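A minimal usage sketch, assuming the package's undecorated() entry point (your_decorated_function is a placeholder):

from undecorated import undecorated

# Peels off the decorator layers and returns the original function.
original = undecorated(your_decorated_function)
original()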
How do I make a python "constructor" "private", so that the objects of its class can only be created by calling static methods? I know there are no C++/Java like private methods in Python, but I'm looking for another way to prevent others from calling my constructor (or other method).
I have something like:
class Response(object):
    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret

    @staticmethod
    def from_json(source):
        # parse json
        pass
and would like the following behavior:
r = Response() # should fail
r = Response.from_json(source) # should be allowed
The reason for using static methods is that I always forget what arguments my constructors take - say JSON or an already parsed object. Even then, I sometimes forget about the static methods and call the constructor directly (not to mention other people using my code). Documenting this contract won't help with my forgetfulness. I'd rather enforce it with an assertion.
And contrary to some of the commenters, I don't think this is unpythonic - "explicit is better than implicit", and "there should be only one way to do it".
How can I get a gentle reminder when I'm doing it wrong? I'd prefer a solution where I don't have to change the static methods, just a decorator or a single line drop-in for the constructor would be great. A la:
class Response(object):
    def __init__(self):
        assert not called_from_outside()
I think this is what you're looking for - but it's kind of unpythonic as far as I'm concerned.
class Foo(object):
    def __init__(self):
        raise NotImplementedError()

    def __new__(cls):
        bare_instance = object.__new__(cls)
        # you may want to have some common initialisation code here
        return bare_instance

    @classmethod
    def from_whatever(cls, arg):
        instance = cls.__new__(cls)
        instance.arg = arg
        return instance
Given your example (from_json and from_xml), I assume you're retrieving attribute values from either a json or xml source. In this case, the pythonic solution would be to have a normal initializer and call it from your alternate constructors, i.e.:
class Foo(object):
    def __init__(self, arg):
        self.arg = arg

    @classmethod
    def from_json(cls, source):
        arg = get_arg_value_from_json_source(source)
        return cls(arg)

    @classmethod
    def from_xml(cls, source):
        arg = get_arg_value_from_xml_source(source)
        return cls(arg)
Oh and yes, about the first example: it will prevent your class from being instantiated in the usual way (calling the class), but client code will still be able to call Foo.__new__(Foo), so it's really a waste of time. Also, it will make unit testing harder if you cannot instantiate your class in the most ordinary way... and quite a few of us will hate you for this.
I'd recommend turning the factory methods into module-level factory functions, then hiding the class itself from users of your module.
def one_constructor(source):
    return _Response(...)

def another_constructor(source):
    return _Response(...)

class _Response(object):
    ...
You can see this approach used in modules like re, where match objects are only constructed through functions like match and search, and the documentation doesn't actually name the match object type. (At least, the 3.4 documentation doesn't. The 2.7 documentation incorrectly refers to re.MatchObject, which doesn't exist.) The match object type also resists direct construction:
>>> type(re.match('',''))()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create '_sre.SRE_Match' instances
but unfortunately, the way it does so relies upon the C API, so it's not available to ordinary Python code.
Good discussion in the comments.
For the minimal use case you describe,
class Response(object):
    def __init__(self, construct_info=None):
        if construct_info is None:
            raise ValueError("must create instance using from_xml or from_json")
        # etc

    @staticmethod
    def from_xml(source):
        info = {}  # parse info into here
        return Response(info)

    @staticmethod
    def from_json(source):
        info = {}  # parse info into here
        return Response(info)
It can be gotten around by a user who passes in a hand-constructed info, but at that point they'll have to read the code anyway and the static method will provide the path of least resistance. You can't stop them, but you can gently discourage them. It's Python, after all.
This might be achievable through metaclasses, but is heavily discouraged in Python. Python is not Java. There is no first-class notion of public vs private in Python; the idea is that users of the language are "consenting adults" and can use methods however they like. Generally, functions that are intended to be "private" (as in not part of the API) are denoted by a single leading underscore; however, this is mostly just convention and there's nothing stopping a user from using these functions.
In your case, the Pythonic thing to do would be to default the constructor to one of the available from_foo methods, or even to create a "smart constructor" that can find the appropriate parser for most cases. Or, add an optional keyword arg to the __init__ method that determines which parser to use.
An alternative API (and one I've seen far more in Python APIs) if you want to keep it explicit for the user would be to use keyword arguments:
class Foo(object):
    def __init__(self, *, xml_source=None, json_source=None):
        # from_xml / from_json stand in for whatever parsing helpers you use
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        elif xml_source:
            from_xml(xml_source)
        elif json_source:
            from_json(json_source)
        else:
            raise ValueError("One source must be given.")
Here using 3.x's * to signify keyword-only arguments, which helps enforce the explicit API. In 2.x this can be recreated with **kwargs.
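A rough sketch of that 2.x-style equivalent, enforcing keyword-only arguments by hand with **kwargs (names mirror the example above):

class Foo(object):
    def __init__(self, **kwargs):
        xml_source = kwargs.pop('xml_source', None)
        json_source = kwargs.pop('json_source', None)
        if kwargs:
            raise TypeError('unexpected arguments: %r' % kwargs)
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        if not (xml_source or json_source):
            raise ValueError("One source must be given.")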
Naturally, this doesn't scale well to lots of arguments or options, but there are definitely cases where this style makes sense. (I'd argue bruno desthuilliers probably has it right for this case, from what we know, but I'll leave this here as an option for others).
The following is similar to what I ended up doing. It is a bit more general than what was asked in the question.
I made a function called guard_call that checks whether the current method is being called from a method of one of the given classes.
This has multiple uses. For example, I used the Command Pattern to implement undo and redo, and used this to ensure that my objects were only ever modified by command objects, and not random other code (which would make undo impossible).
In this concrete case, I place a guard in the constructor ensuring only Response methods can call it:
class Response(object):
    def __init__(self):
        guard_call([Response])

    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret
For this specific case, you could probably make this a decorator and remove the argument, but I didn't do that here.
Here is the rest of the code. It's been a long time since I tested it, and I can't guarantee that it works in all edge cases, so beware. It is also still Python 2. Another caveat is that it is slow, because it uses inspect. So don't use it in tight loops or when speed is an issue, but it might be useful when correctness is more important than speed.
Some day I might clean this up and release it as a library - I have a couple more of these functions, including one that asserts you are running on a particular thread. You may sneer at the hackishness (it is hacky), but I did find this technique useful to smoke out some hard-to-find bugs, and to ensure my code still behaves during refactorings, for example.
from __future__ import print_function

import inspect

# http://stackoverflow.com/a/2220759/143091
def get_class_from_frame(fr):
    args, _, _, value_dict = inspect.getargvalues(fr)
    # we check whether the first parameter of the frame's function is
    # named 'self'
    if len(args) and args[0] == 'self':
        # in that case, 'self' will be referenced in value_dict
        instance = value_dict.get('self', None)
        if instance:
            # return its class
            return getattr(instance, '__class__', None)
    # return None otherwise
    return None

def guard_call(allowed_classes, level=1):
    stack_info = inspect.stack()[level + 1]
    frame = stack_info[0]
    method = stack_info[3]
    calling_class = get_class_from_frame(frame)
    # print ("calling class:", calling_class)
    if calling_class:
        for klass in allowed_classes:
            if issubclass(calling_class, klass):
                return
    allowed_str = ", ".join(klass.__name__ for klass in allowed_classes)
    filename = stack_info[1]
    line = stack_info[2]
    stack_info_2 = inspect.stack()[level]
    protected_method = stack_info_2[3]
    protected_frame = stack_info_2[0]
    protected_class = get_class_from_frame(protected_frame)
    if calling_class:
        origin = "%s:%s" % (calling_class.__name__, method)
    else:
        origin = method
    print()
    print("In %s, line %d:" % (filename, line))
    print("Warning, call to %s:%s was not made from %s, but from %s!" %
          (protected_class.__name__, protected_method, allowed_str, origin))
    assert False

r = Response()               # should fail
r = Response.from_json("...")  # should be allowed
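For reference, the decorator variant mentioned above might look roughly like this. It is an untested sketch built on guard_call as defined above; the zero-argument callable lets a class name itself inside its own body, and the stack level may need adjusting:

from functools import wraps

def guarded(get_allowed):
    """Decorator form of the guard; get_allowed is a zero-argument
    callable so a class can refer to itself before its definition completes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            # level=1 points guard_call at wrapper's caller.
            guard_call(get_allowed(), level=1)
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator

class Document(object):
    text = ''

    @guarded(lambda: [Command])
    def set_text(self, text):
        self.text = text

class Command(object):
    def execute(self, doc):
        doc.set_text('hello')  # allowed: the call comes from a Command method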
I have some relatively complex integration tests in my Python code. I simplified them greatly with a custom decorator and I'm really happy with the result. Here's a simple example of what my decorator looks like:
def specialTest(fn):
    def wrapTest(self):
        # do some important stuff
        pass
    return wrapTest
Here's what a test may look like:
class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
This works great and is executed by PyCharm's test runner without a problem. However, when I run a test from the command line using Nose, it skips any test with the @specialTest decorator.
I have tried to name the decorator testSpecial, so it matches the default rules, but then my fn parameter doesn't get passed.
How can I get Nose to execute those test methods and treat the decorator as it is intended?
SOLUTION
Thanks to madjar, I got this working by restructuring my code to look like this, using functools.wraps and changing the name of the wrapper:
from functools import wraps

def specialTest(fn):
    @wraps(fn)
    def test_wrapper(self, *args, **kwargs):
        # do some important stuff
        pass
    return test_wrapper

class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
If I remember correctly, nose loads tests based on their names (functions whose names begin with test_). In the snippet you posted, you do not copy the __name__ attribute of the original function to your wrapper function, so the name of the returned function is wrapTest and nose decides it's not a test.
An easy way to copy the attributes of the function to the new one is to use functools.wraps.
I want to add two variables to my subclass, which inherits from unittest.TestCase,
like I have:
import unittest

class mrp_repair_test_case(unittest.TestCase):
    def __init__(self, a=None, b=None, methodName=['runTest']):
        unittest.TestCase.__init__(self)
        self.a = a
        self.b = b

    def test1(self):
        ..........
        .......

def runtest():
    mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(mrp_repair_test_case)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)
How can I achieve this? I am getting this error:
ValueError: no such test method in <class 'mrp_repair.unit_test.test.mrp_repair_test_case'>: runTest
thanks
At first glance, it looks like you need to create an instance of mrp_repair_test_case. Your current line:
mrp_repair_test_case(a=10,b=20)
doesn't actually do anything.
Try (not tested):
def runtest():
    m = mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(m)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)
This assumes you've set up 'out' as a stream already.
Edit:
By the way, is there any reason you're not using a setUp method to set these values? That would be normal best practice. Looking at the documentation of loadTestsFromTestCase, it looks like it will only accept the class itself, not an instance of it, which means you're rather working against the design of the unittest module.
Edit 2:
In response to your further information, I would actually set your uid and cursor values separately at module level before calling the tests. I'm not a huge fan of globals normally, but if I'm understanding you correctly these values will be (a) read-only and (b) always the same for the same customer, which avoids most of the normal pitfalls of using them.
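A sketch of the conventional pattern those two edits point at, with a and b standing in for whatever per-run values (uid, cursor, ...) you need:

import unittest

A = 10  # set once at module level, before the run starts
B = 20

class mrp_repair_test_case(unittest.TestCase):
    def setUp(self):
        # runs before every test method; no __init__ override required
        self.a = A
        self.b = B

    def test1(self):
        self.assertEqual((self.a, self.b), (10, 20))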
Edit 3:
To answer your edit: if you really want to use __init__ you probably can, but you will have to roll your own loadTestsFromTestCase alternative, and possibly your own TestSuite (you'll have to check the internals of how it works). As I said above, you'll be working against the existing design of the module - to the extent that if you decide to do your testing that way, it might be easier to roll your own solution completely from scratch than to use unittest. Amend: I just checked, and you'd definitely have to roll your own version of TestSuite, as the existing one creates a new instance of the TestCase class for each test.