I am using the Python unittest module to test a script that takes a command-line argument. The argument is a file name, which is then passed into a function like so:
file_name = str(sys.argv[1])
file = open(file_name)
result = main_loop(file)
print(result)
My test is set up like so:
class testMainFile(unittest.TestCase):
    def test_main_loop(self):
        file = open('file_name.json')
        result = main_file.main_loop(file)
        self.assertEqual(result, 'Expected Result')

if __name__ == '__main__':
    unittest.main()
When I run the test I get an "IndexError: list index out of range".
I tried passing the argument when running the test but to no avail. How do I run my test without error?
I think you have a couple of options here. First, check the documentation for unittest.mock.patch, because I think you can get away with:
from unittest.mock import patch

@patch('sys.argv', ['mock.py', 'test-value'])
def test_main_loop(self):
    ...
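For context, a complete test using this approach might look like the following sketch; the file name, module name, and expected result are taken from the question, so adjust them to your actual code:

import unittest
from unittest.mock import patch

import main_file  # the module under test, named as in the question

class TestMainFile(unittest.TestCase):
    @patch('sys.argv', ['mock.py', 'file_name.json'])
    def test_main_loop(self):
        # main_loop receives an already-open file, exactly as in the question
        with open('file_name.json') as file:
            result = main_file.main_loop(file)
        self.assertEqual(result, 'Expected Result')

if __name__ == '__main__':
    unittest.main()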
Options for fun:
One would be simply to override sys.argv next to your call:
def test_main_loop(self):
    file = open('file_name.json')
    original_argv = sys.argv
    sys.argv = ['mock argv', 'my-test-value']
    result = main_file.main_loop(file)
    sys.argv = original_argv
    self.assertEqual(result, 'Expected Result')
Second would be to create a simple wrapper for your function:

import sys
from typing import Callable

def set_sys_argv(func: Callable):
    def wrapper(*args, **kwargs):
        sys.argv = ['mock.py', 'my_test_value']
        return func(*args, **kwargs)
    return wrapper
and use it with your test function:

@set_sys_argv
def test_main_loop(self):
    ...
We can improve it slightly and make it more generic by writing a decorator that accepts the values to mock:
def set_sys_argv(*argv):
    def _decorator(func: Callable):
        def wrapper(*args, **kwargs):
            sys.argv = list(argv)
            return func(*args, **kwargs)
        return wrapper
    return _decorator
and use it similarly to patch:

@set_sys_argv('mock.py', 'test-value')
def test_main_loop(self):
    ...
Third would be to create a context manager, like so:

class ReplaceSysArgv:
    def __enter__(self):
        self._argv = sys.argv
        sys.argv = ['mock', 'my-test-value']
        return self

    def __exit__(self, *args):
        sys.argv = self._argv
and use it with your code:

def test_main_loop(self):
    file = open('file_name.json')
    with ReplaceSysArgv():
        result = main_file.main_loop(file)
    self.assertEqual(result, 'Expected Result')
You have to push the arguments onto sys.argv before retrieving them (assuming your code pulls from command-line arguments; it's unclear to me where in your test you're using them, but I digress).

So do something like this first:

import sys
sys.argv = ['mock_filename.py', 'json_file.json']
# ... continue with rest of program / test
Say I have a flag --debug/--no-debug defined for the base command. This flag will affect the behavior of many operations in my program. Right now I find myself passing this flag as a function parameter all over the place, which doesn't seem elegant. Especially when I need to access this flag in a deep call stack, I'll have to add this parameter to every single function on the stack.
I can instead create a global variable is_debug and set its value at the beginning of the command function that receives the value of this flag. But this doesn't seem elegant to me either.
Is there a better way to make some option values globally accessible using the Click library?
There are two ways to do so, depending on your needs. Both of them end up using the click Context.
Personally, I'm a fan of Option 2 because then I don't have to modify function signatures (and I rarely write multi-threaded programs). It also sounds more like what you're looking for.
Option 1: Pass the Context to the function
Use the click.pass_context decorator to pass the click context to the function.
Docs:
Usage: https://click.palletsprojects.com/en/7.x/commands/#nested-handling-and-contexts
API: https://click.palletsprojects.com/en/7.x/api/#click.pass_context
# test1.py
import click

@click.pass_context
def some_func(ctx, bar):
    foo = ctx.params["foo"]
    print(f"The value of foo is: {foo}")

@click.command()
@click.option("--foo")
@click.option("--bar")
def main(foo, bar):
    some_func(bar)

if __name__ == "__main__":
    main()
$ python test1.py --foo 1 --bar "bbb"
The value of foo is: 1
Option 2: click.get_current_context()
Pull the context directly from the current thread via click.get_current_context(). Available starting in Click 5.0.
Docs:
Usage: https://click.palletsprojects.com/en/7.x/advanced/#global-context-access
API: https://click.palletsprojects.com/en/7.x/api/#click.get_current_context
Note: This only works if you're in the current thread (the same thread as what was used to set up the click commands originally).
# test2.py
import click

def some_func(bar):
    c = click.get_current_context()
    foo = c.params["foo"]
    print(f"The value of foo is: {foo}")

@click.command()
@click.option("--foo")
@click.option("--bar")
def main(foo, bar):
    some_func(bar)

if __name__ == "__main__":
    main()
$ python test2.py --foo 1 --bar "bbb"
The value of foo is: 1
To build on top of Option 2 given by @dthor, I wanted to make this more seamless, so I combined it with the trick of modifying a function's global scope and came up with the decorator below:
import functools

import click

def with_click_params(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        g = func.__globals__
        sentinel = object()
        ctx = click.get_current_context()
        oldvalues = {}
        for param in ctx.params:
            oldvalues[param] = g.get(param, sentinel)
            g[param] = ctx.params[param]
        try:
            return func(*args, **kwargs)
        finally:
            for param in ctx.params:
                if oldvalues[param] is sentinel:
                    del g[param]
                else:
                    g[param] = oldvalues[param]
    return wrapper
You would use it like this (borrowing the sample from @dthor's answer):

@with_click_params
def some_func():
    print(f"The value of foo is: {foo}")
    print(f"The value of bar is: {bar}")

@click.command()
@click.option("--foo")
@click.option("--bar")
def main(foo, bar):
    some_func()

if __name__ == "__main__":
    main()
Here it is in action:
$ python test2.py --foo 1 --bar "bbb"
The value of foo is: 1
The value of bar is: bbb
Caveats:
The function can only be called from a click-originated call stack, but this is a conscious choice (i.e., you are making assumptions about the variable injection). The click unit-testing guide should be useful here.
The function is no longer thread safe.
It is also possible to be explicit on the names of the params to inject:
def with_click_params(*params):
    def wrapper(func):
        @functools.wraps(func)
        def inner_wrapper(*args, **kwargs):
            g = func.__globals__
            sentinel = object()
            ctx = click.get_current_context()
            oldvalues = {}
            for param in params:
                oldvalues[param] = g.get(param, sentinel)
                g[param] = ctx.params[param]
            try:
                return func(*args, **kwargs)
            finally:
                for param in params:
                    if oldvalues[param] is sentinel:
                        del g[param]
                    else:
                        g[param] = oldvalues[param]
        return inner_wrapper
    return wrapper
@with_click_params("foo")
def some_func():
    print(f"The value of foo is: {foo}")

@click.command()
@click.option("--foo")
@click.option("--bar")
def main(foo, bar):
    some_func()

if __name__ == "__main__":
    main()
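Running this the same way as before (assuming the file is again saved as test2.py), only foo is injected, so only its value prints:

$ python test2.py --foo 1 --bar "bbb"
The value of foo is: 1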
In my main, I have a function that raises an error and a class that tracks errors in a list inside the class itself. In other words, instead of just calling the function, I would like to give this function to a class method, which then "logs" the error in a list and suppresses it.
Here is my problem: this function has input arguments. When I hand over my function to the class method, I would like to hand over the inputs, too. What happens is that the function is executed before reaching the class method, so the class method can't suppress the error that happens in the function.
In the code below, I set the variable silent=True, so it should not raise an error (because of the try/except clause within the method). Unfortunately, the code raises a TypeError which comes from the function.
Any advice would be much appreciated.
PS: I am not looking for a decorator solution :)
Here is the class with the class method that can suppress the error:
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
Here is the function with an error:

def transformation_with_error(app1, app2):
    # DO STUFF HERE with inputs
    result = str(app1) + str(app2)
    print(result)
    print('TYPE ERROR here')
    raise TypeError
    return result
Here is the main routine:

if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
    print('-- end transformation')
    print(error_tracker.list)
If I understand your issue, in your main routine
error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
calls transformation_with_error before entering error_tracker.track_func. This happens simply because you are indeed calling transformation_with_error at that point. If you want error_tracker.track_func to call transformation_with_error, you have to pass the latter as an argument, as you would for a callback.
For example:
def test(var1, var2):
    print("{} {}".format(var1, var2))

def callFn(func, *vars):
    func(*vars)

callFn(test, "foo", "bar")
outputs foo bar
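As a side note (my addition, not part of the original answer): functools.partial from the standard library expresses the same "defer the call" idea without a hand-written wrapper:

from functools import partial

def test(var1, var2):
    print("{} {}".format(var1, var2))

# partial freezes the arguments without calling test yet;
# the call only happens when deferred() is invoked.
deferred = partial(test, "foo", "bar")
deferred()  # prints: foo bar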
Thx VincentRG
That was it
Just for the record, below are the changes I made:
(side note: I also added **kwargs, to be able to deal with default values)
thx mate
Class changes:

class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False, *args, **kwargs):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
Change in the call:

if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error, silent=True, app1='AA', app2='BB')
    print('-- end transformation')
    print(error_tracker.list)
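One caveat with this signature (my observation, not part of the original fix): because silent is declared before *args, a stray positional argument meant for func would be captured by silent instead. Declaring silent as keyword-only avoids that, for example:

class ErrorTracker:
    def __init__(self):
        self.list = list()

    # Placing *args before silent makes silent keyword-only,
    # so positional arguments can never bind to it by accident.
    def track_func(self, func, *args, silent=False, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append(e)
            if not silent:
                raise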
In the following example:
import subprocess
from unittest import mock

class MyArgs():
    cmd = ''
    cmd_args = ''
    cmd_path = ''

    def __init__(self):
        pass

    def set_args(self, c, a, p):
        self.cmd = c
        self.cmd_args = a
        self.cmd_path = p

    def get_command(self):
        return [self.cmd, self.cmd_args, self.cmd_path]

class Example():
    args = MyArgs()

    def __init__(self):
        pass

    def run_ls_command(self):
        print('run_ls_command command:' + str(self.get_command()))
        p = subprocess.Popen(self.get_command(), stdout=subprocess.PIPE)
        out, err = p.communicate()
        print(out)  # to verify the mock is working, should output 'output' if the mock is called
        return err

    def set_args(self, c, a, p):
        # this would be more complicated logic in
        # future and likely not just one method, this is a MWE
        self.args.set_args(c, a, p)

    def get_command(self):
        return self.args.get_command()

@mock.patch.object(subprocess, 'Popen', autospec=True)
def test_subprocess_popen(mock_popen):
    mock_popen.return_value.returncode = 0
    mock_popen.return_value.communicate.return_value = ("output", "Error")
    e = Example()
    e.set_args('ls', '-al', '/bin/foobar')
    e.run_ls_command()
    # todo: validate arguments called by the popen command for the test

test_subprocess_popen()
The longer-term goal is to validate more complicated subprocess.Popen commands, which will be constructed by more manipulations on the Example object (though the concept will be the same as in this example).
What I would like to do is somehow analyze the arguments sent to the p = subprocess.Popen(self.get_command(), stdout=subprocess.PIPE) call.
However, I am not sure how to get those arguments - I know my mock is being called because my output matches what the mock should produce.
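For what it's worth, a mock records every call it receives, so one way to inspect the Popen arguments (a sketch, not from the original post) is via assert_called_once_with or call_args inside the test:

# inside test_subprocess_popen, after e.run_ls_command():

# Option 1: assert the exact call, including keyword arguments.
mock_popen.assert_called_once_with(['ls', '-al', '/bin/foobar'],
                                   stdout=subprocess.PIPE)

# Option 2: unpack the recorded call for finer-grained checks.
args, kwargs = mock_popen.call_args
assert args[0] == ['ls', '-al', '/bin/foobar']
assert kwargs['stdout'] == subprocess.PIPE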
So I'm working on an application that, upon import of certain records, requires some fields to be recalculated. To prevent a database read for each check, there is a caching decorator, so the database read is only performed once every n seconds during import. The trouble comes with building test cases. The following works, but it has an ugly sleep in it.
# The decorator I need to patch
@cache_function_call(2.0)
def _latest_term_modified():
    return PrimaryTerm.objects.latest('object_modified').object_modified

# The 2.0 sets the TTL of the decorator. So I need to switch out
# self.ttl for this decorated function before
# this test. Right now I'm just using a sleep, which works
@mock.patch.object(models.Student, 'update_anniversary')
def test_import_on_term_update(self, mock_update):
    self._import_student()
    latest_term = self._latest_term_mod()
    latest_term.save()
    time.sleep(3)
    self._import_student()
    self.assertEqual(mock_update.call_count, 2)
The decorator itself looks like the following:
import time
from functools import wraps

class cache_function_call(object):
    """Cache an argument-less function call for 'ttl' seconds."""
    def __init__(self, ttl):
        self.cached_result = None
        self.timestamp = 0
        self.ttl = ttl

    def __call__(self, func):
        @wraps(func)
        def inner():
            now = time.time()
            if now > self.timestamp + self.ttl:
                self.cached_result = func()
                self.timestamp = now
            return self.cached_result
        return inner
I have attempted to set the decorator before the import of the models:
decorators.cache_function_call = lambda x: x
import models
But even at the top of the file, Django still initializes the models before running my tests.py, and the function still gets decorated with the caching decorator instead of my lambda/noop one.
What's the best way to write this test so I don't have a sleep? Can I set the ttl of the decorator before running my import somehow?
You can change the decorator class just a little bit.
At module level in decorators.py, set the global:
BAILOUT = False
and in your decorator class, change:
def __call__(self, func):
    @wraps(func)
    def inner():
        now = time.time()
        if BAILOUT or now > self.timestamp + self.ttl:
            self.cached_result = func()
            self.timestamp = now
        return self.cached_result
    return inner
Then in your tests set decorators.BAILOUT = True, and, hey presto!-)
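Applied to the test from the question, that could look like the following sketch (it assumes the decorator lives in a decorators module, and it restores the flag so other tests still exercise the cache):

import decorators

@mock.patch.object(models.Student, 'update_anniversary')
def test_import_on_term_update(self, mock_update):
    decorators.BAILOUT = True  # bypass the TTL cache for this test
    try:
        self._import_student()
        latest_term = self._latest_term_mod()
        latest_term.save()
        self._import_student()  # no sleep needed any more
        self.assertEqual(mock_update.call_count, 2)
    finally:
        decorators.BAILOUT = False  # restore caching for other tests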
I'm trying to profile an instance method, so I've done something like:
import cProfile

class Test():
    def __init__(self):
        pass

    def method(self):
        cProfile.runctx("self.method_actual()", globals(), locals())

    def method_actual(self):
        print("Run")

if __name__ == "__main__":
    Test().method()
But now problems arise when I want "method" to return a value that is computed by "method_actual". I don't really want to call "method_actual" twice.
Is there another way, something that can be thread safe? (In my application, the cProfile data are saved to datafiles named by one of the args, so they don't get clobbered and I can combine them later.)
I discovered that you can do this:
prof = cProfile.Profile()
retval = prof.runcall(self.method_actual, *args, **kwargs)
prof.dump_stats(datafn)
The downside is that it's undocumented.
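Applied to the Test class from the question, that might look like this sketch (profile.out is an assumed file name):

import cProfile

class Test():
    def method(self):
        prof = cProfile.Profile()
        # runcall invokes method_actual exactly once and
        # passes its return value straight through.
        retval = prof.runcall(self.method_actual)
        prof.dump_stats('profile.out')
        return retval

    def method_actual(self):
        print("Run")
        return 'some result'  # now method() can return this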
An option for any arbitrary code:
import cProfile, pstats, sys
pr = cProfile.Profile()
pr.enable()
my_return_val = my_func(my_arg)
pr.disable()
ps = pstats.Stats(pr, stream=sys.stdout)
ps.print_stats()
Taken from https://docs.python.org/2/library/profile.html#profile.Profile
I was struggling with the same problem and used a wrapper function to get around direct return values. Instead of

cP.runctx("a=foo()", globals(), locals())
I create a wrapper function:

def wrapper(b):
    b.append(foo())
and profile the call to the wrapper function
b = []
cP.runctx("wrapper(b)", globals(), locals())
a = b[0]
extracting the result of foo's computation from the out param (b) afterwards.
I created a decorator:
import cProfile
import functools
import pstats
def profile(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            retval = func(*args, **kwargs)
        finally:
            profiler.disable()
            with open('profile.out', 'w') as profile_file:
                stats = pstats.Stats(profiler, stream=profile_file)
                stats.print_stats()
        return retval
    return inner
Decorate your function or method with it:
@profile
def somefunc(...):
    ...
Now that function will be profiled.
Alternatively, if you'd like the raw, unprocessed profile data (e.g. because you want to run the excellent graphical viewer RunSnakeRun on it), then:
import cProfile
import functools

def profile(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            retval = func(*args, **kwargs)
        finally:
            profiler.disable()
            profiler.dump_stats('profile.out')
        return retval
    return inner
This is a minor improvement on several of the other answers on this page.
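If you later want to read that raw dump in human-readable form (my addition, not part of the original answer), pstats can load the file directly:

import pstats

# Load the raw dump and print the 20 most expensive entries,
# sorted by cumulative time.
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(20)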
I think @detly's .runcall() is basically the best answer, but for completeness, I just wanted to take @ThomasH's answer and make it function independent:
def wrapper(b, f, *myargs, **mykwargs):
    try:
        b.append(f(*myargs, **mykwargs))
    except TypeError:
        print('bad args passed to func.')

# Example run
def func(a, n):
    return n * a + 1

b = []
cProfile.runctx("wrapper(b, func, 3, n=1)", globals(), locals())
a = b[0]
print('a, ', a)