How to test for the second parameter of a mocked method? - python

I am trying to mock the sendEmails() method and would like to test that it is called with the "test@test.com" email address as its second parameter.
@mock.patch('apps.dbank.management.commands.optin_invites.OptinBase.sendEmails')
def test_x_send_emails(self, send_emails_mock):
    oi = OptinInvitesX()
    oi.compute(True, "test@test.com")
    self.assertTrue(send_emails_mock.assert_called_with(???, test_email_address="test@test.com"))
I could utilise assert_called_with, but I don't care about the first parameter for this test case. Is there a way to say "accept anything" for the first parameter?

You are describing the basic usage of mock.ANY:
Sometimes you may need to make assertions about some of the arguments in a call to mock, but either not care about some of the arguments or want to pull them individually out of call_args and make more complex assertions on them.
To ignore certain arguments you can pass in objects that compare equal to everything. Calls to assert_called_with() and assert_called_once_with() will then succeed no matter what was passed in.
So, in your case, you could use:
# only asserting on 'test_email_address' argument:
send_emails_mock.assert_called_with(mock.ANY, test_email_address="test@test.com")
Note that you don't really want to use self.assertTrue on that line. The mock method is its own assertion.
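Put together, a minimal sketch of the test (the patch target, OptinInvitesX and compute() are copied from the question and assumed to exist in your project):
from unittest import mock  # on Python 2, use: import mock

@mock.patch('apps.dbank.management.commands.optin_invites.OptinBase.sendEmails')
def test_x_send_emails(self, send_emails_mock):
    oi = OptinInvitesX()
    oi.compute(True, "test@test.com")
    # assert_called_with raises AssertionError on its own if the call
    # does not match, so no self.assertTrue wrapper is needed.
    send_emails_mock.assert_called_with(mock.ANY, test_email_address="test@test.com")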

Related

Transparently passing through a function with a variable argument list

I am using Python RPyC to communicate between two machines. Since the link may be prone to errors I would like to have a generic wrapper function which takes a remote function name plus that function's parameters as its input, does some status checking, calls the function with the parameters, does a little more status checking and then returns the result of the function call. The wrapper should have no knowledge of the function, its parameters/parameter types or the number of them, or the return value for that matter; the user has to get that right. It should just pass them through transparently.
I get the getattr(conn.root, function)() pattern to call the function, but my Python expertise runs out at populating the parameters. I have read various posts on the use of *args and **kwargs, in particular this one, which suggests that it is either difficult or impossible to do what I want. Is that correct and, if so, might there be a scheme which would work if I, say, ensured that all the function parameters were keyword parameters?
I do own both ends of this interface (the caller and the called) so I could arrange to dictionary-ise all the function parameters but I'd rather not make my API too peculiar if I could possibly avoid it.
Edit: the thing being called, at the remote end of the link, is a class with very ordinary methods, e.g.:
def exposed_a(self)
def exposed_b(self, thing1)
def exposed_c(self, thing1=None)
def exposed_d(self, thing1=DEFAULT_VALUE1, thing2=None)
def exposed_e(self, thing1, thing2, thing3=DEFAULT_VALUE1, thing4=None)
def exposed_f(self, thing1=None, thing2=None)
...where the types of each argument (and the return values) could be string, dict, number or list.
And it is indeed trivial; my Google-fu had simply failed me in finding the answer. In the hope of helping anyone else who is inexperienced in Python and is having a bad Google day:
One simply takes *args and **kwargs as parameters and passes them directly on, with the asterisks attached. So in my case, to do my RPyC pass-through, where conn is the RPyC connection:
def my_passthru(conn, function_name, *function_args, **function_kwargs):
    # Do a check of something or other here
    return_value = getattr(conn.root, function_name)(*function_args, **function_kwargs)
    # Do another check here
    return return_value
Then, for example, a call to my exposed_e() method above might be:
return_value = my_passthru(conn, "e", thing1, thing2, thing3)
(the exposed_ prefix being added automagically by RPyC in this case).
And of course one could put a try: / except ConnectionRefusedError: around the getattr() call in my_passthru() to generically catch the case where the connection has dropped underneath RPyC, which was my main purpose.
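A minimal sketch of that error handling, assuming you simply want to report the dropped connection and re-raise (the message text is illustrative):
def my_passthru(conn, function_name, *function_args, **function_kwargs):
    # Do a check of something or other here
    try:
        return getattr(conn.root, function_name)(*function_args, **function_kwargs)
    except ConnectionRefusedError:
        # The link has dropped underneath RPyC; handle it generically here.
        print("lost connection while calling %s()" % function_name)
        raise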

Python inspect: Get arguments of specific decorator

I need a script that given a function returns the arguments of a specific decorator.
Imagine the following function:
@decorator_a
@decorator_b(41, 42, 43)
@decorator_c(45)
def foo(self):
    return 'bar'
I need a function that given foo returns the arguments of decorator_b - something like [41,42,43]. Is there a way to achieve this?
After a few hours of trying out different stuff I figured out a feasible solution:
inspect.getclosurevars(foo.__wrapped__).nonlocals
If you know the argument names of the decorator you are trying to inspect, you can check for their existence in the nonlocals dict. If they're not there, check one __wrapped__ layer higher, and so on.
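A sketch of that walk; the helper and the argument names ('arg1', 'arg2', 'arg3') are illustrative, and it assumes the decorators use functools.wraps so that __wrapped__ is set:
import inspect

def find_decorator_args(func, names):
    # Walk down the __wrapped__ chain, looking for a closure whose
    # nonlocals contain the argument names we expect decorator_b to hold.
    while func is not None:
        nonlocals = inspect.getclosurevars(func).nonlocals
        if all(name in nonlocals for name in names):
            return [nonlocals[name] for name in names]
        func = getattr(func, '__wrapped__', None)
    return None

# e.g. find_decorator_args(foo, ('arg1', 'arg2', 'arg3'))  ->  [41, 42, 43]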

How to mock function with arguments spanning lines

I'm patching in my test (python2.7):
args[1].return_value.getMarkToMarketReportWithSummary.return_value = ([], {})
and I can see the expected mocked method with the correct return value when debugging, and calling it directly works fine. But the method is actually called with multiple arguments spanning several lines:
rows, summary = manager.getMarkToMarketReportWithSummary(
    portfolios, report_data_map, account,
    ...
    include_twrr=self.__include_twrr)
and when the test runner calls the method it fails, returning a plain MagicMock instead of the expected value above. It seems to be because of the arguments, which turn the recorded method name into a string containing the \n and the args, or something like that. What is this? Is it an onion? Because it is making me cry.
Evaluating it after that gives one more attribute, this time with the line number embedded, because, you know, rubbing salt in my eyes is its goal.
:_(

How to pass parameters in a Python Dispatch Table

I am trying to construct a dispatch the following way:
def run_nn(type=None):
    print type, 'nn'
    return

def run_svm(type=None):
    print type, 'svm'
    return

action = {'nn' : run_nn(type=None),
          'svm' : run_svm(type=None),}
I want the function to be executed only when called with something like:
action.get('nn',type='foo')
With expectation it to print:
foo nn
But it breaks giving:
TypeError: get() takes no keyword arguments
What's the right way to do it?
Furthermore, two functions run_nn() and run_svm() were executed without even being called. I don't want that. How can I avoid it?
You're calling the functions while building the dictionary. You should instead put the function objects in the dict without calling them. And afterwards, get the appropriate function from the dict and call it with the keyword argument.
What you want is:
action = {'nn' : run_nn,
          'svm' : run_svm,}
...
action.get('nn')(type='foo') # get function object from dict and then call it.
I'd suggest you use action['nn'] over action.get('nn'), since you're not specifying any default callable in the get method; get returns None when the key is missing. A KeyError is much more intuitive than a TypeError: 'NoneType' object is not callable in this scenario.
On another note, you can drop those return statements as you aren't actually returning anything. Your function will still return without them.
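Putting it together, a small runnable sketch (shown with Python 3 print(); the question's original code used Python 2 print statements):
def run_nn(type=None):
    print(type, 'nn')

def run_svm(type=None):
    print(type, 'svm')

action = {'nn': run_nn,
          'svm': run_svm}

action['nn'](type='foo')    # prints: foo nn
action['svm'](type='bar')   # prints: bar svm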
BTW, I have the feeling your function(s) want to change behavior depending on type (although your type is counter-intuitive as it is always a string). In any case, you may have a look at functools.singledispatch. That'll transform your function(s) into a single-dispatch generic function with the possibility to create several overloaded implementations.
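For illustration, a hypothetical singledispatch sketch; the function and argument names are made up, and dispatch happens on the argument's type rather than on a string key:
from functools import singledispatch

@singledispatch
def run(spec):
    raise NotImplementedError("unsupported spec: %r" % (spec,))

@run.register(str)
def _(spec):
    # e.g. run('nn') -> look the model up by name
    print('running model named', spec)

@run.register(dict)
def _(spec):
    # e.g. run({'kind': 'svm'}) -> build the model from a config dict
    print('running model configured by', sorted(spec))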
Finally, although type does make for a good argument name, you will run into problems when you need to use the builtin type in your function.

python argparse: how to use other parsed argument as parameter at calling function in type keyword?

I am trying to create a user interface using the argparse module.
One of the arguments needs to be converted, so I use the type keyword:
add_argument('positional', ..., type=myfunction)
and there is another optional argument:
add_argument('-s', dest='switch', ...)
in addition, I have
parsed_argument=parse_args()
However, in myfunction, I hope I can use an additional parameter to control the behavior, which is the optional argument above, i.e.
def myfunction(positional, switch=parsed_argument.switch):
    ...
How can I achieve that?
Simple answer: You can’t. The arguments are parsed separately, and there is no real guarantee that some order is maintained. Instead of putting your logic into the argument type, just store it as a string and do your stuff after parsing the command line:
parser.add_argument('positional')
parser.add_argument('-s', '--switch')
args = parser.parse_args()
myfunction(args.positional, switch=args.switch)
I'm not sure I did understand correctly what you want to achieve, but if what you want to do is something that looks like:
myprog.py cmd1 --switcha
myprog.py cmd2 --switchb
yes you can, you need to use subparsers. I wrote a good example of it in a little PoC for accessing stackoverflow's API from the CLI. The whole logic is a bit long to reproduce in full here, but the main idea is:
create your parser using parser = argparse.ArgumentParser(...)
create the subparsers using subparsers = parser.add_subparsers(...)
add the commands with something like subparsers.add_parser('mycommand', help="It's only a command").set_defaults(func=mycmd_fn), where mycmd_fn takes args as a parameter, giving you all the switches that were issued with the command!
the difference from what you ask is that you'll need one function per command, not one function with the positional argument as its first argument. But you can get around that easily by having mycmd_fn be something like: mycmd_fn = lambda *args: myfunction('mycmd', *args)
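A condensed, hypothetical sketch of those steps (command names, switches, and functions are illustrative):
import argparse

def cmd1_fn(args):
    print('cmd1', args.switcha)

def cmd2_fn(args):
    print('cmd2', args.switchb)

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()

p1 = subparsers.add_parser('cmd1', help='first command')
p1.add_argument('--switcha', action='store_true')
p1.set_defaults(func=cmd1_fn)

p2 = subparsers.add_parser('cmd2', help='second command')
p2.add_argument('--switchb', action='store_true')
p2.set_defaults(func=cmd2_fn)

args = parser.parse_args()
args.func(args)   # dispatch to the function registered for the chosen command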
HTH
From the documentation:
type= can take any callable that takes a single string argument and returns the converted value:
Python functions like int and float are good examples of what a type function should be like. int takes a string and returns a number; if it can't convert the string it raises a ValueError. Your function could do the same. Raising argparse.ArgumentTypeError is another option. argparse isn't going to pass any optional arguments to it. Look at the code for argparse.FileType to see a more elaborate example of a custom type.
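For instance, a minimal custom type callable along those lines (the name myfunction comes from the question; the conversion it performs here is illustrative):
import argparse

def myfunction(value):
    # type= callables receive a single string and return the converted value.
    try:
        return int(value, 0)     # accepts '42', '0x2a', '0o52', ...
    except ValueError:
        raise argparse.ArgumentTypeError("%r is not a valid number" % value)

parser = argparse.ArgumentParser()
parser.add_argument('positional', type=myfunction)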
action is another place where you can customize behavior. The documentation has an example of a custom Action. Its arguments include the namespace, the object where the parser collects the values it will return to you. This object contains any arguments that have already been set, so in theory your switch value will be available there - if it occurs first.
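A sketch of such a custom Action, assuming the --switch option from the question; note it only sees --switch if that option was given before the positional on the command line, and the conversion it applies is purely illustrative:
import argparse

class ConvertAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # 'namespace' holds everything parsed so far.
        if getattr(namespace, 'switch', None):
            values = values.upper()          # illustrative conversion
        setattr(namespace, self.dest, values)

parser = argparse.ArgumentParser()
parser.add_argument('-s', '--switch')
parser.add_argument('positional', action=ConvertAction)
print(parser.parse_args(['-s', 'on', 'hello']).positional)   # HELLO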
There are many SO answers that give custom Actions.
Subparsers are another good way of customizing the handling of arguments.
Often it is better to check for the interaction of arguments after parse_args. In your case 'switch' could occur after the positional and still have effect. And parser.error() lets you use the argparse error mechanism (e.g. displaying the usage).
