Passing custom Python objects to nosetests - python

I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is, what is the best strategy for passing Python objects into nose tests?
Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:
testlib
\-testmoda
\-testmodb
\-testmodc
In some cases a test module (e.g. testmoda) is nothing but test_something1(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() methods. The cool thing is that nose easily finds both.
Most of those test functions are request factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc.
Currently we don't use nose, instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutia is painful.
I'd like to take full advantage of nose's beautiful discovery functionality while passing in a connection object of my choosing.
Thanks in advance!
Rob
P.S. The connection objects are not pickle-able. :(

Could you use a factory to create the connections, then have functions like test_something1() (taking no arguments) use the factory to get a connection?

As far as I can tell, there is no easy way to simply pass custom objects to Nose.
However, as Matt pointed out there are some viable workarounds to achieve similar results.
Basically, do this:
Set up a data dictionary as a package-level global
Add custom objects to that dictionary
Create some factory functions that return those custom objects, or create new ones if none are present/suitable
Refactor the existing testlib\testmod* modules to use the factory
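A minimal sketch of those steps, with hypothetical names (ServerConnection stands in for the real, non-picklable connection class):

```python
# testlib/__init__.py -- sketch of the package-level factory workaround.
# ServerConnection is a stand-in for the real (non-picklable) connection.

class ServerConnection:
    def request(self, op):
        return f"result of {op}"

_state = {}  # package-level data dictionary shared by all test modules

def get_connection():
    """Return the shared connection, creating it on first use."""
    if "cnn" not in _state:
        _state["cnn"] = ServerConnection()
    return _state["cnn"]

# A test module (e.g. testlib/testmoda.py) then needs no arguments at all,
# so nose can discover and run it directly:
def test_something1():
    cnn = get_connection()
    assert cnn.request("ping") == "result of ping"
```

Because every test module calls the same factory, all discovered tests share one connection without any driver script plumbing it through.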

Related

Using Fixtures vs passing method as argument

I'm just learning Python and pytest and came across fixtures. Pardon the basic question, but I'm wondering what the advantage of using fixtures is, given that you can already pass a function as an argument, for example:
    def method1():
        return 'hello world'

    def method2(methodToRun):
        result = methodToRun()
        return result

    method2(method1)
What would be the advantage of passing a @pytest.fixture object as an argument instead?
One difference is that fixtures pass the result of calling the function, not the function itself. That still doesn't answer why you'd want to use pytest.fixture instead of just calling the function manually, so I'll list a couple of reasons.
One reason is the global availability. After you write a fixture in conftest.py, you can use it in your whole test suite just by referencing its name and avoid duplicating it, which is nice.
In case your fixture returns a mutable object, pytest also makes a fresh call to it for each test (with the default function scope), so tests using the same fixture can't affect each other through shared state. If pytest didn't do that by default, you'd have to do it by hand.
A big one is that the plugin system of pytest uses fixtures to make its functionality available. So if you are a web dev and want to have a mock-server for your tests, you just install pytest-localserver and now adding httpserver, httpsserver, and smtpserver arguments to your test functions will inject the fixtures from the library you just installed. This is incredibly convenient and intuitive, in particular when compared to injection mechanisms in other languages.
The bottom line is that it is useful to have a single way to include dependencies in your test suits, and pytest chooses a fixture mechanism that magically binds itself to function signatures. So while it really is no different from manually inserting the argument, the quality of life things pytest adds through it make it worth it.
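To make the difference concrete, here is a minimal sketch of the same helper rewritten as a fixture (in a real suite the fixture would live in conftest.py so the whole test suite can request it by name):

```python
import pytest

def make_greeting():
    return 'hello world'

# The fixture registers make_greeting's *result* under the name "greeting".
@pytest.fixture
def greeting():
    return make_greeting()

# Any test that lists the fixture as a parameter receives the value,
# not the function -- pytest does the call and the binding for you.
def test_greeting(greeting):
    assert greeting == 'hello world'
```

Compare this with method2(method1) above: the test never imports or calls anything explicitly, it just names what it needs.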
Fixtures are a way of centralizing your test variables and avoiding redundancy. If you are comfortable with the concept of Dependency Injection, the advantages are basically the same: pytest will automatically bind your parameters to the available fixtures, so you build tests more quickly by simply asking for what you need.
Also, fixtures enable you to easily parametrize all your tests at once, which avoids some cumbersome code if you were to do it by hand. (More info in the documentation: https://docs.pytest.org/en/latest/fixture.html#parametrizing-fixtures)
Some references:
Official documentation: https://docs.pytest.org/en/latest/fixture.html
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection

monkey-patching/decorating/wrapping an entire python module

I would like, given a python module, to monkey patch all functions, classes and attributes it defines. Simply put, I would like to log every interaction a script I do not directly control has with a module I do not directly control. I'm looking for an elegant solution that will not require prior knowledge of either the module or the code using it.
I found several high-level tools that help with wrapping, decorating, patching, etc., and I've gone over the code of some of them, but I cannot find an elegant way to create a proxy of any given module and proxy it as seamlessly as possible, except for the logic appended to every interaction (recording input arguments and return values, for example).
In case someone else is looking for a more complete proxy implementation:
Although there are several Python proxy solutions similar to what the OP is looking for, I could not find one that also proxies classes and arbitrary class objects, as well as automatically proxying function return values and arguments, which is what I needed.
I've got some code written for that purpose as part of a full proxying/logging solution for Python execution, and I may turn it into a separate library in the future. If anyone's interested, you can find the core of the code in a pull request. Drop me a line if you'd like it as a standalone library.
My code automatically returns wrapper/proxy objects for any proxied object's attributes, functions, and classes. The purpose is to log and replay some of the code, so I've also got the equivalent "replay" code and some logic to store all proxied objects to a JSON file.
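A minimal sketch of the core idea (not the author's library): a proxy object that forwards attribute access to the wrapped module and records every call, demonstrated here on the standard math module. A complete solution would also wrap classes and returned objects recursively, as described above.

```python
import functools
import math

class LoggingProxy:
    """Forward attribute access to `wrapped`, logging every call made."""
    def __init__(self, wrapped, log):
        self._wrapped = wrapped
        self._log = log

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr  # plain attributes pass straight through
        @functools.wraps(attr)
        def wrapper(*args, **kwargs):
            result = attr(*args, **kwargs)
            # record (name, input arguments, return value)
            self._log.append((name, args, kwargs, result))
            return result
        return wrapper

log = []
proxied_math = LoggingProxy(math, log)
proxied_math.sqrt(9.0)   # forwarded to math.sqrt and recorded in log
```

Code that receives proxied_math instead of math behaves identically, while every interaction lands in the log.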

OOP: Using conditional statement while initializing a class

This question geared toward OOP best practices.
Background:
I've created a set of scripts that are either automatically triggered by cron jobs or are constantly running in the background to collect data in real time. In the past, I've used Python's smtplib to send myself notifications when errors occur or a job successfully completes. Recently, I migrated these programs to the Google Cloud platform, which by default blocks popular SMTP ports. To get around this I used Linux's mail command to continue sending myself the reports.
Originally, my hacky solution was to have two separate modules for sending alerts that were initiated based on an argument I passed to the main script.
Ex:
$ python mycode.py my_arg
    if sys.argv[1] == 'my_arg':
        mailer = Class1()
    else:
        mailer = Class2()
I want to improve upon this and create a module that automatically handles this without the added code. The question I have is whether it is "proper" to include a conditional statement while initializing the class to handle the situation.
Ex:
    class Alert(object):
        def __init__(self, other_args):
            if sys.platform == "linux":   # Google Cloud platform
                # instantiate Class1 variables and methods
                ...
            else:                          # local copy
                # instantiate Class2 variables and methods
                ...
My gut instinct says this is wrong but I'm not sure what the proper approach would be.
I'm mostly interested in answers regarding how to create OO classes/modules that handle environmental dependencies to provide the same service. In my case, a blocked port requires a different set of code altogether.
Edit: After some suggestions here are my favorite readings on this topic.
http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html
This seems like a wonderful use-case for a factory class, which encapsulates the conditional, and always returns an instance of one of N classes, all of which implement the same interface, so that the rest of your code can use it without caring about the concrete class being used.
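A minimal sketch of that factory approach, with hypothetical names: both mailers expose the same send() interface, and the factory is the only place the platform conditional lives.

```python
import sys

class SmtpMailer:
    """Sends reports via smtplib (stand-in implementation)."""
    def send(self, msg):
        return f"smtp: {msg}"

class CliMailer:
    """Sends reports via the Linux mail command (stand-in implementation)."""
    def send(self, msg):
        return f"mail-command: {msg}"

def make_mailer(platform=None):
    """Encapsulate the conditional: return whichever mailer works here."""
    platform = platform or sys.platform
    if platform.startswith("linux"):   # e.g. the Google Cloud VM
        return CliMailer()
    return SmtpMailer()

mailer = make_mailer("linux")
mailer.send("job finished")   # callers never see the conditional
```

The rest of the code only ever calls make_mailer() and send(), so adding a third transport later means touching the factory, not the call sites.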
That is one way to do it, but I would rather configure a single class dynamically. Instead of selecting between two classes, you have one class that takes some arguments and sets up its behavior depending on the arguments provided. There are quite a few examples out there and I'm sure you can adapt them to your use case; try searching for how to create a dynamic class in Python.

Monkeypatch persisting across unit tests python

I have a custom framework which runs different code for different clients. I have monkeypatched certain methods in order to customize functionality for a client.
Here is the pattern simplified:
    # import monkeypatches here
    if self.config['client'] == 'cool_dudes':
        from app.monkeypatches import Stuff
    if self.config['client'] == 'cool_dudettes':
        from app.monkeypatches import OtherStuff
Here is an example patch:
    from app.framework.stuff import Stuff

    def function_override(self):
        pass  # replaces the original behavior with a no-op

    Stuff.function = function_override
This works fine when the program executes as it is executed in a batch manner, spinning up from scratch every time. However, when running across unit tests, I find that the monkey patches persist across tests, causing unexpected behavior.
I realize that it would be far better to use an object oriented inheritance approach to these overrides, but I inherited this codebase and am not currently empowered to rearchitect it to that degree.
Barring properly re-architecting the program, how can I prevent these monkey patches from persisting across unit tests?
The modules, including app.framework.<whatever>, are not reloaded for every test. So, any changes in them you make persist. The same happens if your module is stateful (that's one of the reasons why global state is not such a good idea, you should rather keep state in objects).
Your options are to:
undo the monkey-patches when needed, or
change them into something more generic that would change (semi-)automatically depending on the test running, or
(preferred) do not reinvent the wheel and use an existing, manageable, time-proven solution for your task (or at least base your work on one if it doesn't meet your requirements completely). E.g. if you use the patches for mocking, see "How can one mock/stub python module like urllib". Among the suggestions there is @mock.patch, which applies the patch for a specific test and undoes it upon completion.
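For illustration, a minimal sketch of that last option using the standard library's unittest.mock (Stuff here is a stand-in class, not the OP's app.framework.stuff):

```python
from unittest import mock

class Stuff:
    def function(self):
        return 'original'

def test_patched_behavior():
    # the patch exists only inside the with-block and is undone on exit,
    # so it cannot leak into other unit tests
    with mock.patch.object(Stuff, 'function', return_value='patched'):
        assert Stuff().function() == 'patched'
    assert Stuff().function() == 'original'
```

The same scoping is available as a decorator (@mock.patch.object(...)) on individual test methods.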
Anyone coming here looking for information about monkeypatching might want to have a look at pytest's monkeypatch fixture. It avoids the OP's problem by automatically undoing all modifications after the test function has finished.
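A minimal sketch of that fixture in use (Stuff is again a stand-in class):

```python
import pytest

class Stuff:
    def function(self):
        return 'original'

def test_with_monkeypatch(monkeypatch):
    # the setattr is recorded by pytest and undone automatically when
    # this test finishes, so nothing persists across unit tests
    monkeypatch.setattr(Stuff, 'function', lambda self: 'patched')
    assert Stuff().function() == 'patched'
```

Outside of a test function, the same machinery is available as pytest.MonkeyPatch (pytest >= 6.2), with an explicit undo().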

python libraries that provide mocked objects

I use python at work to develop some fairly complex libraries that have interdependencies between them. I am suggesting to my team that the developers of our internal libraries provide mocked versions of their functional classes and functions.
For an example, imagine you had an ssh class and you had to mock out ssh calls repeatedly. One way to do it would be to just mock something like ssh.run_command with different return values (using standard MagicMock). IMO a better way is to have something like
    # mock using regexp
    ssh.run_command.mock_pattern(r'ls .*', output='one two three', rc=0)
    ssh.run_command.mock_pattern(r'kill \d+', output='', rc=0)

    myfunc()  # run a function that uses ssh somehow

    ssh.run_command.assert_all_match()  # assert all inputs matched a pattern
Basically, I am suggesting that the libraries define "helper mocks" that make it easier to write unit tests for them.
I am having trouble finding resources on this subject, or even python libraries that provide this kind of functionality -- does anyone know about this kind of tool?
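One way to get close to this without a dedicated library is a small helper around the standard library's unittest.mock.MagicMock that dispatches on a regexp. mock_pattern and assert_all_match are the question's hypothetical names; this sketch folds the "every input must match a pattern" assertion into the call itself:

```python
import re
from unittest.mock import MagicMock

def pattern_mock(patterns):
    """Return a MagicMock whose return value is chosen by matching the
    first positional argument (the command string) against regexps."""
    def side_effect(cmd, *args, **kwargs):
        for pat, result in patterns:
            if re.match(pat, cmd):
                return result
        raise AssertionError(f"no pattern matched: {cmd!r}")
    return MagicMock(side_effect=side_effect)

# stand-in for ssh.run_command, returning (output, rc) pairs
run_command = pattern_mock([
    (r'ls .*', ('one two three', 0)),
    (r'kill \d+', ('', 0)),
])
```

A library shipping "helper mocks" could expose exactly this kind of factory next to its real ssh class, so every consumer writes the same two lines instead of hand-rolling MagicMock return values.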
Thanks!
