manipulate pytest commandline param for only one test - python

Is it possible to set the -s command-line parameter of pytest via a fixture or any other method?
What I want to achieve is to enable the -s (capture=no) option for exactly one of my tests.
I am thinking of a way to manipulate this option from within the test, or via a fixture.
To give an example of a test that needs capturing disabled, here is a really simple test:
from fabric.state import env
from fabric.operations import run

class TestCapture:

    def test_fabric(self):
        env.user = '<ssh_username>'
        env.password = '<ssh_password>'
        env.host_string = '<hostname>'
        who = run('whoami').stdout  # use Fabric's run command to invoke whoami on <hostname>
        assert who == '<username>'
In this test, Fabric is used to log in to another machine via SSH and run the whoami command. The result should be the name of the login user.
This test fails if pytest's capturing is enabled.

It is possible to manipulate this using request.config.pluginmanager.getplugin('capturemanager').suspendcapture() and .resumecapture().
But this is a rather rare use case which should not normally be needed. The only two cases I can think of right now where this is reasonable are the pdb and pytest-timeout plugins, so explaining your use case a bit more might be helpful.
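For reference, a minimal sketch of that idea wrapped in a fixture (the fixture name no_capture is made up, and it assumes a pytest version old enough to still expose suspendcapture()/resumecapture(); newer releases have since renamed these methods):
import pytest

@pytest.fixture
def no_capture(request):
    capmanager = request.config.pluginmanager.getplugin('capturemanager')
    capmanager.suspendcapture()    # stop capturing stdout/stderr for this test
    yield
    capmanager.resumecapture()     # turn capturing back on afterwards
A test that requests the no_capture fixture then runs with capturing disabled while the rest of the suite keeps the default behaviour.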

Related

Environment Variables in Fabric2

I’m using Python 3.6 and Fabric 2.4. I’m using Fabric to SSH into a server and run some commands. I need to set an environment variable for the commands being run on the remote server. The documentation indicates that something like this should work:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.run("command_to_execute", env={"KEY": "VALUE"})
But that doesn’t work. Something like this should also be possible:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.config.run.env = {"KEY": "VALUE"}
    c.run("command_to_execute")
But that doesn’t work either. I feel like I’m missing something. Can anyone help?
I was able to do it by setting inline_ssh_env=True and then explicitly setting the env variable, for example:
from fabric import Connection

with Connection(host=hostname, user=username, inline_ssh_env=True) as c:
    c.config.run.env = {"MY_VAR": "this worked"}
    c.run('echo $MY_VAR')
As stated on the site of Fabric:
The root cause of this is typically because the SSH server runs non-interactive commands via a very limited shell call: /path/to/shell -c "command" (for example, OpenSSH). Most shells, when run this way, are not considered to be either interactive or login shells; and this then impacts which startup files get loaded.
You can read more on this page: link
So what you are trying to do won't work; the solution is to set the environment variable you want explicitly:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.config.run.env = {"KEY": "VALUE"}
    c.run('echo export %s >> ~/.bashrc' % 'ENV_VAR=VALUE')
    c.run('source ~/.bashrc')
    c.run('echo $ENV_VAR')  # to verify whether it is set or not!
    c.run("command_to_execute")
You can try that:
from fabric import task, Connection

@task
def qa(ctx):
    ctx.config.run.env['counter'] = 22
    ctx.config.run.env['conn'] = Connection('qa_host')

@task
def sign(ctx):
    print(ctx.config.run.env['counter'])
    conn = ctx.config.run.env['conn']
    conn.run('touch mike_was_here.txt')
And run:
fab2 qa sign
When creating the Connection object, try adding inline_ssh_env=True.
Quoting the documentation:
Whether to send environment variables “inline” as prefixes in front of command strings (export VARNAME=value && mycommand here), instead of trying to submit them through the SSH protocol itself (which is the default behavior). This is necessary if the remote server has a restricted AcceptEnv setting (which is the common default).
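A minimal sketch of that approach, combining inline_ssh_env=True with a per-call env dict (the host name, user, command, and variable are placeholders):
from fabric import Connection

# With inline_ssh_env=True, Fabric prefixes the command with
# "export KEY=VALUE && ..." instead of relying on the server's AcceptEnv setting.
with Connection("servername", user="deploy", inline_ssh_env=True) as c:
    c.run("command_to_execute", env={"KEY": "VALUE"})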
According to that part of the official doc, the connect_kwargs attribute of the Connection object is intended to replace the env dict. I use it, and it works as expected.

Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the HTML and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the HTML.
Question
I want a test that scrapes a page from the live website, but I want that test to be turned off automatically unless specified. A similar scenario would be an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
What I have done
I am currently using the skip marker and commenting it out.
I also tried the skipif marker and passing arguments to the Python script using the command pytest test_file.py 1 together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
import sys

if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I am not sure how to override the autouse variable, though.
A similar solution was given in "How to skip a pytest using an external fixture?", but that solution seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
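With that conftest.py in place, the slow test is skipped by default and only runs when the new option is passed:
pytest test_module.py            # test_func_slow is reported as skipped
pytest test_module.py --runslow  # both tests run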
There are a couple of ways to handle this; I'll go over two common approaches I've seen in Python baselines.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make, since it is well suited to this kind of wrapping. Then you can say make test and it runs just your default (fast) test suite, or make test_all and it runs all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest invocation). However, you can define an environment variable and then pull that variable into the test module to decide whether to run all your tests. Environment variables are shell dependent, but I'll assume a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os

...

# skipif with a boolean condition needs an explicit reason; os.environ.get()
# avoids a KeyError when the variable is not set at all
@pytest.mark.skipif(os.environ.get("TEST_LEVEL") == "unit",
                    reason="scraping test only runs when TEST_LEVEL=all")
def test_scrape_website():
    ...
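Usage then looks something like this in a bash shell (variable values as defined above):
export TEST_LEVEL="unit"   # default: test_scrape_website is skipped
pytest
export TEST_LEVEL="all"    # now the scraping test runs as well
pytest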
Note: Naming the test levels "unit" and "integration" is irrelevant; you can name them whatever you want. You can also have many levels (like nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only cleanly separates the tests but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument against any part of a test's name or its markers, and you can invert the match with not (the boolean operators and and or also work). Thus -k 'not slow' deselects tests which have "slow" in the name, carry a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as to not accidentally match a test that just happens to have slow in the name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Here is a little class for reusing @xverges's code with multiple marks/CLI options:
from dataclasses import dataclass

import pytest

@dataclass
class TestsWithMarkSkipper:
    """Util to skip tests with mark, unless cli option provided."""
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook
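Tests are then marked just as in the documentation example above, for instance (a hypothetical test file):
import pytest

@pytest.mark.slow
def test_only_runs_with_runslow():
    ...
and the suite is run with pytest (slow tests skipped) or pytest --runslow (everything).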

How to run a @roles-decorated fabric task on only a single host

I have a task decorated with @roles that I would occasionally like to run on a single host (for canary-testing deploys).
from fabric.api import *

env.roledefs = {
    'web-workers': ['django@worker1', 'django@worker2'],
    'some-other-role': ['django@worker2'],
}

@task
@roles('web-workers')
def bogomips():
    run('uptime')
The docs for @roles state that:
...barring an override on the command line, my_func will be executed against the hosts listed [in the role]...
But I can't get the "override" functionality mentioned here to work... I've tried:
$ fab bogomips -H django@worker2
$ fab bogomips -R some-other-role
but it always executes on the entire role mentioned in the decorator...
What am I missing here? How can I override where a @roles-decorated task is run?
This is actually the intended behavior, according to the Execution model's Order of Precedence, and there's a slightly different syntax that you must use in this scenario.
So here's the command that doesn't work:
$ fab bogomips -R some-other-role # fabric ignores the -R values!
And here's the version that DOES work:
$ fab bogomips:roles=some-other-role
Here's the issue: #308: @roles and @hosts decorators ignore command line options
And the docs: http://docs.fabfile.org/en/1.0.0/usage/execution.html#order-of-precedence
Per-task, command-line host lists (fab mytask:host=host1) override absolutely everything else.
Per-task, decorator-specified host lists (#hosts('host1')) override the env variables.
Globally specified host lists set in the fabfile (env.hosts = ['host1']) can override such lists set on the command-line, but only if you’re not careful (or want them to.)
Globally specified host lists set on the command-line (--hosts=host1) will initialize the env variables, but that’s it.
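Per the first rule quoted above, a per-task host list on the command line should also let you hit a single box for the canary case, e.g. (using one of the host strings from the roledefs above):
$ fab bogomips:host=django@worker2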

Python CLI program unit testing

I am working on a Python command-line interface (CLI) program, and I find testing it tedious. For example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), hit enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, add print lines, return to the terminal, run the command again... over and over.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing normal Python scripts?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit tests usually means that your tests execute more quickly and that failures are easier to interpret/understand. But unit tests are typically more tied to the program structure, requiring more refactoring effort when you change things internally.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv::
# content of test_pyconv.py
import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]

@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run

def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this::
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach to writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
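The "can it be run as a Python module" check from the list above can be written the same way (reusing the os import from the snippet above; foobar is the same placeholder package name):
def test_run_as_module():
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0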
Towards unit tests
To test single aspects inside the application you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch your script exiting, to avoid the tests failing with SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
from unittest.mock import patch

import pytest
from cli_test_helpers import ArgvContext

import foobar.cli

@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch the SystemExit exception that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is: what is the best way to test a CLI program? Can it be as easy as unit testing normal Python scripts?
The only difference is that when you run a Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from the command line, it should have the following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now there is no difference in how you would use it: as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
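For example, a minimal sketch, assuming the module above is saved as mymodule.py (a hypothetical name):
# content of test_mymodule.py
from mymodule import foo

def test_foo_accepts_a_single_argument():
    # foo() currently just passes, so it returns None
    assert foo("some-arg") is None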
Maybe too little too late, but you can always use os.system:
import os

result = os.system('<insert your command with options here>')
assert result == 0
That way, you can run your program as if it were invoked from the command line and check the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
    Enable text capturing of writes to sys.stdout and sys.stderr.
    The captured output is made available via ``capsys.readouterr()`` method
    calls, which return a ``(out, err)`` namedtuple.
    ``out`` and ``err`` will be ``text`` objects.
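For example, a small sketch of a test that checks captured output via capsys (hello() is a made-up function under test):
def hello():
    print("hello world")

def test_hello_output(capsys):
    hello()
    captured = capsys.readouterr()   # (out, err) namedtuple
    assert captured.out == "hello world\n"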
This isn't specific to Python, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them after making changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing; if they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one ends by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || tput setaf 1 && echo "FAILED"
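Tying it together, the 'make test' target mentioned above might look something like this (the second command and output file are made-up placeholders):
test:
	python mycliprogram --someoption "some input" | diff correct_output -
	python mycliprogram --otheroption "other input" | diff other_correct_output -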
The short answer is yes, you can use unit tests, and you should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True,
        )
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing a command-line application, I believe that you are already aware of Python's unit-testing tools and that you are actually looking for a tool to automate end-to-end tests of a command-line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the Python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run, prepended with $, and the expected output on the lines below. For example, the following would be a valid cram test:
$ echo Hello
Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in a separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy. Just follow the online manual for unit testing.
I would not test the program as a whole; that is not a good test strategy and may not actually pinpoint where the error is. The CLI interface is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part you have a test case to exercise that change.
So restructure your application so that you test the API and not the application itself. But you can also have a functional test that actually runs the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.

How do I tell fabric which role the current site has?

I have just one command in fabfile.py:
from fabric.api import roles, local

@roles('dev')
def test():
    local('...')
Now, I can use --role=dev in every command, but this is extremely stupid.
What I want is to install my project on a host once, with a certain role, and then use it without repeating this parameter.
I typically include the following in my fabfile.py:
if not len(env.roles):
    env.roles = ["test"]
This says that if env.roles is not defined (via the command line, for instance), it should default to "test" in my case. So in your case I would alter the above to substitute dev for test, which gives you:
if not len(env.roles):
    env.roles = ["dev"]
By doing this you should get the behavior you are looking for, while retaining the ability to override it at any point in the future if you so desire.
EDIT: I'm editing this to include a small example fabfile.py and explanation of usage.
from fabric.api import cd, env, run, sudo

env.roledefs = {
    'test': ['test.fabexample.com'],
    'stage': ['stage.fabexample.com'],
    'prod': ['web01.fabexample.com', 'web02.fabexample.com', 'web03.fabexample.com'],
}

# default role will be test
env.roles = ['test']

def git_pull():
    run("git pull")

def deploy():
    target = "/opt/apps/FOO"
    with cd(target):
        git_pull()
        sudo("service apache2 restart")
Now this fabfile allows me to deploy code to any of three different environments: "test", "stage", or "prod". I select which environment I want to deploy to via the command line:
fab -R stage deploy
or,
fab --role=stage deploy
If I do not specify a role, fabric will default to 'test' because env.roles is set. Note that fabric isn't used to do anything to the local box; instead it acts on the box (or boxes) defined in env.roledefs, although with some modifications it could be made to work locally as well.
Typically the fab command is used from a development box to perform these operations remotely on the testing, staging, or production boxes, so specifying the role via the command line is not "extremely stupid" but is by design in this case.
You can use env.roledefs to associate roles with groups of hosts.
