Pytest CLI: how can I ignore a test case by name? - python

Users can run test cases like this:
python /test/submit.py --pytest-args "test/cases/"
which will be run as
pytest.main("test/cases/")
submit.py defines its argparse argument as ('--pytest-args', default='', type=str).
submit.py is a module in which one of the functions (initiate_test(), say) splits --pytest-args on spaces and runs pytest with the result of that split.
Now I want to pass --pytest-args in such a way that it ignores test_name.
But --ignore and -k are not doing what I expect.
Below is how I tried --ignore and -k.
--ignore seems to be limited to directories and modules:
python /test/submit.py --pytest-args "test/cases/ --ignore=test/cases/test.py::test_class::test_name"
And while -k=test_name does a selective run, -k!=test_name did not deselect:
python /test/submit.py --pytest-args "test/cases/ -k!=test_name"
I also tried a shell parameter like a='-k not (test_name)' with
python /test/submit.py --pytest-args "test/cases/ $a"
but this does not work because of the spaces in $a.
Note: I don't have access to test.py or submit.py, so I cannot use markers. The solution therefore has to work purely from the CLI.

Use python /test/submit.py --pytest-args "test/cases/ -k-test_name"
Note that the test to be skipped starts with a - when passed to the -k argument.
One can also spell out the command-line argument as -k 'not test_name'.
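Because submit.py splits --pytest-args on spaces, a -k expression containing spaces is torn apart before it ever reaches pytest, while the space-free -k-test_name form survives intact. A small sketch of the assumed splitting behaviour (the actual submit.py code is not visible):
args = "test/cases/ -k-test_name".split(" ")
print(args)   # ['test/cases/', '-k-test_name'] -- one intact -k expression
args = "test/cases/ -k not test_name".split(" ")
print(args)   # ['test/cases/', '-k', 'not', 'test_name'] -- the expression is torn apart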

Related

Is it possible to run a python script from the windows command prompt and pass an argument for that script at the same time?

I have a saved python script. I run this python script from the command prompt in Windows 10.
This is as simple as navigating to the directory where the script is located and then typing:
python myscript.py
and the script will run fine.
However, sometimes, I want to run this script such that a variable within that script is set to one value and sometimes to another. This variable tells the script which port to operate an API connection through (if this is relevant).
At the moment, I go into the script each time and change the variable to the one that I want and then run the script after that. Is there a way to set the variable at the time of running the script from the command prompt in Windows 10?
Or are there potentially any other efficient solutions to achieve the same flexibility at the time of running?
Thanks
The usual way to do this is with command-line arguments. In fact, passing a port number is, after passing a list of filenames, almost the paradigm case for command-line arguments.
For simple cases, you can handle this in your code with sys.argv:
port = int(sys.argv[1])
Or, if you want a default value:
port = int(sys.argv[1]) if len(sys.argv) > 1 else 12345
Then, to run the program:
python myscript.py 54321
For more complicated cases (multiple flags, some with values, etc.) you usually want to use something like argparse. But you'll probably want to read up a bit on typical command-line interfaces, and maybe look at the arguments of tools you commonly use, before designing your first one, because just looking at all of the options in argparse without knowing what you want in advance can be pretty overwhelming.
Another option is to use an environment variable. This is more tedious if you want to change it for each run, but if you want to set it once for an entire series of runs in a command-line session, or even set a computer-wide default, it's a lot easier.
In the code, you'd look in os.environ:
port = int(os.environ.get('MYSCRIPT_PORT', 12345))
And then, to set the port (using Windows cmd syntax, since the question is about Windows 10):
set MYSCRIPT_PORT=54321
python myscript.py
You can combine the two: use a command-line argument if present, otherwise fall back to the environment variable, otherwise fall back to a default. Or even add a config file and/or (if you only care about Windows) a registry setting. Python itself does something like this three-step fallback, as do many major servers, but it may be overkill for your simple use case.
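As a rough illustration, that three-step fallback might look like this (a minimal sketch; MYSCRIPT_PORT and the default are just the values from the snippets above):
import os
import sys

DEFAULT_PORT = 12345

def resolve_port():
    # 1. A command-line argument wins if present.
    if len(sys.argv) > 1:
        return int(sys.argv[1])
    # 2. Otherwise fall back to the environment variable.
    if 'MYSCRIPT_PORT' in os.environ:
        return int(os.environ['MYSCRIPT_PORT'])
    # 3. Otherwise use the built-in default.
    return DEFAULT_PORT

print(resolve_port())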
You should look at argparse. Here's an example:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-m", help='message to be sent', type=str)
args = parser.parse_args()
print(args.m)
Each argument you define is stored as an attribute on the object returned by parse_args(), so you access it in your code the way I did in my print statement:
args.m <---- this is the passed argument you want to do stuff with
Here was my input/output:
C:\Users\Vinny\Desktop>python argtest.py -m "Hi"
Hi
C:\Users\Vinny\Desktop>
More info on argparse: https://docs.python.org/3/library/argparse.html
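If you want the script to keep working when -m is omitted, a hedged variant of the example above would add a default value (the default string here is made up):
import argparse

parser = argparse.ArgumentParser()
# default= is an addition; the original example had no default value
parser.add_argument("-m", help='message to be sent', type=str, default='(no message)')
args = parser.parse_args()
print(args.m)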
You need the argparse library.
https://docs.python.org/3/library/argparse.html
https://docs.python.org/2/library/argparse.html

Executing a PowerShell script OUTSIDE of Python

I am trying to build a sort of application that utilises both Python and PowerShell scripts. I have already written the Python script and the PowerShell script, which are meant to work simultaneously but separately from each other. What I want is a Python program that launches them both. Is there a way? Thanks!
What I have right now, as part of a larger script, is:
import subprocess
autom = r"C:\Users\mrmostacho\Desktop\Robot\Autom.ps1","-ExecutionPolicy","Unrestricted"
powershell = r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
subprocess.Popen("%s %s" % (powershell, autom,))
I think you don't want "-ExecutionPolicy","Unrestricted" as script arguments, but instead want to set PowerShell's execution policy to allow the execution of your script. Therefore you should pass those parameters before the actual script.
Second: it's not enough to pass the script as an argument to powershell.exe (that way the script name is interpreted as a PowerShell command, and one has to escape the name according to PowerShell's quoting rules). Instead, the script name should be given after the -File parameter. From the online documentation:
-File <FilePath> [<Args>]
Runs the specified script in the local scope ("dot-sourced"), so that the functions and variables that the script creates are available in the current session. Enter the script file path and any parameters. File must be the last parameter in the command, because all characters typed after the File parameter name are interpreted as the script file path followed by the script parameters.
You can include the parameters of a script, and parameter values, in the value of the File parameter. For example: -File .\Get-Script.ps1 -Domain Central
Typically, the switch parameters of a script are either included or omitted. For example, the following command uses the All parameter of the Get-Script.ps1 script file: -File .\Get-Script.ps1 -All
In rare cases, you might need to provide a Boolean value for a switch parameter. To provide a Boolean value for a switch parameter in the value of the File parameter, enclose the parameter name and value in curly braces, such as the following: -File .\Get-Script.ps1 {-All:$False}.
Third: As cdarke already commented, it's better to use a list instead of a string as the argument to Popen. This way one doesn't need to worry about command-line parsing on Windows.
Altogether, this should be the way to go (tested with a small test script):
import subprocess

autom = r"C:\Users\mrmostacho\Desktop\Robot\Autom.ps1"
powershell = r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
# -ExecutionPolicy comes before -File; everything after -File is treated as
# the script path followed by the script's own parameters.
subprocess.Popen([powershell, "-ExecutionPolicy", "Unrestricted", "-File", autom])
If you need to pass arguments to the script, do it like this:
subprocess.Popen([powershell,"-ExecutionPolicy","Unrestricted","-File", autom, 'arg 1', 'arg 2'])
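Since the original goal was to launch the Python script and the PowerShell script simultaneously, a hedged sketch of a launcher might look like this (the second script's name is made up for illustration):
import subprocess

powershell = r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
autom = r"C:\Users\mrmostacho\Desktop\Robot\Autom.ps1"

# Start both processes without waiting, so they run side by side.
ps_proc = subprocess.Popen([powershell, "-ExecutionPolicy", "Unrestricted", "-File", autom])
py_proc = subprocess.Popen(["python", "other_script.py"])  # hypothetical second script

# Optionally block until both have finished.
ps_proc.wait()
py_proc.wait()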

about argparse when using fabric

I am new to Python & fabric.
I have a Python module, module1.py, which needs to take a command-line parameter; we use argparse inside module1.py to make that parameter required.
But we are trying to run the whole program through fabric, and if I specify the parameter on the command line when running through fab, I get
mycode.py: error: argument --config_yaml is required
How can I pass the argument through fab?
Thanks!
From what I can see here:
https://github.com/fabric/fabric/blob/master/fabric/main.py#L340
https://github.com/fabric/fabric/blob/master/fabric/main.py#L619
You CAN'T do it. You can't add things to env_options before main.py runs; your code inside fabfile.py runs only after main() has already been processed.
You can however do this:
Rename fabfile.py to whatever you want, as long as it's not fabric.py. I called mine fabricc.py.
from fabric.state import env_options, make_option

env_options.append(make_option('--myconfig', dest='config_file', default='default.ini'))

from fabric import main
from fabric.api import task, env

@task
def do_something():
    print('config file: {}'.format(env.config_file))

if __name__ == '__main__':
    main.find_fabfile = lambda x: __file__
    main.main()
now run it:
$ python fabricc.py do_something
config file: default.ini
Done.
or..
$ python fabricc.py do_something --myconfig=somethinelse.ini
config file: somethinelse.ini
Done.
Word of warning: DO NOT call it --config -- that is a built-in param.
With this you still enjoy everything you love about fabric, and nothing has changed.
$ python fabricc.py do_something --help
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...

Options:
  -h, --help            show this help message and exit
  -d NAME, --display=NAME
  ...
  -z INT, --pool-size=INT
                        number of concurrent processes to use in parallel mode
  --myconfig=CONFIG_FILE
PS. I don't suggest you override fabric internals to handle it manually; I'm just showing you that it can be done. (The reason I don't condone this is compatibility: if the version changes tomorrow, this might break.) It's best to use the product the way the author meant it to be used.
====================================================================
Also, fabric is meant to be a fancy task runner -- for lack of a better word. You cannot have command-line arguments on a per-task basis; it wasn't designed to work like that. BUT it was designed to take in task-function arguments:
@task
def do_something(file=None):
    print('config file: {}'.format(file or 'default.ini'))
$ fab do_something:override.ini
config file: override.ini
Done.
and that is what fabric was created to do.
It's meant to be used like this:
@task
def environment(box_env=None):
    ...

@task
def deploy(branch='master'):
    ...

@task
def provision(recreate_if_existing=False, start_nginx=True):
    ...
fab environment:dev deploy:development
fab environment:dev provision:True,False
All together:
fab environment:dev provision deploy:development
I know argparse, but not fabric. From the error it looks like your script defines a --config_yaml argparse argument (and makes it required). But fabric apparently also consumes that argument name.
I don't know whether fabric also uses argparse, but it is common for programs like that to strip off the command-line arguments they expect and pass the rest on to your code.
Do you need to use --config_yaml in your script? And why is it set to required=True? argparse allows you to specify a required parameter, but ideally flagged arguments like this are optional; if not given, they should have a reasonable default.
I see a --config=... argument in the fabric API, but not a --config_yaml.
I wonder if this section about per-task arguments is relevant
http://docs.fabfile.org/en/1.10/usage/fab.html#per-task-arguments
It sounds like you need to add the task name to the argument. fabric doesn't just pass all unknown arguments to the task (which is what I was assuming above). But I don't have fabric installed, so I can't test it.

Python script argument conditional

Is anyone able to tell me how to write a conditional for an argument in a Python script? I want it to print "Argument2 Entered" if it is run with a second command-line argument, such as:
python script.py argument1 argument2
And print "No second argument" if it is run without a second command-line argument, like this:
python script.py argument1
Is this possible?
import sys

if len(sys.argv) == 2:  # the first entry in sys.argv is the script itself...
    print("No second argument")
elif len(sys.argv) == 3:
    print("Second argument")
There are many answers to this, depending on what exactly you want to do and how much flexibility you are likely to need.
The simplest solution is to examine the variable sys.argv, which is a list containing all of the command-line arguments. (It also contains the name of the script as the first element.) To do this, simply look at len(sys.argv) and change behaviour based on its value.
However, this is often not flexible enough for what people expect command-line programs to do. For example, if you want a flag (-i, --no-defaults, ...) then it's not obvious how to write one with just sys.argv. Likewise for arguments (--dest-dir="downloads"). There are therefore many modules people have written to simplify this sort of argument parsing.
The built-in solution is argparse, which is powerful and pretty easy-to-use but not particularly concise.
A clever solution is plac, which inspects the signature of the main function to try to deduce what the command-line arguments should be.
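For the concrete question here (detecting an optional second argument), an argparse sketch might look like this (argument names are invented; nargs='?' makes the positional optional):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("first")
parser.add_argument("second", nargs="?")  # optional positional; None when absent
args = parser.parse_args()

if args.second is not None:
    print("Argument2 Entered")
else:
    print("No second argument")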
There are many ways to do this simple thing in Python. If you are interested in knowing more, I recommend reading this article. BTW, I am giving you one solution below:
import click

'''
Prerequisite: python -m pip install click
run: python main.py ttt yyy
'''

@click.command(context_settings=dict(ignore_unknown_options=True))
@click.argument("argument1")
@click.argument("argument2")
def main(argument1, argument2):
    print(f"argument1={argument1} and argument2={argument2}")

if __name__ == '__main__':
    main()
The following block should be self-explanatory:
$ ./first.py second third 4th 5th
5
$ cat first.py
#!/usr/bin/env python
import sys
print (len(sys.argv))
This is related to many other posts depending upon where you are going with this, so I'll put four here:
What's the best way to grab/parse command line arguments passed to a Python script?
Implementing a "[command] [action] [parameter]" style command-line interfaces?
How can I process command line arguments in Python?
How do I format positional argument help using Python's optparse?
But the direct answer to your question from the Python docs:
sys.argv -
The list of command line arguments passed to a Python script. argv[0] is the script name (it is operating system dependent whether this is a full pathname or not). If the command was executed using the -c command line option to the interpreter, argv[0] is set to the string '-c'. If no script name was passed to the Python interpreter, argv[0] is the empty string.
To loop over the standard input, or the list of files given on the command line, see the fileinput module.

Python CLI program unit testing

I am working on a Python command-line-interface program, and I find testing it tedious. For example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), press enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, write print lines, return to the terminal, run the command again... and so on, over and over.
So my question is: what is the best way to do testing with a CLI program? Can it be as easy as unit testing normal Python scripts?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test, and this way you can be sure that the program really works as a whole. Writing unit tests usually means that you get to execute your tests more quickly and that failures are usually easier to interpret/understand. But unit tests are typically more tied to the program structure, requiring more refactoring effort when you change things internally.
Anyway, using py.test, here is a little example for testing a latin1-to-utf8 conversion for pyconv:
# content of test_pyconv.py
import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]

@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run

def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this:
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism; see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided to help you prepare and run tests. If you want to play, you can try to insert a failing assert statement or simply "assert 0" and then look at the traceback, or issue "py.test --pdb" to enter a Python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
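In the same spirit, the module-execution check from the list above might look like this (foobar is still the hypothetical package name):
import os

def test_runas_module():
    """Can this package be run as a Python module?"""
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0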
Towards unit tests
To test single aspects inside the application you will need to mimic things like command-line arguments and maybe environment variables. You will also need to catch your script's exiting, to keep the tests from failing on SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
import pytest
from unittest.mock import patch
from cli_test_helpers import ArgvContext

import foobar.cli

@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is, what is the best way to do testing with CLI program, can it be as easy as unit testing with normal python scripts?
The only difference is that when you run a Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from the command line, it should have the following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now there is no difference in how you use it: as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
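A matching test file could then be as small as this (myscript and the expected behaviour of foo are assumptions for illustration):
# test_myscript.py
from myscript import foo  # hypothetical module following the pattern above

def test_foo_accepts_argument():
    # foo() above is a stub that returns None; assert whatever contract yours has.
    assert foo("some-arg") is None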
Maybe too little, too late, but you can always use:
import os
result = os.system('<insert your command with options here>')
assert result == 0
That way, you can run your program as if it were from the command line and evaluate the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
    Enable text capturing of writes to sys.stdout and sys.stderr.
    The captured output is made available via ``capsys.readouterr()`` method
    calls, which return a ``(out, err)`` namedtuple.
    ``out`` and ``err`` will be ``text`` objects.
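A minimal capsys usage sketch (the function under test is made up):
def greet():
    print("hello")

def test_greet_output(capsys):
    greet()
    out, err = capsys.readouterr()
    assert out == "hello\n"
    assert err == ""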
This isn't for Python specifically, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one fails by adding a fail tag (the braces group the failure branch so it only runs when diff reports a difference):
python mycliprogram --someoption "some input" | diff correct_output - || { tput setaf 1; echo "FAILED"; }
The short answer is yes, you can use unit tests, and should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
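A minimal sketch of that sys.argv mocking (the module and argument names are invented):
import sys
from unittest import mock

import myscript  # hypothetical module with a main() that reads sys.argv

def test_main_with_fake_args():
    with mock.patch.object(sys, 'argv', ['myscript', 'fake-arg']):
        myscript.main()  # behaves as if run with 'fake-arg' on the command line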
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
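Outside of a test, the same call can be used to inspect a command's output directly (the command here is illustrative):
import subprocess

result = subprocess.run(['echo', 'hello'], check=True, capture_output=True, text=True)
print(result.returncode)  # 0
print(result.stdout)      # 'hello\n'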
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run prepended with $, and the expected output on the lines below them. For example, the following would be a valid cram test:
  $ echo Hello
  Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use the standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in a separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy; just follow the online manual for unit testing.
I would not test the program as a whole; this is not a good test strategy and may not actually pinpoint the error. The CLI interface is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part, you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can also have a functional test that actually does run the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.
