Splitting Python click options between functions

My codebase has multiple scripts, say sc1.py and sc2.py. Their code looks like this (just replace 1 with 2 to imagine what the other is like):
import click

@click.command()
@click.option("--op1", default="def1", required=True)
def sc1(op1):
    pass

if __name__ == "__main__":
    sc1()
What I want to do is write a wrapper that handles concerns like loading the config file (and handling missing ones appropriately), configuring the logger, etc.
The logger, as an example, requires some options, e.g. the log level, and this option doesn't appear in any of the existing scripts. I'd like to handle it at the wrapper level (since the wrapper will deal with common concerns like log configuration).
This is what the wrapper might look like:

@click.command()
@click.option("--level", default="INFO", required=False)
def wrapper(level, wrapped_main_function):
    # use level to set up the logger's basic config.
    wrapped_main_function()
Then I would modify the "main" in the scripts (sc1.py, sc2.py, etc.):

from wrap_module import wrapper

@click.command()
@click.option("--op1", default="def1", required=True)
def sc1(op1):
    pass

if __name__ == "__main__":
    wrapper(???, sc1)
Put into words: I am trying to split the options between two functions, say wrapper and sc1, and have wrapper deal with its own dedicated options, then call sc1, forwarding the remaining command line options. All the while, wrapper is not aware of what those options might be, since the whole point is to write something generic that can be used for all the scripts, with wrapper dealing with the commonalities.
The command line should look something like:
python sc.py --op1 something --level debug
I can't figure out the right syntax to do what I want.

I think you can get close to what you want with a slightly different approach. First, we have the following in common.py:
import click

def add_common_options(func):
    func = click.option("--level")(func)
    return func

def handle_common_options(level):
    print("log level is", level)
And then in sc1.py, we have:
import click
from common import add_common_options, handle_common_options

@click.command()
@click.option("--op1")
@add_common_options
def sc1(op1, **common_options):
    handle_common_options(**common_options)
    print("op1 is", op1)

if __name__ == "__main__":
    sc1()
(With sc2.py implemented in a similar fashion.)
This gets you the behavior you want. The help output for sc1.py looks like:

Usage: sc1.py [OPTIONS]

Options:
  --op1 TEXT
  --level TEXT
  --help        Show this message and exit.
And you can run a command line like this:
$ python sc1.py --op1 foo --level info
log level is info
op1 is foo
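If more shared options accumulate later, add_common_options can stack them all in one place. Here is a minimal sketch of that extension (the --log-file option and the logging setup are assumptions of mine, not part of the original answer):

import logging

import click

def add_common_options(func):
    # click.option returns a decorator, so shared options can be
    # applied one after another to whatever command func is.
    func = click.option("--level", default="INFO")(func)
    func = click.option("--log-file", default=None)(func)
    return func

def handle_common_options(level, log_file):
    # Translate the option values into logger configuration.
    logging.basicConfig(
        level=getattr(logging, level.upper(), logging.INFO),
        filename=log_file,
    )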

Related

Is it bad practice to have arguments called in main() in Python

Should main() take no parameters, with arguments accessed inside the function itself, or is it acceptable to pass them in as inputs, e.g. main(arg1, arg2, arg3)?
I know it works, but I'm wondering if it is poor programming practice. Apologies if this is a duplicate, but I couldn't see the question specifically answered for Python.
In most other programming languages, you'd either have zero parameters or two parameters:

int main(int argc, char *argv[])
These denote the arguments passed to the program. In Python, however, they are accessed through the sys module:

import sys

def main():
    print(sys.argv, len(sys.argv))
But then you could extend this so that you pass argv and argc into your Python function, similar to other languages:

import sys

def main(argv, argc):
    print(argv, argc)

if __name__ == '__main__':
    main(sys.argv, len(sys.argv))
But let's forget about argv/argc for now: why would you want to pass something through to main at all? You create something outside of main and want to pass it through to main. This can happen in two instances:
You're calling main multiple times from other functions.
You've created variables outside main that you want to pass through.
Point number 1 is definitely bad practice. main should be unique and called only once at the beginning of your program. If you have the need to call it multiple times, then the code inside main doesn't belong inside main. Split it up.
Point number 2 may seem like it makes sense, but then you try it in practice:

def main(a, b):
    print(a, b)

if __name__ == '__main__':
    x = 4
    y = 5
    main(x, y)
But then aren't x and y global variables? And good practice would assume that these are at the top of your file (along with other properties: they're constant, etc.), and that you wouldn't need to pass them through as arguments.
By following the pattern:

def main():
    ...stuff...

if __name__ == '__main__':
    main()
It allows your script both to be run directly and, if packaged using setuptools, to have an executable script generated automatically when the package is installed, by specifying main as an entry point.
See: https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-script-creation
You would add to setup.py something like:
entry_points={
'console_scripts': [
'my_script = my_module:main'
]
}
And then when you build a package, people can install it in their virtual environment, and immediately get a script called my_script on their path.
Automatic script creation like this requires a function that takes no required arguments.
It's a good idea to allow your script to be imported and to expose its functionality, both for code reuse and for testing. I would recommend something like this pattern:
import argparse

def parse_args():
    parser = argparse.ArgumentParser()
    #
    # ... configure command line arguments ...
    #
    return parser.parse_args()

def do_stuff(args):
    #
    # ... main functionality goes in here ...
    #
    pass

def main():
    args = parse_args()
    do_stuff(args)

if __name__ == '__main__':
    main()
This allows you to run your script directly, have an automatically generated script that behaves the same way, and also import the script and call do_stuff to re-use or test the actual functionality.
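For example, a test can then drive the core logic directly, without touching the command line at all (a sketch; my_module and the verbose/path namespace fields are assumed names for illustration):

import argparse

from my_module import do_stuff  # hypothetical module using the pattern above

def test_do_stuff():
    # Build the namespace by hand instead of going through parse_args().
    args = argparse.Namespace(verbose=True, path="/tmp/example")
    do_stuff(args)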
This blog post was mentioned in the comments: https://www.artima.com/weblogs/viewpost.jsp?thread=4829. It uses a default argument on main to allow dependency injection for testing. However, it is a very old post; the getopt library it uses has been superseded twice since then. The pattern above is superior and still allows dependency injection.
I would definitely prefer to see main take arguments rather than access sys.argv directly.
This makes the reuse of the main function by other Python modules much easier.
import sys

def main(arg):
    ...

if __name__ == "__main__":
    main(sys.argv[1])
Now, if I want to execute this module as a script from another module, I can just write (in my other module):

from main_script import main

main("use this argument")
If main uses sys.argv this is tougher.
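A common compromise between the two answers (my sketch, not from either answer) is an optional argv parameter that defaults to the real command line, so both direct execution and programmatic reuse work:

import sys

def main(argv=None):
    # Use the real command line only when no argv was injected.
    if argv is None:
        argv = sys.argv[1:]
    print(argv)

if __name__ == "__main__":
    main()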

Is there a way I can use mock() (or similar) to mock the date & time for a script I *invoke* from a unit test?

I have written some unit tests using unittest in Python. However, they do not simply test objects in the conventional way; rather, they invoke another Python script by calling it using Popen. This is by design: it's a command line utility, so I want to test it as a user would, which includes things such as command-line options. To be clear, both the unit tests and the script to be tested are written in Python (v3 to be precise).
The script I am testing makes heavy use of datetime.now(), and ideally I would like to mock that value somehow so that I can keep it constant. All the examples I've seen of doing this, though (e.g. this one using mock) assume some form of white-box testing.
Is there a way for me to do this?
Nothing prevents you from testing your CLI without using Popen. You just need to architect your code to make it possible.
Instead of having this:

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    # ... Add args
    ns = parser.parse_args()
Do this:
import argparse

def main(argv):
    parser = argparse.ArgumentParser()
    # ... Add args
    ns = parser.parse_args(argv[1:])  # argv[1:] is what parse_args defaults to

if __name__ == "__main__":
    import sys
    main(sys.argv)
Then, you can test the main function in isolation (just call main([...]) with a set of args you specify). Note that this should also work (with some adaptation) for other CLI frameworks.
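Once main is importable, the datetime usage can be patched in-process rather than across a Popen boundary. A minimal sketch, assuming the script under test lives in a hypothetical module myscript that does from datetime import datetime:

import unittest
from datetime import datetime
from unittest import mock

import myscript  # hypothetical module containing main()

class TestFrozenTime(unittest.TestCase):
    def test_now_is_fixed(self):
        fixed = datetime(2020, 1, 1, 12, 0, 0)
        # Patch the name as myscript sees it, not datetime globally.
        with mock.patch("myscript.datetime") as fake_dt:
            fake_dt.now.return_value = fixed
            myscript.main(["myscript"])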
Also, note that if you're indeed using argparse, you'll need to patch ArgumentParser() so that it doesn't call sys.exit when parsing fails.
An easy way to do this is to declare a ParsingError exception and patch ArgumentParser.error(self, message) with:

def error(self, message):
    raise ParsingError(message)
You can then use assertRaises in your tests.
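Put together, a test for the failure path might look like this (a sketch; ParsingError is the exception suggested above, and myscript.main is the assumed module from the earlier sketch):

import unittest
from unittest import mock

from myscript import main  # hypothetical module from the sketch above

class ParsingError(Exception):
    pass

def raising_error(self, message):
    raise ParsingError(message)

class TestBadArgs(unittest.TestCase):
    def test_unknown_flag(self):
        with mock.patch("argparse.ArgumentParser.error", raising_error):
            with self.assertRaises(ParsingError):
                main(["myscript", "--no-such-flag"])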

How to access plugin options within a test? (Python Nose)

We are trying to implement an automated testing framework using nose. The intent is to add a few command line options to pass into the tests, for example a hostname. We run these tests against a web app as integration tests.
So, we've created a simple plugin that adds an option to the parser:
import os
from nose.plugins import Plugin

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        self.hostname = options.hostname
The option is now available when we run nosetests, but I can't figure out how to use it within a test case. Is this possible? I can't find any documentation on how to access options or the configuration from within a test case.
Adding the command line arguments is purely for development/debugging purposes. We plan to use config files for our automated runs in Bamboo. However, when developing integration tests and also when debugging issues, it is nice to change the config on the fly. But we need to figure out how to actually use the options first... I feel like I'm just missing something basic, or I'm blind...
Ideally, we could extend the testconfig plugin to change how config arguments are passed, from this:
--tc=key:value
to:
--key=value
If there is a better way to do this then I'm all ears.
One shortcut is to access sys.argv within the test: it will contain the list of parameters passed to the nose executable, including the plugin ones. Alternatively, your plugin can add attributes to your tests, and you can refer to those attributes, but that requires more heavy lifting, similar to this answer.
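For example (a sketch; --hostname is the flag the plugin above registers):

import sys

def test_hostname_from_argv():
    # Scan the raw nose command line for the plugin's flag.
    hostname = None
    for i, arg in enumerate(sys.argv):
        if arg == "--hostname" and i + 1 < len(sys.argv):
            hostname = sys.argv[i + 1]
    print("testing against", hostname)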
So I've found out how to make this work:
import os
from nose.plugins import Plugin

case_options = None

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        global case_options
        case_options = options
With this in place, you can get at the options in your test case with:
from test_args import case_options
To solve the different config file issue, I've found you can use a setup.cfg file, written like an INI file, to pass in default command line parameters. You can also pass -c config_file.cfg to pick a different config. This should work nicely for what we need.
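For example, a setup.cfg along these lines should supply the default (the hostname value here is only a placeholder):

[nosetests]
hostname = staging.example.com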

Is there a way to add an already created parser as a subparser in argparse?

Normally, to add a subparser in argparse you have to do:
parser = ArgumentParser()
subparsers = parser.add_subparsers()
subparser = subparsers.add_parser('name')
The problem I'm having is I'm trying to add another command line script, with its own parser, as a subcommand of my main script. Is there an easy way to do this?
EDIT: To clarify, I have a file script.py that looks something like this:
import argparse

def initparser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo')
    parser.add_argument('--bar')
    return parser

def func(args):
    # args is a Namespace; this function does stuff with it
    ...

if __name__ == '__main__':
    initparser().parse_args()
So I can run this like:
python script.py --foo --bar
I'm trying to write a module app.py that's a command line interface with several subcommands, so i can run something like:
python app.py script --foo --bar
Rather than copying and pasting all of the initparser() logic over to app.py, I'd like to be able to use the parser I create with initparser() directly as a sub-parser. Is this possible?
You could use the parents parameter:

p = argparse.ArgumentParser()
s = p.add_subparsers()
ss = s.add_parser('script', parents=[initparser()], add_help=False)
p.parse_args('script --foo sst'.split())
ss is a parser that shares all the arguments defined for initparser. The add_help=False is needed on either ss or initparser so -h is not defined twice.
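Put together with the script.py from the question, app.py might look like this (a sketch; the dest='command' dispatch is one assumed way of routing to func):

# app.py
import argparse

from script import initparser, func

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command')
# Reuse script.py's parser wholesale; add_help=False avoids a duplicate -h.
subparsers.add_parser('script', parents=[initparser()], add_help=False)

if __name__ == '__main__':
    args = parser.parse_args()
    if args.command == 'script':
        func(args)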
You might want to take a look at the shlex module as it sounds to me like you're trying to hack the ArgumentParser to do something that it wasn't actually intended to do.
Having said that, it's a little difficult to figure out a good answer without examples of what it is, exactly, that you're trying to parse.
I think your problem can be addressed by a declarative wrapper for argparse. The one I wrote is called Argh. It helps with separating definition of commands (with all arguments-related stuff) from assembling (including subparsers) and dispatching.
This is a way old question, but I wanted to throw out another alternative. And that is to think in terms of inversion of control. By this I mean the root ArgumentParser would manage the creation of the subparsers:
# root_argparser.py
from argparse import ArgumentParser, Namespace
__ARG_PARSER = ArgumentParser('My Script')
__SUBPARSERS = __ARG_PARSER.add_subparsers(dest='subcommand')
__SUBPARSERS.required = True
def get_subparser(name: str, **kwargs) -> ArgumentParser:
return __SUBPARSERS.add_parser(name, **kwargs)
def parse_args(**kwargs) -> Namespace:
return __ARG_PARSER.parse_args(**kwargs)
# my_script.py
from argparse import ArgumentParser
from root_argparse import get_subparser
__ARG_PARSER = get_subparser('script')
__ARG_PARSER.add_argument('--foo')
__ARG_PARSER.add_argument('--bar')
def do_stuff(...):
...
# main.py
from root_argparse import parse_args
import my_script
if __name__ == '__main__':
args = parse_args()
# do stuff with args
Seems to work okay from some quick testing I did.

command line arg parsing through introspection

I'm developing a management script that does a fairly large amount of work via a plethora of command-line options. The first few iterations of the script have used optparse to collect user input and then just run down the page, testing the value of each option in the appropriate order, and doing the action if necessary. This has resulted in a jungle of code that's really hard to read and maintain.
I'm looking for something better.
My hope is to have a system where I can write functions in more or less normal python fashion, and then when the script is run, have options (and help text) generated from my functions, parsed, and executed in the appropriate order. Additionally, I'd REALLY like to be able to build django-style sub-command interfaces, where myscript.py install works completely separately from myscript.py remove (separate options, help, etc.)
I've found Simon Willison's optfunc, and it does a lot of this, but seems to just miss the mark: I want to write each OPTION as a function, rather than try to compress the whole option set into a huge string of options.
I imagine an architecture involving a set of classes for major functions, and each defined method of the class corresponding to a particular option in the command line. This structure provides the advantage of having each option reside near the functional code it modifies, easing maintenance. The thing I don't know quite how to deal with is the ordering of the commands, since the ordering of class methods is not deterministic.
Before I go reinventing the wheel: Are there any other existing bits of code that behave similarly? Other things that would be easy to modify? Asking the question has clarified my own thinking on what would be nice, but feedback on why this is a terrible idea, or how it should work would be welcome.
Don't waste time on "introspection".
Each "Command" or "Option" is an object with two sets of method functions or attributes:
1. Provide setup information to optparse.
2. Actually do the work.
Here's the superclass for all commands:

class Command(object):
    name = "name"

    def setup_opts(self, parser):
        """Add any options to the parser that this command needs."""
        pass

    def execute(self, context, options, args):
        """Execute the command in some application context with some options and args."""
        raise NotImplementedError
You create subclasses for Install and Remove and every other command you need.
Your overall application looks something like this:

commands = [
    Install(),
    Remove(),
]

def main():
    parser = optparse.OptionParser()
    for c in commands:
        c.setup_opts(parser)
    options, args = parser.parse_args()
    command = None
    for c in commands:
        if c.name.startswith(args[0].lower()):
            command = c
            break
    if command:
        status = command.execute(context, options, args[1:])
    else:
        logger.error("Command %r is unknown", args[0])
        status = 2
    sys.exit(status)
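A concrete subclass might then look like this (a sketch; the --prefix option is just an assumed example):

class Install(Command):
    name = "install"

    def setup_opts(self, parser):
        # Register only the options this command cares about.
        parser.add_option("--prefix", default="/usr/local")

    def execute(self, context, options, args):
        print("installing to", options.prefix)
        return 0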
The WSGI library werkzeug provides Management Script Utilities which may do what you want, or at least give you a hint how to do the introspection yourself.
from werkzeug import script

# actions go here
def action_test():
    "sample with no args"
    pass

def action_foo(name=2, value="test"):
    "do some foo"
    pass

if __name__ == '__main__':
    script.run()
Which will generate the following help message:
$ python /tmp/test.py --help
usage: test.py <action> [<options>]
       test.py --help

actions:
  foo:
    do some foo

    --name    integer    2
    --value   string     test

  test:
    sample with no args
An action is a function in the same module whose name starts with "action_" and which takes a number of arguments, where every argument has a default. The type of the default value specifies the type of the argument.
Arguments can then be passed by position or using --name=value from the shell.
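An invocation of the sketch above might then look like this (hypothetical values):

$ python test.py foo --name=3 --value=bar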