How to execute multiple class instances in Python with argparse

My project has several categories with multiple scripts inside, each script allows you to perform a unique task.
To run my project, I give my main.py arguments that I retrieve with argparse.
Example: ./main.py --category web --script S1_HTTP_HEADER --url https://qwerty.com --port 1234
However, I need to create an argument to run all my scripts at once.
Example: ./main.py --category web --all --url https://qwerty.com --port 1234
Currently, my main.py looks like this:
if args.category == "web":
    if args.script == 'S1_HTTP_HEADER':
        from scripts.WEB_S1_HTTPHEADER import RequestHeader
        make_request = RequestHeader(args.url, args.port)
        make_request.insert_value()
Do you have a solution to run all the scripts at once with only one argument?
For information, each script has a class that I have to instantiate with a URL and a PORT, and of course I have to import each class in my main.py beforehand.
Thanks!

I would define a custom parsing action that loads all the callables belonging to the given category.
The action class defines a mapping between categories and their associated callables.
Each entry is a tuple composed of the script name and the callable object.
The custom action stores an additional "scripts" attribute on the namespace as a list, empty by default.
The import mechanism needs to change a little too:
from scripts import WEB_S1_HTTPHEADER, WEB_S2_HTTPHEADER, WEB_S3_HTTPHEADER

class GetScripts(argparse.Action):
    """Custom action mapping a category to the list of its script callables."""

    SCRIPTS = {
        "web": [
            ("S1_HTTPHEADER", WEB_S1_HTTPHEADER.RequestHeader),
            ("S2_HTTPHEADER", WEB_S2_HTTPHEADER.RequestHeader),
            ("S3_HTTPHEADER", WEB_S3_HTTPHEADER.RequestHeader),
        ],
    }

    def __init__(self, option_strings, dest, **kwargs):
        super(GetScripts, self).__init__(option_strings, dest, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, values)
        if values:
            category = getattr(namespace, "category")
            setattr(namespace, "scripts", self.SCRIPTS.get(category, []))
Define the keyword argument "--all" associated with the custom action:
your_parser.add_argument("--all", nargs="?", default=False, const=True, action=GetScripts)
You can then iterate over the "scripts" argument to run each configured callable.
main.py now looks like:
if args.all:
    for _, func in args.scripts:
        make_request = func(args.url, args.port)
        make_request.insert_value()
Adding a new category or a new script does, however, mean updating the class attribute SCRIPTS.
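One caveat: the action reads "category" from the namespace at the moment --all is parsed, so --category must appear before --all on the command line, otherwise the lookup falls back to the empty list. A minimal sketch of the full parser wiring under that assumption (the argument set is taken from the question; the choices list is illustrative):

import argparse

parser = argparse.ArgumentParser()
# --category must precede --all: GetScripts reads "category" from the
# namespace when --all is encountered during parsing.
parser.add_argument("--category", choices=["web"], required=True)
parser.add_argument("--script")
parser.add_argument("--url")
parser.add_argument("--port", type=int)
parser.add_argument("--all", nargs="?", default=False, const=True, action=GetScripts)
args = parser.parse_args()

Invoked as in your example: ./main.py --category web --all --url https://qwerty.com --port 1234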
Does that answer your need?

Related

Let each submodule define its own subparser

I have to run the main file of a relatively large project, which depends on the rest of the project. The project structure looks like this:
main.py
opts.py
models/
    model1.py
    model2.py
    ...
schedulers/
    scheduler1.py
    scheduler2.py
    ...
...
The problem is passing arguments to each component (using argparse). A simple way would be to define all parameters for every component in a single place; this has worked for me so far (in opts.py), but I would like to make something more elegant. In my parse function for each component, parse_models or parse_scheduler, I would like to iterate through each submodule of models and schedulers and let each one define its own arguments by calling a define_arguments function, in which it creates its own subparser.
All in all, how do I iterate through the submodules and call their define_arguments function from within opts.py?
You can iterate over all the Python files using the glob module. You can find the correct path with the parent module's __path__ attribute. Import the modules using importlib.import_module. Each imported module then contains the define_arguments function, to which you can pass a per-submodule parser to define the arguments:
from glob import glob
import os, importlib

def load_submodule_parsers(parent_module, parser, help=None):
    if help is None:
        help = parent_module.__name__ + " modules"
    # __path__ is a list; use its first entry to locate the package directory
    modules = glob(os.path.join(parent_module.__path__[0], "*.py"))
    subparsers = parser.add_subparsers(help=help)
    for module_file in modules:
        module_name = os.path.basename(module_file)[:-3]
        if module_name == "__init__":
            continue
        # relative import of the submodule from the parent package
        module = importlib.import_module("." + module_name, package=parent_module.__name__)
        if "define_arguments" not in module.__dict__:
            raise ImportError(parent_module.__name__ + " submodule '" + module_name + "' does not have required 'define_arguments' function.")
        subparser = subparsers.add_parser(module_name)
        module.define_arguments(subparser)
Pass the function the parent module object:
import argparse, models, schedulers
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
models_parser = subparsers.add_parser("models")
load_submodule_parsers(models, models_parser)
schedulers_parser = subparsers.add_parser("schedulers")
load_submodule_parsers(schedulers, schedulers_parser)
Untested code, but I think you can refine it from here.
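For completeness, each submodule would then expose a define_arguments function that receives its own subparser. A minimal sketch of what models/model1.py might look like (the argument names are illustrative assumptions):

def define_arguments(parser):
    # each submodule declares only the options it cares about
    parser.add_argument("--hidden-size", type=int, default=128)
    parser.add_argument("--dropout", type=float, default=0.1)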

Custom Ansible module is giving an 'extra params' error

I am trying to implement a hostname-like module, and my target machine is an Amazon EC2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )
    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )
    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'
    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True
    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in @Simon's answer.

Group Django commands to a folder inside the same app

Is it allowed to group custom Django commands into separate folders inside the same Django app?
I have a lot of them and wanted to group them logically by purpose. I created folders, but Django can't find the commands.
Maybe I'm trying to run them the wrong way. I tried:
python manage.py process_A_related_data
the same plus imported all commands in __init__.py
python manage.py folderA process_A_related_data
python manage.py folderA.process_A_related_data
python manage.py folderA/process_A_related_data
Got following error:
Unknown command: 'folderA/process_A_related_data'
Type 'manage.py help' for usage.
I think you can create a basic custom command which will run other commands from the relevant folders. Here is an approach you can take:
First make a folder structure like this:
management/
    commands/
        folder_a/
            process_A_related_data.py
        folder_b/
            process_A_related_data.py
        process_data.py
Then inside process_data.py, update the command like this:
from django.core import management
from django.core.management.base import BaseCommand
import importlib

class Command(BaseCommand):
    help = 'Folder Process Commands'

    def add_arguments(self, parser):
        parser.add_argument('-u', '--use', type=str, nargs='?', default='folder_a.process_A_related_data')

    def handle(self, *args, **options):
        try:
            folder_file_module = options['use'] if options['use'].startswith('.') else '.' + options['use']
            command = importlib.import_module(folder_file_module, package='your_app.management.commands')
            management.call_command(command.Command())
        except ModuleNotFoundError as e:
            self.stderr.write(f"No relevant folder found: {e.name}")
Here I am using the call_command method to call other management commands.
Then run commands like this:
python manage.py process_data --use folder_a.process_A_related_data
Finally, if you want to run commands like python manage.py folder_a.process_A_related_data, then you probably need to change manage.py, like this:
import re
...
try:
    from django.core.management import execute_from_command_line
except ImportError as exc:
    raise ImportError(
        "Couldn't import Django. Are you sure it's installed and "
        "available on your PYTHONPATH environment variable? Did you "
        "forget to activate a virtual environment?"
    ) from exc
if re.search('folder_[a-z].*', sys.argv[-1]):
    new_arguments = sys.argv[:-1] + ['process_data', '--use', sys.argv[-1]]
    execute_from_command_line(new_arguments)
else:
    execute_from_command_line(sys.argv)
You should be able to partition the code by using mixins (I have not tried this in this context, though).
A standard management command looks like:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do stuff
        ...
Which can probably be replaced by a "stub" in app/management/commands:
from wherever.commands import FooCommandMixin
from django.core.management.base import BaseCommand

class Command(FooCommandMixin, BaseCommand):
    # autogenerated -- do not put any code in here!
    pass
and in wherever/commands
class FooCommandMixin(object):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do the work
        ...
It would not be hard to write a script to go through a list of file names or paths (using glob.glob) using re.findall to identify appropriate class declarations, and to (re)generate a matching stub for each in the app's management/commands folder.
Also (or instead): Python's argparse allows for the definition of sub-commands, so you should be able to define a command that works like
./manage.py foo bar --aa --bb something --cc
and
./manage.py foo baz --bazzy a b c
where the syntax after foo is determined by the next word (bar or baz or ...). Again, I have no experience of using subcommands in this context; a sketch follows below.
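A minimal sketch of that idea, assuming a reasonably recent Django version whose CommandParser cooperates with argparse sub-parsers (the subcommand names and options are illustrative):

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'foo command with sub-commands'

    def add_arguments(self, parser):
        subparsers = parser.add_subparsers(dest='subcommand')
        bar = subparsers.add_parser('bar')
        bar.add_argument('--aa', action='store_true')
        baz = subparsers.add_parser('baz')
        baz.add_argument('--bazzy', nargs='*')

    def handle(self, *args, **options):
        # dispatch on the chosen subcommand
        if options['subcommand'] == 'bar':
            self.stdout.write('running bar')
        elif options['subcommand'] == 'baz':
            self.stdout.write('running baz')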
I found no mention of support for this feature in the release notes; it looks like this is still not supported as of Django 3.0. I would suggest using meaningful names for your files to help you tell them apart. You could always come up with a naming convention!
A workaround could be to create a specific Django "satellite" app for each group of management commands.
In recent versions of Django, the requirements for a Python module to be an app are minimal: you won't need to provide any fake models.py or other specific files as happened in the old days.
While far from perfect from a stylistic point of view, you still gain a few advantages:
no need to hack the framework at all
python manage.py will list the commands grouped by app
you can control the grouping by providing suitable names to the apps
you can use these satellite apps as container for specific unit tests
I always try to avoid fighting against the framework, even when this means compromising, and sometimes accept its occasional design limitations.

Initialize objects with classes parsed with argparse

I wrote a machine learning script which I want to control from the command line. I have already parsed all the options, for example --optimize 400 to perform 400 iterations over a RandomizedSearchCV grid.
However, I'm struggling with one thing: I want to choose my estimator, for example GradientBoostingRegressor() or Lasso(), from the command line. I tried two things:
def cli():
    p = arg.ArgumentParser(description="Perform ML regression.")
    p.add_argument("-e", "--estimator", default=Lasso(), help="Choose between Lasso() and GradientBoostingRegressor()")
    return p.parse_args()

args = cli()
estimator = args.estimator
But when I try to start the program with:
python script.py -e GradientBoostingRegressor()
I get errors because of the "()", and also without the parentheses, because the value gets interpreted as a string.
Another thing I tried is:
def cli():
    p = arg.ArgumentParser(description="Perform ML regression.")
    group = p.add_mutually_exclusive_group()
    group.add_argument("-SVR", nargs='?', default=SVR(),
                       help="Choose Support Vector Regression")
    group.add_argument("-GBR", nargs='?', default=GradientBoostingRegressor())
    return p.parse_args()

args = cli()
args = cli()
But now I don't know how to access the estimator, and it also seems that when I call the program like this:
python script.py -SVR
the namespace "args" holds SVR=None and GBR=GradientBoostingRegressor(default_GBR_options), which is exactly the opposite of what I want.
Ideally I could choose between -SVR and -GBR on the command line, the parser would pass it just like my other options, and I could initialize an object like this:
estimator = args.estimator
I hope somebody has some information on how to do that.
Thank you very much!
Arguments are always just strings. You need to parse the string to get a function which you can call.
import argparse

def Lasso():
    print("Lasso!")

def GradientBoostingRegressor():
    print("GradientBoostingRegressor!")

class GetEstimator(argparse.Action):
    estimators = {
        "Lasso": Lasso,
        "GBR": GradientBoostingRegressor,
    }

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, self.estimators[values])

p = argparse.ArgumentParser()
p.add_argument("-e", "--estimator", action=GetEstimator, default=Lasso, choices=GetEstimator.estimators.keys())
args = p.parse_args()
args.estimator()
(This replaces a previous answer that used the type parameter to map a string argument to a function. I misunderstood how type and choices interact.)
While @chepner's use of type is a nice use of argparse, the approach can be difficult to get right and understand.
As written it raises an error in the add_argument method:
Traceback (most recent call last):
File "stack50799294.py", line 18, in <module>
p.add_argument("-e", "--estimator", type=estimators.get, default=Lasso, choices=estimators.keys())
File "/usr/lib/python3.6/argparse.py", line 1346, in add_argument
type_func = self._registry_get('type', action.type, action.type)
File "/usr/lib/python3.6/argparse.py", line 1288, in _registry_get
return self._registries[registry_name].get(value, default)
TypeError: unhashable type: 'dict'
It's testing that the type parameter is either an item in the registry, or that it's a valid function. I'm not sure why it's raising this error.
def mytype(astr):
    return estimators.get(astr)
works better with type=mytype. But there's a further level of difficulty: choices is the keys(), i.e. strings, but mytype returns a class, producing an error like:
0942:~/mypy$ python3 stack50799294.py -e GBR
usage: stack50799294.py [-h] [-e {Lasso,GBR}]
stack50799294.py: error: argument -e/--estimator: invalid choice: <class '__main__.GradientBoostingRegressor'> (choose from 'Lasso', 'GBR')
To avoid those difficulties, I'd suggest keeping the argument-to-class mapping separate from the parsing. This should be easier to understand and to expand:
import argparse

class Lasso(): pass
class GradientBoostingRegressor(): pass

# Map an easy-to-type string to each class
estimators = {
    'Lasso': Lasso,
    'GBR': GradientBoostingRegressor
}

p = argparse.ArgumentParser(description="Perform ML regression.")
p.add_argument("-e", "--estimator", default='Lasso', choices=estimators.keys())
args = p.parse_args()
print(args)

estimator = estimators.get(args.estimator, None)
if estimator is not None:
    print(type(estimator()))
samples:
0946:~/mypy$ python3 stack50799294.py -e GBR
Namespace(estimator='GBR')
<class '__main__.GradientBoostingRegressor'>
0946:~/mypy$ python3 stack50799294.py
Namespace(estimator='Lasso')
<class '__main__.Lasso'>
0946:~/mypy$ python3 stack50799294.py -e Lasso
Namespace(estimator='Lasso')
<class '__main__.Lasso'>
0946:~/mypy$ python3 stack50799294.py -e lasso
usage: stack50799294.py [-h] [-e {Lasso,GBR}]
stack50799294.py: error: argument -e/--estimator: invalid choice: 'lasso' (choose from 'Lasso', 'GBR')
const parameter
You could use store_const to choose between 2 classes, a default and a const:
parser.add_argument('-e', action='store_const', default=Lasso(), const=GradientBoostingRegressor())
but that can't be generalized to more. nargs='?' provides a 3-way choice: default, const, and user-provided. But there's still the problem of converting the command-line string to a class object.
The subparsers documentation, https://docs.python.org/3/library/argparse.html#sub-commands, shows how set_defaults can be used to set functions or classes, but to use this you have to define a subparser for each choice.
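A minimal sketch of that set_defaults pattern, reusing the estimator classes from above (the subcommand names are illustrative):

import argparse

class Lasso(): pass
class GradientBoostingRegressor(): pass

p = argparse.ArgumentParser(description="Perform ML regression.")
sub = p.add_subparsers(dest='command')

# each subcommand binds one class to args.estimator
lasso_p = sub.add_parser('lasso')
lasso_p.set_defaults(estimator=Lasso)
gbr_p = sub.add_parser('gbr')
gbr_p.set_defaults(estimator=GradientBoostingRegressor)

args = p.parse_args(['gbr'])
estimator = args.estimator()    # instantiate the chosen class
print(type(estimator))          # <class '__main__.GradientBoostingRegressor'>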
In general it's better to start with the simple argparse approach, accepting strings and string choices, and doing the mapping after. Using more argparse features can come later.
get error
@chepner's error has to do with how Python handles the d.get bound method: a bound method's hash is derived from the hash of its instance, and a dict is unhashable, so d.get can't be used as a dictionary key:
In [444]: d = {}
In [445]: d.get(d.get)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-445-c6d679ba4e8d> in <module>()
----> 1 d.get(d.get)
TypeError: unhashable type: 'dict'
In [446]: def foo(astr):
...: return d.get(astr)
...:
...:
In [447]: d.get(foo)
That could be viewed as a basic Python quirk, or an argparse bug, but a user-defined function or lambda is an easy workaround.
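For instance, wrapping the lookup in a lambda sidesteps the issue (a sketch reusing the estimators dict from above; choices is omitted, since it would be compared against the converted value):

# a plain function or lambda is hashable, so argparse's registry lookup works
p.add_argument("-e", "--estimator", type=lambda astr: estimators.get(astr))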
Maybe you could separate input from functionality: first collect the input from the user, then run the functionality the user requested according to that input.
For example, if the user ran:
python script.py -e GradientBoostingRegressor
you would do:
if args.estimator == "GradientBoostingRegressor":
    # do stuff...

Can I add ini style configuration to pytest suites?

I am using pytest to run tests in multiple environments, and I wanted to include that information (ideally) in an ini-style config file. I would also like to be able to override parts or all of the configuration at the command line. I tried using the hook pytest_addoption in my conftest.py like so:
def pytest_addoption(parser):
    parser.addoption("--hostname", action="store", help="The host")
    parser.addoption("--port", action="store", help="The port")

@pytest.fixture
def hostname(request):
    return request.config.getoption("--hostname")

@pytest.fixture
def port(request):
    return request.config.getoption("--port")
Using this I can add the configuration info at the command line, but not in a config file. I also tried adding
[pytest]
addopts = --hostname host --port 311
to my pytest.ini file, but that didn't work. Is there a way to do this without building my own plugin? Thanks for your time.
The parser object does have an addini method as well that you can use to specify configuration options through an ini file.
Here is the documentation for it: https://pytest.org/latest/writing_plugins.html?highlight=addini#_pytest.config.Parser.addini
addini(name, help, type=None, default=None)
Registers an ini-file option.
name: name of the ini-variable
type: type of the variable, can be pathlist, args, linelist or bool.
default: default value if no ini-file option exists but is queried.
The value of ini-variables can be retrieved via a call to config.getini(name).
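A minimal sketch of combining the two in conftest.py, so the ini file provides defaults and the command line overrides them (the fallback logic is an assumption of this sketch, not built-in pytest behavior):

# conftest.py
import pytest

def pytest_addoption(parser):
    # command-line options (take precedence when given)
    parser.addoption("--hostname", action="store", default=None, help="The host")
    parser.addoption("--port", action="store", default=None, help="The port")
    # ini-file options (used as fallback)
    parser.addini("hostname", help="The host", default="localhost")
    parser.addini("port", help="The port", default="311")

@pytest.fixture
def hostname(request):
    # command line wins; otherwise fall back to the ini value
    return request.config.getoption("--hostname") or request.config.getini("hostname")

@pytest.fixture
def port(request):
    return request.config.getoption("--port") or request.config.getini("port")

with a pytest.ini such as:

[pytest]
hostname = myhost
port = 311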
