I wonder, is it possible to pass behave arguments, e.g. "-D environment"? By default they are taken from the behave file.
Maybe there is some way to keep each configuration in a different file? Many behave files, or something such as behave "path to file with arguments"?
At this point I figured out that I could put the various configurations into bash scripts containing "#!/bin/bash behave ...".
I am asking because I want to easily manage my configurations when I run "behave -..." without editing many arguments.
I think you could take advantage of the custom JSON configuration files described in the behave "Advanced Cases" section: https://behave.readthedocs.io/en/latest/new_and_noteworthy_v1.2.5.html#advanced-cases
From the documentation:
# -- FILE: features/environment.py
import json
import os.path

def before_all(context):
    """Load and update userdata from JSON configuration file."""
    userdata = context.config.userdata
    configfile = userdata.get("configfile", "userconfig.json")
    if os.path.exists(configfile):
        assert configfile.endswith(".json")
        more_userdata = json.load(open(configfile))
        context.config.update_userdata(more_userdata)
        # -- NOTE: Reapplies userdata_defines from command-line, too.
So, if you would like to use a custom config specified at the command line, you could run it as, for example:
behave -D conffile=myconfig.json
Then I would parametrize the lookup of the config file name to something like:
configfile = userdata.get("conffile", "userconfig.json")
so the file given with -D conffile=... wins, with userconfig.json as the fallback.
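Putting that together, the merge logic can be factored into a plain function so it is easy to exercise outside behave; a minimal sketch (the function name load_userdata_config and its default file name are my own choices, not from the behave docs):

```python
import json
import os.path

def load_userdata_config(userdata, default="userconfig.json"):
    """Merge key/value pairs from a JSON config file into a userdata dict.

    The file name comes from the -D conffile=... option when given,
    otherwise the default is used; a missing file is silently skipped.
    """
    configfile = userdata.get("conffile", default)
    if os.path.exists(configfile):
        assert configfile.endswith(".json")
        with open(configfile) as f:
            userdata.update(json.load(f))
    return userdata

# In features/environment.py this would be called from the hook:
# def before_all(context):
#     load_userdata_config(context.config.userdata)
```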
I have TRACE32 installed on the C drive and have hard-coded that directory in my code. If another user runs this code on their system, it does not work, because they installed the application in a different location. How can I make this directory generic and dynamic so that it works for all users?
You have multiple possibilities. Before explaining them, some generic tips:
Make the TRACE32 system path configurable, not a path inside the installation. In your case this would be r"C:\T32". This path is called t32sys or T32SYS.
Make sure you use os.path.join to concatenate your path components, so it works on the user's operating system: os.path.join(r"C:\T32", "bin", "windows64")
Command line arguments using argparse. This is the simplest solution, which requires the user to start the Python script like this: python script.py --t32sys="C:\t32".
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--t32sys", help="TRACE32 system directory.")
args = parser.parse_args()
t32sys = args.t32sys  # Namespace attributes, not subscripting
Instead of command line parameters you could also use a configuration file. For this you can use the built-in configparser module. This has the advantage that the user doesn't need to specify the directory as a command line argument, but the disadvantage that the user needs to be aware of the configuration file.
Configuration file (example.ini):
[DEFAULT]
t32sys = C:\T32
import configparser
parser = configparser.ConfigParser()
parser.read("example.ini")
args = parser["DEFAULT"]
t32sys = args["t32sys"]
Environment variables using os.environ. T32SYS is an environment variable often used for this, but it is not guaranteed to be set, so you have to tell users to set the variable before using your tool. This approach has the advantage of working in the background, but in my opinion it is also a bit obscure. I'd only use it in combination with argparse or configparser, as a fallback that those can override.
import os
t32sys = os.environ.get('T32SYS')
You can of course combine multiple ways with fallbacks / overrides.
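As an illustration of such a combination, here is a sketch that resolves t32sys with the precedence command line > configuration file > environment variable (the helper name resolve_t32sys is my own; adjust file and option names to your project):

```python
import argparse
import configparser
import os

def resolve_t32sys(argv=None, inifile="example.ini"):
    """Resolve the TRACE32 system path.

    Precedence: the --t32sys command line flag, then the [DEFAULT]
    section of the ini file, then the T32SYS environment variable
    (None if nothing is set anywhere).
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--t32sys", help="TRACE32 system directory.")
    args = parser.parse_args(argv)
    if args.t32sys:
        return args.t32sys

    config = configparser.ConfigParser()
    config.read(inifile)  # silently ignores a missing file
    if "t32sys" in config.defaults():
        return config.defaults()["t32sys"]

    return os.environ.get("T32SYS")
```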
I'm writing my first python command line tool using docopt and have run into an issue.
My structure is like this:
Usage:
my-tool configure
my-tool [(-o <option> | --option <option>)]
...
I'm trying to find a way to run my-tool -o foo-bar first, and then optionally pass the value 'foo-bar' into my configure function if I run my-tool configure next.
In pseudocode, that translates to this:
def configure(option=None):
    print option  # with the above inputs this should print 'foo-bar'

def main():
    if arguments['configure']:
        configure(option=arguments['<option>'])
        return
    ...
Is there a way to get this working without changing the argument structure?
I'm looking for a way to avoid my-tool configure [(-o <option> | --option <option>)]
Since you run this in 2 different invocations, it might be best to store the values in some sort of config/JSON file that is cleared each time you run "configure".
import json
import sys
from optparse import OptionParser

def configure(config):
    print config.get("option")  # do something with the options stored in the file
    # clear the config file once configure has consumed it
    with open("CONFIG_FILE.JSON", "w") as f:
        json.dump({}, f)

def main():
    # load the config file saved by the previous run
    with open("CONFIG_FILE.JSON", "r") as f:
        config = json.load(f)
    # run configure only when asked, supplying the saved options to it
    if len(sys.argv) > 1 and sys.argv[1] == "configure":
        configure(config)
        return
    # parse options example (a bit raw, and should be done in a different method anyway)
    parser = OptionParser()
    parser.add_option("-o", "--option", dest="option")
    opts, _ = parser.parse_args()
    config["option"] = opts.option
    with open("CONFIG_FILE.JSON", "w") as f:
        json.dump(config, f)
I tried writing a script to help you, but it was a bit beyond my (current) newbie skill level.
However, the tools/approach I started with may be able to help: try using sys.argv (which holds the list of all arguments the script was run with), and then some regular expressions (import re ...).
I hope this helps someone else help you. (:
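For what it's worth, the sys.argv-plus-re idea can be sketched like this (the helper name grab_args is made up; docopt itself is still the cleaner route):

```python
import re

def grab_args(argv):
    """Collect --key=value pairs from a sys.argv-style list into a dict."""
    opts = {}
    for arg in argv:
        m = re.match(r"--(\w+)=(.*)$", arg)
        if m:
            opts[m.group(1)] = m.group(2)
    return opts
```

For example, grab_args(["my-tool", "--option=foo-bar"]) yields {"option": "foo-bar"}.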
I am trying to make an exclusion in my argparse parser. Basically what I want is to prevent the --all option and the filenames argument from being parsed together (which I think I achieved).
But I also want another check where, if I only pass python reader.py read --all, the filenames argument gets populated with all the txt files in the current directory.
So far I've come up with following code:
import argparse
import glob

parser = argparse.ArgumentParser()
subcommands = parser.add_subparsers(title='subcommands')
read_command = subcommands.add_parser('read')
read_command.add_argument('filenames', type=argparse.FileType(), nargs='+')
read_command.add_argument('-a', '--all', action='store_true')
parsed = parser.parse_args()
if parsed.all and parsed.filenames:
    raise SystemExit
if parsed.all:
    parsed.filenames = glob.glob('*.txt')
print parsed
The problem is that if I try to run python reader.py read --all, I get the error "error: too few arguments" because of the filenames argument.
Is there a way to have this work like I want to without creating subcommand to read, for example python reader.py read all?
How can I access error messages in argparse? I'd like to have some default message that would say that filenames and --all can't be combined instead of SystemExit error.
Also I want to avoid using add_mutually_exclusive_group because this is just a snippet of my real parser where this approach wouldn't work (already checked in other SO topic).
I've heard about custom actions but it would really help to see example on it.
If filenames gets nargs='*', it should allow you to use --all alone. parsed.filenames will then be an empty list [], which you can replace with the glob.
You could also test giving that argument a default derived from the glob - but see my caution regarding FileType.
Do you want the parser to open all the filenames you give it? Or would you rather open the files later yourself (preferably in a with context)? FileType opens the files (creating them if necessary), and in the process checks their existence (which is nice), but leaves it up to you (or the program exit) to close them.
The documentation talks about issuing error messages yourself, and how to change them. parser.error('my message') will display the usage and the message, and then exit.
if parsed.all and parsed.filenames:
    parser.error("Do you want to read ALL or just %s?" % parsed.filenames)
It is also possible to trap SystemExit exceptions in a try/except clause.
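Putting those pieces together (nargs='*', the glob fallback, and the parser's error method), a runnable sketch, with FileType dropped per the caution above:

```python
import argparse
import glob

parser = argparse.ArgumentParser(prog='reader.py')
subcommands = parser.add_subparsers(title='subcommands')
read_command = subcommands.add_parser('read')
# nargs='*' makes filenames optional, so "read --all" alone parses cleanly
read_command.add_argument('filenames', nargs='*')
read_command.add_argument('-a', '--all', action='store_true')

parsed = parser.parse_args(['read', '--all'])
if parsed.all and parsed.filenames:
    read_command.error("--all cannot be combined with explicit filenames")
if parsed.all:
    # populate filenames with every txt file in the current directory
    parsed.filenames = glob.glob('*.txt')
```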
We are trying to implement an automated testing framework using nose. The intent is to add a few command line options to pass into the tests, for example a hostname. We run these tests against a web app as integration tests.
So, we've created a simple plugin that adds an option to the parser:
import os
from nose.plugins import Plugin

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        self.hostname = options.hostname
The option is now available when we run nosetests... but I can't figure out how to use it within a test case. Is this possible? I can't find any documentation on how to access the options or the configuration from within a test case.
Adding the command line arguments is purely for development/debugging purposes. We plan to use config files for our automated runs in bamboo. However, when developing integration tests and also debugging issues, it is nice to change the config on the fly. But we need to figure out how to actually use the options first... I feel like I'm just missing something basic, or I'm blind...
Ideally we could extend the testconfig plugin to make passing in config arguments from this:
--tc=key:value
to:
--key=value
If there is a better way to do this then I'm all ears.
One shortcut is to access sys.argv (import sys) within the test - it will have the list of parameters passed to the nose executable, including the plugin ones. Alternatively, your plugin can add attributes to your tests, and you can refer to those attributes - but that requires more heavy lifting, similar to this answer.
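A minimal sketch of that shortcut, factored into a function so the lookup is testable (the name and flag handling are my own):

```python
import sys

def option_from_argv(name, argv=None):
    """Return the value of a command line option from an argv list.

    Handles both "--hostname host1" and "--hostname=host1" forms;
    returns None when the option is absent.
    """
    argv = sys.argv if argv is None else argv
    for i, arg in enumerate(argv):
        if arg == name and i + 1 < len(argv):
            return argv[i + 1]
        if arg.startswith(name + "="):
            return arg.split("=", 1)[1]
    return None

# Inside a test case:
# hostname = option_from_argv("--hostname")
```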
So I've found out how to make this work:
import os
from nose.plugins import Plugin

case_options = None

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        global case_options
        case_options = options
Using this, you can get the options in your test case with:
from test_args import case_options
To solve the different config file issues, I've found you can use a setup.cfg file written like an INI file to pass in default command line parameters. You can also pass in a -c config_file.cfg to pick a different config. This should work nicely for what we need.
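For example, a minimal setup.cfg sketch (the [nosetests] section name is what nose reads for defaults; the hostname key assumes the plugin option defined above):

```ini
[nosetests]
verbosity = 2
hostname = staging.example.com
```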
From the code that runs the tests using nose, how do I retrieve the list of config files that were passed on the command line (without parsing the args myself, since nose should expose these values somewhere), as in,
nosetests -c default.ini -c staging.ini
which would then result in,
[default.ini, staging.ini]
I can't seem to find these values on the nose.config object.
Seems like your problem is that you're naming your configuration files differently from what nose expects by default.
From nose.config:
config_files = [
    # Linux users will prefer this
    "~/.noserc",
    # Windows users will prefer this
    "~/nose.cfg"
]

def user_config_files():
    """Return path to any existing user config files
    """
    return filter(os.path.exists,
                  map(os.path.expanduser, config_files))

def all_config_files():
    """Return path to any existing user config files, plus any setup.cfg
    in the current working directory.
    """
    user = user_config_files()
    if os.path.exists('setup.cfg'):
        return user + ['setup.cfg']
    return user
The short of this is that nose looks for default configuration files named ~/.noserc or ~/nose.cfg. If your files are not named like this, nose will not pick them up, and you will have to specify the configuration file names manually on the command line, as you are doing.
Now say, for instance, that you have some object config which is an instance of nose.config.Config; then the best way to get your config file names would be:
>>> from nose.config import Config
>>> c = Config()
>>> c.configure(argv=["nosetests", "-c", "foo.txt"])
>>> c.options.files
['foo.txt']