So our test environments dynamically change depending on the release that we are working on.
For example:
for the abc release, the test environment URL would be feature-abc.mycompany.com; for the xyz release it would be feature-xyz.mycompany.com; and so forth.
The same goes for staging: release-abc.mycompany.com, release-xyz.mycompany.com, etc.
Production is just static URL: platform.mycompany.com
That said, I need a way to specify which URL my tests should be executed against, using the behave BDD framework for Python.
Specifically, I'm looking for the equivalent of the functionality Cucumber for Ruby provides via the features/support/env.rb file: define multiple URLs (qa, staging, production, etc.) so that on the command line (terminal) I can just name an environment such as qa and have it map to feature-<release>.mycompany.com.
Something like: How can I test different environments (e.g. development|test|production) in Cucumber?
OK, so for this there is a pull request (PR #243) in behave's GitHub repo to enable exactly this.
In the meantime, as a workaround, they suggested using os.getenv('variable_name', 'default_value'), and then at the command line running: export variable_name='another_value' ; behave
For more details, see our short thread:
https://github.com/behave/behave/issues/250
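As a minimal sketch of that workaround in a behave environment.py (the TEST_URL variable name and fallback URL are illustrative, not part of behave):

```python
# environment.py -- pick the target URL from an environment variable.
import os

def before_all(context):
    # Falls back to the QA-style URL when TEST_URL is not exported in the shell.
    context.base_url = os.getenv("TEST_URL", "https://feature-abc.mycompany.com")
```

You would then run something like: export TEST_URL='https://release-abc.mycompany.com' ; behave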
behave-1.2.5 introduced the userdata concept.
behave -D BUILD_STAGE=develop …
Load the corresponding configuration for this stage in the before_all() hook.
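A hypothetical before_all() sketch using the userdata concept; the stage names and URLs below are examples matching the question, not behave defaults:

```python
# environment.py -- map a userdata stage name to a base URL.
STAGE_URLS = {
    "develop": "https://feature-abc.mycompany.com",
    "staging": "https://release-abc.mycompany.com",
    "production": "https://platform.mycompany.com",
}

def before_all(context):
    # behave -D BUILD_STAGE=staging puts "staging" into context.config.userdata.
    stage = context.config.userdata.get("BUILD_STAGE", "develop")
    context.base_url = STAGE_URLS[stage]
```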
Related
Are there any best-practices for config-file documentation, especially for python?
Particularly in scientific computing, it is common to use a config file as the input to control a batch processing job (such as a simulation), and expect the user to customise a substantial portion of the config for their scenario. (The config also likely selects among different processing modules, each possessing different suites of config fields.) Thus, the user ought to know: what each setting means or effects; which settings are unused (in which circumstances); what are the default values (and the permissible values or ranges); etc.
I've found incomplete config file docs to be common. The fundamental problem seems to be that if the docs are maintained separately from the code, they grow out of sync. (This seems less of a problem with API docs due to standard practices involving colocated docstrings and autogeneration from function signatures/argspec.) For example if the standard python configparser is used once to parse the config file, then the code for accessing individual attributes (and implicitly determining the config schema) may still be spread out across the entire code base (and perhaps only available at runtime rather than when building docs).
Further thoughts:
Is it bad practice to replace a config file (yaml or similar) with a user-customised python script (so as to only need API docs)?
Distribution of a well commented example config file (that is also used in automatic tests): how to maintain if different scenarios duplicate large sections but need some completely different fields?
Can a single schema be maintained, both for use in code (to help parse, validate, and set defaults) and to generate docs somehow?
Is there a human readable/writeable way of (des)serialising the state of some (sub)class instance that represents a new batch process (so that config is covered by existing docs)?
Personally, I like to use the argparse module for configuration, and read the default value for each setting from an environment variable. That centralizes the settings and documentation in one place, and allows the user to either tweak settings on the command line or set and forget them in environment variables. Be careful about putting passwords on the command line, though, because other users can probably see your command line arguments in the process list.
Here's an example that uses argparse and environment variables:
import os
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter, SUPPRESS


def parse_args(argv=None):
    parser = ArgumentParser(description='Watch the raw data folder for new runs.',
                            formatter_class=ArgumentDefaultsHelpFormatter)
    parser.add_argument(
        '--kive_server',
        default=os.environ.get('MICALL_KIVE_SERVER', 'http://localhost:8000'),
        help='server to send runs to')
    parser.add_argument(
        '--kive_user',
        default=os.environ.get('MICALL_KIVE_USER', 'kive'),
        help='user name for Kive server')
    parser.add_argument(
        '--kive_password',
        default=SUPPRESS,  # keep the password out of --help output
        help='password for Kive server (default not shown)')
    args = parser.parse_args(argv)
    if not hasattr(args, 'kive_password'):
        # With SUPPRESS, the attribute only exists if the flag was passed.
        args.kive_password = os.environ.get('MICALL_KIVE_PASSWORD', 'kive')
    return args
Setting those environment variables can be a bit confusing, particularly for system services. If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets. Environment values can be viewed by any user with systemctl show.
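As an illustrative sketch (the unit name and file paths are invented), the difference looks like this in a service unit:

```ini
# /etc/systemd/system/raw-data-watcher.service (illustrative)
[Service]
# Visible to any user via `systemctl show` -- fine for non-secret settings:
Environment=MICALL_KIVE_SERVER=http://localhost:8000
# Secrets belong in a root-readable file instead:
EnvironmentFile=/etc/raw-data-watcher/secrets.env
```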
I usually make the default values useful for a developer running on their workstation, so they can start development without changing any configuration.
Another option is to put the configuration settings in a settings.py file, and just be careful not to commit that file to source control. I have often committed a settings_template.py file that users can copy.
If your settings are so complicated/flexible that environment variables or a settings file get messy, then I would convert the project to a library with an API. Instead of settings, users then write a script that calls your API. You don't have to go through the effort of hosting your library on PyPI, either. pip can install from a GitHub repository, for example.
I am trying to use PyCharm for unit testing (with unittest), and am able to make it work: the test runner nicely shows the list of test cases and nested test functions.
However, once the tests have been discovered, I cannot find any way to (re)run a specific test function: the only button available will run the whole list of tests, and right clicking on a single test function doesn't show any meaningful action for this purpose.
As you can imagine, it can take a long time unnecessarily when the purpose is to debug a single test.
How to achieve this? It is possible in Visual Studio for example, and seems like a basic feature so I assume I must be missing something.
Check the default test framework of the project...
You're perhaps used to 'unittest' being the default. It enables me to put the cursor on the test definition and hit SHIFT-CTRL-R to run that one test.
The default seems to have changed to 'py.test', which has different behaviour and keyboard shortcuts. I'm on OSX, so YMMV.
On Linux:
File -> Settings -> Tools -> Python Integrated Tools -> Testing / "Default Test Runner"
On OSX:
Preferences -> Tools -> Python Integrated Tools -> "Default test runner:"
With recent versions of PyCharm, the availability of the right-click option seems intermittent.
One replacement is to go to Edit Configurations... and type the name of the class and method yourself. That has worked well for me, even if it's not quite as convenient.
Under PyCharm 2017.2.3, the key step is to change the default test runner from Unittests to either nosetests or py.test (both work); the IDE can then run a single test function.
Follow the steps in the screenshots below.
1. Change settings:
2. Run a single test function:
3. Run all test functions:
In PyCharm 2018.1: restart, then delete the existing run configurations - suddenly right-click provides an option to run a single test. :-/
Have you tried right-clicking the test in the actual class? It should be possible to run the single test from there. I'd suggest a reinstall if this option is not available.
Please check whether you have the same test name repeated in two or more locations in the test fixture. I had the same problem and resolving the naming conflicts enabled me to right click on the test name and run it individually.
I had this problem with PyCharm 2018.3.
It seemed to be because I had a breakpoint in a strange place (at a function declaration, instead of inside the function).
Clearing all the breakpoints restored the ability to debug individual tests.
I work behind a proxy which doesn't like git. In most cases, I can work around it with export http_proxy and git config --global url."http://".insteadOf git://.
But when I use Yocto's Python scripts, this workaround no longer works. I'm systematically stopped at Getting branches from remote repo git://git.yoctoproject.org/linux-yocto-3.14.git.... I suspect these lines are responsible:
gitcmd = "git ls-remote %s *heads* 2>&1" % (giturl)
tmp = subprocess.Popen(gitcmd, shell=True, stdout=subprocess.PIPE).stdout.read()
I think that after these lines, others try to connect to the git URL. The script I use (yocto-bsp) calls other scripts, which call further scripts, so it's difficult to say.
I have tried adding os.system('git config --global url."http://".insteadOf git://') just before, but it has no effect.
Of course, I could modify all the URLs manually (or with a parsing script) to replace git:// with http://, but that solution is... hideous. I'd like the modification(s) to be as small as possible and easily reproducible. But most of all, I'd like a working script.
EDIT: according to this page, the git URL is git://git.yoctoproject.org/linux-yocto-3.14 but the corresponding http URL is http://git.yoctoproject.org/git/linux-yocto-3.14, so I can't just replace git:// with http://. Definitely not cool.
Well, rewriting the git URL does indeed work, also when using the Yocto Project.
However, your rewriting scheme doesn't work that well... You're just replacing the git:// part of the URL with http://, but if you look at e.g. linux-yocto-3.14, you'll see that this repo is available through the following two URLs:
git://git.yoctoproject.org/linux-yocto-3.14
http://git.yoctoproject.org/git/linux-yocto-3.14
That is, you need to rewrite git://git.yoctoproject.org to http://git.yoctoproject.org/git. Thus, you'll need to do this instead:
git config --global url."http://git.yoctoproject.org/git".insteadOf git://git.yoctoproject.org
This means you'll have to repeat this exercise for every repository that is accessed through the git protocol.
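For illustration, a small helper capturing that mapping (the function name is mine, not part of Yocto); note the extra "/git" path component, so a plain scheme swap is not enough:

```python
# Translate Yocto's git:// URLs to their http:// equivalents.
def to_http(url):
    prefix = "git://git.yoctoproject.org/"
    if url.startswith(prefix):
        # http mirror lives under an extra /git/ path component
        return "http://git.yoctoproject.org/git/" + url[len(prefix):]
    return url  # leave non-Yocto URLs untouched
```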
A "settings file" is a file where things like "background color", "speed of execution", and "number of x's" are defined. Currently I've implemented it as a single settings.py file, which I import at the beginning. Someone told me I should make it a settings.ini file instead, but I don't see why! Care to clarify which is the optimal option?
There is no optimal solution; it is a matter of preference.*
Normally, settings do not need to be expressed in a Turing-complete language: they're often just a bunch of flags and options, sometimes strings and numbers, etc. An argument for having a settings.py file (though very unorthodox) would be if the end-user was expected to write code to generate very esoteric configurations (e.g. maps for a game). This would then be fairly similar to shell script .bashrc-style files.
But again, in 99.9% of programs, the settings are often just a bunch of flags and options, sometimes strings and numbers, etc. It's fine to store them as JSON or XML. It also makes it easy to perform reflection on your settings: for example, automatically listing them in a tree manner, or automatically creating a GUI out of the descriptions.
(Also, it may be an (unlikely?) security issue if you allow people to inject code by modifying the settings file.)
*edit: no pun intended...
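For example, storing flags as JSON (key names taken from the question) keeps the settings declarative and easy to reflect over:

```python
# Settings as plain data: no code runs when the file is loaded.
import json

raw = '{"background_color": "blue", "speed_of_execution": 2.5, "number_of_xs": 3}'
settings = json.loads(raw)

# Reflection is straightforward: list every setting generically.
for key, value in settings.items():
    print(f"{key} = {value}")
```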
There are a few reasons why separating config files from the main codebase is a good idea. Of course, it depends on your use case, and you should evaluate against it.
Configuration can be managed by end users who do not understand programming languages. It makes more sense to factor out configuration into a simple INI file that uses key-value pairs for config parameters.
Configuration varies with the installation environment. Your code runs in multiple environments, and each uses a different configuration. Such cases are much easier to maintain with separate config files and the same source code installed in each environment.
Package managers know what is a config file and what is a source file, and they are smart enough not to override changed config on a version upgrade, so you do not have to worry about resetting config parameters after upgrading a package. For example, you ship your product with a default config file and the user fine-tunes a few parameters; when you ship another version of the package, the user should not see their configuration reset.
One problem with having a settings file be a Python module is that it can contain code that will be executed when you import it. This may allow malicious code to be inserted into your program.
For Python use stock libraries:
YAML style configuration files:
http://www.yaml.org/start.html
http://pypi.python.org/pypi/PyYAML/
(used e.g. by Google App Engine)
INI: http://docs.python.org/library/configparser.html
Don't use XML for hand-edited config files.
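A minimal sketch with the stock configparser module (section and key names taken from the question; in practice you'd read a file rather than an inline string):

```python
import configparser

# An INI file keeps settings as simple key-value pairs under named sections.
SAMPLE = """\
[display]
background_color = blue
speed_of_execution = 2.5
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)  # use config.read("settings.ini") for a real file

color = config["display"]["background_color"]
speed = config["display"].getfloat("speed_of_execution")
```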
I'm trying to get the results of the db.stats() mongo shell command in my Python code (for monitoring purposes).
But unlike, for example, serverStatus, I can't just do db.command('stats'). I was not able to find any API equivalent in the MongoDB docs. I've also tried variations with db.$cmd, but none of them worked.
So,
Small question: how can I get results of db.stats() (number of connections/objects, size of data & indexes, etc) in my python code?
Bigger question: can anyone explain why some shell commands are easily accessible from the API while others are not? It's very annoying: some admin-related tools are accessible via db.$cmd.sys, some via db.command, some via ...? Is there some standard or explanation for this situation?
PS: mongodb 2.0.2, pymongo 2.1.0, python 2.7
The JavaScript shell's stats command helper actually invokes a command named dbstats, which you can run from PyMongo using the Database.command method. The easiest way to find out what command a shell helper runs is to invoke the helper without parentheses -- this will print the JavaScript code it runs:
> db.stats
function (scale) {
return this.runCommand({dbstats:1, scale:scale});
}
As for why some commands have helpers and others do not, it's largely a question of preference, time, and perceived frequency of use by the driver authors. You can run any command by name with Database.command, which is just a convenience wrapper around db.$cmd.find_one. You can find a full list of commands at List of Database Commands. You can also submit a patch against PyMongo to add a helper method for commands you find that you need to invoke frequently but aren't supported by PyMongo yet.
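Concretely, the call from PyMongo might look like this (the database name is a placeholder, and a live mongod is needed for the commented-out usage lines):

```python
# Run the dbstats command -- the same data db.stats() returns in the shell.
def get_db_stats(db, scale=1):
    # Extra keyword arguments to Database.command are merged into the command
    # document, so scale behaves like the shell helper's scale parameter.
    return db.command("dbstats", scale=scale)

# Usage against a live server (placeholder database name):
# from pymongo import MongoClient
# stats = get_db_stats(MongoClient()["mydb"])
# print(stats["objects"], stats["dataSize"], stats["indexSize"])
```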