How to pass optional flags to pre-tasks using Python Invoke?

I am using Invoke and have two tasks: one cleans a raw data file to produce a clean data file, and the other produces several plots from the clean data file:
RAW_FILE = "raw.csv"
CLEAN_FILE = "clean.csv"
PLOT_FILE = "plot.svg"

@task(optional=["logging"])
def clean_data(c, logging=None):
    """Produce cleaned-up dataset."""
    print("CLEAN", logging)
    _configure_logging(logging)
    df = clean_raw_data(RAW_FILE)
    df.to_csv(CLEAN_FILE, index=False)

@task(pre=[clean_data], optional=["logging"])
def plot_data(c, logging=None):
    """Create plots of data."""
    print("PLOT", logging)
    _configure_logging(logging)
    make_plot(CLEAN_FILE, PLOT_FILE)

def _configure_logging(log_level):
    """Initialize logging."""
    if log_level is not None:
        print("SETTING LOGGING TO", log_level)
        CONFIG["LOGGING_LEVEL"] = log_level.upper()
If I run:
$ invoke clean-data --logging info
then logging is set to INFO and I get a message from inside clean_raw_data. However, if I run:
$ invoke plot-data --logging info
then:
clean_data is invoked with logging=None, so no log message appears.
plot_data is then invoked with logging="info", so its log message appears.
My expectation was that command-line flags would be passed down to dependent tasks. I tried doing this manually:
@task(pre=[call(clean_data, logging=logging)], optional=["logging"])
def plot_data(c, logging=None):
    ...as before...
but this produces an error because logging isn't defined at the point the @task decorator is evaluated.
Is there a way to chain optional arguments in the desired fashion?
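A minimal sketch of one possible workaround (not from the thread): drop the pre= dependency and call the task directly from plot_data's body, forwarding the flag yourself. Invoke task objects are callable, though this bypasses pre-task deduplication, so clean_data runs every time plot_data does.
@task(optional=["logging"])
def plot_data(c, logging=None):
    """Create plots of data, cleaning the raw data first."""
    clean_data(c, logging=logging)  # call the dependency directly, forwarding the flag
    print("PLOT", logging)
    _configure_logging(logging)
    make_plot(CLEAN_FILE, PLOT_FILE)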

Django management command doesn't flush stdout

In a Django management command, I'm trying to print to the console before and after a long-running piece of processing, like this:
import requests
import xmltodict
from django.core.management.base import BaseCommand

def get_all_routes():
    url = 'http://busopen.jeju.go.kr/OpenAPI/service/bis/Bus'
    r = requests.get(url)
    data = xmltodict.parse(r.content)
    return data['response']['body']['items']['item']

class Command(BaseCommand):
    help = 'Updates the database via Bus Info API'

    def handle(self, *args, **options):
        self.stdout.write('Saving routes ... ', ending='')
        for route in get_all_routes():
            route_obj = Route(
                route_type=route['routeTp'],
                route_id=route['routeId'],
                route_number=route['routeNum'])
            route_obj.save()
        self.stdout.write('done.')
In the above code, Saving routes ... is expected to print before the loop begins, and done. right next to it when the loop completes so that it looks like Saving routes ... done. in the end.
However, the former doesn't print until the loop completes, at which point both strings print at once, which is not what I expected.
I found this question, where the answer suggests flushing the output i.e. self.stdout.flush(), so I added that to my code:
def handle(self, *args, **options):
    self.stdout.write('Saving routes ... ', ending='')
    self.stdout.flush()
    for route in get_all_routes():
        route_obj = Route(
            route_type=route['routeTp'],
            route_id=route['routeId'],
            route_number=route['routeNum'])
        route_obj.save()
    self.stdout.write('done.')
Still, the result remains unchanged.
What could I have done wrong?
The thing to keep in mind is that you're using self.stdout (as suggested in the Django docs), which is BaseCommand's override of Python's standard sys.stdout. There are two main differences between the two that are relevant to your problem:
The default "ending" in BaseCommand's version of self.stdout.write() is a newline, forcing you to use the ending='' parameter, unlike sys.stdout.write(), which has an empty ending by default. This in itself is not causing your problem.
The BaseCommand version of flush() does not actually do anything (who would have thought?). This is a known bug: https://code.djangoproject.com/ticket/29533
So you really have 2 options:
Don't use BaseCommand's self.stdout; use sys.stdout instead, in which case flush() does work (see the sketch below)
Force stdout to be totally unbuffered while running the management command by passing the "-u" parameter to python. So instead of running python manage.py <subcommand>, run python -u manage.py <subcommand>
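A minimal sketch of the first option, reusing get_all_routes() and the Route model from the question (only the stdout calls change):
import sys

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Updates the database via Bus Info API'

    def handle(self, *args, **options):
        # sys.stdout.flush() actually flushes, unlike self.stdout.flush()
        sys.stdout.write('Saving routes ... ')
        sys.stdout.flush()
        for route in get_all_routes():
            route_obj = Route(
                route_type=route['routeTp'],
                route_id=route['routeId'],
                route_number=route['routeNum'])
            route_obj.save()
        sys.stdout.write('done.\n')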
Hope this helps.
Have you tried setting the PYTHONUNBUFFERED environment variable, e.g. PYTHONUNBUFFERED=1 python manage.py <subcommand>?

Custom Ansible module is giving param extra params error

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should have something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to call
    # AnsibleModule.exit_json(), passing it the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by the error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module like so as an ad-hoc command:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code, so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation, as proposed in @Simon's answer.

How to create new log file for each run of tests in pytest?

I have created a pytest.ini file containing:
addopts = --resultlog=log.txt
This creates a log file, but I would like to create a new log file every time I run the tests.
I am new to pytest, so pardon me if I have missed anything in the documentation.
Thanks
Note
The --result-log argument is deprecated and scheduled for removal in version 6.0 (see Deprecations and Removals: Result log). A possible replacement implementation is discussed in issue #4488, so watch out for the next major version bump: the code below will stop working with pytest==6.0.
Answer
You can modify the resultlog in the pytest_configure hookimpl. Example: put the code below in the conftest.py file in your project root dir:
import datetime

def pytest_configure(config):
    if not config.option.resultlog:
        timestamp = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.resultlog = 'log.' + timestamp
Now if --result-log is not passed explicitly (so you have to remove addopts = --resultlog=log.txt from your pytest.ini), pytest will create a log file ending with a timestamp. Passing --result-log with a log file name will override this behaviour.
Answering my own question.
As hoefling mentioned, --result-log is deprecated, so I had to find a way to do it without that flag. Here's how I did it:
conftest.py
from datetime import datetime
import logging

log = logging.getLogger(__name__)

def pytest_assertrepr_compare(op, left, right):
    """Print a log entry every time an assert fails."""
    log.error('Comparing Foo instances: vals: %s != %s \n' % (left, right))
    return ["Comparing Foo instances:", "   vals: %s != %s" % (left, right)]

def pytest_configure(config):
    """Create a log file if log_file is not mentioned in the *.ini file."""
    if not config.option.log_file:
        timestamp = datetime.strftime(datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.log_file = 'log.' + timestamp
pytest.ini
[pytest]
log_cli = true
log_cli_level = CRITICAL
log_cli_format = %(message)s
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format=%Y-%m-%d %H:%M:%S
test_my_code.py
import logging

log = logging.getLogger(__name__)

def test_my_code():
    ...  # test code
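For illustration, a slightly more concrete test body (the assertion is just a hypothetical placeholder); with the pytest.ini above, both records end up in the timestamped log file:
import logging

log = logging.getLogger(__name__)

def test_my_code():
    log.debug('starting test')
    assert 1 + 1 == 2
    log.debug('finished test')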
You can get a separate log for each pytest run by naming the log file after the time the test execution starts.
pytest tests --log-file $(date '+%F_%H:%M:%S')
This will create a log file for each test run, named with the timestamp.
$(date '+%F_%H:%M:%S') is bash command substitution that produces the current timestamp in Date_Hr:Min:Sec format.

In Robot Framework, how do you create an object of a class and call the methods of that class?

In Robot Framework, how do you create an object of a class and call the methods of that class? This is the code snippet:
*** Settings ***
Documentation     A resource file with reusable keywords and variables.
...               Use keywords in this file in testcases directory.
Library           /home/kirti/src/Helper/utilities.py
Library           /home/kirti/src/Helper/config_parser.py
#Library          /home/kirti/qa/src/executor/cleanup.CleanUp
Library           /home/kirti/qa/src/executor/cleanup.py

*** Variables ***
${RESULT}         0

*** Keywords ***
Read Json Config Values
    Log To Console    "Setting up the config values globally"
    config_parser.Json Config Parser
    Import Variables    /home/kirti/src/Helper/variables.py
    Log Variables    INFO

Check Machines Reachability
    utilities.Check All Machines Status

Check SNMP Counter
    utilities.Get Snmp    192.178.1.2    PPSessionCount

Call Clean Up
    #${cleanupobj}=    cleanup.create cleanup
    #${name}=    ${cleanupobj.cc()}
    Import Library    /home/kirti/src/executor/cleanup.py
    ${cmp}=    Get library instance    CleanUp
    Log To Console    ${cmp}.__class__.__name__
    #${name}=    Call method    ${cmp}    Create cleanup
    ${name}=    Call method    ${cmp}    cc
    #${name}=    Call method    ${cleanupobj}    env cleanup
    #Log To Console    "${name}"
    #Log Variables    INFO
    utilities.Check All Machines Status
Here is a way you can achieve the desired result.
Let's take the example of demo.py, which has the class Sample.
The Sample class has __init__ and getting_path() as its methods:
class Sample(object):
    def __init__(self, path, device):
        self.device = device
        self.path = path

    def getting_path(self):
        return self.path
Let's use these methods in the robot file:
*** Settings ***
# in the Library section you reference the python class in the format
# (file.class_name), so file is demo.py and class is Sample
Library           demo.Sample    ${path}    ${device}    WITH NAME    obj
# path and device are the two arguments required by __init__; 'obj' will be
# used to access the methods in the python class
Library           Collections

*** Variables ***
${path}           c:
${device}         samsung

*** Test Cases ***
Test
    Test_python_class

*** Keywords ***
Test_python_class
    # with obj you now call the method of the python file
    ${result} =    obj.getting_path
    # if the method needs any arguments, they can be passed like:
    # ${result} =    obj.getting_path    ${arg1}    ${arg2}
    log to console    ${result}
If you want to use a specific instance of a class you can use:
${instance} =    obj    arg1
log to console    ${instance.function(args)}

pytest recording results in addition to the pass/fail

I've just started using pytest. Is there any way to record results in addition to the pass/fail status?
For example, suppose I have a test function like this:
@pytest.fixture(scope="session")
def server():
    ...  # something goes here to set up the server

def test_foo(server):
    server.send_request()
    response = server.get_response()
    assert len(response) == 42
The test passes if the length of the response is 42. But I'd also like to record the response value as well ("...this call will be recorded for quality assurance purposes...."), even though I don't strictly require an exact value for the pass/fail criteria.
Print the result, then run py.test -s.
-s tells py.test not to capture stdout and stderr.
Adapting your example:
# test_service.py
# ---------------
def test_request():
    # response = server.get_response()
    response = "{'some':'json'}"
    assert len(response) == 15
    print response,  # comma prevents default newline
Running py.test -s produces
$ py.test -s test_service.py
=========================== test session starts ===========================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 1 items
test_service.py {'some':'json'}.
======================== 1 passed in 0.04 seconds =========================
$
Or use python logging instead
# test_logging.py
# ---------------
import logging

logging.basicConfig(
    filename="logresults.txt",
    format="%(filename)s:%(lineno)d:%(funcName)s %(message)s")

def test_request():
    response = "{'some':'json'}"
    # print response,  # comma prevents default newline
    logging.warn("{'some':'json'}")  # sorry, newline unavoidable
    logging.warn("{'some':'other json'}")
Running py.test produces the machine readable file logresults.txt:
test_logging.py:11:test_request {'some':'json'}
test_logging.py:12:test_request {'some':'other json'}
Pro tip
Run vim logresults.txt +cbuffer to load the logresults.txt as your quickfix list.
See my example of passing test data to ELK:
http://fruch.github.io/blog/2014/10/30/ELK-is-fun/
Later I made it a bit like this:
def pytest_configure(config):
    # parameter to add analysis from tests, teardowns, etc.
    config.analysis = []

def pytest_unconfigure(config):
    # send config.analysis to where you want, i.e. file / DB / ELK
    send_to_elk(config.analysis)

def test_example():
    pytest.config.analysis += ["My Data I want to keep"]
This is per run/session data, not per test (but I'm working on figuring out how to do it per test).
I'll try updating once I have a working example...
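A minimal sketch of one way to do it per test (not from the thread): a fixture that collects entries during a test and appends them, keyed by the test's node id, to the same config.analysis list created in pytest_configure above. The fixture name analysis is hypothetical.
import pytest

@pytest.fixture
def analysis(request):
    # collect entries during the test, then attach them to the session-wide list
    entry = {'test': request.node.nodeid, 'data': []}
    yield entry['data']
    request.config.analysis.append(entry)

def test_example(analysis):
    analysis.append('My Data I want to keep')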
