I'd like to run pytest and then store results and present them to users on demand (e.g. store pytest results to a db and then expose them through web service)
I could run pytest from the command line with an option to save a results report to a file, then find and parse that file, but it feels silly to have the results inside a (pytest) Python app, write them to a file, and then immediately look for that file and parse it back into Python objects for further processing. I know I can run pytest programmatically via pytest.main(args), but it only returns an exit code and no details about the test results - how can I retrieve the results when using pytest.main()?
I'm looking for something like
args = # arguments
ret_code = pytest.main(args=args) # pytest.main() as is only returns trivial return code
my_own_method_to_process(pytest.results) # how to retrieve any kind of pytest.results object that would contain test execution results data (list of executed tests, pass fail info, etc as pytest is displaying into console or saves into file reports)
There are a couple of similar questions, but always with some deviation that doesn't work for me. I simply want to run pytest from my code and - whatever format the output is in - directly grab it and process it further.
(Note: I'm in a corporate environment where installing new packages (e.g. pytest plugins) is limited, so I'd like to achieve this without installing any additional module/pytest plugin into my environment.)
Write a small plugin that collects and stores reports for each test. Example:
import time

import pytest


class ResultsCollector:
    def __init__(self):
        self.reports = []
        self.collected = 0
        self.exitcode = 0
        self.passed = 0
        self.failed = 0
        self.xfailed = 0
        self.skipped = 0
        self.total_duration = 0

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when == 'call':
            self.reports.append(report)

    def pytest_collection_modifyitems(self, items):
        self.collected = len(items)

    def pytest_terminal_summary(self, terminalreporter, exitstatus):
        self.exitcode = exitstatus.value
        self.passed = len(terminalreporter.stats.get('passed', []))
        self.failed = len(terminalreporter.stats.get('failed', []))
        self.xfailed = len(terminalreporter.stats.get('xfailed', []))
        self.skipped = len(terminalreporter.stats.get('skipped', []))
        self.total_duration = time.time() - terminalreporter._sessionstarttime


def run():
    collector = ResultsCollector()
    pytest.main(plugins=[collector])

    for report in collector.reports:
        print('id:', report.nodeid, 'outcome:', report.outcome)  # etc
    print('exit code:', collector.exitcode)
    print('passed:', collector.passed, 'failed:', collector.failed,
          'xfailed:', collector.xfailed, 'skipped:', collector.skipped)
    print('total duration:', collector.total_duration)


if __name__ == '__main__':
    run()
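If the goal from the question is to persist these results (e.g. to a database and expose them through a web service), the collected reports can be written out directly. Below is a minimal sketch using only the standard-library sqlite3 module; the table schema and the store_results helper are illustrative assumptions, not part of pytest:

import sqlite3


def store_results(collector, db_path='results.db'):
    # Persist one row per executed test; extend the schema as needed.
    conn = sqlite3.connect(db_path)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS test_results '
        '(nodeid TEXT, outcome TEXT, duration REAL, longrepr TEXT)'
    )
    for report in collector.reports:
        conn.execute(
            'INSERT INTO test_results VALUES (?, ?, ?, ?)',
            (report.nodeid, report.outcome, report.duration,
             str(report.longrepr) if report.longrepr else None),
        )
    conn.commit()
    conn.close()

A web service could then read from that table on demand.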
I have a Python Rumps application that monitors a folder for new files using the rumps.Timer(...) feature. When it sees new files, it transfers them offsite (to AWS S3) and runs a GET request. Sometimes that transfer and GET request can take over 1 second, and sometimes up to about 5 seconds. During this time, the application is frozen and can't do anything else.
Here is the current code:
import os

import requests
import rumps


class MyApp(rumps.App):
    def __init__(self):
        super(MyApp, self).__init__("App", quit_button="Stop")
        self.process_timer = rumps.Timer(self.my_tick, 1)
        self.process_timer.start()

    def my_tick(self, sender):
        named_set = set()
        for file in os.listdir(self.process_folder):
            fullpath = os.path.join(self.process_folder, file)
            if os.path.isfile(fullpath) and fullpath.endswith(('.jpg', '.JPG')):
                named_set.add(file)

        if len(named_set) == 0:
            self.files_in_folder = set()

        new_files = sorted(named_set - self.files_in_folder)
        if len(new_files) > 0:
            for new_file in new_files:
                # upload file
                self.s3_client.upload_file(
                    new_file,
                    '##bucket##',
                    '##key##'
                )
                # GET request
                return requests.get(
                    '##url##',
                    params={'file': new_file}
                )

        self.files_in_folder = named_set


if __name__ == "__main__":
    MyApp().run()
Is there a way to have this transfer and GET request run as a background process?
I've tried using subprocess with the transfer code in a separate script
subprocess.Popen(['python3', 'transferscript.py', newfile])
and it doesn't appear to do anything. It will work if I run that line outside of rumps, but once it's in rumps, it will not run.
Edit: code provided
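One direction worth trying, sketched here only as an assumption (it is not part of the original post): move the blocking upload/GET work onto a background thread so the rumps timer callback returns immediately. threading is in the standard library, and transfer_and_notify is a hypothetical helper holding the upload and GET code:

import threading

import requests


def transfer_and_notify(s3_client, new_files):
    # Hypothetical helper: runs the blocking upload + GET off the main thread.
    for new_file in new_files:
        s3_client.upload_file(new_file, '##bucket##', '##key##')
        requests.get('##url##', params={'file': new_file})


# inside my_tick, instead of uploading inline:
# threading.Thread(target=transfer_and_notify,
#                  args=(self.s3_client, new_files), daemon=True).start()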
I can't figure out how to use pytest to test a DAG task waiting for an XComArg.
I created the following DAG using the new Airflow API syntax:
import os

from airflow.decorators import dag, task


@dag(...)
def transfer_files():
    @task()
    def retrieve_existing_files():
        existing = []
        for elem in os.listdir("./backup"):
            existing.append(elem)
        return existing

    @task()
    def get_new_file_to_sync(existing: list[str]):
        new_files = []
        for elem in os.listdir("./prod"):
            if elem not in existing:
                new_files.append(elem)
        return new_files

    r = retrieve_existing_files()
    get_new_file_to_sync(r)
Now I want to perform unit testing on the get_new_file_to_sync task. I wrote the following test:
from airflow.models import DagBag


def test_get_new_elan_list():
    mocked_existing = ["a.out", "b.out"]
    dag_bag = DagBag(include_examples=False)
    dag = dag_bag.get_dag("transfer_files")
    task = dag.get_task("get_new_file_to_sync")
    result = task.execute({}, mocked_existing)
    print(result)
The test fails because task.execute expects 2 parameters but 3 were given.
My issue is that I have no clue how to proceed in order to test tasks that wait for arguments, using a mocked custom argument.
Thanks for your insights
I managed to find a way to unit test Airflow tasks declared using the new Airflow API.
Here is a test case for the task get_new_file_to_sync contained in the DAG transfer_files declared in the question:
from airflow.models import DagBag


def test_get_new_file_to_sync():
    mocked_existing = ["a.out", "b.out"]
    # Ask airflow to load the dags in its home folder
    dag_bag = DagBag(include_examples=False)
    # Retrieve the dag to test
    dag = dag_bag.get_dag("transfer_files")
    # Retrieve the task to test
    task = dag.get_task("get_new_file_to_sync")
    # Extract the function to test from the task
    function_to_unit_test = task.python_callable
    # Call the function directly with the mocked argument
    results = function_to_unit_test(mocked_existing)
    assert len(results) == 10
This bypasses all the Airflow machinery that would normally run before your task code, so you can focus on testing the code you actually wrote for the task.
For testing such a task, I believe you'll need to use mocking from pytest.
Let's take this user-defined operator as an example:
from collections import Counter, defaultdict

from airflow.models import BaseOperator

# MovielensHook is a custom hook assumed to be defined elsewhere in the project.


class MovielensPopularityOperator(BaseOperator):
    def __init__(self, conn_id, start_date, end_date, min_ratings=4, top_n=5, **kwargs):
        super().__init__(**kwargs)
        self._conn_id = conn_id
        self._start_date = start_date
        self._end_date = end_date
        self._min_ratings = min_ratings
        self._top_n = top_n

    def execute(self, context):
        with MovielensHook(self._conn_id) as hook:
            ratings = hook.get_ratings(start_date=self._start_date, end_date=self._end_date)

            rating_sums = defaultdict(Counter)
            for rating in ratings:
                rating_sums[rating["movieId"]].update(count=1, rating=rating["rating"])

            averages = {
                movie_id: (rating_counter["rating"] / rating_counter["count"], rating_counter["count"])
                for movie_id, rating_counter in rating_sums.items()
                if rating_counter["count"] >= self._min_ratings
            }

            return sorted(averages.items(), key=lambda x: x[1], reverse=True)[: self._top_n]
And a test written just like the one you did:
def test_movielenspopularityoperator():
    task = MovielensPopularityOperator(
        task_id="test_id",
        start_date="2015-01-01",
        end_date="2015-01-03",
        top_n=5,
    )
    result = task.execute(context={})
    assert len(result) == 5
Running this test fails with:
=============================== FAILURES ===============================
___________________ test_movielenspopularityoperator ___________________

mocker = <pytest_mock.plugin.MockFixture object at 0x10fb2ea90>

    def test_movielenspopularityoperator(mocker: MockFixture):
>       task = MovielensPopularityOperator(
            task_id="test_id", start_date="2015-01-01", end_date="2015-01-03", top_n=5
        )
E       TypeError: __init__() missing 1 required positional argument: 'conn_id'

tests/dags/chapter9/custom/test_operators.py:30: TypeError
========================== 1 failed in 0.10s ==========================
The test failed because we’re missing the required argument conn_id, which points to the connection ID in the metastore. But how do you provide this in a test? Tests should be isolated from each other; they should not be able to influence the results of other tests, so a database shared between tests is not an ideal situation. In this case, mocking comes to the rescue.
Mocking is “faking” certain operations or objects. For example, the call to a database that is expected to exist in a production setting but not while testing could be faked, or mocked, by telling Python to return a certain value instead of making the actual call to the (nonexistent during testing) database. This allows you to develop and run tests without requiring a connection to external systems. It requires insight into the internals of whatever it is you’re testing, and thus sometimes requires you to dive into third-party code.
After installing pytest-mock in your environment:
pip install pytest-mock
Here is the test rewritten to use mocking:
def test_movielenspopularityoperator(mocker):
    mocker.patch.object(
        MovielensHook,
        "get_connection",
        return_value=Connection(conn_id="test", login="airflow", password="airflow"),
    )
    task = MovielensPopularityOperator(
        task_id="test_id",
        conn_id="test",
        start_date="2015-01-01",
        end_date="2015-01-03",
        top_n=5,
    )
    result = task.execute(context=None)
    assert len(result) == 5
Now, hopefully this will give you an idea about how to write your tests for Airflow Tasks.
For more about mocking and unit tests, you can check here and here.
I have a luigi preprocessing task that splits my raw data into smaller files. These files will then be processed by the actual pipeline.
Regarding the parameters, I would like each pipeline run to require one preprocessed file id as a parameter. However, this file id is only generated in the preprocessing step and is thus only known at runtime. To illustrate my idea, I provide this not-working code:
import luigi
import subprocess
import random


class GenPipelineFiles(luigi.Task):
    input_file = luigi.Parameter()

    def requires(self):
        pass

    def output(self):
        for i in range(random.randint(0, 10)):
            yield luigi.LocalTarget("output/{}_{}.txt".format(self.input_file, i))

    def run(self):
        for iout in self.output:
            command = "touch {}".format(iout.fname)
            subprocess.call(command, shell=True)


class RunPipelineOnSmallChunk(luigi.Task):
    pass


class Experiment(luigi.WrapperTask):
    input_file = luigi.Parameter(default="ex")

    def requires(self):
        file_ids = GenPipelineFiles(input_file=self.input_file)
        for file_id in file_ids:
            yield RunPipelineOnSmallChunk(directory=self.input_file, file_id=file_id)


luigi.run()
The wrapper task Experiment should
first, somehow require the splitting of the raw data into documents
secondly, require the actual pipeline with the obtained file id of the preprocessing.
The random number of output files in the GenPipelineFiles indicates that this cannot be hard-coded into the Experiment's requires.
A question that is probably related to this one is the fact that a luigi task seemingly has only one input target and one output target. Perhaps a note on how to model multiple outputs in GenPipelineFiles could also solve the problem.
One simple approach to dealing with multiple outputs is to create a directory named after the input file and put the output files from the split into that directory. That way the dependent task can just check for the existence of the directory. Let's say I have an input file 123.txt; I then make a directory 123_split with files 1.txt, 2.txt, 3.txt as the output of GenPipelineFiles, and then a directory 123_processed with 1.txt, 2.txt, 3.txt as the output of RunPipelineOnSmallChunk.
For your requires method in Experiment, you have to return the tasks you want to run, in a list for example. The way you have written file_ids = GenPipelineFiles(input_file=self.input_file) makes me think the run method of that object is not being called, because it is not being returned by the method.
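To illustrate just that point, here is a minimal sketch of a requires method that actually hands the task instance back to luigi (it reuses the GenPipelineFiles class from the question and only addresses the return issue, not the runtime-file-id problem, which the directory approach below handles):

import luigi


class Experiment(luigi.WrapperTask):
    input_file = luigi.Parameter(default="ex")

    def requires(self):
        # Returning (or yielding) the task instance is what schedules it;
        # merely instantiating it inside requires() does nothing.
        return GenPipelineFiles(input_file=self.input_file)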
Here's some sample code that works with targets on a per-file basis (but not a task per file basis). I still think it is safer to have a single output target of a directory, or a sentinel file of some kind, to indicate you are done; atomicity is lost unless the task ensures each target is created. (A sketch of that single-target variant follows the sample code below.)
PYTHONPATH=. luigi --module sampletask RunPipelineOnSmallChunk --local-scheduler
sampletask.py
import luigi
import os
import subprocess
import random


class GenPipelineFiles(luigi.Task):
    inputfile = luigi.Parameter()
    num_targets = random.randint(0, 10)

    def requires(self):
        pass

    def get_prefix(self):
        return self.inputfile.split(".")[0]

    def get_dir(self):
        return "split_{}".format(self.get_prefix())

    def output(self):
        targets = []
        for i in range(self.num_targets):
            targets.append(luigi.LocalTarget("{}/{}_{}.txt".format(self.get_dir(), self.get_prefix(), i)))
        return targets

    def run(self):
        if not os.path.exists(self.get_dir()):
            os.makedirs(self.get_dir())
        for iout in self.output():
            command = "touch {}".format(iout.path)
            subprocess.call(command, shell=True)


class RunPipelineOnSmallChunk(luigi.Task):
    inputfile = luigi.Parameter(default="test")

    def get_prefix(self):
        return self.inputfile.split(".")[0]

    def get_dir(self):
        return "processed_{}".format(self.get_prefix())

    @staticmethod
    def clean_input_path(path):
        return path.replace("split", "processed")

    def requires(self):
        return GenPipelineFiles(self.inputfile)

    def output(self):
        targets = []
        for target in self.input():
            targets.append(luigi.LocalTarget(RunPipelineOnSmallChunk.clean_input_path(target.path)))
        return targets

    def run(self):
        if not os.path.exists(self.get_dir()):
            os.makedirs(self.get_dir())
        for iout in self.output():
            command = "touch {}".format(iout.path)
            subprocess.call(command, shell=True)
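As promised above, here is a minimal sketch of the single-sentinel-target variant; the _SUCCESS file name and the placement of the split logic are assumptions, not part of the original answer:

import os

import luigi


class GenPipelineFilesSentinel(luigi.Task):
    """Single-output variant: the task counts as done once a sentinel file exists."""
    inputfile = luigi.Parameter()

    def get_dir(self):
        return "split_{}".format(self.inputfile.split(".")[0])

    def output(self):
        # One target for the whole task: either the sentinel exists (done) or it doesn't.
        return luigi.LocalTarget(os.path.join(self.get_dir(), "_SUCCESS"))

    def run(self):
        if not os.path.exists(self.get_dir()):
            os.makedirs(self.get_dir())
        # ... write the actual split files into self.get_dir() here ...
        with self.output().open("w") as sentinel:
            sentinel.write("")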
I am using Pytest to test an executable. This .exe file reads a configuration file on startup.
I have written a fixture to spawn this .exe file at the start of each test and closes it down at the end of the test. However, I cannot work out how to tell the fixture which configuration file to use. I want the fixture to copy a specified config file to a directory before spawning the .exe file.
import pytest


@pytest.fixture
def session(request):
    copy_config_file(specific_file)  # how do I specify the file to use?
    link = spawn_exe()

    def fin():
        close_down_exe()

    return link


# needs to use config file foo.xml
def test_1(session):
    session.talk_to_exe()


# needs to use config file bar.xml
def test_2(session):
    session.talk_to_exe()
How do I tell the fixture to use foo.xml for test_1 function and bar.xml for test_2 function?
Thanks
John
One solution is to use pytest.mark for that:
import pytest


@pytest.fixture
def session(request):
    m = request.node.get_closest_marker('session_config')
    if m is None:
        pytest.fail('please use "session_config" marker')
    specific_file = m.args[0]
    copy_config_file(specific_file)
    link = spawn_exe()
    yield link
    close_down_exe(link)


@pytest.mark.session_config("foo.xml")
def test_1(session):
    session.talk_to_exe()


@pytest.mark.session_config("bar.xml")
def test_2(session):
    session.talk_to_exe()
Another approach would be to just change your session fixture slightly to delegate the creation of the link to the test function:
import pytest


@pytest.fixture
def session_factory(request):
    links = []

    def make_link(specific_file):
        copy_config_file(specific_file)
        link = spawn_exe()
        links.append(link)
        return link

    yield make_link

    for link in links:
        close_down_exe(link)


def test_1(session_factory):
    session = session_factory('foo.xml')
    session.talk_to_exe()


def test_2(session_factory):
    session = session_factory('bar.xml')
    session.talk_to_exe()
I prefer the latter as it's simpler to understand and allows for more improvements later, for example if you need to use @pytest.mark.parametrize in a test based on the config value. Also notice that the latter allows you to spawn more than one executable in the same test.
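As a hedged illustration of that parametrize point (the config file names are just placeholders, not part of the original answer):

import pytest


@pytest.mark.parametrize('config_file', ['foo.xml', 'bar.xml'])
def test_talk_with_configs(session_factory, config_file):
    # One spawned executable per parametrized config file.
    session = session_factory(config_file)
    session.talk_to_exe()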
I want to change env.hosts dynamically, because sometimes I want to deploy to one machine first, check that it's OK, and then deploy to many machines.
Currently I need to set env.hosts up front; how can I set env.hosts in a method rather than globally at script start?
Yes you can set env.hosts dynamically. One common pattern we use is:
from fabric.api import env


def staging():
    env.hosts = ['XXX.XXX.XXX.XXX', ]


def production():
    env.hosts = ['YYY.YYY.YYY.YYY', 'ZZZ.ZZZ.ZZZ.ZZZ', ]


def deploy():
    # Do something...
    pass
You would use this to chain the tasks such as fab staging deploy or fab production deploy.
Kind of late to the party, but I achieved this with EC2 like so. (Note that in EC2 you generally do not know what the IP/hostname will be, so you almost have to go dynamic to really account for how the environment/systems could come up; another option would be to use DynDNS, but this would still be useful.)
from fabric.api import *
import datetime
import time
import urllib2
import ConfigParser
from platform_util import *

config = ConfigParser.RawConfigParser()


@task
def load_config(configfile=None):
    '''
    ***REQUIRED*** Pass in the configuration to use - usage load_config:</path/to/config.cfg>
    '''
    if configfile != None:
        # Load up our config file
        config.read(configfile)

        # Key/secret needed for aws interaction with boto
        # (anyone help figure out a better way to do this with sub modules, please don't say classes :-) )
        global aws_key
        global aws_sec

        aws_key = config.get("main", "aws_key")
        aws_sec = config.get("main", "aws_sec")

        # Stuff for fabric
        env.user = config.get("main", "fabric_ssh_user")
        env.key_filename = config.get("main", "fabric_ssh_key_filename")
        env.parallel = config.get("main", "fabric_default_parallel")

        # Load our role definitions for fabric
        for i in config.sections():
            if i != "main":
                hostlist = []
                if config.get(i, "use-regex") == 'yes':
                    for x in get_running_instances_by_regex(aws_key, aws_sec, config.get(i, "security-group"), config.get(i, "pattern")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist
                else:
                    for x in get_running_instances(aws_key, aws_sec, config.get(i, "security-group")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist

                if config.has_option(i, "base-group"):
                    if config.get(i, "base-group") == 'yes':
                        print "%s is a base group" % i
                        print env.roledefs[i]
                        # env["basegroups"][i] = True
where get_running_instances and get_running_instances_by_regex are utility functions that make use of boto (http://code.google.com/p/boto/). For example:
import logging
import re

from boto.ec2.connection import EC2Connection
from boto.ec2.securitygroup import SecurityGroup
from boto.ec2.instance import Instance
from boto.s3.key import Key


########################################
# B-O-F get_instances
########################################
def get_instances(access_key=None, secret_key=None, security_group=None):
    '''
    Get all instances. Only within a security group if specified; doesn't matter their state (running/stopped/etc).
    '''
    logging.debug('get_instances()')

    conn = EC2Connection(aws_access_key_id=access_key, aws_secret_access_key=secret_key)

    if security_group:
        sg = SecurityGroup(connection=conn, name=security_group)
        instances = sg.instances()
        return instances
    else:
        instances = conn.get_all_instances()
        return instances
Here is a sample of what my config looked like:
# Config file for fabric toolset
#
# This specific configuration is for <whatever> related hosts
#
#
[main]
aws_key = <key>
aws_sec = <secret>
fabric_ssh_user = <your_user>
fabric_ssh_key_filename = /path/to/your/.ssh/<whatever>.pem
fabric_default_parallel = 1
#
# Groupings - Fabric knows them as roledefs (check env dict)
#
# Production groupings
[app-prod]
security-group = app-prod
use-regex = no
pattern =
[db-prod]
security-group = db-prod
use-regex = no
pattern =
[db-prod-masters]
security-group = db-prod
use-regex = yes
pattern = mysql-[d-s]01
Yet another new answer to an old question. :) But I just recently found myself attempting to dynamically set hosts, and really have to disagree with the main answer. My idea of dynamic, or at least what I was attempting to do, was take an instance DNS-name that was just created by boto, and access that instance with a fab command. I couldn't do fab staging deploy, because the instance doesn't exist at fabfile-editing time.
Fortunately, fabric does support truly dynamic host assignment with execute. (It's possible this didn't exist when the question was first asked, but now it does.) execute lets you define both the function to be called and the env.hosts it should use for that command. For example:
import boto.ec2
from fabric.api import execute, run


def create_EC2_box(data=fab_base_data):
    conn = boto.ec2.connect_to_region(region)
    reservations = conn.run_instances(image_id=image_id, ...)
    ...
    return instance.public_dns_name


def _ping_box():
    run('uname -a')
    run('tail /var/log/cloud-init-output.log')


def build_box():
    box_name = create_EC2_box(fab_base_data)
    new_hosts = [box_name]
    # new_hosts = ['ec2-54-152-152-123.compute-1.amazonaws.com']  # testing
    execute(_ping_box, hosts=new_hosts)
Now I can do fab build_box, and it will fire one boto call that creates an instance, and another fabric call that runs on the new instance - without having to define the instance-name at edit-time.