I'm trying to customize the report.html of pytest using the pytest-html plugin.
I searched many sites (including the pytest-html documentation) and found that the code below is commonly used (the code is in conftest.py):
(https://pytest-html.readthedocs.io/en/latest/user_guide.html#extra-content)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.outcome == "call":
        # always add url to report
        xfail = hasattr(report, "wasxfail")
        if (report.skipped and xfail) or (report.failed and not xfail):
            extra.append(pytest_html.extras.url("http://www.google.com/"))
            extra.append(pytest_html.extras.text('Hi', name='TEXT'))
            # only add additional html on failure
            # extra.append(pytest_html.extras.html("<div>Additional HTML</div>"))
        report.extra = extra
However, I have no idea what each line actually does; nowhere is it explained.
Why does the script assign the bare yield to outcome without yielding any value (e.g. yield 1), and what does outcome.get_result() actually do?
Also, I don't understand xfail ("wasxfail").
I found that pytest.xfail makes the test function fail in the pytest run, but I think that has nothing to do with the above code.
Why do we use 'xfail' rather than 'fail'?
Anyway, what I need is:
First, the meaning of each line and what it does.
Second, I want to set a different message in report.html depending on pass/fail.
I tried report.outcome == 'failed' and report.outcome == 'passed' to split the conditions, but it didn't work.
Third, when adding text rather than a URL, it becomes a link that should lead to a page containing the text.
However, if I click it in the html it opens an about:blank page instead of the desired one; right-clicking and opening in a new tab does go to the right page.
Any help is welcome. Thanks.
+ I have more questions. I tried

if report.passed:
    extra.append(pytest_html.extras.url("https://www.google.com/"))
    report.extra = extra

and it attaches the same link three times in the report.html results table. How can I handle that?
+ I can log a message when a test fails, like msg = 'hi'; pytest.fail(msg). However, I can't find a way to do the same when the test passes.
Trying to answer as many lines as possible.
Pytest uses generators to iterate over the report steps: because pytest_runtest_makereport is a hookwrapper, the bare outcome = yield pauses your function, lets pytest produce the report, and hands you back an outcome object; outcome.get_result() then returns that report. The hook is invoked for every report.when phase (not .outcome - this is a bug in the documentation snippet), which according to pytest can be: 'collect', 'setup', 'call', and 'teardown'.
The confusion about failed and xfail (expected to fail) is about how you define a test failure: it is an error if the test was skipped but was expected to fail, or if it failed but was not expected to fail.
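For illustration, here is a minimal sketch (the test bodies are made up) of how the two outcomes differ in the makereport hook:

import pytest

@pytest.mark.xfail(reason="demonstration only")
def test_expected_to_fail():
    assert 1 == 2  # fails, but was expected to: the call report is skipped and has wasxfail

def test_plain_failure():
    assert 1 == 2  # plain failure: report.failed is True, no wasxfail attribute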
The about:blank behaviour could also be a bug.
What you want to use your if statements on is not the call info but the report:
if report.failed:
    do_stuff()
if report.passed:
    do_stuff_different()
One way to get more info about code and its context would be to debug it using breakpoint().
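For example, a throwaway sketch (only for local debugging) that drops into the debugger right after the report is available, so you can inspect its attributes:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    breakpoint()  # inspect report.when, report.outcome, report.passed, etc. in pdb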
So the snippet you are looking for is:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.when == "call":
        xfail = hasattr(report, "wasxfail")
        if (report.skipped and xfail) or (report.failed and not xfail):
            extra.append(pytest_html.extras.url("http://www.google.com/"))
        if report.passed:
            extra.append(pytest_html.extras.url("http://www.stackoverflow.com/"))
        report.extra = extra
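And to attach a different message depending on pass/fail (your second question), the same pattern works with pytest-html's html/text extras; a minimal sketch, with placeholder messages:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.when == "call":
        if report.passed:
            extra.append(pytest_html.extras.html("<div>Everything worked.</div>"))  # placeholder message
        elif report.failed:
            extra.append(pytest_html.extras.html("<div>Something broke here.</div>"))  # placeholder message
        report.extra = extra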
I'd like to run pytest, store the results, and present them to users on demand (e.g. store pytest results in a db and expose them through a web service).
I could run pytest from the command line with an option to save a results report to a file, then find and parse that file, but it feels silly to have the results in a (pytest) Python app, write them to a file, and then immediately look for the file and parse it back into Python for further processing. I know I can run pytest programmatically via pytest.main(args); however, it only returns an exit code and no details about the test results - how can I retrieve the results when using pytest.main()?
I'm looking for something like
args = ...  # arguments
ret_code = pytest.main(args=args)  # pytest.main() as is only returns a trivial return code
my_own_method_to_process(pytest.results)  # how to retrieve some pytest.results object containing execution data (executed tests, pass/fail info, etc. - what pytest prints to the console or saves into file reports)?
There are a couple of similar questions, but always with some deviation that doesn't work for me. I simply want to run pytest from my code and, whatever format the output is in, directly grab it and process it further.
(Note: I'm in a corporate environment where installing new packages (i.e. pytest plugins) is limited, so I'd like to achieve this without installing any other module/pytest plugin.)
Write a small plugin that collects and stores reports for each test. Example:
import time
import pytest

class ResultsCollector:
    def __init__(self):
        self.reports = []
        self.collected = 0
        self.exitcode = 0
        self.passed = 0
        self.failed = 0
        self.xfailed = 0
        self.skipped = 0
        self.total_duration = 0

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when == 'call':
            self.reports.append(report)

    def pytest_collection_modifyitems(self, items):
        self.collected = len(items)

    def pytest_terminal_summary(self, terminalreporter, exitstatus):
        self.exitcode = exitstatus.value
        self.passed = len(terminalreporter.stats.get('passed', []))
        self.failed = len(terminalreporter.stats.get('failed', []))
        self.xfailed = len(terminalreporter.stats.get('xfailed', []))
        self.skipped = len(terminalreporter.stats.get('skipped', []))
        self.total_duration = time.time() - terminalreporter._sessionstarttime

def run():
    collector = ResultsCollector()
    pytest.main(plugins=[collector])
    for report in collector.reports:
        print('id:', report.nodeid, 'outcome:', report.outcome)  # etc.
    print('exit code:', collector.exitcode)
    print('passed:', collector.passed, 'failed:', collector.failed,
          'xfailed:', collector.xfailed, 'skipped:', collector.skipped)
    print('total duration:', collector.total_duration)

if __name__ == '__main__':
    run()
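Note that pytest.main(plugins=[collector]) with no args argument collects tests from the current directory; you can pass the usual CLI options as a list, e.g. pytest.main(['-x', 'tests/'], plugins=[collector]).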
We're using py.test to execute our integration tests. Since we have quite a lot of tests, we'd like to monitor the progress in a dashboard we use.
Is it possible to configure a webhook or something similar that pytest will call with the result of each executed test (passed/failed/skipped)?
I did find the teamcity integration, but we'd prefer to monitor the progress on a different dashboard.
It depends on what data you want to emit. If a simple completion check will suffice, write a custom pytest_runtest_logfinish hook in the conftest.py file as it directly provides lots of test info:
def pytest_runtest_logfinish(nodeid, location):
    (filename, line, name) = location
    print('finished', nodeid, 'in file', filename,
          'on line', line, 'name', name)
Should you need to access the test result, a custom pytest_runtest_makereport is a good option. You can get the same test info (and more) as above from the item parameter:
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'teardown':
        (filename, line, name) = item.location
        print('finished', item.nodeid, 'with result', result.outcome,
              'in file', filename, 'on line', line, 'name', name)
You can also go with the fixture teardown option, as suggested in the comments:
@pytest.fixture(autouse=True)
def myhook(request):
    yield
    item = request.node
    (filename, line, name) = item.location
    print('finished', item.nodeid, 'in file', filename,
          'on line', line, 'name', name)
However, it depends on when you want your webhook to fire: the custom hookimpls above run once the test has finished and all fixtures have finalized, while with the fixture example you can't guarantee that all other fixtures have finalized, as there is no explicit fixture ordering. Also, should you need the test result, you can't access it from a fixture.
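To actually emit to a dashboard, you could replace the print with an HTTP call. A minimal sketch, assuming the requests package is available and https://dashboard.example.com/hook stands in for your real endpoint:

import pytest
import requests  # assumed to be available; any HTTP client works

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'call':
        payload = {'test': item.nodeid, 'outcome': result.outcome}
        try:
            # hypothetical endpoint; replace with your dashboard's URL
            requests.post('https://dashboard.example.com/hook', json=payload, timeout=5)
        except requests.RequestException as e:
            print('webhook call failed:', e)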
I am writing tests in pytest-bdd with Selenium, and I am using pytest-html to generate the report. For debugging purposes, and just to have proper logging, I want Selenium screenshots and the rest of the logs in the html report. But I am unable to get a Selenium screenshot into a passed test's report.
Here are the things I am trying.
There is a pytest-html hook wrapper in conftest.py
conftest.py
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    print("printing report")
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        mylogs = ""
        with open('/tmp/test.log', 'r') as logfile:
            for line in logfile:
                mylogs = mylogs + line + "<br>"
        extra.append(pytest_html.extras.html('<html><body>{}</body></html>'.format(mylogs)))
        report.extra = extra
This code adds the logs to my report.html.
Similarly, I will be adding a few Selenium screenshots in my test code, and I want to know whether we can generate a report containing all of those screenshots.
Following is my test file
test_file.py
import logging
import time

from selenium import webdriver

logger = logging.getLogger(__name__)

def test_case():
    logger.info("I will now open browser")
    driver = webdriver.Chrome()
    driver.get('http://www.google.com')
    driver.save_screenshot('googlehome.png')
    time.sleep(3)
    driver.quit()
I want googlehome.png and all the other png files to be part of the html report. It would be great if we could generate a Robot Framework-like html report.
Is there any way in pytest to do that?
Following is the command I use to generate report
py.test -s --html=report.html --self-contained-html -v
You have to pass the webdriver from the test into the pytest reporting system.
In my case I use the webdriver as a fixture. That has a lot of other advantages - for example, you can run the tests against any set of browsers and control that from one place.
import logging

import pytest
from selenium import webdriver

logger = logging.getLogger(__name__)

@pytest.fixture(scope='session', params=['chrome'], ids=lambda x: 'Browser: {}'.format(x))
def web_driver(request):
    browsers = {'chrome': webdriver.Chrome}
    return browsers[request.param]()  # the original browsers[]() was missing request.param

def test_case(web_driver):
    logger.info("I will now open browser")
    web_driver.get('http://www.google.com')
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    if rep.when == 'call' and not rep.failed:
        try:
            if 'web_driver' in item.fixturenames:
                web_driver = item.funcargs['web_driver']
            else:
                return  # this test does not use web_driver, so no screenshot is needed
            # web_driver.save_screenshot and other magic to add the screenshot to your report
        except Exception as e:
            print('Exception while screen-shot creation: {}'.format(e))
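To fill in the "magic" placeholder above, here is a sketch of one way to embed the screenshot, assuming the pytest-html plugin is active (extras.image accepts a base64-encoded string, which Selenium's get_screenshot_as_base64() returns):

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    rep = outcome.get_result()
    extra = getattr(rep, 'extra', [])
    if rep.when == 'call' and 'web_driver' in item.fixturenames:
        web_driver = item.funcargs['web_driver']
        try:
            # base64 content is embedded directly, so it survives --self-contained-html
            extra.append(pytest_html.extras.image(web_driver.get_screenshot_as_base64()))
        except Exception as e:
            print('Exception while screen-shot creation: {}'.format(e))
    rep.extra = extra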
Here is how I solved mine. This is how you access the webdriver from the report generation hook:
import pytest
from datetime import datetime

@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    timestamp = datetime.now().strftime('%H-%M-%S')
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        feature_request = item.funcargs['request']
        driver = feature_request.getfixturevalue('browser')
        driver.save_screenshot('D:/report/scr' + timestamp + '.png')
        extra.append(pytest_html.extras.image('D:/report/scr' + timestamp + '.png'))
        # always add url to report
        extra.append(pytest_html.extras.url('http://www.example.com/'))
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            # only add additional html on failure
            extra.append(pytest_html.extras.image('D:/report/scr.png'))
            extra.append(pytest_html.extras.html('<div>Additional HTML</div>'))
        report.extra = extra
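(A side note: the original snippet used request.getfuncargvalue, which was renamed to request.getfixturevalue in pytest 3.0 and removed in pytest 4.0; the version above uses the newer name.)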
I have some experience in Python, but I have never used try/except blocks to catch errors, due to a lack of formal training.
I am working on extracting a few articles from Wikipedia. For this I have an array of titles, a few of which have no article or search result. I would like the page-retrieval function to just skip those few names and continue running the script on the rest. Reproducible code follows.
import wikipedia
# This one works.
links = ["CPython"]
test = [wikipedia.page(link, auto_suggest=False) for link in links]
test = [testitem.content for testitem in test]
print(test)
#The sequence breaks down if there is no wikipedia page.
links = ["CPython","no page"]
test = [wikipedia.page(link, auto_suggest=False) for link in links]
test = [testitem.content for testitem in test]
print(test)
The library uses a method like this (Edit: I have now included the complete function). Normally changing it would be really bad practice, but since this is just a one-off data extraction, I am willing to modify my local copy of the library to get it to work.
def page(title=None, pageid=None, auto_suggest=True, redirect=True, preload=False):
    '''
    Get a WikipediaPage object for the page with title `title` or the pageid
    `pageid` (mutually exclusive).

    Keyword arguments:
    * title - the title of the page to load
    * pageid - the numeric pageid of the page to load
    * auto_suggest - let Wikipedia find a valid page title for the query
    * redirect - allow redirection without raising RedirectError
    * preload - load content, summary, images, references, and links during initialization
    '''
    if title is not None:
        if auto_suggest:
            results, suggestion = search(title, results=1, suggestion=True)
            try:
                title = suggestion or results[0]
            except IndexError:
                # if there is no suggestion or search results, the page doesn't exist
                raise PageError(title)
        return WikipediaPage(title, redirect=redirect, preload=preload)
    elif pageid is not None:
        return WikipediaPage(pageid=pageid, preload=preload)
    else:
        raise ValueError("Either a title or a pageid must be specified")
What should I do to retrieve only the pages that do not raise the error? Maybe there is a way to filter out all items in the list that raise this error, or an error of some kind. Returning "NA" or similar would be fine for pages that don't exist; skipping them without notice would be fine too. Thanks!
The function wikipedia.page will raise a wikipedia.exceptions.PageError if the page doesn't exist. That's the error you want to catch.
import wikipedia

links = ["CPython", "no page"]
test = []
for link in links:
    try:
        # try to load the wikipedia page
        page = wikipedia.page(link, auto_suggest=False)
        test.append(page)
    except wikipedia.exceptions.PageError:
        # if a PageError was raised, ignore it and continue to the next link
        continue
You have to surround the wikipedia.page call with a try block, so I'm afraid you can't use the list comprehension directly.
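If you'd like to keep something comprehension-shaped anyway, one workaround (a sketch; safe_page is a made-up helper name) is a small wrapper that swallows the error:

import wikipedia

def safe_page(link):
    """Return the page, or None when it does not exist."""
    try:
        return wikipedia.page(link, auto_suggest=False)
    except wikipedia.exceptions.PageError:
        return None

links = ["CPython", "no page"]
pages = [p for p in (safe_page(link) for link in links) if p is not None]
test = [page.content for page in pages]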
Understand that this is bad practice, but for a one-off quick-and-dirty script you can simply break the list comprehension down into a loop and ignore the error:
links = ["CPython", "no page"]
test = []
for link in links:
try:
page = wikipedia.page(link, auto_suggest=False)
test.append(page)
except wikipedia.exceptions.PageError:
pass
test = [testitem.content for testitem in test]
print(test)
pass tells Python essentially to trust you and ignore the error so that it can continue on about its day.
I'm writing a small fixture for implementing regression tests. The function under test does not contain any assert statements but produces output, which is compared to a recorded output that is assumed to be correct.
This is a simplified snippet to demonstrate what I'm doing:
@pytest.yield_fixture()
def regtest(request):
    fp = cStringIO.StringIO()
    yield fp
    reset, full_path, id_ = _setup(request)
    if reset:
        _record_output(fp.getvalue(), full_path)
    else:
        failed = _compare_output(fp.getvalue(), full_path, request, id_)
        if failed:
            pytest.fail("regression test %s failed" % id_, pytrace=False)
In general my approach works, but I want to improve the error reporting so that the fixture indicates the failure of a test rather than the test function itself: this implementation always prints a . because the test function does not raise any exception, followed by an extra E if pytest.fail is called in the last line.
So what I want is to suppress the . triggered by the function under test and let my fixture code output the appropriate character.
Update:
I was able to improve the output, but I still have too many "." characters in the output while the tests are running. It is uploaded at https://pypi.python.org/pypi/pytest-regtest and the repository is at https://sissource.ethz.ch/uweschmitt/pytest-regtest/tree/master (sorry for posting links, but the files have grown a bit now).
Solution:
I came up with a solution by implementing a hook which handles the regtest result. The code is then (simplified):
@pytest.yield_fixture()
def regtest(request):
    fp = cStringIO.StringIO()
    yield fp

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    # we only handle the regtest fixture if no other exception came up during testing:
    if outcome.excinfo is not None:
        return
    regtest = item.funcargs.get("regtest")
    if regtest is not None:
        _handle_regtest_result(regtest)
And _handle_regtest_result either stores the recorded values or does the appropriate checks. The plugin is now available at https://pypi.python.org/pypi/pytest-regtest
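For illustration, _handle_regtest_result could look roughly like this (a sketch only; the file layout and helper body are assumptions, not the published implementation):

import os
import pytest

def _handle_regtest_result(regtest):
    # 'regtest' is the StringIO object the fixture yielded
    current = regtest.getvalue()
    full_path = "recorded_output.txt"  # hypothetical location of the recorded output
    if not os.path.exists(full_path):
        # first run: record the output as the reference
        with open(full_path, "w") as fh:
            fh.write(current)
    else:
        # later runs: compare against the recorded reference
        with open(full_path) as fh:
            if fh.read() != current:
                pytest.fail("regression test failed", pytrace=False)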
You are mixing two things there: the fixture itself (setting up conditions for your test) and the expected behavior, _compare_output(a, b). You are probably looking for something along these lines:
import pytest

@pytest.fixture()
def file_fixture():
    fp = cStringIO.StringIO()
    return fp.getvalue()

@pytest.fixture()
def request_fixture(request, file_fixture):
    return _setup(request)

def test_regression(request, request_fixture, file_fixture):
    reset, full_path, id_ = request_fixture
    if reset:
        _record_output(file_fixture, full_path)
    else:
        failed = _compare_output(file_fixture, full_path, request, id_)
        # the original asserted `failed is True`, which is inverted; fail when outputs differ
        assert not failed, "regression test %s failed" % id_