How does Bazel's py_binary stamping work?

The docs say that the "stamp" parameter will "enable link stamping", that it will "encode build information into the binary", and that it will "stamp the build information into the binary".
But where does that information actually go, and how can I retrieve it?
It does not seem to be a simple search-and-replace, a variable definition, an environment variable, or a format-string variable. Test Python script:
BUILD_HOST="ASDF"
print("{BUILD_HOST}")
print(BUILD_HOST)
Build rule:
py_binary(name="catself", srcs=["catself.py"], stamp=1)
Output:
$ bazel run --stamp catself
INFO: Analyzed target //:catself (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:catself up-to-date:
bazel-bin/catself
INFO: Elapsed time: 0.083s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
{BUILD_HOST}
ASDF

It looks like stamping might not be implemented for py_binary. It does work for genrule, though, so something simple like this should work:
py_binary(
    name = "foo",
    srcs = ["foo.py"],
    data = [":stable-status.txt"],
)

genrule(
    name = "copy_stable-status.txt",
    outs = ["stable-status.txt"],
    cmd = "cp bazel-out/stable-status.txt $@",
    stamp = 1,
)
foo.py:
build_info = {}
with open("stable-status.txt") as stable_status:
    for line in stable_status.readlines():
        key, val = line.split(" ", 1)
        build_info[key] = val.strip()
print("Build label is:")
print(build_info['BUILD_EMBED_LABEL'])
then:
$ bazel run foo --embed_label=foobar
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:foo (17 packages loaded, 102 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 2.985s, Critical Path: 0.11s
INFO: 1 process: 1 linux-sandbox.
INFO: Build completed successfully, 6 total actions
INFO: Build completed successfully, 6 total actions
Build label is:
foobar
Or something fancier:
py_binary(
    name = "bar",
    srcs = ["bar.py"],
    deps = [":buildinfo"],
)

py_library(
    name = "buildinfo",
    srcs = [":buildinfo.py"],
)

genrule(
    name = "gen_buildinfo",
    outs = ["buildinfo.py"],
    cmd = r"""sed -E 's/(.*) (.*)/\1 = "\2"/' bazel-out/stable-status.txt > $@""",
    stamp = 1,
)
bar.py:
import buildinfo
print("Build label is:")
print(buildinfo.BUILD_EMBED_LABEL)
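If you need more keys than Bazel provides by default, there is also the --workspace_status_command flag: it runs a script of your choice and merges its "KEY value" stdout lines into the status files, and keys prefixed with STABLE_ land in bazel-out/stable-status.txt, where the genrules above pick them up. A minimal sketch, assuming a tools/status.py script and a made-up STABLE_GIT_SHA key:
#!/usr/bin/env python3
# tools/status.py (hypothetical): each "KEY value" line printed here is merged
# into Bazel's build-info files; the STABLE_ prefix routes it to stable-status.txt.
import subprocess

result = subprocess.run(
    ["git", "rev-parse", "HEAD"],
    capture_output=True, text=True, check=False,
)
print("STABLE_GIT_SHA", result.stdout.strip() or "unknown")
Invoked roughly like this (the script must be executable):
$ bazel run --stamp --workspace_status_command=$PWD/tools/status.py bar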

Related

Python click sample code test case does not give 100% coverage [duplicate]

This question already has answers here:
Python coverage.py exclude_lines (2 answers)
is there a python-version specific "#pragma nocover" available for python coverage tool? (4 answers)
Closed last year.
I wrote the following code.
https://gitlab.com/ksaito11/click-test
$ cat commands/cmd.py
import click
from commands.hello import hello

def print_version(ctx, param, value):
    if not value or ctx.resilient_parsing:
        return
    click.echo('Version 1.0')
    ctx.exit()

@click.group()
@click.option('--opt1')
@click.option('--version', is_flag=True, callback=print_version,
              expose_value=False, is_eager=True)
@click.pass_context
def cmd(ctx, **kwargs):
    ctx.obj = kwargs

def main():
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')

if __name__ == '__main__':
    main()
$ cat commands/hello.py
import click

@click.command()
def hello():
    click.echo('Hello World!')
The code works correctly.
$ export PYTHONPATH=.
$ python commands/cmd.py
Usage: cmd.py [OPTIONS] COMMAND [ARGS]...
Options:
--opt1 TEXT
--version
--help Show this message and exit.
Commands:
hello
$ python commands/cmd.py --version
Version 1.0
$ python commands/cmd.py hello
Hello World!
I wrote the following test case.
$ cat tests/test_cmd.py
from click.testing import CliRunner
import click
import pytest
from commands.cmd import cmd, main
from commands.hello import hello

def test_version():
    runner = CliRunner()
    result = runner.invoke(cmd, ["--version"])
    assert result.exit_code == 0

def test_help():
    runner = CliRunner()
    result = runner.invoke(cmd)
    assert result.exit_code == 0

def test_hello():
    runner = CliRunner()
    result = runner.invoke(hello)
    assert result.exit_code == 0
I measured the coverage with the following command.
$ pytest --cov-branch --cov=commands
================================================================ test session starts ================================================================
platform linux -- Python 3.9.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/ksaito/ghq/gitlab.com/ksaito11/click-test
plugins: cov-3.0.0
collected 3 items
tests/test_cmd.py ... [100%]
----------- coverage: platform linux, python 3.9.9-final-0 -----------
Name                   Stmts   Miss Branch BrPart  Cover
--------------------------------------------------------
commands/__init__.py       0      0      0      0   100%
commands/cmd.py           18      5      4      2    68%
commands/hello.py          4      0      0      0   100%
--------------------------------------------------------
TOTAL                     22      5      4      2    73%
================================================================= 3 passed in 0.15s =================================================================
I didn't know how to write tests for the part below, so I couldn't get 100% coverage.
def cmd(ctx, **kwargs):
    ctx.obj = kwargs

def main():
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')
The code below may not be needed when using @click.group(), but I couldn't tell.
def print_version(ctx, param, value):
    if not value or ctx.resilient_parsing:
        return
Please give me advice.
By adding the following settings, code that does not need to be included in coverage is excluded.
$ cat .coveragerc
[run]
branch = True

[report]
exclude_lines =
    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def main
    ctx.obj = kwargs
I deleted the code below because I thought it was unnecessary.
if not value or ctx.resilient_parsing:
    return
The coverage is now 100%.
$ pytest --cov-branch --cov=commands
================================================================ test session starts ================================================================
platform linux -- Python 3.9.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/ksaito/ghq/gitlab.com/ksaito11/click-test
plugins: cov-3.0.0
collected 3 items
tests/test_cmd.py ... [100%]
----------- coverage: platform linux, python 3.9.9-final-0 -----------
Name                   Stmts   Miss Branch BrPart  Cover
--------------------------------------------------------
commands/__init__.py       0      0      0      0   100%
commands/cmd.py           10      0      0      0   100%
commands/hello.py          4      0      0      0   100%
--------------------------------------------------------
TOTAL                     14      0      0      0   100%
================================================================= 3 passed in 0.22s =================================================================
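As an alternative to central patterns in .coveragerc, coverage.py also honors an inline "# pragma: no cover" comment; putting it on a def or if line excludes the whole clause. A sketch of how the untestable entry points in cmd.py could be marked instead (same functions as above):
def main():  # pragma: no cover
    cmd.add_command(hello)
    cmd(auto_envvar_prefix='HELLOCLI')

if __name__ == '__main__':  # pragma: no cover
    main()
This keeps the exclusion next to the code it applies to, at the cost of touching the source files.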

Make pytest output like googletest?

I'm using pytest for Python code testing. Since I use googletest for my C++ code testing, I like googletest's output format.
I'm wondering: is it possible to make pytest's output look like googletest's? The pytest output lines are too long, while googletest's are short:
// pytest example:
(base) zz@home% pytest test_rle_v2.py
================================================================================== test session starts ===================================================================================
platform linux -- Python 3.8.1, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/zz/work/test/learn-hp/.hypothesis/examples')
rootdir: /home/zz/work/test/learn-hp
plugins: env-0.6.2, hypothesis-4.38.0
collected 1 item
test_rle_v2.py . [100%]
=================================================================================== 1 passed in 0.46s ====================================================================================
// googletest example
(base) zz@home% ./test_version
[==========] Running 5 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 5 tests from VERSION
[ RUN      ] VERSION.str
[       OK ] VERSION.str (0 ms)
[ RUN      ] VERSION.parts
[       OK ] VERSION.parts (0 ms)
[ RUN      ] VERSION.metadata
[       OK ] VERSION.metadata (1 ms)
[ RUN      ] VERSION.atLeast
[       OK ] VERSION.atLeast (0 ms)
[ RUN      ] VERSION.hasFeature
[       OK ] VERSION.hasFeature (0 ms)
[----------] 5 tests from VERSION (1 ms total)
[----------] Global test environment tear-down
[==========] 5 tests from 1 test suite ran. (1 ms total)
[  PASSED  ] 5 tests.
After several hours of searching and trying, I found that a conftest.py file is what's required for my purpose. In conftest.py you can override default pytest behavior by providing hooks.
The following is a WIP example:
# conftest.py
import os
import random

def pytest_runtest_call(item):
    # Attach a "Run" line as a custom report section for every test call.
    item.add_report_section("call", "custom", " [ Run ] " + str(item))

def pytest_report_teststatus(report, config):
    #print(">>> outcome:", report.outcome)
    if report.when == 'call':
        # line = f' [ Run ] {report.nodeid}'
        # report.sections.append(('ChrisZZ', line))
        if report.outcome == 'failed':
            line = f' [ FAILED ] {report.nodeid}'
            report.sections.append(('failed due to', line))
    if report.when == 'teardown':
        if report.outcome == 'passed':
            line = f' [ OK ] {report.nodeid}'
            report.sections.append(('ChrisZZ', line))

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Collect all custom sections and print them in one block at the end.
    reports = terminalreporter.getreports('')
    content = os.linesep.join(text for report in reports for secname, text in report.sections)
    if content:
        terminalreporter.ensure_newline()
        #terminalreporter.section('', sep=' ', green=True, bold=True)
        #terminalreporter.section('My custom section2', sep='------]', green=True, bold=True, fullwidth=None)
        terminalreporter.line(content)
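To try the hooks out, any small test file will do; the file and test names below are made up:
# test_demo.py
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 2 - 1 == 1
With the conftest.py above in place, running pytest should print the " [ Run ] " / " [ OK ] " lines collected from the report sections at the end of the run.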

How to get progress of successful build through Jenkins Python API

I have written Python code to retrieve information about builds. It prints a summary of successful and unsuccessful builds.
from datetime import datetime
from prettytable import PrettyTable

# `server` is assumed to be an already-configured python-jenkins connection,
# e.g. jenkins.Jenkins(url, username, password) -- not shown in the question.
t = PrettyTable(['Job name', 'Successful', 'Failed', 'Unstable', 'Aborted', 'Total Builds', 'Failure Rate'])
t1 = PrettyTable(['Status', 'Job name', 'Build #', 'Date', 'Duration', 'Node', 'User'])
aggregation = {}
jobs = server.get_all_jobs(folder_depth=None)
for job in jobs:
    print(job['fullname'])
    aggregation[job['fullname']] = {"success": 0, "failure": 0, "aborted": 0, "unstable": 0}
    info = server.get_job_info(job['fullname'])
    # Loop over builds
    builds = info['builds']
    for build in builds:
        information = server.get_build_info(job["fullname"], build['number'])
        if "SUCCESS" in information['result']:
            aggregation[job['fullname']]['success'] = str(int(aggregation[job['fullname']]['success']) + 1)
        if "FAILURE" in information['result']:
            aggregation[job['fullname']]['failure'] = str(int(aggregation[job['fullname']]['failure']) + 1)
        if "ABORTED" in information['result']:
            aggregation[job['fullname']]['aborted'] = str(int(aggregation[job['fullname']]['aborted']) + 1)
        if "UNSTABLE" in information['result']:
            aggregation[job['fullname']]['unstable'] = str(int(aggregation[job['fullname']]['unstable']) + 1)
        t1.add_row([information['result'], job['fullname'], information["id"], datetime.fromtimestamp(information['timestamp'] / 1000), information["duration"], "master", information["actions"][0]["causes"][0]["userName"]])
    total_build = int(aggregation[job['fullname']]['success']) + int(aggregation[job['fullname']]['failure'])
    t.add_row([job["fullname"], aggregation[job['fullname']]['success'], aggregation[job['fullname']]['failure'], aggregation[job['fullname']]['aborted'], aggregation[job['fullname']]['unstable'], total_build, (float(aggregation[job['fullname']]['failure']) / total_build) * 100])
with open('result', 'w') as w:
    w.write(str(t1))
    w.write(str(t))
This is what the output looks like:
And this is what Windows batch execute command looks like:
cd E:\airflowtmp
conda activate web_scraping
python hello.py
hello.py prints hello world. If I add something like a counter = 100 that gets printed, how do I return it and show it in the resultant table?
Edit:
I am trying to get some kind of variable from the code to display. For instance, if I'm scraping pages and the scraper ran successfully, I want to know the number of pages it scraped. You can think of it as a simple counter. Is there any way to return a variable from Jenkins to Python?
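One possible approach (a sketch, not from the question): have hello.py print a machine-readable marker line, then pull it back out of the build's console log with python-jenkins' get_build_console_output and add it as an extra column. The PAGES_SCRAPED marker name is invented:
# In hello.py, run by the Jenkins batch step:
print("PAGES_SCRAPED=100")

# In the reporting script, after fetching `information` for a build:
console = server.get_build_console_output(job["fullname"], build["number"])
pages = "n/a"
for line in console.splitlines():
    if line.startswith("PAGES_SCRAPED="):
        pages = line.split("=", 1)[1]
# `pages` can then be appended as an extra column in t1.add_row(...).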

Would like to see list of deselected tests and their node ids in pytest output

Is there an option to list the deselected tests in the cli output along with the mark that triggered their deselection?
I know that in suites with many tests this would not be a good default, but it would be a useful option in something like API testing, where the tests are likely to be more limited.
The numeric summary
collected 21 items / 16 deselected / 5 selected
is helpful but not enough when trying to organize marks and see what happened in a ci build.
pytest has a hookspec pytest_deselected for accessing the deselected tests. Example: add this code to conftest.py in your test root dir:
def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    reporter = config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        reporter.line(f"deselected: {item.nodeid}", yellow=True, bold=True)
Running the tests now will give you an output similar to this:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collecting ...
deselected: test_spam.py::test_spam
deselected: test_spam.py::test_bacon
deselected: test_spam.py::test_ham
collected 4 items / 3 deselected / 1 selected
...
If you want a report in another format, simply store the deselected items in the config and use them for the desired output somewhere else, e.g. pytest_terminal_summary:
# conftest.py
import os

def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    config.deselected = items

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    deselected = getattr(config, "deselected", [])
    if deselected:
        terminalreporter.ensure_newline()
        terminalreporter.section('Deselected tests', sep='-', yellow=True, bold=True)
        content = os.linesep.join(item.nodeid for item in deselected)
        terminalreporter.line(content)
gives the output:
$ pytest -vv
...
plugins: cov-2.8.1, asyncio-0.10.0
collected 4 items / 3 deselected / 1 selected
...
---------------------------------------- Deselected tests -----------------------------------------
test_spam.py::test_spam
test_spam.py::test_bacon
test_spam.py::test_ham
================================= 1 passed, 3 deselected in 0.01s =================================
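To see the hook fire, deselect something, for example with a marker filter; the slow marker and test names here are invented (register the marker in pytest.ini to avoid warnings):
# test_spam.py
import pytest

@pytest.mark.slow
def test_spam():
    assert True

def test_bacon():
    assert True
Running pytest -m "not slow" should then list test_spam.py::test_spam under the Deselected tests section.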

How to add custom sections to terminal report in pytest

In pytest, when a test case fails, the report contains the following categories:
Failure details
Captured stdout call
Captured stderr call
Captured log call
I would like to add some additional custom sections (I have a server that runs in parallel and I would like to display the information logged by this server in a dedicated section).
How could I do that (if ever possible)?
Thanks
NOTE:
I have currently found the following in the pytest source code but don't know whether that is the right approach:
nodes.py
class Item(Node):
    ...
    def add_report_section(self, when, key, content):
        """
        Adds a new report section, similar to what's done internally
        to add stdout and stderr captured output::
        ...
        """
reports.py
class BaseReport:
    ...
    @property
    def caplog(self):
        """Return captured log lines, if log capturing is enabled

        .. versionadded:: 3.5
        """
        return "\n".join(
            content for (prefix, content) in self.get_sections("Captured log")
        )
To add custom sections to terminal output, you need to append to report.sections list. This can be done in pytest_report_teststatus hookimpl directly, or in other hooks indirectly (via a hookwrapper); the actual implementation heavily depends on your particular use case. Example:
# conftest.py
import os
import random
import pytest

def pytest_report_teststatus(report, config):
    messages = (
        'Egg and bacon',
        'Egg, sausage and bacon',
        'Egg and Spam',
        'Egg, bacon and Spam'
    )
    if report.when == 'teardown':
        # Attach a random message as a custom section on each test's teardown report.
        line = f'{report.nodeid} says:\t"{random.choice(messages)}"'
        report.sections.append(('My custom section', line))

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    reports = terminalreporter.getreports('')
    content = os.linesep.join(text for report in reports for secname, text in report.sections)
    if content:
        terminalreporter.ensure_newline()
        terminalreporter.section('My custom section', sep='-', blue=True, bold=True)
        terminalreporter.line(content)
Example tests:
def test_spam():
    assert True

def test_eggs():
    assert True

def test_bacon():
    assert False
When running the tests, you should see My custom section header at the bottom colored blue and containing a message for every test:
collected 3 items
test_spam.py::test_spam PASSED
test_spam.py::test_eggs PASSED
test_spam.py::test_bacon FAILED
============================================= FAILURES =============================================
____________________________________________ test_bacon ____________________________________________
def test_bacon():
> assert False
E assert False
test_spam.py:9: AssertionError
---------------------------------------- My custom section -----------------------------------------
test_spam.py::test_spam says: "Egg, bacon and Spam"
test_spam.py::test_eggs says: "Egg and Spam"
test_spam.py::test_bacon says: "Egg, sausage and bacon"
================================ 1 failed, 2 passed in 0.07 seconds ================================
The other answer shows how to add a custom section to the terminal report summary, but it's not the best way for adding a custom section per test.
For this goal, you can (and should) use the higher-level API add_report_section of an Item node (docs). A minimalist example is shown below, modify it to suit your needs. You can pass state from the test instance through an item node, if necessary.
In test_something.py, here is one passing test and two failing:
def test_good():
    assert 2 + 2 == 4

def test_bad():
    assert 2 + 2 == 5

def test_ugly():
    errorerror
In conftest.py, set up a hook wrapper:
import pytest

content = iter(["first", "second", "third"])

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    item.add_report_section("call", "custom", next(content))
The report will now display custom sections per-test:
$ pytest
============================== test session starts ===============================
platform linux -- Python 3.9.0, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /tmp/example
collected 3 items
test_something.py .FF [100%]
==================================== FAILURES ====================================
____________________________________ test_bad ____________________________________
def test_bad():
> assert 2 + 2 == 5
E assert (2 + 2) == 5
test_something.py:5: AssertionError
------------------------------ Captured custom call ------------------------------
second
___________________________________ test_ugly ____________________________________
def test_ugly():
> errorerror
E NameError: name 'errorerror' is not defined
test_something.py:8: NameError
------------------------------ Captured custom call ------------------------------
third
============================ short test summary info =============================
FAILED test_something.py::test_bad - assert (2 + 2) == 5
FAILED test_something.py::test_ugly - NameError: name 'errorerror' is not defined
========================== 2 failed, 1 passed in 0.02s ===========================
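Building on the note about passing state from the test through the item node, one way to do that (a sketch; the _custom_section attribute name is made up) is to stash text on request.node inside the test and read it back in the hook wrapper:
# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    yield
    # Pick up whatever the test stashed on its own node, if anything.
    text = getattr(item, "_custom_section", None)
    if text is not None:
        item.add_report_section("call", "custom", text)

# test_something.py
def test_with_details(request):
    request.node._custom_section = "server said: ready"
    assert True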
