Is it possible to change junit_suite_name depending on pytest parameters?

I'm trying to create a test report in JUnit XML format while running pytest on Jenkins.
The default test suite name is "pytest", but I want to change the name depending on parameter values.
For example,
in conftest.py, I have
def pytest_addoption(parser):
    parser.addoption("--site", type=str.upper, action="append", default=[],
                     help="testing site")
And I want to change junit_suite_name option depending on the site value.
I read the pytest documentation and found that you can change the name in the config file like this
[pytest]
junit_suite_name = my_suite
or on the command line with -o junit_suite_name.
But this way, the name will always be the same for all test cases.
Is there a way to group the suite name conditionally?
Thanks

You can change ini options programmatically by setting or changing values in the config.inicfg dict. For example, do it in a custom pytest_configure hookimpl:
def pytest_configure(config):
    # --site is declared with action="append" and type=str.upper,
    # so config.option.site is a list of uppercased values
    if 'FOO' in config.option.site:
        config.inicfg['junit_suite_name'] = 'bar'
The suite name in the JUnit report will now be bar when you run pytest --site=foo.
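If you want the suite name to follow the parameter rather than hard-coding each case, a minimal sketch (assuming the --site option from the question, which collects an uppercased list) could derive the name from whatever was passed; the "pytest-" prefix is purely illustrative:

# conftest.py - a sketch that builds the suite name from the --site values
def pytest_configure(config):
    sites = config.getoption("site")  # a list, because the option uses action="append"
    if sites:
        # e.g. `pytest --site=tokyo --site=osaka` -> suite name "pytest-TOKYO-OSAKA"
        config.inicfg["junit_suite_name"] = "pytest-" + "-".join(sites)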

Related

Load existing data catalog programmatically

I want to write pytest unit tests for Kedro 0.17.5. They need to perform integrity checks on dataframes created by the pipeline.
These dataframes are specified in the catalog.yml and already persisted successfully using kedro run. The catalog.yml is in conf/base.
I have a test module test_my_dataframe.py in src/tests/pipelines/my_pipeline/.
How can I load the data catalog based on my catalog.yml programmatically from within test_my_dataframe.py in order to properly access my specified dataframes?
Or, for that matter, how can I programmatically load the whole project context (including the data catalog) in order to also execute nodes etc.?
For unit testing, we test just the function under test and mock/patch everything external to it. Check whether you really need the Kedro project context while writing the unit test.
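For instance, if the integrity check only concerns what a node produces, a plain unit test can build a small dataframe in memory and call the node function directly, with no Kedro context at all; my_node_function and its import path below are hypothetical stand-ins for a node from the question's pipeline:

import pandas as pd

# hypothetical node import - adjust to the real pipeline module
from demo.pipelines.my_pipeline.nodes import my_node_function

def test_my_node_function_keeps_all_rows():
    input_df = pd.DataFrame({"some_id": [1, 2, 3]})
    result = my_node_function(input_df)
    # the node should not silently drop rows
    assert len(result) == len(input_df)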
If you really do need the project context in a test, you can do something like the following:
from pathlib import Path

from kedro.framework.project import configure_project
from kedro.framework.session import KedroSession

with KedroSession.create(package_name="demo", project_path=Path.cwd()) as session:
    context = session.load_context()
    catalog = context.catalog
Or you can create a pytest fixture, with a scope of your choice, to reuse it again and again:
import pytest
from pathlib import Path
from kedro.framework.session import KedroSession
# _activate_session is a private helper in Kedro 0.17.x
from kedro.framework.session.session import _activate_session

@pytest.fixture
def get_project_context():
    session = KedroSession.create(
        package_name="demo",
        project_path=Path.cwd()
    )
    _activate_session(session, force=True)
    context = session.load_context()
    return context
The different args supported by KedroSession.create are documented here: https://kedro.readthedocs.io/en/0.17.5/kedro.framework.session.session.KedroSession.html#kedro.framework.session.session.KedroSession.create
To read more about pytest fixtures, see https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session
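For example, a test in test_my_dataframe.py could then pull a dataframe through the catalog and run its checks; "my_dataframe" below is a hypothetical catalog entry standing in for whatever is named in catalog.yml, and the assertion assumes a pandas dataset:

# test_my_dataframe.py - a sketch using the get_project_context fixture above
def test_my_dataframe_integrity(get_project_context):
    catalog = get_project_context.catalog
    df = catalog.load("my_dataframe")  # hypothetical entry from catalog.yml
    assert not df.empty                # the persisted dataframe should contain rows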

xfail pytest tests from the command line

I have a test suite where I need to mark some tests as xfail, but I cannot edit the test functions themselves to add markers. Is it possible to specify that some tests should be xfail from the command line using pytest? Or barring that, at least by adding something to pytest.ini or conftest.py?
I don't know of a command line option to do this, but if you can filter out the respective tests, you may implement pytest_collection_modifyitems and add an xfail marker to these tests:
conftest.py
names_to_be_xfailed = ("test_1", "test_3")

def pytest_collection_modifyitems(config, items):
    for item in items:
        if item.name in names_to_be_xfailed:
            item.add_marker("xfail")
or, if the name is not unique, you could also filter by item.nodeid.
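Building on that hookimpl, one way to get real command-line control is to register a custom option in conftest.py and read it during collection; the --xfail-name flag below is a hypothetical name chosen for this sketch:

# conftest.py - a sketch; --xfail-name is a made-up, repeatable option
def pytest_addoption(parser):
    parser.addoption("--xfail-name", action="append", default=[],
                     help="name of a test to mark as xfail (may be given several times)")

def pytest_collection_modifyitems(config, items):
    names_to_be_xfailed = config.getoption("--xfail-name")
    for item in items:
        if item.name in names_to_be_xfailed:
            item.add_marker("xfail")

It would then be invoked as, for example, pytest --xfail-name test_1 --xfail-name test_3.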

How to explicitly instruct PyTest to drop a database after some tests?

I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires keeping data saved in the database for a certain time instead of dropping it after every single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while they share the same test database for some time, dropping it thereafter.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that will be used during tests. Have a look at fixture scope, which you can set so that the fixture is not deleted from the database and recreated for each test that requires it, but is instead created once for a group of tests and deleted afterwards.
You should read the documentation on pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixture scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
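As a minimal sketch of that idea with pytest-django (assuming the MyModel from the question and a hypothetical myapp.models import path), a module-scoped fixture can create the row once for all tests in the module by unblocking database access through pytest-django's django_db_blocker fixture:

import pytest

from myapp.models import MyModel  # hypothetical import path for the question's model

@pytest.fixture(scope="module")
def saved_mymodel(django_db_setup, django_db_blocker):
    # create the object once for every test in this module
    with django_db_blocker.unblock():
        obj = MyModel.objects.create(description="some description")
    yield obj
    # clean up after the last test in the module has run
    with django_db_blocker.unblock():
        obj.delete()

def test_description(saved_mymodel):
    # the same row is shared by every test in this module
    assert saved_mymodel.description == "some description"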

Create Robot Framework test cases dynamically when running a test suite

I have a very specific scenario where I'm inserting some data into the database (e.g. 3 inserts, each of which returns some ID), and based on the return values I want to dynamically create test cases for them.
E.g.
*** Variables ***
@{result}    ${EMPTY}

*** Test Cases ***
Some dummy sql inserts
    ${result}    Insert sql statements    dt1    dt2    dt3    # e.g. returns ['123', '456', '789']

Verify some ids
    # NOPE, sorry, I can't use [Template] because each iteration is not marked on the report as a "TEST" but as a "VAR"
    Verify if ids exist somewhere    ${result}    # This keyword execution should create another 3 test cases, one for each item of the ${result} list

*** Keywords ***
Insert sql statements
    [Arguments]    @{data}
    ${result}    Create List
    FOR    ${elem}    IN    @{data}
        ${return_id}    SomeLib.Execute SQL    INSERT INTO some_table(some_id) VALUES (${elem})
        Append To List    ${result}    ${return_id}
    END
    [Return]    ${result}

Verify if ids exist somewhere
    [Arguments]    ${some_list_of_ids}
    FOR    ${id}    IN    @{some_list_of_ids}
        Do some stuff on    ${id}
    END
I was trying to figure out how to do that by referring to the Robot Framework API documentation, but without any success.
Can you please tell/advise whether it's feasible, and if so, how I can achieve it?
So far I've figured out that there might be 2 ways of doing this:
by creating a listener
by creating own keyword
In both cases, I have to put the logic there, but can't figure out how to create test cases on-the-fly.
Help, please? :)
P.S. Some examples are more than welcome. Thanks in advance
There is a blog post with an answer for you:
https://gerg.dev/2018/09/dynamically-create-test-cases-with-robot-framework/
As you suggested, the solution is to create a listener so you can add tests dynamically. Just read the post carefully, as there are some constraints on when you can and cannot create tests during the execution process.
Also, the post was written for Robot Framework 3.x; for 4.x you need to make a tiny change in the class, replacing:
tc.keywords.create(name=kwname, args=args)
with:
tc.body.create_keyword(name=kwname, args=args)
Example of how to implement this:
demo.robot:
*** Settings ***
Library    DynamicTestCases.py

*** Test Cases ***
Create Dynamic Test Cases
    @{TestNamesList}    Create List    "Test 1"    "Test 2"    "Test 3"
    FOR    ${element}    IN    @{TestNamesList}
        Add Test Case    ${element}    Keyword To Execute
    END

*** Keywords ***
Keyword To Execute
    Log    This is executed for each test!
DynamicTestCases.py contents (basic copy from the url I posted + the changed line):
from __future__ import print_function
from robot.running.model import TestSuite


class DynamicTestCases(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'TEST SUITE'

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.current_suite = None

    def _start_suite(self, suite, result):
        # save current suite so that we can modify it later
        self.current_suite = suite

    def add_test_case(self, name, kwname, *args):
        """Adds a test case to the current suite

        'name' is the test case name
        'kwname' is the keyword to call
        '*args' are the arguments to pass to the keyword

        Example:
            add_test_case    Example Test Case
            ...    log    hello, world    WARN
        """
        tc = self.current_suite.tests.create(name=name)
        # tc.keywords.create(name=kwname, args=args)  # deprecated in 4.0
        tc.body.create_keyword(name=kwname, args=args)


# To get our class to load, the module needs to have a class
# with the same name as the module. This makes that happen:
globals()[__name__] = DynamicTestCases
This is a small example of how to make it work.
If, for example, you want to pass a variable to the keyword, just add the argument:
*** Settings ***
Library    DynamicTestCases.py

*** Test Cases ***
Create Dynamic Test Cases
    @{TestNamesList}    Create List    "Test 1"    "Test 2"    "Test 3"
    FOR    ${element}    IN    @{TestNamesList}
        Add Test Case    ${element}    Keyword To Execute    ${element}
    END

*** Keywords ***
Keyword To Execute
    [Arguments]    ${variable}
    Log    The variable sent to the test was: ${variable}

Python Behave - how to pass value from a scenario to use in a fixture on a feature level?

I have the following test scenario:
Check if project with a specific name was created
Edit this project
Verify that it was edited
Remove this project as part of a teardown procedure
Here is example code to achieve that:
Scenario:
@fixture.remove_edited_project
@web
Scenario: Edit a project data
    Given Project was created with the following parameters
        | project_name       |
        | my_project_to_edit |
    When I edit the "my_project_to_edit" project
    Then Project is edited
A step to save the data in a variable to be used in the teardown function (fixture):
@step('I edit the "{project_name}" project')
def step_impl(context, project_name):
    # steps related to editing the project
    # storing value in context variable to be used in fixture
    context.edited_project_name = project_name
and an example fixture function to remove the project after the scenario:
@fixture
def remove_edited_project(context):
    yield
    logging.info(f'Removing project: "{context.edited_project_name}"')
    # part deleting the project whose name is stored in context.edited_project_name
In this configuration everything works fine and the project is deleted by the fixture in any case (test failed or passed), which is alright.
But when I want to apply this on the feature level, meaning placing the @fixture.remove_edited_project tag before the Feature keyword:
@fixture.remove_edited_project
Feature: My project Edit feature
then it does not work.
I already know the reason: the context.edited_project_name variable is cleaned up after every scenario and is no longer available to the fixture function later.
Is there a good way to pass a parameter to a fixture on the feature level? Somehow globally?
I tried using global variables as an option, but this started to get a bit dirty and problematic in this framework.
Ideally it would be something like @fixture.edited_project_name('my_project_to_edit').
Because the context gets cleaned of variables created during execution of a scenario, you need a mechanism that persists through the feature. One way to do this is to create a dictionary or other container in the context during setup of the fixture, so that it persists through the feature. The scenarios can set attributes or add to the container, and because the dictionary was added during the feature, it will still exist during teardown of the fixture. E.g.,
@fixture
def remove_edited_project(context):
    context.my_fixture_properties = {}
    yield
    logging.info(f'Removing project: "{context.my_fixture_properties["edited_project_name"]}"')

@step('I edit the "{project_name}" project')
def step_impl(context, project_name):
    # steps related to editing the project
    # storing the value in the fixture's dict so it survives scenario cleanup
    context.my_fixture_properties['edited_project_name'] = project_name
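For completeness, a feature-level fixture tag still has to be activated in environment.py; a minimal sketch of that wiring, assuming the fixture lives in a module named fixtures that environment.py can import, could look like this:

# environment.py - a sketch; the "fixtures" module name is an assumption
from behave import use_fixture
from fixtures import remove_edited_project

def before_tag(context, tag):
    if tag == "fixture.remove_edited_project":
        use_fixture(remove_edited_project, context)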
