I have a very specific scenario where I'm inserting some data into the database (e.g. let's say 3 inserts, each returning some ID), and based on the return values I want to dynamically create test cases for those return values.
E.g.
*** Variables ***
@{result}    ${EMPTY}

*** Test Cases ***
Some dummy sql inserts
    ${result}    Insert sql statements    dt1    dt2    dt3    # e.g. returns ['123', '456', '789']

Verify some ids
    # NOPE, sorry, I can't use [Template] because each iteration is not marked in the report as a "TEST" but as a "VAR"
    Verify if ids exist somewhere    ${result}    # This keyword execution should create another 3 test cases, one for each item from the ${result} list

*** Keywords ***
Insert sql statements
    [Arguments]    @{data}
    ${result}    Create List
    FOR    ${elem}    IN    @{data}
        ${return_id}    SomeLib.Execute SQL    INSERT INTO some_table(some_id) VALUES (${elem})
        Append To List    ${result}    ${return_id}
    END
    [Return]    ${result}

Verify if ids exist somewhere
    [Arguments]    ${some_list_of_ids}
    FOR    ${id}    IN    @{some_list_of_ids}
        Do some stuff on    ${id}
    END
I was trying to figure out how to do that by referring to the Robot Framework API documentation, but without any success.
Can you please tell me whether it's feasible, and if so, how I can achieve it?
So far I've figured out that there might be two ways of doing this:
by creating a listener
by creating my own keyword
In both cases I have to put the logic there, but I can't figure out how to create test cases on the fly.
Help, please? :)
P.S. Some examples are more than welcome. Thanks in advance
There is a blog post with an answer for you:
https://gerg.dev/2018/09/dynamically-create-test-cases-with-robot-framework/
As you suggested, the solution is to create a listener so you can add tests dynamically. Just read the post carefully, as there are some constraints on when you can and cannot create tests during the execution process.
Also, the post was written for Robot Framework 3.x; for 4.x you need to make a tiny change in the class, replacing:
tc.keywords.create(name=kwname, args=args)
with:
tc.body.create_keyword(name=kwname, args=args).
Example of how to implement this:
demo.robot:
*** Settings ***
Library    DynamicTestCases.py

*** Test Cases ***
Create Dynamic Test Cases
    @{TestNamesList}    Create List    "Test 1"    "Test 2"    "Test 3"
    FOR    ${element}    IN    @{TestNamesList}
        Add Test Case    ${element}    Keyword To Execute
    END

*** Keywords ***
Keyword To Execute
    Log    This is executed for each test!
DynamicTestCases.py contents (basically a copy from the URL I posted, plus the changed line):
from __future__ import print_function
from robot.running.model import TestSuite


class DynamicTestCases(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'TEST SUITE'

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.current_suite = None

    def _start_suite(self, suite, result):
        # save current suite so that we can modify it later
        self.current_suite = suite

    def add_test_case(self, name, kwname, *args):
        """Adds a test case to the current suite

        'name' is the test case name
        'kwname' is the keyword to call
        '*args' are the arguments to pass to the keyword

        Example:
            add_test_case    Example Test Case
            ...    log    hello, world    WARN
        """
        tc = self.current_suite.tests.create(name=name)
        # tc.keywords.create(name=kwname, args=args)  # deprecated in RF 4.0
        tc.body.create_keyword(name=kwname, args=args)


# To get our class to load, the module needs to have a class
# with the same name as the module. This makes that happen:
globals()[__name__] = DynamicTestCases
This is a small example of how to make it work.
If, for example, you want to pass a variable to the keyword, just add it as an argument:
*** Settings ***
Library    DynamicTestCases.py

*** Test Cases ***
Create Dynamic Test Cases
    @{TestNamesList}    Create List    "Test 1"    "Test 2"    "Test 3"
    FOR    ${element}    IN    @{TestNamesList}
        Add Test Case    ${element}    Keyword To Execute    ${element}
    END

*** Keywords ***
Keyword To Execute
    [Arguments]    ${variable}
    Log    The variable sent to the test was: ${variable}
I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires keeping data saved in the database for a certain time rather than dropping it after every single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while having them use the same test database for some time, dropping it afterwards.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that will be used during tests. Have a look at fixture scope, which you can set so that the fixture is not deleted from the database and re-created for each test that requires it, but is instead created once for a group of tests and deleted afterwards.
You should read the documentation of pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixtures' scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
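For illustration, here is a minimal sketch of what a shared, module-scoped fixture could look like with pytest-django (the myapp import path and the saved_instance name are placeholders, not from your code; django_db_setup and django_db_blocker are the pytest-django fixtures that allow database access outside the function-scoped db fixture):

# conftest.py -- illustrative sketch only
import pytest

from myapp.models import MyModel  # hypothetical import path for the model from the question


@pytest.fixture(scope="module")
def saved_instance(django_db_setup, django_db_blocker):
    # Created once per test module instead of once per test.
    with django_db_blocker.unblock():
        obj = MyModel.objects.create(description="some description")
    yield obj
    # Cleaned up after the last test in the module has run.
    with django_db_blocker.unblock():
        obj.delete()

Tests that take saved_instance as an argument then share the same database row for the whole module.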
I have a simple prerunmodifier that implements the start_suite method, in which it gets the suite setup keyword from the suite variable and prints its attributes. The object is an instance of the robot.running.model.Keyword class (here is the doc for that class). The name, type, id and parent attributes are correct, but the timeout, doc, tags and children attributes return nothing. The same goes for the keywords and messages attributes. Below is my simplified code example and the output.
I would expect the following children: Log, Log Many, No Operation. Is it possible to get the name (and arguments) of these keywords in a prerunmodifier like this? I am using robotframework==3.1.2.
This is the suite file (test.robot):
*** Settings ***
Suite Setup    Custom Suite Setup Keyword

*** Test Cases ***
A test
    No Operation

*** Keywords ***
Custom Suite Setup Keyword
    [Timeout]    2 min
    [Documentation]    It is a keyword doc.
    [Tags]    1TAG    2TAG
    Log    1st child
    Log Many    2nd    child
    No Operation
    [Teardown]    My Keyword Teardown

My Keyword Teardown
    Log    teardown
This is the prerunmodifier (modifier.py):
from robot.api import SuiteVisitor
from robot.libraries.BuiltIn import BuiltIn


class MyModifier(SuiteVisitor):

    def __init__(self):
        self._BuiltIn = BuiltIn()

    def start_suite(self, suite):
        self._BuiltIn.log_to_console(f'suite keywords - {suite.keywords}')
        self._BuiltIn.log_to_console(f'class - {type(suite.keywords.setup)}')
        self._BuiltIn.log_to_console(f'name - {suite.keywords.setup.name}')
        self._BuiltIn.log_to_console(f'id - {suite.keywords.setup.id}')
        self._BuiltIn.log_to_console(f'parent(suite) - {suite.keywords.setup.parent}')
        self._BuiltIn.log_to_console(f'timeout - {suite.keywords.setup.timeout}')
        self._BuiltIn.log_to_console(f'type - {suite.keywords.setup.type}')
        self._BuiltIn.log_to_console(f'doc - {suite.keywords.setup.doc}')
        self._BuiltIn.log_to_console(f'tags - {suite.keywords.setup.tags}')
        self._BuiltIn.log_to_console(f'children - {suite.keywords.setup.children}')
This is the output:
prompt# robot --prerunmodifier modifier.MyModifier --pythonpath ./ test.robot
suite keywords - [Custom Suite Setup Keyword]
class - <class 'robot.running.model.Keyword'>
name - Custom Suite Setup Keyword
id - s1-k1
parent(suite) - Test
timeout - None
type - setup
doc -
tags - []
children - []
==============================================================================
Test
==============================================================================
A test | PASS |
------------------------------------------------------------------------------
Test | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
I have managed to find the relevant part in Robot Framework's API documentation. What I am trying to achieve is not possible.
Visitors make it easy to modify test suite structures or to collect
information from them. They work both with the executable model and
the result model, but the objects passed to the visitor methods are
slightly different depending on the model they are used with. The main
differences are that on the execution side keywords do not have child
keywords nor messages, and that only the result objects have status
related attributes like status and starttime.
With Robot Framework 4.0 it is possible if the keyword used as the suite setup is implemented within the suite itself, i.e. if the suite owns the keyword. The documentation of the robot.running.model.TestSuite resource attribute says:
ResourceFile instance containing imports, variables and keywords the suite owns. When data is parsed from the file system, this data comes from the same test case file that creates the suite.
So the children keywords and their args can be found in suite.resource.keywords object list.
from robot.api import SuiteVisitor


class Visitor(SuiteVisitor):

    def start_suite(self, suite):
        for keyword in suite.resource.keywords:
            if suite.setup.name == keyword.name:
                for item in keyword.body:
                    print(f'{item.name} - {item.args} - {item.type}')
                if keyword.teardown.name:
                    print(f'{keyword.teardown.name} - {keyword.teardown.args} - {keyword.teardown.type}')
This prints:
Log - ('1st child',) - KEYWORD
Log Many - ('2nd', 'child') - KEYWORD
No Operation - () - KEYWORD
My Keyword Teardown - () - TEARDOWN
Again, this does not work if the user keyword is implemented in an imported resource file and not in the suite file itself.
I'm trying to create a test report in JUnit XML format while running pytest on Jenkins.
The default test suite name is "pytest", but I want to change the name depending on parameter values.
For example,
in conftest.py, I have
def pytest_addoption(parser):
    parser.addoption("--site", type=str.upper, action="append", default=[],
                     help="testing site")
And I want to change the junit_suite_name option depending on the site value.
I read the pytest documentation and found that you can change the name in the config file like this:
[pytest]
junit_suite_name = my_suite
or on the command line with -o junit_suite_name.
But this way, the name will always be the same for all test cases.
Is there a way to group the suite name conditionally?
Thanks
You can change ini options programmatically by setting or changing values in the config.inicfg dict. For example, do it in a custom pytest_configure hookimpl:
def pytest_configure(config):
    # --site is defined with action="append" and type=str.upper,
    # so config.option.site is a list of upper-cased values.
    if 'FOO' in config.option.site:
        config.inicfg['junit_suite_name'] = 'bar'
The suite name in the JUnit report will now be bar when you run pytest --site=foo.
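Since --site is an append option, you could also build the name from all the values that were passed; a small illustrative variation, not tested against your setup:

def pytest_configure(config):
    sites = config.option.site  # a list, because of action="append"
    if sites:
        config.inicfg['junit_suite_name'] = "_".join(sites)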
I have the following test scenario:
Check if project with a specific name was created
Edit this project
Verify that it was edited
Remove this project as part of a teardown procedure
Here is an example code to achieve that:
Scenario:
@fixture.remove_edited_project
@web
Scenario: Edit a project data
    Given Project was created with the following parameters
        | project_name       |
        | my_project_to_edit |
    When I edit the "my_project_to_edit" project
    Then Project is edited
Step that saves the data in a variable to be used in a teardown function (fixture):
@step('I edit the "{project_name}" project')
def step_impl(context, project_name):
    # steps related to editing the project
    # storing value in context variable to be used in fixture
    context.edited_project_name = project_name
and an example fixture function to remove the project after the scenario:
@fixture
def remove_edited_project(context):
    yield
    logging.info(f'Removing project: "{context.edited_project_name}"')
    # Part deleting the project with the name stored in context.edited_project_name
In this configuration everything works fine and the project is deleted by the fixture in any case (test failed or passed), which is alright.
But when I want to apply this at the feature level, meaning placing the @fixture.remove_edited_project tag before the Feature keyword:
@fixture.remove_edited_project
Feature: My project Edit feature
then it does not work.
I already know the reason: the context.edited_project_name variable is cleaned up after every scenario, so it is no longer available to the fixture function later.
Is there a good way to pass a parameter to a fixture at the feature level? Somehow globally?
I tried using global variables as an option, but that quickly became dirty and problematic in this framework.
Ideally it would be something like @fixture.edited_project_name('my_project_to_edit').
Because the context gets cleaned of variables created during execution of the scenario, you need a mechanism that persists through the feature. One way to do this is to create a dictionary or other container in the context during setup of the fixture, so that it persists through the feature. The scenarios can set attributes or add entries to the container, and because the dictionary was added during the feature, it will still exist when the fixture is cleaned up. E.g.,
@fixture
def remove_edited_project(context):
    context.my_fixture_properties = {}
    yield
    logging.info(f'Removing project: "{context.my_fixture_properties["edited_project_name"]}"')

@step('I edit the "{project_name}" project')
def step_impl(context, project_name):
    # steps related to editing the project
    # storing value in context variable to be used in fixture
    context.my_fixture_properties['edited_project_name'] = project_name
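For completeness, a minimal sketch of how the feature-level tag could be wired up in behave's environment.py; before_tag and use_fixture are standard behave hooks, but the fixtures module name below is just an assumption for the example:

# environment.py
from behave import use_fixture

from fixtures import remove_edited_project  # hypothetical module holding the fixture above


def before_tag(context, tag):
    # Runs for feature-level and scenario-level tags alike, so the same
    # @fixture.remove_edited_project tag works in both places.
    if tag == "fixture.remove_edited_project":
        use_fixture(remove_edited_project, context)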
I'm not sure if this is an IntelliJ thing or not (I'm using the built-in test runner), but I have a class whose logging output I'd like to appear in the test case that I am running. I hope the example code is enough scope; if not, I can edit to include more.
Basically, the log.info() call in the Matching class never shows up in my test runner console when running. Is there something I need to configure on the class that extends TestCase?
Here's the class in matching.py:
class Matching(object):
    """
    The main compliance matching logic.
    """

    request_data = None

    def __init__(self, matching_request):
        """
        Set matching request information.
        """
        self.request_data = matching_request

    def can_matching_run(self):
        raise Exception("Not implemented yet.")

    def run_matching(self):
        log.info("Matching started at {0}".format(datetime.now()))
Here is the test:
class MatchingServiceTest(IntegrationTestBase):

    def __do_matching(self, client_name, date_range):
        """
        Pull control records from the control table, and compare against program-generated
        matching data from the non-control table.

        The ``client_name`` dictates which model to use. Data is compared within
        a mock ``date_range``.
        """
        from matching import Matching, MatchingRequest

        # Run the actual matching service for client.
        match_request = MatchingRequest(client_name, date_range)
        matcher = Matching(match_request)
        matcher.run_matching()
Well, I do not see where you initialize the log object, but I presume you do it somewhere and add a Handler to it (StreamHandler, FileHandler, etc.).
This means that during your tests this does not occur, so you would have to do it in the test. Since you did not post that part of the code, I can't give an exact solution:
import logging
log = logging.getLogger("your-logger-name")
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)
That said, tests should generally not print anything to stdout. It's best to use a FileHandler, and you should design your tests so that they fail if something goes wrong; that's the whole point of automated tests, so you won't have to manually inspect the output. If they fail, you can then check the log to see if it contains useful debugging information.
Hope this helps.
Read more on logging here.
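If it helps, a minimal sketch of where that setup could live in the test itself; the logger name "matching" is an assumption based on matching.py using logging.getLogger(__name__), so adjust it to whatever name the module actually uses:

import logging
import unittest


class MatchingServiceTest(unittest.TestCase):  # stand-in for IntegrationTestBase

    @classmethod
    def setUpClass(cls):
        # Attach a console handler to the assumed "matching" logger so its
        # output shows up in the test runner console.
        logger = logging.getLogger("matching")
        logger.setLevel(logging.INFO)
        logger.addHandler(logging.StreamHandler())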