pytest-html add custom test results to report - python

I'm using pytest-html to generate my HTML report. My tests record measured values, and I need to show a table of these values with PrettyTable when a test passes.
I think I need to implement this hook:
@pytest.mark.optionalhook
def pytest_html_results_table_html(report, data):
    if report.passed:
        del data[:]
        data.append(my_pretty_table_string)
        # or data.append(report.extra.text)
But how do I pass my_pretty_table_string to the hook function, or how do I edit report.extra.text from my test function?
Thank you for your help

Where is your my_pretty_table_string stored and generated?
Please add more details to your question so we can help :)
pytest_html_results_table_html gets its report object from the pytest_runtest_makereport hook, so you need to implement pytest_runtest_makereport to attach your data to the result at the 'call' step.
@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    outcome = yield  # get the makereport outcome
    if call.when == 'call':
        outcome.get_result().my_data = <your data object goes here>
Then you can access report.my_data in the pytest_html_results_table_html hook.
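Putting it together, a minimal conftest.py sketch (the my_data attribute and my_pretty_table_string are illustrative names, not pytest-html API):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if call.when == 'call':
        # copy whatever the test attached to its item onto the report
        outcome.get_result().my_data = getattr(item, 'my_data', None)

@pytest.mark.optionalhook
def pytest_html_results_table_html(report, data):
    if report.passed and getattr(report, 'my_data', None):
        del data[:]
        data.append(report.my_data)

Inside the test, attach the string to the item via the built-in request fixture:

def test_values(request):
    request.node.my_data = my_pretty_table_string  # hypothetical string built by the test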
Alex

Related

How to parametrize prepared data

I need to prepare an entity with suitable child entities. I need to specify the type of entities that will be stored in the prepared parent entity. Like this:
@pytest.fixture(element_types)
def entry_type_id(element_types):
    elements = [resolve_elements_create(data=element_data(element_type)) for element_type in element_types]
    entry_type_id = resolve_entry_type_create(elements)
    return entry_type_id

def test_something(entry_type_id([ElementType1, ElementType2])):
    ...
I can't create one fixture for each use case, because there are too many combinations I need. Is there any way I can pass parameters to the fixture to customize the prepared entity?
I don't fully understand what your end goal is, but judging by your comment I think you should create a test class so you can create the elements and then delete them, since you want to test the creation + deletion of the entries:
#pytest.fixture(scope="class")
def entry_type(request)
element = resolve_elements_create(data=element_data(request.param))
# This should return 0 if Error during creation
return resolve_entry_type_create(element)
followed by the test itself:
#pytest.mark.parametrize("entry_type", [ElementType1, ElementType2], indirect=True)
class TestEntries:
def test_create_entry(entry_type):
assert entry_type
def test_delete_entry(entry_type):
assert delete_entry(entry_type)
This is more pseudocode than a finished solution and will need some changes, but in most cases the use of fixtures should be preferred over plain functions in pytest.

How to design a logging event in python in an efficient way, rather than simply adding the events inside a function?

I want to implement server-side event analytics (using https://segment.com/). I am clear on using the API: we have to add the event API calls inside any function whose action we need to monitor. For example, for creating a new user account in my application, I will have an event inside the function create_user:
def create_user(email_id, name, id):
    # some code to add the user to my table
    ....
    ....
    # calling the segment api to track the event
    analytics.track(user_id=email_id, event="Add User", properties={"username": name})
The above code works, but design-wise I feel it can be done in a better way: create_user should only be responsible for adding the user, yet here it also contains the tracking call, and I would have to do the same in every area I need to monitor, which fills my code with irrelevant logic. I read about decorators, but my analytics event depends on the logic inside the function (for example, only if the user email is valid do I add the user to the DB and trigger the event), so that doesn't seem to help.
So I am seeking help with handling this scenario in a better way while keeping the code clean. Is there a design approach for solving this case?
We can achieve this using a decorator plus one separate function, as shown below. With this code you call the confirm_logging function from your conditional logic when you actually want to persist the data, while the inputs to the user function are logged temporarily on every call.
temp_log_data = []

def confirm_logging():
    '''
    Final logging function: once called from the main function it
    logs the data as needed. Customize it to log however you need.
    '''
    print("Finally logging the data", temp_log_data)
    # Could be extended to write to a DB instead.
    temp_log_data.clear()
    return

def logging_func(func):
    '''
    Decorator that temporarily logs every call into temp_log_data.
    The temporary logging mechanism below can be customized as required.
    '''
    def wrapper_function(*args, **kwargs):
        # The print statement below can be customized to your requirements;
        # you can also call any other function instead of print and use the args.
        temp_log_data.append([args[0], args[1], args[2]])
        print("Temporary logging data here -", (args[0], args[1], args[2]))
        func(*args, **kwargs)
    return wrapper_function

@logging_func
def create_user(greet, name, surname):
    '''
    Your main function, specific to the core functionality.
    '''
    print("{} {} {}".format(greet, name, surname))
    if name == 'Abhi':
        confirm_logging()
    return

create_user('Welcome', 'Abhi', 'Jain')
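Running the snippet as written, the decorator wraps create_user at definition time and the final call should print, in order:

Temporary logging data here - ('Welcome', 'Abhi', 'Jain')
Welcome Abhi Jain
Finally logging the data [['Welcome', 'Abhi', 'Jain']]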

How to build a wrapper pytest plugin?

I want to wrap the pytest-html plugin in the following way:
Add an option X
Given the option X, delete data from the report
I was able to add the option by implementing the pytest_addoption(parser) function, but got stuck on the second part...
What I was able to do is implement a hook from pytest-html. However, I have to access my option X in order to decide what to do. The problem is, pytest-html's hooks do not receive the request object as a parameter, so I can't access the option value...
Can I pass additional args to a hook, or something like that?
You can attach additional data to the report object, for example via a custom wrapper around the pytest_runtest_makereport hook:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    report.config = item.config
Now the config object will be accessible via report.config in all reporting hooks, including the ones of pytest-html:
def pytest_html_report_title(report):
    """Called before adding the title to the report."""
    assert report.config is not None
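To tie this back to your option X, a minimal sketch of using the attached config inside a pytest-html hook (the option name --strip-passed is an assumption; substitute your own option):

def pytest_addoption(parser):
    parser.addoption('--strip-passed', action='store_true',
                     help='remove extra report HTML for passed tests')

def pytest_html_results_table_html(report, data):
    # report.config was attached by the makereport wrapper above
    if report.config.getoption('--strip-passed') and report.passed:
        del data[:]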

python pytest for testing requests and responses

I am a beginner with pytest in Python and am trying to write test cases for the following method, which returns the user's address when a correct id is passed and otherwise raises the custom error BadId.
import requests

def get_user_info(id: str, host='127.0.0.1', port=3000) -> str:
    uri = 'http://{}:{}/users/{}'.format(host, port, id)
    result = requests.get(uri).json()
    address = result.get('user', {}).get('address', None)
    if address:
        return address
    else:
        raise BadId
Can someone help me with this, and can you also suggest the best resources for learning pytest? TIA
Your test regimen might look something like this.
First I suggest creating a fixture to be used in your various method tests. The fixture sets up an instance of your class to be used in your tests rather than creating the instance in the test itself. Keeping tasks separated in this way helps to make your tests both more robust and easier to read.
from my_package import MyClass
import pytest

@pytest.fixture
def a_test_object():
    return MyClass()
You can pass the test object to your series of method tests:
def test_something(a_test_object):
    # do the test
    ...
However if your test object requires some resources during setup (such as a connection, a database, a file, etc etc), you can mock it instead to avoid setting up the resources for the test. See this talk for some helpful info on how to do that.
By the way: if you need to test several different states of the user defined object being created in your fixture, you'll need to parametrize your fixture. This is a bit of a complicated topic, but the documentation explains fixture parametrization very clearly.
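For example, a minimal parametrized-fixture sketch (the state values and the MyClass constructor argument are illustrative assumptions):

import pytest
from my_package import MyClass

@pytest.fixture(params=['state_a', 'state_b'])
def a_test_object(request):
    # each test using this fixture runs once per param value
    return MyClass(state=request.param)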
The other thing you need to do is make sure any .get calls to requests are intercepted. This is important because it allows your tests to run without an internet connection, and ensures they do not fail as a result of a bad connection, which is not the thing you are trying to test.
You can intercept requests.get by using the monkeypatch feature of pytest. All that is required is to include monkeypatch as an input parameter in the relevant test functions.
You can employ another fixture to accomplish this. It might look like this:
import requests
import pytest

@pytest.fixture
def patched_requests(monkeypatch):
    # store a reference to the old get method
    old_get = requests.get

    def mocked_get(uri, *args, **kwargs):
        '''A function replacing requests.get
        Returns either a mocked response object (with a json method)
        or the default response object if the uri doesn't match
        one of those that have been supplied.
        '''
        _, id = uri.split('/users/', 1)
        try:
            # attempt to get the correct mocked json method
            json = dict(
                with_address1=lambda: {'user': {'address': 123}},
                with_address2=lambda: {'user': {'address': 456}},
                no_address=lambda: {'user': {}},
                no_user=lambda: {},
            )[id]
        except KeyError:
            # fall back to default behavior
            obj = old_get(uri, *args, **kwargs)
        else:
            # create a mocked requests response object
            mock = type('MockedReq', (), {})()
            # assign the mocked json method to the response object
            mock.json = json
            obj = mock
        return obj

    # finally, patch requests.get with the patched version
    monkeypatch.setattr(requests, 'get', mocked_get)
This looks complicated until you understand what is happening: we have simply made some mocked JSON objects (represented by dictionaries) with predetermined user ids and addresses. The patched version of requests.get simply returns an object (of type MockedReq) with the corresponding mocked .json() method when a known id is requested.
Note that requests will only be patched in tests that actually use the above fixture, e.g.:
def test_something(patched_requests):
    # use patched requests.get
    ...
Any test that does not use patched_requests as an input parameter will not use the patched version.
Also note that you could monkeypatch requests within the test itself, but I suggest doing it separately. If you are using other parts of the requests API, you may need to monkeypatch those as well. Keeping all of this stuff separate is often easier to understand than including it within your test.
Write your various method tests next. You'll need a different test for each aspect of your method. In other words, you will usually write a different test for the instance in which your method succeeds, and another one for testing when it fails.
First we test method success with a couple test cases.
@pytest.mark.parametrize('id, result', [
    ('with_address1', 123),
    ('with_address2', 456),
])
def test_get_user_info_success(patched_requests, a_test_object, id, result):
    address = a_test_object.get_user_info(id)
    assert address == result
Next we can test for raising the BadId exception using the with pytest.raises feature. Note that since an exception is raised, there is no result input parameter for the test function.
@pytest.mark.parametrize('id', [
    'no_address',
    'no_user',
])
def test_get_user_info_failure(patched_requests, a_test_object, id):
    from my_package import BadId
    with pytest.raises(BadId):
        address = a_test_object.get_user_info(id)
As posted in my comment, here are also some additional resources to help you learn more about pytest:
link
link
Also be sure to check out Brian Okken's book and Bruno Oliveira's book. They are both very helpful for learning pytest.

Mark test as skipped from pytest_collection_modifyitems

How can I mark a test as skipped in pytest collection process?
What I'm trying to do is have pytest collect all tests and then using the pytest_collection_modifyitems hook mark a certain test as skipped according to a condition I get from a database.
I found a solution which I don't like; I was wondering if there is a better way.
def pytest_collection_modifyitems(items, config):
    ...  # get skip condition from database
    for item in items:
        if skip_condition == True:
            item._request.applymarker(pytest.mark.skipif(True, reason='Put any reason here'))
The problem with this solution is that I'm accessing a protected member (_request) of the class.
You were almost there. You just need item.add_marker:
def pytest_collection_modifyitems(config, items):
    skip = pytest.mark.skip(reason="Skipping this because ...")
    for item in items:
        if skip_condition:  # NB: you don't need the == True
            item.add_marker(skip)
Note that item has an iterable attribute keywords which contains its markers. So you can use that too.
See pytest documentation on this topic.
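For example, a sketch that skips based on a marker name found in item.keywords (the 'slow' marker name is illustrative):

import pytest

def pytest_collection_modifyitems(config, items):
    skip = pytest.mark.skip(reason='skipping slow tests')
    for item in items:
        if 'slow' in item.keywords:
            item.add_marker(skip)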
Alternatively, you can skip test cases using a common fixture. With autouse=True you don't have to pass it to each test case as a parameter:
@pytest.fixture(scope='function', autouse=True)
def my_common_fixture(request):
    if True:  # replace with your real skip condition
        pytest.skip('Put any reason here')
