I am getting the following error while running pytest with the code below, and I am unable to figure out what is wrong. Please see the code snippets below.
Console output:
PS C:\Bhargav\Python Learning\Projects\MBDailyBoost> pytest
================================================================================= test session starts =================================================================================
platform win32 -- Python 3.10.2, pytest-7.1.2, pluggy-1.0.0
rootdir: C:\Bhargav\Python Learning\Projects\MBDailyBoost
plugins: allure-pytest-2.9.45, html-3.1.1, metadata-2.0.2
collected 2 items
E fixture 'setup' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, extra, include_metadata_in_junit_xml, metadata, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
C:\Bhargav\Python Learning\Projects\MBDailyBoost\testCases\test_login.py:22
=============================================================================== short test summary info ===============================================================================
ERROR testCases/test_login.py::Test_001_Login::test_homePageTitle
ERROR testCases/test_login.py::Test_001_Login::test_Login
My conftest.py file contains the following code:
import pytest
from selenium import webdriver

@pytest.fixture(scope="class")
def setup():
    global driver
    # driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
    driver = webdriver.Chrome()
    driver.maximize_window()
    return driver
My test_login.py file contains the following code:
import time
from pageObjects.LoginPageGMB import LoginPage

class Test_001_Login:
    baseURL = ""
    username = ""
    password = ""

    def test_homePageTitle(self, setup):
        self.driver = setup
        self.driver.get(self.baseURL)
        self.driver.maximize_window()
        actual_title = self.driver.title
        if actual_title == "Google Business Profile – Get Listed on Google":
            assert True
        else:
            assert False
Keep both code snippets in the same module.
Declare all your fixtures inside a conftest.py placed in your tests folder.
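For reference, a minimal working version of the fixture might look like this (a sketch based on the snippets above, assuming the decorator is actually applied and that conftest.py sits in the testCases folder next to test_login.py):

# testCases/conftest.py -- sketch, not the original file
import pytest
from selenium import webdriver

@pytest.fixture(scope="class")
def setup():
    driver = webdriver.Chrome()
    driver.maximize_window()
    yield driver
    driver.quit()

Any test under testCases/ can then request setup as an argument, and pytest will inject the driver automatically.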
I started learning Playwright and, following the documentation (https://playwright.dev/python/docs/intro), tried to run the following sample code:
import re
from playwright.sync_api import Page, expect

def test_pp(page: Page):
    page.goto("https://playwright.dev/")
    # Expect a title "to contain" a substring.
    expect(page).to_have_title(re.compile("Playwright"))
    # create a locator
    get_started = page.locator("text=Get Started")
    # Expect an attribute "to be strictly equal" to the value.
    expect(get_started).to_have_attribute("href", "/docs/intro")
    # Click the get started link.
    get_started.click()
    # Expects the URL to contain intro.
    expect(page).to_have_url(re.compile(".*intro"))
Running it with the pytest command throws the following error:
file D:\play_wright\test\test_sample.py, line 5
def test_pp(page: Page):
E fixture 'page' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
D:\play_wright\test\test_sample.py:5
The pytest-playwright module (0.3.0) is installed; confirmed with the pip list command.
I restarted the Windows machine.
playwright version - 1.27.1
pytest - 6.2.1 (tried with 7.* as well)
Still getting the issue. Please help.
I had the same issue. I ran this command, then restarted my command line, and it started working.
playwright install-deps
I hope this helps.
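If that alone does not help, it may also be worth confirming that the plugin is registered in the environment pytest actually runs from (these commands are a suggestion, not part of the original answer):

pip show pytest-playwright
pytest --fixtures

Once pytest-playwright is active, page should appear in the list that pytest --fixtures prints.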
Try the following:
py.test test_.py
I'm attempting to write a test fixture based on randomly generated data. This randomly generated data needs to be able to accept a seed so that we can generate the same data on two different computers at the same time.
I'm using pytest's parser.addoption (I think it's a fixture) to add this ability.
My core issue is that I'd like to be able to parameterize a randomly generated list that uses a fixture as an argument.
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(10))

@fixture(scope="session")
def seed(pytestconfig):
    return pytestconfig.getoption("seed")

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize(group_item=test_context["group_items"])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This leads to the following result:
% pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{Home}/tests, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:12: in <module>
???
E TypeError: 'function' object is not subscriptable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not subscriptable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.18s ============================================================================================================================================================================
I think I might be able to work around this issue by making an empty test that leaks the test_context up to the global scope, but that feels really brittle. I'm looking for another method that still lets me:
Use the seed fixture to generate data
Generate one test per element in the generated list
Not depend on the order in which the tests are run
Edit
Here's an example of this not working with straight pytest:
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture
def seed():
    return 1

@fixture
def test_context(seed):
    return [seed, 'a', 'test', 'list']

@pytest.fixture(params=test_context)
def example_fixture(request):
    return request.param

def test_reconciliation(example_fixture) -> None:
    print(example_fixture)
    assert False
pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{HOME}/tests/integration, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:14: in <module>
???
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1327: in fixture
fixture_marker = FixtureFunctionMarker(
<attrs generated init _pytest.fixtures.FixtureFunctionMarker>:5: in __init__
_inst_dict['params'] = __attr_converter_params(params)
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1159: in _params_converter
return tuple(params) if params is not None else None
E TypeError: 'function' object is not iterable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not iterable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.23s ======================================================================================================================================================================
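For context, the plain-pytest mechanism usually suggested for this kind of parametrization is the pytest_generate_tests hook, which runs at collection time and can read command-line options directly. A minimal sketch (illustrative only, not from the original post; the generated data is a stand-in):

# conftest.py -- sketch
import random

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", type=int, default=0)

def pytest_generate_tests(metafunc):
    # Parametrize every test that declares a "group_item" argument.
    if "group_item" in metafunc.fixturenames:
        rng = random.Random(metafunc.config.getoption("seed"))
        items = [rng.randint(0, 100) for _ in range(3)]  # stand-in for the real data
        metafunc.parametrize("group_item", items)

# test_example.py -- sketch
def test_reconciliation(group_item):
    assert group_item >= 0

This generates one test per element, derives the data from the --seed option, and does not depend on the order in which the tests run.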
I tried your code with a test file and a conftest.py:
conftest.py
import pytest
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    # If you add a breakpoint() here it'll never be hit.
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # This line throws an exception since seed was never added.
    return pytestconfig.getoption("seed")
myso_test.py
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert True
Test Run:
PS C:\Users\AB45365\PycharmProjects\SO> pytest .\myso_test.py -s -v --seed=10
============================================================== test session starts ==============================================================
platform win32 -- Python 3.9.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- c:\users\ab45365\appdata\local\programs\python\python39\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\AB45365\PycharmProjects\SO
plugins: cases-3.6.11, lazy-fixture-0.6.3
collected 1 item
myso_test.py::test_example[group_item-test_context] PASSED
To complement Devang Sanghani's answer: as of pytest 7.1, pytest_addoption is a pytest plugin hook. So, as with all other plugin hooks, it can only be present in plugin files or in conftest.py.
See the note in https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_addoption :
This function should be implemented only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.
This issue is therefore not related to pytest-cases.
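In other words, with a layout like the one below (file names are only illustrative), the hook has to live in the top-level conftest.py rather than in the test module:

tests/
|── conftest.py     (pytest_addoption and session-scoped fixtures)
|── myso_test.py    (tests and case functions)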
After doing some more digging, I ran into this documentation around pytest-cases:
from secrets import randbelow
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # return pytestconfig.getoption("seed")
    return 1

@pytest.fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This unfortunately ran me into a new problem: it looks like pytest-cases doesn't currently call pytest_addoption during the fixture execution step. I created this ticket to cover that case, but this does effectively solve my original question, even if it has a caveat.
As part of my practice I am trying to build my own hybrid test framework. Here is a link to the GitHub repository of my framework.
Its structure is:
root/
|── Base/
|── Locators/
|── Pages/
|── Tests/
|── conftest.py
|── pytest.ini
I would like to make it possible to pass configuration from the terminal command or from the conftest.py file.
But each time I run the command:
pytest search_test.py --browser chrome --server server_ip/wd/hub
I get this error:
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --browser chrome --server server_ip/wd/hub/wd/hub
inifile: path to inifile
rootdir: path to root directory
Here is the content of my conftest.py file:
import pytest
import configparser
from selenium import webdriver

def environment_options(parser):
    parser.addoption('--browser', '-B', dest='BROWSER', choices=['chrome', 'chr', 'firefox', 'ff'],
                     help=f"possible values are: {['chrome', 'chr', 'firefox', 'ff']}")
    parser.addoption('--server', '-S', dest="SERVER")

@pytest.fixture(scope='class')
def environment_configuration(request):
    read_config = configparser.ConfigParser()
    # Check whether the browser was given on the command line or in the config
    # file's Environments section, and assign it to browser_name.
    browser_name = request.config.getoption(
        "BROWSER") or read_config.get("Environments", "browser")
    # Check whether the remote server was given on the command line or in the
    # config file's Environments section, and assign it to remote_server.
    remote_server = request.config.getoption(
        "SERVER") or read_config.get("Environments", "remote_server")
    try:
        request.cls.driver = webdriver.Remote(
            command_executor=remote_server,
            desired_capabilities={
                "browserName": browser_name})
    except BaseException:
        print("check browser or remote server configs")
    yield request.cls.driver
    request.cls.driver.close()
    request.cls.driver.quit()
You need to change the name of the function from environment_options to pytest_addoption, and everything starts to work :-)
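For illustration, the renamed hook could look roughly like this (a sketch that keeps the option names from the question; the defaults and the simplified fixture are assumptions, and the config-file fallback is omitted):

# conftest.py -- sketch
import pytest

def pytest_addoption(parser):
    # pytest discovers this hook by its exact name, so it must be called
    # pytest_addoption and live in conftest.py or a plugin.
    parser.addoption('--browser', '-B', dest='BROWSER', default='chrome',
                     choices=['chrome', 'chr', 'firefox', 'ff'])
    parser.addoption('--server', '-S', dest='SERVER', default='')

@pytest.fixture(scope='class')
def environment(request):
    # The WebDriver setup from the question would go here; this just
    # returns the two resolved option values.
    return request.config.getoption('BROWSER'), request.config.getoption('SERVER')

With that in place, pytest search_test.py --browser chrome --server server_ip/wd/hub is accepted instead of failing with "unrecognized arguments".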
I have a problem with setting the report name and folder dynamically in Python's pytest.
For example: if I run all pytest tests at 2020-03-06 21:50, I'd like to have my report stored in a folder 20200306 with the name report_2150.html. I want it to be automated and triggered right after the tests are finished.
I'm working in VS Code and want to share my work with colleagues who have no automation experience, so I'm aiming for a "click test to start" workflow.
My project structure:
webtools/
|── .vscode/
|──── settings.json
|── drivers/
|── pages/
|── reports/
|── tests/
|──── __init__.py
|──── config.json
|──── conftest.py
|──── test_1.py
|──── test_2.py
|── setup.py
Code samples:
settings.json
{
    "python.linting.pylintEnabled": false,
    "python.linting.flake8Enabled": true,
    "python.linting.enabled": true,
    "python.pythonPath": "C:\\Users\\user\\envs\\webtools\\Scripts\\python.exe",
    "python.testing.pytestArgs": [
        "tests",
        "--self-contained-html",
        "--html=./reports/tmp_report.html"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.testing.unittestArgs": [
        "-v",
        "-s",
        "./tests",
        "-p",
        "test_*.py"
    ]
}
config.json
{
    "browser": "chrome",
    "wait_time": 10
}
conftest.py
import json
import pytest
from datetime import datetime
import time
import shutil
import os
from selenium import webdriver
from selenium.webdriver import Chrome

CONFIG_PATH = 'tests/config.json'
DEFAULT_WAIT_TIME = 10
SUPPORTED_BROWSERS = ['chrome', 'explorer']

@pytest.fixture(scope='session')
def config():
    # Read the JSON config file and return it as a parsed dict
    with open(CONFIG_PATH) as config_file:
        data = json.load(config_file)
    return data

@pytest.fixture(scope='session')
def config_browser(config):
    # Validate and return the browser choice from the config data
    if 'browser' not in config:
        raise Exception('The config file does not contain "browser"')
    elif config['browser'] not in SUPPORTED_BROWSERS:
        raise Exception(f'"{config["browser"]}" is not a supported browser')
    return config['browser']

@pytest.fixture(scope='session')
def config_wait_time(config):
    # Validate and return the wait time from the config data
    return config['wait_time'] if 'wait_time' in config else DEFAULT_WAIT_TIME

@pytest.fixture
def browser(config_browser, config_wait_time):
    # Initialize WebDriver
    if config_browser == 'chrome':
        driver = webdriver.Chrome(r"./drivers/chromedriver.exe")
    elif config_browser == 'explorer':
        driver = webdriver.Ie(r"./drivers/IEDriverServer.exe")
    else:
        raise Exception(f'"{config_browser}" is not a supported browser')
    # Wait implicitly for elements to be ready before attempting interactions
    driver.implicitly_wait(config_wait_time)
    # Maximize window for test
    driver.maximize_window()
    # Return the driver object at the end of setup
    yield driver
    # For cleanup, quit the driver
    driver.quit()

@pytest.fixture(scope='session')
def cleanup_report():
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    os.chdir("./reports")
    os.mkdir(timestamp)
    yield
    shutil.move("./tmp_report.html", "./%s/test_report.html" % timestamp)
In the current situation the report is created as tmp_report.html in the reports folder, but I don't know how I can force cleanup_report() to run after all tests are completed and tmp_report.html is present and complete in the folder. To check whether it is complete, I assume I'd have to verify that all HTML tags are closed (or at least the <html> one).
Can somebody help me with that? If you need further code portions I'll provide them as soon as possible.
Thank you in advance!
You can customize the plugin options in a custom impl of the pytest_configure hook. Put this example code in a conftest.py file in your project root dir:
from datetime import datetime
from pathlib import Path
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    # set custom options only if none are provided from command line
    if not config.option.htmlpath:
        now = datetime.now()
        # create report target dir
        reports_dir = Path('reports', now.strftime('%Y%m%d'))
        reports_dir.mkdir(parents=True, exist_ok=True)
        # custom report file
        report = reports_dir / f"report_{now.strftime('%H%M')}.html"
        # adjust plugin options
        config.option.htmlpath = report
        config.option.self_contained_html = True
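With this in place (and pytest-html installed), a run started on 2020-03-06 at 21:50 should end up as reports/20200306/report_2150.html without any extra command-line flags.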
If you want to completely ignore what's passed from the command line, remove the if not config.option.htmlpath: condition.
If you want to stick with your current impl, note that at fixture teardown time pytest-html hasn't written the report yet. Move the code from cleanup_report to a custom impl of the pytest_sessionfinish hook to ensure pytest-html has already written the default report file:
@pytest.hookimpl(trylast=True)
def pytest_sessionfinish(session, exitstatus):
    shutil.move(...)
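A fuller version of that hook might look like this (a sketch, assuming the temporary report is written to reports/tmp_report.html as configured in settings.json above):

import shutil
from datetime import datetime
from pathlib import Path
import pytest

@pytest.hookimpl(trylast=True)
def pytest_sessionfinish(session, exitstatus):
    now = datetime.now()
    target_dir = Path('reports', now.strftime('%Y%m%d'))
    target_dir.mkdir(parents=True, exist_ok=True)
    # By this point pytest-html has finished writing the default report.
    shutil.move('reports/tmp_report.html',
                str(target_dir / f"report_{now.strftime('%H%M')}.html"))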
I'm getting Unresolved import: HTMLTestRunner.
My code:
import random
import unittest
import HTMLTestRunner

class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        self.seq = range(10)

    def test_shuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, range(10))
        # should raise an exception for an immutable sequence
        self.assertRaises(TypeError, random.shuffle, (1, 2, 3))

    @unittest.skip("Test Skipped1")
    def test_choicep(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)

    @unittest.skip("Test Skipped2")
    def test_samplep(self):
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)

suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)
outfile = open("/Users/bhanusaa/Downloads/screenshots/", "w")
runner = HTMLTestRunner.HTMLTestRunner(
    stream=outfile, title='Test Report',
    description='This demonstrates the report output by Prasanna.Yelsangikar.')
runner.run(suite)
I downloaded HTMLTestRunner from http://tungwaiyip.info/software/HTMLTestRunner.html and saved the HTMLTestRunner.py file in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages, but I'm still getting the error.
FYI, I have configured all the required settings in Eclipse and am able to launch a Selenium WebDriver script successfully, but when I try to import HTMLTestRunner I get the unresolved error.
System info:
Python 2.7
PyDev 2.2
OS: Mac
HTMLTestRunner seems to have issues when installed using pip. As a workaround, follow these steps:
1) Get HTMLTestRunner from https://raw.githubusercontent.com/tungwaiyip/HTMLTestRunner/master/HTMLTestRunner.py
2) Save HTMLTestRunner.py at C:\Python27\Lib
3) Run the saved HTMLTestRunner.py once from the command prompt:
cd c:\Python27\Lib
python HTMLTestRunner.py
(Check whether the .pyc was created; execution and navigation to the .py file may vary depending on the OS you are using, needless to say.)
4) Make sure the environment variables are set correctly.
To confirm the installation was successful, start a Python shell and import the module:
python
import HTMLTestRunner
(press the return key)
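An additional quick check (a suggestion, not part of the original answer) is to ask the interpreter that Eclipse/PyDev uses where it finds the module:

import HTMLTestRunner
print(HTMLTestRunner.__file__)  # path of the imported module, if it was found

If this fails from the command line as well, the file is not on that interpreter's sys.path; if it works on the command line but not in Eclipse, the PyDev interpreter configuration needs to be refreshed so it re-scans site-packages.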