I am trying to use the pytester fixture to create some files in the test directory, but I get the following error:
def test_something(pytester):
E fixture 'pytester' not found
> available fixtures: _configure_application, _monkeypatch_response_class, _push_request_context, accept_any, accept_json, accept_jsonp, accept_mimetype, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, client, client_class, config, cov, doctest_namespace, live_server, mock_sparql_endpoint, mocker, module_mocker, monkeypatch, no_cover, package_mocker, pytestconfig, quotation_access_db_columns, quotation_access_db_path, record_property, record_testsuite_property, record_xml_attribute, recwarn, request_ctx, session_mocker, sparql_query1, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
The code I am trying to run to create some Python files is below, taken from the pytester documentation:
def test_something(pytester):
    # Initial file is created test_something.py.
    pytester.makepyfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.makepyfile(custom="foobar")
    # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.
How can I use the pytester fixture to create some files in the test directory?
The link for the pytester documentation:
https://docs.pytest.org/en/6.2.x/reference.html#pytest.Pytester.path
Try adding this line in conftest.py (or in your test file) as documented.
pytest_plugins = "pytester"
Or if you already have any existing one, just make it a list:
pytest_plugins = [
...
"pytester",
]
Also, make sure that your version of pytest is >= 6.2 as it is the requirement based on the docs.
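For reference, a minimal end-to-end sketch, assuming pytest >= 6.2 and the plugin enabled as above (the file names are just examples):

# conftest.py
pytest_plugins = "pytester"

# test_something.py
def test_something(pytester):
    pytester.makepyfile("foobar")           # creates test_something.py
    pytester.makepyfile(custom="foobar")    # creates custom.py
    assert (pytester.path / "test_something.py").exists()
    assert (pytester.path / "custom.py").exists()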
I started learning Playwright and, from the documentation (https://playwright.dev/python/docs/intro), tried to run the following sample code:
import re
from playwright.sync_api import Page, expect

def test_pp(page: Page):
    page.goto("https://playwright.dev/")
    # Expect a title "to contain" a substring.
    expect(page).to_have_title(re.compile("Playwright"))
    # create a locator
    get_started = page.locator("text=Get Started")
    # Expect an attribute "to be strictly equal" to the value.
    expect(get_started).to_have_attribute("href", "/docs/intro")
    # Click the get started link.
    get_started.click()
    # Expects the URL to contain intro.
    expect(page).to_have_url(re.compile(".*intro"))
While running it with the pytest command, it throws the following error:
file D:\play_wright\test\test_sample.py, line 5
def test_pp(page: Page):
E fixture 'page' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
D:\play_wright\test\test_sample.py:5
The pytest-playwright module (0.3.0) is installed, confirmed with the pip list command.
Restarted the Windows machine.
playwright version - 1.27.1
pytest - 6.2.1 (tried with 7.* as well)
Still getting the issue. Please help.
I had the same issue as well. I ran this command, then restarted my command line and it started working.
playwright install-deps
I hope this helps.
Try the following:
py.test test_.py
I am writing an API in Flask, and at some point I send an email to users who register. I store the variables for this email service in a .env file. Now I want to test a piece of code where I use these variables, but I have no idea how to load them from the .env file.
I tried basically all the answers here https://rb.gy/0nro1a, monkeypatching setenv as shown here https://rb.gy/kd07wa, plus other tips here and there. Each failed at some point. I also tried using pytest-dotenv, pytest-env, pytest.ini, etc., but nothing really worked as expected, and it is all pretty confusing to me.
My pytest fixture looks like this:
@pytest.fixture(autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    app.config["JWT_SECRET_KEY"] = "testing"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()

    # do testing
    yield testing_client

    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am wondering why I can't just load the .env file with a line like load_dotenv(path/to/.env) somewhere in the setup of the fixture and be done with it.
Can someone explain to me, as a newbie, how to read the .env variables in a simple, straightforward way that works with pytest?
The only way that actually works for me is to pass the environment variables on the command line as I run the tests.
FROM_EMAIL="some#email.com" MAILGUN_DOMAIN="sandbox6420919ab29b4228sdfda9d43ff37f7689072.mailgun.org" MAILGUN_API_KEY="245d6d0asldlasdkjfc380fba7fbskfsj1ad3125649esadbf2-7cd1ac2b-47fb3ac2" pytest tests
But this is a terrible way, and I don't want to write all these vars on the command line every time I run the tests.
I just want to write pytest tests; the .env file should be loaded automatically somewhere, I believe. But where and how?
Any help appreciated.
If you install python-dotenv, you can use that to load the variables from the .env file. Here is a minimal example:
.env
SQLALCHEMY_DATABASE_URI="sqlite:///"
JWT_SECRET_KEY="testing"
test.py
import os

import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    load_dotenv()

@pytest.fixture(autouse=True)
def test_client_db():
    print(f"\nSQLALCHEMY_DATABASE_URI"
          f"={os.environ.get('SQLALCHEMY_DATABASE_URI')}")
    print(f"JWT_SECRET_KEY={os.environ.get('JWT_SECRET_KEY')}")

def test():
    pass
python -m pytest -s test.py gives:
============================================ test session starts ============================================
...
collected 1 item
test.py
SQLALCHEMY_DATABASE_URI=sqlite:///
JWT_SECRET_KEY=testing
.
============================================= 1 passed in 0.27s =============================================
I.e., the environment variables are set throughout the test session and can be used to configure your app. Note that I didn't provide a path in load_dotenv() because I put the .env file in the same directory as the test; in your code, you probably have to add the path (load_dotenv(dotenv_path=your_path)).
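If the .env file is not next to the tests, one possible sketch is to build the path from __file__; the relative location below is an assumption about your layout, so adjust it as needed.

import pathlib

import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    # assumption: the .env file sits one directory above the tests folder
    env_path = pathlib.Path(__file__).parent.parent / ".env"
    load_dotenv(dotenv_path=env_path)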
You can use the monkeypatch fixture provided by pytest to manipulate env variables:
@pytest.fixture(scope="function")
def configured_env(monkeypatch):
    monkeypatch.setenv("SQLALCHEMY_DATABASE_URI", "sqlite:///")
    monkeypatch.setenv("JWT_SECRET_KEY", "testing")

def test_client(configured_env):
    # env variables are set here
    ...
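If you want these variables set for every test without requesting the fixture explicitly, a small variation (my assumption about your intent, not part of the original answer) is to make the fixture autouse:

import pytest

@pytest.fixture(autouse=True)
def configured_env(monkeypatch):
    # monkeypatch automatically undoes these changes after each test
    monkeypatch.setenv("SQLALCHEMY_DATABASE_URI", "sqlite:///")
    monkeypatch.setenv("JWT_SECRET_KEY", "testing")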
I have created a simple HTTP trigger-based Azure Function in Python which calls another Python script to create a sample file in Azure Data Lake Gen 1. My solution structure is given below:
requirements.txt contains the following:
azure-functions
azure-mgmt-resource
azure-mgmt-datalake-store
azure-datalake-store
__init__.py
import logging, os, sys
import azure.functions as func
import json

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        full_path_to_script = os.path.join(os.path.dirname(__file__) + '/Test.py')
        logging.info(f"Path: - {full_path_to_script}")
        os.system(f"python {full_path_to_script}")
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
Test.py
import json
from azure.datalake.store import core, lib, multithread

directoryId = ''
applicationKey = ''
applicationId = ''

adlsCredentials = lib.auth(tenant_id = directoryId, client_secret = applicationKey, client_id = applicationId)
adlsClient = core.AzureDLFileSystem(adlsCredentials, store_name = '')

with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb') as input_file:
    data = json.load(input_file)

with adlsClient.open('stage1/largeFiles/Result.json', 'wb') as responseFile:
    responseFile.write(data)
Test.py is failing with an error that no module named azure.datalake.store was found.
Why are the other required modules not available to Test.py, given that it is in the same directory?
pip freeze output:
adal==1.2.2
azure-common==1.1.23
azure-datalake-store==0.0.48
azure-functions==1.0.4
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==6.0.0
azure-nspkg==3.0.2
certifi==2019.9.11
cffi==1.13.2
chardet==3.0.4
cryptography==2.8
idna==2.8
isodate==0.6.0
msrest==0.6.10
msrestazure==0.6.2
oauthlib==3.1.0
pycparser==2.19
PyJWT==1.7.1
python-dateutil==2.8.1
requests==2.22.0
requests-oauthlib==1.3.0
six==1.13.0
urllib3==1.25.6
Problem
os.system(f"python {full_path_to_script}") from your functions project is causing the issue.
The Azure Functions runtime sets up the environment, including modifying process-level state like sys.path, so that your function can load any dependencies you may have. When you create a sub-process like that, not all of that information flows through. Additionally, you will face issues with logging -- logs from Test.py would not show up properly unless explicitly handled.
Importing works locally because you have all your requirements.txt modules installed and available to Test.py. This is not the case in Azure. After the remote build that happens as part of publishing, your modules are included as part of the published code package; they are not "installed" globally in the Azure environment per se.
Solution
You shouldn't have to run your script like that. In the example above, you could import your test.py from your __init__.py file, and that should behave like it was called python test.py (at least in the case above). Is there a reason you'd want to do python test.py in a sub-process over importing it?
Here's the official guide on how you'd want to structure your app to import shared code -- https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#folder-structure
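For illustration, a minimal sketch of importing instead of shelling out might look like this; the run() helper is a hypothetical wrapper around the body of Test.py, not something in the original code:

# __init__.py
import azure.functions as func

from . import Test  # Test.py lives in the same function folder

def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get('name')
    if name:
        Test.run()  # hypothetical: Test.py refactored to expose a run() function
        return func.HttpResponse(f"Hello {name}!")
    return func.HttpResponse(
        "Please pass a name on the query string or in the request body",
        status_code=400
    )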
Side-Note
I think once you get through the import issue, you may also face problems with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb'). We recommend following the developer guide above to structure your project and using __file__ to get the absolute path (reference).
For example --
import pathlib

with open(pathlib.Path(__file__).parent / 'stage1' / 'largeFiles' / 'TestFile.json'):
    ...
Now, if you really want to make os.system(f"python {full_path_to_script}") work, we have workarounds to the import issue. But I'd rather not recommend such an approach unless you have a really compelling need for it. :)
I have a framework which works under py.test. py.test can generate beautiful reports with the --html and --junitxml params, but clients using my framework don't always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports in the log folder. So I need to generate the path for the report at runtime. Can I do this with fixtures? Or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
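get_custom_xml_path is left for you to implement; one possible sketch, assuming you want a timestamped report inside a logs folder next to conftest.py (both names are my assumptions), is:

import time
from pathlib import Path

def get_custom_xml_path():
    # hypothetical: logs/ folder next to conftest.py, timestamped filename
    log_dir = Path(__file__).parent / "logs"
    log_dir.mkdir(exist_ok=True)
    return str(log_dir / f"report_{int(time.time())}.xml")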
The accepted answer is probably a bit more complicated than necessary for most people for a few reasons:
The decorator doesn't help. It doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to the pytest plugin xdist. I don't think there is any need to check for that, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile

import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or in the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py. The code looks simpler there.
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to keep things clear: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but allows you to switch it back by providing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
I'm trying to test file parsing with pytest. I have a directory tree that looks something like this for my project:
project
    project/
        cool_code.py
    setup.py
    setup.cfg
    test/
        test_read_files.py
        test_files/
            data_file1.txt
            data_file2.txt
My setup.py file looks something like this:
from setuptools import setup

setup(
    name = 'project',
    description = 'The coolest project ever!',
    setup_requires = ['pytest-runner'],
    tests_require = ['pytest'],
)
My setup.cfg file looks something like this:
[aliases]
test=pytest
I've written several unit tests with pytest to verify that files are properly read. They work fine when I run pytest from within the "test" directory. However, if I execute any of the following from my project directory, the tests fail because they cannot find data files in test_files:
>> py.test
>> python setup.py pytest
The test seems to be sensitive to the directory from which pytest is executed.
How can I get pytest unit tests to discover the files in "data_files" for parsing when I call it from either the test directory or the project root directory?
One solution is to define a rootdir fixture with the path to the test directory, and reference all data files relative to this. This can be done by creating a test/conftest.py (if not already created) with some code like this:
import os
import pytest

@pytest.fixture
def rootdir():
    return os.path.dirname(os.path.abspath(__file__))
Then use os.path.join in your tests to get absolute paths to test files:
import os

def test_read_favorite_color(rootdir):
    test_file = os.path.join(rootdir, 'test_files/favorite_color.csv')
    data = read_favorite_color(test_file)
    # ...
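Equivalently, if you prefer pathlib, a sketch of the same idea (same assumed test/ layout) would be:

import pathlib
import pytest

@pytest.fixture
def rootdir():
    # directory containing this conftest.py / test module
    return pathlib.Path(__file__).parent

def test_read_favorite_color(rootdir):
    test_file = rootdir / 'test_files' / 'favorite_color.csv'
    data = read_favorite_color(test_file)
    # ...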
One solution is to try multiple paths to find the files.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from coolprogram import *
import os

def test_file_locations():
    """Possible locations where test data could be found."""
    return(['./test_files',
            './tests/test_files',
            ])

def find_file(filename):
    """ Searches for a data file to use in tests """
    for location in test_file_locations():
        filepath = os.path.join(location, filename)
        if os.path.exists(filepath):
            return(filepath)
    raise IOError('Could not find test file.')

def test_read_favorite_color():
    """ Test that favorite color is read properly """
    filename = 'favorite_color.csv'
    test_file = find_file(filename)
    data = read_favorite_color(test_file)
    assert(data['first_name'][1] == 'King')
    assert(data['last_name'][1] == 'Arthur')
    assert(data['correct_answers'][1] == 2)
    assert(data['cross_bridge'][1] == True)
    assert(data['favorite_color'][1] == 'green')
One way is to pass a dictionary mapping a command name to a custom command class via the cmdclass argument of the setup function.
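A minimal sketch of that approach, following the classic setuptools test-command pattern (this is my illustration, not code from the linked answer):

# setup.py
import sys
from setuptools import setup
from setuptools.command.test import test as TestCommand

class PyTest(TestCommand):
    """Custom command so `python setup.py test` runs pytest."""
    def run_tests(self):
        import pytest  # imported here so plain setup.py calls work without pytest
        sys.exit(pytest.main(['test']))

setup(
    name='project',
    tests_require=['pytest'],
    cmdclass={'test': PyTest},
)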
Another way is like the one described here, posted here for quick reference:
pytest-runner will install itself on every invocation of setup.py. In some cases, this causes delays for invocations of setup.py that will never invoke pytest-runner. To help avoid this contingency, consider requiring pytest-runner only when pytest is invoked:
import sys

needs_pytest = {'pytest', 'test', 'ptr'}.intersection(sys.argv)
pytest_runner = ['pytest-runner'] if needs_pytest else []

# ...

setup(
    # ...
    setup_requires=[
        # ... (other setup requirements)
    ] + pytest_runner,
)
Make sure all the data you read in your test module is referenced relative to the directory that contains setup.py.
In the OP's case the data file path would be test/test_files/data_file1.txt.
I made a project with the same structure, read data_file1.txt with some text in it, and it works for me.
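For instance, a minimal sketch of that layout-relative access, assuming pytest is invoked from the project root (the directory containing setup.py):

# test/test_read_files.py
import os

def test_reads_data_file1():
    # path relative to the setup.py directory, i.e. the project root
    data_file = os.path.join('test', 'test_files', 'data_file1.txt')
    with open(data_file) as fh:
        assert fh.read()  # the file contains some text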