pytest - receiving test parameters from outside

I am new to the pytest framework and I wonder whether there is a standard approach for my pytest tests to receive parameters from another script. I guess the most common practice is to write a @pytest.fixture that looks like this:
@pytest.fixture
def param(request):
    parameter = request.config.getoption("--parameter")
    if parameter == 'A':
        return value_a
    else:
        return default_value
And running my test with the command py.test --parameter=value_a (this command is executed by some other script).
But if my test needs many parameters, let's say 10, that would be one long command. So I am asking what the standard approach is in a situation like that - do I provide some kind of XML file or serialized dictionary with the parameters and have my fixture take the parameters from there?
Also, how will my other script know what kind of parameters to provide to my test - should there be a test-parameters configuration file or some hardcoded data in my conftest.py that holds the information about the parameters, or should I get them by reading the signatures of the tests with from inspect import signature?
EDITED
Here is a sample of my tests
class TestH5Client:
    # runs before everything else in the class
    def setup_class(cls, ip="11.111.111.111",
                    browserType="Chrome",
                    port="4444",
                    client_url="https://somelink.com/",
                    username="some_username",
                    password="some_pass"):
        cls.driver = get_remote_webdriver(ip, port, browserType)
        cls.driver.implicitly_wait(60)
        cls.client_url = client_url
        cls.username = username
        cls.password = password

    def teardown_class(cls):
        cls.driver.quit()

    def test_login_logout(self):
        # opening web_client
        self.driver.set_page_load_timeout(60)
        self.driver.get(self.client_url)
        # opening web_client log-in window
        self.driver.set_page_load_timeout(60)
        self.driver.find_element_by_css_selector("div.gettingStarted p:nth-child(4) a:nth-child(1)").click()
        # log in into the client
        self.driver.find_element_by_id("username").send_keys(self.username)
        self.driver.find_element_by_id("password").send_keys(self.password)
        self.driver.set_page_load_timeout(60)
        self.driver.find_element_by_id("submit").click()
        # clicking on the app_menu so the logout link appears
        self.driver.implicitly_wait(60)
        self.driver.find_element_by_id("action-userMenu").click()
        # clicking on the logout link
        self.driver.implicitly_wait(60)
        self.driver.find_element_by_css_selector("#vui-actions-menu li:nth-child(3)").click()
        assert "Login" in self.driver.title

    def test_open_welcome_page(self):
        """fast selenium test for local testing"""
        self.driver.set_page_load_timeout(20)
        self.driver.get(self.client_url)
        assert "Welcome" in self.driver.title

    def test_selenium_fail(self):
        """quick test of selenium failure for local testing"""
        self.driver.set_page_load_timeout(20)
        self.driver.get(self.client_url)
        assert "NotInHere" in self.driver.title
I need all those parameters to be provided by an outside Python framework that supplies test parameters. And I need to know how this framework should get the names of the parameters and how to run those tests with these parameters.

Given your answers in the comments, the data changes weekly.
So I'd suggest passing in a single parameter, the path to a file specifying the rest of your info.
Use whatever parsing mechanism you're already using elsewhere - XML, JSON, whatever - or use something simple like a config file reader.
Create a fixture with session scope (like this) and either give it some sensible defaults, or have it fail violently if it doesn't get a valid parameter file.
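A minimal sketch of that idea, assuming a JSON parameter file and an invented --param-file option (both names are placeholders):

# conftest.py
import json
import pytest

def pytest_addoption(parser):
    parser.addoption("--param-file", action="store", default=None,
                     help="path to a JSON file with test parameters")

@pytest.fixture(scope="session")
def params(request):
    path = request.config.getoption("--param-file")
    if path is None:
        pytest.fail("no --param-file given and no defaults configured")
    with open(path) as f:
        return json.load(f)  # e.g. {"client_url": "...", "username": "..."}

The outside script then only has to build one file and run py.test --param-file=params.json, and any test that needs the values simply takes params as a fixture argument.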

Related

How to get input variable from other function while running pytest for SeleniumBase in Python?

The file I upload to a website changes with each run, and I'd like a way to change it in the test method. Currently, I have it set up like this:
functions.py
import subprocess
from shlex import split

def get_file():
    ...
    file = "file_ex1.doc"
    return file

def run_test1():
    cmd = "pytest -v webtests.py::EF --browser=chrome"
    subprocess.run(split(cmd))  # run command line
    ...
webtests.py
file_path = file  # changes with each run, should come from get_file()

class EF(BaseCase):
    def test_now(self):
        self.open('https://www...')
        self.find_element('input[name="Query"]').send_keys(file_path)
I was wondering about the best way to change the file_path variable based on the output of certain functions in functions.py. For example, it could sometimes produce a file called file1 to upload, and another time file2.txt. What's the best way to connect the two scripts and run test_now() with the updated file path for each run? Or should they be set up in a different way?
Thanks so much in advance.
You can use the pytest args that come with SeleniumBase.
--data=DATA # (Extra test data. Access with "self.data" in tests.)
--var1=DATA # (Extra test data. Access with "self.var1" in tests.)
--var2=DATA # (Extra test data. Access with "self.var2" in tests.)
--var3=DATA # (Extra test data. Access with "self.var3" in tests.)
Source: Docs
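So the runner can hand the freshly generated path over on the command line and the test reads it back as self.data. A sketch built on the question's own snippets (the URL and selector are placeholders, and the path is assumed to contain no spaces):

# functions.py side
import subprocess
from shlex import split

cmd = "pytest -v webtests.py::EF --browser=chrome --data=%s" % get_file()
subprocess.run(split(cmd))

# webtests.py side -- SeleniumBase exposes the --data value as self.data
from seleniumbase import BaseCase

class EF(BaseCase):
    def test_now(self):
        self.open('https://www...')
        self.find_element('input[name="Query"]').send_keys(self.data)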

Showing server errors in test output during Django StaticLiveServerTestCase?

Is there a way to show the errors that occurred on the server during a StaticLiveServerTestCase directly in the test feedback? That is, when some server function call errors and the page just doesn't show up, the test execution by default has no knowledge of the server error. Is there some way to pass that output on to the testing thread?
Preferably these errors would show up in the same place that errors in the test code execution show up. If this isn't (easily) possible, though, what's the next best way to quickly see those server errors?
Thanks!
Code (as requested):
class TestFunctionalVisitor(StaticLiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_visitor(self):
        self.browser.get(self.live_server_url)
        self.assertEqual(self.browser.title, "Something")
...
class Home(TemplateView):
    template_name = 'home.html'

    def get_context_data(self):
        context = {}
        MyModel = None
        context['my_models'] = MyModel.objects.all()
        return context
This has been significantly altered to make it simple and short. But when MyModel is None and the view tries to call objects.all(), the server throws a 500 error, and all I get is the "Something" not in self.browser.title failure from the test output, when I'd like to see the NoneType has no... error in the test output.
To see the errors immediately, run the test in DEBUG mode:
from django.test.utils import override_settings

@override_settings(DEBUG=True)
class DjkSampleTestCase(StaticLiveServerTestCase):
    # fixtures = ['club_app_phase01_2017-01-09_13-30-19-169537.json']
    reset_sequences = True
But one should also configure logging of server-side errors, either via a custom django.core.handlers.base.BaseHandler class handle_uncaught_exception() method implementation or via Sentry.
I usually override the default logger using:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s -%(filename)s:%(lineno)d - %(message)s')
This will display stderr on your terminal. You can even do:
logging.debug('My var %s', var)
I only do this for debugging; if you want to use logging for non-debugging things I'd suggest creating custom loggers.
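For example, a per-module logger is a small step up from basicConfig on the root logger (the names here are illustrative):

import logging

logger = logging.getLogger(__name__)  # one logger per module, configurable separately

def open_page(url):
    logger.info("opening page %s", url)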
More details about logging:
https://docs.djangoproject.com/en/1.10/topics/logging/
https://docs.python.org/3/library/logging.html
This is exactly why it is recommended to have many more unit tests than integration and UI/end-to-end tests. Aside from other things, the latter don't provide you with specific feedback, and you often need more time debugging and investigating why a UI test failed. On the other hand, when a unit test fails, it is usually a failure or an exception pointing you to a specific line in the code - you get the "What went wrong" answer right away.
In other words, the point is: Cover this particular problem with unit tests leaving your UI test as is.
To help you gather more information about the "Something" not in self.browser.title failure, turn logging on and log as many details as possible. You may also use the built-in Error Reporting and, for instance, let Django send you an email on a 500 error. In other words, collect all the details and troubleshoot the failure manually.
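A minimal sketch of the email route, assuming Django's default logging config (which mails unhandled 500s on the django.request logger to ADMINS when DEBUG=False):

# settings.py -- addresses are placeholders
ADMINS = [("Dev Team", "devs@example.com")]
SERVER_EMAIL = "django@example.com"
# for local runs, print the mails to the console instead of sending them
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"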

Have a single test method return multiple test results

Before I get you all confused, let me clarify: I'm NOT asking about running a single test method with different arguments. All clear? Then let's go:
I have a test in Python (Django, but not relevant) that basically...
starts an HTTP server,
starts Selenium, opens a web page on this server,
via Selenium loads and runs a suite of JavaScript tests (via Jasmine)
collects the results and fails if any test failed
I'd like to make the output of each Jasmine spec visible as a separate entry in the Python unit test output (with its own name). Extracting it from JavaScript via Selenium is the easy part, but I don't know how to connect it with the UnitTest machinery.
Expected code would look something like (pseudocode):
class FooPageTest(TestCase):
    def setUp(self):
        # start selenium, etc

    def run(self, result):
        self.run_tests()
        for test_name, status, failure_message in self.get_test_results():
            if status:
                result.add_successful_test(test_name)
            else:
                result.add_failed_test(test_name, failure_message)
Expected output:
$ python manage.py test FooPageTest -v2
first_external_test ... ok
second_external_test ... ok
third_external_test ... ok
A gotcha: The number and names of test cases would be only known after actually running the tests.
Is it possible to bend unittest2 to my will? How?
It sounds like you have multiple external tests to run, and you want to have the results of each test reported individually through Python unit test. I think that I would do something like:
class FooPageTest(TestCase):
    @classmethod
    def setUpClass(cls):
        # start selenium, etc
        cls.run_tests()

    @classmethod
    def getATest(cls, test_name):
        def getOneResult(self):
            # read the result for "test_name" from the selenium results
            if not status:
                raise AssertionError("Test %s failed: %s" % (test_name, failure_message))
        setattr(cls, 'test%s' % test_name, getOneResult)

for test_name in get_test_names():
    FooPageTest.getATest(test_name)
This approach does a couple of things that I think are nice:
It runs the tests when the tests would be run by test discovery, not on module import
Each selenium test generates a Python test.
To use this, you'll need to define get_test_names(), which reads the names of the tests that will be run. You'll also need a function to read each individual result from the selenium results, but it sounds like you must already have a way to do this (your get_test_results() method).
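For illustration, a minimal get_test_names() could just return the spec names the suite is known to contain - here the ones from the expected output above (a hypothetical stub; in practice you'd extract the names from your Jasmine suite):

def get_test_names():
    # hypothetical: in practice, extract these from the Jasmine suite
    return ["first_external_test", "second_external_test", "third_external_test"]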

How to access plugin options within a test? (Python Nose)

We are trying to implement an automated testing framework using nose. The intent is to add a few command line options to pass into the tests, for example a hostname. We run these tests against a web app as integration tests.
So, we've created a simple plugin that adds a option to the parser:
import os
from nose.plugins import Plugin

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        self.hostname = options.hostname
The option is now available when we run nosetests... but I can't figure out how to use it within a test case. Is this possible? I can't find any documentation on how to access options or the configuration within a test case.
Adding the command line arguments is purely for development/debugging purposes. We plan to use config files for our automated runs in Bamboo. However, when developing integration tests and also debugging issues, it is nice to change the config on the fly. But we need to figure out how to actually use the options first... I feel like I'm just missing something basic, or I'm blind...
Ideally we could extend the testconfig plugin to make passing in config arguments from this:
--tc=key:value
to:
--key=value
If there is a better way to do this then I'm all ears.
One shortcut is to inspect sys.argv within the test - it will have the list of parameters passed to the nose executable, including the plugin ones. Alternatively your plugin can add attributes to your tests, and you can refer to those attributes - but that requires more heavy lifting - similar to this answer.
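A quick sketch of that sys.argv shortcut, matching the --hostname option from the plugin above (the parsing is deliberately crude):

import sys

def get_hostname(default=None):
    # scan the raw command line nose was started with for --hostname=VALUE
    for arg in sys.argv:
        if arg.startswith("--hostname="):
            return arg.split("=", 1)[1]
    return default

def test_server_reachable():
    hostname = get_hostname("localhost")
    assert hostname  # use it to build URLs, connections, etc.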
So I've found out how to make this work:
import os
from nose.plugins import Plugin

case_options = None

class test_args(Plugin):
    """
    Attempting to add command line parameters.
    """
    name = 'test_args'
    enabled = True

    def options(self, parser, env=os.environ):
        super(test_args, self).options(parser, env)
        parser.add_option("--hostname",
                          action="store",
                          type="str",
                          help="The hostname of the server")

    def configure(self, options, conf):
        global case_options
        case_options = options
With this in place, you can get at the options in your test case via:
from test_args import case_options
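For example (a hypothetical test module, assuming the plugin above is installed and enabled):

from test_args import case_options

def test_hostname_option():
    # case_options is the optparse Values object nose produced in configure()
    assert case_options.hostname is not None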
To solve the different config file issue, I've found you can use a setup.cfg file, written like an INI file, to pass in default command line parameters. You can also pass in -c config_file.cfg to pick a different config. This should work nicely for what we need.
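A sketch of such a setup.cfg, with placeholder values - nose reads defaults from its [nosetests] section:

[nosetests]
hostname=staging.example.com
verbosity=2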

Good way to collect programmatically generated test suites in nose or pytest

Say I've got a test suite like this:
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
I am currently doing the following:
suite = unittest.TestSuite()
loader = unittest.TestLoader()
safetests = loader.loadTestsFromTestCase(SafeTests)
suite.addTests(safetests)
if TARGET != 'prod':
    unsafetests = loader.loadTestsFromTestCase(BombTests)
    suite.addTests(unsafetests)
unittest.TextTestRunner().run(suite)
I have one major problem, and one interesting point:
I would like to be using nose or py.test (doesn't really matter which).
I have a large number of different applications that are exposing these test
suites via entry points.
I would like to be able to aggregate these custom tests across all installed
applications so I can't just use a clever naming convention. I don't
particularly care about these being exposed through entry points, but I
do care about being able to run tests across applications in
site-packages. (Without just importing... every module.)
I do not care about maintaining the current dependency on
unittest.TestCase, trashing that dependency is practically a goal.
EDIT This is to confirm that @Oleksiy's point about passing args to nose.run does in fact work, with some caveats.
Things that do not work:
passing all the files that one wants to execute (which, weird)
passing all the modules that one wants to execute. (This either executes
nothing, the wrong thing, or too many things. Interesting case of 0, 1 or
many, perhaps?)
Passing in the modules before the directories: the directories have to come
first, or else you will get duplicate tests.
This fragility is absurd; if you've got ideas for improving it I welcome comments. I set up a github repo with my experiments trying to get this to work.
All that aside, The following works, including picking up multiple projects
installed into site-packages:
#!python
import importlib, os, sys
import nose

def runtests():
    modnames = []
    dirs = set()
    for modname in sys.argv[1:]:
        modnames.append(modname)
        mod = importlib.import_module(modname)
        fname = mod.__file__
        dirs.add(os.path.dirname(fname))
    modnames = list(dirs) + modnames
    nose.run(argv=modnames)

if __name__ == '__main__':
    runtests()
which, if saved into a runtests.py file, does the right thing when run as:
runtests.py project.tests otherproject.tests
For nose you can keep both sets of tests in place and choose which ones to run using the attrib plugin, which is great for selecting tests. I would keep both test classes and assign attributes to them:
from nose.plugins.attrib import attr

@attr("safe")
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
For your production code I would just call nose with nosetests -a safe, or set NOSE_ATTR=safe in your production test environment, or call the run method on the nose object to run it natively in Python with the -a command line option based on your TARGET:
import sys
import nose

if __name__ == '__main__':
    module_name = sys.modules[__name__].__file__
    argv = [sys.argv[0], module_name]
    if TARGET == 'prod':
        argv.append('-a safe')
    result = nose.run(argv=argv)
Finally, if for some reason your tests are not discovered, you can explicitly mark them as tests with the @istest decorator (from nose.tools import istest).
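For example, a minimal use of @istest:

from nose.tools import istest

@istest
def check_homepage():
    # without @istest nose would skip this function, since its name
    # doesn't match the default test-name pattern
    assert True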
This turned out to be a mess: nose pretty much exclusively uses the TestLoader.load_tests_from_names function (it's the only function tested in unit_tests/test_loader), so since I wanted to actually load things from an arbitrary Python object I seemed to need to write my own logic to figure out what kind of load function to use. Then, in addition, to correctly get things to work like the nosetests script I needed to import a large number of things. I'm not at all certain that this is the best way to do things, not even kind of, but this is a stripped down example (no error checking, less verbosity) that is working for me:
import logging
import sys
import types
import unittest

from nose.config import Config, all_config_files
from nose.core import run
from nose.loader import TestLoader
from nose.suite import ContextSuite
from nose.plugins.manager import PluginManager

from myapp import find_test_objects

log = logging.getLogger(__name__)

def load_tests(config, obj):
    """Load tests from an object

    Requires an already configured nose.config.Config object.

    Returns a nose.suite.ContextSuite so that nose can actually give
    formatted output.
    """
    loader = TestLoader()
    kinds = [
        (unittest.TestCase, loader.loadTestsFromTestCase),
        (types.ModuleType, loader.loadTestsFromModule),
        (object, loader.loadTestsFromTestClass),
    ]
    tests = None
    for kind, load in kinds:
        if isinstance(obj, kind) or issubclass(obj, kind):
            log.debug("found tests for %s as %s", obj, kind)
            tests = load(obj)
            break
    suite = ContextSuite(tests=tests, context=obj, config=config)
    return suite

def main():
    "Actually configure the nose config object and run the tests"
    config = Config(files=all_config_files(), plugins=PluginManager())
    config.configure(argv=sys.argv)
    tests = []
    for group in find_test_objects():
        tests.append(load_tests(config, group))
    run(suite=tests)
If your question is, "How do I get pytest to 'see' a test?", you'll need to prefix the name of each test file and each test function with 'test_'. Then, just pass the directories you want to search on the pytest command line and it will recursively search for files that match 'test_XXX.py', collect the 'test_XXX' functions from them and run them.
As for the docs, you can try starting here.
If you don't like the default pytest test collection method, you can customize it using the directions here.
If you are willing to change your code to generate a py.test "suite" (my definition) instead of a unittest suite (the technical term), you may do so easily. Create a file called conftest.py like the following stub:
import pytest

def pytest_collect_file(parent, path):
    if path.basename == "foo":
        return MyFile(path, parent)

class MyFile(pytest.File):
    def collect(self):
        myname = "foo"
        yield MyItem(myname, self)
        yield MyItem(myname, self)

class MyItem(pytest.Item):
    SUCCEEDED = False

    def __init__(self, name, parent):
        super(MyItem, self).__init__(name, parent)

    def runtest(self):
        if not MyItem.SUCCEEDED:
            MyItem.SUCCEEDED = True
            print("good job, buddy")
            return
        else:
            print("you sucker, buddy")
            raise Exception()

    def repr_failure(self, excinfo):
        return ""
This is where you will generate/add your code into the MyFile and MyItem classes (as opposed to unittest.TestSuite and unittest.TestCase). I kept the naming convention of the MyFile class that way because it is intended to represent something read from a file, but of course you can basically decouple it (as I've done here). See here for an official example of that. The only limit is that, the way I've written this, a file named foo must exist in your tree, but you can decouple that too, e.g. by keying off conftest.py or whatever other file name exists in your tree (and only once - otherwise everything will run for each file that matches, and if you drop the if path.basename test, for every file that exists in your tree!).
You can run this from the command line with
py.test -whatever -options
or programmatically from any code you wish:
import pytest
pytest.main("-whatever -options")
The nice thing with py.test is that you unlock many very powerful plugins, such as the html report.
