How do I specify a single test in a file with nosetests?

I have a file called test_web.py containing a class TestWeb and many methods named like test_something().
I can run every test in the class like so:
$ nosetests test_web.py
...
======================================================================
FAIL: checkout test
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/me/path/here/test_web.py", line 187, in test_checkout
...
But I can’t seem to run individual tests. These give me “No such test” errors when run in the same PWD:
$ nosetests test_web.py:test_checkout
$ nosetests TestWeb:test_checkout
What could be wrong here?

You must specify it like so: nosetests <file>:<Test_Case>.<test_method>, or
nosetests test_web.py:TestWeb.test_checkout
See the docs
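For a concrete picture, here is a minimal sketch of a file that matches that command (the names are taken from the question; the method body is a placeholder):
# test_web.py -- minimal sketch matching the command above
import unittest

class TestWeb(unittest.TestCase):

    def test_checkout(self):
        # addressed as test_web.py:TestWeb.test_checkout
        pass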

You can also specify a module:
nosetests tests.test_integration:IntegrationTests.test_user_search_returns_users

Specifying names on the command line, as the other answers suggest, does work and is useful. However, when I'm in the midst of writing tests, I often find that I want to run just the test I'm working on, and the names I would have to write on the command line get pretty long and cumbersome to type. In such cases, I prefer to use a custom decorator and flag.
I define wipd ("work in progress decorator") like this:
from nose.plugins.attrib import attr

def wipd(f):
    return attr('wip')(f)
This defines a decorator @wipd which will set the wip attribute on objects it decorates. For instance:
import unittest

class Test(unittest.TestCase):

    @wipd
    def test_something(self):
        pass
Then -a wip can be used at the command line to narrow the execution to the tests marked with @wipd.
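For example, assuming the tests import the decorator as above, the runs look like this (the negated form comes from nose's attrib plugin docs):
$ nosetests -a wip      # run only the tests marked with @wipd
$ nosetests -a '!wip'   # run everything except the marked tests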
Note on names...
I'm using the name @wipd for the decorator rather than @wip to avoid this kind of problem:
import unittest

class Test(unittest.TestCase):
    from mymodule import wip

    @wip
    def test_something(self):
        pass

    def test_something_else(self):
        pass
The import will make the wip decorator a member of the class, and all tests in the class will be selected. The attrib plugin checks the parent class of a test method to see if the attribute selected exists there too, and the attributes that are created and tested by attrib do not exist in a segregated space. So if you test with -a foo and your class contains foo = "platypus", then all tests in the class will be selected by the plugin.
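A short illustration of that pitfall (the class and attribute here are made up):
import unittest

class TestPlatypus(unittest.TestCase):
    foo = "platypus"  # an ordinary class attribute, not a mark

    def test_untagged(self):
        pass
Running nosetests -a foo selects test_untagged even though the method was never decorated, because the plugin also finds foo on the class.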

To run multiple specific tests, you can just add them to the command line, separated by spaces.
nosetests test_web.py:TestWeb.test_checkout test_web.py:TestWeb.test_another_checkout

In my tests, specifying tests by module name did not work; you must specify the actual path to the .py file:
nosetests /path/to/test/file.py:test_function
This was with nose==1.3.7.

My requirement was to run a single test in a test file that was in another Windows directory. This was done from the Anaconda command prompt as follows:
I ran nosetests from:
(base) C:\Users\ABC\Documents\work\
but test_MyTestFile.py and methodsFile.py were in:
(base) C:\Users\ABC\Documents\work\daily\
I ran the single test by including the quoted path, as follows:
(base) C:\Users\ABC\Documents\work>nosetests "daily\\test_MyTestFile.py:MyTestClass.test_add_integers"
test_MyTestFile.py looked like this:
import methodsFile
import unittest

class MyTestClass(unittest.TestCase):

    def test_add_integers(self):
        assert methodsFile.add(5, 3) == 8

    def test_add_integers_zero(self):
        assert methodsFile.add(3, 0) == 3
methodsFile.py looked like this:
def add(num1, num2):
    return num1 + num2

Related

How to execute test from inside a class in pytest?

I am trying to run a test method which is inside a class, using the pytest framework in Python.
I'm not sure what is going wrong, but the test is not getting picked up. I made sure the package name, module name, class name, and function name all start with "test".
There is no content inside __init__.py; I am not sure if I need to include anything in this file to make sure the tests under the class are picked up.
The interpreter I am using is shown in the screenshot. I have also added a screenshot showing the code so the directory structure is easier to understand.
I visited several blogs and this, but none of them helped me resolve this.
Could you please help?
By default, pytest only collects test classes whose names start with Test (the python_classes = Test* default): TestDemo, not test_demo. The rest of your names follow the correct schema for the defaults, so if you change the class name to TestDemo, pytest should be able to find it.
pytest docs on test discovery: https://docs.pytest.org/en/stable/example/pythoncollection.html
To test a function in a class with pytest, we can use this command:
$ pytest -v /path/to/test_file.py::ClassName::test_function_name
Remember to use "::" as the separator.
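For instance, a minimal sketch (all names here are placeholders):
# test_file.py
class TestDemo:

    def test_something(self):
        assert True

$ pytest -v test_file.py::TestDemo::test_something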

py.test: programmatically ignore unittest tests

I'm trying to get our team to migrate to py.test from unittest, in the hope that less boilerplate and faster runs will reduce the excuses for not writing as many unit tests as they should.
One problem we have is that almost all of our old django.unittest.TestCase tests fail due to errors. Also, most of them are really slow.
We decided that the new test system would ignore the old tests and be used only for new tests. I tried to get py.test to ignore the old tests by creating the following in conftest.py:
def pytest_collection_modifyitems(session, config, items):
    print("Filtering unittest.TestCase tests")
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        if not parent or not issubclass(parent.obj, unittest.TestCase) or hasattr(parent.obj, 'use_pytest'):
            selected.append(test)
    print("Filtered {} tests out of {}".format(len(items) - len(selected), len(items)))
    items[:] = selected
The problem is, it filters out all tests, including this one:
import pytest

class SanityCheckTest(object):
    def test_something(self):
        assert 1
Using some different naming pattern for the new tests would be a rather poor solution.
My test class did not conform to the naming convention. I changed it to:
import pytest

class TestSanity(object):
    def test_something(self):
        assert 1
I also fixed a bug in my pytest_collection_modifyitems, and now it works.
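The fix itself isn't shown, so here is a hedged reconstruction of a working conftest.py under the same rules (it assumes the missing pieces were just the imports, since the filtering logic already matches the stated intent):
# conftest.py -- hedged reconstruction, not the poster's actual fix
import unittest

import pytest

def pytest_collection_modifyitems(session, config, items):
    selected = []
    for test in items:
        parent = test.getparent(pytest.Class)
        # keep plain functions, non-unittest classes, and opted-in classes
        if (parent is None
                or not issubclass(parent.obj, unittest.TestCase)
                or hasattr(parent.obj, 'use_pytest')):
            selected.append(test)
    items[:] = selected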

Django Test Case Can't run method

I am just getting started with Django, so this may be something stupid, but I am not even sure what to google at this point.
I have a method that looks like this:
def get_user(self, user):
    return Utilities.get_userprofile(user)
The Utilities method looks like this:
@staticmethod
def get_userprofile(user):
    return UserProfile.objects.filter(user_auth__username=user)[0]
When I include this in the view, everything is fine. When I write a test case that uses any of the methods inside the Utilities class, I get None back:
Two test cases:
def test_stack_overflow(self):
    a = ObjName()
    print(a.get_user('admin'))

def test_Utility(self):
    print(Utilities.get_user('admin'))
Results:
Creating test database for alias 'default'...
None
..None
.
----------------------------------------------------------------------
Can someone tell me why this works in the view but not inside the test case, and why it doesn't generate any error messages?
Thanks
Verify that your unit tests comply with the following:
The test class must be in a file named test*.py.
The test class must subclass unittest.TestCase.
The test class should have a setUp method to create objects in the database (usually done this way, but object creation can happen in the test methods as well).
Test method names should start with test so they are identified and run by the ./manage.py test command.
The test class may have a tearDown method to properly end the test case.
Test case execution process:
When you run ./manage.py test, Django sets up a test_your_database_name database and creates all the objects mentioned in the setUp method (usually), then executes the test methods in the order of their placement inside the class. Once all the test methods have been executed, it looks for a tearDown method, executes it if present, and destroys the test database.
It may be that you have not created the objects in the setUp method or elsewhere in the test class.
Could you post the entire traceback and the test file so we can help you better?
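For reference, here is a minimal hedged sketch of the setUp pattern described above; the UserProfile field comes from the question's user_auth__username query, while the import paths are assumptions:
from django.contrib.auth.models import User
from django.test import TestCase

from myapp.models import UserProfile  # hypothetical app layout
from myapp.utils import Utilities     # hypothetical location of Utilities

class UserProfileTests(TestCase):

    def setUp(self):
        # create the rows the test database needs before each test runs
        auth_user = User.objects.create_user(username='admin', password='pw')
        UserProfile.objects.create(user_auth=auth_user)

    def test_get_userprofile(self):
        self.assertIsNotNone(Utilities.get_userprofile('admin'))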

How do I register a mark in pytest 2.5.1?

I've read the pytest documentation. Section 7.4.3 gives instructions for registering markers. I have followed the instructions exactly, but it doesn't seem to have worked for me.
I'm using Python 2.7.2 and pytest 2.5.1.
I have a pytest.ini file at the root of my project. Here is the entire contents of that file:
[pytest]
python_files=*.py
python_classes=Check
python_functions=test
rsyncdirs = . logs
rsyncignore = docs archive third_party .git procs
markers =
    mammoth: mark a test as part of the Mammoth regression suite
A little background to give context: The folks that created the automation framework I am working on no longer work for the company. They created a custom plugin that extended the functionality of the default pytest.mark. From what I understand, the only thing the custom plugin does is make it so that I can add marks to a test like this:
@pytest.marks(CompeteMarks.MAMMOTH, CompeteMarks.QUICK_TEST_A, CompeteMarks.PROD_BVT)
def my_test(self):
instead of like this:
@pytest.mark.mammoth
@pytest.mark.quick_test_a
@pytest.mark.prod_bvt
def my_test(self):
The custom plugin code remains present in the code base. I do not know if that has any negative effect on trying to register a mark, but thought it was worth mentioning if someone knows otherwise.
The problem I'm having is that when I execute the following command on the command line, I do NOT see my mammoth mark listed among the other registered marks.
py.test --markers
The output returned after running the above command is this:
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. See http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. See http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
What am I doing wrong and how can I get my mark registered?
One more piece of info: I had applied the mammoth mark to a single test (shown below) when I ran the py.test --markers command:
@pytest.mark.mammoth
def my_test(self):
If I understand your comments correctly, the project layout is the following:
~/customersites/
~/customersites/automation/
~/customersites/automation/pytest.ini
Then invoking py.test as follows:
~/customersites$ py.test --markers
will make py.test look for a configuration file in ~/customersites/ and subsequently in all its parents: ~/, /home/, /. In this case it will not find pytest.ini.
However, when you invoke it with one or more arguments, py.test will try to interpret each argument as a file or directory and start looking for a configuration file from that directory and its parents. It then iterates through all the arguments in order until it finds the first configuration file.
So with the above directory layout invoking py.test as follows will make it find pytest.ini and show the markers registered in it:
~/customersites$ py.test automation --markers
as now py.test will first look in ~/customersites/automation/ for a configuration file before going up the directory tree and looking in ~/customersites/. But since it finds one in ~/customersites/automation/pytest.ini it stops there and uses that.
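Once the configuration file is picked up, the custom mark should appear in the listing, along these lines:
~/customersites$ py.test automation --markers
@pytest.mark.mammoth: mark a test as part of the Mammoth regression suite
...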
Have you tried here?
From the docs:
API reference for mark related objects
class MarkGenerator
Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance.
Example:
import pytest

@pytest.mark.slowtest
def test_function():
    pass
will set a slowtest MarkInfo object on the test_function object.
class MarkDecorator(name, args=None, kwargs=None)
A decorator for test functions and test classes. When applied it will
create MarkInfo objects which may be retrieved by hooks as item keywords.
MarkDecorator instances are often created like this:
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2
def test_function():
    pass

How to stop Python unittest from printing test docstring?

I've noticed that when my Python unit tests contain a docstring at the top of the test function, the framework sometimes prints it in the test output. Normally, the test output contains one test per line:
<test name> ... ok
If the test has a docstring of the form
"""
test that so and so happens
"""
then all is well. But if the test has a docstring all on one line:
"""test that so and so happens"""
then the test output takes more than one line and includes the doc like this:
<test name>
test that so and so happens ... ok
I can't find where this is documented behavior. Is there a way to turn this off?
The first line of the docstring is used; the responsible method is TestCase.shortDescription(), which you can override in your testcases:
class MyTests(unittest.TestCase):
    # ....
    def shortDescription(self):
        return None
By always returning None you turn the feature off entirely. If you want to format the docstring differently, it's available as self._testMethodDoc.
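A quick illustration of the override in context (the test and its docstring are placeholders):
import unittest

class MyTests(unittest.TestCase):

    def shortDescription(self):
        return None  # suppress the docstring line in verbose output

    def test_so_and_so(self):
        """test that so and so happens"""
        pass
With the override in place, python -m unittest -v prints the test name followed by "... ok" instead of the docstring.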
This is an improved version of Martijn Pieters' excellent answer.
Instead of overriding that method for every test, it is more convenient (at least for me) to add the following file to your list of tests. Name the file test_[whatever you want].py.
test_config.py
import unittest

# Hide docstrings in verbose mode
unittest.TestCase.shortDescription = lambda x: None
This code snippet could also be placed in the __init__.py files of the test folder.
In my case, I just added it to the root folder of my project, scripts, since I use discovery via python -m unittest from scripts to run all the unit tests in my project. As this is the only test*.py file at that directory level, it loads before any other test.
(I tried the snippet in the __init__.py of the root folder, but it didn't seem to work, so I stuck with the file approach.)
BTW: I actually prefer lambda x: "\t" instead of lambda x: None
After reading this I made a plugin for nosetests to avoid the boilerplate.
https://github.com/MarechJ/nosenodocstrings
