nose2 with such DSL does not find tests - python

This might be really stupid, but I can't get it to work...
I want to use the such DSL in nose2 with Python 2.7 on Linux.
I'm trying out the beginning of the example from the documentation http://nose2.readthedocs.org/en/latest/such_dsl.html (see code below), but it doesn't run the tests, no matter how I launch it from the command line.
My file is called test_something.py, and it's the only file in the directory.
I've tried running from the command line with >> nose2 and >> nose2 --plugin nose2.plugins.layers, but I always get Ran 0 tests in 0.000s. With >> nose2 --plugin layers I get ImportError: No module named layers.
How am I supposed to run this test from the command line?
Thanks!
Code below:
import unittest

from nose2.tools import such

with such.A("system with complex setup") as it:

    @it.has_setup
    def setup():
        print "Setup"
        it.things = [1]

    @it.has_teardown
    def teardown():
        print "Teardown"
        it.things = []

    @it.should("do something")
    def test():
        print "Test"
        assert it.things
        it.assertEqual(len(it.things), 1)

DOH!
I forgot to add it.createTests(globals()) at the end of the file!
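For anyone hitting the same wall: the only change is one extra line at module level, after the with block (a minimal sketch of how test_something.py should end):
# at the very end of test_something.py, outside the with block
it.createTests(globals())  # generates the actual TestCase classes so nose2 can collect them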

Related

Use unittest to run tests from function [duplicate]

I have a file TestProtocol.py that has unittests. I can run that script and get test results for my 30 tests as expected. Now I want to run those tests from another file tester.py that is located in the same directory. Inside tester.py I tried import TestProtocol, but it runs 0 tests.
Then I found the documentation which says I should do something like this:
suite = unittest.TestLoader().discover(".", pattern = "*")
unittest.run(suite)
This should go through all files in the current directory . that match the pattern *, so all tests in all files. Unfortunately it again runs 0 tests.
There is a related QA that suggests to do
import TestProtocol
suite = unittest.findTestCases(TestProtocol)
unittest.run(suite)
but that also does not find any tests.
How do I import and run my tests?
You can try the following:
import unittest

# a preferred module name would be test_protocol, since CamelCase is conventionally reserved for class names
import TestProtocol

# load all test cases from the given module; this assumes your test cases extend unittest.TestCase
suite = unittest.TestLoader().loadTestsFromModule(TestProtocol)

# run all tests with verbosity
unittest.TextTestRunner(verbosity=2).run(suite)
Here is a full example
file 1: test_me.py
# file 1: test_me.py
import unittest

class TestMe(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

if __name__ == '__main__':
    unittest.main()
file 2: test_other.py, put this in the same directory
# file 2: test_other.py, put this in the same directory
import unittest
import test_me

suite = unittest.TestLoader().loadTestsFromModule(test_me)
unittest.TextTestRunner(verbosity=2).run(suite)
Run each file and it will show the same result:
# python test_me.py - Ran 1 test in 0.000s
# python test_other.py - Ran 1 test in 0.000s
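If you would rather keep the discovery approach from the question, a minimal sketch (assuming TestProtocol.py sits in the current directory and its test classes subclass unittest.TestCase); note there is no unittest.run(), the suite has to go through a runner:
import unittest

# discover() only imports files matching the pattern; the stock default is 'test*.py',
# so a file named TestProtocol.py needs a pattern that actually matches it
suite = unittest.TestLoader().discover(".", pattern="TestProtocol*.py")
unittest.TextTestRunner(verbosity=2).run(suite)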

Why do my imports work in pycharm, but not on the command line?

I have the following folder layout:
my_folder/
    my_subfolder/
        __init__.py
        main.py
        import_1.py
        import_2.py
With files:
# main.py
from my_subfolder import import_1
import_1.call_import_2(3)
And
# import_1.py
from my_subfolder import import_2
def call_import_2(n):
    import_2.print_hello_world_n_times(n)
And
# import_2.py
def print_hello_world_n_times(n):
    for i in range(n):
        print('hello world')
Now the thing is, if I run main.py in PyCharm, it works fine. However, if I run it from the command line with python my_subfolder/main.py or python main.py (depending on which folder I am in), it doesn't work! Git Bash cannot get it to work either. I get the error:
ModuleNotFoundError: No module named 'my_subfolder'
Does anyone know what causes this discrepancy between pycharm and the command line?
# main.py
from . import import_1
import_1.call_import_2(3)
and
# import_1.py
from . import import_2
def call_import_2(n):
    import_2.print_hello_world_n_times(n)
You're already in my_subfolder so it looks for another one inside of that.
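For the relative imports to resolve, main.py has to be executed as part of the package rather than as a loose script; from the command line that means running it as a module from my_folder (a hedged suggestion, assuming the layout above):
# run from inside my_folder so main.py gets my_subfolder as its package
python -m my_subfolder.main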

Difference between `Run()` and `py_run` and the use of `try except` block in `b.py` in Pylint

This is my code:
# uses Python3.7
# a.py
from pylint import lint as pl
pathvar = 'test.py'
pylint_opts = [pathvar]
pl.Run(pylint_opts)
print('New Text File Here')
This code gives me the correct output, but nothing after the Run call is executed, so the print statement never runs. However, if I add a try/except block, it runs fine.
# uses Python3.7
# b.py
from pylint import lint as pl
try:
    pathvar = 'test.py'
    pylint_opts = [pathvar]
    pl.Run(pylint_opts)
except:
    pass
print('New Text File Here')
There is also another method to run pylint on a file from python program:
# uses Python3.7
# c.py
from pylint import epylint as lint
pathvar = 'test.py'
lint.py_run(pathvar)
print('New Text File Here')
This one executes py_run and then prints the correct output.
I know you might suggest that I should use c.py, as it already solves my problem of running pylint. But a.py is more general, and various other arguments can be passed besides the file to lint. Why does b.py need a try/except block for the print statement to execute, while c.py doesn't?
This is because the Run class calls sys.exit in its __init__ method. You can pass a do_exit=False argument, like pl.Run(pylint_opts, do_exit=False), to make a.py work as you wish: printing after running pylint.
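A minimal sketch of a.py with that change (do_exit was the keyword argument in pylint releases of that era; newer versions renamed it to exit):
# a.py, adapted
from pylint import lint as pl

pylint_opts = ['test.py']
pl.Run(pylint_opts, do_exit=False)  # keeps Run.__init__ from calling sys.exit()
print('New Text File Here')  # now reached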

Python3 - output __main__ file prints when running unittests (from actual program, not unittests)

How can I make the prints from my __main__ file show up when I run tests? I mean the prints from that file, not the prints from the unittest files.
I have this sample structure (all files are in the same directory):
main.py:
import argparse

print('print me?')  # no output

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('name')
    args = parser.parse_args()
    print(args.name)  # no output
other.py:
def print_me():
    print('ran print_me')
test.py:
import unittest
import sh
import other


class TestMain(unittest.TestCase):
    def test_main(self):
        print('test_main')  # prints it
        sh.python3('main.py', 'test123')

    def test_other(self):
        print('test_other')  # prints it
        other.print_me()
And I run it with python3 -m nose -s or python3 -m unittest, but it makes no difference; the prints from main.py are not shown, only the ones defined directly in the test file. Here is what I do get:
user#user:~/python-programs/test_main$ python3 -m nose -s
test_main
.test_other
ran print_me
.
----------------------------------------------------------------------
Ran 2 tests in 0.040s
OK
P.S. Of course, if I run main.py without the tests, it prints normally (for example, calling main.py with sh from the Python shell/interpreter, just like in the unit tests).
sh.python3 starts a new process, and its output is not captured by nose. You can make the output visible by printing the result it returns:
print(sh.python3('main.py', 'test123'))
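Applied to the test above, a minimal sketch:
def test_main(self):
    print('test_main')
    # sh.python3 returns the child's stdout; printing it makes the output visible in the test run
    print(sh.python3('main.py', 'test123'))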

python unittest and pytest - can I assign test status to a variable

I am working on a Python-based testing system that iterates through a set of Python tests and runs them one by one (there are unittests and pytests).
Is there a way for my testing system to understand the result of every individual test and save it to a dictionary with key [test_name] and value [test_status], for example? I imagine the result of each test being assigned to a variable, for example:
test_status = "passed"
P.S.: all of the tests have a main() that looks like this:
# for unittests
def main():
    unittest.main()

# for pytests
def main():
    os.system("py.test -v {}".format(os.path.abspath(__file__)))
If I understood correctly, you want to actually run pytest or unittest as a command-line tool and retrieve the results.
The straightforward way to do this would be to use the JUnit XML output and parse it. For example, with pytest:
pytest --junitxml=path
Alternatively, you may want to consider using an automation server like Jenkins, which can run the unittest and pytest tests separately and then collect the results.
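If you go the JUnit XML route, a minimal sketch of turning the report into the desired test_name -> test_status dictionary (assuming a file produced by pytest --junitxml=report.xml; attribute details can vary between tools):
import xml.etree.ElementTree as ET

def statuses_from_junitxml(path):
    results = {}
    for case in ET.parse(path).getroot().iter('testcase'):
        name = case.get('name')
        if case.find('failure') is not None:
            results[name] = 'failed'
        elif case.find('error') is not None:
            results[name] = 'error'
        elif case.find('skipped') is not None:
            results[name] = 'skipped'
        else:
            results[name] = 'passed'
    return results

statuses = statuses_from_junitxml('report.xml')  # e.g. {'test_upper': 'passed'}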
I have found a solution for the unittest framework:
The idea is to send the test output not to the terminal console but to a file. One way to do it is to add the following code to the tests:
if __name__ == '__main__':
    # terminal command to run specific tests and save their output to a log file:
    # python [this_module_name] [log_file.log] [*tests]
    # (requires argparse, sys and unittest to be imported at the top of the test module)
    parser = argparse.ArgumentParser()
    parser.add_argument('test_log_file')
    parser.add_argument('unittest_args', nargs='*')
    args = parser.parse_args()
    log_file = sys.argv[1]
    # now set sys.argv to the unittest_args (leaving sys.argv[0] alone)
    sys.argv[1:] = args.unittest_args
    with open(log_file, "w") as f:
        runner = unittest.TextTestRunner(f)
        unittest.main(defaultTest=sys.argv[2:], exit=False, testRunner=runner)
and run it with a command like this one:
python my_tests.py log_file.log class_name.test_1 class_name.test_2 ... test_n
Another way is with a direct command that looks like this:
python -m unittest [test_module_name].[test_class_name].[test_name] 2> [log_file_name]
# real command example:
python -m unittest my_tests.class_name.test_1 2> my_test_log_file.log
# real command example with multiple tests:
python -m unittest my_tests.class_name.test_1 my_tests.class_name.test_2 my_tests.class_name.test_3 2> my_test_log_file.log
The final part is to write a method that reads this log file and gets the result of each test. Those log files look something like this:
.FEs
======================================================================
ERROR: test_simulative_error (__main__.SimulativeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "unittest_tests.py", line 84, in test_simulative_error
    raise ValueError
ValueError
======================================================================
FAIL: test_simulative_fail (__main__.SimulativeTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "unittest_tests.py", line 81, in test_simulative_fail
    assert False
AssertionError
----------------------------------------------------------------------
Ran 4 tests in 0.001s
FAILED (failures=1, errors=1, skipped=1)
So the final step is to open that log file and read the first line, which shows how each test finished, and save this information however you want.
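A rough sketch of such a reader, assuming the layout shown above, where the first line holds one status character per test in execution order ('.' passed, 'F' failed, 'E' error, 's' skipped):
STATUS = {'.': 'passed', 'F': 'failed', 'E': 'error', 's': 'skipped'}

def statuses_from_log(log_path, test_names):
    # test_names must be in the same order the tests were run
    with open(log_path) as f:
        first_line = f.readline().strip()
    return {name: STATUS.get(char, 'unknown')
            for name, char in zip(test_names, first_line)}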
