Unit tests for a single python file - python

I have a python script with main, and a whole lot of helper methods. How do I write unit tests for these helper methods which all reside in the same file. A lot of the examples I see online involve creating libraries of helpers and then importing and testing that. In my case it is all one file.
Sample structure:
/user/
|-- pythonscript.py
and I want to write tests for pythonscript.py.
Sample pythonscript.py:
def getSum(i, j):
    return i + j

def main():
    summer = getSum(1, 1)

if __name__ == '__main__':
    main()
For example, I want to write tests for methods like getSum in another file. (I recall there was a test tool that would scan .py files and identify test functions by a test_ prefix or _test suffix, but I cannot seem to find it anymore.)

It sounds like you're describing pytest, which can automatically discover tests that follow certain naming and path conventions. One of these is considering every function prefixed with test to be a test.
For your scenario, I'd recommend creating a file called test_pythonscript.py also at the root of your directory. Inside that file, you could import functions from pythonscript.py, then test them:
# test_pythonscript.py
from pythonscript import getSum
def test_getSum():
    assert getSum(1, 2) == 3
Once you've installed pytest as a dependency, you'd run pytest at the root of your project and it should be able to discover the above test.
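To make that sketch concrete, here is a complete test_pythonscript.py. getSum is inlined as a stand-in so the snippet runs on its own; in your project you would keep only the import line:

```python
# test_pythonscript.py (sketch)
# In your project: from pythonscript import getSum
# getSum is inlined here only so this snippet is self-contained.
def getSum(i, j):
    return i + j

# pytest collects any top-level function whose name starts with "test".
def test_getSum():
    assert getSum(1, 2) == 3

def test_getSum_negative():
    assert getSum(-1, 1) == 0

if __name__ == "__main__":
    # Fallback smoke test when pytest is not installed.
    test_getSum()
    test_getSum_negative()
    print("ok")
```

Note that the if __name__ == '__main__' guard in pythonscript.py is what makes the import safe: importing the module defines getSum and main but does not call main().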

Related

Why do you run other lines of codes of a python file(say test.py) when you are just importing a piece of test.py from somewhere else (say main.py)?

I have two python files. One is main.py which I execute. The other is test.py, from which I import a class in main.py.
Here is the sample code for main.py:
from test import Test

if __name__ == '__main__':
    print("You're inside main.py")
    test_object = Test()
And, here is the sample code for test.py:
class Test:
    def __init__(self):
        print("you're initializing the class.")

if __name__ == '__main__':
    print('You executed test.py')
else:
    print('You executed main.py')
Finally, here's the output, when you execute main.py:
You executed main.py
You're inside main.py
you're initializing the class.
From the order of the outputs above, you can see that when you import a piece of a file, the whole file gets executed immediately. I am wondering why? What's the logic behind that?
I am coming from Java, where every file contains a single class with the same name as the file. I am confused about why Python behaves this way.
Any explanation would be appreciated.
What is happening?
When you import the test module, the interpreter runs through it, executing it line by line. Since if __name__ == '__main__' evaluates to false, it executes the else clause. After this it continues past the from test import Test line in main.py.
Why does python execute the imported file?
Python is an interpreted language. Being interpreted means that the program is read and evaluated one line at a time. Going through the imported module, the interpreter needs to evaluate each line, as it has no way to discern which lines are useful to the module and which are not. For instance, a module could have variables that need to be initialized.
Python is designed to support multiple paradigms. This behavior is used in some of the paradigms python supports, such as procedural programming.
Execution allows the designer of that module to account for different use cases. The module could be imported or run as a script. To accommodate this, some functions, classes or methods may need to be redefined. As an example, a script could output non-critical errors to the terminal, while an imported module to a log-file.
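The two code paths can be demonstrated in one runnable sketch by executing the same source once with a module-style __name__ and once with a script-style one (the module name "demo" is made up):

```python
import types

SOURCE = '''
executed = []
executed.append("top level")          # always runs, import or not
if __name__ == "__main__":
    executed.append("as a script")
else:
    executed.append("as an import")
'''

# Simulate `import demo`: a module object carries __name__ == "demo".
mod = types.ModuleType("demo")
exec(compile(SOURCE, "demo.py", "exec"), mod.__dict__)
print(mod.executed)            # ['top level', 'as an import']

# Simulate `python demo.py`: the interpreter sets __name__ to "__main__".
ns = {"__name__": "__main__"}
exec(compile(SOURCE, "demo.py", "exec"), ns)
print(ns["executed"])          # ['top level', 'as a script']
```

In both cases the top-level line runs; only the branch taken by the __name__ check differs.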
Why specify what to import?
Let's say you are importing two modules, both of which define a Test class. If everything from those modules is imported, only one version of the Test class can exist in your program. You can resolve this issue using different syntax.
import package1
import package2

package1.Test()
package2.Test()
Alternatively, you can rename them with the as-keyword.
from package1 import Test
from package2 import Test as OtherTest
Test()
OtherTest()
Dumping everything into the global namespace (i.e. from test import *) pollutes the namespace of your program with a lot of definitions you might not need and could unintentionally overwrite or use.
where all files included a single class with the same name
There is no such requirement imposed in Python; you can put multiple classes, functions, and values in a single .py file. For example:
class OneClass:
    pass

class AnotherClass:
    pass

def add(x, y):
    return x + y

def diff(x, y):
    return x - y

pi = 22 / 7
is a legal Python file.
According to an interview with Python's creator, the module mechanism in Python was influenced by the Modula-2 and Modula-3 languages. So maybe the right question is: why did the creators of those languages elect to implement modules that way?

Can I put step definitions in a folder which is not "steps" with behave?

I am trying to work with Behave on Python.
I was wondering if there would be a way to put my .py files somewhere else instead of being forced to put them all inside the "steps" folder. My current structure would look like this
tests/
    features/
    steps/  # all code inside here, for now
What I would like to accomplish is something like
tests/
    features/  # with all the .feature files
    login/     # with all the .py files for logging in inside a service
    models/    # with all the .py files that represent a given object
and so on
The only BDD framework that I used before Behave was Cucumber with Java, which allowed to insert the step definitions wherever I wanted to (and the rest was handled by Cucumber itself).
I am asking this because I would like to have a lot of classes in my project in order to organize my code in a better way.
This may be a bit late but you can do the following:
Have the structure like this:
tests/
    features/
    steps/
        login/
        main_menu/
        all_steps.py
In the subfolders of steps/ you can create your <feature>_step.py files with the implementations, and then in all_steps.py (or however you want to name it) you just need to import them:
from tests.steps.login.<feature>_step import *
from tests.steps.main_menu.<feature>_step import *
etc.
And when you run this, it should find the step files. Alternatively, you can have the files anywhere in the project, as long as you have one steps/ folder containing a file in which you import all the steps.
First, from the behave Documentation (Release 1.2.7.dev0):
behave works with three types of files:
feature files written by your Business Analyst / Sponsor / whoever with your behaviour scenarios in it, and
a "steps" directory with Python step implementations for the scenarios.
optionally some environmental controls (code to run before and after steps, scenarios, features or the whole shooting match).
So a steps/ directory is required.
To accomplish a workaround similar to what you have in mind, I tried creating a subdirectory in the /steps directory: /steps/deeper/ and inserted my Python file there: /steps/deeper/testing.py. After running behave, I received the "NotImplementedError", meaning the step definitions in /deeper/testing.py were not found.
It appears that behave doesn't search recursively through subdirectories of the steps/ directory for any additional Python files.
As for what you're trying to do, I think it's a decent organizational idea, but since it's not doable, you could do this: instead of having directories for the Python files in your tests/ directory, why not use a good naming convention for your Python files and separate the associated functions into their own files? That is:
tests/
    features/
    steps/
        login_prompt.py    # contains all the functions for logging in inside a service
        login_ssh.py       # contains all the functions for SSH login
        models_default.py  # contains all the functions for the default object
        models_custom.py   # contains all the functions for a custom object
and so on...
Of course, at this point, it really doesn't matter if you separate them into different Python files, since behave searches through all the Python files in steps/ when called, but for organization's sake, it accomplishes the same effect.
You can do it with an additional method, something like this:
from os import walk
from pkgutil import iter_modules
from importlib import import_module

def import_steps_from_subdirs(dir_path):
    for directory in walk(dir_path):
        current_directory = directory[0] + '/'
        all_modules = [module_info[1] for module_info in iter_modules(path=[current_directory])]
        # Resources.BASE_DIR is the project root in this setup
        current_directory = current_directory.replace(Resources.BASE_DIR + '/', '')
        for module in all_modules:
            import_module(current_directory.replace('/', '.') + module)
Then call this method in the before_all hook (in environment.py).
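Here is a self-contained sketch of the dynamic-import idea behind that helper: it writes a throwaway step module to a temporary directory and then loads it by dotted name, the way import_steps_from_subdirs would (all file and package names below are made up):

```python
import os
import sys
import tempfile
from importlib import import_module

# Build a fake steps package: <tmp>/login/example_steps.py
base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, "login")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "example_steps.py"), "w") as f:
    f.write("LOADED = True\n")

# Import it by dotted name, as import_module(...) does in the helper above.
sys.path.insert(0, base)
module = import_module("login.example_steps")
print(module.LOADED)  # True
```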

Generating a test result of multiple sikuli scripts

I want to run multiple (like 10 or so) sikuli scripts consecutively and output the result in XML. I have found this SO question:
How to generate report using sikuli for desktop application
and xmlrunner looks quite OK. Now, my Sikuli scripts have multiple test methods, but not all of them have tearDown steps, since those tests don't do much.
Do I have to implement all 3 methods for a test to work?
How does the test runner work? Does it start by calling setUp and then proceeds to call all other methods in sequence?
Furthermore, using template provided in the answer of the question:
import unittest
import xmlrunner

class MyTest(unittest.TestCase):
    def setUp(self):
        pass  # setUp

    def testMyTest(self):
        pass  # test

    def tearDown(self):
        pass  # tearDown

suite = unittest.TestLoader().loadTestsFromTestCase(MyTest)
result = xmlrunner.XMLTestRunner(open("unittest.xml", "w")).run(suite)
How would I go about including all my Sikuli scripts, which are all separate classes in separate folders? Is it possible somehow to reference or import the test .py file generated by Sikuli? The reason is, I wouldn't like to copy and paste all the code into one large file, which would then have many classes and would be very long.
You could make a Main() class in which you call all the other files you would like to execute.
To call another file you could use execfile(); use the complete path to the .py file inside the .sikuli directory.
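A minimal sketch of that runner idea. The script path here is generated on the fly so the snippet is runnable end to end; execfile() exists on Python 2 (which Sikuli's bundled interpreter provides), while the exec(open(...)) form works on both Python 2 and 3:

```python
import os
import tempfile

def run_script(path, namespace):
    # On Python 2 / Sikuli you could simply call: execfile(path, namespace)
    # Portable equivalent:
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), namespace)

# Stand-in for one Sikuli-generated .py file, so the sketch runs end to end.
script = os.path.join(tempfile.mkdtemp(), "suite1.py")
with open(script, "w") as f:
    f.write("result = 1 + 1\n")

ns = {}
run_script(script, ns)
print(ns["result"])  # 2
```

In a real Main() runner you would loop over the full paths of your .sikuli test files and call run_script for each.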

Bootstrapping tests and using Python test discovery

A problem I continue to have is "bootstrapping" my tests.
The problem that I have is exactly what this guy has.
The top solution talks about creating a "bootstrap" script. I presume that I must then enumerate all of the tests to be run, or use test manifests in the __init__.py files using the __all__ keyword. However, I noticed that the most recent Python documentation on unittest does not talk about __all__ anymore.
In 2.7, we have the python command called "discovery"
python -m unittest discover
That works even nicer. Because:
1) There's no need for Nose
2) There's no need for test manifests
But it doesn't seem to have a way to "bootstrap"
Do I need to use another test runner? One that allows bootstrapping AND discovery?
Do I need py.test?
http://pytest.org/
The reason that I need bootstrapping is the problem that this guy has. Basically, my import statements don't work right if I run the test directly. I want to execute my suite of tests from the top of my project, just like the app would when it runs normally.
After all, import statements are always relative to their physical location. (BTW, I think this is a hindrance in Python)
Definition: What is Bootstrapping?
Bootstrapping means that I want to do some setup before running any tests at all in the entire project. This is sort of like me asking for a "test setup" at the whole project level.
Update
Here is another posting about the same thing. Using this 2.7 command, we can avoid Nose. But how does one add bootstrapping?
I got it!
Using this one script that I wrote, called "runtests.py" and placed in my project root, I was able to "bootstrap", that is, to run some initialization code AND use discovery. Woot!
In my case, the "bootstrap" code is the two lines that say:
import sys
sys.path.insert(0, 'lib.zip')
Thanks!
#!/usr/bin/python
import unittest
import sys

sys.path.insert(0, 'lib.zip')

if __name__ == "__main__":
    all_tests = unittest.TestLoader().discover('.')
    unittest.TextTestRunner().run(all_tests)
Here's what I do, and I think it works quite well. For a file/directory structure similar to this:
main_code.py
run_tests.py
/Modules
    __init__.py
    some_module1.py
    some_module2.py
/Tests
    __init__.py
    test_module1.py
    test_module2.py
It's fairly easy to organize your run_tests.py file to bootstrap the tests. First, every test file (test_module1.py, etc.) should implement a function that generates a test suite. Something like:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(Test_Length))
    suite.addTest(unittest.makeSuite(Test_Sum))
    return suite
at the end of your test code. Then, in the run_tests.py file, you aggregate these into an additional test_suite, and run that:
import unittest

import Tests.test_module1 as test_module1
import Tests.test_module2 as test_module2

module1_test_suite = test_module1.suite()
module2_test_suite = test_module2.suite()

aggregate_suite = unittest.TestSuite()
aggregate_suite.addTest(module1_test_suite)
aggregate_suite.addTest(module2_test_suite)

unittest.TextTestRunner(verbosity=2).run(aggregate_suite)
Then to run all of these tests, from the command line, simply run
python run_tests.py

Getting a list of all modules in the current package

Here's what I want to do: I want to build a test suite that's organized into packages like tests.ui, tests.text, tests.fileio, etc. In each __init__.py in these packages, I want to make a test suite consisting of all the tests in all the modules in that package. Of course, getting all the tests can be done with unittest.TestLoader, but it seems that I have to add each module individually. So supposing that test.ui has editor_window_test.py and preview_window_test.py, I want the __init__.py to import these two files and get a list of the two module objects. The idea is that I want to automate making the test suites so that I can't forget to include something in the test suite.
What's the best way to do this? It seems like it would be an easy thing to do, but I'm not finding anything.
I'm using Python 2.5 btw.
Good answers here, but the best thing to do would be to use a 3rd party test discovery and runner like:
Nose (my favourite)
Trial (pretty nice, especially when testing async stuff)
py.test (less good, in my opinion)
They are all compatible with plain unittest.TestCase, and you won't have to modify your tests in any way, nor would you have to use their advanced features. Just use them for suite discovery.
Is there a specific reason you want to reinvent the nasty stuff in these libs?
Solution to exactly this problem from our django project:
"""Test loader for all module tests"""
import unittest
import re, os, imp

import myapp.tests

def find_modules(package):
    files = [re.sub(r'\.py$', '', f) for f in os.listdir(os.path.dirname(package.__file__))
             if f.endswith(".py")]
    return [imp.load_module(file, *imp.find_module(file, package.__path__)) for file in files]

def suite(package=None):
    """Assemble test suite for Django default test loader"""
    if not package:
        package = myapp.tests  # Default argument required for Django test runner
    return unittest.TestSuite([unittest.TestLoader().loadTestsFromModule(m)
                               for m in find_modules(package)])

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite(myapp.tests))
EDIT: The benefit compared to bialix's solution is that you can place this loader anywhere in the project tree; there's no need to modify __init__.py in every test directory.
You can use os.listdir to find all files in the test.* directory and then filter out .py files:
# Place this code in your __init__.py in each tests.* directory
import os

modules = []
for name in os.listdir(os.path.dirname(os.path.abspath(__file__))):
    m, ext = os.path.splitext(name)
    if ext == '.py' and m != '__init__':  # skip the package file itself
        modules.append(__import__(m))
__all__ = modules
The magic variable __file__ contains the file path of the current module. Try
print __file__
to check.
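The same listing can also be done with the stdlib pkgutil module, which spares you the splitext bookkeeping. A sketch; the temporary directory below stands in for a test package only so the example is self-contained:

```python
import os
import pkgutil
import tempfile

def list_module_names(package_dir):
    """Return the importable module names found in a directory."""
    return sorted(name for _, name, is_pkg in pkgutil.iter_modules([package_dir])
                  if not is_pkg)

# Self-contained demo: fake a package directory with two test modules.
d = tempfile.mkdtemp()
for fname in ("editor_window_test.py", "preview_window_test.py"):
    open(os.path.join(d, fname), "w").close()

print(list_module_names(d))  # ['editor_window_test', 'preview_window_test']
```

The resulting names can then be passed to __import__ (or importlib) and fed to unittest.TestLoader as before.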
