I am trying to design and test code similar to the following in a good object-oriented way (or a Pythonic way).
Here is a factory class which decides whether a person's name is long or short:
class NameLengthEvaluator(object):
    def __init__(self, cutoff=10):
        self.cutoff = cutoff

    def evaluate(self, name):
        if len(name) > self.cutoff:
            return 'long'
        else:
            return 'short'
Here is a person class with an opinion on the length of their own name:
class Person(object):
    def __init__(self, name=None, long_name_opinion=8):
        self.name = name
        self.long_name_opinion = long_name_opinion

    def name_length_opinion(self):
        return 'My names is ' + \
            NameLengthEvaluator(self.long_name_opinion).evaluate(self.name)
A couple questions:
Does the Person method name_length_opinion() deserve a unit test, and if so what would it look like?
In general, is there a good way to test simple methods of classes with functionality that is entirely external?
It seems like any test for this method would just restate its implementation, and that the test would just exist to confirm that nobody was touching the code.
(disclaimer: code is untested and I am new to python)
Unit Testing
Does the Person method name_length_opinion() deserve a unit test, and if so what would it look like?
Do you want to make sure it does what you think it does, and that it doesn't break in the future? If so, write a unit test for it.
and that the test would just exist to confirm that nobody was touching the code
Unit testing is more about making sure a class conforms to the contract that it specifies. You don't have to write a unit test for everything, but if it's a simple method, it should be a simple unit test anyway.
Repetition
It seems like any test for this method would just restate its implementation
You shouldn't be repeating the algorithm, you should be using use cases. For instance, a NameLengthEvaluator with a cutoff of 10 should have these be short names:
George
Mary
and these be long names:
MackTheKnife
JackTheRipper
So you should verify that the method reports the shortness of these names correctly. You should also test that a NameLengthEvaluator with a cutoff of 4 would report Mary as short and the others as long.
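A sketch of such a use-case-driven test (it inlines a corrected copy of the evaluator so the snippet runs standalone; in your project you would import it instead):

```python
import unittest

# Corrected copy of the evaluator from the question, inlined so this runs standalone.
class NameLengthEvaluator(object):
    def __init__(self, cutoff=10):
        self.cutoff = cutoff

    def evaluate(self, name):
        return 'long' if len(name) > self.cutoff else 'short'

class NameLengthEvaluatorTest(unittest.TestCase):
    def test_default_cutoff_of_10(self):
        evaluator = NameLengthEvaluator()
        for name in ('George', 'Mary'):
            self.assertEqual(evaluator.evaluate(name), 'short')
        for name in ('MackTheKnife', 'JackTheRipper'):
            self.assertEqual(evaluator.evaluate(name), 'long')

    def test_cutoff_of_4(self):
        evaluator = NameLengthEvaluator(cutoff=4)
        self.assertEqual(evaluator.evaluate('Mary'), 'short')
        for name in ('George', 'MackTheKnife', 'JackTheRipper'):
            self.assertEqual(evaluator.evaluate(name), 'long')
```

Note that the tests state expected outcomes for concrete inputs; nothing in them repeats the `len(name) > cutoff` algorithm.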
Throwaway Code?
If you've ever written a class and then written a main method that just runs the class to make sure it does what it is supposed to (and then you throw that main method away when you move onto another class), you've already written a unit test. But instead of throwing away, save it and convert it to a unit test so that in the future you can make sure you didn't break anything.
External Code
In general, is there a good way to test simple methods of classes with functionality that is entirely external
Well, if it's entirely external then why is it a method on that class? Normally you have at least some logic that can be tested. In this case, you can test that name_length_opinion returns 'My names is long' or 'My names is short' in the correct cases.
It really depends on the lifecycle of that code. In its current state, the method is obviously correct, and the unit test is more of a specification for how it should behave. If you plan on making changes in the future (reimplementing NameLengthEvaluator differently, for instance), having unit tests is great, because running your tests will catch any regressions. But in this case, it seems unlikely that you'd make any changes, so the tests are probably excessive (though a good sanity check).
Normally you'd use a mock here. You could make a mock NameLengthEvaluator that returned an object that recorded what it was concatenated with, and when name_length_opinion returned, you'd check to make sure that it was used and concatenated with the right thing.
For example, using unittest.mock:
from unittest.mock import MagicMock, patch

@patch('your_module.NameLengthEvaluator', autospec=True)
def test_person_name_length_opinion(NameLengthEvaluator):
    expected_result = object()
    opinion = MagicMock(name='opinion')
    opinion.__radd__.return_value = expected_result

    name_length_evaluator = MagicMock(name='name_length_evaluator')
    name_length_evaluator.evaluate.return_value = opinion
    NameLengthEvaluator.return_value = name_length_evaluator

    name = object()
    length_limit = object()
    person = Person(name, long_name_opinion=length_limit)

    result = person.name_length_opinion()

    NameLengthEvaluator.assert_called_with(length_limit)
    name_length_evaluator.evaluate.assert_called_with(name)
    opinion.__radd__.assert_called_with('My names is ')
    assert result is expected_result
However, since the method is so simple, I'm not sure you care that much.
Related
From what I’ve read, unit test should test only one function/method at a time. But I’m not clear on how to test methods that only set internal object data with no return value to test off of, like the setvalue() method in the following Python class (and this is a simple representation of something more complicated):
class Alpha(object):
    def __init__(self):
        self.__internal_dict = {}

    def setvalue(self, key, val):
        self.__internal_dict[key] = val

    def getvalue(self, key):
        return self.__internal_dict[key]
If unit test law dictates that we should test every function, one at a time, then how do I test the setvalue() method on its own? One "solution" would be to compare what I passed into setvalue() with the return of getvalue(), but if my assert fails, I don't know which method is failing - is it setvalue() or getvalue()? Another idea would be to compare what I passed into setvalue() with the object's private data, __internal_dict[key] - a HUGE disgusting hack!
As of now, this is my solution for this type of problem, but if the assert raises, that would only indicate that 1 of my 2 main methods is not properly working.
import pytest

def test_alpha01():
    alpha = Alpha()
    alpha.setvalue('abc', 33)
    expected_val = 33
    result_val = alpha.getvalue('abc')
    assert result_val == expected_val
Help appreciated
The misconception
The real problem you have here is that you are working on a false premise:
If unit test law dictates that we should test every function, one at a time...
This is not at all what good unit testing is about.
Good unit testing is about decomposing your code into logical components, putting them into controlled environments and testing that their actual behaviour matches their expected behaviour - from the perspective of a consumer.
Those "units" may be (depending on your environment) anonymous functions, individual classes or clusters of tightly-coupled classes (and don't let anyone tell you that class coupling is inherently bad; some classes are made to go together).
The important thing to ask yourself is - what does a consumer care about?
What they certainly don't care about is that - when they call a set method - some internal private member that they can't even access is set.
The solution
Naively, from looking at your code, it seems that what the consumer cares about is that when they call setvalue for a particular key, calling getvalue for that same key gives them back the value that they put in. If that's the intended behaviour of the unit (class), then that's what you should be testing.
Nobody should care what happens behind the scenes as long as the behaviour is correct.
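For instance, a behaviour-level test might look like this (a sketch; the class is copied from the question, and only the public set/get contract is exercised):

```python
class Alpha(object):
    def __init__(self):
        self.__internal_dict = {}

    def setvalue(self, key, val):
        self.__internal_dict[key] = val

    def getvalue(self, key):
        return self.__internal_dict[key]

def test_set_then_get_round_trip():
    # The contract: whatever goes in via setvalue comes back out via getvalue.
    alpha = Alpha()
    for key, val in [('abc', 33), ('xyz', 'hello'), (0, None)]:
        alpha.setvalue(key, val)
        assert alpha.getvalue(key) == val

def test_values_are_stored_per_key():
    # Setting one key must not disturb another - still purely observable behaviour.
    alpha = Alpha()
    alpha.setvalue('a', 1)
    alpha.setvalue('b', 2)
    assert alpha.getvalue('a') == 1
    assert alpha.getvalue('b') == 2
```

If either assertion fails, the set/get pair as a unit is broken, and that is exactly the information a consumer cares about.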
However, I would also consider if that's really all that this class is for - what else does that value have an impact on? It's impossible to say from the example in your question but, whatever it is, that should be tested too.
Or maybe, if that's hard to define, this class in itself isn't very meaningful and your "unit" should actually be an independent set of small classes that only really have meaningful behaviour when they're put together and should be tested as such.
The balance here is subtle, though, and difficult to be less cryptic about without more context.
The pitfall
What you certainly shouldn't (ever ever ever) do is have your tests poking around internal state of your objects. There are two very important reasons for this:
First, as already mentioned, unit tests are about behaviour of units as perceived by a client. Testing that it does what I believe it should do as a consumer. I don't - and shouldn't - care about how it does it under the hood. That dictionary is irrelevant to me.
Second, good unit tests allow you to verify behaviour while still giving you the freedom to change how that behaviour is achieved - if you tie your tests to that dictionary, it ceases to be an implementation detail and becomes part of the contract, meaning any changes to how this unit is implemented force you either to retain that dictionary or change your tests.
This is a road that leads to the opposite of what unit testing is intended to achieve - painless maintenance.
The bottom line is that consumers - and therefore your tests - do not care about whether setvalue updates an internal dictionary. Figure out what they actually care about and test that instead.
As an aside, this is where TDD (specifically test-first) really comes into its own - if you state the intended behaviour with a test up-front, it's difficult to find yourself stuck in that "what am I trying to test?" rut.
I would consider accessing your internal data structure in your test less of a "disgusting hack" since it tests one function at a time and you know what's wrong almost immediately.
I admit it's not a great idea to access private members from the test, but still I see this adding more value:
class Alpha(object):
    def __init__(self):
        self._internal_dict = {}

    def setvalue(self, key, val):
        self._internal_dict[key] = val

    def getvalue(self, key):
        return self._internal_dict[key]

def test_alpha_setvalue():
    alpha = Alpha()
    alpha.setvalue('abc', 33)
    assert alpha._internal_dict['abc'] == 33

def test_alpha_getvalue():
    alpha = Alpha()
    alpha._internal_dict['abc'] = 33
    assert alpha.getvalue('abc') == 33
Please note that this approach requires that you use a single underscore for your internal data structure for the test to be able to access it. It is a convention followed to indicate to other programmers that it is non-public.
More info about this in python docs: https://docs.python.org/3/tutorial/classes.html#tut-private
I've got a Coordinate class, which has an add(Coordinate) method. When writing unit tests for this class, I've got tests to assertEqual a result:
a = Coordinate(1,2,3)
b = Coordinate(5,6,7)
result = a.add(b)
assertEqual(result.x, 6)
assertEqual(result.y, 8)
assertEqual(result.z, 10)
I can 'fake' this rather easily:
def add(self, other):
    return Coordinate(6, 8, 10)
This is the simplest solution to the test failure. The next step is to write a second test which prevents me from faking it in this manner. I could either:
write another assertEqual test with different numbers (so faking the Coordinate(6,8,10) doesn't pass), or
Write an assertNotEqual test with two different inputs, ensuring that the result isn't 6,8,10.
If I write an assertEquals test, I've then got two tests that look very, very similar. Is this a problem? If I saw code that similar in the project, I'd be tempted to refactor it. Should I do this for the test code too - and, if so, won't this mean every pair of tests will end up being refactored?
If I write an assertNotEqual, the test is only testing for "fake results" - which I am very sure won't ever come up from an algorithmic error. In essence, once I write the test and stop faking the result so both tests pass, the assertNotEqual test can be safely removed, and I will still have confidence in the code - so I'd write the test, fix the fake, remove the test, which seems rather silly.
What should I be doing in this situation?
Another assertEqual test with different numbers will be good enough. If possible, take a "borderline" or uncommon case, like:
a = Coordinate(1,2,3)
b = Coordinate(-5,-6,-7)
...
An AssertNotEqual test would be absurd and not intuitive for the reader IMO.
In any case, I wouldn't worry too much about this kind of test. As a reader it's really obvious that the developer who wrote them just wanted to test a couple of cases, and it would take a real refactoring extremist to want to refactor them. I mean, it's only 2 tests with almost no duplication, the intent is obvious, and it's not like you have to rewrite 300 lines of code when the object changes...
I wouldn't get too worried about being able to 'fake-pass' a test as you describe - it will always be possible to write a convoluted if statement inside your method to make each of your test cases pass. Instead have a look at the logic involved in your add method and ensure that your test cases cover all the code branches - that should be good enough in my opinion.
It would be impossible to write tests that guard against someone maliciously writing code to fake out the testing code based on the particular cases the test code uses. So you shouldn't try.
When I say "impossible" I don't mean just "very hard", it may be an instance of the Liar Paradox which breaks formal systems.
Note: I'm going to change Coordinate to Int - for simplicity. Also, the below syntax is made up, so it won't execute. But it should get my point across.
testAdd()
    assertThat(2.Add(2), isEqualTo(4))
You can then fake this out by making Add always return 4. Next, triangulate (drive out the general code by varying certain params).
testAdd()
    assertThat(2.Add(2), isEqualTo(4))
    assertThat(3.Add(5), isEqualTo(8))
Now you need to make Add do some actual work.
Once it is green, you could either let the two tests remain or you could delete the simple/trivial ones - as long as it doesn't let your confidence drop. If the duplication bothers you, you could see if your test runner supports 'parameterized tests'
[2,2,4]
[3,5,8]
testAdd(operand1, operand2, expectedResult)
    assertThat(operand1.Add(operand2), isEqualTo(expectedResult))
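In Python's unittest, one runnable way to express that parameterized test (using a plain stand-in add function, since the syntax above is made up) is subTest:

```python
import unittest

def add(a, b):
    # Stand-in for the Add operation under test.
    return a + b

class AddTest(unittest.TestCase):
    def test_add(self):
        # Each row is one parameterized case: (operand1, operand2, expectedResult).
        cases = [
            (2, 2, 4),
            (3, 5, 8),
        ]
        for op1, op2, expected in cases:
            with self.subTest(op1=op1, op2=op2):
                self.assertEqual(add(op1, op2), expected)
```

pytest users would typically reach for `@pytest.mark.parametrize` instead; either way each case is reported individually.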
What does the Coordinate.Add() method look like?
Seems like that method should handle how to add (no point faking it), and then all you need to do is:
# In your test setUp code
testCoord = Coordinate(6, 8, 10)  # Explicitly setting the value you expect the addition to produce.
addCoord1 = Coordinate(1, 2, 3)
addCoord2 = Coordinate(5, 6, 7)

# The test (add() returns a new Coordinate, so capture the result)
result = addCoord1.add(addCoord2)
assertEqual(testCoord, result, "Testing addition of coordinates using add() method.")
Correct me if I'm wrong, but are you using a unit test framework? I'd put the object creation in the setUp of a testcase in unittest, personally.
I have a method that calls two other methods in it.
def main_method(self, query):
    result = self.method_one(query)
    count = self.method_two(result)
    return count

def method_one(self, query):
    # Do some stuff based on results.
    # This method hits the database.
    return result

def method_two(self, result):
    # Do some stuff based on result.
    # This method also hits the database.
    return count
I'm not very experienced at unit testing and have never worked with Mocks and Stubs.
I'm not too sure how to create a unit test for my first method. Since method_one and method_two hit the database many times and they are very expensive, I have decided to use mox to create a mock or stub in order to eliminate the need of hitting database.
I would really appreciate it if someone who has experience working with Mocks and Stubs give me some hints on using mocks and stubs for my case.
Before worrying about testing main_method(), first test the smaller methods. Consider method_one(). For the purpose of discussion, let's say it exists in a class like this:
class Foo(object):
    def method_one(self, query):
        # Big nasty query that hits the database really hard!!
        return query.all()
In order to test that method without hitting the database, we need an object that knows how to respond to the all() method. For example:
class MockQuery(object):
    def all(self):
        return [1,2]
Now we can test it:
f = Foo()
q = MockQuery()
assert f.method_one(q) == [1,2]
That's a basic illustration. The real world is often more complicated. In order to be worth the trouble of writing the test, your mock all() would likely do something more interesting than return a constant. Along similar lines, if method_one() contains a bunch of other logic, our MockQuery might need to be more elaborate -- that is, capable of responding appropriately to more methods. Often while trying to test code you realize that your original design was overburdened: you might need to refactor method_one() into smaller, more tightly defined -- and thus more testable -- parts.
Taking the same logic a step up in the hierarchy, you might create a MockFoo class that would know how to respond in simplified ways to method_one() and method_two().
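For example, here is a sketch of testing main_method with the standard library's unittest.mock (rather than mox), patching the two helpers on a simplified stand-in class so nothing touches the database:

```python
from unittest import mock

class Service(object):
    # Simplified stand-in for the class from the question.
    def main_method(self, query):
        result = self.method_one(query)
        count = self.method_two(result)
        return count

    def method_one(self, query):
        raise RuntimeError("would hit the database")

    def method_two(self, result):
        raise RuntimeError("would hit the database")

def test_main_method_wires_helpers_together():
    service = Service()
    with mock.patch.object(service, 'method_one', return_value='rows') as m1, \
         mock.patch.object(service, 'method_two', return_value=42) as m2:
        # main_method should pass the query to method_one, feed its result
        # to method_two, and return whatever method_two returns.
        assert service.main_method('some query') == 42
    m1.assert_called_once_with('some query')
    m2.assert_called_once_with('rows')
```

The test verifies only the wiring of main_method; the helpers themselves get their own tests (against real or mocked queries, as above).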
How can I be sure of the unittest methods' order? Is the alphabetical or numeric prefixes the proper way?
class TestFoo(TestCase):
    def test_1(self):
        ...

    def test_2(self):
        ...
or
class TestFoo(TestCase):
    def test_a(self):
        ...

    def test_b(self):
        ...
You can disable it by setting sortTestMethodsUsing to None:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you should assume nothing about the state.
What if you are testing the state?
For example, suppose your test validates that a service is auto-started upon installation. If in your setup you start the service and then do the assertion, you are no longer testing the state, but the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unit test" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?
There is no reason given that you can't build on what was done in a previous test or should rebuild it all from scratch for the next test. At least no reason is usually offered but instead people just confidently say "you shouldn't". That isn't helpful.
In general I am tired of reading too many answers here that say basically "you shouldn't do that" instead of giving any information on how to best do it if in the questioners judgment there is good reason to do so. If I wanted someone's opinion on whether I should do something then I would have asked for opinions on whether doing it is a good idea.
That out of the way: if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods with some name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and makes a test suite by mapping it to the TestCase class. Giving it a list ordered as you would like instead is one way to do this. I am not sure it's the most efficient/cleanest way, but it does work.
If you use 'nose' and you write your test cases as functions (and not as methods of some TestCase derived class), 'nose' doesn't fiddle with the order, but uses the order of the functions as defined in the file.
In order to have the assert_* methods handy without needing to subclass TestCase I usually use the testing module from NumPy. Example:
from numpy.testing import *
def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and can run independently, your functions and classes are usually not independent.
They rather build on one another, from simpler/low-level functions to more complex/high-level functions. When you start optimising your low-level functions and mess up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), then it's a lot better for diagnosing the cause when the tests for simple functions come first, and tests for functions that depend on those functions later.
If the tests are sorted alphabetically the real cause usually gets drowned among one hundred failed assertions, which are not there because the function under test has a bug, but because the low-level function it relies on has.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in UnitTest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey patch your code. For more information, check out Mock, which is part of the standard library now.
Here are some YouTube videos if you haven't used Mock before.
Video 1
Video 2
Video 3
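A tiny illustration of that monkey-patching idea (hypothetical names; in real code you would usually let mock.patch do the swap-and-restore for you):

```python
def expensive_setup():
    # Imagine a slow database build that earlier tests used to provide.
    raise RuntimeError("too expensive to run in a unit test")

def feature_under_test():
    state = expensive_setup()
    return state + ": processed"

def test_feature_with_dummy_setup():
    global expensive_setup
    original = expensive_setup
    expensive_setup = lambda: "fake state"  # the monkey patch: a dummy function
    try:
        assert feature_under_test() == "fake state: processed"
    finally:
        expensive_setup = original  # always restore, so other tests are unaffected
```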
More to the point, try using class methods to structure your code, and then place all the class methods in one main test method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * from players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
Why do you need a specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the database setup code or create testing data ad nauseam.)
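A sketch of that setup-per-test idea, with a hypothetical in-memory stand-in for the database so each test starts from a known state:

```python
import unittest

class FakeSubscriptionStore:
    # Hypothetical in-memory stand-in for the real database,
    # just to make the setUp pattern concrete.
    def __init__(self):
        self._subscribed = set()

    def subscribe(self, user):
        self._subscribed.add(user)

    def unsubscribe(self, user):
        self._subscribed.discard(user)

    def is_subscribed(self, user):
        return user in self._subscribed

class UnsubscribeTest(unittest.TestCase):
    def setUp(self):
        # Each test gets a fresh store with a known subscription,
        # so no test depends on another test having run first.
        self.store = FakeSubscriptionStore()
        self.store.subscribe("alice")

    def test_unsubscribe_removes_subscription(self):
        self.store.unsubscribe("alice")
        self.assertFalse(self.store.is_subscribed("alice"))
```

The setUp factoring is the point: the "existing account" precondition is built once, in one place, instead of being inherited from whichever test happened to run earlier.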
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Don't rely on the order. If they use some common state, like the filesystem or database, then you should create setUp and tearDown methods that get your environment into a testable state, and then clean up after the tests have run.
Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
You should try the proboscis library. It will allow you to set the test order as well as set up test dependencies. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B you CAN set this behaviour using the library.
From unittest — Unit testing framework
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this Stack Overflow question for details.
Here is a simpler method that has the following advantages:
No need to create a custom TestCase class.
No need to decorate every test method.
Uses the standard unittest load_tests protocol (see the Python docs).
The idea is to go through all the test cases of the test suites given to the test loader protocol and create a new suite but with the tests ordered by their line number.
Here is the code:
import unittest

def load_ordered_tests(loader, standard_tests, pattern):
    """
    Test loader that keeps the tests in the order they were declared in the class.
    """
    ordered_cases = []
    for test_suite in standard_tests:
        ordered = []
        for test_case in test_suite:
            test_case_type = type(test_case)
            method_name = test_case._testMethodName
            testMethod = getattr(test_case, method_name)
            line = testMethod.__code__.co_firstlineno
            ordered.append((line, test_case_type, method_name))
        ordered.sort()
        for line, case_type, name in ordered:
            ordered_cases.append(case_type(name))
    return unittest.TestSuite(ordered_cases)
You can put this in a module named order_tests and then in each unittest Python file, declare the test loader like this:
from order_tests import load_ordered_tests
# This orders the tests to be run in the order they were declared.
# It uses the unittest load_tests protocol.
load_tests = load_ordered_tests
Note: the often suggested technique of setting the test sorter to None no longer works because Python now sorts the output of dir() and unittest uses dir() to find tests. So even though you have no sorting method, they still get sorted by Python itself!
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
There are scenarios where the order can be important and where setUp and tearDown come in as too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much information in them before it gets unclear what setUp or tearDown might actually be doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't login without an existing account.
More importantly the order of your tests represents some kind of user interaction. Where each test might represent a step in the whole process or flow you're testing.
Dividing your code into those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setup()) and decrements the same variable in teardown().
The effect of this is to allow various data consumers to block the purge functionality so long as they need without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step...or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
See the example of WidgetTestCase on Organizing test code. It says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So it might be of no use to specify the order of test cases, if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs. hours).
A minimal example is:
def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
To ensure that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means that when writing each test, the first step should always be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative by narrowing your tests to smaller scopes.
However, if you still find that you need tests in a specific order (as that is viable), you could try checking out the answer to Python unittest.TestCase execution order.
It seems they are executed in alphabetical order by test name (using the comparison function between strings).
Since tests in a module are also only executed if they begin with "test", I put in a number to order the tests:
class LoginTest(unittest.TestCase):
    def setUp(self):
        driver.get("http://localhost:2200")

    def tearDown(self):
        # self.driver.close()
        pass

    def test1_check_at_right_page(self):
        ...
        assert "Valor" in driver.page_source

    def test2_login_a_manager(self):
        ...
        submit_button.click()
        assert "Home" in driver.title

    def test3_is_manager(self):
        ...
Note that numeric order and alphabetical order differ - "9" > "10" is True in the Python shell, for instance. Consider using decimal strings with fixed 0 padding (this avoids the aforementioned problem), such as "000", "001", ... "010"... "099", "100", ... "999".
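A quick demonstration of why the zero padding matters:

```python
# Unpadded numeric suffixes sort lexicographically, not numerically:
print(sorted(["test_9", "test_10", "test_2"]))
# -> ['test_10', 'test_2', 'test_9']

# Zero padding restores the intended order:
print(sorted(["test_009", "test_010", "test_002"]))
# -> ['test_002', 'test_009', 'test_010']
```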
Contrary to what was said here:
tests have to run in isolation (order must not matter for that)
and
ordering them is important because they describe what the system does and how the developer implements it.
In other words, each test gives you information about the system and the developer's logic.
So if this information is not ordered, it can make your code difficult to understand.
To randomise the order of test methods you can monkey patch the unittest.TestLoader.sortTestMethodsUsing attribute
if __name__ == '__main__':
    import random
    unittest.TestLoader.sortTestMethodsUsing = lambda self, a, b: random.choice([1, 0, -1])
    unittest.main()
The same approach can be used to enforce whatever order you need.
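For instance, here is a sketch (the test names are hypothetical) that ranks method names against an explicit list instead of comparing them alphabetically. Assigning the comparator on a loader instance, rather than on the class, avoids patching TestLoader globally; note that an instance attribute is a plain function, so it takes (a, b) without self:

```python
import unittest

# Desired execution order for the (hypothetical) test methods.
ORDER = ['test_login', 'test_act', 'test_logout']

def by_list(a, b):
    # Compare two test-method names by their position in ORDER.
    return ORDER.index(a) - ORDER.index(b)

class Flow(unittest.TestCase):
    def test_logout(self):
        pass
    def test_login(self):
        pass
    def test_act(self):
        pass

loader = unittest.TestLoader()
loader.sortTestMethodsUsing = by_list  # instance attribute, (a, b) signature
print(loader.getTestCaseNames(Flow))
# ['test_login', 'test_act', 'test_logout']
```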
This question is in continuation to my previous question, in which I asked about passing around an ElementTree.
I need to read the XML files only and to solve this, I decided to create a global ElementTree and then parse it wherever required.
My question is:
Is this an acceptable practice? I heard global variables are bad. If I don't make it global, I was advised to make a class. But do I really need to create a class? What benefits would I get from that approach? Note that I would be handling only one ElementTree instance per run, and the operations are read-only. If I don't use a class, how and where do I declare that ElementTree so that it is available globally? (Note that I would be importing this module.)
Please answer this question bearing in mind that I am a beginner at development, and at this stage I can't figure out whether to use a class or just go with a functional-style programming approach.
There are a few reasons that global variables are bad. First, they get you into the habit of declaring globals, which is not good practice, though in some cases a global makes sense (PI, for instance). Globals also create problems when you deliberately or accidentally re-use the name locally. Or worse, when you think you're using the name locally but are in fact assigning a new value to the global variable. This particular problem is language dependent, and Python handles it differently in different cases.
class A:
    def __init__(self):
        self.name = 'hi'

x = 3
a = A()

def foo():
    a.name = 'Bedevere'
    x = 9

foo()
print x, a.name  # outputs 3 Bedevere
The benefit of creating a class and passing your class around is you will get a defined, constant behavior, especially since you should be calling class methods, which operate on the class itself.
class Knights:
    def __init__(self, name='Bedevere'):
        self.name = name

    def knight(self):
        self.name = 'Sir ' + self.name

    def speak(self):
        print self.name + ":", "Run away!"

class FerociousRabbit:
    def __init__(self):
        self.death = "awaits you with sharp pointy teeth!"

    def speak(self):
        print "Squeeeeeeee!"

def cave(thing):
    thing.speak()
    if isinstance(thing, Knights):
        thing.knight()

def scene():
    k = Knights()
    k2 = Knights('Launcelot')
    b = FerociousRabbit()
    for i in (b, k, k2):
        cave(i)
This example illustrates a few good principles. First, the strength of Python's duck typing: FerociousRabbit and Knights are two different classes, but both define a speak() method. In other languages, to do something like this they would at least have to share a base class. The reason you would want this is that it lets you write a function (cave) that can operate on any object with a speak() method. You could create any other class with a speak() method and pass its instances to the cave function:
class Tim:
    def speak(self):
        print "Death awaits you with sharp pointy teeth!"
So in your case, when dealing with an ElementTree: say that sometime down the road you also need to start parsing an Apache log. If everything is written around one global tree, you're basically hosed and will have to modify and extend your current program. But if you wrote your functions and classes well, you could just add a new class to the mix and (technically) everything would be peachy keen.
Pragmatically: is your code expected to grow? Even though people herald OOP as the right way, I have found that it is sometimes better to weigh cost against benefit whenever you refactor a piece of code. If you expect the code to grow, then OOP is the better option, because you can extend and customise any future use case while saving yourself unnecessary time spent on code maintenance. Otherwise, if it ain't broken, don't fix it, IMHO.
I generally find myself regretting it when I give in to the temptation to give a module, for example, a load_file() method that sets a global that the module's other functions can then use to find the file they're supposed to be talking about. It makes testing far more difficult, for example, and as soon as I need two XML files there is a problem. Plus, every single function needs to check whether the file's there and give an error if it's not.
If I want to be functional, I simply therefore have every function take the XML file as an argument.
If I want to be object oriented, I'll have a MyXMLFile class whose methods can just look at self.xmlfile or whatever.
The two approaches are more or less equivalent when there's just one single thing, like a file, to be passed around; but when the number of things in the "state" becomes larger than a few, then I find classes simpler because I can stick all of those things in the class.
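To make the comparison concrete, here are the two styles side by side (the document contents and tag names are made up for illustration):

```python
import io
import xml.etree.ElementTree as ET

# Functional style: every function takes the tree as an argument.
def count_tags(tree, tag):
    return len(tree.findall('.//' + tag))

# Object-oriented style: parse once, keep the tree on the instance.
class MyXMLFile:
    def __init__(self, source):
        self.tree = ET.parse(source)  # accepts a path or a file object

    def count_tags(self, tag):
        return len(self.tree.findall('.//' + tag))

# Usage with an in-memory document instead of a real file:
doc = '<library><book/><book/><magazine/></library>'
tree = ET.ElementTree(ET.fromstring(doc))
xml_file = MyXMLFile(io.StringIO(doc))

print(count_tags(tree, 'book'))      # 2
print(xml_file.count_tags('book'))   # 2
```

With one file the two are interchangeable; the class version starts to pay off once you add more methods or more pieces of state that would otherwise have to be threaded through every call.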
(Am I answering your question? I'm still a bit vague on what kind of answer you want.)