I have created a test suite with 2 test cases that were recorded using Selenium in Firefox. Both test cases are in separate classes with their own setup and teardown functions, because of which each test case opens the browser and closes it during its execution.
I am not able to use the same web browser instance for every test case called from my test suite. Is there a way to achieve this?
This is how it is supposed to work.
Tests should be independent; otherwise they can influence each other.
You generally want a clean browser each time so you don't have to clear the session/cookies yourself. That may not seem necessary now, but once you have a larger suite it certainly will be.
Each scenario will start the browser and close it at the end. You would have to research which methods do this and override them, which is not recommended at all.
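That said, if you accept the trade-offs and really want one browser for the whole suite, unittest's module-level fixtures can do it without overriding anything. A minimal sketch, assuming both recorded test classes live in the same module (the class names, URL and assertions are placeholders):

import unittest
from selenium import webdriver

driver = None

def setUpModule():
    # runs once before any test in this module: open a single browser
    global driver
    driver = webdriver.Firefox()

def tearDownModule():
    # runs once after all tests in this module: close the shared browser
    driver.quit()

class FirstRecordedTest(unittest.TestCase):
    def test_page_title(self):
        driver.get("https://example.com")  # placeholder URL and check
        self.assertIn("Example", driver.title)

class SecondRecordedTest(unittest.TestCase):
    def test_still_same_session(self):
        # this runs in the same browser instance as the test above
        self.assertTrue(driver.current_url.startswith("https://"))

Keep in mind the warning above: tests sharing a browser also share cookies and session state, so they can influence each other.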
I read multiple times that one should use mock to mimic outside calls and there should be no calls made to any outside service because your tests need to run regardless of outside services.
This totally makes sense... BUT
What about outside services changing? What good is a test that verifies my code works as it should if I never know when it breaks because the outside service was modified/updated/removed/deprecated/etc.?
How can I reconcile this? The pseudocode is below, fleshed out into runnable Python (the endpoint URL is illustrative, not Twitter's real API):

import requests

def post_tweet(tweet_content):
    data = {"tweet": tweet_content}
    # send the request to twitter and receive the response
    response = requests.post("https://api.twitter.com/tweets", data=data)
    return response
If I mock this, there is no way I will be notified that Twitter changed their API and that I now have to update my test...
There are different levels of testing.
Unit tests are testing, as you might guess from the name, a unit, which is for example a function or a method, maybe a class. If you interpret it more widely, it might include a view tested with Django's test client. Unit tests never test external stuff like libraries, dependencies or interfaces to other systems; these things are mocked.
Integration tests are testing whether your interfaces to, and usage of, outside libraries, systems and APIs are implemented properly. If the dependency changes, you will notice and have to change your code and unit tests.
There are other levels of tests as well, like behavior tests, UI tests and usability tests. You should make sure to separate these test classes in your project.
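Applied to the question: a unit test mocks the HTTP call and verifies your own code, while a separate integration test hits the real API and is the place where an upstream change shows up. A minimal sketch of the unit-test side, assuming post_tweet lives in a hypothetical module named tweet_client that uses requests internally:

import unittest
from unittest import mock

import tweet_client  # hypothetical module containing post_tweet

class PostTweetUnitTest(unittest.TestCase):
    @mock.patch("tweet_client.requests.post")
    def test_post_tweet_sends_expected_payload(self, mock_post):
        # no real network traffic: requests.post is replaced by a mock
        mock_post.return_value.status_code = 200
        tweet_client.post_tweet("tweetcontent")
        # assert on *our* behaviour: the payload we built and sent
        _, kwargs = mock_post.call_args
        self.assertEqual(kwargs["data"], {"tweet": "tweetcontent"})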
I have a class Node (add_node.py) in which I create nodes that connect to a websocket. Now I want to write unit tests to check whether or not the connection was successful, and to check the response of the server.
So I created a node_tests.py file with the following content:
import unittest
import json
import re

from add_node import Node

class TestNodes(unittest.TestCase):

    def test_node_creation(self):
        self.node = Node(a='1', b='2', c=True)
        self.response = json.loads(self.node.connect())
        self.assertIn('ok', self.response['r'])

    def test_node_c(self):
        self.assertTrue(self.response['c'])

if __name__ == '__main__':
    unittest.main()
The first method is working, but the second is failing because there is no attribute 'response'. How could I approach this problem?
Also, is it OK to do it the way I'm doing it, importing the class and writing multiple tests within the same test class?
The point of a unit test is to verify the functionality of a single isolated unit. Exactly what a unit is can be debated. Some would argue it's a single public method on a single class, though I think most would agree that is not a very useful definition. I like to think of them as use-cases: your tests should exercise one use-case, and then make one assertion about the results of that use-case. This means that sometimes it's OK to let classes collaborate, and sometimes it's better to isolate a class and use test doubles for its dependencies.
Knowing when to isolate is something you learn over time. I think the most important points to consider are that every test you write should
Fully define the context in which it's run (without depending on global state or previously run tests)
Be fast, a few milliseconds tops (this means you don't touch external resources like the file system, a web server or some database)
Not test a bunch of other things that are covered by other tests.
This third point is probably the hardest to balance. Obviously several tests will run the same code if they're making different assertions on the same use-case. You should keep the tests small though. Let's say we want to test the cancellation process of orders in an e-commerce application. In this case we probably don't want to import and set up all the classes used to create, verify, etc. an order before we cancel it. In that case it's probably a better idea to just create an order object manually or maybe use a test double.
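As a tiny sketch of that last idea, the cancellation test builds its precondition directly instead of driving the whole checkout flow (Order here is a hypothetical stand-in for the real order model):

from dataclasses import dataclass

@dataclass
class Order:
    # just enough of an order to exercise cancellation
    id: int
    status: str = "confirmed"

    def cancel(self):
        self.status = "cancelled"

def test_cancel_marks_order_cancelled():
    # arrange the precondition directly rather than via checkout
    order = Order(id=42)
    order.cancel()
    assert order.status == "cancelled"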
In your case, if what you actually want to do is to test the real connection and the responses the real server gives, you don't want a unit test, you want an integration test.
If what you actually want is to test the business logic of your client class, however, you should probably create a fake socket/server where you can define the responses yourself, as well as whether or not the connection is successful. This allows you to test that your client behaves correctly depending on its communication with the remote server, without actually having to depend on the remote server in your test suite.
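Two concrete points for your code: unittest creates a fresh TestCase instance for every test method, so self.response set in test_node_creation simply does not exist when test_node_c runs; shared setup belongs in setUp(). And yes, grouping several tests for one class in a single TestCase is fine. A minimal sketch combining both, with the real connection replaced by a canned response (the JSON payload is an assumption based on your assertions):

import json
import unittest
from unittest import mock

from add_node import Node

class TestNodes(unittest.TestCase):

    def setUp(self):
        # runs before every test method: each test gets its own node
        # and a faked connect(), so no real server is needed
        self.node = Node(a='1', b='2', c=True)
        patcher = mock.patch.object(
            Node, 'connect',
            return_value=json.dumps({'r': 'ok', 'c': True}))
        patcher.start()
        self.addCleanup(patcher.stop)
        self.response = json.loads(self.node.connect())

    def test_node_creation(self):
        self.assertIn('ok', self.response['r'])

    def test_node_c(self):
        self.assertTrue(self.response['c'])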
I am running Selenium tests in Python with Chrome. Between each test, it logs that it's creating a new HTTP connection.
Is it possible to create a connection pool so that a new one isn't created for each test?
Given the limited detail in your question, the fact that this happens only between tests suggests that something in your setUp or tearDown methods is responsible.
Check what gets executed there and see whether that is the source of your problem.
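For instance, if the driver is created in setUp, every test opens a fresh browser and a fresh connection; moving it to the class level reuses one instance across all the tests in that class. A hedged sketch of the two placements:

import unittest
from selenium import webdriver

class PerTestDriver(unittest.TestCase):
    def setUp(self):
        # a new browser, and a new HTTP connection, before every test
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

class PerClassDriver(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one browser, and one connection, shared by all tests in the class
        cls.driver = webdriver.Chrome()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()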
I have a few functions for performing actions in python, and I am debating whether or not it makes sense to write unit tests for these.
There is one function that resolves a host using the socket module. I don't think it makes sense to write a test for the socket module itself, but it seems like it may make sense to write a functional test to see whether DNS is working.
I have another function that opens a file and puts the contents into a list.
Does anyone have any guidance or links on performing unit tests/functional tests for items like this? Most of what I have read pertains to application level testing only.
Thank you in advance.
First of all, if you don't have any tests at all, it is better to start with high-level end-to-end functional tests and work down towards unit tests, gathering coverage statistics with every new test you write.
When writing a test for a function that uses system or network libraries, you usually want to isolate the test, making it as independent and straightforward as possible by mocking out the system and network calls (see the Mock library).
By using the mock library you can, and should, test how your application/function responds to situations where there is a socket or system error.
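For example, the host-resolution function can be exercised against both outcomes without touching the network by patching socket.gethostbyname (resolve_host is a hypothetical stand-in for the function described in the question):

import socket
import unittest
from unittest import mock

# hypothetical function under test: resolve a host name to an IP,
# returning None when resolution fails
def resolve_host(hostname):
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

class ResolveHostTest(unittest.TestCase):

    @mock.patch('socket.gethostbyname', return_value='93.184.216.34')
    def test_successful_resolution(self, mock_resolve):
        self.assertEqual(resolve_host('example.com'), '93.184.216.34')

    @mock.patch('socket.gethostbyname', side_effect=socket.gaierror)
    def test_resolution_failure_returns_none(self, mock_resolve):
        self.assertIsNone(resolve_host('no-such-host.invalid'))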
Also see:
python unittest howto
How to Mock an HTTP request in a unit testing scenario in Python
An Introduction to Mocking in Python
Hope that helps.
I'm in the process of writing a small/medium sized GUI application with PyGObject (the new introspection based bindings for Gtk). I started out with a reasonable test suite based on nose that was able to test most of the functions of my app by simply importing the modules and calling various functions and checking the results.
However, recently I've begun to take advantage of some GLib features like GLib.timeout_add_seconds, a fairly simple callback mechanism that calls the specified callback after a timer expires. The problem I'm now facing is that my code seems to work when I use the app, but the test suite is poorly encapsulated: when one test checks that it's starting with clean state, it finds that its state has been trampled all over by a callback that was registered by a different test. Specifically, the test successfully checks that no files are loaded, then it loads some files, then checks that the files haven't been modified since loading, and the test fails!
It took me a while to figure out what was going on, but essentially one test would modify some files (which initiates a timer) and then close them without saving; then another test would reopen the supposedly unmodified files and find that they're modified, because the callback had altered the files once the timer was up.
I've read about Python's reload() builtin for reloading modules in the hopes that I could make it unload and reload my app to get a fresh start, but it just doesn't seem to be working.
I'm afraid that I might have to resort to launching the app as a subprocess, tinkering with it, then ending the subprocess and relaunching it when I need to guarantee fresh state. Are there any test frameworks out there that would make this easy, particularly for pygobject code?
Would a mocking framework help you isolate the callbacks? That way, you should be able to get back to the same state you started in. Note that a setUp() and tearDown() pattern may help you there as well, but I am assuming that you are already using that.
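Concretely, if the application keeps the source IDs returned by GLib.timeout_add_seconds, tearDown() can cancel any callbacks still pending so they cannot fire during a later test. A minimal sketch, assuming PyGObject is installed (schedule_save and the bookkeeping list are hypothetical names):

import unittest
from gi.repository import GLib

class AppStateTest(unittest.TestCase):

    def setUp(self):
        self.pending_timeouts = []

    def schedule_save(self, seconds, callback):
        # wrap GLib.timeout_add_seconds so the source ID is remembered
        source_id = GLib.timeout_add_seconds(seconds, callback)
        self.pending_timeouts.append(source_id)
        return source_id

    def tearDown(self):
        # cancel every timeout this test registered so its callback
        # cannot fire later and trample another test's state; these
        # sources have not fired yet because no main loop ran here
        for source_id in self.pending_timeouts:
            GLib.source_remove(source_id)
        self.pending_timeouts = []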