How to perform a unit test in Python 2.7? - python

I have a program and I want to test my code.
This is one and this is the other. I want to test the Employee.py one with the help of the unittest framework, but I have no idea where to start. I tried it in the terminal by entering a few values and checking the results, but I was curious whether that would hold up for a large amount of data. So how do I go forward with it? I also heard of pytest, but the forums said it works better with Django and the like, so I was sticking with unittest.
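To give a flavour of where to start, here is a minimal unittest sketch. The Employee constructor arguments and attributes are guesses (the actual code isn't shown), so adjust them to whatever Employee.py really defines:

    # test_employee.py -- run with: python test_employee.py
    import unittest
    from Employee import Employee  # assumes Employee.py defines an Employee class

    class EmployeeTests(unittest.TestCase):
        def test_employee_keeps_its_name(self):
            # The constructor signature here is hypothetical.
            emp = Employee("Alice", 50000)
            self.assertEqual(emp.name, "Alice")

        def test_negative_salary_is_rejected(self):
            # Also hypothetical: assumes Employee validates its salary argument.
            self.assertRaises(ValueError, Employee, "Bob", -1)

    if __name__ == "__main__":
        unittest.main()

For a large amount of data you would typically loop over a table of known inputs and expected outputs inside a test (or generate one test per row) rather than typing values into the terminal.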

Related

turn on/off test in the source code

I am reproducing a spreadsheet in Python. The spreadsheet contains both the data and the processing logic, and it is only produced on Mondays, not on the other weekdays.
I want to run the Python code every day; if it is Monday, I also want to compare the Python results with the spreadsheet results. I have 20+ tests spread across the Python code doing the comparisons. The tests include: 1) checking that the data I got from the production database is the same as in the Excel file, and 2) checking that the Python code produces the same results as Excel (the logic is the same) when the inputs are the same.
How can I turn the tests on for Mondays without inserting 20+ "if Monday: run test_n" checks into the Python code?
I don't think I can separate the tests from the source code, since later tests take inputs from previous processing steps.
It looks like you have a limited number of choices.
You could refactor your code to pull the tests together so you can activate them with fewer if checks. You say that may not be possible, but it seems to me that you should try that first. You recognize there is a smell in your code, so try some refactoring techniques to separate the tests from the source code; there are many books and web sites that discuss them.
You could leave your code as is. This will build up technical debt, but that may be necessary. Use the 20+ if statements and comment them well so they can be found and modified later if needed. At the very least, do the date check only once in your code, set a Boolean variable, and test that variable rather than redoing the date check.
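For example, a single module-level flag keeps the date logic in one place; maybe_compare and its arguments are invented names standing in for your 20+ comparison points:

    import datetime

    # Evaluate the date check once at start-up; Monday is weekday 0.
    IS_MONDAY = datetime.date.today().weekday() == 0

    def maybe_compare(step_result, expected_from_spreadsheet):
        # Called at each comparison point; it only does the check on Mondays.
        if IS_MONDAY:
            assert step_result == expected_from_spreadsheet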
Without more detail I do not see how we could offer any other options.
If these are tests in the "make sure it works" sense, they should not be in the production code. They should be wholly separate in a test suite.
Testing code is a very broad topic, but here are a few resources to get you started.
Writing unit tests in Python, where do I start?
The Python unittest library.
Improve Your Python: Understanding Unit Testing
Python Software Development and Software Testing (posts and podcast)
I don't think I can separate the tests from the source code, since later tests take inputs from previous processing steps.
You absolutely can, every system does, but it may require redesigning your system. This is a common chicken-and-egg problem for legacy code: how do you change it safely if you can't test it? And there are various techniques for dealing with that. Refactoring, the process of redesigning code without changing how it works, will feature prominently. But without details I can't say much more.
1) checking that the data I got from the production database is the same as in the Excel file
2) checking that the Python code produces the same results as Excel (the logic is the same) when the inputs are the same.
Rather than testing inside your code, you should be testing its outputs.
Both of these should be a matter of converting the output of the various processes into a common format which can then be compared. This could be dumping them as JSON, turning them all into Python data structures, CSVs... whatever is easiest for your data. Then compare them to ensure they're the same.
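As a rough sketch of that idea, round-tripping both results through JSON coerces them into plain, comparable structures; you would feed it whatever you extract from the Python run and from the spreadsheet:

    import json

    def normalise(result):
        # Coerce dates, Decimals, etc. to strings (default=str) so the Python-side
        # and Excel-side structures can be compared directly.
        return json.loads(json.dumps(result, sort_keys=True, default=str))

    def results_match(python_result, excel_result):
        return normalise(python_result) == normalise(excel_result)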
Again, without more detail about your situation I can't offer much more.

Best way to test a bunch of files with python

I'm trying to process a bunch of files, which aren't known until the testing begins, and I want to add testing mechanisms to the processing so I know if there are any errors and get a report out at the end.
This processing requires a little bit of setup, so a setup step needs to run once before the processing actually begins. Does anyone have any good examples of this kind of process being done in Python?
I've been researching this for the past few days and I haven't found any good solutions. A few options that I've seen are:
unittest: Using dynamically generated tests with a setUpClass method to do the setup.
Nose doesn't seem like an option at all because of the lack of continued support. This testing needs to last for a long time.
pytest (?): Not quite sure on this, the documentation isn't very good and I haven't found any concrete examples.
Basically I need a testing framework that has the ability to dynamically create parameterized tests with dependencies.
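For reference, the unittest option can be sketched roughly like this; the glob pattern and the trivial process_file are placeholders for your own discovery and processing code:

    import glob
    import unittest

    def process_file(path):
        # Placeholder for the real per-file processing logic.
        return open(path).readline() != ""

    class FileProcessingTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # One-off setup that runs once before any of the generated tests.
            cls.ready = True

    def make_test(path):
        def test(self):
            self.assertTrue(process_file(path))
        return test

    # The file list is only known at run time; attach one test method per file.
    for i, path in enumerate(sorted(glob.glob("incoming/*.csv"))):
        setattr(FileProcessingTests, "test_file_%d" % i, make_test(path))

    if __name__ == "__main__":
        unittest.main()

pytest's parametrize marker and fixtures cover similar ground with less boilerplate, but the stdlib version above has no extra dependencies.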

Testing only affected code in Python

I've been working on a fairly large Python project with a number of tests.
Some specific parts of the application require some CPU-intensive testing, and our approach of testing everything before commit stopped making sense.
Since then we've adopted a tag-based selective-testing approach. The problem is that, as the codebase grows, maintaining that tagging scheme becomes somewhat cumbersome, and I'd like to start studying whether we could build something smarter.
In a previous job the test system was such that it only tested code that was affected by the changes in the commit.
It seems like Mighty Moose employs a similar approach for CLR languages. Using these as inspiration, my question is, what alternatives are there (if any) for smart selective testing in Python projects?
In case there aren't any, what would be good initial approaches for building something like that?
The idea of automating the selective testing of parts of your application definitely sounds interesting. However, it feels like something that would be much easier to achieve in a statically typed language; given the dynamic nature of Python, it would probably take a serious time investment to get something that can reliably detect all tests affected by a given commit.
Reading your problem, and putting the idea of selective testing aside, the approach that springs to mind is grouping tests so that you can execute test suites in isolation. That enables a number of useful automated test-execution strategies that shorten the feedback loop, such as:
Parallel execution of separate test suites on different machines
Running tests at different stages of the build pipeline
Running some tests on each commit and others on nightly builds.
Therefore, I think your approach of using tags to partition tests into different 'groups' is a smart one, though as you say managing these becomes difficult with a large test suite. Given that, it may be worth focusing time on building tools to aid in the management of your test suite, particularly the management of your tags (a rough sketch follows the list below). Such a system could be built by gathering information from:
Test result output (pass/fail, execution time, logged output)
Code coverage output
Source code analysis
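As one rough illustration of that tooling idea, the snippet below maps changed files (from git) onto tags and runs only the matching groups. The module-to-tag mapping is hand-maintained, and it assumes your tags are pytest markers; both are assumptions about your setup, so adapt them to your scheme.

    import subprocess

    # Hand-maintained mapping from source modules to test tags (assumed scheme).
    TAGS_BY_MODULE = {
        "billing.py": "billing",
        "reports.py": "reports",
    }

    # Ask git which files changed in the last commit and pick the matching tags.
    changed = subprocess.check_output(
        ["git", "diff", "--name-only", "HEAD~1"]).decode().splitlines()
    tags = sorted({TAGS_BY_MODULE[name] for name in changed if name in TAGS_BY_MODULE})

    if tags:
        # pytest selects tests by marker expression with -m, e.g. "billing or reports".
        subprocess.call(["pytest", "-m", " or ".join(tags)])

Test-result and coverage output could then be fed back in to refine the mapping over time.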
Good luck, it's definitely an interesting problem you are trying to solve, and I hope some of these ideas help you.
I guess you are looking for a continuous testing tool?
I created a tool that sits in the background and runs only the impacted tests (you will need the PyCharm plugin and pycrunch-engine from pip):
https://github.com/gleb-sevruk/pycrunch-engine
This will be particularly useful if you are using PyCharm.
More details are in this answer:
https://stackoverflow.com/a/58136374/2377370
If you are using unittest.TestCase, you can specify which test files to execute with the pattern parameter of test discovery, and then run tests based on the code that changed. Even if you are not using unittest, your tests should be organised by functional area/module so that you can use a similar approach.
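For instance, with standard test discovery (the directory name and file pattern below are assumptions about your layout):

    import unittest

    # Discover and run only the test modules whose file names match the pattern.
    suite = unittest.TestLoader().discover("tests", pattern="test_billing*.py")
    unittest.TextTestRunner(verbosity=2).run(suite)

The same thing is available from the command line as python -m unittest discover -s tests -p "test_billing*.py".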
Optionally (not an elegant solution to your problem), if each developer/group or functional code area committed to a separate branch, you could have that branch's tests executed in your continuous-testing environment. Once they have completed (and passed), you can merge the branches into your main trunk/master branch.
A combination of nightly jobs of all tests and per-branch tests every 15-30 minutes (if there are new commits) should suffice.
A few random thoughts on this subject, based on work I did previously on a Perl codebase with similar "full build is too long" problems:
Knowing your dependencies is key to making this work. If module A depends on B and C, then you need to test A when either of them is changed. It looks like Snakefood is a good way to get a dictionary that outlines the dependencies in your code; if you take that and translate it into a makefile, then you can simply "make test" on check-in and all of the dependencies (and only the needed ones) will be rebuilt and tested.
Once you have a makefile, work on making it parallel; if you can run a half-dozen tests in parallel, you'll greatly decrease running time.
If you write the test results to a file, you can then use make (or a similar tool) to determine when the tests need to be "rebuilt": make can compare the timestamp of the test results against those of the dependent Python files.
Unfortunately Python isn't too good at determining what it depends on, because modules can be imported dynamically, so you can't reliably look at imports to determine affected modules.
I would use a naming convention to allow make to solve this generically. A naive example would be:
%.test_result : %_test.py
	python $< > $@
This defines an implicit rule that turns a _test.py script into its test result; $< is the prerequisite test script and $@ is the target result file, and note that the recipe line must be indented with a tab.
Then you can tell make about the additional dependencies for your tests, something like this:
my_module_test.py : module1.py module2.py external\module1.py
Consider turning the question around: what tests need to be excluded to make running the rest tolerable? The CPython test suite in Lib/test excludes resource-heavy tests until they are specifically requested (as they may be on a buildbot). Some of the optional resources are 'cpu' (time), 'largefile' (disk space), and 'network' (connections). (python -m test -h on 3.x, or test.regrtest on 2.x, gives the whole list.)
Unfortunately, I cannot tell you how to do so as 'skip if resource is not available' is a feature of the older test.regrtest runner that the test suite uses. There is an issue on the tracker to add resources to unittest.
What might work in the meantime is something like this: add a machine-specific file, exclusions.py, containing a list of strings like those above. Then import exclusions and skip tests, cases, or modules if the appropriate string is in the list.
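A minimal sketch of that approach, with the resource names and the test itself invented for illustration:

    # exclusions.py (machine-specific, kept out of version control) would contain:
    #     EXCLUDED = ["cpu", "network"]

    import unittest

    try:
        from exclusions import EXCLUDED
    except ImportError:  # no exclusions file on this machine, so skip nothing
        EXCLUDED = []

    class LargeDownloadTests(unittest.TestCase):
        @unittest.skipIf("network" in EXCLUDED,
                         "network resource excluded on this machine")
        def test_download(self):
            pass  # the real, network-hungry test body goes here

    if __name__ == "__main__":
        unittest.main()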
We've run into this problem a number of times in the past and have been able to address it by improving and refactoring tests. You don't specify your development practices or how long it takes to run your tests. I would say that if you are doing TDD, your tests need to run in no more than a few seconds. Anything that runs longer than that needs to move to a server. If your tests take longer than a day to run, then you have a real issue, and it will limit your ability to deliver functionality quickly and effectively.
Couldn't you use something like Fabric? http://docs.fabfile.org/en/1.7/

I am trying to develop a test runner in Python

Any pointers? Suggestions? Opinions?
Here is a draft specification I have in mind:
Can run individual test methods
Can run a single Test Class
Result in XML
Result in HTML
Dry-run
Calculate and display time taken by each test case, and overall time.
Timeout for test cases
TAP type test results
Log Levels
Create Skeleton test cases
Coverage
Be able to run on a remote host (maybe)
Test Reports
Command line Help (--help)
Now, where do I start?
Have you seen nose or py.test? Those projects implement a lot of the features that you describe. It might be easier to write an extension for one of those projects rather than starting from scratch.
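If you do start from scratch anyway, the standard library already provides the loading and running building blocks, so a first cut can be very small; XML/HTML reports, timeouts, TAP output and remote execution would then be layered on top of something like this sketch:

    import sys
    import time
    import unittest

    def run(target):
        # target is a dotted name: a module, a TestCase class, or a single test method.
        suite = unittest.TestLoader().loadTestsFromName(target)
        started = time.time()
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        print("ran %d tests in %.2fs (failures=%d, errors=%d)" % (
            result.testsRun, time.time() - started,
            len(result.failures), len(result.errors)))
        return 0 if result.wasSuccessful() else 1

    if __name__ == "__main__":
        sys.exit(run(sys.argv[1]))  # e.g. mypackage.tests.MyCase.test_something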
There's also green. I wrote it after I got frustrated with nose refusing to accept my pull requests to fix bugs, among other reasons.

Where are the layman non-prerequisite introductory materials for everything-testing in Python?

This question has been modified
If I really had to pick the kind of testing I would like to learn (I have no idea how this translates to Python), it is what's described here: http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd. I don't know if this is agile, extreme programming, or just called TDD. Is this unit testing, doc testing, a combination of both, or am I missing something? Is there something even better or similar, or is this style simply not applicable? I am looking for extreme-beginner material on how to do testing (specifically for Python) as described in my link. I am open to ideas and all Python resources. Thanks!
I pasted the original message here for reference
http://dpaste.com/274603/plain/
The simplest thing to start with is Python's unittest module. There are frameworks to do things more automatically later on, but unittest works fine for simple cases.
The basic idea is that you have a class for a particular test suite. You define a set of test methods, and optional setUp and tearDown methods to be executed before and after each test in that suite. Within each test, you can use various assert* methods to check that things work.
Finally, you call unittest.main() to run all the tests you've defined.
Have a look at this example: http://docs.python.org/library/unittest#basic-example
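Putting those pieces together, a minimal skeleton looks something like this (the thing being tested is invented for illustration):

    import unittest

    class StackTests(unittest.TestCase):
        def setUp(self):
            # Runs before every test method in this class.
            self.stack = []

        def tearDown(self):
            # Runs after every test method, even when the test fails.
            self.stack = None

        def test_starts_empty(self):
            self.assertEqual(len(self.stack), 0)

        def test_append_then_pop_returns_item(self):
            self.stack.append("x")
            self.assertEqual(self.stack.pop(), "x")

    if __name__ == "__main__":
        unittest.main()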
What resources do you fellas have for starting out in testing in particular to Python? ... What are the most excellent places to start out at, anybody?
Step 1. Write less.
Step 2. Focus.
Step 3. Identify in clear, simple sentences what you are testing. Is it software? What does it do? What architecture does it run on? Specifically list the specific things you're actually going to test. Specific. Focused.
Step 4. For one thing you're going to test. One thing. Pick a requirement it must meet. One requirement. E.g., "Given x and y as input, computes z."
Looking at your question, I feel you might find this to be very, very hard. But it's central to testing. Indeed, this is all that testing is.
You must have something to test. (A "fixture".)
You must have requirements against which to test it. (A "TestCase".)
You must have measurable pass/fail criteria. (An "assertion".)
If you don't have that, you can't test. It helps to write it down in words. Short, focused lists of fixtures, cases and assertions. Short. Focused.
Once you've got one requirement, testing is just coding the requirements into a language that asserts the results of each test case. Nothing more.
Your unittest.TestCase uses setUp to create a fixture. A TestCase can have one or more test methods to exercise the fixture in different ways. Each test method has one or more assertions about the fixture.
Once you have a test case which passes, you can move back to Step 4 and do one more requirement.
Once you have all the requirements for a fixture, you go back to Step 3 and do one more fixture.
Build up your tests slowly. In pieces. Write less. Focus.
I like this one and this one too
Testing is more of an art, and you get better through practice.
For myself, I have read the book Pragmatic Unit Testing in C# with NUnit, and I really recommend it.
Although it's about .NET and C# rather than Python, you can skip the exact code and still take in the principles being taught.
It contains good examples of what to look out for when designing tests, and of what kind of code should be written to support the test-first paradigm.
Here's a link to C. Titus Brown's Software Carpentry notes on testing:
http://ivory.idyll.org/articles/advanced-swc/#testing-your-software
It talks basics about testing, including ideas of how to test your code (and retrofitting tests onto existing code). Seems up your alley.
