I am using PyDev on Eclipse. I understand that if I have an Eclipse folder with 5 files containing unit tests, I can run these tests once by right-clicking the folder name in Eclipse and choosing "Run As" --> "Python unit-test". This works fine for me.
What would be the recommended way to run these tests a fixed number of times? For example, what if I wanted to run the 5 tests in the folder 10 times each?
I would be very grateful if you could help me out.
Thanks!
I think that the problem is in the way you are constructing your tests. There are two problems I see:
If tests are failing because of poor image recognition, then surely they indicate either a bug in Sikuli, or a badly designed test. Unit tests should be predictable and repeatable, so requiring that they run several times indicates that they are not well set up.
If you really do need to run the UI tests multiple times, then this should be done in the code, not in the IDE, since you can't guarantee that they will always be run in that environment (e.g. what if you want to move to CI?). So you need something like this in your code:
def test_ui_component(self):
    for i in range(10):  # repeat the test body as many times as you need, e.g. 10
        # Test code here
        ...
You could probably abstract the pattern out using a decorator or class inheritance if you really want to.
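For example, a minimal sketch of the decorator approach might look like this (the repeat helper below is hypothetical, not something provided by unittest or any library):

import functools
import unittest

def repeat(times):
    """Hypothetical decorator: run the wrapped test body `times` times."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for _ in range(times):
                test_func(*args, **kwargs)
        return wrapper
    return decorator

class UITests(unittest.TestCase):
    @repeat(10)
    def test_ui_component(self):
        ...  # Test code here; any failure stops the loop immediately

The test is still discovered as a single test_ui_component; the repetition happens inside it.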
I've seen a few questions asking this, but none of the solutions worked for me.
I am developing a few functions/classes in different modules and have a main.py script that calls everything.
The problem is that when I make a change to a function in another module (e.g. module1.py), VSCode does not detect the change when I call the function in main.py afterwards; it still runs the older version.
I can get around this by doing something like:
from importlib import reload
reload(module1)
but this gets old real quick especially when I'm importing specific functions or classes from a module.
Simply re-running the imports at the top of my main.py doesn't actually do anything; I only pick up the changes if I kill the shell and reopen it from the beginning, which is not ideal when I am incrementally developing something.
I've read on a few questions that I could include this:
"files.useExperimentalFileWatcher" : true
into my settings.json, but it does not seem to be a known configuration setting in my version, 1.45.1.
This is something Spyder handles by default, and makes it very easy to code incrementally when calling functions and classes from multiple modules in the pkg you are developing.
How can I achieve this in VSCode? To be clear, I don't want to use IPython autoreload magic command.
Much appreciated
FYI, here are the other questions I saw but did not get a working solution out of, among others with similar questions/answers:
link1
link2
There is no support for this in VS Code, as Python's reload mechanism is not reliable enough to use outside of the REPL, and even then you should be careful. It isn't a perfect solution and can lead to stale code lying about, which can easily trip you up (and I know this because I wrote importlib.reload() 😁).
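For what it's worth, the manual REPL pattern looks roughly like this (module1 and my_function are illustrative names); note that anything imported with "from module1 import my_function" has to be re-bound after the reload, which is exactly where stale references creep in:

import importlib

import module1                       # illustrative module
from module1 import my_function      # illustrative function

# ... edit module1.py on disk, then in the same REPL session:
importlib.reload(module1)            # re-executes module1's code in place
my_function = module1.my_function    # re-bind; the old name still points at the stale function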
We have been using Django for a long time. Some old code is not being used now. How can I find which code is no longer used and remove it?
I used coverage.py with unit tests, which works fine and shows which parts of the code are never used, but the test coverage is very low. Is there any way to use it with a WSGI server to find which code has never served any web request?
It depends on what you mean by unused code.
For unreachable dead code, like functions that are never called or classes that are never instantiated, you can use a purely static code analyzer to find them. Pylint is a good option. Bear in mind that this is not 100% accurate; false positives are possible:
# static analysis can't detect methods called this way
func = getattr(obj, "func_name")
func()
For code that is reachable but never actually reached, you have to rely on tools like coverage.py, and improve your test coverage.
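To address the WSGI part of the question: one rough way to collect coverage from real web traffic is to start coverage programmatically in your WSGI entry point and save the data on shutdown. A minimal sketch, assuming a standard Django wsgi.py and that you only do this on a staging box (the data-file path and settings module are placeholders):

# wsgi.py -- sketch only; whether the atexit hook fires depends on how your
# WSGI server shuts down its workers, so treat the collected data as indicative.
import atexit
import os

import coverage

cov = coverage.Coverage(data_file="/tmp/wsgi.coverage")
cov.start()

def _save_coverage():
    cov.stop()
    cov.save()

atexit.register(_save_coverage)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")  # placeholder

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

Afterwards, point coverage report or coverage html at the collected data file (e.g. via the COVERAGE_FILE environment variable) to see which views and helpers never ran.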
On a well-tested project coverage would be ideal, but with untested legacy code I don't think there is a magical tool.
You could write a big test loading all the pages and run coverage to get some indication.
Cowboy style:
If it's not critical code and you're fairly sure it's unused (i.e. not handling payments, etc.), comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you get a notification).
As other answers indicate, coverage.py accurately finds out which parts of the code are never executed, but it requires your code to actually be run to perform the analysis. Vulture, on the other hand, uses static analysis to find dead (unused) code in Python programs. Also, if you run Vulture on both your library and your test suite, you might be able to find untested code.
Vulture is a standard PyPI package and can be installed using pip:
$ pip install vulture
Run vulture using the command:
$ vulture apps/ tests/ --exclude settings
Bear in mind that, due to Python's dynamic nature, there may be some false positives, but they can be dealt with by means of "whitelists". Please refer to this answer by Rahul for more information on how to use Vulture with Django and tackle false positives.
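As a rough illustration (the module and attribute names below are made up), a whitelist is just a Python file that references the names Vulture flags incorrectly; Vulture only parses it, it never executes it, and you pass it on the command line next to your code:

# whitelist.py -- hypothetical entries for dynamically-referenced code
from apps.orders import views   # illustrative module

views.payment_callback          # reached via URL dispatch / getattr, not a direct call
views.OrderAdmin.save_model     # invoked by the Django admin, so static analysis can't see it

Then include the whitelist in the run:

$ vulture apps/ tests/ whitelist.py --exclude settings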
My folder tree:
project/
    app/
        __init__.py
        models.py
    dir/test1.py
    dir/__init__.py
    run.py
    dir/test2.py
    dir/__init__.py
If I want to do a
from app.models import Whatever
from test1 and test2, the only thing that works is to manually append to sys.path something like
os.path.join(os.path.dirname(__file__), "../..")
However, there are a ton of answers on SO saying that messing with sys.path will give me trouble down the line (why?). Sadly, after 1+ hour of googling I still haven't figured out the right way to import stuff, and I'm getting really confused.
It is enormously better to test than not test, so if you need to append paths to sys.path to make it work--and in this directory configuration, you will--that's a reasonable and pragmatic step.
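Concretely, that pragmatic patch at the top of a test file might look like this (the "../.." assumes the test sits two directories below the project root, as in your tree; adjust to taste):

# test1.py -- path patch before any project imports
import os
import sys

# assumption: project/ is two levels up from this file
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from app.models import Whatever  # resolvable now that project/ is on sys.path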
However, in general it is better not to fiddle with module load paths manually. It assumes that your code will always be loaded in a directory right outside your test folder, which might not always be true. But "you will run into problems down the line" is pretty weak tea.
The bigger issue is that you cannot use that little path-patch to accomplish the kind of automated testing and test management you will ultimately want/need. Much better is to learn to use a real test harness / test runner such as pytest or nose. Even better if you also use a macro test runner such as tox. The combination will automatically install your software (including any declared dependencies) in a completely new, pristine virtual environment. This will help you test not just the normal operability of your module, but also its installability. It will also help you easily run tests across different versions of Python with very little additional effort. (I currently test across a range of 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, and several versions of PyPy, for example--not because I use all those versions, but because it doesn't cost me anything extra to make sure my software runs across a large swath of the entire Python universe.)
Fair warning: Setting up your testing environment is a bit of a pill the first time out, requiring a fair amount of effort and learning of "mechanic" skills. But this is a one-time investment that will pay dividends across all of your Python work.
So long story short, patching sys.path is a fair place to start. But you'll ultimately need more generality, breadth, depth, test fixtures, and automation in your testing--and path patching can't help you with those. Upgrading to real testing tools can.
I have a third-party piece of software which is able to run some Python scripts using something like:
software.exe -script pythonscript.py
My company is heavily dependent on this software as well as on the scripts we develop for it. Currently we have some QA that checks the output of the scripts, but we really want to start unit testing the scripts to make it easier to find bugs and make the test system more complete.
My problem is: how is it possible to run "embedded" unit tests? We use PyDev+Eclipse, and I tried to use its remote debugging to make it work with the unit tests, but I cannot really make it work. How can I make the server connection "feed" the unit tests?
The other idea would be to parse the stdout of the software, but that would not really be a unit test... And the added complexity it seems to bring makes this approach less interesting.
I would expect that something like this has already been done somewhere else and I tried googling for it, but maybe I am just not using the correct keywords. Could anyone give me a starting point?
Thank you
A bit more info would be helpful. Are you using a testing framework (e.g. unittest or nose), or if not, how are the tests structured? What is software.exe?
In Python, unit tests are really nothing more than a collection of functions which raise an exception on failure, so they can be called from a script like any other function. In theory, therefore, you can simply create a test runner (if you're not already using one such as nose), and run it as software.exe -script runtests.py. In PyDev, you can set up software.exe as a customised Python interpreter.
If the problem is that software.exe hides stdout, then simply write the results to a log file instead. You could also create an environment which mocks the one provided by software.exe and run the tests using python.exe.
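A minimal runner along those lines might look like the following sketch (the tests/ directory and the log-file name are assumptions about your layout):

# runtests.py -- sketch of a runner that software.exe could execute;
# results go to a log file in case software.exe swallows stdout.
import unittest

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover("tests")           # assumes tests live in tests/
    with open("test_results.log", "w") as log:
        runner = unittest.TextTestRunner(stream=log, verbosity=2)  # write results to the log
        runner.run(suite)

You would then invoke it as software.exe -script runtests.py and inspect test_results.log afterwards.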
If the unit tests are for your code and not for the functionality provided by software.exe, then you could run the tests with a standalone Python interpreter, mocking the software.exe parts where necessary. As an intermediate step you could try to run unittest-based scripts using software.exe.
Well, generally speaking, testing software should be done by a continuous integration suite (and Jenkins is your friend).
Now, I think you'll have to test your pythonscript.py scripts by putting a test() function inside each script that emulates the possible environments you'll give to the script's entry point. You'll then be able to use unittest to execute the test functions of all your scripts. You can also embed tests in doctests, but I personally don't like that.
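As a rough sketch of that idea (main(), test() and the fake environment are illustrative, since we don't know what software.exe actually passes in):

# pythonscript.py -- sketch; main() stands for the real entry point software.exe drives,
# test() exercises it with an emulated environment via unittest.
import unittest

def main(environment):
    # real script logic would go here, driven by whatever software.exe provides
    return environment.get("value", 0) * 2

class ScriptTests(unittest.TestCase):
    def test_main_with_fake_environment(self):
        fake_env = {"value": 21}               # emulate what software.exe would pass in
        self.assertEqual(main(fake_env), 42)

def test():
    # run this module's TestCases without killing the host process
    unittest.main(module=__name__, argv=["pythonscript"], exit=False)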
And then, in your software.exe, you'll be able to execute tests by emulating all the environment combinations. But as you don't say much about software.exe, I won't be able to help you more... (What language is it? Is software.exe already unit tested?)
I'm writing quite a few unit tests and using nosetests to run them. Nose certainly makes it nice and easy to run tests, but the output can be pretty cluttered at the best of times, and a downright mess at others, depending on warnings and errors.
I was wondering if there are any visual tools that wrap nose so that the feedback is cleaner. A results grid showing test names, run times and success/failure status indicated with colours would be a huge visual aid. Better still, if it could split up the error messages and tracebacks on a case-by-case basis, it would really cut down on the amount of clutter when a large number of tests fail on a major change.
This is starting to read like a wishlist, but does anything even close to this exist?
There's a very similar question here.
It doesn't look like there's a standardized GUI for nosetests (as in, standardized for just the Python interface). There seem to be a few GUI plugins for nosetests depending on which IDE you're using; a quick search brought up this link for Komodo.
However, there are many other GUI based unit testing frameworks out there for Python - have you tried PyUnit?
Well, what you can do is use the Jenkins integration server. We use it with py.test and PHPUnit (it is really simple to use it with any other testing framework), and it provides us with some fancy (and useful!) graphs containing:
Number of tests that run
Number of tests passing/failing
The stack trace for the failed tests
Execution times for tests
Code coverage for the tests
You can also relate it to git/svn/mercurial...
Here is an example of how to configure Hudson (for the record, Jenkins is a fork of Hudson, so the instructions are still valid :) in order to provide the elements from your wishlist.
An easier option is to use an IDE such as PyCharm; it has integration with nosetests and will tell you which tests passed/failed and their times. However, Jenkins stores historical data.
Good luck :)
Have you taken a look at the Nose plugins page? A quick glance revealed at least a couple that stated they could provide a GUI interface.