I implemented some unit tests (with unittest) for a QGIS 3 plugin, and I would like to run those tests programmatically.
Currently, I launch those tests directly from the QGIS UI, but that prevents me from using CI tools like Travis or GitLab CI...
I already found this topic, but it is outdated and most of the links are already dead: https://gis.stackexchange.com/questions/71206/writing-automated-tests-for-qgis-plugins?rq=1
This page was also very detailed, but a note from 2017 says it is obsolete.
Does anyone know of a way to achieve this, or at least of some resources or documentation on the subject?
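For reference, one common way to do this is to bootstrap a headless QGIS instance and let unittest discover and run the plugin tests. This is only a minimal sketch, assuming the tests live in a tests package and that the qgis Python bindings are importable in the environment running it (for example inside a QGIS Docker image on CI):

import sys
import unittest

from qgis.core import QgsApplication

# Start a headless QGIS instance so the plugin code can use the QGIS API.
qgs = QgsApplication([], False)
qgs.initQgis()

# Discover and run every test module in the (assumed) tests package.
suite = unittest.TestLoader().discover("tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)

qgs.exitQgis()

# Exit non-zero so the CI job is marked as failed when any test fails.
sys.exit(0 if result.wasSuccessful() else 1)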
I'm a newbie, I'm afraid, and have newbie questions.
I have used python for simple scripts and automation for a while, but am challenging myself to go deeper by contributing to some open source projects on GitHub.
It's been fun, but also nerve-wracking to make dumb mistakes in such a public environment.
Sometimes one of my changes causes an error that is caught by one of the automated tests that the GitHub project runs when a PR is submitted. I'd like to catch those myself, if possible, before submitting the PR. Is there a way for me to run the same build tests locally on my own machine?
Any other best practice suggestions for doing open-source contributions without asking for too much time/help from maintainers are also appreciated.
Running the entire build locally doesn't really make sense. Especially not for just the tests.
GitHub and most open source repositories have contribution guidelines. GitHub in particular supports a CONTRIBUTING.md file that lets repo owners explain how to contribute.
For example:
CPython has a testing section in its README.
Django's README has a contributing section that explains how to run the test suite.
Most well-maintained open source projects include explanations of how to run their tests/builds locally.
Do not, however, feel ashamed about something like broken tests. This is what version control systems are for. Make 10 mistakes, fix the bug/add the feature, make 20 more mistakes afterwards. You can just make typos and fix them in the next commit. It doesn't matter. Just rebase your branch after you have added what you needed to add and you are good to go. Making mistakes is nothing to be ashamed of, especially since we have tools to fix those mistakes easily.
Why not act?
Act is OK. It is a nice tool that I myself use. But you don't need to run the entire workflow just for the tests when you can run the tests without it, and it is not really a small tool.
The problem with act is that it is only for GitHub Actions, which is only one of many CI tools.
Travis, CircleCI, Jenkins, ...
It's better to just read the project you are contributing to and follow their guidelines.
Act works most of the time but is a bit limited on the types of images it can use.
I really feel you on this one, would be nice if there were tools for this :/
I am studying TDD and developing an API with Django REST Framework, and I have a need I researched without finding a tool that solves it: I am trying to find out what percentage of my application my tests cover.
To learn what is covered and get suggestions about what is still missing, I found the coverage lib, but it generates a report with lots of data that is not very useful in my case; I just want to know the coverage of the tests I created. Does anyone know of a tool or PyCharm plugin that measures this test coverage?
I know that in Visual Studio there is NCrunch, which does this, but I do not know if there is something similar in PyCharm.
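For what it's worth, the coverage lib can already give just the overall percentage when driven from a small script. This is a rough sketch assuming plain unittest-style tests in a tests package and an application package called myapp (with Django you would more typically run coverage around manage.py test instead):

import unittest

import coverage

# Measure only the application package, not the test code ("myapp" is a placeholder).
cov = coverage.Coverage(source=["myapp"])
cov.start()

suite = unittest.TestLoader().discover("tests")
unittest.TextTestRunner(verbosity=1).run(suite)

cov.stop()
cov.save()

# report() prints the per-file table and returns the overall percentage.
total = cov.report()
print("Total coverage: {:.1f}%".format(total))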
I was struggling with the same question.
In particular, I wanted to visualize the execution path of each test and run only the affected tests.
I created a tool that sits in the background and runs only impacted tests:
(You will need the PyCharm plugin and pycrunch-engine from pip.)
https://pycrunch.com
https://github.com/gleb-sevruk/pycrunch-engine
It is currently in beta and may not support all usage scenarios, but I use it every day for development without major issues.
I found a feature in PyCharm Professional that does what I need: running the tests with coverage. There is an option that reruns the tests to check that everything is OK.
This tool also has another feature that shows the coverage of your tests against the existing code.
I hope this helps someone who has the same question. Thanks!
I'm using py.test to create unit tests for my application, but I am stuck with a problem.
I create automated web software, so a lot of my methods connect to external servers. I don't want to do this within the tests; instead, I would rather store the HTML source and test against that.
The question is how do I do this? For example where do I store the test data? Is there anything within py.test that can aid in storing/testing offline data?
The general solution is to use mocking: replace the library that calls out to the web service with something that acts like that library but returns test versions of the normal results.
Use the unittest.mock library to do the mocking; it comes with Python 3.3 and up, or is available as a backport for older Python releases.
Just add a new package to your tests package (where all your unittests are stored) that handles the 'fixtures', the test data to be produced for certain arguments.
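To make that concrete, here is a rough py.test-style sketch combining both suggestions. The myapp.scraper module, its get_title function and the fixtures/example_page.html file are all hypothetical; the only assumption is that the code under test fetches pages with the requests library:

from pathlib import Path
from unittest import mock

from myapp import scraper  # hypothetical module under test

FIXTURES = Path(__file__).parent / "fixtures"  # stored HTML lives next to the tests


def test_parse_title_from_stored_html():
    # Build a stand-in for the object requests.get would normally return.
    fake_response = mock.Mock(status_code=200)
    fake_response.text = (FIXTURES / "example_page.html").read_text()

    # Patch requests.get as seen from the scraper module, so no network call is made.
    with mock.patch("myapp.scraper.requests.get", return_value=fake_response):
        # "Example Domain" assumes the fixture is a saved copy of example.com.
        assert scraper.get_title("https://example.com/") == "Example Domain"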
I've recently implemented two-factor auth in a Django application. I used a third-party package for it, which is already well tested. I want to write unit tests for my code, but it seems silly to test things which are really just their package, and I feel really odd writing larger-scale Selenium tests for the login process, especially e.g. scanning a QR code. Is the answer that if I'm not doing anything new with the code, just dropping in the existing library, I can't effectively write tests for it (because it's unnecessary)?
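One common compromise is to test only your own wiring around the package rather than the package itself, for example that the views you protected really sit behind the authentication machinery. A rough sketch, where the URL name dashboard is hypothetical and the exact redirect target depends on how the third-party package is configured:

from django.test import TestCase
from django.urls import reverse


class TwoFactorIntegrationTests(TestCase):
    def test_protected_view_requires_login(self):
        # Only assert that our view is wired behind the auth/2FA machinery;
        # the package's own test suite already covers the TOTP details.
        response = self.client.get(reverse("dashboard"))
        self.assertEqual(response.status_code, 302)
        self.assertIn("login", response["Location"])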
I have developed a black-box test environment in Python 3.2 for testing a piece of hardware. In this environment I have a TestExecution.py module where I run my tests as follows:
while True:
    TestWithRestart("Test122")
    TestWithRestart("Test123", keys="invalid_keys.dat")
    TestWithOneComPort("Test200", keys="invalid_keys.dat")
    TestWithTwoComPorts("Test200")
    TestWithTwoComPorts("Test200", ppc_simulation_script="Test200.pcc")
    TestWithNoComPort()
    TestTime("Test500")
    Test600()
    TestWithComPortNoise("Test600")
    TestWithComPortInteruption("Test601")
Each hardware release I test is represented on my PC by its own Test Environment folder. This folder contains logs, keys and a TestExecution.py. Each Test Case has its own results folder and in this folder I have log folders for each execution of the test.
It's also possible that I need to design new tests for a new hardware release. In this case it can take numerous attempts until I get such a test to work properly.
With regard to the pass/fail status of a test, I currently establish this by manually checking my log files. The next improvement will be to automate the process of determining whether a test passed or not; I will write separate classes for this, and the process will be ongoing.
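As a rough illustration of what such a result-checking class could look like (the failure markers and the idea of scanning a console log are assumptions; the real criteria depend on the actual log format):

class LogResultChecker:
    def __init__(self, failure_markers=("ERROR", "TIMEOUT")):
        self.failure_markers = failure_markers

    def passed(self, log_file):
        # A run counts as passed if no failure marker appears anywhere in its log.
        with open(log_file, errors="replace") as handle:
            text = handle.read()
        return not any(marker in text for marker in self.failure_markers)


# Hypothetical usage against one execution's log file:
# checker = LogResultChecker()
# print(checker.passed("Test122/run_001/console.log"))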
I'm wondering if I can integrate my environment with Continuous Integration Software with a view to presenting both test execution and/or results in a nice graphical form. It would also be nice to select the tests I wish to execute. What open source software would you recommend?
Thanks,
Barry
Jenkins. For example, you can dump your test results in JUnit XML format and Jenkins will automatically produce nice graphs.
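For plain unittest suites, one way to produce that JUnit XML is the third-party unittest-xml-reporting package (imported as xmlrunner); py.test can do the same with its --junitxml option. A minimal sketch, placed at the bottom of a test module, where test-reports is just an assumed output folder name for Jenkins to collect:

import unittest

import xmlrunner  # provided by the unittest-xml-reporting package


if __name__ == "__main__":
    # Write JUnit-style XML result files into test-reports/ for Jenkins to pick up.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output="test-reports"))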
Plugins depend on your needs, of course, but here is a list of the essential plugins plus my favorites (some of them are bundled in the basic package):
Ant
A version control integration plugin (like Subversion, depends on what you are using)
Parameterized Trigger Plugin
Build Timeout Plugin
Log Parser Plugin
Regex Email Plugin
Artifact Deployer Plugin
Extended e-mail Plugin
As a Python programmer you will also benefit greatly from the Python Jenkins API Wrapper.
In general, however, be careful with plugins: sometimes they are unstable and/or don't function properly. A look at plugin revision history usually can tell you if it is well-maintained.
You may install Jenkins locally on your machine and play with it for a few days before deciding if it fits your needs.