I am studying TDD while developing an API with Django REST Framework, and I have a need I could not find a tool for: I want to know what percentage of my application my tests cover.
To get that number, along with suggestions about what is still uncovered, I found the coverage library, but it generates a report with lots of data that is not very useful in my case; I just want to know the coverage of the tests I created. Does anyone know of a tool or PyCharm plugin that reports this?
I know that Visual Studio has NCrunch for this, but I do not know whether there is something similar for PyCharm.
I was struggling with the same question.
In particular, I wanted to visualize the execution path of each test and run only the affected tests.
I created a tool that sits in the background and runs only impacted tests:
(You will need the PyCharm plugin and pycrunch-engine from pip.)
https://pycrunch.com
https://github.com/gleb-sevruk/pycrunch-engine
It is currently in beta and may not support every usage scenario, but I use it every day for development without major issues.
I found a feature in PyCharm Professional that does what I need: running the tests with coverage. There is an option that re-runs the tests to check that everything is OK.
This feature also shows the coverage of your tests against the existing code.
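If you don't have the Professional edition, the coverage library itself can report just the percentage you care about if you limit it to your own code. A minimal sketch, assuming "myproject" and "myapp" as placeholder names for your settings module and your app:

    # runtests_with_coverage.py -- a minimal sketch; "myproject" and "myapp"
    # are placeholder names for your settings module and your app
    import os
    import coverage

    # measure only your own code so the report stays small
    cov = coverage.Coverage(source=["myapp"])
    cov.start()

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    import django
    from django.conf import settings
    from django.test.utils import get_runner

    django.setup()
    TestRunner = get_runner(settings)
    failures = TestRunner().run_tests(["myapp"])

    cov.stop()
    cov.report()  # prints per-file percentages and a total

The source argument is what keeps everything outside your app out of the report.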
I hope this helps someone with the same question. Thanks!
I'm a newbie, I'm afraid, and have newbie questions.
I have used python for simple scripts and automation for a while, but am challenging myself to go deeper by contributing to some open source projects on GitHub.
It's been fun, but also nerve-wracking to make dumb mistakes in such a public environment.
Sometimes one of my changes causes an error that is caught by one of the automated tests that the GitHub project runs when a PR is submitted. I'd like to catch those myself, if possible, before submitting the PR. Is there a way for me to run the same build tests locally on my own machine?
Any other best-practice suggestions for making open-source contributions without asking for too much time/help from maintainers are also appreciated.
Running the entire build locally doesn't really make sense, especially not just for the tests.
GitHub and most open-source repositories have contribution guidelines. GitHub in particular supports a CONTRIBUTING.md file that lets repo owners explain how to contribute.
For example:
CPython has a testing section in its README.
Django's README has a contributing section that explains how to run the test suite.
Most well-maintained open-source projects explain how to run tests and builds locally; the common case is sketched below.
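A minimal sketch, assuming the project uses pytest, keeps its tests under a tests/ directory, and already has its test dependencies installed:

    # run_local_tests.py -- a minimal sketch, assuming the project uses pytest,
    # keeps its tests under tests/, and has its test dependencies installed
    import sys
    import pytest

    # run the same suite CI would run; -x stops at the first failure
    sys.exit(pytest.main(["-x", "tests/"]))

Running this from the project root executes the same suite CI would, so failures show up before you open the PR.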
Do not, however, feel ashamed of something like broken tests. That is what version control systems are for. Make 10 mistakes, fix the bug or add the feature, make 20 mistakes afterwards; you can make typos and fix them in the next commit. It doesn't matter. Just rebase your branch after you have added what you needed to add, and you are good to go. Making mistakes is nothing to be ashamed of, especially since we have tools to fix them easily.
Why not act?
act is OK; it is a nice tool that I use myself. But you don't need to run the entire workflow just for the tests when you can run the tests without it, and it is not exactly a small tool.
The problem with act is that it only covers GitHub Actions, which is just one of many CI tools:
Travis, CircleCI, Jenkins, ...
It's better to just read up on the project you are contributing to and follow its guidelines.
act works most of the time but is a bit limited in the types of images it can use.
I really feel you on this one, would be nice if there were tools for this :/
Ideally I'd like to build a package to deploy to Debian, where the installation process checks that the system has the required dependencies, configures cron jobs, sets up users, and so on.
I've tried googling around, and I understand that a .deb is the format I can distribute in, but that is as far as I got, since I'm now getting confused by the tooling I need to get up to speed with. The other option is to just git clone on the server and configure the environment manually, but that's not preferable for obvious reasons.
How can I get started with building a Debian package, and is that the right direction for deploying web applications? If anyone could point me in the right direction tools-wise, and perhaps to a tutorial, that would be massively appreciated :) If you advise just taking the simple route with git, I'm happy to take that advice too if you explain why. If it makes any difference, I'm deploying one Node.js and one Python web application.
You can certainly package everything as a Linux application, for example using PyInstaller for your Python webapp; a sketch is below.
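As a hedged illustration only: PyInstaller exposes a documented Python entry point, so the bundling step could look like this, where app.py and the bundle name are placeholders:

    # build_bundle.py -- a hedged sketch using PyInstaller's documented
    # Python entry point; "app.py" and "mywebapp" are placeholder names
    import PyInstaller.__main__

    PyInstaller.__main__.run([
        "app.py",              # assumed entry script of the webapp
        "--onefile",           # bundle into one self-contained executable
        "--name", "mywebapp",  # assumed bundle name
    ])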
Besides that, it depends on your use case.
I will focus on the second part of your question,
How can I get started with building a Debian package and is that the right direction for deploying web applications?
as that seems to be what you are after, given that your question already weighs alternatives to .deb.
I want to deploy 1-2 websites on my Linux server
In this case, I'd say manually git clone and configure everything. It's totally fine when you know there won't be much more running on the server, and it is pretty hassle-free.
Why spend time packaging when no one will ever need the package again once you have installed it on your server?
I want to distribute my webapps to others on Debian
Here a .deb would make total sense; for example, Plex Media Server and other applications are shipped like this.
If the official Debian wiki is too abstract, there are also more hands-on guides to get you started quickly. You could also grab other .deb packages and extract them to see what they are made of. You mentioned that one of your websites uses Python, so I suspect it might be Flask or Django. If it's Django, there is an example repository you might want to check out.
I want to run a lot of stuff on my server / distribute to other devs and platforms / or scale soon
In this case I would turn the webapps into Docker containers. They are easy to build, share, and deploy, and you can bundle all dependencies and scripts to make sure everything is set up right. They are also easy to start and stop, so you have a simple on/off switch if your server is running low on resources while you want to run something else. I highly favour this solution, as it also lets you easily control what is running on which IP as you deploy more and more applications to your server. But, as you pointed out, it runs with a bit of overhead and is not the best solution on weak hardware.
Also, if you know for sure what will be running on the server long-term and don't need the flexibility, I would probably skip Docker as well.
I implemented some unit tests (with unittest) for a QGIS 3+ plugin, and I would like to run those tests programmatically.
Currently I launch them directly from the QGIS UI, but that prevents me from using CI tools like Travis or GitLab CI...
I already found this topic, but it is outdated and most of the links are dead: https://gis.stackexchange.com/questions/71206/writing-automated-tests-for-qgis-plugins?rq=1
Another page was very detailed, but a note from 2017 declared it obsolete.
Does anyone know of a way to achieve this, or at least of some resource or documentation on the subject?
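For what it's worth, a hedged sketch of one approach is to initialize QGIS headlessly and hand the suite to unittest; the prefix path and the tests/ directory below are assumptions to adapt:

    # run_plugin_tests.py -- a hedged sketch; the prefix path and the
    # tests/ directory are assumptions to adapt to your setup
    import sys
    import unittest
    from qgis.core import QgsApplication

    # point QGIS at its install location (adjust for your distribution)
    QgsApplication.setPrefixPath("/usr", True)

    # start QGIS without a GUI so the tests can run under CI
    qgs = QgsApplication([], False)
    qgs.initQgis()
    try:
        suite = unittest.defaultTestLoader.discover("tests")
        result = unittest.TextTestRunner(verbosity=2).run(suite)
    finally:
        qgs.exitQgis()

    sys.exit(0 if result.wasSuccessful() else 1)

Run under CI, the exit code then reflects the test result.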
I have a Django site that needs to be rebuilt every night. I would like to check out the code from the Git repo and then do things like setting up the virtual environment, downloading the packages, and so on. There would be no manual intervention, as this would be run from cron.
I'm really confused as to what to use for this. Should I write a Python script or a shell script? Are there any tools that assist with this?
Thanks.
So what I'm looking for is CI, and from what I've seen I'll probably end up using Jenkins or Buildbot for it. I've found the docs rather cryptic for someone who's never attempted anything like this before.
Do all CI tools like Buildbot/Jenkins simply run tests and send you reports, or do they actually set up a working Django environment that you can access through your browser?
You'll need to create some sort of build script that does everything but the Git checkout. I've never used any Python build tools, but perhaps something like SCons would work: http://www.scons.org/.
Once you've created a script, you can use Jenkins to schedule a nightly build and report success/failure: http://jenkins-ci.org/. Jenkins will know how to check out your code, and then you can have it run your script; a sketch of such a script is below.
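A minimal sketch of what that script could be, assuming the checkout contains a requirements.txt and a standard manage.py:

    # nightly_build.py -- a hedged sketch; Jenkins (or cron) handles the Git
    # checkout first, and requirements.txt plus manage.py are assumed to exist
    import os
    import subprocess
    import venv

    VENV_DIR = ".venv"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)  # raises, and so fails the build, on error

    # recreate a clean virtual environment for every build
    venv.EnvBuilder(clear=True, with_pip=True).create(VENV_DIR)
    python = os.path.join(VENV_DIR, "bin", "python")

    run([python, "-m", "pip", "install", "-r", "requirements.txt"])
    run([python, "manage.py", "test"])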
There are literally hundreds of different tools for this. You can write Python scripts to run from cron, you can write shell scripts, or you can use one of the hundreds of build tools.
Most Python/Django shops would likely recommend Fabric. This really is a matter of running through everything that needs to be done, making sure you understand it, and scripting it. Do you need to run a test suite before you deploy, to ensure it doesn't break everything? Do you need to run South database migrations? Think about what needs to be done, then write a Fabric script to do those things; a sketch follows below.
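A hedged fabfile sketch in Fabric 1.x style; the host, project path, and exact steps are placeholders to adapt:

    # fabfile.py -- a hedged sketch in Fabric 1.x style; the host, project
    # path, and steps are placeholders to adapt
    from fabric.api import cd, env, run

    env.hosts = ["deploy@example.com"]  # assumed deployment host

    def deploy():
        with cd("/srv/mysite"):                     # assumed project path
            run("git pull origin master")
            run("pip install -r requirements.txt")
            run("python manage.py test")            # check before migrating
            run("python manage.py migrate")         # South/Django migrations

You would then run fab deploy from the project directory, and Fabric executes the steps over SSH.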
None of this even touches the fact that what you're asking for overall is continuous integration, which itself has a whole slew of tools to help manage it.
What you are asking for is Continuous Integration.
There are many CI tools out there, but in the end it boils down to your personal preferences (as always, hopefully) and which one just works for you.
The Django project itself uses Buildbot.
If you ask me, I would recommend continuous.io, which works out of the box with Django applications.
You can manually set how often you would like to build your Django project, which is great.
You can, of course, write a shell script that rebuilds your Django project via cron, but you deserve better than that.
I'm using Git to push my code from my development machine to my testing server.
Dev: Mac, Python 2.6, SQLite
Test: Linux, Python 2.7, MySQL
I took an early dev database and exported it to MySQL for initial testing.
So now I'm regularly pushing new code to the testing server. In general it seems to be working well, but occasionally I get an IntegrityError about multiple objects with the same primary key.
Does this ring any bells? Is there something inherently wrong with these setups? Obviously there are some configuration differences, for instance Python 2.6 versus 2.7. If there are known issues here, I was hoping somebody could point them out before I try syncing the platform configurations.
Thanks!
I'm not able to answer this question directly.
Depending on why you used a different Python environment for your testing server, there are a few options:
First, if you want to test whether your code functions in multiple environments, I recommend looking into py.test. It has support for distributed testing, including the ability to use a virtualenv for each Python version that you wish to test.
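As a hedged sketch of that idea with the pytest-xdist plugin (the interpreter names are assumptions for your two machines):

    # run_both.py -- a hedged sketch using the pytest-xdist plugin; the
    # interpreter names are assumptions for the two machines
    import sys
    import pytest

    sys.exit(pytest.main([
        "--dist=each",                      # run every test in every environment
        "--tx", "popen//python=python2.6",  # the dev interpreter
        "--tx", "popen//python=python2.7",  # the test-server interpreter
    ]))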
Once you've done this, it will be easier to tell whether your code, Django core, or MySQL is at fault. My suspicion is that there may be a problem with the database abstraction: it looks like SQLite is being tolerant where MySQL is not.
Secondly, it may be worthwhile to look into virtualenv yourself. It creates a standalone Python environment that makes replicating your dev setup much simpler.