What's a sane way to define a custom Travis job?

Currently travis-ci does not support multiple languages or custom jobs at all. I'm aware that I can install a second language in the before_install hook though.
Let me explain my scenario:
I have a Python package which I currently unit test via Travis with language: python for multiple Python versions. Now I want to add an additional job which uses Docker to build and run a container that builds the Python package as a Debian package.
One option would be to just do it in every job, but that would slow down the test time significantly, so I want to avoid that.
Another option would be to work with environment variables set in the Travis build matrix: check whether an env variable is set, and if so, run the Docker integration tests.
Both of those options seem rather bad and hacky.
Thus, what's a sane way of adding such a custom job to my travis build matrix?

I've now solved my needs with the new (in beta) Build Stages. It's not exactly what I wanted, but it works for now.
See https://github.com/timofurrer/w1thermsensor/blob/master/.travis.yml for the .travis.yml and https://travis-ci.org/timofurrer/w1thermsensor/builds/243322310 for the example build.
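For reference, a minimal sketch of what a Build Stages configuration for this scenario can look like (the stage name, image tag, and Docker commands are illustrative; see the linked .travis.yml for the real one):
language: python
python:
  - "2.7"
  - "3.6"
script: pytest
jobs:
  include:
    - stage: package
      services:
        - docker
      script:
        - docker build -t deb-builder .
        - docker run --rm deb-builder
The matrix-expanded Python jobs run in the default test stage; the package stage contains a single job that runs only after all test jobs have passed.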

Related

Python - packaging a source distribution

I'm currently writing a Python program and I want to distribute it to some end users (and developers). I would like to reduce the number of steps necessary to run the program to a minimum.
My use case is relatively simple. I'd like the process/tool/whatever to:
A) Download the list of packages required for the application to work.
B) Run a list of Python scripts sequentially (e.g. create the database and then run migrations).
I understand that distlib does some of this already. However, I find the documentation kind of confusing: there seems to be an API to install scripts, but not one to execute them automatically.
Ideally I would specify a list of scripts, and a list of dependencies and have the program install them automatically.
Maybe the best way to tackle this would be to use make with a Makefile (https://www.gnu.org/software/make/).
Distlib, via the setup.py file, would help you make it more readable by giving names to some Python scripts. And you could use make's target/dependency system to execute tasks sequentially.
If you want to stick to Python, you could also use Luigi (https://luigi.readthedocs.io/en/stable/), but it seems like overkill here.
OK, so I ended up writing my own thing, based on how I wanted the interface to look. The code that installs the application looks like this:
from installtools import setup  # installtools is my own helper; see the gist below

# scripts to run sequentially once the dependencies are installed
scripts = ['create_database.py', 'run_migrations.py']
setup("Shelob", "requirements.txt", scripts)
The full script can be found here: https://gist.github.com/fdemian/808c2b95b4521cd87268235e133c563f
Since pip doesn't have a public API (and isn't likely to have one in the near future), the script uses the subprocess module to call:
pip install -r [requirements_file_path]
After that, it calls the specified Python scripts, one by one. While it is probably not very robust, as a stopgap solution it seems to do the trick.
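For illustration, a minimal sketch of what that subprocess-based approach can look like (the function names here are hypothetical; the real implementation is in the gist above):
import subprocess
import sys

def install_requirements(requirements_path):
    # pip has no public API, so shell out to it via the current interpreter
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "-r", requirements_path])

def run_scripts(scripts):
    # run each setup script sequentially, aborting on the first failure
    for script in scripts:
        subprocess.check_call([sys.executable, script])

install_requirements("requirements.txt")
run_scripts(["create_database.py", "run_migrations.py"])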

Running python unit tests through python

I am working on an interface helper library for a piece of software. The software is on its own release cycle.
I have pretty solid unit tests, but I am not using mock, so testing fully requires the actual software to be installed. Testing is currently automated through Travis CI.
I want to be able to test automatically with multiple versions of Python (Travis is doing that now) and multiple versions of the software. I have set up a Vagrant box that, along with Ansible, deploys the required versions of the software. I have also included tox to test with multiple versions of Python. I am looking to test my supported versions of Python against each supported version of the software automatically.
Tox now runs a shell script that sets the URL of the software endpoint in the environment and runs through all the unit tests. However, at this point I can't tell exactly what failed, i.e. which combination of software version and Python version. It still requires me to manually review a bunch of output.
I would like to write a Python script to manage the testing. Does anyone know how I can invoke a unittest class object from Python? Is there a better way to do this?
Thanks
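For what it's worth, the standard library can drive this directly; a minimal sketch (MyTestCase stands in for one of the real test classes):
import unittest

class MyTestCase(unittest.TestCase):  # stand-in for a real test class
    def test_endpoint(self):
        self.assertTrue(True)

loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(MyTestCase)
result = unittest.TextTestRunner(verbosity=2).run(suite)
# result is a TestResult, so a managing script can record exactly which
# software/Python combination failed instead of eyeballing console output
print("failures:", len(result.failures), "errors:", len(result.errors))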

Is it possible to run 2 separate .travis.yml files from the same GitHub repository?

My current use case is that I use travis-ci very happily to run the test cases for a Python project. This reports a pass or fail based on whether the unit tests pass.
I would like to add pep8 checking to this repository as well, but I don't want my core functionality tests to fail if there is some incorrectly formatted code; I would still like to know about it, though.
Any possible ways of dealing with this would be useful, but my immediate thought was: is there any way of having two separate test runners running off the same repository? .travis.yml running the main tests, and a separate process monitoring my pep8 compliance from .travis2.yml, for example.
I would then have two jobs running and could see at a glance (from the GitHub badge, for example) whether my core functionality tests are still OK, but also how my pep8 compliance is going.
Thanks
Mark
From http://docs.travis-ci.com/user/customizing-the-build/:
Travis CI uses the .travis.yml file in the root of your repository to learn about your project and how you want your builds to be executed.
A mixture of matrix and allow_failures could be used in a single .travis.yml file to address your use case of having two jobs run, where one build reports on your functionality tests and a second build gives you feedback on your pep8 compliance.
For example, the following .travis.yml file causes two builds to occur on Travis. In only one of the builds (i.e. where PEP=true) does the pep8 check occur. If the pep8 check fails, it won't be considered a failure, due to allow_failures:
language: python
env:
- PEP=true
- PEP=false
matrix:
allow_failures:
- env: PEP=true
script:
- if $PEP ; then pep8 . ; fi
- python -m unittest discover

Tools for continuous integration with Python

My project is based only on Python code. We use multiple tools (pylint, a profiler) to improve code quality, and each developer currently has to run them individually. I am planning to integrate all the tools into a single script or tool. In C++ we generally integrate new tools into the Hudson build tool, but I am not sure whether that is possible in Python, as I have only recently moved to Python. So I have a very basic question.
I have searched and found many Python build tools, but I could not figure out which one could be used to integrate the tools as a plug-in.
Do we have any tool which serves this purpose and does not require build functionality?
Somebody suggested I write a shell script rather than look for a tool.
As of now, we are not using any build tool in Python.
You'd better follow @WoLpH's comment on how to configure Hudson. And I strongly advise you to switch to Jenkins, as it has a more active developer community than Hudson.
About using a build tool in python, it depends on the complexity of the project you want to deploy:
if it is a project that has only simple Python dependencies, you'd better use virtualenv;
if you need to check out private repositories, or make more complex arrangements of your repositories (or if you don't want to mess up your shell's environment), then zc.buildout is for you;
if what you want is something closer to a Makefile, but that you can use in a more extensible and Pythonic way, then you'd better have a look at SCons.
Either way, you'll need to write a setup.py and add support for unit testing. For unit testing in Python, you'd better have a look at nose.
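For example, a minimal setup.py along those lines could look like this (the project name is a placeholder; test_suite = "nose.collector" is nose's documented hook for python setup.py test):
from setuptools import setup, find_packages

setup(
    name="myproject",  # placeholder project name
    version="0.1.0",
    packages=find_packages(),
    tests_require=["nose"],
    test_suite="nose.collector",  # lets `python setup.py test` delegate to nose
)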
Once you've chosen your weapons and configured your environment, Jenkins (or Hudson, if you want to keep the old one) is pretty easy to set up.

unit testing embedded python

I have a third-party software package which is able to run some Python scripts using something like:
software.exe -script pythonscript.py
My company is heavily dependent on this software as well as on the scripts we develop for it. Currently we have some QA that checks the output of the scripts, but we really want to start unit testing the scripts to make it easier to find bugs and make the test system more complete.
My problem is: how is it possible to run "embedded" unit tests? We use PyDev + Eclipse, and I tried to use its remote debugging to make it work with the unit tests, but I cannot really make it work. How can I make the server connection "feed" the unit tests?
The other idea would be to parse the stdout of the software, but that would not really be a unit test... And the added complexity it seems to bring makes this approach less interesting.
I would expect that something like this has already been done somewhere else and I tried googling for it, but maybe I am just not using the correct keywords. Could anyone give me a starting point?
Thank you
A bit more info would be helpful. Are you using a testing framework (e.g. unittest or nose), or if not, how are the tests structured? What is software.exe?
In Python, unit tests are really nothing more than a collection of functions which raise an exception on failure, so they can be called from a script like any other function. In theory, therefore, you can simply create a test runner (if you're not already using one, such as nose) and run it as software.exe -script runtests.py. In PyDev, you can set up software.exe as a customised Python interpreter.
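For instance, a minimal runtests.py could be little more than this (assuming the tests live in a tests/ directory):
# runtests.py
import sys
import unittest

# discover and run every test module under tests/
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)
# exit non-zero on failure so the calling process can detect it
sys.exit(0 if result.wasSuccessful() else 1)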
If the problem is that software.exe hides stdout, then simply write the results to a log file instead. You could also create an environment that mocks the one provided by software.exe and run the tests using python.exe.
If the unit tests are for your code, and not for the functionality provided by software.exe, then you could run the tests using a standalone Python, mocking the software.exe parts where necessary. As an intermediate step, you could try to run unittest-based scripts using software.exe.
Well, generally speaking, testing software should be done by a Continuous Integration suite (and Jenkins is your friend).
Now, I think you'll have to test your scripts pythonscript.py by writing a test() function inside each Python script that emulates the possible environments you'll give to the script's entry point. Then you'll be able to use unittest to execute the test functions of all your scripts. You can also embed tests in doctests, but I personally don't like that.
And then, in your software.exe, you'll be able to execute tests by emulating all the environment combinations. But as you don't say much about software.exe, I won't be able to help you more... (What language is it written in? Is software.exe already unit tested?)
