I am working on an interface helper library for a piece of software. The software is on its own release cycle.
I have pretty solid unit tests, but I am not using mocks, so I need the actual software installed to test fully. Testing is currently automated through Travis CI.
I want to automatically test against multiple versions of Python (Travis is doing that now) and multiple versions of the software. I have set up a Vagrant box that, along with Ansible, deploys the required versions of the software, and I have added tox to test against multiple versions of Python. What I'm after is testing each supported version of Python against each supported version of the software automatically.
Tox currently runs a shell script that sets the URL of the software endpoint in the environment and runs all the unit tests. However, at this point I can't tell exactly what failed, i.e. which version of the software against which version of Python. It still requires me to manually review a pile of output.
I would like to write a Python script to manage the testing. Does anyone know how I can invoke a unittest test case class from Python? Is there a better way to do this?
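For context, something like the rough sketch below is what I have in mind (the mylib.tests module name, the SOFTWARE_URL environment variable, and the endpoint URLs are just placeholders for my actual setup):

    import os
    import unittest

    # Placeholder: the library's test module; SOFTWARE_URL is the env var
    # the shell script currently sets before running the tests.
    import mylib.tests

    def run_suite(label, software_url):
        """Run the unit tests against one software endpoint and summarise the result."""
        os.environ["SOFTWARE_URL"] = software_url
        suite = unittest.TestLoader().loadTestsFromModule(mylib.tests)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        print("[{}] ran={} failures={} errors={}".format(
            label, result.testsRun, len(result.failures), len(result.errors)))
        return result.wasSuccessful()

    if __name__ == "__main__":
        # One entry per deployed software version (URLs are made up).
        endpoints = {
            "software-1.4": "http://localhost:8081",
            "software-2.0": "http://localhost:8082",
        }
        results = [run_suite(label, url) for label, url in endpoints.items()]
        raise SystemExit(0 if all(results) else 1)

That way tox would only need to vary the Python version, and each (Python, software) combination would get its own summary line instead of one undifferentiated wall of output.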
Thanks
I'm working on a big project involving more than 100 programmers. We work in a code-owner model, and each group works on its own code segment.
There are some rules that I have to enforce:
The code must be compatible with both Python 2 and Python 3
The code must be compatible with both Linux and Windows
In order to check rule number 1, I use futurize in my CI. That works fine to check the compatibility.
I also need a way to check the Linux/Windows rule. Are there any tools I can use for that? The only thing I have in mind right now is to use a Windows agent in my CI, but I would prefer a static analysis to enforce this rule.
Thanks in advance :)
The only way to ensure compatibility of code in a given environment is to test it thoroughly in said environment. Use a CI runner to execute your test suite with a test matrix. Popular CI services include GitHub Actions, GitLab CI, Jenkins, CircleCI, Travis CI, Drone, and many more; tox can complement them by driving the Python-version side of the matrix.
Here's an example from Celery's CI suite, based on GitHub Actions, which uses a matrix of Python versions and operating systems:
https://github.com/celery/celery/blob/master/.github/workflows/python-package.yml#L28-L30
Q: When creating a Python distribution using setup.py, how can I define Python code that will be run by pip at installation time (NOT at build time!) and that runs on the installation target machine (NOT on the build machine!)?
I have spent the past week searching the web for answers, reading contradictory documentation pages, and viewing multiple videos about setup.py, but in all this research I can't find even one working example of how install-time tasks can be specified.
Can someone point me at a complete working example?
Background: I am writing Python code for an application that controls a specialized USB peripheral my company is making. The processor where this will be installed is embedded/bundled with the peripheral and control software.
What's needed: During the installation of the controlling application, I need the installing program (pip?) to write a configuration file on the install target machine. This file needs to include machine-specific information about the target machine, acquired using calls to functions imported from Lib/platform.py.
What I tried: Everything I've tried so far either runs at build time on the build machine (when setup.py runs, and thus picks up the WRONG information for the target machine), or it merely installs the code I want to run on the target but does not run it. That requires manual intervention by the user after the pip installation, but before attempting to run the program they think they just installed, to run the auxiliary program that creates the installation config file. Only after this two-step process can the user actually run the installed (and now properly configured) application.
Source code: Sorry. All my failed attempts to put functions in setup.py (which only run on the build machine, at build time) would only further confuse any readers and encourage more misleading wild goose chases down pointless rat holes.
If my users were sophisticated Python developers who are comfortable with command-line error messages, the link that @sinoroc provided in the previous comment would have been an interesting solution.
Given that my users are barely comfortable installing packages from the App Store or Google Play store, the referenced workaround is probably not right for me.
But given that install-time functions are regarded as bad practice, my workaround is to alter the installed program so that its first action, every time it runs, is to check for the presence of the necessary configuration file.
While this check is seemingly unnecessary after the first run, it consumes only minimal CPU resources and makes the application more robust if the configuration file is ever accidentally deleted.
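In outline, that first-run check looks something like the sketch below; the config path and the particular platform fields are only examples, not what my application actually records:

    import json
    import os
    import platform

    # Illustrative location only; the real path depends on the application.
    CONFIG_PATH = os.path.expanduser("~/.myapp/config.json")

    def ensure_config():
        """Write the machine-specific config on first run (or if it was deleted)."""
        if os.path.exists(CONFIG_PATH):
            return
        os.makedirs(os.path.dirname(CONFIG_PATH), exist_ok=True)
        info = {
            # Gathered on the target machine, not the build machine.
            "system": platform.system(),
            "release": platform.release(),
            "machine": platform.machine(),
            "python_version": platform.python_version(),
        }
        with open(CONFIG_PATH, "w") as f:
            json.dump(info, f, indent=2)

    def main():
        ensure_config()
        # ... normal application start-up continues here ...

    if __name__ == "__main__":
        main()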
In trying to install a custom Python 3 environment on my webhost (Dreamhost), make fails because the webhost's process monitor sees the unit tests as taking too much CPU. While I am able to install the untested Python binaries with make install anyway, I would love to be able to do the build without it even trying to run the unit tests in the first place (mostly to avoid getting the "helpful" automated email from Dreamhost that suggests I upgrade to a VPS).
Since I'm only building stable releases of Python, it's pretty much guaranteed that the unit tests would all pass anyway. So, is there an option to Python's ./configure or make that will cause it to skip attempting to run the test suite?
Currently, Travis CI does not support multiple languages or custom jobs at all. I'm aware that I can install a second language in the before_install hook, though.
Let me explain my scenario:
I have a Python package which I currently unit test via Travis with language: python for multiple Python versions. Now I want to add an additional job which uses Docker to build and run a container that builds the Python package as a Debian package.
One option would be to just do it for every job, but that would slow down the test time significantly, so I want to avoid that.
Another option would be to work with environment variables set in the Travis build matrix: check whether a given env variable is set, and if so, run the Docker integration tests.
Both of those options seem rather bad and hacky.
Thus, what's a sane way of adding such a custom job to my Travis build matrix?
I've now solved this with the new (in beta) Build Stages feature. It's not exactly what I wanted, but it works for now.
See https://github.com/timofurrer/w1thermsensor/blob/master/.travis.yml for the .travis.yml and https://travis-ci.org/timofurrer/w1thermsensor/builds/243322310 for the example build.
I have built an application with Python.
How can I detect the minimum version of Python that my application needs?
Like Django: on its website it tells you the minimum version of Python required (for example: 2.6.6 and later).
In other words, I want to tell the user what minimum version of Python they should install on their system.
I know this is an old post, but I've recently started a project called Vermin that tries to detect the minimum Python versions required to run code. It will do what you request, @Mortezaipo.
There isn't really an automated way to check which features your code is using and correlate that with specific Python versions. If your code relies on fixed bugs or on new keyword arguments to existing functions, version detection gets harder still.
Generally speaking, you set the minimum version based on experience and knowledge of which features you are using; you can check the What's New documentation for each release.
If you have a comprehensive test suite, you could just run it on older Python versions; if all your tests pass, you support that older version.
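Once you've settled on a minimum, a common pattern is to fail fast at start-up with a clear message; a minimal sketch, where the (3, 6) threshold is only an example:

    import sys

    # Example threshold only; substitute the oldest version your tests actually pass on.
    MIN_PYTHON = (3, 6)

    if sys.version_info < MIN_PYTHON:
        sys.exit("This application requires Python {}.{} or later; "
                 "you are running {}.{}.{}.".format(
                     MIN_PYTHON[0], MIN_PYTHON[1], *sys.version_info[:3]))

For packaged code, the same lower bound can also be declared to pip via the python_requires argument to setup(), so that pip refuses to install the package on unsupported interpreters.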