While trying to install a custom Python 3 environment on my webhost (Dreamhost), I find that make fails because the webhost's process monitor decides the unit tests are using too much CPU. I can still install the untested Python binaries with make install anyway, but I would love to do the build without it even attempting to run the unit tests in the first place (mostly to avoid the "helpful" automated email from Dreamhost suggesting I upgrade to a VPS).
Since I'm only building stable releases of Python, it's pretty much guaranteed that the unit tests would all pass anyway. So, is there an option to Python's ./configure or make that will cause it to skip the test suite entirely?
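For what it's worth, on recent CPython source trees a plain make does not run the test suite at all; only make test does, or a build configured with --enable-optimizations, which uses the test suite as its PGO profiling workload. A minimal sketch of a build that avoids both (assuming a vanilla source tarball; the prefix is just an example):

./configure --prefix="$HOME/opt/python3"   # no --enable-optimizations, so no PGO test run
make -j4                                   # compiles only; does not invoke the test suite
make install                               # installs without running any tests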
Related
Unlike earlier Python 3 releases, building Python 3.10 from source nowadays does not seem to run the (time-consuming) tests.
I need to build Python 3.10 on an oldish platform (no, I can't change that). I would actually like to run the tests, even though they are time-consuming.
Unfortunately, I can't find a way to do it. Googling turns up irrelevant results (how to test your own code with unittest, etc.), and ./configure --help doesn't show anything relevant.
Have the tests been removed? If not, how can I enable them?
Building from source with make -j 4 prefix="/usr" usually ran the tests too; at least that's what I've observed.
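If the default build no longer exercises the tests, you can still invoke them explicitly; a minimal sketch (make test is the standard Makefile target, but verify the exact regrtest flags with ./python -m test --help on your tree):

make test                # run the full test suite via the Makefile
./python -m test -j4 -w  # or call the test runner directly: parallel, re-run failures verbosely

A build configured with --enable-optimizations will also run much of the suite as its PGO profiling task.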
How to detect which tests pytest needs to run?
Up to now I run all tests in CI. This is slow.
Isn't there a way to automatically detect which tests need to run, and only execute a fraction of the whole test suite?
Background: I am reading the Google SE book and read about their gigantic monorepo and that their tool Blaze can detect which tests need to run. Sounds very cool.
I had good experience with pytest-testmon which determines the subset of tests to execute on code change based on test coverage. The details on that are explained well in Determining affected tests section of testmon's docs, but the general advice is to maintain high line/branch coverage for testmon to be effective. This also means that it will fail where coverage fails.
If you want to use it in CI, you need to cache the .testmondata database file between test jobs and make sure the --testmon argument is used when invoking pytest. The details depend on your CI provider, but most popular ones offer caching mechanisms (GitHub, GitLab, CircleCI, AppVeyor, etc.).
testmon can also be combined effectively with pytest-watch (a watchdog-based file watcher) to keep a test-execution daemon running. An example command that reruns the affected subset of tests on code changes and sends desktop notifications on success or failure:
$ pytest-watch --runner="pytest --testmon" \
--onfail="notify-send --urgency=low -i error 'Tests Failed'" \
--onpass="notify-send --urgency=low -i info 'Tests passed'"
You could also use Bazel for building and testing; however, IMO it is overkill for a pure Python project and not worth adopting just for incremental testing.
You can use the pytest-incremental plugin for this. It analyses the import structure of the project via the AST and only runs the tests whose source dependencies have changed.
pip install pytest-incremental
python -m pytest --inc
Analyzing the graph, it is easy to see that a change in app would cause only test_app to be executed, and a change in util would cause all tests to be executed.
from the docs
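To make the quoted example concrete, here is a hypothetical layout matching the app/util modules mentioned in the docs; pytest-incremental builds its dependency graph from imports like these, so editing util.py affects both test files, while editing app.py affects only test_app.py:

# util.py -- shared helper (hypothetical example)
def normalize(s):
    return s.strip().lower()

# app.py -- imports util, so it depends on it
from util import normalize

def greet(name):
    return f"hello {normalize(name)}"

# test_util.py -- re-run when util.py changes
from util import normalize

def test_normalize():
    assert normalize("  Foo ") == "foo"

# test_app.py -- re-run when app.py or util.py changes
from app import greet

def test_greet():
    assert greet(" World ") == "hello world"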
You can add @pytest.mark markers to the tests:
import pytest

class TestExamples:
    @pytest.mark.SomeFeature
    @pytest.mark.Regression
    def test_example1(self):
        pass

    @pytest.mark.AnotherFeature
    @pytest.mark.Regression
    def test_example2(self):
        pass
And use them when triggering the tests:
pytest TestsFolder -m SomeFeature
This will run only test_example1; -m Regression will run both tests.
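The -m option also accepts boolean marker expressions, so (sticking with the example markers above) you can combine selections:

pytest TestsFolder -m "Regression and not AnotherFeature"   # runs only test_example1
pytest TestsFolder -m "SomeFeature or AnotherFeature"       # runs both tests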
Q: When creating a Python distribution using setup.py, how can I define Python code that will be run by pip at installation time (NOT at build time!) and that runs on the installation target machine (NOT on the build machine!)?
I have spent the past week searching the web for answers, reading contradictory documentation pages, and viewing multiple videos about setup.py, but in all this research I can't find even one working example of how install-time tasks can be specified.
Can someone point me at a complete working example?
Background: I am writing Python code for an application that controls a specialized USB peripheral my company is making; the processor where this will be installed is embedded/bundled with the peripheral and control software.
What's Needed: During the installation of the controlling application, I need the installing program (pip?) to write a configuration file on the install target machine. This file needs to include machine-specific information about the target machine, acquired via calls to functions imported from Lib/platform.py.
What I tried: Everything I've tried so far either runs at build time on the build machine (when setup.py runs, and thus picks up the WRONG information for the target machine), or it merely installs the code I want to run on the target without running it. The latter requires the user to intervene manually after the pip installation, but before running the program they think they just installed, to run the auxiliary program that creates the installation config file. Only after this two-step process can the user actually run the installed (and now properly configured) application.
Source code: Sorry. All my failed attempts to put functions in setup.py (which only run on the build machine, at build time) would only further confuse any readers and encourage more misleading wild goose chases down pointless rat holes.
If my users were sophisticated Python developers comfortable with command-line error messages, the link that @sinoroc provided in the previous comment would have been an interesting solution.
Given that my users are barely comfortable installing packages from the App Store or Google Play store, the referenced work around is probably not right for me.
But given that install-time functions are regarded as bad practice, my workaround is to alter the installed program so that its first action, every time it runs, is to check for the presence of the necessary configuration file and create it if it is missing.
While this check is seemingly unnecessary after the first run, it consumes only minimal CPU and makes the application more robust if the configuration file is ever accidentally deleted.
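For illustration, a minimal sketch of that startup check (the file location and field names are just examples); it gathers target-machine details with the standard platform module and only writes the file when it is missing:

import json
import platform
from pathlib import Path

CONFIG_PATH = Path.home() / ".myapp" / "config.json"   # hypothetical location

def ensure_config():
    """Create the machine-specific config on first run (or if it was deleted)."""
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text())
    config = {
        "hostname": platform.node(),
        "machine": platform.machine(),
        "platform": platform.platform(),
        "python_version": platform.python_version(),
    }
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(json.dumps(config, indent=2))
    return config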
Currently Travis CI does not support multiple languages or custom jobs at all. I'm aware that I can install a second language in the before_install hook, though.
Let me explain my scenario:
I have a Python package which I currently unit test via travis with language: python for multiple Python versions. Now I want to add an additional Job which uses docker to build and run a container to build the Python package as debian package.
One option would be to just do it for every job, but that would slow down the test time significantly, so I want to avoid that.
Another option would be to set environment variables in the Travis build matrix, check whether a given variable is set, and run the Docker integration tests only if it is.
Both of those options seem rather bad and hacky.
Thus, what's a sane way of adding such a custom job to my travis build matrix?
I've now solved my needs with the new "in Beta" Build Stages. It's not exactly what I wanted but it works for now.
See https://github.com/timofurrer/w1thermsensor/blob/master/.travis.yml for the .travis.yml and https://travis-ci.org/timofurrer/w1thermsensor/builds/243322310 for the example build.
I am working on an interface helper library for a piece of software. The software is on its own release cycle.
I have pretty solid unit tests, but I am not using mock and require the actual software to be installed to test fully. Testing is currently automated through Travis CI.
I want to be able to automatically test with multiple versions of Python (Travis is doing that now) and multiple versions of the software. I have set up a Vagrant box that, together with Ansible, deploys the required versions of the software. I have also included tox to test with multiple versions of Python; I am looking to test my supported versions of Python against each supported version of the software automatically.
Tox now runs a shell script that sets the URL of the software endpoint in the environment and runs through all the unit tests. However, at this point I can't tell exactly what failed, i.e. which version of the software and which version of Python; it still requires me to manually review a bunch of output.
I would like to write a Python script to manage the testing. Does anyone know how I can invoke a unittest class object from Python? Is there a better way to do this?
Thanks
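On the "invoke a unittest class from Python" part: one option is a small driver script that loads and runs a suite programmatically, so each run can be labelled with the Python version and software version it was executed against. A minimal sketch with hypothetical module/class names and an assumed SOFTWARE_URL environment variable:

import os
import sys
import unittest

from tests.test_interface import InterfaceTests   # hypothetical TestCase class

def run_suite(software_url):
    """Run the suite against one software endpoint and report the result."""
    os.environ["SOFTWARE_URL"] = software_url      # assumption: the tests read this variable
    suite = unittest.TestLoader().loadTestsFromTestCase(InterfaceTests)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful(), result.failures + result.errors

if __name__ == "__main__":
    ok, problems = run_suite(sys.argv[1])
    status = "OK" if ok else f"{len(problems)} failure(s)"
    print(f"Python {sys.version.split()[0]} against {sys.argv[1]}: {status}")
    sys.exit(0 if ok else 1)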