How to run tests while building Python 3.10+ from source?

Unlike earlier Python 3 releases, building Python 3.10 from source no longer seems to run the (time-consuming) test suite.
I need to build Python 3.10 on an oldish platform (no, I can't change that), and I would actually like to run the tests, even if they are time-consuming.
Unfortunately, I can't find a way to do it. Searching the web only turns up unrelated results (how to test your own code with unittest, etc.), and ./configure --help doesn't show anything relevant.
Have the tests been removed? If not, how can I enable them?

Building from source with make -j 4 prefix="/usr" usually runs the tests too. At least that's what I've observed.
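If plain make doesn't run them for you, recent CPython source trees still provide an explicit test target, so you can run the suite yourself once the build finishes (these are the standard CPython invocations, nothing 3.10-specific; the freshly built binary may be called python.exe on macOS):
make test
./python -m test -j4
The second form runs the test package directly with the freshly built interpreter; -j4 runs tests in parallel.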

Related

Building Python 3 without running unit tests

In trying to install a custom Python 3 environment on my webhost (Dreamhost), make fails because the webhost's process monitor sees the unit tests as taking too much CPU. While I am able to install the untested Python binaries with make install anyway, I would love to be able to do the build without it even trying to run the unit tests in the first place (mostly to avoid getting the "helpful" automated email from Dreamhost that suggests I upgrade to a VPS).
Since I'm only building stable releases of Python it's pretty much guaranteed that the unit tests would all pass anyway. So, is there an option to python's ./configure or make that will cause it to skip attempting to run the test suite?
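For what it's worth, on reasonably recent CPython trees the test suite is normally only run when you ask for it: a plain ./configure && make skips it, whereas ./configure --enable-optimizations runs a chunk of it as the PGO profiling task. If the optimized build is what trips the process monitor, a sketch of a test-free build (the prefix is just an example) would be:
./configure --prefix="$HOME/opt/python"
make
make install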

Python - packaging a source distribution

I'm currently writing a Python program and I want to distribute it to some end users (and developers). I would like to reduce the number of steps needed to run the program to a minimum.
My use case is relatively simple. I'd like the process/tool/whatever to:
A) Download the list of packages required for the application to work.
B) Run a list of Python scripts sequentially (e.g. create the database and then run migrations).
I understand that distlib does some of this already. However, I find the documentation kind of confusing: there seems to be an API to install scripts, but not one to execute them automatically.
Ideally I would specify a list of scripts, and a list of dependencies and have the program install them automatically.
Maybe the best way to tackle this would be to use make with a Makefile (https://www.gnu.org/software/make/).
Distlib, via the setup.py file, would help make it more readable by giving names to some of the Python scripts, and you could use make's target/dependency system to execute tasks sequentially.
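A bare-bones Makefile sketch of that idea (the target names are made up; the script names are the ones from the question):
.PHONY: install deps database migrate

install: migrate

deps:
	pip install -r requirements.txt

database: deps
	python create_database.py

migrate: database
	python run_migrations.py
Running make install then installs the dependencies, creates the database, and runs the migrations, in that order.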
If you want to stick to Python, you could also use Luigi (https://luigi.readthedocs.io/en/stable/), but it seems like overkill here.
OK, so I ended up writing my own thing, based on how I wanted the interface to look. The code that installs the application looks like this:
from installtools import setup
scripts = ['create_database.py', 'run_migrations.py']
setup("Shelob", "requirements.txt", scripts)
The full script can be found here: https://gist.github.com/fdemian/808c2b95b4521cd87268235e133c563f
Since pip doesn't have a public API (and isn't likely to get one in the near future), the script uses the subprocess module to call:
pip install -r [requirements_file_path]
After that, it calls the specified Python scripts one by one. While it is probably not very robust, as a stopgap solution it seems to do the trick.
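For reference, a minimal sketch of that subprocess-based approach (the file names are just examples, and error handling is omitted):
# Rough sketch: shell out to pip for the dependencies, then run each
# setup script in order with the same interpreter.
import subprocess
import sys

def install_requirements(requirements_path):
    # pip has no stable public API, so call it as a subprocess.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements_path])

def run_scripts(scripts):
    for script in scripts:
        subprocess.check_call([sys.executable, script])

if __name__ == "__main__":
    install_requirements("requirements.txt")
    run_scripts(["create_database.py", "run_migrations.py"])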

How to statically determine whether a source file supports python 3, for flake8?

How can I statically determine whether a Python source file supports Python 3?
The question may be impossible to answer as stated there, so I'll be more specific:
My application for this is to pick a version of flake8 to run that won't give spurious SyntaxErrors. For that reason, I'm looking for (at least) some heuristic that will tell me whether I should run a Python 3 copy of flake8, or a Python 2 copy.
I'm using flake8 from my editor, which shows me lint errors as I type. My main concern is that I sometimes lose naming errors (such as F821) as a side effect of pyflakes/flake8 treating something that is in fact correct syntax as a syntax error: when it hits a (perceived) syntax error, it understandably appears to give up on things like naming checks.
This is nigh impossible. There are way too many codepaths to test.
Moreover, code can be written to run on both Python 2 and 3, and flake8 doesn't always like the tricks used to make this possible, unless the project specifically tests with flake8 and has marked such sites to be excluded. So you could either have false positives (errors in both the Python 2 and Python 3 versions of flake8) or the code will simply work on Python 2 and 3 without any warnings.
You could use tox to manage version support for a given project instead; have tox figure out which flake8 commands to run (there may be more than one):
[tox]
envlist = py27,py35,flake8-27,flake8-35
# ...
[testenv:flake8-27]
basepython=python2.7
deps=flake8
commands=
flake8 projectdir
[testenv:flake8-35]
basepython=python3.5
deps=flake8
commands=
flake8 projectdir
and use tox -e flake8-27 or tox -e flake8-35.
If you only care about SyntaxError (which you specifically mention in your question), you can simply try to compile the file with Python 2 and 3:
python -m compileall yourfile.py
python3 -m compileall yourfile.py
If either of these commands fails, you at least know that the code does not work with that Python version. The reverse is of course not true: If the code compiles in a specific version of Python, that doesn't guarantee you that it will work correctly in that version of Python. It just tells you there are no SyntaxErrors.
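If you want to automate that check from Python itself, here is a small sketch (assuming both interpreters are on your PATH as python2 and python3):
# Try to byte-compile a file with each interpreter and report which ones
# accept it; a non-zero exit code from py_compile means a SyntaxError.
import subprocess
import sys

def syntax_ok(interpreter, path):
    result = subprocess.run([interpreter, "-m", "py_compile", path],
                            capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    source = sys.argv[1]
    for interp in ("python2", "python3"):
        status = "compiles" if syntax_ok(interp, source) else "SyntaxError (or interpreter missing)"
        print(interp, ":", status)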

How can I ensure good test coverage of my big Python project

I have a very large python project with a very large test suite. Recently we have decided to quantify the quality of our test-coverage.
I'm looking for a tool to automate the test coverage report generation. Ideally I'd like to have attractive, easy to read reports but I'd settle for less attractive reports if I could make it work quickly.
I've tried Nose, which is not good enough: it is incompatible with distribute/setuptools' namespace package feature. Unfortunately, nose coverage will never work for us, since we make abundant use of this feature. That's a real shame, because Nose seems to work really nicely in Hudson (mostly).
As an alternative, I've heard that there's a way to do a Python coverage analysis in Eclipse, but I've not quite locked-down the perfect technique.
Any suggestions welcome!
FYI we use Python 2.4.4 on Windows XP 32bit
Have you tried using coverage.py? It underlies "nose coverage", but can be run perfectly well outside of nose if you need to.
If you run your tests with (hypothetically) python run_my_tests.py, then you can measure coverage with coverage run run_my_tests.py, then get HTML reports with coverage html.
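If you'd rather drive it programmatically (for example from a Hudson job script), recent versions of coverage.py expose the same flow as an API; a rough sketch, where run_my_tests is a stand-in for your existing test entry point:
# Measure coverage around an existing test runner and write an HTML report.
import coverage

cov = coverage.Coverage()   # older releases spell this coverage.coverage()
cov.start()

import run_my_tests         # hypothetical module that drives your test suite
run_my_tests.main()

cov.stop()
cov.save()
cov.html_report(directory="htmlcov")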
From your description, I'm not sure what problem you had with nose, especially whether it was a nose issue, or a coverage.py issue. Provide some more details, and I'm sure we can work through them.
Ned has already mentioned his excellent coverage.py module.
If the problem you're having is something nose specific, you might want to consider using another test runner. I've used py.test along with the pytest_coverage plugin that lets you generate coverage statistics. It also has a pytest_nose plugin to help you migrate.
However, I don't understand exactly what the problem you're facing is. Can you elaborate a little on the "distribute / setuptools' namespace package feature" you mentioned? I'm curious to know what the problem is.

Is there something like 'autotest' for Python unittests?

Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist?
If not, it should be pretty easy to write. The easiest way would be to:
1. Run python-autotest myfile1.py myfile2.py etc.py.
2. Check whether the files to be monitored have been modified (possibly just whether they've been saved).
3. Run any tests in those files.
4. If a test fails but passed in the previous run, generate a growl alert; same for tests that fail and then pass.
5. Wait, and repeat steps 2-5.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save, but with slower tests this might take longer than the time between saves, and/or could use a lot of CPU power, etc.
The best way to do it would be to actually see which bits of code have changed: if function abc() has changed, only run the tests that interact with it. While this would be great, I think it would be extremely complex to implement.
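A bare-bones sketch of the simpler polling approach described above (file names come from the command line; the notification call is left as a stub):
# Re-run unittest whenever a watched file's mtime changes, and report
# pass/fail transitions (hook Growl or another notifier into notify()).
import os
import subprocess
import sys
import time

def notify(message):
    print(message)  # replace with a Growl/libnotify call

def run_tests(files):
    result = subprocess.run([sys.executable, "-m", "unittest"] + files)
    return result.returncode == 0

def watch(files, interval=2.0):
    mtimes = {f: os.path.getmtime(f) for f in files}
    last_passed = None
    while True:
        changed = [f for f in files if os.path.getmtime(f) != mtimes[f]]
        if changed:
            for f in changed:
                mtimes[f] = os.path.getmtime(f)
            passed = run_tests(files)
            if last_passed is not None and passed != last_passed:
                notify("Tests now %s" % ("PASS" if passed else "FAIL"))
            last_passed = passed
        time.sleep(interval)

if __name__ == "__main__":
    watch(sys.argv[1:])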
To summarise:
Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
How do you check which functions have changed between two revisions of a script?
Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback)
I found autonose to be pretty unreliable but sniffer seems to work very well.
$ pip install sniffer
$ cd myproject
Then instead of running "nosetests", you run:
$ sniffer
Or instead of nosetests --verbose --with-doctest, you run:
$ sniffer -x--verbose -x--with-doctest
As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries, pyinotify, pywin32 or MacFSEvents (all installable via pip etc)
autonose created by gfxmonk:
Autonose is an autotest-like tool for python, using the excellent nosetest library.
Autonose tracks filesystem changes and automatically re-runs any changed tests or dependencies whenever a file is added, removed or updated. A file counts as changed if it has itself been modified, or if any file it imports has changed.
...
Autonose currently has a native GUI for OSX and GTK. If neither of those is available to you, you can instead run the console version (with the --console option).
I just found this: http://www.metareal.org/p/modipyd/
I'm currently using thumb.py, but as my current project transitions from a small project to a medium sized one, I've been looking for something that can do a bit more thorough dependency analysis, and with a few tweaks, I got modipyd up and running pretty quickly.
Guard is an excellent tool that monitors for file changes and triggers tasks automatically. It's written in Ruby, but it can be used as a standalone tool for any task like this. There's a guard-nosetests plugin to run Python tests via nose.
Guard supports cross-platform notifications (Linux, OSX, Windows), including Growl, as well as many other great features. One of my can't-live-without dev tools.
One very useful tool that can make your life easier is entr. It's written in C and uses kqueue or inotify under the hood.
The following command runs your test suite whenever any *.py file in your project changes:
ls */**.py | entr python -m unittest discover -s test
Works for BSD, Mac OS, and Linux. You can get entr from Homebrew.
Maybe buildbot would be useful http://buildbot.net/trac
For your third question, maybe the trace module is what you need:
>>> def y(a): return a*a
>>> def x(a): return y(a)
>>> import trace
>>> tracer = trace.Trace(countfuncs = 1)
>>> tracer.runfunc(x, 2)
4
>>> res = tracer.results()
>>> res.calledfuncs
{('<stdin>', '<stdin>', 'y'): 1, ('<stdin>', '<stdin>', 'x'): 1}
res.calledfuncs contains the functions that were called. If you specify countcallers = 1 when creating the tracer, you can get caller/callee relationships. See the docs of the trace module for more information.
You can also try to get the calls via static analysis, but this can be dangerous due to the dynamic nature of Python.
Django's development server has a file change monitor that watches for modifications and automatically reloads itself. You could re-use this code to launch unit tests on file modification.
Maybe Nose (http://somethingaboutorange.com/mrl/projects/nose/) has a suitable plugin: http://somethingaboutorange.com/mrl/projects/nose/doc/writing_plugins.html
Found this: http://jeffwinkler.net/2006/04/27/keeping-your-nose-green/
You can use nodemon for this task by watching .py files and executing manage.py test. The command would be: nodemon --ext py --exec "python manage.py test".
nodemon is an npm package, however, so I'm assuming you have Node installed.
Check out pytddmon. Here is a video demonstration of how to use it:
http://pytddmon.org/?page_id=33
