This is my tox.ini file:
# Tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
#
# See also https://tox.readthedocs.io/en/latest/config.html for more
# configuration options.
[tox]
# Choose your Python versions. They have to be available
# on the system the tests are run on.
# skipsdist=True
ignore_basepython_conflict=false
[testenv:{setup,lint,codestyle,docstyle,tests,doc-linux,doc-darwin,doc-win32}]
basepython=python3.9
envdir = {toxworkdir}/py39
setenv =
    PROJECT_NAME = project_name
passenv =
    WINDIR
install_command=
    pip install \
        --find-links=pkg \
        --trusted-host=pypi.python.org \
        --trusted-host=pypi.org \
        --trusted-host=files.pythonhosted.org \
        {opts} {packages}
platform = doc-linux: linux
    doc-darwin: darwin
    doc-win32: win32
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt
commands =
    setup: python -c "print('All SetUp')"
    # Mind the gap, use a backslash :)
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     --extension-pkg-whitelist=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-modules=PyQt5,numpy,torch,cv2,boto3 \
    lint:     --ignored-classes=PyQt5,numpy,torch,cv2,boto3 \
    lint:     project_name \
    lint:     {toxinidir}/script
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint:     demo/demo_file.py
    codestyle: pycodestyle --max-line-length=100 \
    codestyle:     --exclude=project_name/third_party/* \
    codestyle:     project_name demo script
    docstyle: pydocstyle \
    docstyle:     --match-dir='^((?!(third_party|deprecated)).)*' \
    docstyle:     project_name demo script
    doc-linux: make -C {toxinidir}/doc html
    doc-darwin: make -C {toxinidir}/doc html
    doc-win32: {toxinidir}/doc/make.bat html
    tests: python -m pytest -v -s --cov-report xml --durations=10 \
    tests:     --cov=project_name --cov=script \
    tests:     {toxinidir}/test
    tests: coverage report -m --fail-under 100
On tox < 4.0 it was very convenient to run tox -e lint to fix linting issues, or tox -e codestyle to fix code style issues, etc. But now, with tox > 4.0, each time I run one of these commands I get this message (for instance):
codestyle: recreate env because env type changed from {'name': 'lint', 'type': 'VirtualEnvRunner'} to {'name': 'codestyle', 'type': 'VirtualEnvRunner'}
codestyle: remove tox env folder .tox/py39
And it takes forever to run these commands, since the environments are recreated each time...
I also use this structure for running tests on Jenkins, so I can map each of these commands to a Jenkins stage.
How can I reuse the environment? I have read that it is possible to do it using plugins, but I have no idea how that can be done, or how to install/use plugins.
I have tried this:
tox multiple tests, re-using tox environment
But it does not work in my case.
I expect to reuse the environment for each of the environments defined in the tox file.
As an addition to N1ngu's excellent answer...
You could restructure your tox.ini as follows:
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
<here goes the lint specific configuration>
[testenv:codestyle]
...
And so on. This is a common setup.
While the environments still need to be created at least once, they won't be recreated on each invocation.
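For instance, a minimal sketch based on your own environments (trimmed to a few of them for brevity; the options are copied from your file) could look like:
[testenv]
basepython = python3.9
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt
[testenv:lint]
commands =
    pylint -f parseable -r n --disable duplicate-code project_name {toxinidir}/script
[testenv:codestyle]
commands =
    pycodestyle --max-line-length=100 project_name demo script
Each section gets its own environment directory under .tox/, so switching between tox -e lint and tox -e codestyle no longer triggers a recreation.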
This all said, you could also have a look at https://pre-commit.com/ to run your linters, which is very common in the Python community.
Then you would have a tox.ini like the following...
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
deps = pre-commit
commands = pre-commit run --all-files
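A matching .pre-commit-config.yaml might look roughly like this (the repos, hook ids and revs below are illustrative, not taken from your setup; pin whatever tags you actually want):
repos:
  - repo: https://github.com/pycqa/pylint
    rev: v3.0.3    # illustrative tag
    hooks:
      - id: pylint
  - repo: https://github.com/pycqa/pydocstyle
    rev: 6.3.0     # illustrative tag
    hooks:
      - id: pydocstyle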
There is now a definitive answer about re-use of environments in the FAQ:
https://tox.wiki/en/latest/upgrading.html#re-use-of-environments
I fear the generative names + factor-specific commands solution you linked relied on tox 3 not auto-recreating the environments by default, and that auto-recreation is among the new features in tox 4. Environment recreation can now be forced (--recreate) but can't be opted out of.
The official answer on this, https://github.com/tox-dev/tox/issues/425, boils down to:
Officially we don't allow sharing tox environments at the moment [...] As of today, each tox environment has to have it's own virtualenv even if the Python version and dependencies are identical [...] We'll not plan to support this. However, tox 4 allows one to do this via a plugin, so we'd encourage people [...] to try it [...]. Once the project is stable and widely used we can revisit accepting it in core.
So that's it: write a plugin. I have no idea how to do that either, so my apologies if this turns out to be "not an answer".
Related
When I create a tox environment, some libraries are installed under different paths depending on the environment that I use to trigger tox:
# Tox triggered inside virtual env
.tox/lib/site-packages
Sometimes
# Tox triggered inside docker
.tox/lib/python3.8/site-packages
I need to reuse that path in further steps inside the tox env. I decided to create a bash script that finds the path of the installed libraries so that I can reuse it, and to run that script inside the tox env. I thought that I could pass the found path back to tox and reuse it in one of the next commands. Is it possible to do such a thing?
I tried:
tox.ini
[tox]
envlist =
    docs
min_version = 4
skipsdist = True
allowlist_externals = cd
passenv =
    HOMEPATH
    PROGRAMDATA
basepython = python3.8
[testenv:docs]
changedir = docs
deps =
    -r some_path/library_name/requirements.txt
commands =
    my_variable=$(bash ../docs/source/script.sh)
    sphinx-apidoc -f -o $my_variable/source $my_variable
But apparently this doesn't work with tox:
docs: commands[0] docs> my_variable=$(bash ../docs/source/script.sh)
docs: exit 2 (0.03 seconds) docs> my_variable=$(bash ../docs/source/script.sh)
docs: FAIL code 2 (0.20=setup[0.17]+cmd[0.03] seconds)
evaluation failed :( (0.50 seconds)
Bash script
script.sh
#!/bin/bash
tox_env_path="../.tox/docs/"
conf_source="source"
tox_libs=$(find . $tox_env_path -type d -name "<name of library>")
sudo mkdir -p $tox_libs/docs/source
cp $conf_source/conf.py $conf_source/index.rst $tox_libs/docs/source
echo $tox_libs
Not a direct answer to your question, but maybe something like this can help you achieve the actual goal (XY problem):
[testenv:docs]
# ...
allowlist_externals =
    bash
commands =
    python -c 'print("The path is: {env_site_packages_dir}")'
    bash ../docs/source/script.sh
    sphinx-apidoc -f -o {env_site_packages_dir}/LibName/source {env_site_packages_dir}/LibName
Indeed, it seems to me that it is unnecessary to compute the path in the bash script and then try to read it back into a variable.
1. As far as I know, it is not possible to assign values to a variable the way you suggest in your question; tox.ini does not allow it.
2. On the other hand, tox.ini allows substitutions, and in particular {env_site_packages_dir} seems helpful for your use case; see the sketch below.
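For illustration, a small sketch of how the substitutions could replace the bash lookup entirely (LibName and the docs layout are placeholders taken from the example above):
[testenv:docs]
changedir = docs
commands =
    python -c 'print("site-packages: {env_site_packages_dir}")'
    sphinx-apidoc -f -o {toxinidir}/docs/source {env_site_packages_dir}/LibName
Whether tox was triggered from a virtualenv or from docker, {env_site_packages_dir} resolves to the site-packages directory of the docs environment itself, so the find call in script.sh becomes unnecessary.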
I am running a python application that reads two paths from Windows environment variables and then uses the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT env vars are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app, and, to my understanding, since my container will need access to those paths, I need to mount them using VOLUME during run. My dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets None for these paths and can't use the executables. Is my approach correct, and beyond that, is it good dev practice?
See the docs; the switch for environment variables is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in a Dockerfile you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same paths as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables; these will not exist unless you pass them on from your host system to your container.
Once you have verified that the volumes mounted with -v are there correctly, you can try
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run it this way, you can consider putting them in your Dockerfile to save some keystrokes.
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y \
      poppler-utils \
      tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code if these environment variables aren't set. The apt-get install command will put them in the system path inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables, you could use the Dockerfile ENV directive:
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers) so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
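As a sketch of that last point, the code could fall back to bare command names and verify that they exist on the container's PATH (the names here are assumptions for illustration; poppler-utils installs binaries such as pdftoppm rather than a single poppler executable):
import os
import shutil
import subprocess

# Fall back to commands that apt-get put on the PATH inside the image
popplerPath = os.environ.get('POPPLER', 'pdftoppm')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')

for exe in (popplerPath, tesseractPath):
    if shutil.which(exe) is None:
        raise RuntimeError("%s not found on PATH; is it installed in the image?" % exe)

# Quick sanity check that tesseract actually runs
print(subprocess.run([tesseractPath, '--version'], capture_output=True, text=True).stdout)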
In Node, you can define a package.json and then define a script block like the following:
"scripts": {
"start": "concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\"",
},
So from the root directory I can just do yarn start to run concurrently -k -r -s first "yarn test:watch" "yarn open:src" "yarn lint:watch".
What is the equivalent of that in Python 3? For example, if I want a test script that runs python -m unittest discover -v.
Use make, it's great.
Create a Makefile and add some targets to run specific shell commands:
install:
    pip install -r requirements.txt

test:
    python -m unittest discover -v

# and so on, you got the idea
Run it with (assuming that the Makefile is in the current dir):
make test
NOTE: if you want to run several commands in the same environment from within a target, do it like this:
install:
    source ./venv/bin/activate; \
    pip install -r requirements.txt; \
    echo "do other stuff after in the same environment"
The key is the ; \, which chains the commands so that make executes them in a single shell invocation. The space in ; \ is just for aesthetics.
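Another option, if you have GNU make 3.82 or newer, is .ONESHELL, which runs all the lines of a recipe in one shell, so the trailing ; \ is no longer needed. A small sketch (SHELL is overridden because plain sh has no source builtin):
SHELL := /bin/bash
.ONESHELL:

install:
    source ./venv/bin/activate
    pip install -r requirements.txt
    echo "still the same shell, still the same venv"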
Why don't you just use pipenv? It is Python's npm, and you can add a [scripts] section to your Pipfile, very similar to npm's.
See this other question to discover more: pipenv stack overflow question
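For example, a Pipfile could carry something like this (the script names and commands are just an illustration):
[scripts]
test = "python -m unittest discover -v"
start = "python main.py"
which you then run with pipenv run test or pipenv run start.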
Not the best solution really. This totally works if you are already familiar with npm, but like others have suggested, use makefiles.
Well, this is a workaround, but apparently you can just use npm if you have it installed. I created a package.json file in the root directory of the python app.
{
  "name": "fff-connectors",
  "version": "1.0.0",
  "description": "fff project to UC Davis",
  "directories": {
    "test": "tests"
  },
  "scripts": {
    "install": "pip install -r requirements.txt",
    "test": "python -m unittest discover -v"
  },
  "keywords": [],
  "author": "Leo Qiu",
  "license": "ISC"
}
Then I can just use npm install or yarn install to install all dependencies, and yarn test or npm test to run the test scripts.
You can also add preinstall and postinstall hooks; for example, you may need to remove files or create folder structures.
Another benefit is that this setup allows you to use any npm library, such as concurrently, so you can run multiple files together, etc.
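For example, the hooks follow npm's lifecycle naming, so a scripts block could look like this (the commands are only placeholders):
"scripts": {
  "preinstall": "rm -rf build",
  "install": "pip install -r requirements.txt",
  "postinstall": "mkdir -p logs",
  "test": "python -m unittest discover -v"
}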
To answer specifically for tests: create a setup.py like this within your package/folder:
from setuptools import setup
setup(name='Your app',
      version='1.0',
      description='A nicely tested app',
      packages=[],
      test_suite="test")
Files are structured like this:
my-package/
| setup.py
| test/
| some_code/
| some_file.py
Then run python ./setup.py test to run the tests. You need to install setuptools as well (by default you can use the distutils.core setup function, but it doesn't include as many options).
I want to use a makefile to build my project's environment with anaconda/miniconda, so I should be able to clone the repo and simply run make myproject.
myproject: build

build:
    @printf "\nBuilding Python Environment\n"
    @conda env create --quiet --force --file environment.yml
    @source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes), but it doesn't work for me either:
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work either, and I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL=/bin/bash
CONDAROOT = /my/path/to/miniconda2
.
.
install: sometarget
    source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
You should use this; it's working for me at the moment.
report.ipynb : merged.ipynb
    ( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
      jupyter nbconvert \
        --to notebook \
        --ExecutePreprocessor.kernel_name=python2 \
        --ExecutePreprocessor.timeout=3000 \
        --execute merged.ipynb \
        --output=$< $<" )
I had the same problem. Essentially the only solution is the one stated by 9000. I have a setup shell script inside which I set up the conda environment (source activate python2) and then call the make command. I experimented with setting up the environment from inside the Makefile, with no success.
I have this line in my makefile:
installpy :
    ./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda, to which I have write access, but this is not picked up by the make process. I don't understand why the environment set up in my shell script using 'source' is not visible in the make process; the source command is supposed to change the current shell. I just want to share this so that other people don't waste time trying to do the same. I know autotools has a way of working with python, but the make program is probably limited in this respect.
My current solution is a shell script:
cat py2make.sh
#!/bin/sh
# the prefix should be change to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me.
There was a solution for a similar situation, but it does not seem to work for me:
My modified Makefile segment:
installpy :
    ( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
Not sure where I am wrong. If anyone has a better solution, please share it.
I am trying to install the requirements for each project in a list automatically, each into its own virtualenv. I have gotten to the point of making the virtualenv correctly, but I cannot get it to activate and install the requirements into only that virtualenv:
#!/usr/bin/env python
import subprocess, sys, time, os
HOMEPATH = os.path.expanduser('~')
CWD = os.getcwd()
d = {'cwd': ''}
if len(sys.argv) == 2:
    projects = sys.argv[1:]

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

def my_makedirs(path):
    if not path.startswith('/home/cchilders'):
        path = os.path.join(HOMEPATH, path)
    try: os.makedirs(path)
    except: pass

for project in projects:
    path = os.path.join(CWD, project)
    my_makedirs(path)
    git_string = 'git clone git@bitbucket.org:codyc54321/{}.git {}'.format(project, d['cwd'])
    call_sp(git_string)

    d = {'executable': 'bash'}
    call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
    # call_sp("""source /usr/local/bin/virtualenvwrapper.sh && workon {}""".format(project), **d)
    # below, the dot (.) means the same as 'source'. the dot doesn't error, calling source does
    call_sp('. /home/cchilders/.virtualenvs/{}/bin/activate'.format(project))

    d = {'cwd': path}
    call_sp("pip install -r requirements.txt", **d)
It works up to
call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
but when the script ends, I am not active in the venv, and the venv does not have any of the packages from requirements. Both attempts to source the venv (the commented-out one and the live one) fail.
The answer that helped me get the mkvirtualenv to work is subprocess.Popen: mkvirtualenv not found.
I also noticed I need to do more than just pip install; in one case I need to run 'python setup.py mycommand', which automates setup for each project. How can I run commands as if a virtualenv were activated, and also install dependencies into arbitrary venvs, from a python script?
The only way I've found around this is turning the virtualenv on by hand and then calling my python script by hand. I was surprised that turning it on from bash worked, while calling the python script bombed (maybe because it's a different process than the bash one).
Thank you
This is because each call_sp call creates a new shell, so after the first call to call_sp ends, all the settings created by sourcing virtualenvwrapper are gone. You have to combine all your commands into a single call_sp chain. Alternatively, you can start a shell using Popen and feed commands to it using communicate.
If you go with the latter, you need to be careful with synchronizing and detecting when the installation of requirements ends. Pip can take a long time downloading and installing packages with complex dependencies.
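A minimal sketch of the first option, reusing the call_sp helper from your script (project and path exactly as in your loop):
# One shell for the whole chain, so the sourced virtualenvwrapper functions
# and the activated env survive until pip runs.
call_sp(
    "source /usr/local/bin/virtualenvwrapper.sh && "
    "mkvirtualenv --no-site-packages {} && "
    "pip install -r requirements.txt".format(project),
    executable='bash', cwd=path
)
Note that the activation only exists inside that one child shell; once the call returns, the parent python process (and your terminal) is unchanged, which is also why the script can never leave you "inside" the venv afterwards.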
This is the way I have done this kind of bootstrapping for virtual environments: let the script take care of its own env and just run the script. Running this app.py will set up its VE and modules if they are missing.
./requirements.txt file
flask
./app.py script
#!/bin/bash
""":"
VENV=$(realpath -s $(dirname $0)/ve)
PYTHON=$VENV/bin/python
if [ ! -f "$PYTHON" ]; then
    echo "installing env app"
    python3 -m venv $VENV
    ${VENV}/bin/pip install -r $(dirname $0)/requirements.txt
fi
exec $PYTHON $0 $@
"""
import flask
print("I am Python with flask", flask)
No matter what dir we are in, app.py bootstraps through the bash script header, installing a ve if the python interpreter does not exist, running pip, and whatever else you need. Then exec $PYTHON $0 $@ is a slick way to swap out the bash process for the python process while keeping the same pid.
When python takes over, it skips the bash part because that script is inside a triple-quoted string, so the first line python actually executes is import flask (well, it first evaluates and discards the bash script string). Another cool thing is that the pid of the bash process is the same as the pid of the python process, so any daemon utility that babysits this will still see the pid it started.
The last trick is that bash needs one extra quote to balance the """:" string at the top; Python does not care about that extra quote.
I hope you see the pattern. To upgrade modules in requirements.txt, just rm the ve and run the app again. Simple.