In Node, you can define a package.json and then add a scripts block like the following:
"scripts": {
"start": "concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\"",
},
So from the root directory, I can just run yarn start to execute concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\".
What is the equivalent of that in Python 3? For instance, I want a test script that runs python -m unittest discover -v.
Use make, it's great.
Create a Makefile and add some targets to run specific shell commands:
install:
	pip install -r requirements.txt

test:
	python -m unittest discover -v
# and so on, you get the idea
Run with (assuming the Makefile is in the current directory):
make test
NOTE: if you want to run several commands in the same environment from within a single target, do it like this:
install:
	source ./venv/bin/activate; \
	pip install -r requirements.txt; \
	echo "do other stuff after in the same environment"
The key is the ; \ at the end of each line: it joins the commands, so make executes them as a single shell invocation rather than one shell per line. The space in ; \ is just for aesthetics.
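To get closer to the npm start example from the question, you can also give a target prerequisites and let make run them. A minimal sketch (the lint command and target names are placeholders, adapt them to your project):

.PHONY: install test lint start

install:
	pip install -r requirements.txt

test:
	python -m unittest discover -v

lint:
	flake8 .

# rough stand-in for the concurrently-based npm "start":
# run `make -j start` and make executes the prerequisite targets in parallel
start: test lint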
Why don't you just use pipenv? It is Python's npm, and you can add a [scripts] section to your Pipfile, very similar to the one in npm's package.json.
See this other question to learn more: pipenv stack overflow question
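For reference, a minimal Pipfile sketch with a [scripts] section might look like this (the script names and main.py are placeholders); you then run them with pipenv run test, pipenv run start, and so on:

[scripts]
test = "python -m unittest discover -v"
start = "python main.py"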
Not the best solution, really. This totally works if you're already familiar with npm, but like others have suggested, use makefiles.
Well, this is a workaround, but apparently you can just use npm if you have it installed. I created a package.json file in the root directory of my Python app:
{
  "name": "fff-connectors",
  "version": "1.0.0",
  "description": "fff project to UC Davis",
  "directories": {
    "test": "tests"
  },
  "scripts": {
    "install": "pip install -r requirements.txt",
    "test": "python -m unittest discover -v"
  },
  "keywords": [],
  "author": "Leo Qiu",
  "license": "ISC"
}
Then I can just use npm install or yarn install to install all dependencies, and npm test or yarn test to run the test script.
You can also add preinstall and postinstall hooks. For example, you may need to remove files or create folder structures.
Another benefit is that this setup lets you use npm libraries like concurrently, so you can run multiple scripts together, etc.
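For example, a hypothetical scripts block with lifecycle hooks and concurrently could look like this (the clean-up command, folder names and the second watch command are placeholders):

"scripts": {
  "preinstall": "rm -rf build",
  "install": "pip install -r requirements.txt",
  "postinstall": "mkdir -p data/output",
  "test": "python -m unittest discover -v",
  "watch": "concurrently \"python -m unittest discover -v\" \"python -m http.server 8000\""
}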
To answer specifically for tests: create a setup.py like this within your package/folder:
from setuptools import setup

setup(name='Your app',
      version='1.0',
      description='A nicely tested app',
      packages=[],
      test_suite="test"
      )
Files are structured like this:
my-package/
| setup.py
| test/
| some_code/
| some_file.py
Then run python ./setup.py test to run the tests. You need to have setuptools installed as well (by default you could use the distutils.core setup function instead, but it doesn't offer as many options).
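For test_suite="test" to pick anything up, the test directory typically needs to be an importable package containing standard unittest cases; a minimal sketch (file and test names are illustrative):

# test/__init__.py can be empty
# test/test_example.py
import unittest

class ExampleTest(unittest.TestCase):
    def test_something(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()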
Related
This is my tox.ini file:
# Tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
#
# See also https://tox.readthedocs.io/en/latest/config.html for more
# configuration options.
[tox]
# Choose your Python versions. They have to be available
# on the system the tests are run on.
# skipsdist=True
ignore_basepython_conflict=false
[testenv:{setup,lint,codestyle,docstyle,tests,doc-linux,doc-darwin,doc-win32}]
basepython=python3.9
envdir = {toxworkdir}/py39
setenv =
    PROJECT_NAME = project_name
passenv =
    WINDIR
install_command=
    pip install \
        --find-links=pkg \
        --trusted-host=pypi.python.org \
        --trusted-host=pypi.org \
        --trusted-host=files.pythonhosted.org \
        {opts} {packages}
platform = doc-linux: linux
    doc-darwin: darwin
    doc-win32: win32
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt
commands =
    setup: python -c "print('All SetUp')"
    # Mind the gap, use a backslash :)
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint: --extension-pkg-whitelist=PyQt5,numpy,torch,cv2,boto3 \
    lint: --ignored-modules=PyQt5,numpy,torch,cv2,boto3 \
    lint: --ignored-classes=PyQt5,numpy,torch,cv2,boto3 \
    lint: project_name \
    lint: {toxinidir}/script
    lint: pylint -f parseable -r n --disable duplicate-code \
    lint: demo/demo_file.py
    codestyle: pycodestyle --max-line-length=100 \
    codestyle: --exclude=project_name/third_party/* \
    codestyle: project_name demo script
    docstyle: pydocstyle \
    docstyle: --match-dir='^((?!(third_party|deprecated)).)*' \
    docstyle: project_name demo script
    doc-linux: make -C {toxinidir}/doc html
    doc-darwin: make -C {toxinidir}/doc html
    doc-win32: {toxinidir}/doc/make.bat html
    tests: python -m pytest -v -s --cov-report xml --durations=10 \
    tests: --cov=project_name --cov=script \
    tests: {toxinidir}/test
    tests: coverage report -m --fail-under 100
On tox<4.0 it was very convenient to run tox -e lint to fix linting issues, or tox -e codestyle to fix code-style issues, etc. But now, with tox>=4.0, each time I run one of these commands I get this message (for instance):
codestyle: recreate env because env type changed from {'name': 'lint', 'type': 'VirtualEnvRunner'} to {'name': 'codestyle', 'type': 'VirtualEnvRunner'}
codestyle: remove tox env folder .tox/py39
And it takes forever to run these commands, since the environments are recreated each time ...
I also use this structure for running tests on Jenkins, so I can map each of these commands to a Jenkins stage.
How can I reuse the environment? I have read that it is possible to do it using plugins, but I have no idea how this can be done, or how to install/use plugins.
I have tried this:
tox multiple tests, re-using tox environment
But it does not work in my case.
I expect to reuse the environment for each of the environments defined in the tox file.
As an addition to N1ngu's excellent answer...
You could restructure your tox.ini as follows:
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
<here goes the lint specific configuration>
[testenv:codestyle]
...
And so on. This is a common setup.
While the environments still need to be created at least once, they won't get recreated on each invocation.
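As a concrete sketch of that layout (the deps and commands are illustrative, loosely based on the question's configuration):

[tox]
envlist = lint, codestyle, tests

[testenv]
basepython = python3.9
deps =
    -r{toxinidir}/requirements-dev.txt
    -r{toxinidir}/requirements.txt

[testenv:lint]
commands = pylint -f parseable -r n project_name

[testenv:codestyle]
commands = pycodestyle --max-line-length=100 project_name

[testenv:tests]
commands = python -m pytest -v {toxinidir}/test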
This all said, you could also have a look at https://pre-commit.com/ to run your linters, which is very common in the Python community.
Then you would have a tox.ini like the following...
[tox]
...
[testenv]
<here goes all the common configuration>
[testenv:lint]
deps = pre-commit
commands = pre-commit run --all-files
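And a minimal .pre-commit-config.yaml sketch to go with it (the hook repos and rev values are just examples, pin rev to whatever versions you actually use):

repos:
  - repo: https://github.com/pycqa/pylint
    rev: v2.17.4
    hooks:
      - id: pylint
  - repo: https://github.com/pycqa/pydocstyle
    rev: 6.3.0
    hooks:
      - id: pydocstyle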
There is now a definitive answer about re-use of environments in the FAQ:
https://tox.wiki/en/latest/upgrading.html#re-use-of-environments
I fear the generative-names + factor-specific-commands solution you linked relied on tox 3 not auto-recreating the environments by default, and that auto-recreation is among the new behaviours in tox 4. Now, environment recreation is something that can be forced (--recreate) but can't be opted out of.
The official answer on this (https://github.com/tox-dev/tox/issues/425) boils down to:
Officially we don't allow sharing tox environments at the moment [...] As of today, each tox environment has to have it's own virtualenv even if the Python version and dependencies are identical [...] We'll not plan to support this. However, tox 4 allows one to do this via a plugin, so we'd encourage people [...] to try it [...]. Once the project is stable and widely used we can revisit accepting it in core.
So that's it: write a plugin. I have no idea how to do that either, so my apologies if this turns out to be "not an answer".
We're using Azure DevOps at work and have used the Artifacts feed in there to share Python packages internally which is lovely.
I've been using WSL2 and artifacts-keyring to authenticate with DevOps and a pip.conf file to specify the feed URL as instructed in https://learn.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-cli?view=azure-devops#consume-python-packages which works great.
To develop Python and keep dependencies isolated while still having access to the private feed and authentication I've used Azure Devops Artifacts Helpers with virtualenv which have also worked like a charm.
Now we're trying more and more to use devcontainers to get even more isolation and ease of setup for new developers.
I've searched far and wide for a way to get access to the pip.conf URLs and the artifacts-keyring authentication inside of my devcontainer. Is there any way that I can provide my container with these? I've tried all the different solutions I can find on Google, but none of them work seamlessly and without PATs.
I do not want to use any PAT since I've already authenticated in WSL2.
I'm using WSL2 as the host, i.e. I'm cloning the repo in WSL2 and then starting VS Code and the devcontainer from there.
Is there anything related to keyring which I can mount inside the container so that it will see that the authentication is already done?
I could live with providing a copy of the pip.conf inside my repo, which I could copy to the container on build, but having to authenticate each time I rebuild my container is too much, and so is using a PAT.
Kind Regards
Carl
I ran into the same problem today. The trouble is that the token cache file, $HOME/.local/share/MicrosoftCredentialProvider/SessionTokenCache.dat, is written to storage local to the container, which gets reset each time the devcontainer is rebuilt. This means we have to click the https://microsoft.com/devicelogin link and log in again in the browser every time we rebuild the container, which is a huge time waster.
I was able to resolve this by mounting my host's $HOME/.local/share/ into my devcontainer, so the SessionTokenCache.dat can survive past the rebuild. This is done by adding the following config in your devcontainer.json:
"mounts": [
"source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
],
This assumes you have "remoteUser": "vscode" in your devcontainer.json; otherwise the home location in the target will need adjusting.
If you are using a Python devcontainer image, you may get an error that dotnet is a missing dependency for artifacts-keyring, but this can be resolved by adding a features configuration for dotnet to your devcontainer.json:
"features": {
"dotnet": {
"version": "latest",
"runtimeOnly": false
}
},
If you are also transitioning from using pip.conf without a venv to now having one, the next problem you may run into is that, when using a venv, the pip.conf has to exist inside the .venv folder (you may have customized this folder name). For this I run a simple cp ./pip.conf ./.venv/pip.conf to copy the file from the root of my checkout into my .venv folder.
My full devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3
{
  "name": "Python 3",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    "args": {
      // Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
      // Append -bullseye or -buster to pin to an OS version.
      // Use -bullseye variants on local on arm64/Apple Silicon.
      "VARIANT": "3.8",
      // Options
      "NODE_VERSION": "lts/*"
    }
  },
  "features": {
    "dotnet": {
      "version": "latest",
      "runtimeOnly": false
    }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
    "python.testing.pytestEnabled": true,
    "python.testing.pytestPath": "${workspaceFolder}/.venv/bin/pytest",
    "python.testing.pytestArgs": [
      "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.envFile": "${workspaceFolder}/src/.env_local",
    "python.linting.enabled": false,
    "python.linting.pylintEnabled": false,
    "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
    "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
    "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
    "python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
    "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
    "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
    "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
    "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
    "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
    "azureFunctions.deploySubpath": "${workspaceFolder}/src/api",
    "azureFunctions.scmDoBuildDuringDeployment": true,
    "azureFunctions.pythonVenv": "${workspaceFolder}/.venv",
    "azureFunctions.projectLanguage": "Python",
    "azureFunctions.projectRuntime": "~3",
    "azureFunctions.projectSubpath": "${workspaceFolder}/src/api",
    "debug.internalConsoleOptions": "neverOpen"
  },
  "runArgs": ["--env-file","${localWorkspaceFolder}/src/.env_local"],
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-python.python",
    "ms-python.vscode-pylance",
    "ms-azuretools.vscode-azurefunctions",
    "ms-vscode.azure-account",
    "ms-azuretools.vscode-docker",
    "DurableFunctionsMonitor.durablefunctionsmonitor",
    "eamodio.gitlens",
    "ms-dotnettools.csharp",
    "editorconfig.editorconfig",
    "littlefoxteam.vscode-python-test-adapter"
  ],
  "mounts": [
    "source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [9090, 9091],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "bash ./resetenv.sh",
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
The referenced resetenv.sh:
#!/bin/bash
pushd . > /dev/null
SCRIPT_PATH="${BASH_SOURCE[0]}";
if ([ -h "${SCRIPT_PATH}" ]) then
  while([ -h "${SCRIPT_PATH}" ]) do cd "$(dirname "$SCRIPT_PATH")"; SCRIPT_PATH=`readlink "${SCRIPT_PATH}"`; done
fi
cd "$(dirname ${SCRIPT_PATH})" > /dev/null
SCRIPT_PATH=$(pwd);
popd > /dev/null
pushd ${SCRIPT_PATH}
deactivate
python3 -m venv --clear .venv
. .venv/bin/activate && pip install --upgrade pip && pip install twine keyring artifacts-keyring && cp ./pip.conf ./.venv/pip.conf && pip install -r deployment/requirements.txt -r deployment/api/requirements.txt
echo Env Reset
The full Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3/.devcontainer/base.Dockerfile
# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3.9, 3.8, 3.7, 3.6, 3-bullseye, 3.10-bullseye, 3.9-bullseye, 3.8-bullseye, 3.7-bullseye, 3.6-bullseye, 3-buster, 3.10-buster, 3.9-buster, 3.8-buster, 3.7-buster, 3.6-buster
ARG VARIANT="3.8"
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="lts/*"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image.
# COPY requirements.txt /tmp/pip-tmp/
# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment this line to install global node packages.
RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azure-functions-core-tools#3 --unsafe-perm true" 2>&1
# Instead of running Azurite from within this devcontainer, we run a docker container on the host to be shared by VSCode and VS
# See https://github.com/VantageSoftware/azurite-forever
#RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azurite --unsafe-perm true" 2>&1
The referenced .env_local is just a simple env file that is used to set secrets and other config as environment variables inside the devcontainer.
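Purely as an illustration, such an env file is just KEY=value lines; the variable names below are made up, and the real file holds whatever secrets and config the app needs:

AZURE_STORAGE_CONNECTION_STRING=UseDevelopmentStorage=true
LOG_LEVEL=DEBUG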
I am a Jenkins rookie trying to get a Python pytest test suite to run. In order to properly execute the test suite, I have to install several Python packages. I'm having trouble with this particular step because Jenkins is consistently unable to find virtualenv and pip:
pipeline {
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'BRANCH', type: 'PT_BRANCH', quickFilterEnabled: true
    }
    agent any
    stages {
        stage('Checkout source code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '------', url: 'git@github.com:path-to-my-repo/my-test-repo.git']]])
            }
        }
        stage('Start Test Suite') {
            steps {
                sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
                echo "Checking out Test suite repo."
                sh script: 'virtualenv venv --distribute'
                sh label: 'install deps', script: '/Library/Frameworks/Python.framework/Versions/3.6/bin/pip install -r requirements.txt'
                sh label: 'execute test suite, exit upon first failure', script: 'pytest --verbose -x --junit-xml reports/results.xml'
            }
            post {
                always {
                    junit allowEmptyResults: true, testResults: 'reports/results.xml'
                }
            }
        }
    }
}
On the virtualenv venv --distribute step, Jenkins throws an error (I'm initially running this on a local Jenkins instance, although in production it will be on an Amazon Linux 2 machine):
virtualenv venv --distribute /Users/Shared/Jenkins/Home/workspace/my-project-name#tmp/durable-5045c283/script.sh:
line 1: virtualenv: command not found
Why is this happening? In the step before, I make sure to add the directory where I know my virtualenv and pip are to the PATH:
sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
For instance, when I type in
sudo su jenkins
which pip
which virtualenv
I get the following outputs as expected:
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/virtualenv
Here are the things I do know:
Jenkins runs as a user called jenkins
best practice is to create a virtual environment, activate it, and then perform my pip installations inside it
Jenkins runs sh by default, not bash (but I'm not sure if this has anything to do with my problem)
Why is Jenkins unable to find my virtualenv? What's the best practice for installing Python libraries for a Jenkins build?
Edit: I played around some more and found a working solution:
I don't know if this is the proper way to do it, but I used the following syntax:
withEnv(['PATH+EXTRA=/Library/Frameworks/Python.framework/Versions/3.6/bin/']) {
    sh script: "pip install virtualenv"
    // do other setup stuff
}
However, I'm now stuck with a new issue: I've clearly hardcoded my Python path here. If I'm running on a remote Linux machine, am I going to have to install that specific version of Python (3.6)?
I want AWS Cloud9 to use the Python version and specific packages from my Anaconda Python environment. How can I achieve this? Where should I look in the settings or configuration?
My current setup: I have an AWS EC2 instance with Ubuntu Linux, and I have configured AWS Cloud9 to work with the EC2 instance.
I have Anaconda installed on the EC2 instance, and I have created a conda Python3 environment to use, but Cloud9 always wants to use my Linux system's installed Python3 version.
I finally found something that forces AWS Cloud9 to use the Python3 version installed in my Anaconda environment on my AWS EC2 instance.
The instructions to create a custom AWS Cloud9 runner for Python are here:
{
    "cmd" : ["/home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6", "$file", "$args"],
    "info" : "Running $project_path$file_name...",
    "selector" : "source.py"
}
I just create a new runner and paste the above code in there, and Cloud9 runs my application with my Anaconda environment's version of Python3.
The only thing I don't understand about the above code is what the "selector": "source.py" line does.
After some testing, I realised that my previous answer prevents you from being able to use the debugger. Building on @Sean_Calgary's answer (which is better than my original answer), you can edit one of the built-in Python runners (again, just replacing the python call with the full path to the conda env's python), like so:
{
    "script": [
        "if [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
        "else",
        " /home/tg/miniconda/envs/env-name/bin/python \"$file\" $args",
        "fi",
        "checkExitCode() {",
        " if [ $1 ] && [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
        " To use python debugger install ikpdb by running: ",
        " sudo yum update;",
        " sudo yum install python36-devel;",
        " sudo pip-3.6 install ikp3db;",
        " '",
        " fi",
        " return $1",
        "}",
        "checkExitCode $?"
    ],
    "python_version": "python3",
    "working_dir": "$project_path",
    "debugport": 15471,
    "$debugDefaultState": false,
    "debugger": "ikpdb",
    "selector": "^.*\\.(py)$",
    "env": {
        "PYTHONPATH": "$python_path"
    },
    "trackId": "Python3"
}
To do this, just click on 'runners' next to CWD in the bottom-right corner -> python3 -> edit runner -> save as 'env-name.run' in /.c9/runners (the save-as dialog should point you to the right directory by default).
N.B.
Replace env-name with the name of your environment throughout.
You will need the package for the debugger installed in your conda env. It's called ikp3db.
You may need to check the path to your conda env's python executable by activating the environment and running which python (this caught me out because my path ended in /python, not /python3.6, even though it's Python 3.6 that's installed).
You could use a 'shell script' runner type. To do this you would:
create your conda env with python3 and any packages etc. you want in it; call it py3env
create a directory to hold your runner scripts, something like $HOME/c9_runner_scripts
put a script in there called py3env_runner.sh with code like:
conda activate py3env
python ~/c9/my_py3_script.py
Then create a run configuration with the 'shell script' runner type and enter c9_runner_scripts/py3env_runner.sh
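One caveat: conda activate usually fails in a plain non-interactive shell script unless conda's shell hook has been sourced first, so py3env_runner.sh may need to look more like this (the miniconda3 path is an assumption, adjust it to wherever conda is installed):

#!/bin/bash
# make `conda activate` available in a non-interactive shell
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate py3env
python ~/c9/my_py3_script.py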
For me, on CentOS 7, the only way to execute with my conda Python 3.9.4 was to add a conda activate line to my .bash_profile, like this:
conda activate /var/www/my_conda/python3.9
Then in Cloud9, when I'm running my code under my conda Python 3.9 env, all is fine.
This is my simple Python code, which prints the current Python version:
import sys
print(sys.version)
Best.
I have a virtualenv in which I try to run a start script defined in package.json, but somehow npm changes the environment and runs outside of the venv.
If I print which python in that npm start script, for example, I get /usr/local/bin/python, so not the Python from the virtualenv.
Any ideas?
Edit:
package.json
{
  ...
  "scripts": {
    "start": "myscript & watchify -o assets/js/mylibs.js -v -d .",
  },
  ...
}