npm start runs out of virtualenv - python

I have a virtualenv in which I try to run a start script defined in package.json, but somehow npm switches interpreters and the script runs outside of the venv.
If I run which python in that npm start script, for example, I get /usr/local/bin/python, i.e. not the Python from the virtualenv.
Any ideas?
Edit:
package.json
{
  ...
  "scripts": {
    "start": "myscript & watchify -o assets/js/mylibs.js -v -d ."
  },
  ...
}
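One possible workaround (a sketch only, not a confirmed fix: it assumes the venv lives at ./venv next to package.json and that myscript is installed into it as a console script) is to bypass PATH and address the venv's executables by explicit path:
{
  ...
  "scripts": {
    "start": "./venv/bin/myscript & watchify -o assets/js/mylibs.js -v -d ."
  },
  ...
}
This sidesteps whatever PATH manipulation npm performs, since the script no longer relies on PATH lookup to find the interpreter.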

Related

Sharing artifacts-keyring authentication and pip.conf with devcontainer to reach private Azure feed

We're using Azure DevOps at work and have used the Artifacts feed in there to share Python packages internally which is lovely.
I've been using WSL2 and artifacts-keyring to authenticate with DevOps and a pip.conf file to specify the feed URL as instructed in https://learn.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-cli?view=azure-devops#consume-python-packages which works great.
To develop Python and keep dependencies isolated while still having access to the private feed and authentication, I've used Azure DevOps Artifacts Helpers with virtualenv, which has also worked like a charm.
Now we're trying more and more to use devcontainers to get even more isolation and ease of setup for new developers.
I've searched far and wide for a way to get access to the pip.conf URLs and the artifacts-keyring authentication inside of my devcontainer. Is there any way that I can provide my container with these? I've tried all the different solutions I can find on Google but none of them work seamlessly and without PATs.
I do not want to use any PAT since I've already authenticated in WSL2.
I'm using WSL2 as the host, i.e. I'm cloning the repo in WSL2 and then starting VS Code and the devcontainer from there.
Is there anything related to keyring which I can mount inside the container so that it will see that the authentication is already done?
I could live with providing a copy of the pip.conf inside my repo which I could copy to the container on build, but having to authenticate each time I rebuild my container is too much, and so is using a PAT.
Kind Regards
Carl
I ran into the same problem today. The trouble is that the token cache file, $HOME/.local/share/MicrosoftCredentialProvider/SessionTokenCache.dat, is written to storage local to the container, which gets reset each time we rebuild the devcontainer. This forces us to click the https://microsoft.com/devicelogin link and log in again in our browser every time we rebuild the container, which is a huge time waster.
I was able to resolve this by mounting my host's $HOME/.local/share/ into my devcontainer, so the SessionTokenCache.dat can survive past the rebuild. This is done by adding the following config in your devcontainer.json:
"mounts": [
"source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
],
This assumes you have "remoteUser": "vscode" in your devcontainer.json; otherwise the home location in the target will need adjustment.
If you are using a Python devcontainer image, you may get an error that dotnet is a missing dependency for artifacts-keyring, but this can be resolved by adding a features configuration for dotnet to your devcontainer.json:
"features": {
"dotnet": {
"version": "latest",
"runtimeOnly": false
}
},
If you are also transitioning from using pip.conf outside of a venv to now having a venv, the next problem you may run into is that with a venv the pip.conf has to exist inside the venv folder itself, .venv here (you may have customized this folder name). For this I run a simple cp ./pip.conf ./.venv/pip.conf to copy the file from the root of my checkout into my .venv folder.
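To double-check where pip is actually searching, pip's own config subcommand can help; run this inside the activated venv, and the -v flag will print every config file location pip considers, in order:
pip config list -v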
My full devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3
{
  "name": "Python 3",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    "args": {
      // Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
      // Append -bullseye or -buster to pin to an OS version.
      // Use -bullseye variants on local arm64/Apple Silicon.
      "VARIANT": "3.8",
      // Options
      "NODE_VERSION": "lts/*"
    }
  },
  "features": {
    "dotnet": {
      "version": "latest",
      "runtimeOnly": false
    }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
    "python.testing.pytestEnabled": true,
    "python.testing.pytestPath": "${workspaceFolder}/.venv/bin/pytest",
    "python.testing.pytestArgs": [
      "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.envFile": "${workspaceFolder}/src/.env_local",
    "python.linting.enabled": false,
    "python.linting.pylintEnabled": false,
    "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
    "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
    "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
    "python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
    "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
    "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
    "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
    "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
    "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
    "azureFunctions.deploySubpath": "${workspaceFolder}/src/api",
    "azureFunctions.scmDoBuildDuringDeployment": true,
    "azureFunctions.pythonVenv": "${workspaceFolder}/.venv",
    "azureFunctions.projectLanguage": "Python",
    "azureFunctions.projectRuntime": "~3",
    "azureFunctions.projectSubpath": "${workspaceFolder}/src/api",
    "debug.internalConsoleOptions": "neverOpen"
  },
  "runArgs": ["--env-file", "${localWorkspaceFolder}/src/.env_local"],
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-python.python",
    "ms-python.vscode-pylance",
    "ms-azuretools.vscode-azurefunctions",
    "ms-vscode.azure-account",
    "ms-azuretools.vscode-docker",
    "DurableFunctionsMonitor.durablefunctionsmonitor",
    "eamodio.gitlens",
    "ms-dotnettools.csharp",
    "editorconfig.editorconfig",
    "littlefoxteam.vscode-python-test-adapter"
  ],
  "mounts": [
    "source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [9090, 9091],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "bash ./resetenv.sh",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
The referenced resetenv.sh:
#!/bin/bash
# Resolve the directory this script lives in, following any symlinks.
pushd . > /dev/null
SCRIPT_PATH="${BASH_SOURCE[0]}"
while [ -h "${SCRIPT_PATH}" ]; do
  cd "$(dirname "${SCRIPT_PATH}")"
  SCRIPT_PATH="$(readlink "${SCRIPT_PATH}")"
done
cd "$(dirname "${SCRIPT_PATH}")" > /dev/null
SCRIPT_PATH="$(pwd)"
popd > /dev/null

pushd "${SCRIPT_PATH}"
# Leave any currently active venv; suppress the error if none is active.
deactivate 2> /dev/null
# Recreate the venv from scratch, copy pip.conf into it, and reinstall all dependencies.
python3 -m venv --clear .venv
. .venv/bin/activate && pip install --upgrade pip && pip install twine keyring artifacts-keyring && cp ./pip.conf ./.venv/pip.conf && pip install -r deployment/requirements.txt -r deployment/api/requirements.txt
echo "Env Reset"
The full Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3/.devcontainer/base.Dockerfile
# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3.9, 3.8, 3.7, 3.6, 3-bullseye, 3.10-bullseye, 3.9-bullseye, 3.8-bullseye, 3.7-bullseye, 3.6-bullseye, 3-buster, 3.10-buster, 3.9-buster, 3.8-buster, 3.7-buster, 3.6-buster
ARG VARIANT="3.8"
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="lts/*"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image.
# COPY requirements.txt /tmp/pip-tmp/
# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment this line to install global node packages.
RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azure-functions-core-tools@3 --unsafe-perm true" 2>&1
# Instead of running Azurite from within this devcontainer, we run a docker container on the host to be shared by VSCode and VS
# See https://github.com/VantageSoftware/azurite-forever
#RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azurite --unsafe-perm true" 2>&1
The referenced .env_local is just a simple env file that is used to set secrets and other config as environment variables inside the devcontainer.

How do I properly set up my Python dependencies within Jenkins using pip and virtualenv?

I am a rookie to Jenkins trying to set up a Python pytest test suite to run. In order to properly execute the test suite, I have to install several Python packages. I'm having trouble with this particular step because Jenkins consistently is unable to find virtualenv and pip:
pipeline {
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'BRANCH', type: 'PT_BRANCH', quickFilterEnabled: true
    }
    agent any
    stages {
        stage('Checkout source code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '------', url: 'git@github.com:path-to-my-repo/my-test-repo.git']]])
            }
        }
        stage('Start Test Suite') {
            steps {
                sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
                echo "Checking out Test suite repo."
                sh script: 'virtualenv venv --distribute'
                sh label: 'install deps', script: '/Library/Frameworks/Python.framework/Versions/3.6/bin/pip install -r requirements.txt'
                sh label: 'execute test suite, exit upon first failure', script: 'pytest --verbose -x --junit-xml reports/results.xml'
            }
            post {
                always {
                    junit allowEmptyResults: true, testResults: 'reports/results.xml'
                }
            }
        }
    }
}
On the virtualenv venv --distribute step, Jenkins throws an error (I'm running this initially on a Jenkins server on my local instance, although in production it will be on an Amazon Linux 2 machine):
virtualenv venv --distribute
/Users/Shared/Jenkins/Home/workspace/my-project-name@tmp/durable-5045c283/script.sh: line 1: virtualenv: command not found
Why is this happening? In the step before, I make sure to prepend the directory where I know my virtualenv and pip live:
sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
For instance, when I type in
sudo su jenkins
which pip
which virtualenv
I get the following outputs as expected:
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/virtualenv
Here are the things I do know:
Jenkins runs as a user called jenkins
best practice is to create a virtual environment, activate it, and then perform my pip installations inside it
Jenkins runs sh by default, not bash (but I'm not sure if this has anything to do with my problem)
Why is Jenkins unable to find my virtualenv? What's the best practice for installing Python libraries for a Jenkins build?
Edit: I played around some more and found a working solution:
I don't know if this is the proper way to do it, but I used the following syntax:
withEnv(['PATH+EXTRA=/Library/Frameworks/Python.framework/Versions/3.6/bin/']) {
    sh script: "pip install virtualenv"
    // do other setup stuff
}
However, I'm now stuck with a new issue: I've clearly hardcoded my Python path here. If I'm running on a remote Linux machine, am I going to have to install that specific version of Python (3.6)?
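A more portable pattern (a sketch, not a drop-in fix: it assumes python3 with the built-in venv module is available on the agent) is to do everything in a single sh step, since every sh step spawns a fresh shell and PATH changes do not carry over between steps:
stage('Start Test Suite') {
    steps {
        // One shell invocation, so the activated venv stays in effect throughout.
        sh '''
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
            pytest --verbose -x --junit-xml reports/results.xml
        '''
    }
}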

run python command with alias in command line like npm

In node, you can define a package.json. Then define a script block as follows:
"scripts": {
"start": "concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\"",
},
So from the root directory, I can just do yarn start to run concurrently -k -r -s first "yarn test:watch" "yarn open:src" "yarn lint:watch".
What is the equivalent of that in Python 3, if I want a script called test that runs python -m unittest discover -v?
Use make, it's great.
create a Makefile and add some targets to run specific shell commands:
install:
	pip install -r requirements.txt

test:
	python -m unittest discover -v

# and so on, you get the idea
run with (assuming that Makefile is in the current dir):
make test
NOTE: if you want to run several commands in the same environment from within a single target, do this:
install:
	source ./venv/bin/activate; \
	pip install -r requirements.txt; \
	echo "do other stuff after in the same environment"
The key is the ; \ which chains the commands so that make executes them as a single shell line; without it, each line of the recipe runs in its own shell. The space in ; \ is just for aesthetics.
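One caveat (an addition, not from the original answer): make runs recipes with /bin/sh by default, where source may not be defined; either use the portable . builtin instead, or override the shell at the top of the Makefile:
# tell make to run recipes with bash so that 'source' is available
SHELL := /bin/bash

install:
	source ./venv/bin/activate; \
	pip install -r requirements.txt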
Why don't you just use pipenv? It is Python's npm, and you can add a [scripts] section very similar to npm's to your Pipfile.
See this other question to discover more: pipenv stack overflow question
Not the best solution really. This totally works if you're already familiar with npm, but like others have suggested, use makefiles.
Well, this is a workaround, but apparently you can just use npm if you have it installed. I created a package.json file in the root directory of the Python app.
{
  "name": "fff-connectors",
  "version": "1.0.0",
  "description": "fff project to UC Davis",
  "directories": {
    "test": "tests"
  },
  "scripts": {
    "install": "pip install -r requirements.txt",
    "test": "python -m unittest discover -v"
  },
  "keywords": [],
  "author": "Leo Qiu",
  "license": "ISC"
}
Then I can just use npm install or yarn install to install all dependencies, and yarn test or npm test to run the test scripts.
You can also add preinstall and postinstall hooks. For example, you may need to remove files or create folder structures.
Another benefit is that this setup allows you to use any npm libraries like concurrently, so you can run multiple files together, and so on.
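A sketch of those hooks (the rm/mkdir commands here are illustrative placeholders, not part of the original setup):
"scripts": {
  "preinstall": "rm -rf build",
  "install": "pip install -r requirements.txt",
  "postinstall": "mkdir -p logs",
  "test": "python -m unittest discover -v"
}
npm runs preinstall before the install script and postinstall after it automatically.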
Specifically for tests: create a setup.py like this within your package folder:
from setuptools import setup

setup(
    name='Your app',
    version='1.0',
    description='A nicely tested app',
    packages=[],
    test_suite='test',
)
Files are structured like this:
my-package/
|   setup.py
|   test/
|   some_code/
|       some_file.py
Then run python ./setup.py test to run the tests. You need setuptools installed as well (by default you could use the distutils.core setup function instead, but it doesn't offer as many options).

Is it possible to source a virtualenv in a shell script and then have the shell that executes the script run in the new env?

Is it possible to run a shell script that sources my python virtualenv and then have the shell be in the new environment?
Here is my shell script
#!/usr/bin/env bash

function createProject() {
  if [ -e $1 ]
  then
    rm -r ./$1
  fi
  if [ -e env-$1 ]
  then
    rm -r ./env-$1
  fi
  virtualenv ./env-$1
  django-admin startproject $1
}

createProject $1
source ./env-$1/bin/activate
exit 1
I then run ./script.sh hello-world.
Basically, if I were to run source ./env-hello-world/bin/activate in my shell, the virtualenv would be activated and the shell would then be running in the new environment.
How do I accomplish this?
What you want is possible using shell functions only, as they do not spawn separate processes.
The problem with your approach is that the virtualenv is activated in a sub-process that was created to run the shell script.
Instead of having an executable shell script, do it like this:
function createProject() {
  if [ -e $1 ]
  then
    rm -r ./$1
  fi
  if [ -e env-$1 ]
  then
    rm -r ./env-$1
  fi
  virtualenv ./env-$1
  django-admin startproject $1
  source ./env-$1/bin/activate
}
Save this as createProject.sh and source this file in .bashrc or .bash_profile
source createProject.sh
This way the virtualenv is activated in the current process.
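A usage sketch (the file location here is an assumption; adjust the path to wherever you saved the file):
# in .bashrc / .bash_profile, or once per shell session:
source ~/createProject.sh
# then, in the directory where the project should live:
createProject hello-world   # creates ./env-hello-world, starts the Django project, activates the venv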

Cloud9 IDE to run python3 with venv

I'm trying to use a custom runner in Cloud9 to launch a project under python 3.4 using a virtual environment installed in the same directory, but it doesn't work. The runner doesn't detect my dependencies, which presumably means it isn't activating the venv properly.
// Create a custom Cloud9 runner - similar to the Sublime build system
// For more information see https://docs.c9.io/custom_runners.html
{
  "cmd": [
    "bash",
    "--login",
    "-c",
    "source bin/activate && python oric.py"
  ],
  "working_dir": "$project_path",
  "info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m"
}
Any thoughts on what's wrong? Many thanks
From start to finish:
Create a virtual environment:
$ virtualenv -p /usr/bin/python36 venvpy36
Install Python package into virtual environment:
$ source vpy36/bin/activate
$ pip3 install tweepy
Create Runner:
Navigate the menu to create the runner
Create .run File
Copy and paste the example code below into your .run file. This will allow both normal and debug executions of your venv.
// This file overrides the built-in Python 3 runner
// For more information see http://docs.aws.amazon.com/console/cloud9/change-runner
{
  "script": [
    "if [ \"$debug\" == true ]; then ",
    "    /home/ec2-user/environment/venvpy36/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
    "else",
    "    /home/ec2-user/environment/venvpy36/bin/python \"$file\" $args",
    "fi",
    "checkExitCode() {",
    "    if [ $1 ] && [ \"$debug\" == true ]; then ",
    "        /home/ec2-user/environment/venvpy36/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
    "        To use python debugger install ikpdb by running: ",
    "            sudo yum update;",
    "            sudo yum install python36-devel;",
    "            sudo source /home/ec2-user/environment/venvpy36/bin/activate",
    "            sudo pip-3.6 install ikp3db;",
    "            sudo deactivate",
    "        '",
    "    fi",
    "    return $1",
    "}",
    "checkExitCode $?"
  ],
  "python_version": "/home/ec2-user/environment/venvpy36/bin/python",
  "working_dir": "$project_path",
  "debugport": 15471,
  "$debugDefaultState": false,
  "debugger": "ikpdb",
  "selector": "^.*\\.(py)$",
  "env": {
    "PYTHONPATH": "$python_path"
  },
  "trackId": "/home/ec2-user/environment/venvpy36/bin/python"
}
If you placed your venv in a different directory during step 1, find and replace all references to "/home/ec2-user/environment/venvpy36/bin" with your own venv bin directory and the code should work for you.
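For example, a single substitution could look like this (the runner filename vpy36.run is a guess; use whatever you named your .run file, and your real venv path):
sed -i 's|/home/ec2-user/environment/venvpy36|/home/ec2-user/environment/myvenv|g' vpy36.run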
Finally, save the file.
Select the Runner and Run the File:
Select your runner (in this example, "vpy36"). Then click "Run" and it should work.
I use virtualenv on Cloud9 and it works fine for me. Cloud9 workspaces seem to come with virtualenvwrapper pre-installed (at least, the Django workspace does), so if you create a virtualenv with:
$ mkvirtualenv foo
Then, you can create your runner like so, for example:
{
  "cmd": [
    "bash",
    "--login",
    "-c",
    "source /home/ubuntu/.virtualenvs/foo/bin/activate && python whatever.py"
  ]
  // ... rest of the configuration
}
I got Cloud9 to use the virtualenv by just setting the environment variables directly instead of trying to source the activate script.
{
  "cmd": [
    "/var/lib/cloud9/venv/bin/python",
    "$file",
    "$args"
  ],
  "selector": "^.*\\.(python|py)$",
  "env": {
    "PYTHONPATH": "/var/lib/cloud9/venv/lib/python3.5/site-packages",
    "VIRTUAL_ENV": "/var/lib/cloud9/venv",
    "PATH": "/var/lib/cloud9/venv/bin:$PATH"
  }
}
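To verify a runner like this really picks up the venv's interpreter, a minimal check (the filename check_env.py is just an example) is to run a file containing:
import sys

# Should print /var/lib/cloud9/venv/bin/python if the runner is wired up correctly
print(sys.executable)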
