I'm trying to use a custom runner in Cloud9 to launch a project under Python 3.4 using a virtual environment installed in the same directory, but it doesn't work. The runner doesn't detect my dependencies, which presumably means it isn't activating the venv properly.
// Create a custom Cloud9 runner - similar to the Sublime build system
// For more information see https://docs.c9.io/custom_runners.html
{
"cmd": [
"bash",
"--login",
"-c",
"source bin/activate && python oric.py"
],
"working_dir": "$project_path",
"info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m"
}
Any thoughts on what's wrong? Many thanks
From start to finish:
Create a virtual environment:
$ virtualenv -p /usr/bin/python36 venvpy36
Install Python package into virtual environment:
$ source venvpy36/bin/activate
$ pip3 install tweepy
Create Runner:
Navigate the menu to create the runner (in AWS Cloud9 this is typically Run > Run With > New Runner)
Create .run File
Copy and paste the example code below into your .run file. It allows both normal and debug runs to use your venv.
// This file overrides the built-in Python 3 runner
// For more information see http://docs.aws.amazon.com/console/cloud9/change-runner
{
"script": [
"if [ \"$debug\" == true ]; then ",
" /home/ec2-user/environment/venvpy36/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
"else",
" /home/ec2-user/environment/venvpy36/bin/python \"$file\" $args",
"fi",
"checkExitCode() {",
" if [ $1 ] && [ \"$debug\" == true ]; then ",
" /home/ec2-user/environment/venvpy36/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
" To use python debugger install ikpdb by running: ",
" sudo yum update;",
" sudo yum install python36-devel;",
" sudo source /home/ec2-user/environment/venvpy36/bin activate",
" sudo pip-3.6 install ikp3db;",
" sudo deactivate",
" '",
" fi",
" return $1",
"}",
"checkExitCode $?"
],
"python_version": "/home/ec2-user/environment/venvpy36/bin/python",
"working_dir": "$project_path",
"debugport": 15471,
"$debugDefaultState": false,
"debugger": "ikpdb",
"selector": "^.*\\.(py)$",
"env": {
"PYTHONPATH": "$python_path"
},
"trackId": "/home/ec2-user/environment/venvpy36/bin/python"
}
If you placed your venv in a different directory in step 1, find and replace all references to "/home/ec2-user/environment/venvpy36/bin" with your own venv's bin directory and the code should work for you.
Finally, save the file.
Select the Runner and Run the File:
Select your runner (in this example, "venvpy36"). Then click "Run" and it should work.
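If the Run output still seems to use the system interpreter, a quick sanity check from a terminal confirms that the venv's Python resolves your packages. A minimal sketch, assuming the venvpy36 location used above:
# Ask the venv's interpreter where it lives and where tweepy resolves from.
/home/ec2-user/environment/venvpy36/bin/python - <<'EOF'
import sys
import tweepy
print(sys.executable)   # should point inside venvpy36/bin
print(tweepy.__file__)  # should point inside venvpy36/lib/.../site-packages
EOF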
I use virtualenv on Cloud9 and it works fine for me. Cloud9 workspaces seem to come with virtualenvwrapper pre-installed (at least, the Django workspace does), so if you create a virtualenv with:
$ mkvirtualenv foo
Then, you can create your runner like so, for example:
{
"cmd": [
"bash",
"--login",
"-c",
"source /home/ubuntu/.virtualenvs/foo/bin/activate && python whatever.py"
],
// ... rest of the configuration
}
I got Cloud9 to use the virtualenv by just setting the environment vars directly instead of trying to source the activate script.
{
"cmd": [
"/var/lib/cloud9/venv/bin/python",
"$file",
"$args"
],
"selector": "^.*\\.(python|py)$",
"env": {
"PYTHONPATH": "/var/lib/cloud9/venv/lib/python3.5/site-packages",
"VIRTUAL_ENV": "/var/lib/cloud9/venv",
"PATH": "/var/lib/cloud9/venv/bin:$PATH"
}
}
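This works because, for running code, sourcing bin/activate mostly just exports a couple of environment variables. A simplified sketch of what it does (the real script also adjusts the prompt and defines a deactivate function):
export VIRTUAL_ENV="/var/lib/cloud9/venv"
export PATH="$VIRTUAL_ENV/bin:$PATH"
# With PATH set like this, a bare "python" already resolves to the venv's interpreter:
python -c "import sys; print(sys.executable)"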
Related
We're using Azure DevOps at work and have used the Artifacts feed in there to share Python packages internally which is lovely.
I've been using WSL2 and artifacts-keyring to authenticate with DevOps and a pip.conf file to specify the feed URL as instructed in https://learn.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-cli?view=azure-devops#consume-python-packages which works great.
To develop Python and keep dependencies isolated while still having access to the private feed and authentication I've used Azure Devops Artifacts Helpers with virtualenv which have also worked like a charm.
Now we're trying more and more to use devcontainers to get even more isolation and ease of setup for new developers.
I've searched far and wide for a way to get access to the pip.conf URLs and the artifacts-keyring authentication inside my devcontainer. Is there any way that I can provide my container with these? I've tried all the different solutions I can find on Google, but none of them work seamlessly and without PATs.
I do not want to use any PAT since I've already authenticated in WSL2.
I'm using WSL2 as the host, i.e. I'm cloning the repo in WSL2 and then starting VS Code and the devcontainer from there.
Is there anything related to keyring which I can mount inside the container so that it will see that the authentication is already done?
I could live with providing a copy of the pip.conf inside my repo which I could copy to the container on build, but having to authenticate each time I rebuild my container is too much, and so is using a PAT.
Kind Regards
Carl
I ran into the same problem today. The trouble is that the token cache file, $HOME/.local/share/MicrosoftCredentialProvider/SessionTokenCache.dat, is being written within storage local to the container, which gets reset each time we rebuild the devcontainer. This causes us to have to click the https://microsoft.com/devicelogin link every time we rebuild our container and log in again in our browser, which is a huge time waster.
I was able to resolve this by mounting my host's $HOME/.local/share/ into my devcontainer, so the SessionTokenCache.dat can survive past the rebuild. This is done by adding the following config in your devcontainer.json:
"mounts": [
"source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
],
This assumes you have "remoteUser": "vscode" in your devcontainer.json; otherwise the home location in the target will need adjusting.
If you are using a Python devcontainer image, you may get an error that dotnet is a missing dependency for artifacts-keyring, but this can be resolved by adding a features configuration for dotnet to your devcontainer.json:
"features": {
"dotnet": {
"version": "latest",
"runtimeOnly": false
}
},
If you are also transitioning from using pip.conf outside of a venv to now having one, the next problem you may run into is that, when a venv is active, the pip.conf has to exist inside the .venv folder (you may have customized this folder name). For this I run a simple cp ./pip.conf ./.venv/pip.conf to copy the file from the root of my checkout into my .venv folder.
My full devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3
{
"name": "Python 3",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"args": {
// Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
// Append -bullseye or -buster to pin to an OS version.
// Use -bullseye variants on local on arm64/Apple Silicon.
"VARIANT": "3.8",
// Options
"NODE_VERSION": "lts/*"
}
},
"features": {
"dotnet": {
"version": "latest",
"runtimeOnly": false
}
},
// Set *default* container specific settings.json values on container create.
"settings": {
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
"python.testing.pytestEnabled": true,
"python.testing.pytestPath": "${workspaceFolder}/.venv/bin/pytest",
"python.testing.pytestArgs": [
"tests"
],
"python.testing.unittestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.envFile": "${workspaceFolder}/src/.env_local",
"python.linting.enabled": false,
"python.linting.pylintEnabled": false,
"python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
"python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
"python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
"python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
"python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
"python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
"python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
"azureFunctions.deploySubpath": "${workspaceFolder}/src/api",
"azureFunctions.scmDoBuildDuringDeployment": true,
"azureFunctions.pythonVenv": "${workspaceFolder}/.venv",
"azureFunctions.projectLanguage": "Python",
"azureFunctions.projectRuntime": "~3",
"azureFunctions.projectSubpath": "${workspaceFolder}/src/api",
"debug.internalConsoleOptions": "neverOpen"
},
"runArgs": ["--env-file","${localWorkspaceFolder}/src/.env_local"],
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"ms-azuretools.vscode-azurefunctions",
"ms-vscode.azure-account",
"ms-azuretools.vscode-docker",
"DurableFunctionsMonitor.durablefunctionsmonitor",
"eamodio.gitlens",
"ms-dotnettools.csharp",
"editorconfig.editorconfig",
"littlefoxteam.vscode-python-test-adapter"
],
"mounts": [
"source=${localEnv:HOME}/.local/share/,target=/home/vscode/.local/share/,type=bind,consistency=cached"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [9090, 9091],
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "bash ./resetenv.sh",
// Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}
The referenced resetenv.sh:
#!/bin/bash
# Resolve the directory this script lives in (following symlinks), then work from there.
pushd . > /dev/null
SCRIPT_PATH="${BASH_SOURCE[0]}"
if [ -h "${SCRIPT_PATH}" ]; then
    while [ -h "${SCRIPT_PATH}" ]; do cd "$(dirname "$SCRIPT_PATH")"; SCRIPT_PATH=$(readlink "${SCRIPT_PATH}"); done
fi
cd "$(dirname "${SCRIPT_PATH}")" > /dev/null
SCRIPT_PATH=$(pwd)
popd > /dev/null
pushd "${SCRIPT_PATH}"
# Leave any currently active venv (harmless if none is active), then rebuild .venv from
# scratch, install the auth/publishing tooling, copy pip.conf in, and install requirements.
deactivate 2> /dev/null
python3 -m venv --clear .venv
. .venv/bin/activate && pip install --upgrade pip && pip install twine keyring artifacts-keyring && cp ./pip.conf ./.venv/pip.conf && pip install -r deployment/requirements.txt -r deployment/api/requirements.txt
echo "Env Reset"
Full Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/python-3/.devcontainer/base.Dockerfile
# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3.9, 3.8, 3.7, 3.6, 3-bullseye, 3.10-bullseye, 3.9-bullseye, 3.8-bullseye, 3.7-bullseye, 3.6-bullseye, 3-buster, 3.10-buster, 3.9-buster, 3.8-buster, 3.7-buster, 3.6-buster
ARG VARIANT="3.8"
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="lts/*"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image.
# COPY requirements.txt /tmp/pip-tmp/
# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment this line to install global node packages.
RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azure-functions-core-tools#3 --unsafe-perm true" 2>&1
# Instead of running Azurite from within this devcontainer, we run a docker container on the host to be shared by VSCode and VS
# See https://github.com/VantageSoftware/azurite-forever
#RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g azurite --unsafe-perm true" 2>&1
The referenced .env_local is just a simple env file that is used to set secrets and other config as environment variables inside the devcontainer.
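For illustration, such a file is just KEY=value lines; a made-up example (placeholder names and values, not the author's actual settings):
# Hypothetical src/.env_local contents (placeholders only)
AZURE_STORAGE_CONNECTION_STRING=UseDevelopmentStorage=true
API_BASE_URL=http://localhost:9090
LOG_LEVEL=DEBUG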
I want AWS Cloud9 to use the Python version and specific packages from my Anaconda Python environment. How can I achieve this? Where should I look in the settings or configuration?
My current setup: I have an AWS EC2 instance with Ubuntu Linux, and I have configured AWS Cloud9 to work with the EC2 instance.
I have Anaconda installed on the EC2 instance, and I have created a conda Python3 environment to use, but Cloud9 always wants to use my Linux system's installed Python3 version.
I finally found something that forces AWS Cloud9 to use the Python3 version installed in my Anaconda environment on my AWS EC2 instance.
The fix is to create a custom AWS Cloud9 runner for Python, like this:
{
"cmd" : ["/home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6", "$file", "$args"],
"info" : "Running $project_path$file_name...",
"selector" : "source.py"
}
I just create a new runner and paste the above code in there, and Cloud9 runs my application with my Anaconda environment's version of Python3.
The only thing I don't understand about the above code is what the "selector": "source.py" line does.
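If you are unsure of the exact interpreter path to put in "cmd", you can ask the conda environment directly. A small sketch; the env name ijackweb is just this example's, and it assumes conda is initialised for your shell:
# Activate the env and print the interpreter path to paste into the runner's "cmd".
conda activate ijackweb
which python   # e.g. /home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6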
After some testing, I realised that my previous answer prevents you from using the debugger. Building on @Sean_Calgary's answer (which is better than my original answer), you can edit one of the built-in Python runners (again, just replacing the python call with the full path to the conda env's python), like so:
{
"script": [
"if [ \"$debug\" == true ]; then ",
" /home/tg/miniconda/envs/env-name/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
"else",
" /home/tg/miniconda/envs/env-name/bin/python \"$file\" $args",
"fi",
"checkExitCode() {",
" if [ $1 ] && [ \"$debug\" == true ]; then ",
" /home/tg/miniconda/envs/env-name/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
" To use python debugger install ikpdb by running: ",
" sudo yum update;",
" sudo yum install python36-devel;",
" sudo pip-3.6 install ikp3db;",
" '",
" fi",
" return $1",
"}",
"checkExitCode $?"
],
"python_version": "python3",
"working_dir": "$project_path",
"debugport": 15471,
"$debugDefaultState": false,
"debugger": "ikpdb",
"selector": "^.*\\.(py)$",
"env": {
"PYTHONPATH": "$python_path"
},
"trackId": "Python3"
}
To do this, just click on 'Runners' next to CWD in the bottom-right corner -> python3 -> Edit Runner -> Save As 'env-name.run' in /.c9/runners (the Save As dialog should point you to the right directory by default).
N.B.
Replace env-name with the name of your environment throughout.
You will need the debugger package installed in your conda env. It's called ikp3db (a sketch follows this list).
You may need to check the path to your conda env's python executable by activating the environment and running which python (this caught me out because my path ended in /python, not /python3.6, even though python 3.6 is what's installed).
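A minimal sketch of that debugger setup, assuming the env is called env-name and conda is initialised for your shell:
# Install ikp3db into the conda env itself and confirm the interpreter path for the runner.
conda activate env-name
pip install ikp3db
python -c "import ikp3db" && echo "ikp3db is importable"
which python   # this is the path to use throughout the runner above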
You could use a 'shell script' runner type. To do this you would:
create your conda env, with python3 and any packages etc you want in it. Call it py3env
create a directory to hold your runner scripts, something like $HOME/c9_runner_scripts
put a script in there called py3env_runner.sh with code like:
# "conda activate" needs conda's shell hooks in a non-interactive script, so source them first.
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate py3env
python ~/c9/my_py3_script.py
Then create a run configuration with the 'shell script' runner type and enter c9_runner_scripts/py3env_runner.sh
For me, on CentOS 7, the only way to execute with my conda Python 3.9.4 was to add a conda activate line to my .bash_profile, like this:
conda activate /var/www/my_conda/python3.9
Then in Cloud9, when I run my code under my conda Python 3.9 env, all is fine.
This is my simple Python code, which prints the current Python version:
import sys
print(sys.version)
Best.
In Node, you can define a package.json and then define a scripts block like the following:
"scripts": {
"start": "concurrently -k -r -s first \"yarn test:watch\" \"yarn open:src\" \"yarn lint:watch\"",
},
So in the root directory, I can just do yarn start to run concurrently -k -r -s first "yarn test:watch" "yarn open:src" "yarn lint:watch".
What is the equivalent of that in Python 3, if I want a script called python test that runs python -m unittest discover -v?
Use make, it's great.
Create a Makefile and add some targets to run specific shell commands:
install:
pip install -r requirements.txt
test:
python -m unittest discover -v
# and so on, you got the idea
Run it with (assuming the Makefile is in the current dir):
make test
NOTE: if you want to run several commands in the same environment from within a target, do this:
install:
source ./venv/bin/activate; \
pip install -r requirements.txt; \
echo "do other stuff after in the same environment"
The key is the "; \": it joins the commands so make executes them as a single line, and therefore in a single shell; the space in "; \" is just for aesthetics.
Why don't you just use pipenv? It is Python's npm, and you can add a [scripts] section to your Pipfile, very similar to npm's.
See this other question to find out more: pipenv stack overflow question
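For a concrete idea of what that looks like, here is a minimal sketch of a Pipfile [scripts] section and how it is invoked (the script names and commands are just examples):
# Hypothetical Pipfile [scripts] section:
#   [scripts]
#   test = "python -m unittest discover -v"
#   start = "python main.py"
# Each entry then runs inside the project's virtualenv with:
pipenv run test
pipenv run start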
Not the best solution, really. This totally works if you're already familiar with npm, but like others have suggested, use Makefiles.
Well, this is a workaround, but apparently you can just use npm if you have it installed. I created a package.json file in the root directory of the Python app.
{
"name": "fff-connectors",
"version": "1.0.0",
"description": "fff project to UC Davis",
"directories": {
"test": "tests"
},
"scripts": {
"install": "pip install -r requirements.txt",
"test": "python -m unittest discover -v"
},
"keywords": [],
"author": "Leo Qiu",
"license": "ISC"
}
Then I can just use npm install or yarn install to install all dependencies, and yarn test or npm test to run the test scripts.
You can also use preinstall and postinstall hooks. For example, you may need to remove files or create folder structures.
Another benefit is that this setup lets you use npm libraries like concurrently, so you can run multiple files together, etc.
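As a sketch of those hooks (the commands are only examples), the scripts block could be extended like this; npm runs preinstall, install, and postinstall in that order when you execute npm install:
# Hypothetical additions to "scripts" in package.json:
#   "preinstall": "rm -rf build/ dist/",
#   "install": "pip install -r requirements.txt",
#   "postinstall": "mkdir -p logs data"
npm install   # or: yarn install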
Specifically for tests: create a setup.py like this within your package/folder:
from setuptools import setup

setup(
    name='Your app',
    version='1.0',
    description='A nicely tested app',
    packages=[],
    test_suite='test',
)
Files are structured like this:
my-package/
|-- setup.py
|-- test/
|-- some_code/
|   |-- some_file.py
Then run python ./setup.py test to run the tests. You need to have setuptools installed as well (by default you could use the distutils.core setup function instead, but it doesn't offer as many options).
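The same suite can also be run without setup.py by using unittest's discovery directly (assuming the test/ directory contains test_*.py modules):
# Equivalent discovery run straight from the project root.
python -m unittest discover -s test -v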
I have a virtualenv in which I try to run a start script defined in package.json, but somehow npm runs it outside of the venv.
If I print which python, for example, in that npm start script, I get /usr/local/bin/python, so not the Python from the virtualenv.
Any ideas?
Edit:
package.json
{
...
"scripts": {
"start": "myscript & watchify -o assets/js/mylibs.js -v -d .",
},
...
}
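One hedged workaround sketch (the venv path is an assumption, not from an accepted answer): make the interpreter explicit inside the npm script, either by calling the venv's python directly or by prepending its bin/ directory to PATH for that command:
# Hypothetical change to the "start" script in package.json:
#   "start": "PATH=./venv/bin:$PATH myscript & watchify -o assets/js/mylibs.js -v -d ."
# Sanity check that the right interpreter would be picked up:
PATH=./venv/bin:$PATH which python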
Is it possible to run a shell script that sources my python virtualenv and then have the shell be in the new environment?
Here is my shell script
#!/usr/bin/env bash
function createProject() {
if [ -e $1 ]
then
rm -r ./$1
fi
if [ -e env-$1 ]
then
rm -r ./env-$1
fi
virtualenv ./env-$1
django-admin startproject $1
}
createProject $1
source ./env-$1/bin/activate
exit 1
I then run ./script.sh hello-world.
Basically, if I were to run source ./env-hello-world/bin/activate in my shell, the virtualenv would be activated and the shell would then be running in the new environment.
How do I accomplish this?
What you want is possible using shell functions only, as they do not spawn separate processes.
The problem with your approach is that the virtualenv is activated in a sub-process that was created to run the shell script.
Instead of having an executable shell script, do it like this:
function createProject() {
if [ -e $1 ]
then
rm -r ./$1
fi
if [ -e env-$1 ]
then
rm -r ./env-$1
fi
virtualenv ./env-$1
django-admin startproject $1
source ./env-$1/bin/activate
}
Save this as createProject.sh and source this file in .bashrc or .bash_profile
source createProject.sh
This way the virtualenv is activated in the current process.
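A usage sketch under those assumptions (the project name is just an example; django-admin requires it to be a valid Python identifier, so no hyphens):
# After sourcing createProject.sh (for example from ~/.bashrc), call the function directly:
createProject helloworld    # creates ./helloworld and ./env-helloworld, then activates the venv
which python                # should now point at ./env-helloworld/bin/python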