Configure AWS Cloud9 to use Anaconda Python Environment

I want AWS Cloud9 to use the Python version and specific packages from my Anaconda Python environment. How can I achieve this? Where should I look in the settings or configuration?
My current setup: I have an AWS EC2 instance with Ubuntu Linux, and I have configured AWS Cloud9 to work with the EC2 instance.
I have Anaconda installed on the EC2 instance, and I have created a conda Python3 environment to use, but Cloud9 always wants to use my Linux system's installed Python3 version.

I finally found something that forces AWS Cloud9 to use the Python3 version installed in my Anaconda environment on my AWS EC2 instance.
Here is the custom AWS Cloud9 runner definition for Python:
{
    "cmd": ["/home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6", "$file", "$args"],
    "info": "Running $project_path$file_name...",
    "selector": "source.py"
}
I just create a new runner and paste the above code in there, and Cloud9 runs my application with my Anaconda environment's version of Python3.
The only thing I don't understand about the above code is what the "selector": "source.py" line does.
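To sanity-check which interpreter the runner is actually using, you can invoke the same binary from the Cloud9 terminal:
/home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6 -c 'import sys; print(sys.executable, sys.version)'    # should report the conda env's python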

After some testing, I realised that my previous answer prevents you from using the debugger. Building on @Sean_Calgary's answer (which is better than my original answer), you can edit one of the built-in Python runners (again, just replacing the python call with the full path to the conda env's python), like so:
{
    "script": [
        "if [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
        "else",
        " /home/tg/miniconda/envs/env-name/bin/python \"$file\" $args",
        "fi",
        "checkExitCode() {",
        " if [ $1 ] && [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
        " To use python debugger install ikpdb by running: ",
        " sudo yum update;",
        " sudo yum install python36-devel;",
        " sudo pip-3.6 install ikp3db;",
        " '",
        " fi",
        " return $1",
        "}",
        "checkExitCode $?"
    ],
    "python_version": "python3",
    "working_dir": "$project_path",
    "debugport": 15471,
    "$debugDefaultState": false,
    "debugger": "ikpdb",
    "selector": "^.*\\.(py)$",
    "env": {
        "PYTHONPATH": "$python_path"
    },
    "trackId": "Python3"
}
To do this, just click on 'runners' next to CWD in the bottom-right corner -> python3 -> edit runner -> save as 'env-name.run' in ~/.c9/runners (that 'save as' should point you to the right directory by default).
N.B.
Replace env-name with the name of your environment throughout.
You will need the package for the debugger installed in your conda env. It's called ikp3db.
You may need to check the path to your conda env's python executable by activating the environment and running which python; see the example below. (This caught me out, because my path ended in /python, not /python3.6, even though it's python 3.6 that's installed.)
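For example (assuming a miniconda install under your home directory):
source ~/miniconda/bin/activate env-name
which python    # may print .../envs/env-name/bin/python rather than .../python3.6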

You could use a 'shell script' runner type. To do this you would:
create your conda env, with python3 and any packages etc you want in it. Call it py3env
create a directory to hold your runner scripts, something like $HOME/c9_runner_scripts
put a script in there called py3env_runner.sh with code like the following (the conda.sh line is an assumption for a default Anaconda install; adjust the path to yours):
#!/bin/bash
source ~/anaconda3/etc/profile.d/conda.sh    # makes 'conda activate' available in non-interactive shells (assumed install path)
conda activate py3env
python ~/c9/my_py3_script.py
Then create a run configuration with the 'shell script' runner type and enter c9_runner_scripts/py3env_runner.sh
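Depending on how the runner invokes the script, you may also need to make it executable first:
chmod +x ~/c9_runner_scripts/py3env_runner.sh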

For me, on CentOS 7, the only way to execute with my conda Python 3.9.4 was to add a conda activate line to my .bash_profile, like this:
conda activate /var/www/my_conda/python3.9
Then in Cloud9, when I'm running my code under my conda Python 3.9 env, all is fine.
This is my simple Python code, which prints the current Python version:
import sys
print(sys.version)
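Note that a conda activate line in .bash_profile only works once conda's shell hook has been set up, which (assuming a standard Anaconda/Miniconda install) is done once with:
conda init bash    # then restart the shell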
Best.

Related

How do I properly set up my Python dependencies within Jenkins using pip and virtualenv?

I am a rookie with Jenkins, trying to set up a Python pytest test suite to run. In order to properly execute the test suite, I have to install several Python packages. I'm having trouble with this particular step, because Jenkins is consistently unable to find virtualenv and pip:
pipeline {
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'BRANCH', type: 'PT_BRANCH', quickFilterEnabled: true
    }
    agent any
    stages {
        stage('Checkout source code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '------', url: 'git@github.com:path-to-my-repo/my-test-repo.git']]])
            }
        }
        stage('Start Test Suite') {
            steps {
                sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
                echo "Checking out Test suite repo."
                sh script: 'virtualenv venv --distribute'
                sh label: 'install deps', script: '/Library/Frameworks/Python.framework/Versions/3.6/bin/pip install -r requirements.txt'
                sh label: 'execute test suite, exit upon first failure', script: 'pytest --verbose -x --junit-xml reports/results.xml'
            }
            post {
                always {
                    junit allowEmptyResults: true, testResults: 'reports/results.xml'
                }
            }
        }
    }
}
On the virtualenv venv --distribute step, Jenkins throws an error (I'm running this initially on a Jenkins server on my local instance, although in production it will be on an Amazon Linux 2 machine):
virtualenv venv --distribute /Users/Shared/Jenkins/Home/workspace/my-project-name@tmp/durable-5045c283/script.sh:
line 1: virtualenv: command not found
Why is this happening? In the step before, I make sure to prepend the directory where I know my virtualenv and pip are:
sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
For instance, when I type in
sudo su jenkins
which pip
which virtualenv
I get the following outputs as expected:
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/virtualenv
Here are the things I do know:
Jenkins runs as a user called jenkins
best practice is to create a virtual environment, activate it, and then perform my pip installations inside it
Jenkins runs sh by default, not bash (but I'm not sure if this has anything to do with my problem)
Why is Jenkins unable to find my virtualenv? What's the best practice for installing Python libraries for a Jenkins build?
Edit: I played around some more and found a working solution:
I don't know if this is the proper way to do it, but I used the following syntax:
withEnv(['PATH+EXTRA=/Library/Frameworks/Python.framework/Versions/3.6/bin/']) {
    sh script: "pip install virtualenv"
    // do other setup stuff
}
However, I'm now stuck with a new issue: I've clearly hardcoded my Python path here. If I'm running on a remote Linux machine, am I going to have to install that specific version of Python (3.6)?
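One way to avoid the hardcoded interpreter path, assuming python3 is on the agent's PATH, is to let the job create its own venv and call the venv's executables directly, with no activation step; for example, as the shell commands of a single sh step:
python3 -m venv venv                                              # create the venv with the agent's python3
./venv/bin/pip install -r requirements.txt                        # installs into the venv without activating it
./venv/bin/pytest --verbose -x --junit-xml reports/results.xml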

cassandra-snapshotter: not found

I installed cassandra-snapshotter using pip install cassandra_snapshotter. It works fine if I run it in a terminal with the command
sudo cassandra-snapshotter --s3-bucket-name=vivek-bucket --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
When I tried the same command with Ansible, it ended with this error:
"start": "2017-04-25 10:02:39.111333",
"stderr": "/bin/sh: 1: cassandra-snapshotter: not found",
"stderr_lines": [
"/bin/sh: 1: cassandra-snapshotter: not found"
]
- name: snapshot and backup
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: taking snapshot
      shell: cassandra-snapshotter --s3-bucket-name=vivek-bucket --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
pip installs executables in its own location, and that location is probably not in the search path. You can either set the PATH environment variable in your Ansible task, extending it to include that location, or just run which cassandra-snapshotter on the command line and put the full path to the cassandra-snapshotter executable in your Ansible task.
Also: I don't think you are using any 'shell' features in that cassandra-snapshotter call, so it's better to use the command module (https://docs.ansible.com/ansible/command_module.html) when possible.
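For instance, you could resolve the full path once on the target machine and hardcode it in the task; the location below is only an example of where pip often puts console scripts:
which cassandra-snapshotter    # e.g. /usr/local/bin/cassandra-snapshotter
Then call that absolute path in the shell or command task instead of relying on PATH.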

Activate Anaconda Python environment from makefile

I want to build my project's environment using a makefile and anaconda/miniconda, so I should be able to clone the repo and simply run make myproject.
myproject: build

build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes) but this doesn't work for me either
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work, and I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL=/bin/bash
CONDAROOT = /my/path/to/miniconda2
.
.
install: sometarget
	source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
You could use this; it's working for me at the moment.
report.ipynb : merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	jupyter nbconvert \
		--to notebook \
		--ExecutePreprocessor.kernel_name=python2 \
		--ExecutePreprocessor.timeout=3000 \
		--execute merged.ipynb \
		--output=$< $<" )
I had the same problem. Essentially the only solution is the one stated by 9000. I have a setup shell script inside which I set up the conda environment (source activate python2), then I call the make command. I experimented with setting up the environment from inside the Makefile, with no success.
I have this line in my makefile:
installpy :
	./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda, which I have write access to, but this is not picked up by the make process. The environment set up in my shell script using 'source' is not visible to the make process because source only changes the shell it runs in, and the script runs in a child process of make, so its changes are lost when the script exits. I just want to share this so that other people don't waste time trying to do this. I know autotools has a way of working with Python, but the make program is probably limited in this respect.
My current solution is a shell script:
cat py2make.sh
#!/bin/bash
# the prefix should be changed to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me.
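Usage is then just a matter of making the wrapper executable and running it in place of make:
chmod +x py2make.sh
./py2make.sh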
There was a solution for a similar situation, but it does not seem to work for me:
My modified Makefile segment:
installpy :
	( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
Not sure where I am wrong. If anyone has a better solution, please share it.

How do you install requirements into an arbitrary virtualenv from a Python script?

I am trying to install requirements for each project in a list automatically into its own virtualenv. I have gotten to the point of making the virtualenv correctly, but I cannot get it to activate and install requirements into only that virtualenv:
#!/usr/bin/env python
import subprocess, sys, time, os

HOMEPATH = os.path.expanduser('~')
CWD = os.getcwd()
d = {'cwd': ''}

if len(sys.argv) == 2:
    projects = sys.argv[1:]

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

def my_makedirs(path):
    if not path.startswith('/home/cchilders'):
        path = os.path.join(HOMEPATH, path)
    try: os.makedirs(path)
    except: pass

for project in projects:
    path = os.path.join(CWD, project)
    my_makedirs(path)
    git_string = 'git clone git@bitbucket.org:codyc54321/{}.git {}'.format(project, d['cwd'])
    call_sp(git_string)
    d = {'executable': 'bash'}
    call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
    # call_sp("""source /usr/local/bin/virtualenvwrapper.sh && workon {}""".format(project), **d)
    # below, the dot (.) means the same as 'source'. the dot doesn't error, calling source does
    call_sp('. /home/cchilders/.virtualenvs/{}/bin/activate'.format(project))
    d = {'cwd': path}
    call_sp("pip install -r requirements.txt", **d)
It works up to
call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
but when the script ends, I am not active in the venv, and the venv does not have any of the packages from requirements. Both attempts to source the venv (the commented-out one and the live one) fail.
The answer that helped me get the mkvirtualenv to work is subprocess.Popen: mkvirtualenv not found.
I also noticed I need to do more than just pip install; in one case I need to run 'python setup.py mycommand', which automates setup for each project. How can I run commands as if a virtualenv were activated, and install dependencies into arbitrary venvs, from a Python script?
The only way I've found around this is turning the virtualenv on by hand, then calling my Python script by hand. I was surprised: turning it on from bash worked, but calling the Python script bombed (maybe because it's a different process than the bash one).
Thank you
This is because each call_sp call creates a new shell, so after the first call to call_sp ends, all the settings created by sourcing virtualenvwrapper are gone. You have to combine all your commands into a single call_sp chain. Alternatively, you can start a shell using Popen and feed commands to it using communicate.
If you go with the latter, you need to be careful with synchronizing and detecting when the installation of requirements ends; pip can take a long time downloading and installing packages with complex dependencies.
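In other words, everything that depends on virtualenvwrapper's shell functions has to happen in one shell invocation, along these lines (the project name is just an example, and this assumes it runs in the directory containing requirements.txt):
bash -c "source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages myproject && pip install -r requirements.txt"
This works because mkvirtualenv leaves the new environment activated in that same shell, so the subsequent pip installs into it.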
This is the way I have done this kind of bootstrapping for virtual environments: let the script take care of its own env, and just run the script. Running this app.py will set up its VE and modules if missing.
./requirements.txt file
flask
./app.py script
#!/bin/bash
""":"
VENV=$(realpath -s $(dirname $0)/ve)
PYTHON=$VENV/bin/python
if [ ! -f "$PYTHON" ]; then
    echo "installing env app"
    python3 -m venv $VENV
    ${VENV}/bin/pip install -r $(dirname $0)/requirements.txt
fi
exec $PYTHON $0 $@
"""
import flask
print("I am Python with flask", flask)
No matter what dir we are in, app.py bootstraps through the bash script header, installing a ve if the python does not exist, running pip, and whatever else you need. Then exec $PYTHON $0 $@ is a slick way to swap the bash process out for the python process while keeping the same pid.
When python takes over, it skips the bash part, because that script is inside a triple-quoted string; so the first line python executes is import flask (well, it discards the bash script string first). Another cool thing is that the pid of the bash process is the same as the pid of the python process, so any daemon utility that babysits this will still see the pid it started.
The last trick is that bash needs one extra quote to balance its string """:" at the top; Python does not care about that extra quote.
I hope you see the pattern. To upgrade modules in requirements.txt, just rm the ve and run the app again. Simple.
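So a typical first and second run look like this (the first run creates ./ve and installs the requirements; later runs go straight to the Python code):
chmod +x app.py
./app.py    # first run: builds the venv, installs flask, then re-execs itself
./app.py    # subsequent runs: executes immediately under ./ve/bin/python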

Cloud9 IDE to run python3 with venv

I'm trying to use a custom runner in Cloud9 to launch a project under python 3.4 using a virtual environment installed in the same directory, but it doesn't work. The runner doesn't detect my dependencies, which presumably means it isn't activating the venv properly.
// Create a custom Cloud9 runner - similar to the Sublime build system
// For more information see https://docs.c9.io/custom_runners.html
{
    "cmd": [
        "bash",
        "--login",
        "-c",
        "source bin/activate && python oric.py"
    ],
    "working_dir": "$project_path",
    "info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m"
}
Any thoughts on what's wrong? Many thanks
From start to finish:
Create a virtual environment:
$ virtualenv -p /usr/bin/python36 vpy36
Install Python package into virtual environment:
$ source vpy36/bin/activate
$ pip3 install tweepy
Create Runner:
Navigate the menu to create the runner
Create .run File
Copy and paste the example code below into your .run file. This will allow both normal and debug executions of your venv.
// This file overrides the built-in Python 3 runner
// For more information see http://docs.aws.amazon.com/console/cloud9/change-runner
{
    "script": [
        "if [ \"$debug\" == true ]; then ",
        " /home/ec2-user/environment/venvpy36/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
        "else",
        " /home/ec2-user/environment/venvpy36/bin/python \"$file\" $args",
        "fi",
        "checkExitCode() {",
        " if [ $1 ] && [ \"$debug\" == true ]; then ",
        " /home/ec2-user/environment/venvpy36/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
        " To use python debugger install ikpdb by running: ",
        " sudo yum update;",
        " sudo yum install python36-devel;",
        " source /home/ec2-user/environment/venvpy36/bin/activate",
        " sudo pip-3.6 install ikp3db;",
        " deactivate",
        " '",
        " fi",
        " return $1",
        "}",
        "checkExitCode $?"
    ],
    "python_version": "/home/ec2-user/environment/venvpy36/bin/python",
    "working_dir": "$project_path",
    "debugport": 15471,
    "$debugDefaultState": false,
    "debugger": "ikpdb",
    "selector": "^.*\\.(py)$",
    "env": {
        "PYTHONPATH": "$python_path"
    },
    "trackId": "/home/ec2-user/environment/venvpy36/bin/python"
}
If you placed your venv in a different directory during step 1, find and replace all references to "/home/ec2-user/environment/venvpy36/bin" with your own venv bin directory, and the code should work for you.
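If you prefer to do that from the terminal, a sed one-liner can handle the replacement (this assumes the runner was saved as vpy36.run under ~/.c9/runners; adjust both paths):
sed -i 's|/home/ec2-user/environment/venvpy36/bin|/path/to/your/venv/bin|g' ~/.c9/runners/vpy36.run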
Finally, save the file.
Select the Runner and Run the File:
Select your runner (in this example, "vpy36"). Then click "Run" and it should work.
I use virtualenv on Cloud9 and it works fine for me. Cloud9 workspaces seem to come with virtualenvwrapper pre-installed (at least, the Django workspace does), so if you create a virtualenv with:
$ mkvirtualenv foo
Then, you can create your runner like so, for example:
{
    "cmd": [
        "bash",
        "--login",
        "-c",
        "source /home/ubuntu/.virtualenvs/foo/bin/activate && python whatever.py"
    ],
    # ... rest of the configuration
}
I got Cloud9 to use virtualenv by just setting the environment vars directly, instead of trying to source the activate script.
{
    "cmd": [
        "/var/lib/cloud9/venv/bin/python",
        "$file",
        "$args"
    ],
    "selector": "^.*\\.(python|py)$",
    "env": {
        "PYTHONPATH": "/var/lib/cloud9/venv/lib/python3.5/site-packages",
        "VIRTUAL_ENV": "/var/lib/cloud9/venv",
        "PATH": "/var/lib/cloud9/venv/bin:$PATH"
    }
}
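A quick way to confirm this runner really uses the venv (same paths as above) is:
/var/lib/cloud9/venv/bin/python -c 'import sys; print(sys.prefix)'    # should print /var/lib/cloud9/venv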
