I want to use a makefile to build my project's environment with anaconda/miniconda, so I should be able to clone the repo and simply run make myproject
myproject: build

build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error:
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes), but it doesn't work for me either:
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work, and I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL=/bin/bash
CONDAROOT = /my/path/to/miniconda2
...

install: sometarget
	source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
You can use this; it works for me at the moment.
report.ipynb : merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	jupyter nbconvert \
		--to notebook \
		--ExecutePreprocessor.kernel_name=python2 \
		--ExecutePreprocessor.timeout=3000 \
		--execute merged.ipynb \
		--output=$< $<" )
I had the same problem. Essentially the only solution is the one stated by 9000. I have a setup shell script inside which I set up the conda environment (source activate python2); then I call the make command. I experimented with setting up the environment from inside the Makefile, with no success.
I have this line in my makefile:
installpy :
	./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda installation, to which I have write access. But this is not picked up by the make process. I don't understand why the environment set up in my shell script using 'source' is not visible in the make process; the source command is supposed to change the current shell. I just want to share this so that other people don't waste time trying to do this. I know autotools has a way of working with python. But the make program is probably limited in this respect.
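To see the mechanism, here is a minimal sketch (assuming a bash-like shell; demo.sh is a hypothetical stand-in for setuppython2.sh):
# demo.sh exports into ITS OWN shell process, which exits when it ends
printf '#!/bin/bash\nexport FOO=from_script\n' > demo.sh && chmod +x demo.sh

./demo.sh && echo "FOO=$FOO"     # prints "FOO=" - the export died with the child
. ./demo.sh && echo "FOO=$FOO"   # prints "FOO=from_script" - sourcing runs in the current shell
The same applies to ./setuppython2.sh && python setup.py install: the script cannot hand its activated environment to the python that runs after it.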
My current solution is a shell script:
cat py2make.sh
#!/bin/bash
# bash, not sh: 'source' is a bash builtin
# the prefix should be changed to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me.
There was a solution for a similar situation, but it does not seem to work for me:
My modified Makefile segment:
installpy :
	( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
I am not sure where I went wrong. If anyone has a better solution, please share it.
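For reference, a sketch that may avoid both errors, combining the SHELL and full-activation-path advice from the answer near the top (untested; CONDAROOT and the python2 env name are the values used above):
SHELL = /bin/bash
CONDAROOT = /my/path/to/miniconda2

installpy :
	source $(CONDAROOT)/bin/activate python2 && python setup.py install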
Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I face an issue: there are several python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 allows me to use it with 2 drawbacks: I cannot use another script (I could build a second docker image but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path
#######################################
# MetagenomeSimulationPipeline #
#######################################
Pipeline for the simulation of a metagenome
optional arguments:
-h, --help show this help message and exit
-silent, --silent Hide unimportant Progress Messages.
-debug, --debug_mode more information, also temporary data will not be deleted
-log LOGFILE, --logfile LOGFILE
output will also be written to this log file
optional config arguments:
-seed SEED seed for random number generators
-s {0,1,2}, --phase {0,1,2}
available options: 0,1,2. Default: 0
0 -> Full run,
1 -> Only Comunity creation,
2 -> Only Readsimulator
-id DATA_SET_ID, --data_set_id DATA_SET_ID
id of the dataset, part of prefix of read/contig sequence ids
-p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
number of available processors
required:
config_file path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works but shows the error as above, so not much better. Said error is not present when using the tool outside of docker. Last time I made a Docker image I just pulled the git repo and added the main .sh script as an ENTRYPOINT and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need ADD and to move everything? I added the git folder to the PATH, so why can't I find the scripts? How is this different from the Metadbgwas image?
In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
Is it executable? If you run ls -l metagenomesimulation.py, are there x bits in the permission listing? If not, on the host system, run chmod +x metagenomesimulation.py and commit that to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; since you add the directory to $PATH as well, you can probably run it without specifying the ./... file location.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.
In the end very little was required and the original Dockerfile was correct; the same error is displayed anyway, and that is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and actually interpret the script instead of having python look for it in its own path. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
git clone https://github.com/CAMI-challenge/CAMISIM.git && \
pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.
I have the following Makefile:
PYTHON = python
.DEFAULT_GOAL = help
help:
	@echo ------------------------------Makefile for Flask app------------------------------
	@echo USAGE:
	@echo make dependencies    Install all project dependencies
	@echo make docker          Run Docker
	@echo make env             Set environment variables
	@echo make run             Run Flask app
	@echo make test            Run tests for app
	@echo ----------------------------------------------------------------------------------

dependencies:
	@pip install -r requirements.txt
	@pip install -r dev-requirements.txt

docker:
	docker compose up

env:
	@set CS_HOST_PORT=5000
	@set CS_HOST_IP=127.0.0.1
	@set DATABASE_URL=postgresql://lv-python-mc:575@127.0.0.1:5482/Realty_DB
	@set REDIS_IP=127.0.0.1
	@set REDIS_PORT=6379

run:
	${PYTHON} app.py

test:
	@${PYTHON} -m pytest
The set command doesn't work and the environment variables aren't set. What may be the problem?
You can certainly set environment variables that will be in effect for programs make will invoke. But make cannot set environment variables for shells that invoke make. So if your makefile runs a program then you can set an environment variable in your makefile that will be visible in that program.
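For instance, a minimal sketch of the direction that does work, reusing the variable names and values from your Makefile:
# exported make variables are placed in the environment of every
# command make runs...
export CS_HOST_PORT = 5000
export CS_HOST_IP   = 127.0.0.1

run:
	${PYTHON} app.py   # app.py sees CS_HOST_PORT and CS_HOST_IP
# ...but nothing in a makefile can alter the shell that invoked make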
This has nothing to do with make, by the way. This is a limitation (or feature, depending on your perspective) of the operating system. Try this experiment:
Open a terminal.
Run set FOO=bar
Run echo %FOO%. See that it prints bar.
From that same terminal start a new shell by running cmd.exe
Now here run set FOO=nobar
Run echo %FOO%. See that it prints nobar.
Now exit the new shell by running exit
Now run echo %FOO%
You'll see that instead of nobar, it still prints bar. That's because the OS does not allow a child program to modify the environment of its parent program.
So, there's nothing make can do about this.
I'm writing a Dockerfile based on windowsnanoserver. I need to add git to this image. In order to achieve it I did the following:
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.12.2.windows.2/Git-2.12.2.2-64-bit.exe'
RUN Invoke-Expression "c:\Git-2.12.2.2-64-bit.exe"
But when I execute these lines via docker build, I receive the following error message:
Invoke-Expression : The term 'c:\Git-2.12.2.2-64-bit.exe' is not
recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included,
verify that the path is correct and try again.
I realize that this error message indicates that, due to the console nature of Windows docker images, I'll not be able to execute GUI installers. Unfortunately git doesn't have a console installer. Chocolatey works fine under the windowsservercore image but doesn't work on windowsnanoserver. In order to install git for windowsnanoserver my idea is to repeat in the Dockerfile the commands from the chocolatey git installer, which is fine for me, but I'd still like to know: is there any simpler way to install git on windowsnanoserver?
I've solved the GUI issue through the use of MinGit and by putting MinGit's location into the PATH environment variable. I used the following approach:
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.12.2.windows.2/MinGit-2.12.2.2-64-bit.zip' -OutFile MinGit.zip
RUN Expand-Archive c:\MinGit.zip -DestinationPath c:\MinGit; \
$env:PATH = $env:PATH + ';C:\MinGit\cmd\;C:\MinGit\cmd'; \
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH
You are correct, both Windows and Linux containers generally focus on running headless applications (i.e. without GUI).
It sounds like you want to create a container image based on the nanoserver image that has git?
Chocolatey is a great idea.
If you give me the broader context of your goals I can help you further.
Cheers :)
Installing into the docker image using Chocolatey worked for me, as per the answer from ehong. I added these lines to my Dockerfile:
ENV ChocolateyUseWindowsCompression false
RUN powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
RUN choco install git.install -y --no-progress
Call the git.setup.exe installation file with the parameter /? to list all possible switches.
To run a silent installation:
git.setup.exe /VERYSILENT /NORESTART /NOCANCEL /SP- /CLOSEAPPLICATIONS /RESTARTAPPLICATIONS
To do a customized installation:
run the git installation manually with the parameter /SAVEINF="filename",
e.g.: git-2.xx.exe /SAVEINF="filename"
and then repeat the installation with /LOADINF="filename",
e.g.: git.setup.exe /VERYSILENT /NORESTART /NOCANCEL /SP- /CLOSEAPPLICATIONS /RESTARTAPPLICATIONS /LOADINF="filename"
It's documented on:
Git: Silent-or-Unattended-Installation
You can download and use the Git Thumbdrive edition:
https://git-scm.com/download/win
look for the link under:
Git for Windows Portable ("thumbdrive edition")
e.g.: https://github.com/git-for-windows/git/releases/download/v2.23.0.windows.1/PortableGit-2.23.0-64-bit.7z.exe
Based on the answer from @Mariusz, the following lines install git into a Windows image:
# copy inf file
COPY resources/git-install.inf c:\git-install.inf
# get Git install file
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.30.1.windows.1/Git-2.30.1-64-bit.exe' -OutFile 'git.exe'; `
# install Git
Start-Process "c:\git.exe" -ArgumentList '/SP-', '/VERYSILENT', '/NORESTART', '/NOCANCEL', '/CLOSEAPPLICATIONS', '/RESTARTAPPLICATIONS', '/LOADINF=git-install.inf' -Wait -NoNewWindow; `
# delete files
Remove-Item -Force git-install.inf; `
Remove-Item -Force git.exe;
I would like to permanently set a conda environment in my docker image, so that the functions of the conda packages can be used by the script given as argument to the entrypoint.
This is the dockerfile that I created.
FROM continuumio/anaconda3
RUN conda create -n myenv
RUN echo "source activate myenv" > ~/.bashrc
ENV PATH="/opt/conda/envs/myenv/bin:$PATH"
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["python3"]
It seems that the ~/.bashrc file is not sourced when I run the docker container. Am I doing something wrong?
Thank you
As a workaround, either use 'SHELL ["/bin/bash", "-i", "--login", "-c"]'
-or-
edit the .bashrc file in the image so it does not exit when not in interactive mode, by changing "*) return;;" to read "*) ;;"
Using the first option, bash will complain about job control and ttys, but the error can be ignored.
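For example, a minimal sketch of the first workaround applied to the question's Dockerfile (an assumption on my part: continuumio/anaconda3 is Debian-based, where the login shell's ~/.profile sources ~/.bashrc):
FROM continuumio/anaconda3
RUN conda create -n myenv
RUN echo "source activate myenv" > ~/.bashrc
# -i forces an interactive shell, so the startup files (and with them the
# activation line above) are read for every subsequent RUN command
SHELL ["/bin/bash", "-i", "--login", "-c"]
RUN echo $CONDA_DEFAULT_ENV   # prints "myenv" if the activation took effect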
cause of the issue:
the .bashrc file contains the following command:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
which causes bash to stop sourcing the file if not in interactive mode (hence the -i flag).
Unfortunately, I haven't found a way for the conda stanza to be inserted into .bash_profile or .profile automatically instead of (or in addition to) .bashrc, as there doesn't seem to be an option to override or add to the list of what files conda init examines for modification.
I am trying to install requirements for each project in a list automatically into its own virtualenv. I have gotten to the point of making the virtualenv correctly, but I cannot get it to activate and install requirements into only that virtualenv:
#!/usr/bin/env python
import subprocess, sys, time, os

HOMEPATH = os.path.expanduser('~')
CWD = os.getcwd()
d = {'cwd': ''}

if len(sys.argv) == 2:
    projects = sys.argv[1:]

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

def my_makedirs(path):
    if not path.startswith('/home/cchilders'):
        path = os.path.join(HOMEPATH, path)
    try: os.makedirs(path)
    except: pass

for project in projects:
    path = os.path.join(CWD, project)
    my_makedirs(path)
    git_string = 'git clone git@bitbucket.org:codyc54321/{}.git {}'.format(project, d['cwd'])
    call_sp(git_string)
    d = {'executable': 'bash'}
    call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
    # call_sp("""source /usr/local/bin/virtualenvwrapper.sh && workon {}""".format(project), **d)
    # below, the dot (.) means the same as 'source'. the dot doesn't error, calling source does
    call_sp('. /home/cchilders/.virtualenvs/{}/bin/activate'.format(project))
    d = {'cwd': path}
    call_sp("pip install -r requirements.txt", **d)
It works up to
call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
but when the script ends, I am not active in the venv and the venv does not have any packages from requirements. Both efforts to source the venv (the commented-out one and the live one) fail.
The answer that helped me get the mkvirtualenv to work is subprocess.Popen: mkvirtualenv not found.
I also noticed I need to do more than just pip install: in one case I need to run 'python setup.py mycommand', which automates setup for each project. How can I run commands as if a virtualenv were activated, and install dependencies into arbitrary venvs, from a python script?
The only way I've found around this is turning the virtualenv on by hand, then calling my python script by hand. I was surprised that turning it on from bash worked, but calling the python script bombed (maybe because it's a different process than the bash one).
Thank you
This is because each call_sp call creates a new shell, so after the first call to call_sp ends, all the settings created by sourcing virtualenvwrapper are gone. You have to combine all your commands into a single call_sp chain. Otherwise you can start a shell using Popen and feed commands to it using communicate.
If you go with the latter you need to be careful with synchronizing and detecting when the installation of requirements ends. Pip can take a long time downloading and installing packages with complex dependencies.
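For the single-chain option, a minimal sketch reusing call_sp, project, and path from the question's script (untested):
# one bash process runs the whole chain, so the virtualenvwrapper functions
# and the env activated by mkvirtualenv stay live until pip finishes
call_sp(
    'source /usr/local/bin/virtualenvwrapper.sh && '
    'mkvirtualenv --no-site-packages {0} && '
    'pip install -r requirements.txt'.format(project),
    executable='bash', cwd=path
)
mkvirtualenv activates the environment it creates within that same shell, which is why the pip install lands in the right venv.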
This is the way I have done this kind of bootstrapping for virtual environments. Let the script take care of its own env and just run the script. Running this app.py will set up its VE and modules if missing.
./requirements.txt file
flask
./app.py script
#!/bin/bash
""":"
VENV=$(realpath -s $(dirname $0)/ve)
PYTHON=$VENV/bin/python
if [ ! -f "$PYTHON" ]; then
echo "installing env app"
python3 -m venv $VENV
${VENV}/bin/pip install -r $(dirname $0)/requirements.txt
fi
exec $PYTHON $0 $#
"""
import flask
print("I am Python with flask", flask)
No matter what dir we are in, app.py bootstraps through the bash script header, installing a ve if python does not exist, running pip, and whatever else you need. Then exec $PYTHON $0 "$@" is a slick way to swap out the bash process for the python process while keeping the same pid.
When python takes over, it skips the bash part because that script is in a triple-quoted string. So the first line python executes is import flask (well, it discards the bash script string first). Another cool thing is that the pid of the bash process is the same as the pid of the python process, so any daemon utility that babysits this will still see the pid it started.
The last trick is that bash needs one extra quote to balance its string """:" at the top. Python does not care about that extra quote.
I hope you see the pattern. To upgrade modules in requirements.txt, just rm the ve and run the app again. Simple.
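A hypothetical session, assuming app.py and requirements.txt sit side by side:
chmod +x app.py
./app.py                 # first run: creates ./ve, pip-installs requirements, then runs
./app.py                 # later runs: the ve python exists, so it execs straight into python
rm -rf ve && ./app.py    # rebuild the env, e.g. after editing requirements.txt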