How do I set an environment variable in a Makefile for Windows? (python)

I have the following Makefile:
PYTHON = python
.DEFAULT_GOAL = help

help:
	@echo ------------------------------Makefile for Flask app------------------------------
	@echo USAGE:
	@echo make dependencies    Install all project dependencies
	@echo make docker          Run Docker
	@echo make env             Set environment variables
	@echo make run             Run Flask app
	@echo make test            Run tests for app
	@echo ----------------------------------------------------------------------------------

dependencies:
	@pip install -r requirements.txt
	@pip install -r dev-requirements.txt

docker:
	docker compose up

env:
	@set CS_HOST_PORT=5000
	@set CS_HOST_IP=127.0.0.1
	@set DATABASE_URL=postgresql://lv-python-mc:575@127.0.0.1:5482/Realty_DB
	@set REDIS_IP=127.0.0.1
	@set REDIS_PORT=6379

run:
	@${PYTHON} app.py

test:
	@${PYTHON} -m pytest
The set command doesn't work and the environment variables aren't set. What could the problem be?

You can certainly set environment variables that will be in effect for programs make will invoke. But make cannot set environment variables for shells that invoke make. So if your makefile runs a program then you can set an environment variable in your makefile that will be visible in that program.
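For example, here is a minimal sketch (assuming GNU make) using make's own export directive, which puts a variable into the environment of every program a recipe runs:

# exported by make itself, so ${PYTHON} sees these at runtime
export CS_HOST_PORT = 5000
export CS_HOST_IP = 127.0.0.1

run:
	${PYTHON} app.py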
This has nothing to do with make, by the way. This is a limitation (or feature, depending on your perspective) of the operating system. Try this experiment:
Open a terminal.
Run set FOO=bar
Run echo %FOO%. See that it prints bar.
From that same terminal start a new shell by running cmd.exe
Now here run set FOO=nobar
Run echo %FOO%. See that it prints nobar.
Now exit the new shell by running exit
Now run echo %FOO%
You'll see that instead of nobar, it still prints bar. That's because the OS does not allow a child program to modify the environment of its parent program.
So, there's nothing make can do about this.

Related

Passing of environment variables with Dockerfile RUN instead of CMD

I have a Dockerfile where a few commands need to be executed in a row, not in parallel or asynchronously, so cmd1 finishes, cmd2 starts, etc.
Dockerfile's RUN is perfect for that. However, one of those RUN commands uses environment variables, meaning I'm calling os.getenv at some point. Sadly, it seems that when passing environment variables, be it through the CLI itself or with the help of a .env file, only CMD works, not RUN. But CMD launches concurrently: the container executes this command but goes right on to the next one, which I definitely don't want.
In conclusion, is there even a way to pass environment variables to RUN commands in a Dockerfile?
To help understand a bit better, here's an excerpt from my Dockerfile:
FROM python:3.8
# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create working directory
RUN mkdir -p /usr/src/my_directory
WORKDIR /usr/src/my_directory
# Copy contents
COPY . /usr/src/my_directory
# RUN calling a method that calls os.getenv at some point (THIS IS THE PROBLEM)
RUN ["python3", "some_script.py"]
# RUN some other commands (this needs to run AFTER the command above finishes)
# if I replace the RUN above with CMD, this gets called right after
RUN ["python3", "some_other_script.py", "--param", "1", "--param2", "config.yaml"]
Excerpt from some_script.py:
import os

if __name__ == "__main__":
    abc = os.getenv("my_env_var")  # this is where I get a ReferenceError if I use RUN
    do_some_other_stuff(abc)
The .env file I'm using with the dockerfile (or docker-compose):
my_env_var=some_url_i_need_for_stuff
Do not use the exec form of a RUN instruction if you want variable substitution, or else use the exec form to execute a shell explicitly. From the documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
This is how I solved my problem (a sketch follows this list):
write a bash script that executes all relevant commands in the nice order that I want
use ENTRYPOINT instead of CMD or RUN
the bash script will already have the ENV vars, but you can double-check with positional arguments passed to that bash script
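A minimal sketch of that approach; the script name run_all.sh and its contents are my own illustration, not the poster's exact files:

COPY run_all.sh /run_all.sh
ENTRYPOINT ["/bin/bash", "/run_all.sh"]

where run_all.sh runs the steps strictly in order at container start, when runtime variables (e.g. from an .env file or --env-file) are visible:

#!/bin/bash
set -e  # stop at the first failure so the order is strictly respected
python3 some_script.py
python3 some_other_script.py --param 1 --param2 config.yaml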

Setting Docker ENV using python -c "command"

The Python modules that I downloaded are inside the user's home directory, so I need to add the user's Python bin directory to the path. I tried the two approaches shown below in my Dockerfile, but to no avail. When I check the environment variable in the running container, in the first case PY_USER_BIN is literally $(python -c 'import site; print(site.USER_BASE + "/bin")'), and in the second case PY_USER_BIN is blank. However, when I manually export the PY_USER_BIN variable, it works.
ENV PY_USER_BIN $(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
and
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")')
ENV PATH $PY_USER_BIN:$PATH
To me, you are mixing different contexts of execution.
The ENV command you use is a Dockerfile instruction; it sets an environment variable in the Docker build context that is forwarded to the container.
The RUN command executes a command inside the container, here export. Whatever is done inside the container stays inside the container, and Docker will not have access to it.
There is no point in giving Docker an ENV variable for where Python is on the host, as host and container don't share the same file system. If you need to do it in the container context, then run these commands inside the container with standard shell commands.
Try them first by connecting to your container and running a shell inside it; once the commands work, put them in your Dockerfile. It's as simple as that. To do that, run:
docker run -ti [your container name/tag] [your shell]
If you use sh as the shell:
docker run -ti [your container name/tag] sh
Then try your commands.
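For instance, a hypothetical session (the printed path is an assumption; verify it in your own image):

docker run -ti [your container name/tag] sh
# inside the container:
python -c 'import site; print(site.USER_BASE + "/bin")'
/root/.local/bin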
It seems the commands you want would look like this:
RUN export PY_USER_BIN=$(python -c 'import site; print(site.USER_BASE + "/bin")')
RUN export PATH=$PY_USER_BIN:$PATH
(Note that each RUN executes in its own shell, so an export does not persist into later RUN instructions or into the final container; it only helps within a single RUN line.)
Anyway, the point of a container is to have a fixed file system, fixed user names and so on, so the user bin will always be at the same path inside the container; in 99% of cases you could just as well hardcode it.
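A sketch of the hardcoded variant, assuming the user base resolves to /root/.local (check with python -c 'import site; print(site.USER_BASE)' inside the container first):

# path verified interactively inside the container beforehand
ENV PY_USER_BIN /root/.local/bin
ENV PATH $PY_USER_BIN:$PATH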

Problems in activating conda environment in Docker

I would like to permanently set a conda environment in my Docker image, so that the functions of the conda packages can be used by the script given as an argument to the entrypoint.
This is the Dockerfile that I created:
FROM continuumio/anaconda3
RUN conda create -n myenv
RUN echo "source activate myenv" > ~/.bashrc
ENV PATH:="/opt/conda/envs/myenv/bin:$PATH"
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["python3"]
It seems that the ~/.bashrc file is not sourced when I run the docker container. Am I doing something wrong?
Thank you
As a workaround, either use SHELL ["/bin/bash", "-i", "--login", "-c"]
-or-
edit the .bashrc file in the image so it does not bail out in non-interactive mode, by changing "*) return;;" to read "*) ;;"
Using the first option, bash will complain about job control and ttys, but the error can be ignored.
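A sketch of the first workaround applied to the Dockerfile above (my adaptation, not necessarily the poster's exact file; the ENTRYPOINT trick passes run-time arguments through to python3):

FROM continuumio/anaconda3
RUN conda create -n myenv
RUN echo "source activate myenv" > ~/.bashrc
# interactive login shell: ~/.bashrc (and thus "source activate myenv") is sourced
SHELL ["/bin/bash", "-i", "--login", "-c"]
# "$@" receives the arguments given after the image name on docker run
ENTRYPOINT ["/bin/bash", "-i", "--login", "-c", "python3 \"$@\"", "--"]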
The cause of the issue: the .bashrc file contains the following lines:
# If not running interactively, don't do anything
case $- in
    *i*) ;;
    *) return;;
esac
which cause bash to stop sourcing the file when not in interactive mode (hence the -i flag).
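After the second workaround's edit, the non-interactive branch becomes a no-op and sourcing continues:

# If not running interactively, don't do anything
case $- in
    *i*) ;;
    *) ;;
esac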
Unfortunately, I haven't found a way for the conda stanza to be inserted into .bash_profile or .profile automatically instead of (or in addition to) .bashrc, as there doesn't seem to be an option to override or add to the list of what files conda init examines for modification.

Activate Anaconda Python environment from makefile

I want to build my project's environment using a makefile and anaconda/miniconda, so I should be able to clone the repo and simply run make myproject:
myproject: build

build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error:
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes), but it doesn't work for me either:
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work, and I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL = /bin/bash
CONDAROOT = /my/path/to/miniconda2
...
install: sometarget
	source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
You can use this; it's working for me at the moment:
report.ipynb : merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	  jupyter nbconvert \
	    --to notebook \
	    --ExecutePreprocessor.kernel_name=python2 \
	    --ExecutePreprocessor.timeout=3000 \
	    --execute merged.ipynb \
	    --output=$< $<" )
I had the same problem. Essentially the only solution is the one stated by 9000: I have a setup shell script inside which I set up the conda environment (source activate python2), then I call the make command. I experimented with setting up the environment from inside the Makefile, with no success.
I have this line in my makefile:
installpy :
	./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda, which I have write access to, but this is not picked up by the make process. I don't understand why the environment set up in my shell script using source is not visible in the make process; the source command is supposed to change the current shell. I just want to share this so that other people don't waste time trying the same thing. I know autotools has a way of working with Python, but the make program is probably limited in this respect.
My current solution is a shell script:
cat py2make.sh
#!/bin/sh
# the prefix should be change to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me.
There was a solution for a similar situation, but it does not seem to work for me:
My modified Makefile segment:
installpy :
	( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
Not sure where I am wrong. If anyone has a better solution, please share it.

How do you install requirements into an arbitrary virtualenv from a Python script?

I am trying to automatically install the requirements for each project in a list into that project's own virtualenv. I have gotten to the point of making the virtualenv correctly, but I cannot get it to activate and install requirements into only that virtualenv:
#!/usr/bin/env python
import subprocess, sys, time, os

HOMEPATH = os.path.expanduser('~')
CWD = os.getcwd()

d = {'cwd': ''}

if len(sys.argv) == 2:
    projects = sys.argv[1:]

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

def my_makedirs(path):
    if not path.startswith('/home/cchilders'):
        path = os.path.join(HOMEPATH, path)
    try: os.makedirs(path)
    except: pass

for project in projects:
    path = os.path.join(CWD, project)
    my_makedirs(path)
    git_string = 'git clone git@bitbucket.org:codyc54321/{}.git {}'.format(project, d['cwd'])
    call_sp(git_string)

    d = {'executable': 'bash'}
    call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
    # call_sp("""source /usr/local/bin/virtualenvwrapper.sh && workon {}""".format(project), **d)
    # below, the dot (.) means the same as 'source'. the dot doesn't error, calling source does
    call_sp('. /home/cchilders/.virtualenvs/{}/bin/activate'.format(project))

    d = {'cwd': path}
    call_sp("pip install -r requirements.txt", **d)
It works up to
call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
but when the script ends, I am not active in the venv and the venv does not have any packages from requirements. Both attempts to source the venv (the commented-out one and the live one) fail.
The answer that helped me get the mkvirtualenv to work is subprocess.Popen: mkvirtualenv not found.
I also noticed I need to do more than just pip install; in one case I need to run python setup.py mycommand, which automates setup for each project. How can I run commands as if a virtualenv were activated, and also install dependencies into arbitrary venvs, from a Python script?
The only way I've found around this is turning the virtualenv on by hand, then calling my Python script by hand. I was surprised: turning it on from bash worked, but calling the Python script bombed (maybe because it's a different process than the bash one).
Thank you
This is because each call_sp call creates a new shell, so after the first call to call_sp ends, all the settings created by sourcing virtualenvwrapper are gone. You have to combine all your commands into a single call_sp chain. Otherwise, you can start a shell using Popen and feed commands to it using communicate.
If you go with the latter, you need to be careful with synchronizing and detecting when the installation of requirements ends. pip can take a long time downloading and installing packages with complex dependencies.
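For example, a minimal sketch of the single-chain approach, reusing the call_sp helper from the question (the exact command list is illustrative):

# one bash process, so the env activated by mkvirtualenv is still
# in effect when pip runs; cwd puts us where requirements.txt lives
call_sp(
    "source /usr/local/bin/virtualenvwrapper.sh && "
    "mkvirtualenv --no-site-packages {0} && "
    "pip install -r requirements.txt".format(project),
    executable='bash', cwd=path)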
This is the way I have done this kind of bootstrapping for virtual environments. Let the script take care of its own env and just run the script. Running this app.py will set up its VE and modules if they are missing.
./requirements.txt file
flask
./app.py script
#!/bin/bash
""":"
VENV=$(realpath -s $(dirname $0)/ve)
PYTHON=$VENV/bin/python
if [ ! -f "$PYTHON" ]; then
    echo "installing env app"
    python3 -m venv $VENV
    ${VENV}/bin/pip install -r $(dirname $0)/requirements.txt
fi
exec $PYTHON $0 "$@"
"""
import flask
print("I am Python with flask", flask)
import flask
print("I am Python with flask", flask)
No matter what directory we are in, app.py bootstraps through the bash script header, installing a ve if the python interpreter does not exist, running pip, and whatever else you need. Then exec $PYTHON $0 "$@" is a slick way to swap out the bash process for the python process while keeping the same pid.
When python takes over, it skips the bash part because that script is inside a triple-quoted string. So the first statement python executes is import flask (well, it discards the bash script string first). Another cool thing is that the pid of the bash process is the same as the pid of the python process, so any daemon utility that babysits this will still see the pid it started.
The last trick is that bash needs one extra quote to balance its string """:" at the top. Python does not care about that extra quote.
I hope you see the pattern. To upgrade the modules in requirements.txt, just rm the ve directory and run the app again. Simple.
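Assuming app.py is executable, a first run might look something like this (pip output abridged; the exact flask module path will vary):

chmod +x app.py
./app.py
installing env app
...
I am Python with flask <module 'flask' from '.../ve/lib/python3.x/site-packages/flask/__init__.py'>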
