Dockerfile with git installation for Windows [duplicate] - python

I'm writing a Dockerfile based on windowsnanoserver, and I need to add git to this image. To achieve this I did the following:
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.12.2.windows.2/Git-2.12.2.2-64-bit.exe'
RUN Invoke-Expression "c:\Git-2.12.2.2-64-bit.exe"
But when I execute these lines via docker build, I receive the following error message:
Invoke-Expression : The term 'c:\Git-2.12.2.2-64-bit.exe' is not
recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included,
verify that the path is correct and try again.
I realize this error message indicates that, due to the console-only nature of Windows Docker images, I won't be able to execute GUI installers. Unfortunately, git doesn't have a console installer. Chocolatey works fine under the windowsservercore image but doesn't work on windowsnanoserver. To install git on windowsnanoserver I could repeat the commands from the Chocolatey git installer in my Dockerfile, which would be fine, but I'd still like to know: is there a simpler way to install git on windowsnanoserver?

I've solved the GUI issue by using MinGit and adding its location to the PATH environment variable. I used the following approach:
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.12.2.windows.2/MinGit-2.12.2.2-64-bit.zip' -OutFile MinGit.zip
RUN Expand-Archive c:\MinGit.zip -DestinationPath c:\MinGit; \
$env:PATH = $env:PATH + ';C:\MinGit\cmd'; \
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH
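For reference, here is the same approach as one consolidated Dockerfile. A minimal sketch, assuming a nanoserver tag that still ships PowerShell (the base image name is illustrative):
# escape=`
FROM microsoft/nanoserver
SHELL ["powershell", "-Command"]
# download and unpack MinGit, then remove the zip to keep the layer small
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.12.2.windows.2/MinGit-2.12.2.2-64-bit.zip' -OutFile MinGit.zip; `
    Expand-Archive C:\MinGit.zip -DestinationPath C:\MinGit; `
    Remove-Item C:\MinGit.zip
# persist the PATH change in the registry so later build steps and containers see it
RUN $env:PATH = $env:PATH + ';C:\MinGit\cmd'; `
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\' -Name Path -Value $env:PATH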

You are correct, both Windows and Linux containers generally focus on running headless applications (i.e. without GUI).
It sounds like you want to create a container image based on the nanoserver image that has git?
Chocolatey is a great idea.
If you give me the broader context of your goals I can help you further.
Cheers :)

Installing Chocolatey into the docker image worked for me, following ehong's approach.
I added these lines to my Dockerfile:
ENV ChocolateyUseWindowsCompression false
RUN powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
RUN choco install git.install -y --no-progress
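To confirm during the build that git actually landed on the PATH, a quick optional sanity check can follow (assuming the default package parameters, which add git to the machine PATH):
RUN git --version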

Call the git.setup.exe installation file with the parameter /? to list all possible switches.
To run a silent installation:
git.setup.exe /VERYSILENT /NORESTART /NOCANCEL /SP- /CLOSEAPPLICATIONS /RESTARTAPPLICATIONS
To do a customized installation:
run the git installation manually once with the parameter /SAVEINF="filename",
e.g.: git-2.xx.exe /SAVEINF="filename"
and then repeat the installation with /LOADINF="filename",
e.g.: git.setup.exe /VERYSILENT /NORESTART /NOCANCEL /SP- /CLOSEAPPLICATIONS /RESTARTAPPLICATIONS /LOADINF="filename"
It's documented on:
Git: Silent-or-Unattended-Installation

You can download and use the Git Thumbdrive edition:
https://git-scm.com/download/win
look for the link under:
Git for Windows Portable ("thumbdrive edition")
e.g.: https://github.com/git-for-windows/git/releases/download/v2.23.0.windows.1/PortableGit-2.23.0-64-bit.7z.exe
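Since the portable edition is a self-extracting 7-Zip archive, it can also be unpacked non-interactively in a Dockerfile. A minimal sketch, assuming the archive accepts the standard 7-Zip SFX switches (-o target directory, -y suppress prompts) and that C:\ is the build working directory:
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.23.0.windows.1/PortableGit-2.23.0-64-bit.7z.exe' -OutFile PortableGit.exe; `
    Start-Process C:\PortableGit.exe -ArgumentList '-y', '-oC:\PortableGit' -Wait; `
    Remove-Item C:\PortableGit.exe
You would still need to add C:\PortableGit\cmd to the PATH, as in the MinGit answer above.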

Based on @Mariusz's answer, the following lines install git into a Windows image:
# copy inf file
COPY resources/git-install.inf c:\git-install.inf
# get Git install file
RUN Invoke-WebRequest 'https://github.com/git-for-windows/git/releases/download/v2.30.1.windows.1/Git-2.30.1-64-bit.exe' -OutFile 'git.exe'; `
# install Git
Start-Process "c:\git.exe" -ArgumentList '/SP-', '/VERYSILENT', '/NORESTART', '/NOCANCEL', '/CLOSEAPPLICATIONS', '/RESTARTAPPLICATIONS', '/LOADINF=git-install.inf' -Wait -NoNewWindow; `
# delete files
Remove-Item -Force git-install.inf; `
Remove-Item -Force git.exe;
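Note that the git-install.inf copied at the top is not written by hand; as described in the silent-installation answer above, it is recorded by running the installer interactively once on a normal machine:
Git-2.30.1-64-bit.exe /SAVEINF="git-install.inf"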

Related

Docker: Add a valid entrypoint for multiple python scripts

Hello, I have to build a Docker image for the following bioinformatics tool: https://github.com/CAMI-challenge/CAMISIM. Their dockerfile works but takes a long time to build, and I would like to build my own, slightly differently, to learn. I face an issue: there are several python scripts that I should be able to choose to run, not only a main one. If I add one script in particular as an ENTRYPOINT then the behavior isn't exactly what I should have.
The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
#COPY ./install_docker.sh ./
#RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
RUN apt-get update && \
    apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
    git clone https://github.com/CAMI-challenge/CAMISIM.git && \
    pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENTRYPOINT ["python3"]
ENV PATH="/CAMISIM/:${PATH}"
This yields:
sudo docker run camisim:latest metagenomesimulation.py --help
python3: can't open file 'metagenomesimulation.py': [Errno 2] No such file or directory
Adding that script as an ENTRYPOINT after python3 allows me to use it, with 2 drawbacks: I cannot use another script (I could build a second docker image, but that would be a bad solution), and it outputs:
ERROR: 0
usage: python metagenomesimulation.py configuration_file_path
#######################################
# MetagenomeSimulationPipeline #
#######################################
Pipeline for the simulation of a metagenome
optional arguments:
-h, --help show this help message and exit
-silent, --silent Hide unimportant Progress Messages.
-debug, --debug_mode more information, also temporary data will not be deleted
-log LOGFILE, --logfile LOGFILE
output will also be written to this log file
optional config arguments:
-seed SEED seed for random number generators
-s {0,1,2}, --phase {0,1,2}
available options: 0,1,2. Default: 0
0 -> Full run,
1 -> Only Comunity creation,
2 -> Only Readsimulator
-id DATA_SET_ID, --data_set_id DATA_SET_ID
id of the dataset, part of prefix of read/contig sequence ids
-p MAX_PROCESSORS, --max_processors MAX_PROCESSORS
number of available processors
required:
config_file path to the configuration file
You can see there is an error that shouldn't be there; it actually does not use the help flag. The original Dockerfile is:
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3 python3-pip perl libncursesw5
RUN perl -MCPAN -e 'install XML::Simple'
ADD requirements.txt /requirements.txt
RUN cat requirements.txt | xargs -n 1 pip install
ADD *.py /usr/local/bin/
ADD scripts /usr/local/bin/scripts
ADD tools /usr/local/bin/tools
ADD defaults /usr/local/bin/defaults
WORKDIR /usr/local/bin
ENTRYPOINT ["python3"]
It works but shows the error as above, so not much better. Said error is not present when using the tool outside of docker. Last time I made a Docker image, I just pulled the git repo and added the main .sh script as an ENTRYPOINT, and everything worked despite being more complex (see https://github.com/Louis-MG/Metadbgwas).
Why would I need ADD and to move everything? I added the git folder to the path, so why can't I find the scripts? How is it different from the Metadbgwas image?
In your first setup, you start in the image root directory / and run git clone to check out the repository into /CAMISIM. You never change the current directory, though, so when you try to run python3 metagenomesimulation.py --help it's looking in / and not /CAMISIM, hence the "not found" error.
You can fix this just by changing the current directory. At any point after you check out the repository, run
WORKDIR /CAMISIM
You should also delete the ENTRYPOINT line. For each of the scripts you could run as a top-level entry point, check two things:
1. Is it executable? If you ls -l metagenomesimulation.py, are there x's in the permission listing? If not, on the host system, run chmod +x metagenomesimulation.py and commit to source control. (Or you could RUN chmod ... in the Dockerfile if you really can't change the repository.)
2. Does it have a "shebang" line? The very first line of the script should be
#!/usr/bin/env python3
If both of these things are true, then you can just run ./metagenomesimulation.py without explicitly saying python3; since you add the directory to $PATH as well, you can probably run it without specifying the ./... file location.
(Probably deleting the ENTRYPOINT line on its own is enough, given that ENV PATH setting, but your script still might be confused by starting up in the wrong directory.)
The long "help" output just suggests to me that the script is expecting a configuration file name as a parameter and you haven't provided it, or else you've repeated the script name in both the entrypoint and command parts of the container command string.
In the end very little was required, and the original Dockerfile was correct; the same error is displayed anyway, which is due to the script itself.
What was missing was a link to the interpreter, so I could remove the ENTRYPOINT and actually interpret the script instead of having python look for it in its own path. The Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
USER root
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt-get update && \
    apt install -y git python3-pip libxml-simple-perl libncursesw5 && \
    git clone https://github.com/CAMI-challenge/CAMISIM.git && \
    pip3 install numpy ete3 biom-format biopython matplotlib joblib scikit-learn
ENV PATH="/CAMISIM:${PATH}"
Trying WORKDIR as suggested instead of the PATH yielded an error.
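With that image, and assuming the scripts keep their shebang lines and executable bits as discussed above, invocation looks like:
sudo docker run camisim:latest metagenomesimulation.py --help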

launching a python script in a terminal at boot and keeping it open if/when an exception occurs

I'm writing a data logging program that is intended to run when the Raspberry Pi boots. I'm using LXSession's autostart to launch a shell script that contains the command to launch my python program (my python script requires sudo).
While I continue to debug, I would like the terminal window to stay open if/when it encounters an error.
I had done this successfully once before but lost my work.
my autostart file is:
#!/bin/bash
#lxpanel --profile LXDE-pi
#pcmanfm --desktop --profile LXDE-pi
#lxterminal -e sudo sh /home/pi/launcher.sh
#xscreensaver -no-splash
my script file is:
#!/bin/sh
echo Script is running
sudo /usr/bin/python3 /home/pi/hms/hms5-1.py
I thought something like this (in the autostart file) would work, but no:
#lxterminal -e -hold sudo sh /home/pi/launcher.sh
A simple internet search spits out plenty of examples of how to execute commands at boot, even launching scripts, but nothing has helped so far. Thank you in advance.
So I rebuilt my Raspberry Pi and had to go through this again. After I was able to get it to work once, I followed my instructions from before, edited them to be clearer, and posted them here. NOTE - I think the mistake I made before was using sudo (sudo nano) when I should have just used nano.
Also note the python program I am launching is /home/pi/hms/hms2-v2.py
*** setting this up is a 4 step process ***
YOU MUST HAVE XTERM
STEP 1 - INSTALL XTERM:
sudo apt-get install xterm
STEP 2
read https://www.raspberrypi.org/forums/viewtopic.php?t=227191
FIRST CREATE autostart here: /home/pi/.config/lxsession/LXDE-pi/autostart
NOTE: THE FOLDERS BELOW /home/pi/.config/ MAY NOT EXIST; IF NOT, CREATE THEM EXACTLY AS ABOVE (see the sketch below). NOTE: the directory must be LXDE-pi NOT LXDE
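If the folders are missing, a single mkdir -p (run without sudo) creates the whole chain:
mkdir -p /home/pi/.config/lxsession/LXDE-pi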
Then edit the autostart file
by using nano ~/.config/lxsession/LXDE-pi/autostart
NOTE: DO NOT use sudo in the above command
put the following in the file:
#!/bin/bash
#lxpanel --profile LXDE-pi
#pcmanfm --desktop --profile LXDE-pi
sh /home/pi/launcher.sh
#xscreensaver -no-splash
Step 3
create the script (.sh) file: launcher.sh in the directory /home/pi
include the following in the file launcher.sh:
#!/bin/sh
echo starting script
xterm -T "HMS" -geometry 100x70+10+35 -hold -e sudo /usr/bin/python3 /home/pi/hms/hms2-v2.py
Step 4
make the .sh file executable with: sudo chmod +x launcher.sh
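Before rebooting, the whole chain can be sanity-checked from a desktop terminal (an optional test, not part of the setup):
sh /home/pi/launcher.sh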

Pass Windows environment variables to a dockerized python app

I am running a python application that reads two paths from Windows env vars and uses the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT env vars are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app, and, to my understanding, since my container will need access to those paths, I need to mount them using VOLUME during run. My dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets a None value for these paths and can't use the executable files. Is my approach correct, and beyond that, is it good dev practice?
See the docs; the switch for environment variables is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in the Dockerfile, you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same paths as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables; these will not exist unless you pass them from your host system to your container.
Once you've verified that your volumes mounted with -v are there correctly, you can try:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run this, you can consider putting them in your Dockerfile to save some keystrokes.
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y \
      poppler-utils \
      tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code in case these environment variables aren't set. The apt-get install command will put the executables in the system path inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables you could use the Dockerfile ENV directive
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or a docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers), so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
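For example, with the APT packages above installed, the run command would pass container-side values; the paths here are illustrative, not guaranteed install locations:
docker run -e POPPLER=/usr/bin -e TESSERACT=/usr/bin/tesseract ocr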

How can I make Python3.6, Red Hat Software Collection, persist after a reboot/logout/login?

I am trying to enable the rh-python36 software collection after reboot, so I can avoid calling "scl enable" all the time.
After unzipping and installing the package:
yum install -y tmp/rpms/*
I created a new file "python36.sh" under /etc/profile.d with the following script:
#!/bin/bash
source /opt/rh/rh-python36/enable
export X_SCLS="`scl enable rh-python36 'echo $X_SCLS'`"
After restarting or rebooting the instance, I am getting: "No such file or directoryenable"
I am using CentOS release 6.10 (Final)
If you have root privileges, add the line of code below to the .bash_profile file found in your root directory:
source /opt/rh/rh-python36/enable
Try this:
#!/bin/bash
source scl_source enable rh-python36
Reference Doc: https://access.redhat.com/solutions/527703
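Applied to the original question, the /etc/profile.d/python36.sh script would reduce to this sketch, based on the referenced doc:
#!/bin/bash
# scl_source performs the collection's environment setup itself
source scl_source enable rh-python36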

Activate Anaconda Python environment from makefile

I want to build my project's environment with a makefile and anaconda/miniconda, so I should be able to clone the repo and simply run make myproject.
myproject: build
build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes), but it doesn't work for me either:
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work, and I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL=/bin/bash
CONDAROOT = /my/path/to/miniconda2
.
.
install: sometarget
	source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
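As an alternative sketch: conda 4.6+ ships a conda run subcommand that sidesteps sourcing activate inside make entirely (CONDAROOT and the prefix layout are assumptions carried over from above):
CONDAROOT = /my/path/to/miniconda2

install: sometarget
	$(CONDAROOT)/bin/conda env create -p conda -f environment.yml
	# conda run executes a command inside the prefix environment without activating it
	$(CONDAROOT)/bin/conda run -p ./conda python --version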
You should use this; it's working for me at the moment:
report.ipynb : merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	jupyter nbconvert \
	--to notebook \
	--ExecutePreprocessor.kernel_name=python2 \
	--ExecutePreprocessor.timeout=3000 \
	--execute merged.ipynb \
	--output=$< $<" )
I had the same problem. Essentially the only solution is the one stated by 9000. I have a setup shell script inside which I set up the conda environment (source activate python2), then I call the make command. I experimented with setting up the environment from inside the Makefile with no success.
I have this line in my makefile:
installpy :
	./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda installation, to which I have write access. But this is not picked up by the make process. I don't understand why the environment set up in my shell script using 'source' is not visible in the make process; the source command is supposed to change the current shell. I just want to share this so that other people don't waste time trying to do this. I know autotools has a way of working with python, but the make program is probably limited in this respect.
My current solution is a shell script:
cat py2make.sh
#!/bin/sh
# the prefix should be changed to the target
# of installation or the pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me.
There was a solution for a similar situation, but it does not seem to work for me:
My modified Makefile segment:
installpy :
	( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
Not sure where I went wrong. If anyone has a better solution, please share it.
