How to install Python and Ansible in a Windows Docker container

I have Docker for Windows installed on my Windows 10 machine, and now I need to install Python and Ansible in my Docker container.
I found a few references for installing Python and Ansible on a Linux machine, but I could not find anything explaining how to install Python 3 and Ansible in a Windows 10 Docker container.
Once Python is installed I can try to install Ansible using pip, but I am not sure how to get started with the Python installation in the first place. In Docker I have installed Jenkins, and I want to run my Ansible playbooks from Jenkins. Kindly help. Thanks!

I build an Ansible image periodically, tracking the devel branch:
# syntax=docker/dockerfile:experimental
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
ENV PATH /ansible/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && \
    apt-get -y install \
        git \
        openssh-client \
        python3.7 \
        python3.7-dev \
        python3-pip \
        python3-setuptools \
        python3-pygit2 \
        build-essential \
        libssl-dev \
        libffi-dev \
        man
RUN groupadd -g 1000 ansible && \
    useradd -u 1000 -g ansible -d /home/ansible -m -k /etc/skel -s /bin/bash ansible
RUN mkdir -p -m 0600 ~/.ssh && \
    ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone -b devel https://github.com/ansible/ansible.git /ansible && \
    chown -R 1000:1000 /ansible
RUN python3 -m pip install -r /ansible/requirements.txt
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN echo '. /ansible/hacking/env-setup' >> /home/ansible/.bashrc
ENTRYPOINT ["/ansible/bin/ansible"]
Note:
- Ansible is not intended to be run from a Windows control server - you can use Linux containers on Windows instead
- this example uses the Docker BuildKit build enhancements
- the image is configured following the common environment setup for developing Ansible modules
Build the image: DOCKER_BUILDKIT=1 docker build --rm --network host -t so:5776957 .
Run the container: docker run --rm --network host -e ANSIBLE_HOME=/ansible -e PYTHONPATH=/ansible/lib so:5776957 localhost -m ping
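Note that the RUN --mount=type=ssh line only matters if you swap the clone for an SSH URL (e.g. git@github.com:ansible/ansible.git); in that case you would also forward your SSH agent at build time. A possible invocation, not part of the original answer:
DOCKER_BUILDKIT=1 docker build --ssh default --rm --network host -t so:5776957 .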

Instead of installing everything in the container yourself, you may try an existing Docker image which already has Python and Ansible installed. If you still want to build it yourself, you can look at the Dockerfile in the GitHub repo linked from the image page.
https://hub.docker.com/r/zeeshanjamal16/ansibledocker
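For example, assuming the image exposes ansible on its PATH (untested here), you could pull it and check the version directly:
docker pull zeeshanjamal16/ansibledocker
docker run --rm -it zeeshanjamal16/ansibledocker ansible --version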

Related

Can't connect to ActiveMQ Console running in Docker container

I made a Dockerfile to run an ActiveMQ service from, and when I try to reach the console from the host machine at http://127.0.0.1:8161/ in my web browser, Google Chrome says "127.0.0.1 didn't send any data." This is when running the Docker image with docker run -p 61613:61613 -p 8161:8161 -it service_test bash.
However, when I run it using docker run --net host -it service_test bash, Google Chrome says "127.0.0.1 refused to connect.", which leads me to believe the --net flag changes something, but I'm not sure why it still can't connect. Maybe a port-forwarding issue?
My Dockerfile is as follows:
FROM <...>/library/ubuntu:20.04
ADD <proxy certs>
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends software-properties-common && \
    update-ca-certificates && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
        curl \
        git \
        python3.8 \
        python3.8-venv \
        python3.8-dev \
        openjdk-11-jdk \
        make \
    && apt-get clean && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /opt
RUN <point pip to certs>
RUN echo "timeout = 300" >> /etc/pip.conf
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
    python3.8 get-pip.py
# Run python in a venv
ENV VIRTUAL_ENV=/opt/venv
RUN python3.8 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Update pip before continuing
RUN pip install --upgrade pip
# Get wheel
RUN pip install wheel
# add extra index url
RUN echo "extra-index-url = <url>" >> /etc/pip.conf
# Install ActiveMQ
ENV JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
ENV PATH="$JAVA_HOME/bin:$PATH"
RUN mkdir -p /opt/amq
RUN curl -kL \
        http://archive.apache.org/dist/activemq/5.16.3/apache-activemq-5.16.3-bin.tar.gz \
        >> /opt/amq/apache-activemq-5.16.3-bin.tar.gz && \
    tar -xzf /opt/amq/apache-activemq-5.16.3-bin.tar.gz --directory /opt/amq
ENV PATH="/opt/amq/apache-activemq-5.16.3/bin:$PATH"
# Expose ports 61613 and 8161 to other containers
EXPOSE 61613
EXPOSE 8161
COPY <package>.whl <package>.whl
RUN pip install <package>
Note: some sensitive info was removed; anything surrounded by <> has been hidden.
For context, I am running ActiveMQ inside the container using activemq console, and I am trying to connect to it from my host OS using Google Chrome.
Got it to work!
For those having the same issue: I resolved it by changing the bind address in jetty.xml from 127.0.0.1 to 0.0.0.0. I am now able to connect to my containerized AMQ instance from my host OS.
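One way to bake that fix into the image (not part of the original answer, and the path assumes the install layout from the Dockerfile above) is to rewrite the Jetty bind address right after unpacking ActiveMQ:
RUN sed -i 's/127\.0\.0\.1/0.0.0.0/' /opt/amq/apache-activemq-5.16.3/conf/jetty.xml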

Changing Python version in Docker

I am trying to run this repo in Docker: https://github.com/facebookresearch/detectron2/tree/main/docker
but when I docker compose it, I receive this error:
ERROR: Package 'detectron2' requires a different Python: 3.6.9 not in '>=3.7'
The default Python version on my machine is 3.10, but I don't know why Docker is trying to run it on Python 3.6.9.
Is there a way for me to change it to a higher version of Python while using the following Dockerfile?
FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
# use an older system (18.04) to avoid opencv incompatibility (issue#3524)
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
    python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python
# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser
ENV PATH="/home/appuser/.local/bin:${PATH}"
RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \
    python3 get-pip.py --user && \
    rm get-pip.py
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install --user tensorboard cmake # cmake from apt-get is too old
RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
RUN pip install --user -e detectron2_repo
# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo
# run detectron2 under user "appuser":
# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg
# python3 demo/demo.py \
#   --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
#   --input input.jpg --output outputs/ \
#   --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
You can use pyenv: https://github.com/pyenv/pyenv
Just google "docker pyenv container" and you will get some entries like: https://gist.github.com/jprjr/7667947
If you follow the gist you can see how it has been updated; it is very easy to move to the latest Python that pyenv supports - anything from 2.2 to 3.11.
The only drawback is that the container becomes quite large, because it holds all the glibc development tools and libraries needed to compile CPython, but that often helps when you need modules without wheels, since the toolchain to compile them is already there.
Below is a minimal pyenv Dockerfile. Just change PYTHONVER, or set it with --build-arg, to any Python version pyenv supports (pyenv install -l):
FROM ubuntu:22.04
ARG MYHOME=/root
ENV MYHOME ${MYHOME}
ARG PYTHONVER=3.10.5
ENV PYTHONVER ${PYTHONVER}
ARG PYTHONNAME=base
ENV PYTHONNAME ${PYTHONNAME}
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y locales wget git curl zip vim apt-transport-https tzdata language-pack-nb language-pack-nb-base manpages \
        build-essential libjpeg-dev libssl-dev xvfb zlib1g-dev libbz2-dev libreadline-dev libreadline6-dev libsqlite3-dev tk-dev libffi-dev libpng-dev libfreetype6-dev \
        libx11-dev libxtst-dev libfontconfig1 lzma lzma-dev
RUN git clone https://github.com/pyenv/pyenv.git ${MYHOME}/.pyenv && \
    git clone https://github.com/yyuu/pyenv-virtualenv.git ${MYHOME}/.pyenv/plugins/pyenv-virtualenv && \
    git clone https://github.com/pyenv/pyenv-update.git ${MYHOME}/.pyenv/plugins/pyenv-update
SHELL ["/bin/bash", "-c", "-l"]
COPY ./.bash_profile /tmp/
RUN cat /tmp/.bash_profile >> ${MYHOME}/.bashrc && \
    cat /tmp/.bash_profile >> ${MYHOME}/.bash_profile && \
    rm -f /tmp/.bash_profile && \
    source ${MYHOME}/.bash_profile && \
    pyenv install ${PYTHONVER} && \
    pyenv virtualenv ${PYTHONVER} ${PYTHONNAME} && \
    pyenv global ${PYTHONNAME}
and the pyenv config, to be saved as .bash_profile in the Dockerfile directory:
# profile for pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv init --path)"
eval "$(pyenv virtualenv-init -)"
build with:
docker build -t pyenv:3.10.5 .
This will build the image, but as said it is quite big:
docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
pyenv        3.10.5   64a4b91364d4   2 minutes ago   1.04GB
It is very easy to test any Python version just by changing PYTHONVER:
docker run -ti pyenv:3.10.5 /bin/bash
(base) root@968fd2178c8a:/# python --version
Python 3.10.5
(base) root@968fd2178c8a:/# which python
/root/.pyenv/shims/python
If I build with docker build -t pyenv:3.12-dev --build-arg PYTHONVER=3.12.dev ., or change PYTHONVER in the Dockerfile:
docker run -ti pyenv:3.12-dev /bin/bash
(base) root@c7245ea9f52e:/# python --version
Python 3.12.0a0
This is an open issue with facebookresearch/detectron2. The developers updated the base Python requirement from 3.6+ to 3.7+ with commit 5934a14 last week but didn't modify the Dockerfile.
I've created a Dockerfile based on Nvidia CUDA's CentOS8 image (rather than Ubuntu) that should work.
FROM nvidia/cuda:11.1.1-cudnn8-devel-centos8
RUN cd /etc/yum.repos.d/ && \
    sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
    sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-* && \
    dnf check-update; dnf install -y ca-certificates python38 python38-devel git sudo which gcc-c++ mesa-libGL && \
    dnf clean all
RUN alternatives --set python /usr/bin/python3 && alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g wheel
RUN echo '%wheel ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser
ENV PATH="/home/appuser/.local/bin:${PATH}"
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
ARG CXX="g++"
RUN pip install --user tensorboard ninja cmake opencv-python opencv-contrib-python # cmake from apt-get is too old
RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
RUN pip install --user -e detectron2_repo
# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo
# run detectron2 under user "appuser":
# curl -o input.jpg http://images.cocodataset.org/val2017/000000439715.jpg
# python3 demo/demo.py \
#   --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
#   --input input.jpg --output outputs/ \
#   --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
Alternatively - this is untested, as the following images don't work on my machine (I run arm64), so I can't debug...
In the original Dockerfile, changing your FROM line to the one below might resolve it, but I haven't verified this (and the image mentioned in the issue, pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel, might work as well):
FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04
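A quick way to check whether a rebuilt image actually picks up a newer interpreter (the detectron2:py38 tag below is just a placeholder):
docker build -t detectron2:py38 .
docker run --rm detectron2:py38 python3 --version   # Ubuntu 20.04 should report Python 3.8.x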

Visibility of Python output from bash

I have Python code that uses tqdm.
The bash script builds the Docker image and runs the container, however I can't see any output from the container (in the CLI).
#!/bin/sh
docker build . -t traffic
docker run -d --name traffic_con traffic
docker wait traffic_con
docker cp -a traffic_con:/usr/TrafficMannager/out/data/. ./out/data/
docker rm /traffic_con
docker rmi /traffic
I've tried to run the container in interactive mode (-it), however it throws an error.
[EDIT]
Dockerfile:
FROM cityflowproject/cityflow
# Create a folder we'll work in
WORKDIR /usr/TrafficMannager
# Upgrade installed packages
RUN apt-get update && apt-get upgrade -y && apt-get clean
# Install vim to open & edit code\text files
RUN apt-get install -y vim
# Install all Python code dependencies
RUN pip install gym && \
    pip install numpy && \
    pip install IPython && \
    pip install torch && \
    python -m pip install python-dotenv && \
    pip install tqdm
COPY . .
CMD chmod u+x script/container_instructions.sh; ./script/container_instructions.sh
container_instructions.sh:
#!/bin/sh
pip install lib/extern/CityFlow/.
python main.py
You run the Docker container in the background, then immediately docker wait for it. If you run the container in the foreground instead, you'll see its output on stdout, and the docker run command will complete when the container exits.
docker run --name traffic_con traffic # without -d
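If you do want to keep the container detached, another option (not from the original answer) is to follow its logs while it runs:
docker logs -f traffic_con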
Given the wrapper script you show, you may find it much easier to run this setup in a Python virtual environment. Ignore all the Docker parts and run:
python3 -m venv venv
./venv/bin/pip install gym numpy IPython torch python-dotenv tqdm lib/extern/CityFlow
./venv/bin/python3 main.py
The script will directly write to ./out/data on the host system, without the long-winded privileged script to copy data out.
If you really do need a container here, you can also mount the output directory into the container to avoid the manual copy step.
#!/bin/sh
docker build . -t traffic
docker run --rm -v "$PWD/out/data:/usr/TrafficMannager/out/data" traffic
docker rmi traffic

Make Docker container use newest version of Python installed

I have a couple of Python modules that I use inside my Docker container, and they require a higher version of Python than what's being used. I install Python and the modules using:
RUN apt-get update || : && apt-get install python3 -y
RUN apt-get install -y python3-pip
COPY requirements.txt /project
RUN pip3 install -r requirements.txt
I expected to be using the latest version of Python in my Docker container, but when I go into its shell and run python3 --version it comes back as 3.4.2, which is incredibly old for my program. How do I make the default Python the latest one I installed above, without messing up the system-level Python?
The base image I'm using for the Docker container is node:9-slim.
I don't think you can find a prebuilt python3.9 package on a Debian 8 distribution, as your environment is pretty old.
The only solution is to build Python 3.9 from source in your base container. A full workable Dockerfile looks like this:
FROM node:9-slim
RUN apt update; \
    apt install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev; \
    wget https://www.python.org/ftp/python/3.9.7/Python-3.9.7.tgz; \
    tar -zxvf Python-3.9.7.tgz; \
    cd Python-3.9.7; \
    ./configure --prefix=/usr/local/python3; \
    make && make install; \
    ln -sf /usr/local/python3/bin/python3.9 /usr/bin/python3; \
    ln -sf /usr/local/python3/bin/pip3.9 /usr/bin/pip3
Verify it:
$ docker build -t myimage:1 .
$ docker run --rm -it myimage:1 python3 --version
Python 3.9.7
$ docker run --rm -it myimage:1 pip3 --version
pip 21.2.3 from /usr/local/python3/lib/python3.9/site-packages/pip (python 3.9)

Is $DISPLAY set properly? Running a wxPython Phoenix GUI in a Docker container

I would like to dockerize a GUI written with wxPython Phoenix in order to have the GUI appear on the host when running the docker image.
Below is a basic wxPython Phoenix GUI and the Dockerfile that creates an image with Ubuntu 18.04, Python 3.7.5 and wxPython Phoenix.
When running the image, it returns the following message:
docker build -t simple-gui:latest .
docker run -it simple-gui /bin/bash
root@97229a17f2cd:~/python# ./simple_gui.py
Unable to access the X Display, is $DISPLAY set properly?
I understand I have to pass the address of the host's X server into the Docker container so that wxPython Phoenix can use it, but I'm not sure how to do that.
simple_gui.py (from the wxPython Phoenix wiki):
#!/usr/bin/env python3.7
import wx
app = wx.App(False)
frame = wx.Frame(None, wx.ID_ANY, "Hello World")
frame.Show(True)
app.MainLoop()
Dockerfile:
FROM ubuntu:18.04
# Install dependencies for Python and wxPython Phoenix
RUN apt update && apt install -y \
        libwebkitgtk-3.0-dev \
        libgtk-3-dev \
        libsm-dev \
        freeglut3 \
        freeglut3-dev \
        libnotify-dev \
        libgstreamer1.0-dev \
        libgstreamer-plugins-base1.0-dev \
        dpkg-dev \
        build-essential \
        python3.7-dev \
        libjpeg-dev \
        libtiff-dev \
        libsdl1.2-dev \
        software-properties-common \
    # Install Python 3.7 and pip latest versions
    && add-apt-repository ppa:deadsnakes/ppa \
    && apt install -y python3.7 python3-pip \
    && python3.7 -m pip install -U --no-cache-dir pip \
    # Install wx
    && python3.7 -m pip install -U --no-cache-dir -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-18.04 wxPython
# Copy files
COPY simple_gui.py /root/python/
WORKDIR /root/python
ENTRYPOINT ["./simple_gui.py"]
As said in the question, the DISPLAY environment variable is used inside the container to store the host's X server address. Depending on the host, it should take different values.
UNIX host (Linux/MacOS):
UNIX already uses an X server for display purposes.
Set a DISPLAY variable such as DISPLAY=:0.0.
Run your image with: docker run -e DISPLAY=$DISPLAY simple-gui
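On Linux you typically also have to share the host's X11 socket with the container, otherwise setting DISPLAY alone is not enough; a minimal sketch, not part of the original answer (the xhost step loosens X access control for local clients, so revert it when done):
xhost +local:
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix simple-gui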
Windows host:
Windows does not use an X server; the Desktop Window Manager handles display instead.
You need to install an X server for Windows - a popular choice is VcXsrv.
Set the DISPLAY variable to DISPLAY=<HOST_IP>:0.0; the value for <HOST_IP> can be found using ipconfig - it's the one tagged DockerNAT.
Run your image with: docker run -e DISPLAY=$DISPLAY simple-gui (or DISPLAY=%DISPLAY%, depending on your command line...)
Sources:
Run GUI app in linux docker container on a windows host
If you are a Mac user, you probably would have to install XQuartz (X server for MacOS), launch it and then run your container with -e DISPLAY=host.docker.internal:0.
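Not part of the original answer, but on macOS you usually also have to allow network clients in XQuartz (Preferences > Security > "Allow connections from network clients", then restart XQuartz) and whitelist the loopback address before the container can reach the X server:
xhost + 127.0.0.1
docker run -e DISPLAY=host.docker.internal:0 simple-gui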
