I have the following in my Dockerfile:
...
USER $user
# Set default python version to 3
RUN alias python=python3
RUN alias pip=pip3
WORKDIR /app
# Install local dependencies
RUN pip install --requirement requirements.txt --user
When building the image, I get the following:
Step 13/22 : RUN alias pip=pip3
---> Running in dc48c9c84c88
Removing intermediate container dc48c9c84c88
---> 6c7757ea2724
Step 14/22 : RUN pip install --requirement requirements.txt --user
---> Running in b829d6875998
/bin/sh: pip: command not found
Why is pip not recognized if I set an alias right above it?
PS: I do not want to use .bashrc for loading aliases.
The problem is that the alias only exists for that intermediate layer in the image. Try the following:
FROM ubuntu
RUN apt-get update && apt-get install python3-pip -y
RUN alias python=python3
Testing here:
❰mm92400❙~/sample❱✔≻ docker build . -t testimage
...
Successfully tagged testimage:latest
❰mm92400❙~/sample❱✔≻ docker run -it testimage bash
root@78e4f3400ef4:/# python
bash: python: command not found
root@78e4f3400ef4:/#
This is because a new shell session is started for each RUN instruction, so the alias is lost in the following layers.
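You can see the same failure outside Docker: each RUN line is executed by its own sh -c process, and shell state such as an alias dies when that process exits (a minimal illustration, not from the original answer):
sh -c 'alias pip=pip3'   # alias defined here; this shell then exits
sh -c 'pip --version'    # brand-new shell: the alias no longer exists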
To keep a stable alias, you can use a symlink instead, as the official Python image does:
FROM ubuntu
RUN apt-get update && apt-get install python3-pip -y
# as a quick note, for a proper install of python, you would
# use a python base image or follow a more official install of python,
# changing this to RUN cd /usr/local/bin
# this just replicates your issue quickly
RUN cd "$(dirname $(which python3))" \
&& ln -s idle3 idle \
&& ln -s pydoc3 pydoc \
&& ln -s python3 python \ # this will properly alias your python
&& ln -s python3-config python-config
RUN python -m pip install -r requirements.txt
Note the use of the python3-pip package to bundle pip. When calling pip, it's best to use the python -m pip syntax, as it ensures that the pip you are calling is the one tied to your installation of python:
python -m pip install -r requirements.txt
I managed to do that by setting aliases in the /root/.bashrc file.
I followed this example to get an idea of how to do that.
PS: I am using that in a jenkins/jenkins:lts container, so I looked around, and as @C.Nivs said:
The problem is that the alias only exists for that intermediate layer in the image
So in order to do that I had to find a way to add the following commands:
ENV FLAG='--kubeconfig /root/.kube/config'
RUN echo "alias helm='helm $FLAG'" >>/root/.bashrc
CMD /bin/bash -c "source /root/.bashrc && /usr/local/bin/jenkins.sh"
For the CMD part, check the image you are using first so you don't interrupt its normal behaviour.
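If you're not sure what the image normally runs, docker inspect will show its entrypoint and default command before you override them:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' jenkins/jenkins:lts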
For the last couple of days I've struggled to install dbt on my Windows 10 box. It seems the best way is to emulate Linux, with WSL.
So, in order to help others save their time and a few neurons, I decided to post a quick recipe in this thread. I've summarized the whole process in the steps below, together with links to complete tutorials.
Enable WSL
https://learn.microsoft.com/en-us/windows/wsl/install
Install Linux Ubuntu
https://ubuntu.com/tutorials/install-ubuntu-on-wsl2-on-windows-10#1-overview
Install Python
As python3 comes with Ubuntu by default, you won't need to do anything in this step; see the quick check below. Otherwise, you can always go to:
https://packaging.python.org/en/latest/tutorials/installing-packages/#requirements-for-installing-packages
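A quick check inside your Ubuntu (WSL) shell confirms it:
python3 --version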
Install Pip
https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment
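On Ubuntu, one quick route (the linked guide covers alternatives) is to install pip from apt and verify it with the module form:
sudo apt install python3-pip
python3 -m pip --version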
Install VirtualEnv
https://docs.python.org/3/library/venv.html
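Putting the last two steps together, a typical sequence looks like this (a sketch: dbt-env is just an arbitrary environment name, and you would add the adapter package for your warehouse):
python3 -m venv dbt-env
source dbt-env/bin/activate
python -m pip install dbt-core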
I hope it helps. If not you can always post a message in this thread!
Best wishes,
I
Another way you can run dbt-core on Windows is with Docker. I'm currently on Windows 10 and use a Docker image for my dbt project without needing WSL. Below are my Dockerfile and requirements.txt with dbt-core and dbt-snowflake, but feel free to swap in the packages you need.
In my repo, my dbt project is in a folder at the root level named dbt.
requirements.txt
dbt-core==1.1.0
dbt-snowflake==1.1.0
Dockerfile
FROM public.ecr.aws/docker/library/python:3.8-slim-buster
COPY . /dbt
# Update and install system packages
RUN apt-get update -y && \
apt-get install --no-install-recommends -y -q \
git libpq-dev python-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install dbt
RUN pip install -U pip
RUN pip install -r dbt/requirements.txt
# TEMP FIX due to dependency updates. See https://github.com/dbt-labs/dbt-core/issues/4745
RUN pip install --force-reinstall MarkupSafe==2.0.1
# Install dbt dependencies
WORKDIR /dbt
RUN dbt deps
# Specify profiles directory
ENV DBT_PROFILES_DIR=.dbt
# Expose port for dbt docs
EXPOSE 8080
And then you can build and run it (I personally put both of these commands in a dbt_run.sh file and run with bash dbt_run.sh):
docker build -t dbt_image .
docker run \
-p 8080:8080 \
--env-file .env \
-it \
--mount type=bind,source="$(pwd)",target=/dbt \
dbt_image bash
If you make changes to your dbt project while the container is running they will be reflected in the container which makes it great for developing locally. Hope this helps!
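For example, once inside the container you can serve the docs on the exposed port (assuming your profiles.yml is configured for your warehouse):
dbt docs generate
dbt docs serve --port 8080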
I am trying to run this repo in Docker: https://github.com/facebookresearch/detectron2/tree/main/docker
but when I want to docker compose it, I receive this error:
ERROR: Package 'detectron2' requires a different Python: 3.6.9 not in '>=3.7'
The default version of Python I am using is 3.10, but I don't know why Docker is trying to run it with Python 3.6.9.
Is there a way for me to change it to a higher version of python while running the following dockerfile?
FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
# use an older system (18.04) to avoid opencv incompatibility (issue#3524)
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python
# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser
ENV PATH="/home/appuser/.local/bin:${PATH}"
RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \
python3 get-pip.py --user && \
rm get-pip.py
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install --user tensorboard cmake # cmake from apt-get is too old
RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
RUN pip install --user -e detectron2_repo
# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo
# run detectron2 under user "appuser":
# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg
# python3 demo/demo.py \
#--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
#--input input.jpg --output outputs/ \
#--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
You can use pyenv: https://github.com/pyenv/pyenv
Just google "docker pyenv container"; that will give you some entries like: https://gist.github.com/jprjr/7667947
If you follow the gist you can see how it has been updated; it's very easy to update to the latest Python that pyenv supports, anything from 2.2 to 3.11.
The only drawback is that the container becomes quite large, because it holds all the glibc development tools and libraries needed to compile CPython. On the other hand, that toolchain often helps when you need modules that ship without wheels, since everything needed to compile them is already there.
Below is a minimal pyenv Dockerfile. Just change PYTHONVER, or set it with --build-arg, to any Python version pyenv supports (see pyenv install -l):
FROM ubuntu:22.04
ARG MYHOME=/root
ENV MYHOME ${MYHOME}
ARG PYTHONVER=3.10.5
ENV PYTHONVER ${PYTHONVER}
ARG PYTHONNAME=base
ENV PYTHONNAME ${PYTHONNAME}
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y locales wget git curl zip vim apt-transport-https tzdata language-pack-nb language-pack-nb-base manpages \
build-essential libjpeg-dev libssl-dev xvfb zlib1g-dev libbz2-dev libreadline-dev libreadline6-dev libsqlite3-dev tk-dev libffi-dev libpng-dev libfreetype6-dev \
libx11-dev libxtst-dev libfontconfig1 lzma lzma-dev
RUN git clone https://github.com/pyenv/pyenv.git ${MYHOME}/.pyenv && \
git clone https://github.com/yyuu/pyenv-virtualenv.git ${MYHOME}/.pyenv/plugins/pyenv-virtualenv && \
git clone https://github.com/pyenv/pyenv-update.git ${MYHOME}/.pyenv/plugins/pyenv-update
SHELL ["/bin/bash", "-c", "-l"]
COPY ./.bash_profile /tmp/
RUN cat /tmp/.bash_profile >> ${MYHOME}/.bashrc && \
cat /tmp/.bash_profile >> ${MYHOME}/.bash_profile && \
rm -f /tmp/.bash_profile && \
source ${MYHOME}/.bash_profile && \
pyenv install ${PYTHONVER} && \
pyenv virtualenv ${PYTHONVER} ${PYTHONNAME} && \
pyenv global ${PYTHONNAME}
and the pyenv config to be saved as .bash_profile in Dockerfile directory:
# profile for pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv init --path)"
eval "$(pyenv virtualenv-init -)"
build with:
docker build -t pyenv:3.10.5 .
This will build the image, but as said it is quite big:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
pyenv 3.10.5 64a4b91364d4 2 minutes ago 1.04GB
It is very easy to test any Python version by changing only PYTHONVER:
docker run -ti pyenv:3.10.5 /bin/bash
(base) root@968fd2178c8a:/# python --version
Python 3.10.5
(base) root@968fd2178c8a:/# which python
/root/.pyenv/shims/python
if I build with docker build -t pyenv:3.12-dev --build-arg PYTHONVER=3.12-dev . or change PYTHONVER in the Dockerfile:
docker run -ti pyenv:3.12-dev /bin/bash
(base) root@c7245ea9f52e:/# python --version
Python 3.12.0a0
This is an open issue with facebookresearch/detectron2. The developers updated the base Python requirement from 3.6+ to 3.7+ with commit 5934a14 last week but didn't modify the Dockerfile.
I've created a Dockerfile based on Nvidia CUDA's CentOS8 image (rather than Ubuntu) that should work.
FROM nvidia/cuda:11.1.1-cudnn8-devel-centos8
RUN cd /etc/yum.repos.d/ && \
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-* && \
dnf check-update; dnf install -y ca-certificates python38 python38-devel git sudo which gcc-c++ mesa-libGL && \
dnf clean all
RUN alternatives --set python /usr/bin/python3 && alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
# create a non-root user
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g wheel
RUN echo '%wheel ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser
ENV PATH="/home/appuser/.local/bin:${PATH}"
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
ARG CXX="g++"
RUN pip install --user tensorboard ninja cmake opencv-python opencv-contrib-python # cmake from apt-get is too old
RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
RUN pip install --user -e detectron2_repo
# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/appuser/detectron2_repo
# run detectron2 under user "appuser":
# curl -o input.jpg http://images.cocodataset.org/val2017/000000439715.jpg
# python3 demo/demo.py \
#--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
#--input input.jpg --output outputs/ \
#--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
Alternatively (untested, as the following images don't work on my machine because I run arm64, so I can't debug):
In the original Dockerfile, changing your FROM line to the one below might resolve it, but I haven't verified this; the image mentioned in the issue (pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel) might work as well.
FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04
I'm trying to update an existing Dockerfile to switch from python3.5 to python3.8, previously it was creating a symlink for python3.5 and pip3 like this:
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3 /usr/bin/python
I've updated the Dockerfile to install python3.8 from the deadsnakes PPA:
apt-get install python3-pip python3.8-dev python3.8-distutils python3.8-venv
if I remove python3-pip, it complains about gcc
C compiler or Python headers are not installed on this system. Try to run: sudo apt-get install gcc python3-dev
With these installations in place, I'm trying to update the existing symlink creation to something like this:
RUN ln -s /usr/bin/pip3 /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/pip /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/python3.8 /usr/bin/python3
it fails, saying
ln: failed to create symbolic link '/usr/bin/python3': File exists
which I assume fails because python3 points to python3.6.
if I try RUN ln -s /usr/bin/python3.8 /usr/bin/python, it doesn't complain about the symlink and the image gets built successfully, but it fails later while installing requirements (we use Makefile targets to install dependencies inside the container using pip and pip-sync):
ERROR: Cannot uninstall 'python-apt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
which I assume because python-apt gets installed as part of the default python3.6 installation and python3.8 pip can't uninstall it.
PS: my Dockerfile image is based on Ubuntu 18.04, which comes with python3.6 as default.
How can I properly switch the Dockerfile/image from python3.5 to python3.8, so that I can later use pip directly and have it point to python3.8's pip?
Replacing the system python in this way is usually not a good idea (as it can break operating-system-level programs which depend on those executables) -- I go over that a little bit in this video I made "why not global pip / virtualenv?"
A better way is to create a prefix and put that on the PATH earlier (this allows system executables to continue to work, but bare python / python3 / etc. will use your other executable)
in the case of deadsnakes which it seems like you're using, something like this should work:
FROM ubuntu:bionic
RUN : \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository -y ppa:deadsnakes \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
python3.8-venv \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& :
RUN python3.8 -m venv /venv
ENV PATH=/venv/bin:$PATH
the ENV line is the key here: it puts the virtualenv at the beginning of the PATH
$ docker build -t test .
...
$ docker run --rm -ti test bash -c 'which python && python --version && which pip && pip --version'
/venv/bin/python
Python 3.8.5
/venv/bin/pip
pip 20.1.1 from /venv/lib/python3.8/site-packages/pip (python 3.8)
disclaimer: I'm the maintainer of deadsnakes
Why not just build a new image from ubuntu:18.04 with the desired config you need?
Like this:
FROM ubuntu:18.04
RUN apt update && apt install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa && apt update && apt install python3.8 -y
RUN ln -s /usr/bin/pip3 /usr/bin/pip && \
ln -s /usr/bin/python3.8 /usr/bin/python
You can install and enable your python version.
# Python 3.8 and pip3
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get install -y python3.8
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN apt-get install -y python3-pip
Sometimes switching to a clean new OS (like a fresh Ubuntu base image) is not favorable, because the current image is too complicated. For example, my base image is FROM ufoym/deepo:all-cu101.
So, to change the existing Python (3.6) to Python 3.8, I added these two lines:
RUN apt-get update -qq && apt-get install -y -qq python3.8
RUN rm /usr/bin/python && rm /usr/bin/python3 && ln -s /usr/bin/python3.8 /usr/bin/python && ln -s /usr/bin/python3.8 /usr/bin/python3 \
&& rm /usr/local/bin/python && rm /usr/local/bin/python3 && ln -s /usr/bin/python3.8 /usr/local/bin/python && ln -s /usr/bin/python3.8 /usr/local/bin/python3 \
&& apt-get install -y python3-pip python-dev python3.8-dev && python3 -m pip install pip --upgrade
The first step installs python3.8;
The second step points the python and python3 symlinks at python3.8.
After that, install python3-pip and upgrade it to make sure pip is using the current Python 3.8 environment.
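A quick sanity check afterwards (a hypothetical session; the exact patch version will vary):
python --version           # should now report Python 3.8.x
python3 -m pip --version   # the module form guarantees pip matches this interpreter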
I am trying to install the remaining dependencies for Algo VPN using Terminal via step 4 on https://github.com/trailofbits/algo
I believe I was in the folder above the one I was supposed to be in the last time I ran this, and I used the sudo command. So now I think there is an issue with the permissions that I don't know how to fix. It could be a simple fix, but I just don't want to create any more mess with the permissions.
Here is the code that I am running in terminal
$ python -m virtualenv --python=`which python2` env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
I receive the error -
Running virtualenv with interpreter /usr/bin/env
env: /Users/mark/Library/Python/2.7/lib/python/site-packages/virtualenv.py: Permission denied
Below is the code that I was using to try to install the remaining dependencies.
$ python -m virtualenv --python=`which python2` env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
When I ran this about a week ago, I was able to get it to work, and I believe it looked like this. I thought I just left it with no version of Python, believing it would default to the current version, and it worked.
$ python -m virtualenv --python=env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
So I decided to try
$ python -m virtualenv --python=python2.7 env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
And it worked.
So maybe I had an extra space so it looked like
$ python -m virtualenv --python= env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
or maybe I did in fact need the python2.7
$ python -m virtualenv --python=python2.7 env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
I will note that I used Terminal to show hidden files by
defaults write com.apple.finder AppleShowAllFiles YES
and then I navigated in Finder to
/Users/mark/Library/Python/2.7/lib/python/site-packages/virtualenv.py
and it showed that I had the correct permissions. So I don't think it had to do with using sudo previously.
I'm not sure what I'm missing here. The canonicaliser_api directory contains my code and a requirements.txt.
FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install virtualenv uwsgi
ADD canonicaliser_api /home/ubuntu
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD service nginx start
When I'm trying to build it, everything is fine until step 11:
Step 11 : RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
---> Running in 7aae5bd92b70
/home/ubuntu/canonicaliser_api/venv/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Could not open requirements file: [Errno 2] No such file or directory: '/home/ubuntu/canonicaliser_api/requirements.txt'
The command '/bin/sh -c source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt' returned a non-zero code: 1
But this makes no sense; I have added the whole code directory in the Dockerfile via ADD. Am I missing something here?
bash-3.2$ ls canonicaliser_api/requirements.txt
canonicaliser_api/requirements.txt
bash-3.2$
The usage is: ADD [source directory or URL] [destination directory]
You need to add the folder name to the destination:
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
You have to be careful when copying directories, especially when the destination directory doesn't exist. In short, this won't work:
ADD canonicaliser_api /home/ubuntu
But this should:
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
In general, it's better to avoid the ADD instruction and use COPY instead. In this case, it's just a direct replacement.
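In this Dockerfile that's a one-line change:
COPY canonicaliser_api /home/ubuntu/canonicaliser_api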
In future, a way to debug things like this is to take the last image that was successfully built (in this case, the one from the ADD line) and start a container from it. Then you can try running the problematic instruction and figure out what's going wrong.
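For example, the classic builder prints an intermediate image ID after each successful step, so you can open a shell in the image produced by the ADD step (the ID below is a placeholder for the one in your build output):
docker run --rm -it <image-id> bash
ls /home/ubuntu    # inspect what ADD actually produced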