Docker: How to source a virtualenv and install requirements.txt? - python

I'm not sure what I'm missing here. The canonicaliser_api directory contains my code and a requirements.txt.
FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install virtualenv uwsgi
ADD canonicaliser_api /home/ubuntu
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD service nginx start
When I try to build it, everything is fine until step 11:
Step 11 : RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
---> Running in 7aae5bd92b70
/home/ubuntu/canonicaliser_api/venv/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Could not open requirements file: [Errno 2] No such file or directory: '/home/ubuntu/canonicaliser_api/requirements.txt'
The command '/bin/sh -c source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt' returned a non-zero code: 1
But this makes no sense: I added the whole code directory in the Dockerfile via ADD. Am I missing something here?
bash-3.2$ ls canonicaliser_api/requirements.txt
canonicaliser_api/requirements.txt
bash-3.2$

The usage is: ADD [source directory or URL] [destination directory]
You need to include the folder name in the destination:
ADD canonicaliser_api /home/ubuntu/canonicaliser_api

You have to be careful when copying directories, especially when the destination directory doesn't exist. In short, this won't work:
ADD canonicaliser_api /home/ubuntu
But this should:
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
In general, it's better to avoid the ADD instruction and use COPY instead. In this case, it's just a direct replacement.
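In this Dockerfile, that direct replacement would be:
COPY canonicaliser_api /home/ubuntu/canonicaliser_api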
In future, a way to debug things like this is to take the last image that was successfully built (in this case, the one from the ADD line) and start a container from it. Then you can try running the problematic instruction and figure out what's going wrong.
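For example (a sketch; the image ID is a placeholder for the one printed by the last successful ---> line of your build output):
docker run -it --rm <image-id> /bin/bash
# inside the container: see where ADD actually put the files
ls /home/ubuntu
# and reproduce the failure, since the extra directory level is missing
ls /home/ubuntu/canonicaliser_api/requirements.txt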

Related

Issue with the installation of Python 3.8 with a Dockerfile

I am trying to update my CI pipeline for GitLab, but my pipeline keeps failing because the Docker-in-Docker of my runner fails to install Python 3.8.
In my Dockerfile I am running the following commands:
FROM ubuntu:latest
ENV http_proxy $HTTPS_PROXY
ENV https_proxy $HTTPS_PROXY
RUN apt-get update && apt-get install -y \
python3.8 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
but my pipeline fails, giving me the following error:
Package python3.8 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package 'python3.8' has no installation candidate
error building image: error building stage: failed to execute command: waiting for process to exit: exit status 100
In many suggestions I have found that running apt-get update should solve the problem; however, that is not working for me.
The latest Ubuntu repos don't contain old Python versions by default.
You can either try using a newer Python version or add the deadsnakes PPA with something like this:
FROM ubuntu:latest
ENV http_proxy $HTTPS_PROXY
ENV https_proxy $HTTPS_PROXY
RUN apt-get update && apt-get install -y software-properties-common \
&& add-apt-repository -y ppa:deadsnakes/ppa \
&& apt-get update && apt-get install -y \
python3.8 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
Note the apt-get update before installing software-properties-common: the Ubuntu base image ships with empty package lists, so installing anything fails without it. There is also no need for sudo, since RUN already executes as root (and the sudo binary isn't installed in the base image anyway).
As an alternative, you could always consider using one of the official Python Docker images rather than installing Python on top of an Ubuntu image yourself.
python:3.8-buster or python:3.8-slim-buster may be close enough to what you need?
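A minimal sketch of that approach (the /app layout and the requirements.txt name are assumptions, not taken from your setup):
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .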

Failed to Fetch error when trying to build docker image (while RUN apt-get update)

I'm trying to create a Flask-Docker project, but I also need some tools from Linux.
So I have a debian:latest base image for my Dockerfile, in which I want to install
python3 and dieharder (the package I need for my project).
But every time I try to build the image with the following command:
docker build --no-cache --pull -t backend .
I get 3 packages that fail to fetch:
E: Failed to fetch http://deb.debian.org/debian/pool/main/g/gcc-defaults/gcc_10.2.1-1_amd64.deb Bad header line Bad header data [IP: 151.101.14.132 80]
E: Failed to fetch http://deb.debian.org/debian/pool/main/g/gcc-defaults/g%2b%2b_10.2.1-1_amd64.deb Bad header line Bad header data [IP: 151.101.14.132 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
The command '/bin/sh -c apt-get update --fix-missing -y && apt-get install -y python3-pip && apt-get install -y dieharder' returned a non-zero code: 100
I'm already running everything in one RUN command and also using --no-cache, but I still
get these errors. I also couldn't find anyone else with these specific errors.
So I'm asking for your help, because I don't know what else I can do.
My Dockerfile:
FROM debian:latest
RUN apt-get update --fix-missing -y && apt-get install -y python3-pip && apt-get install -y dieharder
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD ["main.py"]
Could it be a problem that I'm doing all this on a VM? Is there any further information
you need to help me? Please let me know; this is my first question ever, so sorry if it's not that good :)
P.S. I don't really need to use Debian as a base image, but I would prefer to. I also tried Ubuntu and got the same error, so I don't think that's the problem.
Update:
It seems like I solved this problem, although I'm not really sure why it works now.
I had to change my RUN command and install gcc and g++ in their own apt-get install steps after
apt-get update. I don't know if this is an elegant solution, but it works for now.
If anyone has a smoother solution, please let me know :)
Anyway here is my final Dockerfile:
FROM debian:latest
RUN apt-get update -y && apt-get install gcc-10 -y && apt-get install g++-10 -y && apt-get install -y python3-pip && apt-get install -y dieharder
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python3" ]
CMD ["main.py"]

Django-apache2 exited with code 0 when I write docker-compose up

I created a Dockerfile and a docker-compose file, but docker-compose up gives me this error: django-apache2 exited with code 0.
Dockerfile
FROM ubuntu:18.04
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python3.8
RUN apt-get -y install python3-pip
RUN apt -y install apache2
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils
RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN pip3 install --upgrade pip
COPY ./requirements.txt ./requirements.txt
RUN apt-get -y install python3-dev
RUN apt-get -y install python-dev default-libmysqlclient-dev
RUN pip3 install -r ./requirements.txt
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
RUN mkdir /var/www/api/
COPY ./project/. /var/www/api/
WORKDIR /project/
Docker-compose
version: "3"
services:
django-apache2:
container_name: "django-apache2"
build: .
ports:
- "8005:80"
First, we need to understand that a Docker container runs a single command, and the container keeps running only as long as the process started by that command is running. Once that process completes and exits, the container stops.
With that understanding, we can make an assumption about what is happening in your case: when your service starts, there is no long-running command, so the container stops as soon as the process exits (with status 0).
So you need to add a command that keeps running to your container.
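One way is to set the command at the compose level. A sketch (apache2ctl comes from the apache2 package your Dockerfile already installs; -D FOREGROUND keeps Apache from daemonizing, so the container stays up):
version: "3"
services:
  django-apache2:
    container_name: "django-apache2"
    build: .
    ports:
      - "8005:80"
    command: apache2ctl -D FOREGROUND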
Your container lacks something to run. You need to add a CMD or ENTRYPOINT instruction to your Dockerfile.
That's why you see such a message, which is not an error. The message is telling you that your container django-apache2 finished correctly (exit status 0); this happens because your Dockerfile defines no command of its own, so the container falls back to the ubuntu base image's default command, which exits immediately.
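For example, a minimal sketch added at the end of the Dockerfile (apache2ctl is provided by the apache2 package the Dockerfile installs):
CMD ["apache2ctl", "-D", "FOREGROUND"]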
The problem with this approach is the www-data apache2 user. If you install Python packages from the Dockerfile, they are installed for the superuser, and the www-data Apache user cannot access those packages.
I tried creating a new venv using pip and the same problem happens: packages installed by the superuser into a Python virtual environment do not end up inside the venv folder.
I created a new repository on GitHub explaining a different approach, using miniconda3 as the Python package manager and sudo -u to run commands as a different user.
I am trying to solve this using pip. Changes will be posted in the repository.
I hope this can be useful to you.

apache-airflow fails to install

I'm trying to install apache-airflow the recommended way with pip install apache-airflow. During the install of pendulum (a dependency), I get an error:
error: can't copy 'pendulum/parsing': doesn't exist or not a regular file
I think it's related to Python distutils error: "[directory]... doesn't exist or not a regular file", but that doesn't give an answer as to how one resolves this when using pip. Pulling the tar for pendulum and installing with python setup.py install works, but then when I subsequently run pip install apache-airflow again, it sees that pendulum is already installed, UNINSTALLS it, and then tries to install it again using pip, resulting in the same error. I'm using a Docker container and installing python-setuptools with apt-get before I do any of this. Here's my Dockerfile, fwiw...
FROM phusion/baseimage:0.10.1
MAINTAINER a curious dev
RUN apt-get update && apt-get install -y python-setuptools python-pip python-dev libffi-dev libssl-dev zip wget
ENV SLUGIFY_USES_TEXT_UNIDECODE=yes
RUN wget https://files.pythonhosted.org/packages/5b/57/71fc910edcd937b72aa0ef51c8f5734fbd8c011fa1480fce881433847ec8/pendulum-2.0.4.tar.gz
RUN tar -xzvf pendulum-2.0.4.tar.gz
RUN cd pendulum-2.0.4/ && python setup.py install
RUN pip install apache-airflow
CMD airflow initdb && airflow webserver -p 8080
Does anyone see anything I'm doing wrong? I haven't found anyone else with this error so I think there's something really obvious I'm missing. Thanks for reading.
Upgrade pip first. The copy error comes from building pendulum from source, and the ancient pip that apt installs can fail on pendulum's newer build setup, so upgrading pip before installing apache-airflow avoids it:
FROM phusion/baseimage:0.10.1
RUN apt-get update && apt-get install -y python-setuptools python-pip python-dev libffi-dev libssl-dev zip wget
ENV SLUGIFY_USES_TEXT_UNIDECODE=yes
RUN pip install -U pip
RUN pip install apache-airflow
CMD airflow initdb && airflow webserver -p 8080
seems to work fine for me.

How to check whether a Python package is installed or not in Docker?

I used a Dockerfile to successfully build a container. However, my code doesn't work in the container, while it does work if I install all the packages manually. I'm assuming I messed up something that caused Docker not to install the packages properly. So I want to check whether a Python package is installed or not in the Docker container. What is the best way to check?
The Dockerfile I used:
# Update the sources list
RUN sudo apt-get update
# Install basic applications
RUN sudo apt-get install -y tar git curl nano wget dialog net-tools build-essential
# First install ZeroMQ
RUN sudo apt-get install -y libzmq-dev
# Install libevent
RUN sudo apt-get install -y libevent-dev
# Install Python and Basic Python Tools
RUN sudo apt-get install -y python python-dev python-setuptools
RUN sudo apt-get install -y python-pip
# Add the current directory to the container
ADD . /root/code
# Get pip to download and install requirements:
RUN sudo pip install -r /root/code/requirements.txt
# Expose ports
EXPOSE 80 4242
# Define working directory.
WORKDIR /root/code
# Start the tcp server.
CMD python app.py
The requirements.txt I used:
gevent==1.0.1
greenlet==0.4.5
msgpack-python==0.4.2
pyzmq==13.1.0
wsgiref==0.1.2
zerorpc==0.4.4
I figured it out:
docker exec <container ID> pip list
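To check for one specific package, pip show works as well and prints its version and install location (a sketch, using one of the packages from the requirements.txt above):
docker exec <container ID> pip show gevent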
