I have a folder in an Ubuntu VM called "MovieFlix" that contains a Dockerfile, a Python Flask app, and a "templates" folder with HTML templates inside. I managed to build a Docker image from the same Dockerfile successfully twice, but I had to delete it in order to edit my Python file. The third time I try to build my Docker image, the image is not built and I get:
Package python3 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
dh-python
E: Package 'python3' has no installation candidate
The command '/bin/sh -c apt-get install -y python3 python3-pip' returned a non-zero code: 100
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER bill <bill#gmailcom>
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y bcrypt
RUN pip3 install flask pymongo flask_bcrypt
RUN pip3 install Flask-PyMongo py-bcrypt
RUN mkdir /app
RUN mkdir -p /app/templates
COPY webservice.py /app/webservice.py
ADD templates /app/templates
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["python3" , "-u" , "webservice.py" ]
I tried installing python3-pip but it is already installed in my Ubuntu VM.
I would appreciate your help. Thank you in advance.
Run the commands below in order:
1. sudo apt-get update
2. sudo apt-get install dh-python
SOLVED: I deleted all inactive Docker containers and built my image again with the same code.
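If this happens to anyone again: a stale apt package list is often Docker's layer cache replaying an old apt-get update layer. The usual guard is to combine update and install into a single RUN instruction; a minimal sketch with the same packages as above:
RUN apt-get update && apt-get install -y python3 python3-pip
Alternatively, $ docker build --no-cache . forces every layer to be rebuilt from scratch.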
I'm trying to use this tutorial to upload a Docker container to AWS ECR for Lambda. My problem is that my Python script uses psycopg2, and I couldn't figure out how to install psycopg2 inside the Docker image. I know that I need postgresql-devel for the libpq library and gcc for compiling, but it still doesn't work.
My requirements.txt:
pandas==1.3.0
requests==2.25.1
psycopg2==2.9.1
pgcopy==1.5.0
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
WORKDIR /app
COPY my_script.py .
COPY some_file.csv .
COPY requirements.txt .
RUN yum install -y postgresql-devel gcc*
RUN pip install -r requirements.txt
CMD ["/app/my_script.handler"]
After building, running the image, and testing the lambda function locally, I get this error message:
psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above
So I think the container has the wrong version of postgres(-devel). But I'm not sure how to install the proper version? Any tips for deploying a psycopg2 script to docker for lambda usage?
This might be a little old and too late to answer, but I figured I'd post what worked for me.
FROM public.ecr.aws/lambda/python:3.8
COPY . ${LAMBDA_TASK_ROOT}
RUN yum install -y gcc python27 python27-devel postgresql-devel
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "app.handler" ]
I recently started learning Docker and I was attempting to build a Flask Python image by following a tutorial video.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install python
CMD echo "Python Installed"
RUN pip install flask
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run
This is the Dockerfile in my source code working directory. I run sudo docker build . -t nxte/custom-app on a DigitalOcean droplet with Docker installed, but it returns The command '/bin/sh -c apt-get install python' returned a non-zero code: 1.
Any suggestions? I have no idea what the problem is since I followed the tutorial to a T.
You should use -y with apt-get:
RUN apt-get -y install python
Also notice that the above does not install pip, and it's not possible to install it with apt-get -y install python-pip, so either switch to Python 3 (apt-get -y install python3 and apt-get -y install python3-pip) or get pip from other sources.
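Putting that together, a minimal sketch of the Python 3 variant of the Dockerfile from the question (pinning ubuntu:22.04 is my assumption; on newer Ubuntu bases pip refuses to install packages system-wide):
FROM ubuntu:22.04
# update and install in one layer so the package lists are never stale
RUN apt-get update && apt-get -y install python3 python3-pip
RUN pip3 install flask
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run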
I created a Dockerfile and a docker-compose file, but when I run docker-compose up it gives me this error: django-apache2 exited with code 0.
Dockerfile
FROM ubuntu:18.04
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python3.8
RUN apt-get -y install python3-pip
RUN apt -y install apache2
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils
RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN pip3 install --upgrade pip
COPY ./requirements.txt ./requirements.txt
RUN apt-get -y install python3-dev
RUN apt-get -y install python-dev default-libmysqlclient-dev
RUN pip3 install -r ./requirements.txt
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
RUN mkdir /var/www/api/
COPY ./project/. /var/www/api/
WORKDIR /project/
docker-compose.yml
version: "3"
services:
django-apache2:
container_name: "django-apache2"
build: .
ports:
- "8005:80"
First, we need to understand that a Docker container runs a single command. The container keeps running as long as the process that command started is running; once the process completes and exits, the container stops.
With that understanding, we can make an assumption about what is happening in your case: when you start your service there is no command to run, so the Docker container stops because the process exited (with status 0).
So you need to add a command that keeps running inside your container.
Your container lacks something to run. You need to add a CMD or ENTRYPOINT instruction to your Dockerfile.
That's why you see such a message, which is not an error. The message is telling you that your container django-apache2 finished correctly (exit status 0), and this is because you are building on the base image ubuntu, which doesn't execute anything by itself.
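For an Apache image like the one above, the usual fix is to end the Dockerfile with Apache running in the foreground so the container has a long-lived main process; a minimal sketch, assuming the stock Debian/Ubuntu apache2 package from the Dockerfile above:
# run Apache in the foreground so the container's main process never exits
CMD ["apache2ctl", "-D", "FOREGROUND"]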
The problem with this approach is the www-data Apache user. If you install Python packages from the Dockerfile, they are installed for the superuser, and the www-data Apache user cannot access those packages.
I tried creating a new venv using pip and the same problem happens: packages installed by the superuser into a Python virtual environment are not installed inside the venv folder.
I created a new repository on GitHub explaining a different approach, using miniconda3 as the Python package manager and sudo -u to run commands as a different user.
I am trying to solve this using pip. Changes will be posted in the repository.
I hope this can be useful to you.
I have a working service running on a python:3.6-jessie image.
I am trying to reduce the size of it to speed up serverless cold starts.
I have tried the images python:3.6-alpine, python:3.6-slim-buster and python:3.6-slim-jessie.
With all of them I end up having to install many additional packages, and I end up with the following error that I cannot fix with more packages:
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
My current Dockerfile is
FROM python:3.6-jessie as build
ENV PYTHONUNBUFFERED 0
ENV FLASK_APP "api/app.py"
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /opt/venv
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
FROM python:3.6-slim-jessie
COPY --from=build /opt/venv /opt/venv
WORKDIR /opt/venv
RUN apt-get update
RUN apt-get --assume-yes install gcc
RUN apt-get --assume-yes install python-mysqldb
ENV PATH="/opt/venv/bin:$PATH"
RUN rm -rf configs tests draw_results env .idea .git .pytest_cache
EXPOSE 8000
CMD ["/opt/venv/run.sh"]
The relevant lines from requirements.txt:
mysqlclient==1.4.2.post1
PyMySQL==0.9.3
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.3.0
The run.sh is just my gunicorn start command.
Is there any package I can use to fix this last issue? Is there some other MySQL library I should be using, or some other way for me to fix this? Or should I just stick to the full python:3.6 images when I want a MySQL client?
I'm using python:3.7-slim with the following command:
RUN apt-get -y install default-libmysqlclient-dev
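For context, mysqlclient (pinned in the question's requirements) builds from source on slim images, so it needs the MySQL client headers plus a compiler before pip runs; a minimal sketch, where adding gcc is my assumption for the source build:
FROM python:3.7-slim
# headers and runtime library for mysqlclient, plus a compiler to build it
RUN apt-get update && apt-get install -y default-libmysqlclient-dev gcc
RUN pip install mysqlclient==1.4.2.post1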
Try adding this line to the Dockerfile:
RUN apt-get install -y libmysqlclient-dev
For python slim-buster (Debian OS) you can run this command in the Dockerfile:
RUN apt-get update && apt-get install -y default-mysql-client
This worked for me.
I have used python:3.10.6-slim-buster
I'm trying to build a Docker image with docker-compose on my ARM64 Raspberry Pi, but it seems to be impossible.
This is my Dockerfile:
FROM python:3.6-slim
RUN apt-get update && apt-get -y install python3-dev
RUN apt-get -y install python3-numpy
RUN apt-get -y install python3-pandas
ENTRYPOINT ["python3", "app.py"]
It seems to be OK, but when app.py is run, it gives an error: "Module numpy not found", and the same for pandas module.
If I try to install numpy and pandas using pip:
RUN pip install numpy pandas
It gives me an error or, more usually, the Raspberry Pi just freezes and I have to unplug it to recover.
I have tried with different versions of python for the source image and also using several ubuntu images and installing python.
Any idea how I can install numpy and pandas in Docker for my Raspberry Pi (ARM64)?
Thanks
The problem seems to be the Python version. I'm using a python3.6 Docker image, but both the python3-numpy and python3-pandas packages require python3.5, so when those packages are installed a new version of Python is also installed. This is why, when I try to import those modules, the Python interpreter can't find them: they are installed for another Python version.
Finally I solved it using a generic Debian Docker image and installing python3.5 myself instead of using a Python Docker image.
FROM debian:stretch-slim
RUN apt-get update && apt-get -y dist-upgrade
RUN apt-get -y install build-essential libssl-dev libffi-dev python3.5 libblas3 libc6 liblapack3 gcc python3-dev python3-pip cython3
RUN apt-get -y install python3-numpy python3-sklearn
RUN apt-get -y install python3-pandas
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt
(Disclaimer: the Raspberry Pi 3 B+ is probably too slow to install big dependencies like numpy.)
This Dockerfile worked for me on the Raspberry Pi 3 B+ with software version Linux raspberrypi 5.10.63-v7+ (consider updating yours).
FROM python:3.9-buster
WORKDIR /
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
I am not sure, but I think it also helped to clean Docker, i.e. remove all images and containers, with the following commands:
Warning: these commands delete all stopped containers and unused images!
$ docker container prune
$ docker image prune -a
Or reset Docker completely (this also deletes volumes and networks):
$ docker system prune --volumes
I recommend creating a requirements.txt file.
Inside you can declare the packages to install.
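For example, a minimal requirements.txt matching the pandas check further below (the contents are illustrative):
pandas
numpy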
The Dockerfile:
FROM python
COPY app.py /workdir/
COPY requirements.txt /workdir/
WORKDIR /workdir
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD python app.py
Edit:
I created a Dockerfile which installs the pandas lib and then checked that it works:
$ cat Dockerfile
FROM python
COPY app.py /workdir/
WORKDIR /workdir
RUN python -m pip install pandas
CMD python app.py
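For completeness, a hypothetical app.py for that check; it just imports pandas and prints its version, so a successful run proves the install worked:
# app.py - fails at import time if pandas is missing
import pandas as pd
print(pd.__version__)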