command in docker-compose.yaml throwing errors - python

version: "3.4"
services:
app:
build:
context: ./
target: project_build
image: sig-project
environment:
PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
command: /bin/bash -c "init_project.sh"
ports:
- 7777:7777
expose:
- 7777
volumes:
- ./$PROJECT_NAME:/app/$PROJECT_NAME
- .:/home/jovyan/work
working_dir: /app
entrypoint: "jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"
Above is my docker-compose.yaml.
The command doesn't run; I get this error:
Unrecognized alias: 'c', it will have no effect
Furthermore, it runs the jupyter notebook out of /bin instead of /app.
If I change the command to
command: "init_project.sh"
it fails silently, and if I try to do something complicated like:
command: python init_project.py
then it gives me this:
No such file or directory: /app/python
Note: init_project.sh is just a bash wrapper script around init_project.py.
So it seems that, for some reason I don't understand, the commands are run from within the /app directory but without a shell or bash.
I've been hitting my head against the wall trying to figure out what I'm missing here and I don't know what else to try.
I've found a variety of issues and discussions that are similar, but nothing seems to resolve it.
These are the contents of the working dir /app:
# ls
Dockerfile docker-compose.yaml docker_compose.sh init_project.sh poetry.lock pyproject.toml
Makefile create_poetry_lock.sh docker_build.sh init_project.py install-poetry.py poetry_lock_update.sh test.sh
What am I missing?
Thanks!

Your compose file looks weird to me!
You either have a Docker image that the container will be created from, or you have a Dockerfile present that builds the image, and the container is then created from that image.
Based on that:
Why do you have both image and build attributes in your compose file?
If the image is already available (e.g. postgreSQL, rabbitmq) then you don't need to build anything; just provide the image.
If there's no pre-built image, then please also add your Dockerfile to the question.
Why are you trying to run /bin/bash -c?
You can simply use:
bash /path/to/your/shell_script
Lastly, why don't you add your command in the Dockerfile with CMD? (See the sketch after this list.)
What are init_project.sh and init_project.py? Maybe also share the contents of those files.
It might also be good to add tree output so we know from where the different commands are being executed.
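Here is a minimal sketch of the CMD suggestion (assuming init_project.sh lives in /app inside the image and is executable; the paths are guesses based on the question):
# Dockerfile (fragment), instead of a command: entry in compose
WORKDIR /app
CMD ["bash", "init_project.sh"]
With the command baked into the image, the compose file only needs to override it when you genuinely want different behavior.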

Related

Docker containers crashed: /bin/sh: 1: [uvicorn,: not found

I am new to Docker and trying to Dockerize my FastAPI application.
First I created a Dockerfile:
FROM python:3.9.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then ran the following command:
docker build -t fastapi .
The command ran successfully.
After that I created the following docker-compose.yml:
version: "3"
services:
api:
build: .
ports:
- 8000:8000
env_file:
./.env
Then ran the following command:
docker-compose up -d
Ran successfully:
Network fastapi_default  Created  0.7s
Container fastapi_api_1  Started
Then, to check if it's running properly, I ran the following command:
docker ps -a
And it showed that the container had exited a few seconds after it was created.
Then I ran this command:
docker logs fastapi_api_1
And it says:
/bin/sh: 1: [uvicorn,: not found
Not sure what the reason is. I tried some solutions that I found online, but nothing worked out. I do have uvicorn in my requirements.txt file.
Help will be appreciated. Please let me know if additional information is required.
Note: you don't need to run docker build -t fastapi . manually; docker-compose will do it for you (because you set build: .). But you must run the up command with the --build parameter (docker-compose up --build) to force a rebuild of the image even if it already exists.
And about your problem:
Here is a very good article (and one more) about RUN, ENTRYPOINT and CMD
Here are the three forms of CMD:
CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)
According to the error, it looks like Docker is interpreting CMD as shell form, or as additional parameters for a default ENTRYPOINT.
I'm actually still not sure why it happens, but changing CMD to
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
or
ENTRYPOINT ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
should solve your problem
It may also be better to use the full path to the uvicorn executable (/usr/bin/uvicorn, or wherever it is installed by default). This is just my opinion, but that may be the reason why CMD is interpreted as parameters instead of a command.
PS: in addition, here is a note from the Docker docs:
Note
The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
So exec form syntax must meet the conditions of JSON syntax.
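To illustrate that note with two hypothetical CMD lines (not from the original Dockerfile):
CMD ['uvicorn', 'app.main:app']    # single quotes are not valid JSON, so Docker falls back to shell form
CMD ["uvicorn", "app.main:app"]    # double quotes parse as JSON, so this runs in exec form
The shell-form fallback is exactly what produces errors like /bin/sh: 1: [uvicorn,: not found.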
So, basically there was something wrong with the Docker images. I had created multiple images. I removed all of them, ran the same commands again, and it worked. I don't know the exact reason, but it's working now.
What I think was happening is that instead of deleting the old images and creating new ones, I was just doing
docker-compose down
and then
docker-compose up -t
I think that command was not taking the changes into consideration.
Then I ran:
docker-compose up --build
and I think that created a new image, and it worked.
Then I noticed that there were at least 10 images created. I deleted all of them and ran the same commands:
docker build .
docker-compose up -t
and it worked fine again.
So basically, instead of creating a new image it was using the old one, which was not created correctly. The fix:
docker-compose up --build
In short, you should use docker-compose up --build whenever you make changes in your Dockerfile or docker-compose.yml, instead of docker-compose up -t.
It might be confusing, but I am also very new to Docker.
Thanks for the help everyone!
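As an aside, a sketch of the cleanup described above in two commands (assuming Compose v1 CLI syntax; --rmi local removes only the images Compose built itself):
docker-compose down --rmi local   # stop containers and delete locally built images
docker-compose up --build         # rebuild the image from the current Dockerfile and start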
I've had the same issue with a Dockerfile in my docker-compose environment containing
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN pip install uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]
so I don't need an extra command: line in my docker-compose.yml.
It turned out that if you install uvicorn in your requirements.txt, as I like to do for testing purposes, then it gets installed locally, RUN pip install uvicorn==0.20.0 is skipped, and there is no /usr/bin/uvicorn 'executable' available, just one somewhere in site-packages, so CMD will fail.
So, if you use uvicorn in your requirements.txt and in the Dockerfile as well, you can either force the reinstallation with
RUN pip install --ignore-installed uvicorn==0.20.0
in the Dockerfile, or set the PATH so the executable is found in the guts of Python, or (what I find is the better solution, to keep the image size small) remove uvicorn from requirements.txt.
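Another workaround that sidesteps the executable lookup entirely (my suggestion, not from the answer above) is to invoke uvicorn as a module, since python -m finds it in site-packages regardless of PATH:
CMD ["python", "-m", "uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]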

File created by docker is uneditable on host

I have a Python script that creates a folder and writes a file into that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding RUN chmod -R 777 . but that didn't help either. The created files and folders show a lock sign.
I have been able to recreate the issue in a small demo. The contents are as follows:
demo.py
from pathlib import Path

Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build and see the results, then try to edit and save the created file dummy.txt - it should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run docker with sudo, so I followed the official instructions on adding user group etc.
I actually run the command docker-compose up --build not just docker compose up
I am on Ubuntu 20.04
The username is the same for both:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
Tried using PUID and PGID environment variables... but still the same issue.
Current docker-compose file -
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the docker group is different from your user's group, and you're misunderstanding the way Docker RUN works.
Here's what happens when you run docker build:
You pull the python:buster image.
You copy the contents of the current directory into the Docker image.
You set the work directory to the existing data directory.
You set the permissions of the data directory to 777.
Finally, CMD is set to indicate what program should run.
When you do a docker-compose up, the RUN command has no effect; it's a Dockerfile build instruction, not a runtime command. When your container runs, it writes the files as the container's user/group (root by default), which your host user doesn't have permission to edit.
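A common workaround along these lines (my addition, not part of this answer) is to run the container as your host user with the Compose user: option, assuming your UID and GID are both 1000 (check with id -u and id -g):
version: "3.3"
services:
  python:
    user: "1000:1000"    # host UID:GID, so files created in the bind mount belong to you
    working_dir: /data
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/data/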
A Docker container runs in an isolated environment, so the UID/GID-to-user-name/group-name mapping is not shared with the processes inside the container. You are facing two problems:
1. The files created inside the container are owned by root:root (by default, or by a UID chosen arbitrarily by the image).
2. The files are created with permissions 644 or 755, which are not editable by the host user.
You could refer to the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide two environment variables, PUID and PGID, to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
The place PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
The place UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to the ones in accord with the host (obtainable from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
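A minimal sketch of that preference (this only works with images that actually honor PUID/PGID, such as the linuxserver.io ones; plain python images ignore them):
# docker-compose.yaml fragment
    environment:
      - PUID=${PUID:-1000}    # host user id
      - PGID=${PGID:-1000}    # host group id
# host shell: pass your real IDs through at startup
PUID=$(id -u) PGID=$(id -g) docker-compose up --build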

Docker Compose "ERROR: file not found", but ubuntu shell can find the file

I'm trying to get a Docker Compose script to test a python program. The following is the .sh to run the docker-compose command.
#!/usr/bin/env bash
script_path=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
base_path=${script_path}/src
container_path=${base_path}/deploy/
docker_compose_path=${container_path}docker-compose.yml
sudo nano ${base_path}/tests/scrap/test_r.py
docker-compose -f "${docker_compose_path}" run service_name pytest -x ${base_path}/tests/scrap/test_r.py
docker-compose -f "${docker_compose_path}" down
The problem when running it is that an error shows up with the following message:
ERROR: file not found: /correct/path/to/test_r.py
NOTE: I have checked many times and the path (absolute) is correct.
The odd thing about the error is that the shell can find the file with no problem. The nano command right above the docker-compose one works perfectly, loading the file with the correct content. Still, with the same exact path, docker-compose seems unable to find it and execute it (I already tried chmod +x test_r.py, but it did not help; Docker Compose simply doesn't find it).
Whatever extra info you need, just ask. Thanks :P
NOTE: I'm not treating as a source of the problem the fact that the project worked perfectly on another computer running macOS; my PC is executing the scripts in an Ubuntu environment. If taking that into account is a step in the right direction, I don't know how to move toward solving the issue.
Edit: adding info of the file docker-compose.yml:
version: '3.5'
services:
  service_name:
    image: image_name
    build:
      context: ../
      dockerfile: Dockerfile
    tty: true
    volumes:
      - ../../../workers:/opt/workers/
    working_dir: /opt/workers/src
    env_file:
      - ../local/worker.env
Turns out all I had to do was update the working_dir path in the docker-compose.yml file so it could find the corresponding .py file. The problem was that I had no idea I had to pay attention to that setting. Thanks to @David Maze for asking me all of the questions; sometimes asking the right ones points one in the right direction.
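For the record, a sketch of what that means here (hypothetical paths): the argument to pytest must be a path that exists inside the container, i.e. relative to working_dir under the volume mount, rather than an absolute host path:
docker-compose -f "${docker_compose_path}" run service_name pytest -x tests/scrap/test_r.py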

docker-compose: Why is my python application being invoked here?

I've been scratching my head for a while with this. I have the following Dockerfile for my python application:
# Use an official Python runtime as a parent image
FROM frankwolf/rpi-python3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN chmod 777 docker-entrypoint.sh
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Run __main__.py when the container launches
CMD ["sudo", "python3", "__main__.py", "-debug"] # Not sure if I need sudo here
docker-compose file:
version: "3"
services:
mongoDB:
restart: unless-stopped
volumes:
- "/data/db:/data/db"
ports:
- "27017:27017"
- "28017:28017"
image: "andresvidal/rpi3-mongodb3:latest"
mosquitto:
restart: unless-stopped
ports:
- "1883:1883"
image: "mjenz/rpi-mosquitto"
FG:
privileged: true
network_mode: "host"
depends_on:
- "mosquitto"
- "mongoDB"
volumes:
- "/home/pi:/home/pi"
#image: "arkfreestyle/fg:v1.8"
image: "test:latest"
entrypoint: /app/docker-entrypoint.sh
restart: unless-stopped
And this is what docker-entrypoint.sh looks like:
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # Create .initialized hidden file
    touch /home/pi/.initialized
else
    echo "Initialized already!"
    sudo python3 __main__.py -debug
fi
Here's what I am trying to do:
(This stuff already works)
1) I need a docker image which runs my python application when I run it in a container. (this works)
2) I need a docker-compose file which runs 2 services + my python application, BUT before running my python application I need to do some initialization work, for this I created a shell script which is docker-entrypoint.sh. I want to do this initialization work ONLY ONCE when I deploy my application on a machine for the first time. So I'm creating a .initialized hidden file which I'm using as a check in my shell script.
I read that using entrypoint in a docker-compose file overrides any old ENTRYPOINT/CMD given in the Dockerfile. That is why, in the else portion of my shell script, I manually run my code using "sudo python3 __main__.py -debug"; this else portion works fine.
(This is the main question)
In the if portion, I do not run my application in the shell script. I've tested the shell script separately; both the if and else statements work as I expect. But when I run "sudo docker-compose up", the first time my shell script hits the if portion it echoes the two statements, creates the hidden file, and THEN RUNS MY APPLICATION. The console output for the application appears in purple/pink/mauve, while the other two services print their logs in yellow and cyan. I'm not sure if the colors matter, but normally my application logs are green; in fact, the first two echoes, "Initializing" and "Creating .initialized", are also green, so I thought I'd mention this detail. After those two echoes, my application mysteriously starts and logs console output in purple...
Why/how is my application being invoked in the if statement of the shell script?
(This only happens if I run through docker-compose, not if I just run the shell script with sh docker-entrypoint.sh.)
Problem 1
Using ENTRYPOINT and CMD at the same time has surprising effects: Docker appends CMD as arguments to ENTRYPOINT, and overriding entrypoint in Compose also clears the image's CMD.
Problem 2
This happens to your container:
1. It is started for the first time. The .initialized file does not exist.
2. The if case is executed. The file is created.
3. The script, and therefore the container, ends.
4. The restart: unless-stopped option restarts the container.
5. The .initialized file exists now, so the else case is run.
6. python3 __main__.py -debug is executed.
BTW, the USER instruction in the Dockerfile or the user option in Docker Compose is a better option than sudo.
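A minimal sketch of a fix along those lines (my reading of the intent: initialize once, but start the app on every boot):
#!/bin/sh
# One-time initialization on first start only
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    touch /home/pi/.initialized
else
    echo "Initialized already!"
fi
# Always start the app; exec makes it PID 1 so it receives signals
exec python3 __main__.py -debug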

Using docker to compose a remote image with a local code base for *development*

I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from a Docker image, bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local directory into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
then I need to install yaml and the other dependencies, and every time I exit the container the installations are lost. Additionally, it is also much slower for some reason. So the right way to do this is using Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting this error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.
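For what it's worth: build: expects a local path (or a Git URL) containing a Dockerfile, whereas an image that already exists locally or on Docker Hub is referenced with image:. A sketch of the likely fix for the openface service:
  openface:
    image: bamos/openface    # reference the existing image instead of trying to build it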
