version: "3.4"
services:
app:
build:
context: ./
target: project_build
image: sig-project
environment:
PROJECT_NAME: "${PROJECT_NAME:-NullProject}"
command: /bin/bash -c "init_project.sh"
ports:
- 7777:7777
expose:
- 7777
volumes:
- ./$PROJECT_NAME:/app/$PROJECT_NAME
- .:/home/jovyan/work
working_dir: /app
entrypoint: "jupyter notebook --ip=0.0.0.0 --port=7777 --allow-root --no-browser"
Above is my docker-compose.yaml.
The command doesn't run; instead I get this error:
Unrecognized alias: 'c', it will have no effect
Furthermore, it runs the jupyter notebook out of /bin instead of /app.
If I change the command to
command: "init_project.sh"
it fails silently, and if I try to do something more complicated like:
command: python init_project.py
then it gives me this:
No such file or directory: /app/python
note: init_project.sh is just a bash script wrapper for init_project.py
So it seems that, for some reason I don't understand, the commands are run from within the /app directory, but without a shell or bash.
I've been hitting my head against the wall trying to figure out what I'm missing here, and I don't know what else to try.
I've found a variety of issues and discussions that are similar, but nothing seems to resolve it.
These are the contents of the working_dir /app:
# ls
Dockerfile docker-compose.yaml docker_compose.sh init_project.sh poetry.lock pyproject.toml
Makefile create_poetry_lock.sh docker_build.sh init_project.py install-poetry.py poetry_lock_update.sh test.sh
What am I missing?
Thanks!
Your compose file looks weird to me!
You either have a Docker image that the container will be created from, or you have a Dockerfile present, which builds the image, and the container is then created from that image.
Based on what has been said above:
Why do you have both image and build attributes in your compose file?
If the image is already available (e.g. PostgreSQL, RabbitMQ), then you don't need to build anything; just provide the image.
If there's no image, then please also add your Dockerfile to the question.
Why are you trying to run /bin/bash -c?
You can simply add:
bash /path/to/your/shell_script
Lastly, why don't you add your command in the Dockerfile with CMD?
What are init_project.sh and init_project.py? Maybe also share the contents of those files.
It might also be good to add the tree output so we know from where the different commands are being executed.
At the moment I am trying to build a Django app that other users should be able to run as a Docker container. I want them to be able to start the container with a simple docker run command or a prewritten docker-compose file.
Now I have problems with the persistence of the data. I am using the volumes key in docker-compose, for example, to bind-mount a local folder of the host into the container, where the app data and config files are located. The host folder is empty on the first run, as the user has just installed Docker and is starting the docker-compose file for the first time.
As it is a bind mount, the empty host folder overrides the folder in the container, as far as I understood, so the container folder containing the Django app is now empty and the app can no longer start.
I searched a bit, and as far as I understood, I need to create an entrypoint.sh file that copies the app data folder into the mounted folder of the container after startup.
Now to my questions:
Is there a best practice for how to copy the files via an entrypoint.sh file?
What about a second run, after the first one worked and the files already exist: how do I avoid overriding possibly changed config files with the default ones in the temp folder?
My example code for now:
Dockerfile
# pull official base image
FROM python:3.6
# set work directory
RUN mkdir /app
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . /app/
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
#one of my tries to make data persistent
VOLUME /app
docker-compose.yml
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    command: python manage.py runserver 0.0.0.0:8000
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - /folder/to/app/data:/app
    networks:
      - overlay-core
networks:
  overlay-core:
    external: true
entrypoint.sh
#empty for now
You should restructure your application to store the application code and its data in different directories. Even if the data is a subdirectory of the application, that's good enough. Once you do that, you can bind-mount only the data directory and leave the application code from the image intact.
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    volumes:
      - ./data:/app/data # not /app
There's no particular reason to put a VOLUME declaration in your Dockerfile, but you should declare the CMD your image should run there.
I've been scratching my head for a while with this. I have the following Dockerfile for my python application:
# Use an official Python runtime as a parent image
FROM frankwolf/rpi-python3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN chmod 777 docker-entrypoint.sh
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Run __main__.py when the container launches
CMD ["sudo", "python3", "__main__.py", "-debug"] # Not sure if I need sudo here
docker-compose file:
version: "3"
services:
mongoDB:
restart: unless-stopped
volumes:
- "/data/db:/data/db"
ports:
- "27017:27017"
- "28017:28017"
image: "andresvidal/rpi3-mongodb3:latest"
mosquitto:
restart: unless-stopped
ports:
- "1883:1883"
image: "mjenz/rpi-mosquitto"
FG:
privileged: true
network_mode: "host"
depends_on:
- "mosquitto"
- "mongoDB"
volumes:
- "/home/pi:/home/pi"
#image: "arkfreestyle/fg:v1.8"
image: "test:latest"
entrypoint: /app/docker-entrypoint.sh
restart: unless-stopped
And this what docker-entrypoint.sh looks like:
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # Create .initialized hidden file
    touch /home/pi/.initialized
else
    echo "Initialized already!"
    sudo python3 __main__.py -debug
fi
Here's what I am trying to do:
(This stuff already works)
1) I need a docker image which runs my python application when I run it in a container. (this works)
2) I need a docker-compose file which runs two services plus my python application, BUT before running my python application I need to do some initialization work. For this I created a shell script, docker-entrypoint.sh. I want to do this initialization work ONLY ONCE, when I deploy my application on a machine for the first time, so I'm creating a .initialized hidden file which I'm using as a check in my shell script.
I read that using entrypoint in a docker-compose file overrides any entrypoint/cmd given in the Dockerfile. That's why, in the else portion of my shell script, I'm manually running my code using "sudo python3 __main__.py -debug"; this else portion works fine.
(This is the main question)
In the if portion, I do not run my application in the shell script. I've tested the shell script itself separately, and both the if and else branches work as I expect. But when I run "sudo docker-compose up", the first time my shell script hits the if portion it echoes the two statements, creates the hidden file, and THEN RUNS MY APPLICATION. The console output appears in purple/pink/mauve for the application, while the other two services print their logs in yellow and cyan. I'm not sure if the colors matter, but under normal conditions my application logs are always green; in fact, the first two echoes, "Initializing" and "Creating .initialized", are also green, so I thought I'd mention this detail. After those two echoes, my application mysteriously begins and logs console output in purple...
Why/how is my application being invoked in the if statement of the shell script?
(This only happens if I run through docker-compose, not if I just run the shell script with sh docker-entrypoint.sh.)
Problem 1
Using ENTRYPOINT and CMD at the same time has a non-obvious effect: the CMD is passed as extra arguments to the ENTRYPOINT rather than being run on its own.
Problem 2
This happens to your container:
It is started the first time. The .initialized file does not exist.
The if case is executed. The file is created.
The script and therefore the container ends.
The restart: unless-stopped option restarts the container.
The .initialized file exists now, the else case is run.
python3 __main__.py -debug is executed.
BTW the USER command in the Dockerfile or the user option in Docker Compose are better options than sudo.
I want to create a docker image. This is my work directory:
Dockerfile.in test.json test.py
And this is my Dockerfile:
COPY ./test.json /home/test.json
COPY ./test.py /home/test.py
RUN python test.py
When I launch this command:
docker build -f Dockerfile.in -t 637268723/test:1.0 .
It gives me this error:
Step 1/5 : COPY ./test.json /home/test.json
---> Using cache
---> 6774cd225d60
Step 2/5 : COPY ./test.py /home/test.py
COPY failed: stat /var/lib/docker/tmp/docker-builder428014112/test.py:
no such file or directory
Can anyone help me?
You should put those files into the same directory as the Dockerfile.
Check if there's a .dockerignore file, if so, add:
!mydir/test.json
!mydir/test.py
Q1: Check the .dockerignore file in your build path; the files or directories you want to copy may be on the ignore list!
Q2: The COPY directive is resolved against the context in which you are building the image, so be aware of which directory you are currently building from! See: https://docs.docker.com/engine/reference/builder/#copy
I had to use the following command to start the build:
docker build .
Removing ./ from source path should resolve your issue:
COPY test.json /home/test.json
COPY test.py /home/test.py
I was also facing the same issue. I moved my Dockerfile to the root of the project; then it worked.
Make sure the context you build your image with is set correctly. You can set the context as an argument when building.
Example:
docker build -f ./Dockerfile .. , where '..' is the context in this example.
In your case, removing ./ should solve the issue. I had another case where I was using a directory from the parent directory, and Docker can only access files below the directory where the Dockerfile is present.
So if I have a directory structure /root/dir and the Dockerfile at /root/dir/Dockerfile,
I cannot do the following:
COPY root/src /opt/src
In my case, it was a comment on the same line that was messing up the COPY command.
I removed the comment after the COPY command and placed it on a dedicated line above the command. Surprisingly, that resolved the issue.
Faulty Dockerfile command
COPY qt-downloader . # https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
Working Dockerfile command
# https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
COPY qt-downloader .
Hope it helps someone.
This may help someone else facing a similar issue.
Instead of leaving the file floating in the same directory as the Dockerfile, create a directory, place the file to copy inside it, and then try:
COPY mydir/test.json /home/test.json
COPY mydir/test.py /home/test.py
Another potential cause is that Docker will not follow symbolic links by default (i.e. don't use ln -s).
The following structure in docker-compose.yaml will allow you to have the Dockerfile in a subfolder from the root:
version: '3'
services:
  db:
    image: postgres:11
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 127.0.0.1:5432:5432
  web:
    build:
      context: ".."
      dockerfile: dockerfiles/Dockerfile
    command: ...
    ...
Then, in your Dockerfile, which is in the same directory as docker-compose.yaml, you can do the following:
ENV APP_HOME /home
RUN mkdir -p ${APP_HOME}
# Copy the file to the directory in the container
COPY test.json ${APP_HOME}/test.json
COPY test.py ${APP_HOME}/test.py
# Browse to that directory created above
WORKDIR ${APP_HOME}
You can then run docker-compose from the parent directory like:
docker-compose -f .\dockerfiles\docker-compose.yaml build --no-cache
In my case, I had to put all my project files into a subdirectory:
app/        <- inside the app directory we have the following
    package.js
    src
    assets
Dockerfile
Then I copied the files this way:
COPY app ./
I had such an error while trying to build a Docker image and push it to the container registry. Inside my Dockerfile I tried to copy a jar file from the target folder and execute it with the java -jar command.
I solved the issue by removing the .jar file and the target folder from the .gitignore file.
When using the Docker Compose files, publish publishes to obj/Docker/Publish. When I copied my files there and pointed my Dockerfile to this directory (as generated), it worked…
Docker looks for files relative to the current directory,
i.e. if your command is
COPY target/xyz.jar app.jar
ADD target/xyz.jar app.jar
then the xyz.jar should be in the current/target directory, where 'current' is the place where you have your Dockerfile.
So if you have the Dockerfile in a different directory, it is better to bring it to the main project directory and have a straight path to the jar being added or copied into the image.
I had the same issue with a .tgz file.
It was just about the location of the file. Ensure the file is in the same directory as the Dockerfile.
Also ensure the .dockerignore file doesn't exclude the file with a pattern.
In my case the solution was to place the file in a directory and copy the whole directory's content with one command, instead of copying a single file:
COPY --chown=1016:1016 myfiles /home/myapp/myfiles
Make sure your path names match exactly (they are case-sensitive); with the folder name /dist/inventory:
COPY /Dist/Inventory ... -- was throwing the error
COPY /dist/inventory ... -- worked smoothly
Using nodejs/express/javascript!
In my case I had multiple CMD ["npm", "run", ...] lines in the same Dockerfile, where you can only have one. Hence the first CMD ["npm", "run", "build"] was not being run and the /build folder was not created, so the command to copy the build folder, COPY --from=build /usr/src/app/build ./build, failed!
Changing that CMD to RUN npm run build fixed the issue.
My Dockerfile:
# first stage, named 'build' to match COPY --from=build below
# (the base image of this stage is assumed to mirror the production stage)
FROM node:lts-alpine3.17 as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy everything except content from .dockerignore
COPY . ./
#CMD ["npm", "run", "build"]
RUN npm run build
RUN ls -la | grep build

FROM node:lts-alpine3.17 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
RUN pwd
COPY package*.json ./
RUN npm ci --only=production
COPY --from=build /usr/src/app/build ./build
CMD ["node", "build/index.js"]
Here is the reason why it happens: your local directory on the host OS, where you are running docker, should contain the file; otherwise you get this error.
One solution is to:
use RUN cp <src> <dst> instead of
COPY <src> <dst>
then run the command; it works!
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>imagename</name>
        <alias>dockerfile</alias>
        <build>
          <!-- <filter>#</filter> -->
          <dockerFileDir>dockerfile location</dockerFileDir>
          <tags>
            <tag>latest</tag>
            <tag>0.0.1</tag>
          </tags>
        </build>
        <run>
          <ports>
            <port>8080:8080</port>
          </ports>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from the docker repo bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local directory into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
Then I need to install yaml and the other dependencies, and every time I exit the container the installations are lost. Additionally, it is also much slower for some reason. So the right way to do this is with Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting this error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface in the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.