I have to run a simple service on Docker Compose. The first image hosts the previously created service, while the second image, which depends on the first one, runs the tests. So I created this Dockerfile:
FROM python:2.7-slim
WORKDIR /flask
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "routes.py"]
Everything works. I created some simple tests, which also work, and placed the file in the same directory as routes.py.
So I tried to create a docker-compose.yml file and did something like this:
version: '2'
services:
  app:
    build: .
    command: 'python MyTest.py'
    ports:
      - "5000:5000"
  tests:
    build:
      context: Mytest.py
    depends_on:
      - app
When I run it, I receive an error:
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
So how should I specify this directory, and where should I place it: in the app or the tests service?
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
The above error tells you that context: should be the folder containing your Dockerfile. But since it seems you can use the same image to test your product, I think there is no need to specify it.
And I guess your MyTest.py will hit port 5000 of your app container to run the tests. So what you need is this:
version: '2'
services:
  app:
    build: .
    container_name: my_app
    ports:
      - "5000:5000"
  tests:
    build: .
    depends_on:
      - app
    command: python MyTest.py
Here, what you need to pay attention to is: your tests in MyTest.py should talk to http://my_app:5000.
Meanwhile, in MyTest.py I suggest you sleep for some time before the tests run, because depends_on can only guarantee that tests starts after app, not that Flask is actually ready at that moment. You can also consider a wait script such as wait-for-it to enforce the order, as sketched below.
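A minimal sketch of such a readiness loop, written against the python:2.7-slim image from the Dockerfile above (the wait_for helper and the 30-second timeout are made up for illustration):
# top of MyTest.py: poll the app before the tests run
import time
import urllib2  # Python 2 stdlib, matching the python:2.7-slim base image

def wait_for(url, timeout=30):
    # hypothetical helper: retry once per second until the app answers
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib2.urlopen(url)
            return  # Flask answered, safe to start testing
        except urllib2.URLError:
            time.sleep(1)  # not up yet, try again
    raise RuntimeError("app did not come up within %d seconds" % timeout)

wait_for("http://my_app:5000")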
You need to specify the dockerfile field, since you are using a version 2 Compose file.
Check this out.
Modify your build command:
...
build:
  context: .
  dockerfile: Dockerfile
...
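Applied to the tests service from the question, that would look something like this (a sketch; the context stays the project root, since MyTest.py lives next to routes.py and the same Dockerfile can build both services):
tests:
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - app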
I am trying to dockerize a Flask project with Redis and SQLite. I keep getting this error when I run the project using Docker. The project works just fine when I run it normally with python manage.py run.
Dockerfile
FROM python:3.7.2-slim
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python","manage.py run", "--host=0.0.0.0"]
docker-compose.yml
version: '3'
services:
  sqlite3:
    image: nouchka/sqlite3:latest
    stdin_open: true
    tty: true
    volumes:
      - ./db/:/root/db/
  api:
    container_name: flask-container
    build: .
    entrypoint: python manage.py run
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    volumes:
      - ./db/:/root/db/
      - ./app/main/:/app/main/
  redis:
    image: redis
    container_name: redis-container
    ports:
      - "6379:6379"
Please, what could be the problem?
Your docker-compose.yml file has several overrides that fundamentally change the way the image works. In particular, the entrypoint: line suppresses the CMD from the Dockerfile, which loses the key --host option. You also should not need volumes: to inject the application code (it's already in the image), nor should you need to manually specify container_name:.
services:
  api:
    build: .
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    # and no other settings
In the Dockerfile, your CMD has two shell words combined together. In the JSON-array (exec) form, every element is a single argv word, so Python is asked to run a file literally named manage.py run, which does not exist. You need to split those up into separate words:
CMD ["python","manage.py", "run", "--host=0.0.0.0"]
# ^^^^ two words
With these two fixes, you'll be running the CMD from the image, with the code built into the image, and with the critical --host=0.0.0.0 option.
I have a simple script like
print('hey 01')
and I have dockerized it like below:
FROM python:3.8
WORKDIR /crawler_app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "-u", "boot_up.py"]
and I also have a compose file like below:
version: "3.7"
services:
  db:
    image: mongo:latest
    container_name: ${DB_CONTAINER_NAME}
    volumes:
      - ./mongo-volume:/data/db
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_USER_PASS}
    ports:
      - ${EXPOSED_PORT}:27017
    networks:
      - crawl-network
  crawler:
    build: crawler
    container_name: ${CRAWLER_APP_NAME}
    restart: always
    volumes:
      - ./crawler:/crawler_app
    networks:
      - crawl-network
    depends_on:
      - db
networks:
  crawl-network:
The Problem
The problem is that although I have a volume mounted into my Docker container, and the source code inside the container does update when I change and save the code in my editor, there is no way to restart the Python script so it runs the updated code.
I have searched a lot about this issue and found some threads on GitHub and Stack Overflow, but none of them was useful to me and I got no answer from them.
My main question is: how can I restart a Python script inside the container when I change the source code and save it?
I found a way that suggests restarting the container every time, but I think there should be a simpler way, something like nodemon for JavaScript.
If you want to use watchmedo, you need to install the library for parsing YAML files with this command:
python -m pip install pyyaml
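With watchdog (the package that provides watchmedo) and pyyaml added to requirements.txt, you can then wrap the script in an auto-restart watcher from the compose file. A sketch for the crawler service above; the flags follow the watchdog documentation, so double-check them against your installed version:
crawler:
  build: crawler
  volumes:
    - ./crawler:/crawler_app
  # restart python whenever a .py file under the mounted source changes
  command: watchmedo auto-restart --directory=./ --pattern="*.py" --recursive -- python -u boot_up.py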
I'm trying to prepare a development container with Python + Flask and PostgreSQL.
Since it is a development container, it is meant to be productive, so I don't want to run a build each time I change a file. That means I can't COPY the files in the build phase; instead I mount a volume with all the source files, so that when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it is running in the container.
So far so good: with docker-compose up these containers run fine on Windows, but when I tried to run them on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work because the file doesn't exist at the build phase, so I tried changing RUN to CMD... but I still get the same error.
Any ideas why? Aren't containers supposed to help with 'works on my machine'? These files work on a Windows host, but not on a Linux host.
Is what I am doing the right approach to make file changes on the host machine reflect in the container (without a build)?
Thanks in advance!!
Below are my files:
docker-compose.yml:
version: '3'
services:
  postgres-docker:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: "Postgres2019!"
    ports:
      - "9091:5432"
    expose:
      - "5432"
    volumes:
      - volpostgre:/var/lib/postgresql/data
    networks:
      - app-network
  rest-server:
    build:
      context: ./projeto
    ports:
      - "9092:5000"
    depends_on:
      - postgres-docker
    volumes:
      - ./projeto:/app
    networks:
      - app-network
volumes:
  volpostgre:
networks:
  app-network:
    driver: bridge
and inside the projeto folder I have the following Dockerfile:
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
One of the options you can try is to override CMD in docker-compose.yml: first set the permission on the file, then execute the script.
By doing this you do not need to build the Docker image at all, since the only thing the image adds is the CMD ./start.sh.
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash -c 'chmod +x start.sh && ./start.sh'
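Alternatively, since the permission problem only affects direct execution, you could skip the execute bit entirely and run the script through the interpreter; a one-line variant of the same override:
command: bash start.sh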
I am running FastAPI via Docker by creating a service called ingestion-data in docker-compose. My Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
# Environment variable for directory containing our app
ENV APP /var/www/app
ENV PYTHONUNBUFFERED 1
# Define working directory
RUN mkdir -p $APP
WORKDIR $APP
COPY . $APP
# Install missing dependencies
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3.8'
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - .:/app
    restart: always
I am not sure why this is not picking up any change automatically when I change any endpoint of my application. I have to rebuild my images and containers every time.
Quick answer: Yes :)
In the Dockerfile, you are copying your app into /var/www/app.
The instructions from the Dockerfile are executed when you build your image (docker build -t <imgName>:<tag>).
If you change the code later on, how could the image be aware of that?
However, you can mount a volume (a directory) from your host machine into the container, right under /var/www/app, when you execute docker run / docker-compose up. You'll then be able to change the code in your local directory, and the changes will automatically be visible in the container as well.
Perhaps you want to mount the current working directory(the one containing your app) at /var/www/app?
volumes:
  - .:/var/www/app
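In the compose file from the question, that could look like the sketch below. The ./app host path is an assumption based on the build context, and the command: line only applies if your base image ships the development /start-reload.sh script (the tiangolo/uvicorn-gunicorn-fastapi image documents one); without something reloading the server, gunicorn itself won't watch for file changes:
ingestion-service:
  build:
    context: ./app
    dockerfile: Dockerfile
  command: /start-reload.sh   # assumption: dev reload script shipped with the base image
  ports:
    - "80:80"
  volumes:
    - ./app:/var/www/app   # mount the source over the image's copy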
At the moment I am trying to build a Django app that other users should be able to use as a Docker container. I want them to easily run a docker run command or start a prewritten docker-compose file to start the container.
Now I have problems with the persistence of the data. I am using the volumes key in docker-compose, for example, to bind-mount a local folder of the host into the container, where the app data and config files live inside the container. The host folder is empty on the first run, as the user has just installed Docker and is starting the docker-compose file for the first time.
As it is a bind mount, the empty folder overrides the folder in the container, as far as I understood, so the container folder containing the Django app is now empty and the app cannot start.
I searched a bit, and as far as I understood, I need to create an entrypoint.sh file that copies the app data folder into the mounted folder after startup.
Now to my questions:
Is there a best practice for copying the files via an entrypoint.sh file?
What about a second run, after 1. worked and the files already exist? How do I avoid overriding possibly changed config files with the default ones?
My example code for now:
Dockerfile
# pull official base image
FROM python:3.6
# set work directory
RUN mkdir /app
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . /app/
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# one of my attempts to make the data persistent
VOLUME /app
docker-compose.yml
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    command: python manage.py runserver 0.0.0.0:8000
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - /folder/to/app/data:/app
    networks:
      - overlay-core
networks:
  overlay-core:
    external: true
entrypoint.sh
#empty for now
You should restructure your application to store the application code and its data in different directories. Even if the data is a subdirectory of the application, that's good enough. Once you do that, you can bind-mount only the data directory and leave the application code from the image intact.
version: '3.5'
services:
  app:
    image: app:latest
    ports:
      - '8000:8000'
    volumes:
      - ./data:/app/data # not /app
There's no particular reason to put a VOLUME declaration in your Dockerfile, but you should declare the CMD your image should run there.
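Following the question's compose file, moving its command: into the image could look like this sketch appended to the Dockerfile above:
# end of the Dockerfile: bake the runtime command into the image
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]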