I have a simple script like
print('hey 01')
and I have dockerized it as shown below:
FROM python:3.8
WORKDIR /crawler_app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "-u", "boot_up.py"]
and I also have a compose file, shown below:
version: "3.7"
services:
db:
image: mongo:latest
container_name: ${DB_CONTAINER_NAME}
volumes:
- ./mongo-volume:/data/db
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_USER_PASS}
ports:
- ${EXPOSED_PORT}:27017
networks:
- crawl-network
crawler:
build: crawler
container_name: ${CRAWLER_APP_NAME}
restart: always
volumes:
- ./crawler:/crawler_app
networks:
- crawl-network
depends_on:
- db
networks:
crawl-network:
The Problem
The problem is that although I have a volume mounted into my Docker container, so that when I change code in my editor and save it the source code inside the container updates, there is no way to restart the Python script so it runs the updated code.
I have searched a lot about this issue and found some threads on GitHub and Stack Overflow, but none of them were useful to me and I got no answer from them.
My main question is: how can I restart a Python script inside a container when I change the source code and save it?
I found a way that suggests restarting the container every time, but I think there should be a simpler way, something like nodemon for JavaScript.
If you want to use watchmedo (the file-watching CLI that ships with the watchdog package), you also need to install a library for parsing YAML files:
python -m pip install pyyaml
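With watchdog and PyYAML installed, you can wrap the script in watchmedo auto-restart so the process restarts whenever a .py file in the mounted volume changes, much like nodemon. Here is a minimal sketch against the compose file above; the flag names are as I recall them from the watchdog docs, so double-check with watchmedo auto-restart --help, and remember to add watchdog and pyyaml to requirements.txt so they are installed in the image:
crawler:
  build: crawler
  volumes:
    - ./crawler:/crawler_app
  # override the image CMD: watch the mounted source tree and
  # restart the script whenever a .py file changes
  command: watchmedo auto-restart --directory=/crawler_app --pattern="*.py" --recursive -- python -u boot_up.py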
Related
I am trying to dockerize a Flask project with Redis and SQLite. I keep getting this error when I run the project using Docker. The project works just fine when I run it normally using python manage.py run
Dockerfile
FROM python:3.7.2-slim
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python","manage.py run", "--host=0.0.0.0"]
docker-compose.yml
version: '3'
services:
sqlite3:
image: nouchka/sqlite3:latest
stdin_open: true
tty: true
volumes:
- ./db/:/root/db/
api:
container_name: flask-container
build: .
entrypoint: python manage.py run
env_file:
- app/main/.env
ports:
- '5000:5000'
volumes:
- ./db/:/root/db/
- ./app/main/:/app/main/
redis:
image: redis
container_name: redis-container
ports:
- "6379:6379"
What could be the problem?
Your docker-compose.yml file has several overrides that fundamentally change the way the image works. In particular, the entrypoint: line suppresses the CMD from the Dockerfile, which loses the key --host option. You also should not need volumes: to inject the application code (it's already in the image), nor should you need to manually specify container_name:.
services:
api:
build: .
env_file:
- app/main/.env
ports:
- '5000:5000'
# and no other settings
In the Dockerfile, your CMD has two shell words combined together. You need to split those up into separate words in the JSON-array syntax.
CMD ["python","manage.py", "run", "--host=0.0.0.0"]
# ^^^^ two words
With these two fixes, you'll be running the CMD from the image, with the code built into the image, and with the critical --host=0.0.0.0 option.
I'm trying to prepare a development container with Python + Flask and Postgres.
Since it is a development container, it is meant to be productive, so I don't want to run a build each time I change a file. That means I can't COPY the files in the build phase; instead I mount a volume with all the source files, so that when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it is running in the container.
So far so good: running docker-compose up, these containers run fine on Windows, but when I tried to run them on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work because the file doesn't exist at build time, so I tried changing RUN to CMD... but I still get the same error.
Any ideas why? Aren't containers supposed to help with 'works on my machine'? These files work on a Windows host, but not on a Linux host.
Is what I am doing the right approach to make file changes on the host machine reflect in the container (without a rebuild)?
Thanks in advance!!
Below are my files:
docker-compose.yml:
version: '3'
services:
postgres-docker:
image: postgres:9.6
environment:
POSTGRES_PASSWORD: "Postgres2019!"
ports:
- "9091:5432"
expose:
- "5432"
volumes:
- volpostgre:/var/lib/postgresql/data
networks:
- app-network
rest-server:
build:
context: ./projeto
ports:
- "9092:5000"
depends_on:
- postgres-docker
volumes:
- ./projeto:/app
networks:
- app-network
volumes:
volpostgre:
networks:
app-network:
driver: bridge
and inside projeto folder I got the following Dockerfile
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
One option you can try is to override CMD in docker-compose.yml to first set the execute permission on the file and then run the script.
By doing this you do not need to build a Docker image at all, since the only thing the image adds is the CMD ./start.sh:
webapp:
image: python:3.8.5
volumes:
- $PWD/:/app
working_dir: /app
command: bash -c 'chmod +x start.sh && ./start.sh'
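Alternatively, since start.sh is only ever run through a shell, you can invoke it via the interpreter, which sidesteps the execute bit entirely; a minimal variant of the same idea:
# bash reads the script as an argument, so no +x bit is needed
command: bash start.sh
A likely cause of the original error is that the execute bit was never set on the file as seen by the Linux host (for example, if the repository was created on Windows, where the bit is not tracked by default), so the bind-mounted script arrives without +x.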
I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker swarm. I have done Docker swarm tutorials and seen web apps running across multiple nodes, but I have never built anything independently. I am able to run docker-compose up with no issues, but I am struggling with swarm.
My docker-compose.yml looks like
version: '3.3'
services:
dynamodb:
image: "amazon/dynamodb-local"
ports:
- "8000:8000"
track-count:
image: "my-app"
links:
- "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My questions are:
1) Why is the deploy ignoring the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether it will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command-line output be shown, so I can confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
The updated docker-compose.yaml should look like this:
version: '3.3'
services:
dynamodb:
image: "amazon/dynamodb-local"
ports:
- "8000:8000"
track-count:
image: "my-app"
tty: true
links:
- "dynamodb:localhost"
and then once you have the stack deployed, to check the service logs you can run:
# get the service name
docker stack services <STACK_NAME>
# follow the service logs
docker service logs --follow --raw <SERVICE_NAME>
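For the stack in the question, the generated service name is the stack name plus the service key, so going by the Creating service trial_stack_track-count line in the output above, a concrete invocation would be:
docker service logs --follow --raw trial_stack_track-count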
I have to run a simple service on Docker Compose. The first image hosts the previously created service, while the second image, which depends on the first, runs the tests. So I created this Dockerfile:
FROM python:2.7-slim
WORKDIR /flask
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "routes.py"]
Everything works. I created some simple tests, which also work, and placed the file in the same directory as routes.py.
Then I tried to create a docker-compose.yml file and did something like this:
version: '2'
services:
app:
build: .
command: 'python MyTest.py'
ports:
- "5000:5000"
tests:
build:
context: Mytest.py
depends_on:
- app
When I run it, I receive an error:
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
So how should I specify this directory, and where should I place it, in the app or the tests service?
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
The error above tells you that context: should be the folder containing your Dockerfile; but since it seems you can use the same image to test your product, I think there is no need to specify it.
And I guess your MyTest.py will hit port 5000 of your app container to run the test. So what you need is this:
version: '2'
services:
app:
build: .
container_name: my_app
ports:
- "5000:5000"
tests:
build: .
depends_on:
- app
command: python MyTest.py
Here, what you need to pay attention to is: in MyTest.py you should visit http://my_app:5000 for your test.
Meanwhile, I suggest you sleep for some time in MyTest.py, because depends_on can only ensure that tests starts after app; it cannot guarantee that your Flask app is already ready at that point. You can also use a wait script to enforce the order, as sketched below.
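A hedged sketch of that wait approach, assuming you add the wait-for-it.sh script (from the vishnubob/wait-for-it project) to the image next to MyTest.py:
tests:
  build: .
  depends_on:
    - app
  # block until the app container accepts TCP connections, then run the tests
  command: sh -c './wait-for-it.sh my_app:5000 --timeout=30 -- python MyTest.py'
This waits until the app container actually accepts connections on port 5000 before the tests start, which is more reliable than a fixed sleep.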
You need to specify the dockerfile field, as you are using a version-2 Compose file.
Check this out.
Modify your build command:
...
build:
context: .
dockerfile: Dockerfile
...
I have a Django application with a model. I have a manage.py command that creates n model instances and saves them to the db. It runs at a decent speed on my host machine.
But if I run it in Docker it runs very slowly: one instance is created and saved in 40-50 seconds. I think I am missing something about how Docker works; can somebody point out why performance is so low and what I can do about it?
docker-compose.yml:
version: '2'
services:
db:
restart: always
image: "postgres:9.6"
ports:
- "5432:5432"
volumes:
- /usr/local/var/postgres:/var/lib/postgresql
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=my_db
- POSTGRES_USER=postgres
web:
build: .
command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
dockerfile for web service:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . .
WORKDIR .
RUN pip install -r requirements.txt
RUN chmod +x wait-for-it.sh
The problem here is most likely the volume /usr/local/var/postgres:/var/lib/postgresql, since you are using it on a Mac. As I understand the Docker for Mac solution, it uses file sharing to implement host volumes, which is a lot slower than native filesystem access.
A possible workaround is to use a docker volume instead of a host volume. Here is an example:
version: '2'
volumes:
postgres_data:
services:
db:
restart: always
image: "postgres:9.6"
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=my_db
- POSTGRES_USER=postgres
web:
build: .
command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
Please note that this may complicate management of the Postgres data, as you can't simply access the data from your Mac; you can only use the Docker CLI or containers to access, modify, and back up this data. Also, I'm not sure what happens if you uninstall Docker from your Mac; it may be that you lose this data.
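If you do need to get at the data, a common pattern is to mount the named volume into a throwaway container and archive it to the host. A minimal sketch, where the alpine image and the archive path are my assumptions; note that Compose prefixes volume names with the project name, e.g. myproject_postgres_data:
# mount the named volume read-only-ish at /data, the current host dir at /backup,
# and tar the volume contents into the host directory
docker run --rm -v postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres_data.tar.gz -C /data .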
Two things could be the probable cause:
Starting a Docker container takes some time, so if you start a new container for each instance, this can add up.
What storage driver do you use? Docker (often) defaults to the devicemapper loopback storage driver, which is slow. Here is some context. This will be painful, especially if you start this container often.
Other than that your config looks sensible, and there are no obvious problems there. So if the above two points don't apply to you, please add some extra details, for example how you actually add these model instances.
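To rule the storage-driver point in or out, you can ask the Docker daemon which driver it is using; a quick check (overlay2 is the modern default you would hope to see):
# prints e.g. "Storage Driver: overlay2"
docker info | grep 'Storage Driver'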