Docker + Python, issues with own modules

I have a project structured like this:
docker-compose.yml
database/
    models.py
    __init__.py
datajobs/
    check_data.py
    import_data.py
    tasks_name.py
workers/
    Dockerfile
    worker.py
webapp/
    (flask app)
My docker-compose.yml:
version: '2'
services:
  # Postgres database
  postgres:
    image: 'postgres:10.3'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
  # Redis message broker
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  # Flask web app
  # webapp:
  #   build: webapp/.
  #   command: >
  #     gunicorn -b 0.0.0.0:8000
  #     --access-logfile -
  #     --reload
  #     app:create_app()
  #   env_file:
  #     - '.env'
  #   volumes:
  #     - '.:/gameover'
  #   ports:
  #     - '8000:8000'
  # Celery workers to write and pull data + message APIs
  worker:
    build: ./worker
    env_file:
      - '.env'
    volumes:
      - '.:/gameover'
    depends_on:
      - redis
  beat:
    build: ./worker
    entrypoint: celery -A worker beat --loglevel=info
    env_file:
      - '.env'
    volumes:
      - '.:/gameover'
    depends_on:
      - redis
  # Flower server for monitoring celery tasks
  monitor:
    build:
      context: ./worker
      dockerfile: Dockerfile
    ports:
      - "5555:5555"
    entrypoint: flower
    command: -A worker --port=5555 --broker=redis://redis:6379
    depends_on:
      - redis
      - worker
volumes:
  postgres:
  redis:
I want to reference the database and datajobs modules in my worker, but in Docker I can't COPY from a parent directory, so I can't get those modules into the image.
I'd prefer to keep them separate like this, because the Flask app will also use these modules. Additionally, if I copied them into each folder there would be a lot of duplicate code.
So in the worker I want to do from datajobs.data_pull import get_campaigns, but this module isn't copied over in the Dockerfile, as I can't reference the parent folder.
Dockerfile in worker
FROM python:3.6-slim
MAINTAINER Gameover
# Redis variables
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
# Make worker directory, cd and copy files
ENV INSTALL_PATH /worker
RUN mkdir -p $INSTALL_PATH
WORKDIR /worker
COPY . .
# Install dependencies
RUN pip install -r requirements.txt
# Run the worker
ENTRYPOINT celery -A worker worker --loglevel=info

So, the answer to your question is pretty easy:
worker:
  build:
    context: .
    dockerfile: ./worker/Dockerfile
  env_file:
    - '.env'
  volumes:
    - '.:/gameover'
  depends_on:
    - redis
Then in your Dockerfile you can reference all of the paths and copy all of the code you need.
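For example, the worker Dockerfile could then copy the sibling packages in from the root context. This is just a sketch of the idea, not your exact file: it assumes requirements.txt lives in the worker folder, as in your original Dockerfile.
FROM python:3.6-slim
# Redis variables
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
WORKDIR /worker
# Install dependencies first so this layer is cached across code changes
COPY worker/requirements.txt .
RUN pip install -r requirements.txt
# Paths are now relative to the project root, so the shared
# packages can be copied in next to the worker code
COPY database/ ./database/
COPY datajobs/ ./datajobs/
COPY worker/ .
# Run the worker
ENTRYPOINT celery -A worker worker --loglevel=info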
There are a couple other things I notice...
COPY . .
# Install dependencies
RUN pip install -r requirements.txt
This will make you reinstall all your dependencies on every code change. Instead do
COPY requirements.txt .
# Install dependencies
RUN pip install -r requirements.txt
COPY . .
So you only reinstall them when requirements.txt changes.
Finally: when I set this kind of thing up, I generally build a single image and just override the command to get workers and beats, so that I don't have to worry about which code is in which container. My celery code uses many of the same modules as my flask app does. It will simplify your build process quite a bit... just a suggestion.
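A rough sketch of that single-image layout (service names and module layout assumed, not taken from your project):
services:
  worker:
    build: .                # one image, built from the project root
    command: celery -A worker worker --loglevel=info
    env_file:
      - '.env'
    depends_on:
      - redis
  beat:
    build: .                # same image, only the command changes
    command: celery -A worker beat --loglevel=info
    env_file:
      - '.env'
    depends_on:
      - redis
Both containers then carry the same code, including the shared database and datajobs modules.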

RUN pip install -r requirements.txt
Does the above command install the packages into the project/code folder, or directly into the pre-built Docker image of the project?
Edit: I can't comment on the above post due to reputation points.

Related

Docker-Compose Output File To Local Host

I have the below docker-compose.yaml file that sets up a database and runs a python script
version: '3.3'
services:
  db:
    image: mysql:8.0
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=test_db
      - MYSQL_ROOT_PASSWORD=xxx
    ports:
      - '3310:3310'
    volumes:
      - db:/var/lib/mysql
  py_service:
    container_name: test_py
    build: .
    command: ./main.py -r compute_init
    depends_on:
      - db
    ports:
      - 80:80
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_USER: root
      DB_PASSWORD: xxx
      DB_NAME: test_db
    links:
      - db
    volumes:
      - py_output:/app/output
volumes:
  db:
    driver: local
  py_output:
To run it I perform the following
docker-compose build
docker-compose up
docker-compose run -v /home/ubuntu/docker_directory/output:/app/output/* py_service
Here is the Dockerfile
FROM python:3.7
RUN mkdir /app
WORKDIR /app
COPY env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python3","main.py","-r","compute_init"]
Now this works fine; I can see the data has been properly populated in the generated MySQL database.
At the end of the script, the python file should dump a csv file to /app/output/output.csv (via the pandas library, df.to_csv("output/output.csv")).
My question is: how do I recover that csv from the container into the local directory?
The script seems to finish without any errors, but I can't find the output file at the end.
It seems using docker-compose run -v $(pwd)/output:/app/output py_service did the job.
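If you want a plain docker-compose up to produce the file without a separate run command, one option is to swap the named volume for a bind mount. A sketch (the host path ./output is an assumption):
py_service:
  # ... same settings as above ...
  volumes:
    - ./output:/app/output   # bind mount: output.csv lands in ./output on the host
Named volumes like py_output live inside Docker's own storage area, which is why the CSV never showed up in your local directory.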

Changes made to the flask code not reflecting in the Docker container and Multiple Image creation [duplicate]

This question already has an answer here:
How to reload my gunicorn server automatically?
(1 answer)
Closed 11 months ago.
In the flask code main.py I am using the following script
if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
The Dockerfile sets up the environment and looks like this:
Dockerfile
FROM python:3.8-slim
LABEL maintainer="nebu"
ENV GROUP_ID=1000 \
    USER_ID=1000
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN ["python", "-m", "pip", "install", "--upgrade", "pip", "wheel"]
RUN apt-get install -y python3-wheel
COPY ./requirements.txt /app/requirements.txt
RUN ["python", "-m", "pip", "install", "--no-cache-dir", "--upgrade", "-r", "/app/requirements.txt"]
COPY ./app /app
The docker-compose.yml has volumes defined; its contents are:
version: '3.8'
services:
  web:
    container_name: "flask_container"
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    environment:
      - DEPLOYMENT_TYPE=production
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - MONGODB_DATABASE=testdb
      - MONGODB_USERNAME=testuser
      - MONGODB_PASSWORD=testuser
      - MONGODB_HOSTNAME=mongo
    command: gunicorn app.main:app --workers 4 --name main -b 0.0.0.0:8000
    depends_on:
      - redis
    links:
      - mongo
  nginx:
    container_name: "nginx_container"
    restart: always
    image: nginx
    volumes:
      - ./app/nginx/conf.d:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    links:
      - web
  redis:
    container_name: "redis_container"
    image: redis:6.2.6
    ports:
      - "6379:6379"
  worker:
    container_name: "celery_container"
    build: ./
    hostname: worker
    command: "celery -A app.routes.celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
  mongo:
    container_name: "mongo_container"
    image: mongo:5.0.6-focal
    hostname: mongo
    restart: always
    ports:
      - '27017:27017'
    environment:
      MONGO_INITDB_ROOT_USERNAME: testuser
      MONGO_INITDB_ROOT_PASSWORD: testuser
      MONGO_INITDB_DATABASE: testdb
    volumes:
      - mongo-data:/data/db
      - mongo-configdb:/data/configdb
volumes:
  app:
  mongo-data:
  mongo-configdb:
I have two issues with this configuration. I am not sure if both can be asked in this single question. (Sincere apologies if they can't be asked like this.)
When I use docker-compose up --build, real-time updates of the code do not happen in the container.
Two images are created during the build process. I expected only one image, and I don't understand how two images get created. Is this due to some mistake in the configuration?
As @Klaus D suggested, reloading gunicorn will solve issue number 1. So the command in docker-compose.yml becomes
command: gunicorn app.main:app --workers 4 --name main --reload -b 0.0.0.0:8000
Thanks a lot @Klaus D

This site can’t be reached 127.0.0.1 refused to connect flask

I am trying to dockerize a Flask project with Redis and SQLite. I keep getting this error when I run the project using Docker. The project works just fine when I run it normally using python manage.py run.
Dockerfile
FROM python:3.7.2-slim
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python","manage.py run", "--host=0.0.0.0"]
docker-compose.yml
version: '3'
services:
  sqlite3:
    image: nouchka/sqlite3:latest
    stdin_open: true
    tty: true
    volumes:
      - ./db/:/root/db/
  api:
    container_name: flask-container
    build: .
    entrypoint: python manage.py run
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    volumes:
      - ./db/:/root/db/
      - ./app/main/:/app/main/
  redis:
    image: redis
    container_name: redis-container
    ports:
      - "6379:6379"
Please, what could be the problem?
Your docker-compose.yml file has several overrides that fundamentally change the way the image works. In particular, the entrypoint: line suppresses the CMD from the Dockerfile, which loses the key --host option. You also should not need volumes: to inject the application code (it's already in the image), nor should you need to manually specify container_name:.
services:
  api:
    build: .
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    # and no other settings
In the Dockerfile, your CMD has two shell words combined together. You need to split those up into separate words in the JSON-array syntax.
CMD ["python","manage.py", "run", "--host=0.0.0.0"]
# ^^^^ two words
With these two fixes, you'll be running the CMD from the image, with the code built into the image, and with the critical --host=0.0.0.0 option.

Django on Docker is starting up but browser gives empty response

For a simple app with Django, Python3, Docker on mac
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
CMD python3 manage.py runserver
COPY . /code/
docker-compose.yml
version: "3.9"
services:
# DB
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306"
expose:
# Opens port 3306 on the container
- '3307'
volumes:
- $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
# Web app
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Also, what I want is to execute the SQL the first time the container is created; after that, the database should be mounted.
volumes:
  - $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
Docker looks like it is starting, but from my browser I get this response:
localhost didn't send any data.
ERR_EMPTY_RESPONSE
What is it that I am missing? Please help.
It looks like your Django project is already running when you create the image. Since you use the command option in the docker-compose.yml file, you don't need the CMD instruction in the Dockerfile in this case.
I would rewrite the Dockerfile and docker-compose.yml as follows:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
COPY . /code/
version: "3.9"
services:
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306" # make sure django project connects to 3306 port
volumes:
- $HOME/proj/sql:/docker-entrypoint-initdb.d
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
A few things to point out.
When you run docker-compose up, you will probably see an error, because your Django project will try to run even before the db is initialised. That's natural. So you need a custom command or shell script to make the Django project wait until it can connect to the db.
In my case I would use a custom command.
version: "3.9"
services:
db:
image: mysql:8
env_file:
- .env
command:
- --default-authentication-plugin=mysql_native_password
restart: always
ports:
- "3308:3306"
web:
build: .
command: >
sh -c "python manage.py wait_for_db &&
python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8001:8000"
depends_on:
- db
env_file:
- .env
Next, wait_for_db.py. This file is what I created at myapp/management/commands/wait_for_db.py. With this you postpone the db connection until the db is ready. This SO post helped me a lot.
See Writing custom django-admin command for detail.
import time

from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Wait to connect to db until db is initialised"""

    def handle(self, *args, **options):
        start = time.time()
        self.stdout.write('Waiting for database...')
        while True:
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                time.sleep(1)
        end = time.time()
        self.stdout.write(self.style.SUCCESS(f'Database available! Time taken: {end-start:.4f} second(s)'))
Looks like you want to populate your database with an SQL file when your db container starts running. The MySQL Docker Hub page says:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So your .sql file should be located in /docker-entrypoint-initdb.d in your mysql container. See this post for more.
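If you want to keep mounting just the single dump file rather than the whole directory, the same idea should work (a sketch reusing the path from the question):
volumes:
  - $HOME/proj/sql/mydbdata.sql:/docker-entrypoint-initdb.d/mydbdata.sql
Keep in mind these init scripts only run when the container starts with an empty data directory, i.e. on first initialisation.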
Last but not least, your db is lost when you run docker-compose down, since you don't have any volumes other than the sql file. If that's not what you want, you might want to consider the following:
version: "3.9"
services:
db:
...
volumes:
- data:/var/lib/mysql
...
volumes:
data:

Applying changes in django/docker files

I'm new to development with Django and Docker, and I have a problem when I change a file in the project. My problem is as follows:
I make changes to the content of any file in the Django project (template, view, urls) but they do not show up in my currently running app. Whenever I want to see my changes I need to restart the server (I'm using nginx) by doing docker-compose up.
Is there a package I should install or a change I should make so that the app picks up changes at runtime?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know if there is any other information I can provide to give a better picture of the problem (if it is not clear enough).
version: '3'
services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1
  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp
networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge
volumes:
  database1_volume:
  static:
  media:
This is pretty simple. What happens here now:
You have the Dockerfile, and you COPY your current folder (at the time you build the image) into the container. So while the container is running it does NOT sync with your host (current working folder) if you change something on the host after creating the container.
If you want to sync your host with the container, you have to mount it as a volume: either with -v for a single container, or with volumes in docker-compose.
docker run -v /host/directory:/container/directory <image>
docker run -v "$(pwd)":/opt/services/djangoapp/src <image>
or using docker-compose if you have multiple containers
version: '3'
services:
  web-service:
    build: .          # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      # - ./:/opt/services/djangoapp/src
