How to change chdir in python3 with command-line args - python

How can I change the working directory in Python 3 with a command-line argument, like
python3 project_name/manage.py --chdir=/project_name
I'm actually trying to set up Docker to run a Flask app with the following settings:
version: '3'
services:
  flask:
    build: .
    restart: always
    container_name: 'project_name'
    command: python3 /project_name/manage.py --chdir=/project_name
    # command: gunicorn --bind 0.0.0.0:8000 --worker-class=gevent --workers=4 --chdir /flask --reload wsgi:app
    volumes:
      - .:/flask
    ports:
      - '8000:8000'
    environment:
      - PATH=/project_name:$PATH
After all that, Python says it cannot find the db file. I know why: the working directory is not the root of the project but a folder under the project dir.

The simplest fix is to change the working directory in the shell before invoking the script:
bash -c "cd /project_name && python3 manage.py"
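manage.py has no built-in --chdir flag the way gunicorn does, but the script can handle one itself. A minimal sketch using only the standard library (the helper name parse_chdir is my own, not part of any framework):

```python
import argparse
import os

def parse_chdir(argv):
    """Consume a --chdir flag, change directory, and return the remaining args."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--chdir", default=None)
    args, remaining = parser.parse_known_args(argv)
    if args.chdir:
        # Relative paths (e.g. a SQLite db file) now resolve from this directory
        os.chdir(args.chdir)
    return remaining

# Strip the flag before handing the remaining arguments to the framework
rest = parse_chdir(["--chdir", "/tmp", "runserver"])
print(rest)  # ['runserver']
```

parse_known_args leaves any arguments it doesn't recognize untouched, so the framework's own CLI handling still works on what's left.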

Related

Docker image ran for Django but cannot access dev server url

Working on containerizing my server. I believe the build succeeds, and when I run docker-compose my development server appears to run, but when I try to visit the associated dev server URL:
http://0.0.0.0:8000/
I get a page with the error:
This site can't be reached. The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address.
These are the settings on my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR C:/Users/15512/Desktop/django-project/peerplatform
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000", "--settings=signup.settings"]
This is my docker-compose.yml file:
version: "3.8"
services:
  redis:
    restart: always
    image: redis:latest
    ports:
      - "49153:6379"
  pairprogramming_be:
    restart: always
    depends_on:
      - redis
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    env_file:
      - ./signup/.env
      - ./payments/.env
      - ./.env
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "8000:8001"
    container_name: "pairprogramming_be"
    volumes:
      - "C:/Users/15512/Desktop/django-project/peerplatform://pairprogramming_be"
    working_dir: "/C:/Users/15512/Desktop/django-project/peerplatform"
This is the .env file:
DEBUG=1
DJANGO_ALLOWED_HOSTS=0.0.0.0
FYI: the redis image runs successfully. This is what I have tried:
I tried changing the allowed hosts to localhost and 127.0.0.1
I tried running the command python manage.py runserver and eventually added 0.0.0.0:8000
When I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' pairprogramming_be I get a blank response; my docker container doesn't appear to have an IP address
Where is the 8001 port taken from? The second value in a ports: mapping is the internal (container-side) listening port. Since you set your application (inside Docker) to listen on 8000, you should map container port 8000 to whatever host port you like.
Just change the compose file to:
ports:
- "8000:8000"

Changes made to the flask code not reflecting in the Docker container and Multiple Image creation [duplicate]

This question already has an answer here:
How to reload my gunicorn server automatically?
(1 answer)
Closed 11 months ago.
In the flask code main.py I am using the following script
if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
The docker file sets up the environment and it looks like
Dockerfile
FROM python:3.8-slim
LABEL maintainer="nebu"
ENV GROUP_ID=1000 \
USER_ID=1000
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN ["python", "-m", "pip", "install", "--upgrade", "pip", "wheel"]
RUN apt-get install -y python3-wheel
COPY ./requirements.txt /app/requirements.txt
RUN ["python", "-m", "pip", "install", "--no-cache-dir", "--upgrade", "-r", "/app/requirements.txt"]
COPY ./app /app
The docker-compose.yml contains volumes defined and contents are
version: '3.8'
services:
  web:
    container_name: "flask_container"
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    environment:
      - DEPLOYMENT_TYPE=production
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - MONGODB_DATABASE=testdb
      - MONGODB_USERNAME=testuser
      - MONGODB_PASSWORD=testuser
      - MONGODB_HOSTNAME=mongo
    command: gunicorn app.main:app --workers 4 --name main -b 0.0.0.0:8000
    depends_on:
      - redis
    links:
      - mongo
  nginx:
    container_name: "nginx_container"
    restart: always
    image: nginx
    volumes:
      - ./app/nginx/conf.d:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    links:
      - web
  redis:
    container_name: "redis_container"
    image: redis:6.2.6
    ports:
      - "6379:6379"
  worker:
    container_name: "celery_container"
    build: ./
    hostname: worker
    command: "celery -A app.routes.celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
  mongo:
    container_name: "mongo_container"
    image: mongo:5.0.6-focal
    hostname: mongo
    restart: always
    ports:
      - '27017:27017'
    environment:
      MONGO_INITDB_ROOT_USERNAME: testuser
      MONGO_INITDB_ROOT_PASSWORD: testuser
      MONGO_INITDB_DATABASE: testdb
    volumes:
      - mongo-data:/data/db
      - mongo-configdb:/data/configdb
volumes:
  app:
  mongo-data:
  mongo-configdb:
I have two issues with this configuration. I am not sure if both can be asked in this single question (sincere apologies if not).
When I use docker-compose up --build, real-time updates of the code are not reflected in the container.
Two images are created during the build process. I expected only one image, and I don't understand how two are created. Is this due to some mistake in the configuration?
As @Klaus D suggested, reloading gunicorn solves issue 1, so the command in docker-compose.yml becomes
command: gunicorn app.main:app --workers 4 --name main --reload -b 0.0.0.0:8000
Thanks a lot @Klaus D
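Issue 2, the two images, is expected Compose behavior rather than a configuration mistake: Compose builds and tags one image per service that has a build: key, and here both web and worker build from ./. If a single shared image is preferred, one option is to tag the image on one service and reuse it on the other (a sketch; the tag name flaskapp is an assumption):

```yaml
services:
  web:
    build: ./
    image: flaskapp   # tag the image built for this service
  worker:
    image: flaskapp   # reuse the same tag instead of building again
```

With classic docker-compose, run docker-compose build web first so the tag exists before the worker service starts.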

Python Celery trying to occupy a port number in docker-compose and creating problems

docker-compose.yml:
python-api: &python-api
build:
context: /Users/AjayB/Desktop/python-api/
ports:
- "8000:8000"
networks:
- app-tier
expose:
- "8000"
depends_on:
- python-model
volumes:
- .:/python_api/
environment:
- PYTHON_API_ENV=development
command: >
sh -c "ls /python-api/ &&
python_api_setup.sh development
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
python-model: &python-model
build:
context: /Users/AjayB/Desktop/Python/python/
ports:
- "8001:8001"
networks:
- app-tier
environment:
- PYTHON_API_ENV=development
expose:
- "8001"
volumes:
- .:/python_model/
command: >
sh -c "ls /python-model/
python_setup.sh development
cd /server/ &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8001"
python-celery:
<<: *python-api
environment:
- PYTHON_API_ENV=development
networks:
- app-tier
links:
- redis:redis
depends_on:
- redis
command: >
sh -c "celery -A server worker -l info"
redis:
image: redis:5.0.8-alpine
hostname: redis
networks:
- app-tier
expose:
- "6379"
ports:
- "6379:6379"
command: ["redis-server"]
python-celery extends python-api but should run as a separate container. Yet it is trying to occupy the same port as python-api, which should never be the case.
The error that I'm getting is:
AjayB$ docker-compose up
Creating integrated_redis_1 ... done
Creating integrated_python-model_1 ... done
Creating integrated_python-api_1 ...
Creating integrated_python-celery_1 ... error
Creating integrated_python-api_1 ... done
ERROR: for python-celery Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
on doing docker ps -a, I get this:
AjayB$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ff1277fb7a7 integrated_python-celery "sh -c 'celery -A se…" 10 seconds ago Created integrated_python-celery_1
5b60221b42a4 integrated_python-api "sh -c 'ls /crackd-a…" 11 seconds ago Up 9 seconds 0.0.0.0:8000->8000/tcp integrated_python-api_1
bacd8aa3268f integrated_python-model "sh -c 'ls /crackd-m…" 12 seconds ago Exited (2) 10 seconds ago integrated_python-model_1
9fdab833b436 redis:5.0.8-alpine "docker-entrypoint.s…" 12 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp integrated_redis_1
I tried force-removing the containers and running docker-compose up again, but I get the same error. :/ Where am I making a mistake?
I'm just doubtful about the volumes: section. Can anyone please tell me if volumes is correct?
And please help me with this error. PS: first try with Docker.
Thanks!
This is because you re-use the full config of python-api, including the ports: section, which publishes port 8000 (by the way, expose: is redundant since your ports: section already exposes the port).
I would create a common section that can be reused by other services. In your case, it would be something like this:
version: '3.7'
x-common-python-api:
  &default-python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  volumes:
    - .:/python_api/
services:
  python-api:
    <<: *default-python-api
    ports:
      - "8000:8000"
    depends_on:
      - python-model
    command: >
      sh -c "ls /python-api/ &&
      python_api_setup.sh development
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
  python-model: &python-model
    .
    .
    .
  python-celery:
    <<: *default-python-api
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"
  redis:
    .
    .
    .
There is a lot in that docker-compose.yml file, but much of it is unnecessary. expose: in a Dockerfile does almost nothing; links: aren't needed with the current networking system; Compose provides a default network for you; your volumes: try to inject code into the container that should already be present in the image. If you clean all of this up, the only part that you'd really want to reuse from one container to the other is its build: (or image:), at which point the YAML anchor syntax is unnecessary.
This docker-compose.yml should be functionally equivalent to what you show in the question:
version: '3'
services:
  python-api:
    build:
      context: /Users/AjayB/Desktop/python-api/
    ports:
      - "8000:8000"
    # No networks:, use `default`
    # No expose:, use what's in the Dockerfile (or nothing)
    depends_on:
      - python-model
    # No volumes:, use what's in the Dockerfile
    # No environment:, this seems to be a required setting in the Dockerfile
    # No command:, use what's in the Dockerfile
  python-model:
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
  python-celery:
    build: # copied from python-api
      context: /Users/AjayB/Desktop/python-api/
    depends_on:
      - redis
    command: celery -A server worker -l info # one line, no sh -c wrapper
  redis:
    image: redis:5.0.8-alpine
    # No hostname:, it doesn't do anything
    ports:
      - "6379:6379"
    # No command:, use what's in the image
Again, notice that the only thing we've actually copied from the python-api container to the python-celery container is the build: block; all of the other settings that would be shared across the two containers (code, exposed ports) are included in the Dockerfile that describes how to build the image.
The flip side of this is that you need to make sure all of these settings are in fact included in your Dockerfile:
# Copy the application code in
COPY . .
# Set the "development" environment variable
ENV PYTHON_API_ENV=development
# Document which port you'll use by default
EXPOSE 8000
# Specify the default command to run
# (Consider writing a shell script with this content instead)
CMD python_api_setup.sh development && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000

Docker-compose Django Supervisord Configuration

I would like to run some other programs alongside my Django application; that's why I chose supervisord. I configured my docker-compose and Dockerfile like this:
Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
# some of project settings here
ADD supervisord.conf /etc/supervisord.conf
ADD supervisor-worker.conf /etc/supervisor/conf.d/
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisord.conf"]
docker-compose:
api:
  build: .
  command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
  restart: unless-stopped
  container_name: project
  volumes:
    - .:/project
  ports:
    - "8000:8000"
  network_mode: "host"
supervisord.conf
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
[supervisorctl]
[inet_http_server]
port=*:9001
username=root
password=root
My problem is that when I bring the docker-compose project up, the other dependencies (PostgreSQL, Redis) work fine, but supervisord doesn't run. When I run the supervisord command inside the container it works; it just doesn't run at startup.
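A likely cause (a guess; the thread gives no answer): the command: line in docker-compose overrides the image's CMD, so supervisord is never launched at startup and only works when started by hand. Removing command: lets the Dockerfile's supervisord CMD run, and the Django server can then be moved into a supervisor program, e.g. in supervisor-worker.conf (the program name and directory are assumptions):

```ini
[program:django]
command=python manage.py runserver 0.0.0.0:8000
directory=/project
autorestart=true
```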

Running Wagtail site with Docker

I'm trying to convert an existing Wagtail site to run with Docker. I have built the image and run the container, but I'm unable to connect in the browser window; I get "0.0.0.0 didn't send any data. ERR_EMPTY_RESPONSE".
Dockerfile:
FROM python:3.6
RUN mkdir /app
WORKDIR /app
COPY ./ /app
RUN pip install --no-cache-dir -r /app/requirements/base.txt
RUN mkdir -p -m 700 /app/static
RUN mkdir -p -m 700 /app/media
ENV DJANGO_SETTINGS_MODULE=mysite.settings.dev DJANGO_DEBUG=on
ENV SECRET_KEY=notsosecretkey
ENV DATABASE_URL=postgres://none
ENV SENDGRID_KEY=sendgridkey
EXPOSE 8080
RUN chmod +x /app/entrypoint.sh \
&& chmod +x /app/start-app.sh
RUN python manage.py collectstatic --noinput
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["/app/start-app.sh"]
docker-compose.yml:
version: '2'
services:
  db:
    environment:
      POSTGRES_DB: app_db
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: changeme
    restart: always
    image: postgres:9.6
    expose:
      - "5432"
    ports:
      - "5432:5432"
  app:
    container_name: mysite_dev
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - db
    links:
      - db:db
    volumes:
      - .:/app
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app_user:changeme@db/app_db
    command: python manage.py runserver 0.0.0.0:8080
entrypoint.sh:
#!/bin/sh
set -e
exec "$@"
start-app.sh:
#!/bin/sh
python manage.py runserver 0.0.0.0:8080
dev.py settings file:
from .base import *

DEBUG = True

for template_engine in TEMPLATES:
    template_engine['OPTIONS']['debug'] = True

ALLOWED_HOSTS = (
    '0.0.0.0',
)

DATABASES = {
    'default': db_cache_url.config('DATABASE_URL', extra={'CONN_MAX_AGE': 0, 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'mysite', 'HOST': 'localhost', 'PORT': '5432'}),
}

STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
WSGI_APPLICATION = 'mysite.wsgi.application'

try:
    from .local import *
except ImportError:
    pass
I run docker run -p 8080:8080 app:latest, and docker ps shows it at 0.0.0.0:8080->8080/tcp, but when I go to 0.0.0.0:8080 in the browser window I get the error. If I remove the DATABASES setting, I see Django errors loading, but then I get settings.DATABASES is improperly configured. Please supply the NAME value., so I think it needs to be there. What am I missing?
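Two things worth checking (guesses, not from the thread): a bare docker run without the db container has no database to reach at all, and inside the Compose network the Postgres host is the service name db, not localhost, so the URL must be postgres://app_user:changeme@db/app_db. A quick stdlib check of how such a URL splits into the pieces Django's DATABASES needs (the helper split_db_url is my own, not part of db_cache_url):

```python
from urllib.parse import urlparse

def split_db_url(url):
    """Split a postgres:// URL into the fields Django's DATABASES expects (sketch)."""
    parts = urlparse(url)
    return {
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,  # inside Compose this must be the service name, e.g. "db"
        "PORT": parts.port or 5432,
    }

print(split_db_url("postgres://app_user:changeme@db/app_db"))
```

If HOST comes out as anything other than the db service name (or NAME is empty), the URL is the piece to fix before digging into Docker networking.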
