Every time I run the debugger, a lot happens, but not what I expect.
I'm running the project with docker-compose up.
When I check localhost to see if the backend is okay, it's down.
The funny thing is that the container is running, because I'm attached to it with VS Code's Remote Containers.
The debugpy library is installed.
My first attempt to run the debugger ends with this output in the debug console:
Attached!
System check identified some issues:
WARNINGS:
workflow.State.additional_values: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
Operations to perform:
Apply all migrations: accounts, auth, contenttypes, files, mambu, otp_totp, sessions, token_blacklist, workflow, zoho
Running migrations:
No migrations to apply.
and then it's down. The backend is also down.
Second try:
Attached!
System check identified some issues:
WARNINGS:
workflow.State.additional_values: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
Zoho Configuration failed, check that you have all variables ZOHO_TOKEN_URL, ZOHO_REST_API_KEY, ZOHO_CURRENT_USER_EMAIL
and it's down, but the backend is up: I'm able to log in, etc.
The third try ends with this error: connect ECONNREFUSED 127.0.0.1:5678.
Any tips?
Code:
manage.py
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def initialize_debugger():
    import debugpy

    debugpy.listen(("0.0.0.0", 5678))
    debugpy.wait_for_client()
    print('Attached!')


def main():
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "xxx.settings")
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == "__main__":
    initialize_debugger()
    main()
The local docker-compose.yml
version: "3.2"
services:
backend:
container_name: xxx
build:
context: ./backend
dockerfile: ../build/backend.Dockerfile
volumes:
- ./backend:/opt/app
command: ./run.sh
ports:
- "8000:8000"
- "5678:5678"
env_file:
- build/.env-local
links:
- db:db
- rabbit:rabbit
- memcached:memcached
celery:
container_name: xxx
restart: always
build:
dockerfile: ../build/backend.Dockerfile
context: ./backend
command: ./run_celery.sh
env_file:
- build/.env-local
working_dir: /opt/app/
volumes:
- ./backend/:/opt/app
links:
- db:db
- rabbit:rabbit
frontend:
container_name: xxx
build:
context: frontend
dockerfile: ../build/frontend.Dockerfile
environment:
- BROWSER=none
- CI=true
volumes:
- ./frontend/src/:/frontend/src
- ./frontend/public/:/frontend/public
nginx:
container_name: xxx
build:
dockerfile: build/nginx.Dockerfile
context: .
args:
REACT_APP_GOOGLE_ANALYTICS_TOKEN: $REACT_APP_GOOGLE_ANALYTICS_TOKEN
REACT_APP_PAGESENSE_LINK: $REACT_APP_PAGESENSE_LINK
REACT_APP_CHATBOT_TOKEN: $REACT_APP_CHATBOT_TOKEN
REACT_APP_SENTRY_DSN: $REACT_APP_SENTRY_DSN
REACT_APP_SENTRY_ENVIRONMENT: $REACT_APP_SENTRY_ENVIRONMENT
REACT_APP_SENTRY_TRACES_SAMPLE_RATE: $REACT_APP_SENTRY_TRACES_SAMPLE_RATE
REACT_APP_THIRD_PARTY_API_URL: $REACT_APP_THIRD_PARTY_API_URL
ports:
- "5000:80"
depends_on:
- backend
- frontend
env_file:
- build/.env-local
volumes:
- ./build/nginx/nginx.conf:/etc/nginx.conf
db:
container_name: xxx
image: postgres:12
ports:
- "5432:5432"
restart: on-failure
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
rabbit:
container_name: xxx
image: rabbitmq
ports:
- "5672:5672"
memcached:
container_name: xxx
image: memcached
ports:
- "11211:11211"
restart: always
flower:
image: mher/flower:0.9.5
environment:
- CELERY_BROKER_URL=amqp://xxx-rabbitmq//
- FLOWER_PORT=8888
ports:
- 8888:8888
and the launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "CF: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/backend",
                    "remoteRoot": "/opt/app/"
                }
            ],
            "django": true
        }
    ]
}
Django doesn't support debugging on its own.
This is what I found after a couple of minutes of searching; it might help you.
There could be many reasons why debugging does not work as intended, so troubleshooting is usually the reasonable thing to do: start from something simple and add complexity until you figure out which step is not working as intended. I would recommend starting with a simple debugging session using pdb before adding the VS Code complexity. To do that, you just need to add a breakpoint() in your backend code where you want to debug. In your docker-compose.yaml, add the following additional configuration to your backend service:
services:
  backend:
    tty: true
    stdin_open: true
In one terminal, start your application with docker-compose up. Open a second terminal and attach to your container with docker attach <project name>_backend. You should normally get a (Pdb) prompt at the location where your breakpoint was hit.
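As an illustration, here is a minimal sketch of where breakpoint() could go in a Django view; the view name and payload are hypothetical, only the breakpoint() call matters:

# views.py (hypothetical) - execution pauses at breakpoint() and the (Pdb) prompt
# appears in the terminal attached with `docker attach`
from django.http import JsonResponse

def health_check(request):
    payload = {"ok": True}
    breakpoint()  # inspect `request` and `payload` here, then continue with `c`
    return JsonResponse(payload)

From the (Pdb) prompt you can print variables, step with n / s, and continue with c.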
Based on your description, here are the points I would investigate.
debugpy installation
Make sure debugpy is installed in the Docker image and not locally.
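If in doubt, a quick check is to start Python inside the running container (for example with docker-compose exec backend python, assuming the service is named backend as in your compose file) and try the import:

# run inside the backend container; a ModuleNotFoundError here means debugpy
# is only installed on the host, not baked into the image
import debugpy
print(debugpy.__version__)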
WSGI HTTP server
I presume you're using python manage.py runserver 0.0.0.0:8000 to start the WSGI HTTP server. Just in case you're using something like gunicorn, it's worth mentioning that you should only use 1 worker. As an example, if using gunicorn, you can provide the amount of workers at the command line: gunicorn --workers=1 --timeout=1200 --bind 0.0.0.0:8000 your_application.wsgi:application.
Note also the huge timeout. You might want to set a high value both for your WSGI HTTP server and for Nginx. If one of them times out while you're debugging, you will get a 502 or 504 error depending on which one timed out first and your debugging session will terminate.
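If you do end up using gunicorn, those two settings (a single worker and a long timeout) can also live in a small Python config file instead of the command line. A sketch, assuming you pass it with gunicorn -c gunicorn.conf.py ...; the file name and values are only examples:

# gunicorn.conf.py - one worker so the debugger always lands in the same process,
# and a very generous timeout so gunicorn does not kill the worker mid-debugging
bind = "0.0.0.0:8000"
workers = 1
timeout = 1200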
debugpy location
I usually place the code importing debugpy in wsgi.py, right before the call to get_wsgi_application()
"""
WSGI config for {{ project_name }} project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/{{ docs_version }}/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', '{{ project_name }}.settings')
import debugpy
debugpy.listen(('0.0.0.0', 5678))
debugpy.wait_for_client()
print('Attached!')
application = get_wsgi_application()
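A variation worth considering, so the app does not block on wait_for_client() on every start: only enable debugpy when an environment variable is set. This is just a sketch; the DEBUGPY variable name is an example, not something debugpy itself defines:

# wsgi.py - opt-in debugging via an environment variable
import os

if os.environ.get("DEBUGPY") == "1":
    import debugpy
    debugpy.listen(("0.0.0.0", 5678))
    debugpy.wait_for_client()
    print('Attached!')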
Related
I am trying to run my Django application, which involves Celery, using Docker. I am able to set everything up locally and it works perfectly fine. However, when I run it in Docker and my task gets executed, it throws the following error:
myapp.models.mymodel.DoesNotExist: mymodel matching query does not exist.
I am quite new to Celery and Docker, so I'm not sure what I'm doing wrong.
Celery is set up correctly, I have made sure of that. Following are the broker_url and backend:
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'django-db'
This is my docker-compose.yml file:
version: "3.8"
services:
redis:
image: redis:alpine
container_name: rz01
ports:
- "6379:6379"
networks:
- npm-nw
- braythonweb-network
braythonweb:
build: .
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
gunicorn braython.wsgi:application -b 0.0.0.0:8000 --workers=1 --timeout 10000"
volumes:
- .:/code
ports:
- "8000:8000"
restart: unless-stopped
env_file: .env
networks:
- npm-nw
- braythonweb-network
celery:
build: .
restart: always
container_name: cl01
command: celery -A braython worker -l info
depends_on:
- redis
networks:
- npm-nw
- braythonweb-network
networks:
braythonweb-network:
npm-nw:
external: false
I have tried a few things from different Stack Overflow posts, like apply_async. I have also made sure that my model exists.
Update: on further investigation I noticed that the Celery task does not get created in the database in the first place. I don't know why; maybe I have to replace the following with something else:
CELERY_RESULT_BACKEND = 'django-db'
The exception is telling you that you are looking for an entry in your database that does not exist (yet). Look for any function where you query the database and make sure you create the needed entry before looking for it. I'm assuming you have a table in your database for some configuration that is read in a function, but the database is empty at the beginning.
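A related Celery pitfall is dispatching the task inside a transaction, so the worker queries the row before it is committed. Here is a minimal sketch of making the task defensive; it reuses the mymodel name from the question, but the task and field names are hypothetical:

# tasks.py - retry briefly instead of crashing if the row is not visible yet
from celery import shared_task
from myapp.models import mymodel

@shared_task(bind=True, max_retries=3, default_retry_delay=5)
def process_entry(self, entry_id):
    try:
        entry = mymodel.objects.get(pk=entry_id)
    except mymodel.DoesNotExist as exc:
        # the row may simply not be committed yet; try again shortly
        raise self.retry(exc=exc)
    # ... do the actual work with entry ...
    return entry.pk

Alternatively, dispatching with transaction.on_commit(lambda: process_entry.delay(obj.pk)) ensures the task is only sent after the row has been committed.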
I had to add the following to the celery container too, to give it access to the code:
volumes:
  - .:/code
I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private pypi-server, without which the app won't run.
I created a separate Dockerfile for the django-backend alone, which installs the packages from requirements.txt and the packages from the private pypi-server. But the Dockerfile of the django-backend service runs even before the private pypi-server is running.
If I move the installation of the private packages into the docker-compose.yml command under the django-backend service, then it works fine. The issue there is that if the backend is running and I want to run some commands in django-backend (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once just by running docker-compose up --build -d.
I created a separate docker-compose file for the pypi-server, which will be up and running even before I build/start the other services.
Have you tried adding the pypi service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see many problems that might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, installation problems surfacing only during deployment and bringing it down, etc. Installation should be done during the build of the Docker image. Could you maybe provide your Dockerfile?
Is there any reason why the pypi server has to share a docker-compose file with the application? I'd suggest having it in a separate deployment, especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything else than a source of the custom packages for the backend service? If not then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service on port 8000, or connect to the db on port 3306. Note that docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine.
Consider using docker secrets to store db credentials.
I've created a FastAPI app with a Postgres DB which lives in a Docker container.
So now I have a docker-compose.yml file with my app and the Postgres DB:
version: '3.9'
services:
  app:
    container_name: app_container
    build: .
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    depends_on:
      - my_database
    #networks:
    #  - postgres
  my_database:
    container_name: db_container
    image: postgres
    environment:
      POSTGRES_NAME: dbf
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    restart: unless-stopped
volumes:
  postgres:
Now I want to run pytest against my DB, testing the endpoints and the DB itself.
BUT, when I run python -m pytest I get the error can not translate hostname "my_database", since in my database.py file I have set DATABASE_URL = 'postgresql://myuser:password@my_database'. According to the user guide, when I build the docker-compose file, in DATABASE_URL I must put the name of the service instead of the hostname.
Anyone have an idea how to solve it?!!
The problem is that if you use docker-compose to run your app in one container and the database in another container, and then run pytest from the host, it is as if your DB has not been launched and pytest can't connect to it. Implementing pytest this way is wrong!
To run pytest correctly you should:
In DATABASE_URL you must write the name of the service instead of the name of the host! In my case my_database is the name of the service in the docker-compose.yml file, so I should set it as the hostname, like: DATABASE_URL = postgres://<username>:<password>@<name of service> (see the sketch after these steps).
pytest must be run in the app container! What does that mean? First of all, start your containers with docker-compose up --build, where --build is optional (it just rebuilds your images if you made some changes to the code in your program files). After this, you should jump into the app container. It can be done from the Docker application on your computer or through a terminal window:
Run docker exec -it <name of container with your application> bash (or sh, depending on the image). You will land inside the container, and from there you can simply run pytest or python -m pytest. Your tests will run as always.
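As a side note on the first point, one way to keep the URL flexible is to read it from the environment in database.py and default the host to the compose service name, so the same code works inside the container and wherever DATABASE_URL is set explicitly. A sketch reusing the names from this question (myuser, password, my_database):

# database.py - read the connection URL from the environment,
# falling back to the docker-compose service name as the host
import os

DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql://myuser:password@my_database",  # my_database = service name in docker-compose.yml
)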
If you have any questions, you can write to me anytime.
So, the reason for this error was that I ran pytest from the host, and it tried to connect via DATABASE_URL to a database that had not been launched there (as I understand it).
I had my Django web app running on Azure App Services using a single Docker container instance. However, I plan to add one more container to run the Celery service.
Before trying the compose with both Celery and the Django web app, I first tried Azure's docker-compose option with just the Django web app, before including the Celery service.
Following is my docker-compose configuration for Azure App Service
version: '3.3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
However, the only thing that I see in my App Service logs is:
2020-10-16T07:02:31.653Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T13:26:20.047Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T14:51:07.482Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:40:49.109Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:43:05.980Z INFO - Stopping site MYSITE because it failed during startup.
I tried the combination of celery and Django app using docker-compose on my LOCAL environment and it seems to be working as expected.
Following is the docker-compose file that I am using to run it on local:
version: '3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
    env_file:
      - .env.file
  celery:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: celery -A DjangoProj worker -l DEBUG
    depends_on:
      - web
    restart: on-failure
    env_file:
      - .env.file
What am I missing?
I have checked multiple SO questions but they are all left without an answer.
I can provide more details if required.
P.S. there's an option to run both Django and Celery in the same container and call it a day, but I am looking for a cleaner and scalable solution.
You have to change the port, because Azure does not support multi-container apps on port 8000.
Example Configuration-file.yaml:
version: '3.3'
services:
  api:
    image: containerdpt.azurecr.io/xxxxxxx
    command: python manage.py runserver 0.0.0.0:8080
    ports:
      - "8080:8080"
Is there any chance you can time the startup of your site? My first concern is that it's not starting up within 230 seconds, or that an external dependency such as the celery container is not ready within 230 seconds.
To see if this is the issue, can you try raising the startup time?
Set the WEBSITES_CONTAINER_START_TIME_LIMIT App Setting to the value you want.
Default value = 230 sec.
Max value = 1800 sec.
My current docker-compose.yml file:
version: '2'
services:
  app:
    restart: always
    build: ./web
    ports:
      - "8000:8000"
    volumes:
      - ./web:/app/web
    command: /usr/local/bin/gunicorn -w 3 -b :8000 project:create_app()
    environment:
      FLASK_APP: project/__init__.py
    depends_on:
      - db
    working_dir: /app/web
  db:
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - dbvolume:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
volumes:
  dbvolume:
I'm now trying to create a docker-compose-test.yml file that overrides the previous file for testing. What came to my mind was to use this:
version: '2'
services:
  app:
    command: pytest
  db:
    volumes:
      - dbtestvolume:/var/lib/postgresql/data
volumes:
  dbtestvolume:
And then run the tests with the command:
docker-compose -f docker-compose.yml -f docker-compose-test.yml run --rm app
which, as far as I understand, should override only the aspects that differ from the compose file used for development, that is, the command used and the data volume where the data is stored.
The command is successfully overridden, but unfortunately the data volume stays the same, and so the data of my application gets overwritten when I run my tests.
Is this the correct way to set up a docker configuration for the tests? Any suggestion about what is going wrong?
If this is not the correct way, what is the proper way to setup a docker-compose configuration for testing?
Alternative test
I tried to change my docker-compose-test.yml file to use a different service (db-test) for testing:
version: '2'
services:
  app:
    command: pytest
    depends_on:
      - db-test
  db-test:
    image: postgres:9.6-alpine
    restart: always
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
What happens now is that the data is not overwritten when I run my tests (so, in a way, it works, hurray!), but if I try to run the command:
docker-compose down
I get this output:
Stopping app_app_1 ... done
Stopping app_db_1 ... done
Found orphan containers (app_db-test_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
and then the docker-compose down fails. So something is not configured properly.
Any idea?
If you don't want to persist the DB data, don't use volumes; that way you will have a fresh database every time you start the container.
I guess you need some prepopulated data in your tables, so just build a new DB image that copies in the data you need. The Dockerfile could be something like:
FROM postgres:9.6-alpine
COPY db-data/ /var/lib/postgresql/data
In case you need to update the data, mount the db-data/ using -v, change it and rebuild the image.
BTW, it would be better to use an automated pipeline to test your builds, using Jenkins, GitLab CI, Travis or whatever solution that suits you. Anyway, you can use docker-compose in your pipeline as well to keep it consistent with your local development environment.