Good morning, good afternoon, or even good evening!
I have been trying to separate the resource server from the auth server using Django OAuth Toolkit, and I got stuck.
Tried:
First, I have already tried the following:
Following this tutorial, and it works when it comes to serving the projects with python manage.py runserver.
The overall structure is that I use Postman as the client and send requests to the resource server, which checks the authenticated user with the auth server, so there is an introspection process between the resource server and the auth server.
Issue:
As I mentioned, the whole idea works only when I serve the projects with python manage.py runserver. When the projects are deployed with Docker Compose, using Nginx and Gunicorn to serve them, the headache begins.
This was the final error - Max retries exceeded with url: /o/introspect/
When I traced it back to the root - Introspection: Failed POST to localhost:8000/o/introspect/ in token lookup
This is error in the client app - "Authentication credentials were not provided."
I found this issue happens when the access token is expired or revoked and the system tries to get a new access token for the resource server from the auth server.
Somehow, the introspection process fails for a reason that is unknown to me!
Anybody hit this wall before?
Edit: (Thu Mar 4, 2021)
I found another cause that may be more closely related to the exact issue!
Docker Compose creates services, and each service runs one container built from the image of a Django project. Therefore, each project is isolated from the others!
As a result, project A has a hard time making requests to project B, because project B's port cannot be reached from inside project A's container.
A potential solution may be to use the Nginx service name (which is the same as the name of each service in the docker-compose file) to make the request.
I am still trying to handle this! If anyone can help, that would be really appreciated!
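One quick way to test this isolation hypothesis is to open a Python shell inside the resource-server container (for example with docker-compose exec <service> python) and try the auth server under different hostnames. A throwaway sketch, where the hostnames and port are only examples taken from the setup described below:
# Throwaway check, run inside the resource-server container: which hostnames
# for the auth server are actually reachable? (hostnames and port are examples)
import requests

for host in ("localhost", "OAuthDjangoBN_Nginx"):
    url = f"http://{host}:8000/o/introspect/"
    try:
        response = requests.post(url, timeout=5)
        print(url, "->", response.status_code)  # any HTTP status means the host is reachable
    except requests.RequestException as exc:
        print(url, "->", exc)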
Edit: (Thu Mar 4, 2021 5:07PM Taiwan) Problem Solved
The solution is demoed below!
Before you READ: This solution is for projects using Django OAuth Toolkit, deployed with Docker Compose, that run into the failed introspection request issue.
So first, let me show you the docker-compose structure:
version: "3.4"
x-service-volumes: &service-volumes
- ./:/usr/proj/:rw,cached
services:
ShopDjangoBN_Nginx:
image: ${DJ_NGINX_IMAGE}
ports:
- 8001:8001
volumes: *service-volumes
environment:
- NGINX_SHOP_HOST=${NGINX_SHOP_HOST}
depends_on:
- "ShopDjangoBN"
ShopDjangoBN:
image: ${SHOP_DJANGO_IMAGE}
command: gunicorn -w 2 -b 0.0.0.0:8001 main.wsgi:application
volumes: *service-volumes
depends_on:
- "ShopDjangoBN_Migrate"
expose:
- 8001
ShopDjangoBN_CollectStatic:
image: ${SHOP_DJANGO_IMAGE}
command: python manage.py collectstatic --noinput
volumes: *service-volumes
ShopDjangoBN_Migrate:
image: ${SHOP_DJANGO_IMAGE}
command: python manage.py migrate
volumes: *service-volumes
OAuthDjangoBN_Nginx:
image: ${DJ_NGINX_IMAGE}
ports:
- 8000:8000
volumes: *service-volumes
environment:
- NGINX_OAUTH_HOST=${NGINX_OAUTH_HOST}
depends_on:
- "OAuthDjangoBN"
OAuthDjangoBN:
image: ${O_AUTH_DJANGO_IMAGE}
command: gunicorn -w 2 -b 0.0.0.0:8000 main.wsgi:application
volumes: *service-volumes
depends_on:
- "OAuthDjangoBN_Migrate"
expose:
- 8000
OAuthDjangoBN_CollectStatic:
image: ${O_AUTH_DJANGO_IMAGE}
command: python manage.py collectstatic --noinput
volumes: *service-volumes
OAuthDjangoBN_Migrate:
image: ${O_AUTH_DJANGO_IMAGE}
command: python manage.py migrate
volumes: *service-volumes
volumes:
auth-data:
shop-data:
static-volume:
media-volume:
There are two Nginx services in the docker-compose.yml file that handle the network for the Django projects: ShopDjangoBN_Nginx and OAuthDjangoBN_Nginx! Generally speaking, if we serve the projects without docker-compose and Nginx, you won't meet this issue. However, you will meet it when it comes to deployment with Docker, I assume.
To set up the Separate Server idea, you need to follow this tutorial, and you will need to add the following to the Django settings file of the Resource server project:
OAUTH2_PROVIDER = {
    ...
    'RESOURCE_SERVER_INTROSPECTION_URL': 'https://example.org/o/introspect/',
    'RESOURCE_SERVER_AUTH_TOKEN': '3yUqsWtwKYKHnfivFcJu',  # OR this, but not both:
    # 'RESOURCE_SERVER_INTROSPECTION_CREDENTIALS': ('rs_client_id', 'rs_client_secret'),
    ...
}
The key here is the 'RESOURCE_SERVER_INTROSPECTION_URL' setting! It is the URL the resource server uses to send the introspection request to the auth server, so it must be set correctly: it has to point at the introspection endpoint of the Auth Server.
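For clarity, the introspection step is just an HTTP POST from the resource server to that URL. A rough illustration with the requests library (this is not DOT's actual code; the URL and token are the placeholders from the settings above):
# Rough illustration of the introspection request the resource server sends.
# If this URL resolves to localhost inside the resource container, you get the
# "Failed POST to localhost:8000/o/introspect/" error shown earlier.
import requests

response = requests.post(
    "https://example.org/o/introspect/",                       # RESOURCE_SERVER_INTROSPECTION_URL
    data={"token": "<access token presented by the client>"},
    headers={"Authorization": "Bearer 3yUqsWtwKYKHnfivFcJu"},   # RESOURCE_SERVER_AUTH_TOKEN
)
print(response.json())  # e.g. {"active": true, ...} for a valid token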
Next, if you still remember, OAuthDjangoBN_Nginx, which handles every request for the Auth Server, is a reverse proxy service! Technically speaking, OAuthDjangoBN_Nginx is going to be our host for the Auth Server. So... the introspection URL in the Resource server's Django settings file will look like:
'RESOURCE_SERVER_INTROSPECTION_URL': 'https://OAuthDjangoBN_Nginx:<port>/o/introspect/'
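Because the right hostname differs between python manage.py runserver and Docker, it may be cleaner to drive this value from an environment variable. A sketch of the resource server's settings (the INTROSPECTION_URL variable name is made up, and plain HTTP inside the compose network is an assumption):
# settings.py of the Resource server -- sketch of reading the introspection
# URL from the environment (INTROSPECTION_URL is a hypothetical variable name).
import os

OAUTH2_PROVIDER = {
    'RESOURCE_SERVER_INTROSPECTION_URL': os.environ.get(
        'INTROSPECTION_URL',
        'http://OAuthDjangoBN_Nginx:8000/o/introspect/',  # assumes plain HTTP inside the compose network
    ),
    'RESOURCE_SERVER_AUTH_TOKEN': os.environ.get('RESOURCE_SERVER_AUTH_TOKEN', ''),
}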
And the nginx.conf
upstream OAuthDjangoBN {
    server OAuthDjangoBN:8000;
}

server {
    listen 8000;

    location / {
        proxy_pass http://OAuthDjangoBN;
        # proxy_set_header Host $NGINX_SHOP_HOST;
        proxy_set_header Host "localhost:8000";
        proxy_redirect off;
    }

    location /static/ {
        alias /usr/proj/OAuthDjangoBN/static/;
    }
}
This proxy_set_header is better set from an environment variable; I found some solutions for that on the internet, so it shouldn't be a problem. It is also important to set it, because the reverse-proxied host name would otherwise be OAuthDjangoBN_Nginx, which is not recognised as a valid host name, so set it up!
proxy_set_header Host "localhost:8000";
Alright, I think this idea can be a solution for anyone who has met or will meet the same issue. I also believe it can still be confusing, so just let me know if you hit the wall!
Related
I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'

volumes:
  pypi-server:

services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private pypi-server, without which the app won't run.
I created a separate Dockerfile for the django-backend alone, which installs the packages from requirements.txt and the packages from the private pypi-server. But the Dockerfile of the django-backend service runs even before the private pypi-server is up.
If I move the installation of the private packages to the command under the django-backend service in docker-compose.yml, then it works fine. The issue there is that, if the backend is running and I want to run some commands inside django-backend (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once by just running the command docker-compose up --build -d.
Created a separate docker-compose file for the pypi-server, which will be up and running even before I build/start the other services.
Have you tried adding the pypi service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see so many problems which might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, any problem with the installation coming up during deployment and bringing it down, etc. Installation should be done during the build of the Docker image. Could you provide your Dockerfile maybe?
Is there any reason why the pypi server has to share docker-compose with the application? I'd suggest having it in a separate deployment especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything else than a source of the custom packages for the backend service? If not then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service using port 8000, or they'd be able to connect to the db on port 3306. N.B. docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine.
Consider using docker secrets to store db credentials.
I have two docker containers: nginx_server and django_server (with uWSGI), and a postgres container with PostgreSQL. Now the thing is, my Nginx setup seems to be a bit off, because it gives me a 502 Bad Gateway any time I curl 0.0.0.0:8000 (both on the host and inside the container).
django_server runs on uWSGI and is configured as follows:
uwsgi --http-socket 0.0.0.0:80 \
--master \
--module label_it.wsgi \
--static-map /static=/app/label_it/front/static \
I'm able to connect to it both outside and inside the container and get a proper result.
Inside the django_server container I can ping and curl nginx_server - the connection works, but I get a 502 Bad Gateway.
Running curl django_server in the nginx_server container outputs:
<html lang="en">
<head>
<title>Bad Request (400)</title>
</head>
<body>
<h1>Bad Request (400)</h1><p></p>
</body>
</html>
(I receive the above output in the django_server container when I curl localhost:80 rather than curl 0.0.0.0:80 / curl 127.0.0.1:80.)
It seems to me like nginx_server is not properly requesting data from uWSGI.
nginx_server configuration:
upstream django {
    server django_server:8000;
}

# configuration of the server
server {
    # the port your site will be served on
    listen 0.0.0.0:8000;
    # the domain name it will serve for
    server_name label_it.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    location /static {
        alias /vol/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
I've read that the server must bind to 0.0.0.0 for the container to be accessible from outside, and that is done.
Another thing: I call django_server from nginx_server by its container name, which should allow me to connect over the Docker network created by docker-compose up.
The main problem is:
django_server content (served by uWSGI) is not accessible from other containers, but is accessible from the host.
Additionally, with netstat -tlnp these two are available:
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8001 0.0.0.0:* LISTEN -
docker-compose.prod.yaml:
version: "3.8"
services:
db:
image: postgres
container_name: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- ./private/posgtesql/data/data:/var/lib/postgresql/data
app:
build: .
container_name: django_server
environment:
- DJANGO_SECRET_KEY=***
- DJANGO_DEBUG=False
- PATH=/app/scripts:${PATH}
- DJANGO_SETTINGS_MODULE=label_it.settings
volumes:
- .:/app
- static_data:/vol/static
ports:
- "8001:80"
depends_on:
- db
nginx:
container_name: nginx_server
build:
context: ./nginx
volumes:
- static_data:/vol/static
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
ports:
- "8000:80"
depends_on:
- app
volumes:
static_data:
EDIT:
Following David Maze's answer, I've changed the port mapping to be simpler and easier to track.
django_server is now port 8001:8001
nginx_server is now port 8000:8000
Respectively:
django_server uwsgi: --http-socket 0.0.0.0:8001
nginx_server: listen 0.0.0.0:8000 and server django_server:8001
Even though the above changes are applied, the issue persists.
You need to make sure all of the port numbers match up.
If you start the back-end service with a command
uwsgi --http-socket 0.0.0.0:80 ...
then cross-container calls need to connect to port 80; ports: are not considered here at all (and aren't necessary if you don't want to connect to the container from outside Docker). Your Nginx configuration needs to say
upstream django {
    server django_server:80; # same port as process in container
}
and in the Compose ports: setting, the second port number needs to match this port
services:
  app:
    ports:
      - "8001:80" # second port must match process in container
Similarly, since you've configured the Nginx container to
listen 0.0.0.0:8000;
then you need to use that port 8000 for inter-container communication and as the second ports: number
services:
  nginx:
    ports:
      - "8000:8000" # second port must match "listen" statement
(If you use the Docker Hub nginx image it by default listens on the standard HTTP port 80, and so you'd need to publish ports: ["8000:80"] to match that port number.)
Solution:
The official uWSGI documentation (https://uwsgi-docs.readthedocs.io/en/latest/Options.html) describes the uwsgi-socket option.
Nginx is capable of connecting to uWSGI through uwsgi_params, but then the socket in the uWSGI configuration must match that type of request. My explanation is a bit messy, but it did work.
My uwsgi.ini file for uwsgi configuration:
[uwsgi]
uwsgi-socket = 0.0.0.0:8001                        <- this line solved the issue
master = true
module = label_it.wsgi:application
chdir = /app/label_it                              <- important
static-map = /static=/app/label_it/front/static    <- might not be necessary
chmod-socket = 666                                 <- might not be necessary
Nginx default.conf file:
upstream django {
    server django_server:8001;
}

# configuration of the server
server {
    # the port your site will be served on
    listen 0.0.0.0:8000;
    server_name label_it.com;
    charset utf-8;
    client_max_body_size 75M;

    location /static {
        alias /vol/static; # your Django project's static files - amend as required
    }

    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
My uwsgi version is: uwsgi --version: 2.0.19.1
Nginx installed with official docker hub image: nginx version: nginx/1.19.7
Solution: add uwsgi-socket = 0.0.0.0:some_port to uwsgi.ini
I had my Django web app running on Azure App Service using a single Docker container instance. However, I plan to add one more container to run the celery service.
Before trying the compose with celery and the Django web app together, I first tried using their docker-compose option to run just the Django web app, before adding the celery service to the compose.
Following is my docker-compose configuration for Azure App Service
version: '3.3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
However, the only thing that I see in my App Service logs is:
2020-10-16T07:02:31.653Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T13:26:20.047Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T14:51:07.482Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:40:49.109Z INFO - Stopping site MYSITE because it failed during startup.
2020-10-16T16:43:05.980Z INFO - Stopping site MYSITE because it failed during startup.
I tried the combination of celery and Django app using docker-compose on my LOCAL environment and it seems to be working as expected.
Following is the docker-compose file that I am using to run it on local:
version: '3'
services:
  web:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: gunicorn DjangoProj.wsgi:application --workers=4 --bind 0.0.0.0:8000 --log-level=DEBUG
    ports:
      - 8000:8000
    env_file:
      - .env.file
  celery:
    image: azureecr.azurecr.io/image_name:15102020155932
    build: .
    command: celery -A DjangoProj worker -l DEBUG
    depends_on:
      - web
    restart: on-failure
    env_file:
      - .env.file
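For context, the celery -A DjangoProj worker command above assumes a standard Celery application module inside the Django project, roughly like the following sketch (generic Celery-with-Django boilerplate, not code taken from the question; the DjangoProj names are inferred from the command):
# DjangoProj/celery.py -- the module that "celery -A DjangoProj worker" loads
# (generic sketch; module and settings names are assumed from the command above).
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "DjangoProj.settings")

app = Celery("DjangoProj")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()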
What am I missing?
I have checked multiple SO questions but they are all left without an answer.
I can provide more details if required.
P.S. There's an option to run both Django and Celery in the same container and call it a day, but I am looking for a cleaner and more scalable solution.
You have to change the port, because Azure does not support multi-container apps on port 8000.
Example of Configuration-file.yaml:
version: '3.3'
services:
  api:
    image: containerdpt.azurecr.io/xxxxxxx
    command: python manage.py runserver 0.0.0.0:8080
    ports:
      - "8080:8080"
Is there any chance you can time the startup of your site? My first concern is that it is not starting up within 230 seconds, or that an external dependency such as the celery container is not ready within 230 seconds.
To see if this is the issue, can you try raising the startup time?
Set the WEBSITES_CONTAINER_START_TIME_LIMIT App Setting to the value you want.
Default value: 230 seconds
Max value: 1800 seconds
I am having issues getting data back from a docker-selenium container via a Flask application (also dockerized).
When I have the Flask application running in one container, I get the following error on http://localhost:5000, which goes to the selenium driver using a Remote driver running on http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in PyCharm, with the selenium grid up in Docker, I am able to get the data back through http://localhost:5000. The issue only happens when the Flask app is running inside Docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (and updated the code on GitHub). As I have the Flask app code running in debug mode and mounted as a volume, any update to the code results in a refresh of the debugger.
I ran docker network inspect on the created network and found the local Docker IP address of selenium-hub. I updated the app/utils.py code in get_driver() to use that IP address in command_executor rather than localhost. Saving and re-running from my browser results in a successful return of data.
But I don't understand why http://localhost:4444/wd/hub would not work, the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use the host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack) the services should refer to each other by the service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub. Your Flask application could be reached by another container on the same network at http://web
You may be confused if your default when running docker normally is to use host networking because on the host network selenium-hub is also exposed on the same port 4444. So, if you started a container with host networking it could use http://localhost:4444 just fine there as well.
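As a rough sketch of that change, the get_driver() helper mentioned in the question could build the Remote driver against the hub's service name instead of localhost (assuming the selenium 3.141 Python bindings matching the images above):
# app/utils.py -- sketch: point the Remote driver at the hub's compose
# service name instead of localhost (get_driver is the helper named in the question).
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_driver():
    return webdriver.Remote(
        command_executor="http://selenium-hub:4444/wd/hub",  # service name, not localhost
        desired_capabilities=DesiredCapabilities.CHROME,
    )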
Could it potentially be a port-in-use issue related to the execution?
See:
Python urllib2: Cannot assign requested address
I just started using Visual Studio Code for Python development and I like it so far. I am having issues doing remote debugging of a Django app running in Docker. I was hoping someone could point me in the right direction to set up VS Code debugging for Django through Docker.
I am running Django REST framework in a Docker container. I am using the ASGI server daphne because of websocket connections. I want to know if there is a way to use remote debugging with Django running in Docker with daphne workers.
I have looked at some tutorials, but they cover the WSGI part. The problem I am having is that daphne uses workers that connect to the daphne server and process the requests that way.
Things I have tried:
I'm able to use pdb to debug. I have to attach to the worker container and also specify stdin_open: true and tty: true. It works but is a tedious process.
I tried the steps from the tutorial URL specified, but it didn't work for Visual Studio Code.
I also tried remote debugging in PyCharm, but the background skeleton processes make my computer really slow. I was able to make it work, though. I'm hoping I can do the same in Visual Studio Code.
Here is my daphne and daphne worker settings for docker-compose.yml
#-- Daphne Webserver
web:
  build:
    context: .
    dockerfile: Dockerfile
  hostname: web
  command: daphne -b 0.0.0.0 -p 8001 datuhweb.asgi:channel_layer --access-log=logs/daphne.access.log
  environment:
    - ENV: 'LOCAL'
  expose:
    - "8001"
  volumes:
    - .:/app

#-- Daphne worker
daphne_worker:
  build:
    context: .
    dockerfile: Dockerfile
  hostname: daphne_worker
  environment:
    - ENV: 'LOCAL'
  volumes:
    - .:/app
  command: python -u manage.py runworker -v2
  stdin_open: true
  tty: true
I am getting a connection error.
Here is my setting for Visual Studio Code:
{
    "name": "Attach (Remote Debug)",
    "type": "python",
    "request": "attach",
    "localRoot": "/app/",
    "remoteRoot": "/app/",
    "port": 3500,
    "secret": "my_secret",
    "host": "172.18.0.8"
}
This gives me a connection error.
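For what it's worth, an attach configuration with a secret field like the one above corresponds to the old ptvsd 3.x API, so the Django process itself also has to enable attaching. A sketch under that assumption (ptvsd 3.x installed in the daphne and worker containers, with the secret and port matching the launch config, and port 3500 exposed on the container):
# Early in the Django startup path (e.g. manage.py or the asgi setup) -- sketch
# assuming ptvsd 3.x; the secret and port must match the VS Code config above,
# and port 3500 also has to be exposed/published on the container.
import ptvsd

ptvsd.enable_attach("my_secret", address=("0.0.0.0", 3500))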