Python WebSocket connection refused in Docker - python

I have a python socketio application that works just fine when run locally. However, when I move it into Docker, external clients are unable to connect and throw this error:
socketio.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The WebSocket server is really basic and binds to 0.0.0.0:8080. My client apps connect to localhost:8080. In my Docker container, I've exposed port 8080. I'm guessing that I'm setting up my container incorrectly:
Dockerfile:
FROM python:latest
WORKDIR /path/to/project
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml:
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
I start my container with docker-compose up, and I'm using Docker Desktop (if that helps). I use this environment for development, so I start my server by running python my_server.py inside the container. The server starts successfully. What else am I missing?
I've tried the following based on what others have said about this problem online:
Explicitly setting 0.0.0.0 to be the host
Using EXPOSE 8080 in my Dockerfile
Setting the network mode to host
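For context, a minimal sketch of the kind of server described above (assuming python-socketio served with eventlet; not the actual my_server.py):
import eventlet
import socketio

# Plain Socket.IO server bound to all interfaces on port 8080
sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("client connected:", sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 8080)), app)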

Your container connectivity appears to be fine.
You have not specified any command to run in your container. To make sure your server is actually running on port 8080, you can add a CMD instruction in your Dockerfile or specify command in your docker-compose.yml. In the Dockerfile, this looks like:
CMD ["python", "my_server.py"]
Or in the docker-compose.yml file:
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
command: python my_server.py #<== this overrides the Dockerfile CMD instruction
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
You can learn more about command in the Compose file documentation.
That's also the recommended way to develop with Docker: rather than running commands inside an already-running container, you rerun docker compose up each time you want to test a change. That avoids a lot of problems like this one, where connectivity doesn't work.
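Once the container starts the server itself, a quick way to verify connectivity from the host is a throwaway client script (a sketch, assuming the python-socketio client package is installed on the host):
import socketio

# Connect to the host port published by docker-compose ("8080:8080")
sio = socketio.Client()
sio.connect("http://localhost:8080")
print("connected, sid =", sio.sid)
sio.disconnect()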

Related

Docker Compose with Django and Postgres Fails with "django.db.utils.OperationalError: could not connect to server:" [duplicate]

I am trying to run my Django/Postgres application with Docker Compose. When I run docker compose up -d I get the following logs on my postgres container running on port 5432:
2023-02-18 00:10:25.049 UTC [1] LOG: starting PostgreSQL 13.8 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-02-18 00:10:25.052 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-02-18 00:10:25.054 UTC [22] LOG: database system was shut down at 2023-02-18 00:10:06 UTC
2023-02-18 00:10:25.056 UTC [1] LOG: database system is ready to accept connections
It appears my postgres container is working properly. However, my Python/Django container has the following logs:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
docker-compose.yml:
version: '3.8'
services:
  web:
    build: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - ./xi/.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=***
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  dev-db-data:
Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install nano python3-dev libpq-dev -y
COPY requirements-prod.txt /code/
RUN pip install -r requirements-prod.txt
COPY . /code/
I must be missing something small that allows the python container to communicate with the postgres container.
Also a few additional questions:
What does it mean that the container is "listening on IPv4 address '0.0.0.0', port 5432"? To my understanding, 0.0.0.0 covers all IP addresses, including 127.0.0.1, so in this case that shouldn't be an issue (correct me if I'm wrong).
I have been struggling with this for a few days. I have followed the getting-started docs as well as the Python usage guides in the Docker docs, and I feel that I understand everything, yet I am unable to debug my containers efficiently. What additional knowledge can I acquire that would help me debug a container with the same level of comfort as a Python script?
I tried a few things:
swapping env_file with the credentials hard-coded in
changing python to python3
removing sh -c
I tried building my database first with docker-compose up -d --build db and then building my web app with docker-compose up -d --build web and the issue persisted.
I tried everything with the environment variables, and it appears improper credentials are not the issue. Running python manage.py runserver without Docker connects to the database successfully. There are some similar Stack Overflow questions, but I have tried their solutions and they do not work.
Part of my issue is that I don't yet know what to try or how to efficiently debug Docker containers (hence the question above).
What have you set as the HOST value in DATABASES['default'] in settings.py? If it's '127.0.0.1', try changing it to 'db' to match the Compose service name.
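For example, a settings.py fragment along these lines (a sketch only; the environment variable names are illustrative and should match whatever your .env actually defines):
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        # "db" is the Compose service name, reachable from the web container;
        # 127.0.0.1 would point back at the web container itself.
        "HOST": os.environ.get("DB_HOST", "db"),
        "PORT": "5432",
    }
}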

Docker - Build a service after the dependant service is up and running

I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml:
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private PyPI server, without which the app won't run.
I created a separate Dockerfile for django-backend alone, which installs the packages from requirements.txt and the packages from the private PyPI server. But the Dockerfile of the django-backend service builds even before the private PyPI server is running.
If I move the installation of the private packages to the command under the django-backend service in docker-compose.yml, then it works fine. The issue there is that, if the backend is running and I want to run some commands in django-backend (e.g. ./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once just by running docker-compose up --build -d.
I created a separate docker-compose file for the pypi-server, which is up and running even before I build/start the other services.
Have you tried adding the pypi-server service to the depends_on list of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see many problems arising from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, and any problem with the installation surfacing during deployment and bringing it down. Installation should be done during the build of the Docker image. Could you provide your Dockerfile maybe?
Is there any reason why the pypi server has to share a docker-compose file with the application? I'd suggest having it in a separate deployment, especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything other than a source of the custom packages for the backend service? If not, then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service using port 8000, or they'd be able to connect to the db on port 3306. N.B. docker-compose creates a network among the containers, so they can access each other's ports even if those ports are not published to the host machine.
Consider using docker secrets to store db credentials.

Docker image ran for Django but cannot access dev server url

I'm working on containerizing my server. I believe the build runs successfully, and when I run docker-compose my development server appears to start, but when I try to visit the associated dev server URL:
http://0.0.0.0:8000/
I get a page with the error:
This site can’t be reached. The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address.
These are the settings in my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR C:/Users/15512/Desktop/django-project/peerplatform
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000", "--settings=signup.settings"]
This is my docker-compose.yml file:
version: "3.8"
services:
redis:
restart: always
image: redis:latest
ports:
- "49153:6379"
pairprogramming_be:
restart: always
depends_on:
- redis
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
env_file:
- ./signup/.env
- ./payments/.env
- ./.env
build:
context: ./
dockerfile: Dockerfile
ports:
- "8000:8001"
container_name: "pairprogramming_be"
volumes:
- "C:/Users/15512/Desktop/django-project/peerplatform://pairprogramming_be"
working_dir:
"/C:/Users/15512/Desktop/django-project/peerplatform"
This is the .env file:
DEBUG=1
DJANGO_ALLOWED_HOSTS=0.0.0.0
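A settings.py fragment that consumes this variable might look like the following (a sketch only; how the real settings.py parses it may differ):
import os

# Space-separated list of hosts, e.g. DJANGO_ALLOWED_HOSTS="0.0.0.0 localhost 127.0.0.1"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "").split()
DEBUG = os.environ.get("DEBUG", "0") == "1"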
FYI: the redis image runs successfully. This is what I have tried:
I tried changing the allowed hosts to localhost and 127.0.0.1
I tried running the command python manage.py runserver and eventually added 0.0.0.0:8000
When I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' pairprogramming_be I get a blank response; my Docker container doesn't appear to have an IP address
Where does the 8001 port come from? In your mapping, that is the internal (container-side) listening port. Since you set your application (inside Docker) to listen on 8000, you should map container port 8000 to whatever host port you like.
Just change the compose file to:
ports:
  - "8000:8000"

Running a django application on both the host machine and docker container at the same time

I have created a simple Django application that has one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000.
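(For reference, an endpoint like this can be as small as the following sketch; the view and URL names are hypothetical:)
# views.py (hypothetical)
from django.http import JsonResponse

def health_live(request):
    return JsonResponse({"status": "ok"})

# urls.py (hypothetical)
# from django.urls import path
# urlpatterns = [path("health/live", health_live)]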
I also have a docker-compose and Dockerfile as below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (I don't detach it to be able to see the logs)
They both work. I send a GET request to http://127.0.0.1:8000/health/live:
Based on the logs I see, the request goes through the service running directly on the system and not the one in the Docker container.
If I stop the service running directly (without Docker) and send the request, the request goes through the one deployed on Docker.
Is there a reason this is happening? Why does the first one take priority?
And shouldn't I see an error when trying to run the Docker container or start the application locally, since they are both listening on port 8000?
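(For reference, the error the last question expects is what normally happens when two plain processes try to bind the same address and port; a standard-library sketch:)
import socket

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 8000))
s1.listen()

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", 8000))   # normally fails with "Address already in use"
except OSError as exc:
    print("second bind failed:", exc)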

Docker & Python, permission denied on Linux, but works when runnning on Windows

I'm trying to prepare a development container with Python + Flask and Postgres.
Since it is a development container, it is meant to be productive, so I don't want to run a build each time I change a file. That means I can't COPY the files in the build phase; instead I mount a volume with all the source files, so when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it is running in the container.
So far so good: with docker-compose up these containers run fine on Windows, but when I tried to run them on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work here because the file doesn't exist at build time (it is only mounted at run time), so I tried changing RUN to CMD... but I still get the same error.
Any ideas why? Aren't containers supposed to help with 'works on my machine'? Because these files work on a Windows host, but not on a Linux host.
Is what I am doing the right approach to make file changes on the host machine reflect in the container (without a rebuild)?
Thanks in advance!!
Below are my files:
docker-compose.yml:
version: '3'
services:
  postgres-docker:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: "Postgres2019!"
    ports:
      - "9091:5432"
    expose:
      - "5432"
    volumes:
      - volpostgre:/var/lib/postgresql/data
    networks:
      - app-network
  rest-server:
    build:
      context: ./projeto
    ports:
      - "9092:5000"
    depends_on:
      - postgres-docker
    volumes:
      - ./projeto:/app
    networks:
      - app-network
volumes:
  volpostgre:
networks:
  app-network:
    driver: bridge
and inside the projeto folder I have the following Dockerfile:
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
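(For context, run.py would be something like this sketch for the reload-on-change behaviour described above; the app import is hypothetical:)
# run.py (sketch)
from app import app  # hypothetical: wherever the Flask app object lives

if __name__ == "__main__":
    # debug=True enables the reloader, so edits in the mounted volume restart the server
    app.run(host="0.0.0.0", port=5000, debug=True)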
One of the options you can try is to override CMD in docker-compose.yml: first set the permission on the file, then execute the script.
By doing this you do not need to build a Docker image at all, since the only thing the image adds is the CMD ./start.sh instruction.
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash -c 'chmod +x start.sh && ./start.sh'
