Docker + SQL Server Cannot assign requested address - python

I have a Docker setup built from the compose.yml file below:
version: "3.9"
services:
flask:
build:
context: consumelogs/
dockerfile: Dockerfile.web
env_file:
- ./consumelogs/.env
ports:
- "5000:5000"
redis:
image: "redis:alpine"
ports:
- "6379:6379"
worker:
build:
context: consumelogs/
dockerfile: Dockerfile.worker
env_file:
- ./consumelogs/.env
depends_on:
- redis
And my SQL Server (also a Docker container) is started with the command
docker run -d --name sql_server -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=123456' --net slackbot-net -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
where the default username is sa and my bridge network is slackbot-net. My Python app's containers are started using the command
docker-compose up --build --force-recreate
I know that if I use localhost as the server name, it points to the container itself, but I've tried my machine's IP address, 0.0.0.0, 127.0.0.1, and even the IP address returned by the query
SELECT
CONNECTIONPROPERTY('net_transport') AS net_transport,
CONNECTIONPROPERTY('protocol_type') AS protocol_type,
CONNECTIONPROPERTY('auth_scheme') AS auth_scheme,
CONNECTIONPROPERTY('local_net_address') AS local_net_address,
CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
CONNECTIONPROPERTY('client_net_address') AS client_net_address
and I still get Cannot assign requested address. Is there something obvious that I'm missing?
Azure Data Studio is able to connect to the SQL Server running in Docker just fine using localhost and the username/password combo, and so is my Python tool using pytds, just not from within Docker.
Any help is greatly appreciated!

Related

psycopg.OperationalError: connection failed: Connection refused in Docker

So I tried to connect my Docker app (python-1) to another Docker app (postgres), but it gives me this error:
psycopg.OperationalError: connection failed: Connection refused
python-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
python-1 | TCP/IP connections on port 25432?
I've tried using condition: service_healthy but it doesn't work. In fact, I already make sure my database is running before python-1 tries to connect, so the problem doesn't seem to be that the database hasn't started yet. I've also tried 0.0.0.0 and the postgres container's IP as the host, and it still doesn't work.
Here is my docker-compose.yml
version: "3.8"
services:
postgres:
image: postgres:14.6
ports:
- 25432:5432
healthcheck:
test: ["CMD-SHELL", "PGPASSWORD=${DB_PASSWORD}", "pg_isready", "-U", "${DB_USERNAME}", "-d", "${DB_NAME}"]
interval: 30s
timeout: 60s
retries: 5
start_period: 80s
environment:
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_NAME}
python:
build:
context: .
dockerfile: Dockerfile
depends_on:
postgres:
condition: service_healthy
command: flask --app app init-db && flask --app app run -h 0.0.0.0 -p ${PORT}
ports:
- ${PORT}:${PORT}
environment:
DB_HOST: localhost
DB_PORT: 25432
DB_NAME: ${DB_NAME}
DB_USERNAME: ${DB_USERNAME}
DB_PASSWORD: ${DB_PASSWORD}
And this is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.10
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
In a container, localhost means the container itself.
Different containers on a docker network can communicate using the service names as host names. Also, on the docker network, you use the unmapped port numbers.
So change your environment variables to
environment:
  DB_HOST: postgres
  DB_PORT: 5432
and you should be able to connect.
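For reference, a minimal sketch of the connection on the Python side, using psycopg (which the error message suggests you are already using) and reading the same environment variables the compose file sets; the SELECT 1 is just a placeholder to prove the connection works:

import os

import psycopg

# Inside the compose network, DB_HOST is the service name "postgres" and
# DB_PORT is the container-internal port 5432.
conn = psycopg.connect(
    host=os.environ["DB_HOST"],
    port=os.environ["DB_PORT"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USERNAME"],
    password=os.environ["DB_PASSWORD"],
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()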

Docker - Build a service after the dependent service is up and running

I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private pypi-server, without which the app won't run.
I created a separate Dockerfile for the django-backend alone, which installs the packages from requirements.txt and the packages from the private pypi-server. But the Dockerfile of the django-backend service builds even before the private pypi server is running.
If I move the installation of the private packages into the command of the django-backend service in docker-compose.yml, then it works fine. The issue there is that if the backend is running and I want to run some commands inside django-backend (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once just by running docker-compose up --build -d.
I created a separate docker-compose file for pypi-server, which is up and running even before I build/start the other services.
Have you tried adding the pypi service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see many problems that might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, any problem with the installation coming up during deployment and bringing it down, etc. Installation should be done during the build of the Docker image (a rough sketch follows after these questions). Could you provide your Dockerfile maybe?
Is there any reason why the pypi server has to share docker-compose with the application? I'd suggest having it in a separate deployment especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything else than a source of the custom packages for the backend service? If not then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service using port 8000, or connect to the db on port 3306. Note that docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine.
Consider using docker secrets to store db credentials.
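As a rough sketch of what a build-time installation could look like (not your actual Dockerfile; the base image, index address, and package name below are placeholders, and the already-running pypi-server has to be reachable during the build, e.g. via the host's address and the published port 8080, as in your separate-compose approach):

# Sketch of a backend Dockerfile that installs the private packages at build time.
FROM python:3.10
WORKDIR /usr/src/app
COPY requirements.txt .
# Public dependencies first.
RUN pip install -r requirements.txt
# Private packages from the pypi-server; replace 192.168.1.10 and
# some-private-package with whatever address the server is reachable at
# during the build and your real package names (placeholder values).
RUN pip install --trusted-host 192.168.1.10 \
    --extra-index-url http://192.168.1.10:8080/simple/ \
    some-private-package
COPY . .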

Running a django application on both the host machine and docker container at the same time

I have created a simple Django application that has one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000.
I also have a docker-compose and Dockerfile as below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (not detached, so I can see the logs).
They both work. When I send a GET request to http://127.0.0.1:8000/health/live:
based on the logs I see, the request goes to the service running directly on the system and not to the Docker container;
if I stop the service running directly (without Docker) and send the request, it goes to the one deployed in Docker.
Is there a reason this is happening? Why does the first one take priority?
And shouldn't I see an error when trying to run the Docker container or start the application locally, since they are both listening on port 8000?

testdriven.io How do I fix "Name or service not known" during docker run?

docker build -f project/Dockerfile.prod -t registry.heroku.com/mighty-savannah-85236/web ./project
Successfully built 3df1e0c4eea4
Successfully tagged registry.heroku.com/mighty-savannah-85236/web:latest
docker run --name fastapi-tdd -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
nc: getaddrinfo for host "web-db" port 5432: Name or service not known
docker-compose file
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
It seems your container is trying to connect to web-db:5432, which, given the port, is likely a Postgres database server. Since web-db is not a real domain name, most likely there is supposed to be another container called web-db, a Postgres database, which your container wants to connect to.
This connection will only work, though, if both containers (the one you are starting and the Postgres database container) are in the same user-defined Docker network, as only then does Docker's service discovery work. You might want to have a look at the Docker documentation for this.
But essentially you need to create a Docker network using
docker network create my-network
and then attach both containers - again, your container and the Postgres database - to that network using the --network option.
Additionally your Postgres container must be called web-db so that the service discovery will work.
So the skeleton of the command to start the DB would be the following:
docker run --name web-db --network my-network -p 5432:5432 your-database-image
The command to start your application would be
docker run --name fastapi-tdd --network my-network -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
Might also be worth exploring Docker-compose to simplify this whole process.
Edit
Now with your docker-compose.yaml file the same rule applies: both containers need to be in the same user-defined bridge network, which is declared under the top-level networks: key (be aware: don't put it inside services:).
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    # attach this container to the network
    networks:
      - my-network
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    # attach this container to the network
    networks:
      - my-network
    # name this container web-db
    container_name: web-db
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

# declare the network resource
networks:
  my-network:
Now a connection should be possible. Be aware that you may also need to configure PostgreSQL to accept remote connections by setting listen_addresses = '*' in postgresql.conf.
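As a quick sanity check (a sketch, not part of the setup above), you can verify from inside the web container, e.g. in a Python shell started with docker-compose exec web python, that the service name resolves and the port is reachable:

import socket

# "web-db" and 5432 come from the compose file above; both should be
# reachable from the web container once the two services share a network.
addr = socket.gethostbyname("web-db")
print("web-db resolves to", addr)
with socket.create_connection(("web-db", 5432), timeout=5):
    print("TCP connection to web-db:5432 succeeded")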

Celery workers unable to connect to redis on docker instances

I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery docker instance and attempt to create a worker using the command celery -A backend worker -l info I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet in to the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication; you would be using the compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top-level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
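To make that concrete, here is a minimal sketch of wiring the broker URL into the Celery app (assuming the project module is called backend, as in your celery -A backend worker command, and the compose service is named redis; the environment-variable override is just a suggestion for runs outside Docker):

import os

from celery import Celery

# Use the compose service name "redis" rather than localhost; allow an
# override through an environment variable for runs outside Docker.
broker_url = os.environ.get("CELERY_BROKER_URL", "redis://redis:6379/0")

app = Celery("backend", broker=broker_url)
app.autodiscover_tasks()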
You may need to add the links and depends_on sections to your docker-compose file, and then reference the containers by their hostnames.
Updated docker-compose.yml:
version: '2.1'
services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the urls to redis, postgres, memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
The issue for me was that all of the containers, including celery, had a networks argument specified. If that is the case, the redis container must also have the same argument, otherwise you will get this error. See below; the fix was adding networks:
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server
