How to share the UDP port of a container with the app container (both are in the same compose.yml file) - python

I made the image of my Django project with a Dockerfile, then used that image in my docker-compose.yml file.
My app uses udp://:14540 to connect to the simulator (when I run both things on my host machine).
When I run the simulator in docker on my host machine and run the app on my host machine, everything works fine. Here are the details of the simulator image.
But when I put these images in my docker-compose file and run docker compose up, my app, celery, redis and the simulator all start, but the simulator doesn't connect to my app.
docker-compose.yml:
services:
  redis:
    image: redis
    container_name: redis
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - type: bind
        source: /home/user/mydjangodirectory
        target: /mydjangodirectory
    image: mydjangoproj:1.0.0
    container_name: djangoapp-container
    command: python manage.py runserver 0.0.0.0:8000
  celery-beat:
    restart: always
    build:
      context: .
    command: celery -A djangoproj beat -l INFO
    volumes:
      - type: bind
        source: /home/user/mydjangodirectory
        target: /mydjangodirectory
    depends_on:
      - redis
      - app
  celery-worker:
    restart: always
    build:
      context: .
    command: celery -A djangoproj worker -l INFO
    volumes:
      - type: bind
        source: /home/user/mydjangodirectory
        target: /mydjangodirectory
    depends_on:
      - redis
      - app
      - celery-beat
  simulator:
    image: jonasvautherin/px4-gazebo-headless
    ports:
      - "14540:14540/udp"
    depends_on:
      - app
After running docker compose up, I ran netstat -tunlp in the simulator and app containers, in the hope that it gives you some more information you can use to help me fix my problem. The output image is attached.
Here is the code of my app's source file that connects to the simulator:
#!/usr/bin/env python3
import asyncio

from mavsdk import System


async def run():
    # Init the drone
    drone = System()
    await drone.connect(system_address="udp://:14540")

    print("Waiting for drone to connect...")
    async for state in drone.core.connection_state():
        if state.is_connected:
            print(f"-- Connected to drone!")
            break

    # Execute the maneuvers
    print("-- Arming")
    await drone.action.arm()

    print("-- Taking off")
    await drone.action.set_takeoff_altitude(10.0)
    await drone.action.takeoff()
    await asyncio.sleep(10)

    print("-- Landing")
    await drone.action.land()


if __name__ == "__main__":
    # Run the asyncio loop
    asyncio.run(run())
Approach 1: await drone.connect(system_address="udp://:14540") (this works when both things run on my host machine)
Approach 2: await drone.connect(system_address="udp://localhost:14540") (not working in docker)
Approach 3: await drone.connect(system_address="udp://simulator:14540") (the name of the service, as described in the docker documentation) (not working in docker)
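A note on what udp://:14540 means, which may explain why approaches 2 and 3 change nothing: MAVSDK binds and listens on UDP 14540 and waits for the simulator to send MAVLink to it. In compose, each service has its own network namespace, so publishing 14540/udp on the simulator service only forwards inbound traffic from the host to the simulator; it does not make the simulator's outgoing packets reach the app container. The simulator has to be told to send its stream to the app service's hostname (how to set that target address depends on the simulator image, so check its documentation). A minimal probe you could run inside the app container (a hypothetical helper, not part of MAVSDK or the project) to check whether any UDP traffic from the simulator arrives on that port:

# udp_probe.py -- run inside the app container with the MAVSDK script stopped,
# so the port is free. Prints the source address of every datagram received.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 14540))          # the same port MAVSDK would listen on
print("listening on udp/14540 ...")
while True:
    data, addr = sock.recvfrom(4096)   # blocks until a datagram arrives
    print(f"got {len(data)} bytes from {addr}")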

Related

connection refused: Celery-RabbitMQ

I am integrating Celery with FastAPI, using RabbitMQ as the broker. Whenever I submit a task to Celery I get this error: "kombu.exceptions.OperationalError: [Errno 111] Connection refused". I don't understand it; maybe it's due to the connection with RabbitMQ, but when I start the Celery worker it doesn't give any connection error, only at the time of task submission.
Following is my code:
main.py
from fastapi import FastAPI
from scraper import crawl_data
from task import sample_task

app = FastAPI()


@app.get("/test")
def test():
    data = sample_task.delay()
    return {'MESSAGE': 'DONE'}
task.py
from celery_config import app
import time


@app.task
def sample_task():
    for i in range(1, 10):
        time.sleep(10)
    print("DONE TASK")
celery_config.py
from celery import Celery

app = Celery('celery_tutorial',
             broker="amqp://guest:guest@localhost:5672//",
             include=['task'])
docker-compose.yml
version: "3.9"
services:
main_app:
build:
context: .
dockerfile: fastapi.Dockerfile
command: uvicorn main:app --host 0.0.0.0 --reload
ports:
- "8000:8000"
rabbitmq:
image: rabbitmq:3.8-management-alpine
ports:
- 15673:15672
# celery_worker:
# build:
# context: .
# dockerfile: fastapi.Dockerfile
# command: celery -A celery worker --loglevel=info
# depends_on:
# - rabbitmq
# - main_app
stdin_open: true
I start the FastAPI server and RabbitMQ with docker compose, and the Celery worker with the following command:
celery -A celery_config worker --loglevel=info
Assuming your celery_config.py runs within the main_app container, the broker's host should be rabbitmq (the service name) rather than localhost:
app = Celery('celery_tutorial',
             broker="amqp://guest:guest@rabbitmq:5672/vhost",
             include=['task'])
EDIT:
It seems like you didn't set the relevant env vars:
rabbitmq:
  image: rabbitmq:3.8-management-alpine
  ports:
    - 15673:15672
  environment:
    - RABBITMQ_DEFAULT_VHOST=vhost
    - RABBITMQ_DEFAULT_USER=guest
    - RABBITMQ_DEFAULT_PASS=guest
make sure you add them, see my answer here.
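As a follow-up, one way to keep the same celery_config.py working both inside and outside compose is to read the broker URL from an environment variable. A small sketch, where CELERY_BROKER_URL is an assumed variable name (not something the question defines), and the localhost fallback only works if port 5672 is also published:

import os
from celery import Celery

# Inside compose, set CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672/vhost
# on the services that need the broker; the fallback below is for local runs.
app = Celery(
    'celery_tutorial',
    broker=os.getenv("CELERY_BROKER_URL", "amqp://guest:guest@localhost:5672//"),
    include=['task'],
)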

Implement pytest over FastAPI app running in Docker

I've created a FastAPI app with a Postgres DB, which lives in a docker container.
So now I have a docker-compose.yml file with my app and the Postgres DB:
version: '3.9'
services:
  app:
    container_name: app_container
    build: .
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    depends_on:
      - my_database
    #networks:
    #  - postgres
  my_database:
    container_name: db_container
    image: postgres
    environment:
      POSTGRES_NAME: dbf
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    restart: unless-stopped
volumes:
  postgres:
And now I want to run pytest against my DB, testing the endpoints and the DB itself.
BUT, when I run the python -m pytest command I get the error "could not translate host name "my_database"", because in my database.py file I have set DATABASE_URL = 'postgresql://myuser:password@my_database'. According to the user guide, when I build the docker-compose file, in DATABASE_URL I must put the name of the service instead of the hostname.
Does anyone have an idea how to solve it?
The problem is that if you use docker-compose to run your app in one container and the database in another container, and then run pytest from the host, it is as if your DB is not launched and pytest can't connect to it. Running pytest this way is the wrong approach!
To run pytest correctly you should:
In DATABASE_URL, write the name of the service instead of the name of the host. In my case my_database is the name of the service in the docker-compose.yml file, so I should set it as the hostname, like: DATABASE_URL = postgres://<username>:<password>@<name of service>
pytest must be run in the app container! What does that mean? First of all, start your containers: docker-compose up --build, where --build is optional (it just rebuilds your images if you made some changes to the code in your program files). After this, you should jump into the app container. It can be done from the Docker application on your computer or through the terminal. To do it in a terminal window:
cmd: docker exec -it <name of container with your application>. You will dive into the container, and after this you can simply run pytest or python -m pytest. And your tests will run as always.
If you have any questions, you can write to me anytime.
So, the reason for this error was that I ran pytest and it tried to connect to DATABASE_URL which, em... had not been launched yet (as I understand it).
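A small sketch of the database.py idea from the answer, using the names from the question's compose file; reading the URL from the environment (DATABASE_URL here is an assumed variable name) lets the same code run inside the app container, where the hostname is the my_database service name, and still be overridden elsewhere:

import os

# The hostname is the compose service name "my_database", not localhost;
# the credentials are the ones from the question's compose file.
DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://myuser:password@my_database",
)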

python script shows 100% cpu utilization after running in dockerfile

I am dockerizing a python script, and I run it as CMD ['python', 'script.py'] in the Dockerfile. When I bring the container up using docker-compose.yml, it runs.
But when I docker exec into the container and do a ps -aux, I see that %CPU is 100%; because of this, the purpose of the service is not met.
If I do the same thing manually, i.e. docker exec into the container and run python script.py by hand, it works well: I can see that only about 5% of the CPU is utilized, and the service works and gives the expected result.
Service written in docker-compose:
consumer:
  restart: always
  image: consumer:latest
  build: ./consumer
  ports:
    - "8283:8283"
  depends_on:
    - redis
  environment:
    - REDIS_HOST = redis
redis:
  image: redis
  command: redis-server
  volumes:
    - ./redis_data:/data
  ports:
    - "6379:6379"
  restart: unless-stopped
It is a consumer application, which consumes messages from the producer and writes them into a Redis server.
Can someone advise why such behavior is observed?
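The question doesn't include script.py, so this is only a general point of comparison rather than a diagnosis: a consumer loop that polls Redis in a tight loop shows ~100% of one core even when idle, while a blocking read sits near 0% between messages. A hypothetical sketch with redis-py (names assumed, not the asker's code):

import os
import redis  # redis-py, assumed to be the client in use

r = redis.Redis(host=os.getenv("REDIS_HOST", "redis"), port=6379)

# A busy-polling loop like
#     while True:
#         msg = r.lpop("queue")
# pegs a CPU core even when no messages arrive. A blocking pop instead waits
# inside the server call and uses almost no CPU:
while True:
    _, msg = r.blpop("queue")  # blocks until a message is pushed to "queue"
    print("consumed:", msg)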

Shared volume between Docker containers with python code

Maybe I'm going about it wrong, but I can't seem to get a shared volume working between two docker containers running custom python code.
I'm using the following docker-compose.yml:
version: "2"
services:
rabbitmq:
image: username/rabbitmq
ports:
- 15672:15672
- 5672:5672
producer:
image: username/producer
depends_on:
- rabbitmq
volumes:
- pdffolder:/temp
consumer:
image: username/consumer
depends_on:
- producer
volumes:
- pdffolder:/temp
volumes:
pdffolder:
The idea is that the producer service polls an exchange server for information and a pdf file. The consumer service then has to send this information and pdf file elsewhere. During this action I have to store the pdf locally, temporarily.
I access the volumes from the custom python code like this:
producer
# attachment = the object I get when requesting attachments from an exchange server

# path to the pdf to be saved
pdf_path = os.path.join("temp", attachment.name)
with open(pdf_path, 'wb') as f:
    f.write(attachment.content)

# now in this container, /temp/attachment.pdf exists. I then send this path
# in a message to the consumer (along with other information)
consumer
# consumer tries to find path created by producer (/temp/attachment.pdf) via
pdf_path = os.path.join("temp", "attachment.pdf")
Via the command line I can see that the producer container is writing the files to temp/attachment.pdf as expected. The consumer container, however, sees no files (resulting in errors).
Btw, I am running the containers on Docker for Windows.
I think I figured out what was wrong. I used the following in both the Dockerfiles for the producer and consumer:
FROM python:3.7-slim
WORKDIR /main
ADD . /main
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD ["python", "-u", "main.py"]
Because I moved the python code to the /main folder in both containers, the temp folder created later (via docker-compose) was to be found at /main/temp, and not just /temp. A little bit weird, because main.py should be at the same level as /temp, but hey, it works. I got it working with the following docker-compose.yml:
version: "2"
services:
rabbitmq:
image: username/rabbitmq
ports:
- 15672:15672
- 5672:5672
producer:
image: username/producer
depends_on:
- rabbitmq
volumes:
- pdffolder:/main/temp
consumer:
image: username/consumer
depends_on:
- producer
volumes:
- pdffolder:/main/temp
volumes:
pdffolder:
So I guess the steps to debugging this are:
Check the spelling of all mentions of volumes in the docker-compose.yml file
Check the way paths are being built/referenced in the python code (Linux uses a different format to Windows)
Check whether the paths that have to be accessed from the python code actually exist (see the sketch below)
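To make the path behaviour from this answer concrete: a relative path like os.path.join("temp", ...) is resolved against the current working directory, which the Dockerfile's WORKDIR sets to /main, hence /main/temp in both containers. A small sketch (TEMP_DIR is an assumed variable name, not part of the original code) that makes the mount point explicit instead of relying on the working directory:

import os

# Shows where a relative "temp" actually lands (it depends on the working dir,
# which WORKDIR in the Dockerfile sets to /main):
print(os.getcwd(), os.path.abspath("temp"))

# Making the mount point explicit avoids the surprise entirely; both the
# producer and the consumer would then build paths from the same setting.
TEMP_DIR = os.getenv("TEMP_DIR", "/main/temp")
pdf_path = os.path.join(TEMP_DIR, "attachment.pdf")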

Celery workers unable to connect to redis on docker instances

I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery docker instance and attempt to create a worker using the command celery -A backend worker -l info I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet in to the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication, you would be using the compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
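A quick way to check the hostname resolution described above is to resolve the service name from inside the worker container with a throwaway snippet like this (the service name redis is taken from this answer):

import socket

# Raises socket.gaierror if the celery container and the redis service do not
# share a docker network (or if the service is named something else).
print(socket.gethostbyname("redis"))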
You may need to add links and depends_on sections to your docker compose file, and then reference the containers by their hostnames.
Updated docker-compose.yml:
version: '2.1'
services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the urls to redis, postgres, memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
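For reference, this is roughly what those URLs could look like wired into the Django settings; the hostnames are the compose service names above, while the database name and credentials are just the placeholders from the example URL (and CELERY_BROKER_URL follows the common Celery settings-name convention, which is an assumption, not something shown in the question):

# settings.py sketch -- hostnames are compose service names, not localhost.
CELERY_BROKER_URL = "redis://redis:6379/0"

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "db",
        "PORT": "5432",
        "NAME": "database",
        "USER": "user",
        "PASSWORD": "pass",
    }
}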
The issue for me was that all of the containers, including celery, had a networks argument specified. If this is the case, the redis container must also have the same argument, otherwise you will get this error. See below; the fix was adding 'networks':
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server
