localhost using docker compose up not working - python

I am trying to test a simple server endpoint on my local machine with docker compose up, but the ports do not seem to be exposed when running Docker this way. If I just do a docker build and docker run, I can use localhost to hit the endpoint successfully, but not when I use my docker-compose file.
docker-compose.yml file:
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 3000:80
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
Dockerfile
FROM python:3.9.5
ARG VERSION
ARG SERVICE_NAME
ENV PYTHONPATH=/app
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY app /app/app
COPY main.py /app/
CMD ["python", "./app/main.py"]
And then my main.py file
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

if __name__ == '__main__':
    uvicorn.run(app, port=3000, host="0.0.0.0")
docker compose up does not seem to expose the port to localhost.
What I use with build and run, which does expose it:
docker build -t test-test .
docker run -p 3000:3000 test-test
Is there a way to expose the port to localhost with docker compose up?

The syntax for ports is HOST:CONTAINER. The port on the container is 3000, so you've got it backwards.
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 80:3000
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
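With that mapping, the container's port 3000 (where uvicorn listens) is published on the host's port 80. A minimal smoke test from the host, assuming the stack is up and main.py's route is decorated with @app.get("/"):

# check_endpoint.py - a quick check run on the host; not part of the original post
from urllib.request import urlopen

# Host port 80 maps to container port 3000, where uvicorn is listening.
with urlopen("http://localhost:80/") as resp:
    print(resp.read().decode())  # expect {"Hello":"World"}

If you would rather keep calling localhost:3000, as your docker run command does, publish - 3000:3000 instead.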

Related

Docker compose build after service is up

I have a service web and a service ipfs, which run my web app and the IPFS server. The problem is, I need ipfs to be up before I can build web. This is the docker-compose file I have so far:
version: "3.9"
services:
web:
build: .
ports:
- "5000:5000"
environment:
- FLASK_APP=app.py
- FLASK_ENV=development
- IPFS_ADDR=/dns/ipfs/tcp/5001
- PIN_DATA=False
depends_on:
- ipfs
ipfs:
image: ipfs/go-ipfs:v0.7.0
ports:
- "4001:4001"
- "5001:5001"
- "8080:8080"
This is my Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN python ./wait_ipfs.py
RUN python -m unittest
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
And this is the content of wait_ipfs.py
from time import sleep
from ipfshttpclient.exceptions import ConnectionError
import ipfshttpclient
import sys

IPFS_ADDR = "/dns/ipfs/tcp/5001"
CID = "QmdMxMx29KVYhHnaCc1icWYxQqXwUNCae6t1wS2NqruiHd"

while True:
    try:
        with ipfshttpclient.connect(IPFS_ADDR) as client:
            data = client.get(CID)
            break
    except ConnectionError as e:
        sleep(5)
The problem is, docker compose is building web before running ipfs, and since that is happening I am never able to connect and never able to finish the build. Is there a way I can get ipfs to run before I build web?
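There is no answer attached here, but the failure follows from RUN steps executing at image-build time, when no other compose services are running; ipfs only starts at docker compose up. One common workaround (a sketch only, not from the original post; entrypoint.py is a hypothetical file name) is to drop the RUN python ./wait_ipfs.py line and defer the wait to container start:

# entrypoint.py - hypothetical launcher, used as CMD ["python3", "entrypoint.py"]
import os

# Importing the existing script runs its wait loop at container start,
# when the ipfs service is reachable on the shared compose network.
import wait_ipfs  # noqa: F401

# Once IPFS answers, hand the process over to the Flask development server.
os.execvp("python3", ["python3", "-m", "flask", "run", "--host=0.0.0.0"])

The RUN python -m unittest step has the same problem if the tests talk to IPFS; those would also have to move to runtime or to an environment where an IPFS node is reachable.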

Node + Flask server Docker containers not able to communicate over same network

I have 2 microservices that I'm dockerizing via docker-compose. Once my node service pings my python service, I get a connection refused.
I can ping both services independently via Postman and everything looks fine. It seems the container-to-container networking is what I'm having issues with. The node server sends a request via Axios like so:
const res = await axios.get('bot:9000/test')
and the server code on the Python side looks like:
@app.route('/test', methods=['GET'])
async def tester():
    return jsonify(data='hi'), 200
Compose File
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
Node Dockerfile
FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
RUN npm install -g nodemon
COPY . .
CMD ["npm", "run", "start"]
Flask Dockerfile
FROM python:3.7-alpine
RUN adduser -D quart
WORKDIR /home/quart
COPY ./requirements.txt ./
RUN rm -rf /var/cache/apk/*
RUN pip install --no-cache-dir -r requirements.txt --upgrade && \
chown -R quart:quart ./
COPY ./ /home/quart/
USER quart
CMD ["quart", "run", "-h", "0.0.0.0", "-p", "9000"]
You just need to make sure that the containers you want to talk to each other are on the same network.
Add this at the end of your docker-compose file:
networks:
  some-net:
    driver: bridge
And then add this to each service that you want on this network:
networks:
  - some-net
Your code will look like this:
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
    networks:
      - some-net
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
    networks:
      - some-net
networks:
  some-net:
    driver: bridge
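Once both services sit on some-net, the compose service name doubles as a DNS name inside that network, so container-to-container calls address the Quart service as channel-scraper rather than localhost or a published host port. A minimal sketch of checking this from any Python-capable container on the same network, assuming the /test route and port 9000 from the question:

# net_check.py - hypothetical connectivity check; not part of the original answer
import socket
from urllib.request import urlopen

# Compose's embedded DNS resolves the service name to the container's IP.
print(socket.gethostbyname("channel-scraper"))

# The Quart app from the question listens on 0.0.0.0:9000 in that container.
with urlopen("http://channel-scraper:9000/test", timeout=5) as resp:
    print(resp.status, resp.read().decode())  # expect {"data": "hi"}

The same applies to the Axios call in the bot container: it reaches the Python service by its service name, not through the port published to the host.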

Running Redis rq worker on Docker

I am trying to make a queue of tasks using Redis RQ. I was trying to follow a tutorial, but I am using Docker. Below is my code:
app.py
from flask import Flask, request
import redis
from rq import Queue
import time

app = Flask(__name__)
r = redis.Redis()
q = Queue(connection=r)

def background_task(n):
    """ Function that returns len(n) and simulates a delay """
    delay = 2
    print("Task running")
    print(f"Simulating a {delay} second delay")
    time.sleep(delay)
    print(len(n))
    print("Task complete")
    return len(n)

def index():
    if request.args.get("n"):
        job = q.enqueue(background_task, request.args.get("n"))
        return f"Task ({job.id}) added to queue at {job.enqueued_at}"
    return "No value for count provided"

if __name__ == "__main__":
    app.run()
docker-compose file:
version: "3.8"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
redis:
image: "redis:alpine"
Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Whenever I run docker-compose up --build and open http://localhost:5000/ I get "URL not found".
Where am I going wrong?
How is one supposed to use the rq worker command in Docker containers?
redis:
  image: "redis:alpine"
The issue is that the image specified in your docker-compose YAML should be the image built by your Dockerfile.
Because you have a Dockerfile you want to use for this image, you can specify it inline; see the documentation here:
https://docs.docker.com/compose/compose-file/compose-file-v3/
version: "3.9"
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
As a good practice, instead of calling your service "redis" in the docker compose file, you should provide a custom name to represent your worker script.
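Two details worth spelling out for the RQ setup (a sketch, not part of the original answer): inside the compose network, the web and worker containers reach Redis by the service name redis rather than the localhost default that redis.Redis() assumes, and the worker usually runs as its own container executing the rq worker command against that same Redis instance.

# worker_conn.py - hypothetical shared connection module for the web and worker containers
import redis
from rq import Queue

# "redis" is the compose service name of the redis:alpine container; 6379 is its default port.
conn = redis.Redis(host="redis", port=6379)
queue = Queue(connection=conn)

The worker container would then run something along the lines of rq worker --url redis://redis:6379 as its command, while the web service keeps serving Flask.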

Can't access flask site outside Docker container

I'm running Linux Mint with Python 3.6.
I have read through every link on here but can't figure out what is wrong. I am running a simple flask app which works fine when I'm running it locally on my machine, but then running it with Docker I can't access the IP in my browser.
I have set the flask app to run on host 0.0.0.0, with app.run(host='0.0.0.0').
Dockerfile:
FROM python:3.7
RUN mkdir -p /var/app
WORKDIR /var/app
COPY . /var/app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["pytest", "-v", "tests/test_flask_api.py"]
# CMD ["python3", "app.py"]
CMD ["python3", "-m", "Flask", "run", "--host=0.0.0.0"]
docker-compose.yml:
web:
  build: ./app
  ports:
    - "5000:5000"
  volumes:
    - .:/code
After running the command docker-compose up -d to build and run the container, I run the command docker inspect --format '{{ .NetworkSettings.IPAddress }}' to get the IP address of the container as 172.17.0.2.
I try to access the site via 172.17.0.2:5000 and localhost:5000, but both just hang and don't load.
Finally, I ran docker exec -it restapimma_web_1 /bin/bash to get into the container. Then I ran curl localhost:5000 and was able to get the correct response. So the flask app is running inside the container I just can't access it outside the container.
I had a similar problem. To get it working:
Allow your Flask app to accept a host from your environment:
import os

if __name__ == "__main__":
    app.run(
        host=os.environ.get("BACKEND_HOST", "127.0.0.1"),
        port=your_port,
        debug=True,
    )
Set the host environment variable in your compose file:
services:
  [your service name]:
    image: [your image]
    environment:
      - BACKEND_HOST=[your service name]
    ports:
      - "[etc]"
Basically, Flask wants to be reached using the right hostname.

Unable to build the image in docker

When trying to build the image, I'm getting the error below. I am also including the related project files:
Dockerfile
docker-compose.yml
__init__.py
manage.py
Error:
Building users-service
Step 1/7 : FROM python:3.6.1
ERROR: Service 'users-service' failed to build: Get https://registry-1.docker.io/v2/: dial tcp 52.206.156.207:443: getsockopt: connection refused
Here is my Dockerfile
FROM python:3.6.1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
ADD . /usr/src/app
CMD python manage.py runserver -h 0.0.0.0
Here is the docker-compose.yml
version: '2.1'
services:
  users-service:
    container_name: users-service
    build: .
    volumes:
      - '.:/usr/src/app'
    ports:
      - 5001:5000  # expose ports - HOST:CONTAINER
__init__.py
from flask import Flask, jsonify

# instantiate the app
app = Flask(__name__)

# set config
app.config.from_object('project.config.DevelopmentConfig')

@app.route('/ping', methods=['GET'])
def ping_pong():
    return jsonify({'status': 'success',
                    'message': "pong"})
manage.py
from flask_script import Manager
from project import app

# configure your app
manager = Manager(app)

if __name__ == '__main__':
    manager.run()