I have a service web and a service ipfs, which run my web app and the IPFS server. The problem is that I need ipfs to be up before I can build web. This is the Docker Compose file I have so far:
version: "3.9"
services:
web:
build: .
ports:
- "5000:5000"
environment:
- FLASK_APP=app.py
- FLASK_ENV=development
- IPFS_ADDR=/dns/ipfs/tcp/5001
- PIN_DATA=False
depends_on:
- ipfs
ipfs:
image: ipfs/go-ipfs:v0.7.0
ports:
- "4001:4001"
- "5001:5001"
- "8080:8080"
This is my Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN python ./wait_ipfs.py
RUN python -m unittest
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
And this is the content of wait_ipfs.py:
from time import sleep
from ipfshttpclient.exceptions import ConnectionError
import ipfshttpclient
import sys

IPFS_ADDR = "/dns/ipfs/tcp/5001"
CID = "QmdMxMx29KVYhHnaCc1icWYxQqXwUNCae6t1wS2NqruiHd"

# Retry until the ipfs daemon accepts connections and serves a known CID.
while True:
    try:
        with ipfshttpclient.connect(IPFS_ADDR) as client:
            data = client.get(CID)
            break
    except ConnectionError as e:
        sleep(5)
The problem is that Docker Compose builds web before it starts ipfs, so the wait script can never connect and the build never finishes. Is there a way to get ipfs running before web is built?
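One common way out, sketched here as a suggestion rather than something taken from the question: RUN steps execute at image build time, when no Compose services are running, so the wait (and the unittest step too, if the tests talk to ipfs) has to move to container startup. For example:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
# Wait for the ipfs service at container start, not at build time, then run Flask.
CMD ["sh", "-c", "python ./wait_ipfs.py && python -m flask run --host=0.0.0.0"]
With this, docker compose up starts ipfs, starts web, and web blocks until ipfs answers before it begins serving.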
I am trying to dockerize a Flask project with Redis and SQLite. I keep getting an error when I run the project using Docker. The project works just fine when I run it normally using python manage.py run.
Dockerfile
FROM python:3.7.2-slim
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python","manage.py run", "--host=0.0.0.0"]
docker-compose.yml
version: '3'
services:
  sqlite3:
    image: nouchka/sqlite3:latest
    stdin_open: true
    tty: true
    volumes:
      - ./db/:/root/db/
  api:
    container_name: flask-container
    build: .
    entrypoint: python manage.py run
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    volumes:
      - ./db/:/root/db/
      - ./app/main/:/app/main/
  redis:
    image: redis
    container_name: redis-container
    ports:
      - "6379:6379"
What could be the problem?
Your docker-compose.yml file has several overrides that fundamentally change the way the image works. In particular, the entrypoint: line suppresses the CMD from the Dockerfile, which loses the key --host option. You also should not need volumes: to inject the application code (it's already in the image), nor should you need to manually specify container_name:.
services:
  api:
    build: .
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    # and no other settings
In the Dockerfile, your CMD has two shell words combined together. You need to split those up into separate words in the JSON-array syntax.
CMD ["python","manage.py", "run", "--host=0.0.0.0"]
# ^^^^ two words
With these two fixes, you'll be running the CMD from the image, with the code built into the image, and with the critical --host=0.0.0.0 option.
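For reference, with the original entrypoint: override the container effectively ran
python manage.py run
which leaves Flask bound to its default 127.0.0.1, unreachable from outside the container, while the image's fixed CMD runs
python manage.py run --host=0.0.0.0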
I am trying to test a simple server endpoint on my local machine when running docker compose up, but the ports do not seem to be exposed when running Docker this way. If I just do a docker build and docker run, I can use localhost to make a successful endpoint call, but not when I use my docker-compose file.
docker-compose.yml file:
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 3000:80
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
Dockerfile
FROM python:3.9.5
ARG VERSION
ARG SERVICE_NAME
ENV PYTHONPATH=/app
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY app /app/app
COPY main.py /app/
CMD ["python", "./app/main.py"]
And then my main.py file
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

if __name__ == '__main__':
    uvicorn.run(app, port=3000, host="0.0.0.0")
docker compose up does not seem to expose the port to localhost.
What I use with build and run that does expose:
docker build -t test-test .
docker run -p 3000:3000 test-test
Is there a way to expose the port to localhost with docker compose up?
The syntax for ports is HOST:CONTAINER. The port on the container is 3000, so you've got it backwards.
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 80:3000
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
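With that mapping the endpoint answers on localhost:80. If the goal is to keep calling localhost:3000, as with docker run -p 3000:3000 above, the mapping would instead be:
    ports:
      - 3000:3000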
I have 2 microservices that I'm dockerizing via docker-compose. When my Node service calls my Python service, I get a connection refused.
I can hit both services independently via Postman and everything looks fine, so it seems container-to-container networking is what I'm having issues with. The Node server makes the request via Axios like so:
const res = await axios.get('bot:9000/test')
and the server code on the Python side looks like:
@app.route('/test', methods=['GET'])
async def tester():
    return jsonify(data='hi'), 200
Compose File
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
Node Docker File
FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
RUN npm install -g nodemon
COPY . .
CMD ["npm", "run", "start"]
Flask Docker File
FROM python:3.7-alpine
RUN adduser -D quart
WORKDIR /home/quart
COPY ./requirements.txt ./
RUN rm -rf /var/cache/apk/*
RUN pip install --no-cache-dir -r requirements.txt --upgrade && \
chown -R quart:quart ./
COPY ./ /home/quart/
USER quart
CMD ["quart", "run", "-h", "0.0.0.0", "-p", "9000"]
You just need to make sure that the containers that should talk to each other are on the same network.
Add this at the end of your docker-compose file:
networks:
  some-net:
    driver: bridge
Then add this to every service that should join the network:
    networks:
      - some-net
Your compose file will then look like this:
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
    networks:
      - some-net
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
    networks:
      - some-net
networks:
  some-net:
    driver: bridge
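Note also that on a shared network each container is reachable by its Compose service name. The Axios call in the question targets bot, which is the Node service itself; presumably it should point at the Python service instead, and include an explicit scheme:
const res = await axios.get('http://channel-scraper:9000/test')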
I am trying to make a queue of tasks using Redis RQ. I was following a tutorial, but I am using Docker. Below is my code:
app.py
from flask import Flask, request
import redis
from rq import Queue
import time

app = Flask(__name__)
r = redis.Redis()
q = Queue(connection=r)

def background_task(n):
    """ Function that returns len(n) and simulates a delay """
    delay = 2
    print("Task running")
    print(f"Simulating a {delay} second delay")
    time.sleep(delay)
    print(len(n))
    print("Task complete")
    return len(n)

@app.route("/")
def index():
    if request.args.get("n"):
        job = q.enqueue(background_task, request.args.get("n"))
        return f"Task ({job.id}) added to queue at {job.enqueued_at}"
    return "No value for count provided"

if __name__ == "__main__":
    app.run()
Docker compose file:
version: "3.8"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
redis:
image: "redis:alpine"
Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Whenever I run docker-compose up --build and open http://localhost:5000/ I get "Url not found".
Where am I going wrong? And how is one supposed to use the rq worker command in Docker containers?
redis:
  image: "redis:alpine"
The issue is that this service runs the stock Redis image; the rq worker has to run from the image built by your Dockerfile, which contains your code. Because you have a Dockerfile you want to use for that image, you can specify the build in-line; see the documentation here:
https://docs.docker.com/compose/compose-file/compose-file-v3/
version: "3.9"
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate
args:
buildno: 1
As good practice, instead of calling this service redis in the docker compose file, give it a custom name that represents your worker script, and keep redis as the name of the actual Redis server.
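A minimal sketch of what that can look like, assuming the worker runs the same image as web and listens to the redis service (the worker name and the --url value are illustrative, not from the answer):
version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  worker:
    build: .  # same image as web, built from your Dockerfile
    command: rq worker --url redis://redis:6379  # run the RQ worker against the redis service
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
The app code then also has to connect by service name instead of localhost, e.g. redis.Redis(host="redis") rather than the default redis.Redis().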
I have a React app that communicates with a Flask API and displays data. I had both of these projects in separate folders and everything worked fine.
Then I wanted to containerize the Flask + React app with docker-compose for practice, so I created a folder containing my middleware (Flask) and frontend (React) folders. Then I created a virtual environment and installed Flask. Now when I import flask inside a Python file I get an error.
I do not understand why simply moving the folder inside another folder would affect my project. You can see the project structure and the error in the picture below.
Dockerfile react app
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
CMD [ "npm", "start" ]
Dockerfile flask api
FROM python:3.7.2
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add requirements (to leverage Docker cache)
ADD ./requirements.txt /usr/src/app/requirements.txt
# install requirements
RUN pip install -r requirements.txt
# add app
ADD . /usr/src/app
# run server
CMD python app.py runserver -h 0.0.0.0
docker-compose.yml
version: '3'
services:
middleware:
build: ./middleware
expose:
- 5000
ports:
- 5000:5000
volumes:
- ./middleware:/usr/src/app
environment:
- FLASK_ENV=development
- FLASK_APP=app.py
- FLASK_DEBUG=1
frontend:
build: ./frontend
expose:
- 3000
ports:
- 3000:3000
volumes:
- ./frontend/src:/usr/src/app/src
- ./frontend/public:/usr/src/app/public
links:
- "middleware:middleware"
When moving folders around, you should update the Python interpreter path in your .vscode/settings.json file. Otherwise VS Code will keep using the wrong Python interpreter, one from an environment without Flask installed.
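A minimal sketch of that settings file, assuming the new virtual environment lives at venv/ in the workspace root (older versions of the VS Code Python extension use python.pythonPath instead of python.defaultInterpreterPath):
{
  "python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python"
}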