I am trying to make a queue of tasks using Redis RQ. I was following a tutorial, but I am using Docker. Below is my code:
app.py
from flask import Flask, request
import redis
from rq import Queue
import time

app = Flask(__name__)
r = redis.Redis()
q = Queue(connection=r)

def background_task(n):
    """ Function that returns len(n) and simulates a delay """
    delay = 2
    print("Task running")
    print(f"Simulating a {delay} second delay")
    time.sleep(delay)
    print(len(n))
    print("Task complete")
    return len(n)

@app.route("/")
def index():
    if request.args.get("n"):
        job = q.enqueue(background_task, request.args.get("n"))
        return f"Task ({job.id}) added to queue at {job.enqueued_at}"
    return "No value for count provided"

if __name__ == "__main__":
    app.run()
Docker Compose file:
version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: development
  redis:
    image: "redis:alpine"
Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Whenever I run docker-compose up --build and open http://localhost:5000/, I get "URL not found".
Where am I going wrong? And how is one supposed to use the rq worker command in Docker containers?
redis:
  image: "redis:alpine"
The issue is that the image specified in your Docker Compose YAML should be the image built by your Dockerfile.
Because you have a Dockerfile you want to use for this image, you can specify it inline; see the documentation here:
https://docs.docker.com/compose/compose-file/compose-file-v3/
version: "3.9"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1
As a good practice, instead of calling your service "redis" in the Docker Compose file, you should give it a custom name that represents your worker script.
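To address the second part of the question (running rq worker in containers): a common pattern, sketched here rather than taken from the thread, is a separate worker service built from the same image, with both the app and the worker reaching Redis by its service hostname. Note the app code would then also need redis.Redis(host="redis") instead of the default localhost:

```yaml
version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  worker:
    build: .
    # rq's --url flag points the worker at the redis service by hostname
    command: rq worker --url redis://redis:6379
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
```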
Related
I have a service web and a service ipfs, which run my web app and the IPFS server. The problem is that I need ipfs to be up before I can build web. This is the docker-compose file I have so far:
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - IPFS_ADDR=/dns/ipfs/tcp/5001
      - PIN_DATA=False
    depends_on:
      - ipfs
  ipfs:
    image: ipfs/go-ipfs:v0.7.0
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8080:8080"
This is my Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN python ./wait_ipfs.py
RUN python -m unittest
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]
And this is the content of wait_ipfs.py
from time import sleep
from ipfshttpclient.exceptions import ConnectionError
import ipfshttpclient
import sys

IPFS_ADDR = "/dns/ipfs/tcp/5001"
CID = "QmdMxMx29KVYhHnaCc1icWYxQqXwUNCae6t1wS2NqruiHd"

while True:
    try:
        with ipfshttpclient.connect(IPFS_ADDR) as client:
            data = client.get(CID)
        break
    except ConnectionError as e:
        sleep(5)
The problem is that Docker Compose builds web before running ipfs, so I am never able to connect and never able to finish the build. Is there a way I can get ipfs running before web is built?
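Not from the thread, but one detail worth noting: RUN steps execute at image-build time, before the Compose network (and the ipfs service) exists, so no wait loop can succeed there; the wait has to happen at container start instead, e.g. as part of the CMD or an entrypoint script. A minimal, generic wait-for-port helper in that spirit (the function name and defaults are illustrative):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll until a TCP connection to host:port succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError while the service is still down
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

At runtime this could be invoked before the server starts, e.g. python wait_ipfs.py && python3 -m flask run --host=0.0.0.0 in the CMD.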
I am trying to test a simple server endpoint on my local machine when running docker compose up, but the ports do not seem to be exposed when running Docker this way. If I just do a docker build and docker run, I can reach the endpoint on localhost, but not when I use my docker-compose file.
docker-compose.yml file:
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 3000:80
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
Dockerfile
FROM python:3.9.5
ARG VERSION
ARG SERVICE_NAME
ENV PYTHONPATH=/app
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY app /app/app
COPY main.py /app/
CMD ["python", "./app/main.py"]
And then my main.py file
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

if __name__ == '__main__':
    uvicorn.run(app, port=3000, host="0.0.0.0")
docker compose up does not seem to expose the port to localhost.
What I use with build and run that does expose:
docker build -t test-test .
docker run -p 3000:3000 test-test
Is there a way to expose the port to localhost with docker compose up?
The syntax for ports is HOST:CONTAINER. The port your app listens on inside the container is 3000, so you've got the mapping backwards.
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 80:3000
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
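If the goal is to keep using http://localhost:3000, as in the docker run example, the other valid fix is to map host port 3000 to container port 3000 (same HOST:CONTAINER rule):

```yaml
ports:
  - "3000:3000"
```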
I have 2 microservices that I'm dockerizing via docker-compose. Once my node service pings my python service, I get a connection refused.
I can ping both services independently via Postman and everything looks fine. It seems container-to-container networking is what I'm having issues with. The node server makes a request via Axios like so:
const res = await axios.get('bot:9000/test')
and the server code on the Python side looks like:
@app.route('/test', methods=['GET'])
async def tester():
    return jsonify(data='hi'), 200
Compose File
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
Node Docker File
FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
RUN npm install -g nodemon
COPY . .
CMD ["npm", "run", "start"]
Flask Docker File
FROM python:3.7-alpine
RUN adduser -D quart
WORKDIR /home/quart
COPY ./requirements.txt ./
RUN rm -rf /var/cache/apk/*
RUN pip install --no-cache-dir -r requirements.txt --upgrade && \
    chown -R quart:quart ./
COPY ./ /home/quart/
USER quart
CMD ["quart", "run", "-h", "0.0.0.0", "-p", "9000"]
You just need to make sure that the containers you want to talk to each other are on the same network.
Add this code at the end of your docker-compose file:
networks:
  some-net:
    driver: bridge
Then add this block to each service that you want to join the network:
networks:
  - some-net
Your compose file will then look like this:
version: '3'
services:
  bot:
    build:
      dockerfile: Dockerfile.dev
      context: ./app-bot
    volumes:
      - /app/node_modules
      - ./app-bot:/app
    environment:
      - TELEGRAM_API_KEY=xxxx
      - BOT_PORT=4040
    networks:
      - some-net
  channel-scraper:
    restart: always
    image: quart-app
    environment:
      - QUART_APP=api
    build:
      context: ./app-channelscrape/server
      dockerfile: Dockerfile
    ports:
      - "9000:9000"
    env_file:
      - .env
    networks:
      - some-net
networks:
  some-net:
    driver: bridge
I have to run a simple service on Docker Compose. The first image hosts the previously created service, while the second image, which depends on the first one, runs the tests. So I created this Dockerfile:
FROM python:2.7-slim
WORKDIR /flask
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "routes.py"]
Everything works. I created some simple tests, which also work, and placed the file in the same directory as routes.py.
So I tried to create a docker-compose.yml file and did something like this:
version: '2'
services:
  app:
    build: .
    command: 'python MyTest.py'
    ports:
      - "5000:5000"
  tests:
    build:
      context: Mytest.py
    depends_on:
      - app
When I run it, I receive an error:
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
So how should I specify this directory, and where should I place it in the app or tests service?
TypeError: You must specify a directory to build in path
[13341] Failed to execute script docker-compose
The above error tells you that context: should be the folder containing your Dockerfile, but since it seems you can use the same image to test your product, I think there is no need to specify it.
And I guess your MyTest.py will hit port 5000 of your app container to run the test. So what you need is:
version: '2'
services:
  app:
    build: .
    container_name: my_app
    ports:
      - "5000:5000"
  tests:
    build: .
    depends_on:
      - app
    command: python MyTest.py
Here, pay attention to one thing: in MyTest.py you should visit http://my_app:5000 for your test.
Meanwhile, I suggest you sleep for some time in MyTest.py, because depends_on can only ensure that tests starts after app; it cannot assure that your Flask app is already ready at that moment. You can also consider this to assure the order.
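As a sketch of that idea (illustrative, not from the answer): rather than a fixed sleep, MyTest.py could retry its first request until app answers or the attempts run out.

```python
import time

def call_with_retries(fn, attempts=5, delay=0.5):
    """Call fn(), retrying on any exception, up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: let the real error surface
            time.sleep(delay)
```

For example, the first HTTP call of the test run could be wrapped as call_with_retries(lambda: urlopen("http://my_app:5000")).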
You need to specify the dockerfile field, as you are using a version-2 docker-compose file.
Check this out.
Modify your build command:
...
build:
  context: .
  dockerfile: Dockerfile
...
I have a project structured like this:
docker-compose.yml
database>
    models.py
    __init__.py
datajobs>
    check_data.py
    import_data.py
    tasks_name.py
workers>
    Dockerfile
    worker.py
webapp>
    (flask app)
my docker-compose.yml
version: '2'
services:
  # Postgres database
  postgres:
    image: 'postgres:10.3'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
  # Redis message broker
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  # Flask web app
  # webapp:
  #   build: webapp/.
  #   command: >
  #     gunicorn -b 0.0.0.0:8000
  #     --access-logfile -
  #     --reload
  #     app:create_app()
  #   env_file:
  #     - '.env'
  #   volumes:
  #     - '.:/gameover'
  #   ports:
  #     - '8000:8000'
  # Celery workers to write and pull data + message APIs
  worker:
    build: ./worker
    env_file:
      - '.env'
    volumes:
      - '.:/gameover'
    depends_on:
      - redis
  beat:
    build: ./worker
    entrypoint: celery -A worker beat --loglevel=info
    env_file:
      - '.env'
    volumes:
      - '.:/gameover'
    depends_on:
      - redis
  # Flower server for monitoring celery tasks
  monitor:
    build:
      context: ./worker
      dockerfile: Dockerfile
    ports:
      - "5555:5555"
    entrypoint: flower
    command: -A worker --port=5555 --broker=redis://redis:6379
    depends_on:
      - redis
      - worker

volumes:
  postgres:
  redis:
I want to reference the database and datajobs modules in my worker, but in Docker I can't copy a parent file, so I can't reference those modules.
I'd prefer to keep them separate like this, because the Flask app will also use these modules. Additionally, if I copied them into each folder there would be a lot of duplicate code.
So in the worker I want to do from datajobs.data_pull import get_campaigns, but this module isn't copied over by the Dockerfile, as I can't reference the parent folder.
Dockerfile in worker
FROM python:3.6-slim
MAINTAINER Gameover
# Redis variables
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
# Make worker directory, cd and copy files
ENV INSTALL_PATH /worker
RUN mkdir -p $INSTALL_PATH
WORKDIR /worker
COPY . .
# Install dependencies
RUN pip install -r requirements.txt
# Run the worker
ENTRYPOINT celery -A worker worker --loglevel=info
So, the answer to your question is pretty easy:
worker:
  build:
    context: .
    dockerfile: ./worker/Dockerfile
  env_file:
    - '.env'
  volumes:
    - '.:/gameover'
  depends_on:
    - redis
Then in your Dockerfile you can reference all of the paths and copy all of the code you need.
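For instance, the worker's Dockerfile could then look roughly like this (a sketch; exact paths are assumptions, and the question shows the folder as both worker and workers):

```dockerfile
FROM python:3.6-slim
WORKDIR /worker
# Install dependencies first so this layer is cached across code changes
COPY worker/requirements.txt .
RUN pip install -r requirements.txt
# COPY paths are relative to the build context (the project root),
# so the shared packages can be copied in alongside the worker code
COPY database/ ./database/
COPY datajobs/ ./datajobs/
COPY worker/ .
ENTRYPOINT celery -A worker worker --loglevel=info
```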
There are a couple other things I notice...
COPY . .
# Install dependencies
RUN pip install -r requirements.txt
This makes you reinstall all your dependencies on every code change. Instead do:
COPY requirements.txt .
# Install dependencies
RUN pip install -r requirements.txt
COPY . .
So you only reinstall them when requirements.txt changes.
Finally: when I set this kind of thing up, I generally build a single image and just override the command to get workers and beats, so that I don't have to worry about which code is in which container. My celery code uses many of the same modules as my Flask app does. It will simplify your build process quite a bit; just a suggestion.
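That single-image setup might look something like this (a sketch, assuming one Dockerfile at the project root; the commands are taken from the existing entrypoints):

```yaml
services:
  worker:
    build: .
    command: celery -A worker worker --loglevel=info
    depends_on:
      - redis
  beat:
    build: .
    command: celery -A worker beat --loglevel=info
    depends_on:
      - redis
```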
RUN pip install -r requirements.txt
Does the above command install the packages into the project or code folder, or directly into the pre-built Docker image of the project?
Edit: I can't comment on the above post due to reputation points.