I have a problem using Celery in Docker.
I configured two Docker containers, web_server and celery_worker. The celery_worker container includes rabbitmq-server. The web_server calls tasks on the Celery worker.
I configured the same setup in a VM with Vagrant, and it works. But in Docker I get the error below.
Traceback (most recent call last):
File "/web_server/test/test_v1_data_description.py", line 58, in test_create_description
headers=self.get_basic_header()
.........
.........
File "../task_runner/__init__.py", line 31, in run_describe_task
kwargs={})
File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 349, in send_task
self.backend.on_task_call(P, task_id)
File "/usr/local/lib/python3.4/dist-packages/celery/backends/rpc.py", line 32, in on_task_call
maybe_declare(self.binding(producer.channel), retry=True)
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
nose.proxy.OSError: [Errno 111] Connection refused
These are the Dockerfiles for the two containers.
Dockerfile for web_server.
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
# Create work folder.
RUN mkdir /web_server
WORKDIR /web_server
# Setup web server and celery.
ADD ./install_web_server_conf.sh ./install_web_server_conf.sh
RUN chmod +x ./install_web_server_conf.sh
RUN ./install_web_server_conf.sh
# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run web server.
CMD ["python3","web_server.py"]
# Expose port.
EXPOSE 5000
Dockerfile for celery_worker.
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y wget build-essential ca-certificates-java
# Setup python environment.
ADD ./bootstrap/install_python_env.sh ./install_python_env.sh
RUN chmod +x ./install_python_env.sh
RUN ./install_python_env.sh
# Install Python libraries including celery.
RUN pip3 install -r ./core/requirements.txt
# Add mlcore user for Celery worker
RUN useradd --uid 1234 -M mlcore
RUN usermod -L mlcore
# Celery configuration for supervisor
ADD celeryd.conf /etc/supervisor/conf.d/celeryd.conf
RUN mkdir -p /var/log/celery
# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run celery server by supervisor.
CMD ["supervisord", "-c", "/ml_core/supervisord.conf"]
# Expose port.
EXPOSE 8080
EXPOSE 8081
EXPOSE 4040
EXPOSE 7070
EXPOSE 5672
EXPOSE 5671
EXPOSE 15672
Docker containers can't talk to each other via localhost by default. My guess is that your connection string is something like localhost:<port>?
There are a couple of ways to let your containers communicate.
1: linking
http://rominirani.com/2015/07/31/docker-tutorial-series-part-8-linking-containers/
Essentially, at runtime Docker adds an entry to your hosts file that points to the internal IP address of the linked container within the same private Docker network stack.
2: docker run --net=host:
This binds your containers to your host's network stack, so all containers appear to be running on localhost and can be accessed as such. Be aware that you may run into port conflicts if you run multiple containers that bind to the same external port.
3: external HAProxy:
You can bind a DNS entry to an HAProxy instance and configure the proxy to redirect traffic whose host header matches that DNS entry to the host:port your container is running on. Any calls from other containers will "leave" the private Docker network stack, hit the DNS server, and circle back to the HAProxy, which directs them to the proper container.
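For example, with option 1 (linking), the web_server's Celery app would point at the link alias instead of localhost. A minimal sketch, assuming the link alias is celery_worker and the broker still uses RabbitMQ's default guest credentials (both are assumptions about your setup, not taken from it):

from celery import Celery

# "celery_worker" is the hypothetical link alias Docker writes into the hosts file;
# swap in whatever alias and credentials your containers actually use.
app = Celery('web_server', broker='amqp://guest:guest@celery_worker:5672//')

With option 2 (--net=host), a localhost:<port> connection string keeps working, since every container shares the host's network stack.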
I found the reason: the celery_worker container wasn't actually running rabbitmq-server. So I added these two lines to the celery_worker Dockerfile:
# Run rabbitmq server and celery.
ENTRYPOINT service rabbitmq-server start && supervisord -c /ml_core/supervisord.conf
I have migrated my Flask application to FastAPI and I'm trying to deploy the new FastAPI application to Cloud Run using a Dockerfile, but I get an error due to a port issue.
I have tried all the solutions previously given for this error, but nothing works; I have also tried many different ways of writing the Dockerfile, yet I still hit the same issue.
As a last try I used the FastAPI documentation to create my Dockerfile, and it didn't work either.
My Dockerfile:
# Start from the official slim Python base image.
FROM python:3.9-slim
# Set the current working directory to /code.
# This is where we'll put the requirements.txt file and the app directory.
WORKDIR /code
# Copy the file with the requirements to the /code directory.
# Copy only the file with the requirements first, not the rest of the code.
# As this file doesn't change often, Docker will detect it and use the cache for this step, enabling the cache for the next step too.
COPY ./requirements.txt /code/requirements.txt
# Install the package dependencies in the requirements file.
# The --no-cache-dir option tells pip to not save the downloaded packages locally,
# as that is only if pip was going to be run again to install the same packages,
# but that's not the case when working with containers.
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# As this has all the code which is what changes most frequently the Docker
# cache won't be used for this or any following steps easily
COPY ./app /code/app
# Because the program will be started at /code and inside of it is the directory ./app with your code,
# Uvicorn will be able to see and import app from app.main.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
I have also tried to configure the running port in main:
if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=int(os.environ.get('PORT', 8080)), log_level="info")
I'm deploying my application using a cloudbuild.yaml file:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/cog-dev/new-serving', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/cog-dev/new-serving']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'new-serving', '--image', 'gcr.io/cog-dev/new-serving', '--region', 'europe-west1', '--allow-unauthenticated', '--platform', 'managed']
# Store images in Google Artifact Registry
images:
- gcr.io/cog-dev/new-serving
Most of the solutions on Stack Overflow have been tried, even changing the port numbers.
Update:
After following "Use Google Cloud user credentials when testing containers locally", I tested the Docker image locally and I get this error:
File "/code/./app/endpoints/campaign.py", line 11, in <module>
from app.services.recommend_service import RecommendService
File "/code/./app/services/recommend_service.py", line 19, in <module>
datastore_client = datastore.Client()
File "/usr/local/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 301, in __init__
super(Client, self).__init__(
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment.
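That last error is separate from the port issue: outside GCP, the Datastore client cannot infer a project ID from the environment. A minimal sketch of one workaround, assuming the project ID is cog-dev (guessed from the image path above, so adjust it):

from google.cloud import datastore

# Pass the project explicitly (or export GOOGLE_CLOUD_PROJECT before starting the
# container); "cog-dev" is only inferred from gcr.io/cog-dev/new-serving.
datastore_client = datastore.Client(project="cog-dev")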
Good morning,
I am facing a weird issue while composing my RabbitMQ container.
When I build the container without my python script, which creates the test structure of my RabbitMQ, it works fine. I access the container, run the script manually, and everything gets created perfectly, with no errors.
If I run the script with the same command ("python3 manager.py") but from a RUN entry in the Dockerfile, it is suddenly unable to resolve the hostname (or something like that) while creating the RabbitMQ connection, so the build of the container aborts.
I have tried executing it as a background Linux process, and the container is created, but the RabbitMQ structure creation keeps failing.
Docker-compose
version: "3.8"
services:
rabbitmq:
container_name: rabbitmq
image: rabbitmq
build: src/server/
env_file:
- src/server/server.env
ports:
- "15672:15672"
- "5672:5672"
hostname: rabbitmq
networks:
- rabbitmqnet
networks:
rabbitmqnet:
name: rabbitmqnet
driver: bridge
Dockerfile
FROM rabbitmq:3-management
WORKDIR /app
EXPOSE 15672
COPY . /app
RUN apt-get update -y
RUN apt-get install -y python python3-pip
RUN pip install -r requirements.txt
RUN python3 manager.py
manager.py
import pika
import config

connection = pika.BlockingConnection(pika.ConnectionParameters(config.server, config.port, '/', pika.PlainCredentials(config.user, config.password)))
channel = connection.channel()

def main():
    createQueue("test-queue")
    createExchange("test-exchange")
    createBinding("test-exchange", "test-queue", "test")

# This method creates a queue.
def createQueue(qName):
    channel.queue_declare(queue=qName)

# This method creates an exchange.
def createExchange(eName):
    channel.exchange_declare(
        exchange=eName,
        exchange_type='direct'
    )

# This method creates a binding routing key between an exchange and a queue. This allows the publisher to send messages to the queue through the exchange.
def createBinding(eName, qName, routingKey):
    channel.queue_bind(
        exchange=eName,
        queue=qName,
        routing_key=routingKey
    )

if __name__ == "__main__":
    main()
config.py
server = 'rabbitmq'
port = 5672
user = 'user'
password = 'password'
Error
[7/7] RUN python3 manager.py:
#0 0.276 Traceback (most recent call last):
#0 0.276 File "manager.py", line 4, in <module>
#0 0.276 connection = pika.BlockingConnection(pika.ConnectionParameters(config.server, config.port, '/', pika.PlainCredentials(config.user, config.password)))
#0 0.276 File "/usr/local/lib/python3.8/dist-packages/pika/adapters/blocking_connection.py", line 360, in __init__
#0 0.276 self._impl = self._create_connection(parameters, _impl_class)
#0 0.276 File "/usr/local/lib/python3.8/dist-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
#0 0.276 raise self._reap_last_connection_workflow_error(error)
#0 0.276 File "/usr/local/lib/python3.8/dist-packages/pika/adapters/utils/selector_ioloop_adapter.py", line 565, in _resolve
#0 0.276 result = socket.getaddrinfo(self._host, self._port, self._family,
#0 0.276 File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
#0 0.276 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
#0 0.276 socket.gaierror: [Errno -2] Name or service not known
failed to solve: executor failed running [/bin/sh -c python3 manager.py]: exit code: 1
I think you need to use CMD or ENTRYPOINT instead of RUN in your Dockerfile.
Please read more about the topic: Difference between RUN and CMD in a Dockerfile
RUN is an image build step; the state of the container after a RUN command will be committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.
CMD is the command the container executes by default when you launch the built image. A Dockerfile will only use the final CMD defined. The CMD can be overridden when starting a container with docker run $image $other_command.
ENTRYPOINT is also closely related to CMD and can modify the way a container starts an image.
Solved! I am such a newbie with Docker. The issue is that the image's default entrypoint is what starts RabbitMQ.
So, if I execute the script with RUN, neither the container nor the server has started yet, so of course it cannot connect.
And when I used CMD instead, I was overriding that default initialization, so RabbitMQ never started either.
Thank you!
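One way to keep the setup script in the image and still have it work is to run it at container start and retry until the broker accepts connections, rather than connecting once at import time. A rough sketch (the retry loop and delay are assumptions, not part of the original setup):

import time
import pika
import config

# Keep retrying until RabbitMQ in this container (or on this network) is ready.
# OSError covers DNS failures such as the gaierror seen in the build log above.
connection = None
for attempt in range(30):
    try:
        connection = pika.BlockingConnection(pika.ConnectionParameters(
            config.server, config.port, '/',
            pika.PlainCredentials(config.user, config.password)))
        break
    except (pika.exceptions.AMQPConnectionError, OSError):
        time.sleep(2)  # broker not up yet; wait and try again
if connection is None:
    raise RuntimeError("RabbitMQ never became reachable")

The script would then be launched from the container's startup command alongside the image's default RabbitMQ entrypoint, not from a RUN step.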
I am trying to learn how to use Docker and RabbitMQ at the same time.
#Specifying the base image
FROM python:3.10
ADD ./task.py ./home/
#Here we added the Python file that we want to run in Docker and defined its location.
RUN pip install requests celery pika
#Here we installed the dependencies (requests, celery and pika) that task.py needs, using pip.
CMD ["python3", "/home/task.py"]
#Lastly we specified the entry command; this line simply runs our Python script.
This is what my Dockerfile looks like. I then set up another container with RabbitMQ using this command:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
This is what task.py looks like:
from celery import Celery
import pika
from time import sleep

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
connection.close()

app = Celery('task', broker="localhost")

@app.task()
def reverse(text):
    sleep(5)
    return text[::-1]
And when I run the docker run command, I keep getting this error.
PS C:\Users\xyz\PycharmProjects\Sellerie> docker run sellerie
Traceback (most recent call last):
File "/home/task.py", line 5, in <module>
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line
360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
PS >
Can anyone help me better understand where the problem is? Maybe how to connect RabbitMQ with the other Docker container where my Python file is located?
Thank you so much in advance
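For reference, localhost inside the task container refers to that container itself, not to the rabbitmq container. A minimal sketch of connecting by the broker container's name instead, assuming both containers are attached to the same user-defined Docker network (e.g. created with docker network create) and the broker container is named rabbitmq:

import pika
from celery import Celery

# "rabbitmq" resolves through Docker's embedded DNS when both containers share a
# user-defined network; on Docker Desktop, host.docker.internal reaches the host instead.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbitmq', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
connection.close()

app = Celery('task', broker='amqp://guest:guest@rabbitmq:5672//')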
I'm trying to configure a debugger for a dockerized Flask application in VS Code. In order to do that, I've attached ptvsd to my app and exposed its port.
from flask import Flask, redirect, url_for

app = Flask(__name__)

if app.debug:
    print("attaching ptvsd")
    import ptvsd
    ptvsd.enable_attach(address=('0.0.0.0', 3000), redirect_output=True)
    ptvsd.wait_for_attach()
    ptvsd.break_into_debugger()
Dockerfile
FROM python:3
ENV FLASK_APP app/main.py
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0" ]
And the command used to run the container:
docker run -it --rm -p 5000:5000 -p 3000:3000 -e FLASK_ENV=development webserver
No error is shown after running the container. When I try to attach the debugger from VS Code on port 3000, nothing happens. When I open any page of my app in a web browser, this message appears:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/src/app/app/main.py", line 9, in <module>
ptvsd.enable_attach(address = ('0.0.0.0', 3000), redirect_output=True)
File "/usr/local/lib/python3.7/site-packages/ptvsd/attach_server.py", line 101, in enable_attach
ptvsd_enable_attach(address)
File "/usr/local/lib/python3.7/site-packages/ptvsd/_remote.py", line 64, in enable_attach
**kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/pydevd_hooks.py", line 128, in install
daemon = Daemon(**kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 503, in __init__
super(Daemon, self).__init__(wait_for_user, **kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 100, in __init__
self._install_exit_handlers()
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 425, in _install_exit_handlers
self._exithandlers.install()
File "/usr/local/lib/python3.7/site-packages/ptvsd/exit_handlers.py", line 62, in install
self._install_signal_handler()
File "/usr/local/lib/python3.7/site-packages/ptvsd/exit_handlers.py", line 103, in _install_signal_handler
orig[sig] = signal.signal(sig, self._signal_handler)
File "/usr/local/lib/python3.7/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
Can someone explain the meaning of this error message? What is a possible fix for the problem?
I've figured it out thanks to this repo. In short, ptvsd is not compatible with Flask's auto-reload feature; it needs to be disabled.
Ideally, I would like to keep both of those features (debugger and auto-reload) enabled, but I haven't managed that yet. Here is my temporary workaround.
I've created a docker-compose.yml file. It's not necessary, but it simplifies the process of starting the server.
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
      - "3000:3000"
    environment:
      - FLASK_ENV=${FLASK_ENV}
    volumes:
      - .:/server_apps/webserver
Then I've created docker-compose.debugger.yml.
version: '3'
services:
  web:
    # Replace default run function
    # --no-reload is necessary to run PTVSD debugger
    command: ["flask", "run", "--no-reload"]
This replaces the default run command with one that includes the --no-reload argument.
Now, when I'm developing my app, I'm using
$ docker-compose up
which uses the default config (docker-compose.yml), auto-reloading the app on every code change. When something's wrong and I want to use the debugger, I type
$ docker-compose -f docker-compose.yml -f docker-compose.debugger.yml up
I'm working on a simple setup for a Raspberry Pi to communicate with a server via RabbitMQ, and I'm not making a connection. Here is the setup:
Client: Raspberry Pi (Raspbian, Debian 8.0) with Python 2.7 and Pika 0.10.0
RabbitMQ server: Mac Mini running 10.11.6 (OS X El Capitan) with Docker
Docker execution on Mac:
docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Python code to execute on client:
def rabbit_post():
    entry = get_ipaddress()
    post_time = datetime.datetime.now().strftime("%d %B %Y : %I:%M%p")
    rabbit_server = 'machine.tigelane.com'
    credentials = pika.PlainCredentials('machine', 'machine')
    connectionParams = pika.ConnectionParameters(host=rabbit_server, credentials=credentials)
    connection = pika.BlockingConnection(connectionParams)
    channel = connection.channel()
    channel.queue_declare(queue='hello')
    try:
        channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
        print(" [x] Sent 'Hello World!'")
    except:
        print("Unable to post to server: {}".format(rabbit_server))
    connection.close()
I kept seeing this error:
Traceback (most recent call last):
File "./brimstone_post.py", line 75, in <module>
main()
File "./brimstone_post.py", line 71, in main
rabbit_post()
File "./brimstone_post.py", line 58, in rabbit_post
connection = pika.BlockingConnection(connectionParams)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in __init__
self._process_io_for_connection_setup()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
self._open_error_result.is_ready)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
I had read a few other posts and tried some things like the following to troubleshoot:
Viewing the RabbitMQ logs with docker logs some-rabbit: the log file didn't show any connection.
Capturing traffic on the Raspberry Pi to see if I was even trying to send traffic to the server: sudo tcpdump port 5672.
Making sure the firewall was turned off and ports were open on the Mac.
I finally realized that there was no -p option on the docker run command to forward ports to the container. I changed the command line to publish port 5672, and now it's working. Hope this can help someone else who might be using the documentation from hub.docker.com on RabbitMQ.
This is the example they give for the docker container: $ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
This is the new command I'm using to start my RabbitMQ docker container and it's working well: docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:3
While I think I have solved my problem, I have this nagging feeling that I'm missing something (besides other ports that I might need to add) and that there may have been a reason that the port was omitted on the example, or maybe they just left it off thinking everyone would add the port naturally because that's how you use Docker containers. Either way, please feel free to correct my mistake.
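As a quick sanity check that the published port is reachable from the Pi, something like the following minimal sketch can be run on the client (the hostname and credentials mirror the ones in rabbit_post() above and are assumptions about the setup):

import pika

# Hypothetical smoke test: open and close a connection to the broker.
params = pika.ConnectionParameters(
    host='machine.tigelane.com',
    port=5672,
    credentials=pika.PlainCredentials('machine', 'machine'))
try:
    connection = pika.BlockingConnection(params)
    print("Broker reachable on port 5672")
    connection.close()
except pika.exceptions.AMQPError as err:
    print("Still cannot reach the broker: {}".format(err))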