I'm working on a simple setup for a Raspberry PI to communicate with a server via RabbitMQ and I'm not making a connection. Here is the setup:
Client: Raspberry Pi (Raspbian, Debian 8.0) with Python 2.7 and Pika 0.10.0
RabbitMQ server: Mac mini running OS X 10.11.6 (El Capitan) with Docker
Docker execution on Mac:
docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Python code to execute on client:
def rabbit_post():
    entry = get_ipaddress()
    post_time = datetime.datetime.now().strftime("%d %B %Y : %I:%M%p")
    rabbit_server = 'machine.tigelane.com'
    credentials = pika.PlainCredentials('machine', 'machine')
    connectionParams = pika.ConnectionParameters(host=rabbit_server, credentials=credentials)
    connection = pika.BlockingConnection(connectionParams)
    channel = connection.channel()
    channel.queue_declare(queue='hello')
    try:
        channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
        print(" [x] Sent 'Hello World!'")
    except:
        print("Unable to post to server: {}".format(rabbit_server))
    connection.close()
I kept seeing this error:
Traceback (most recent call last):
  File "./brimstone_post.py", line 75, in <module>
    main()
  File "./brimstone_post.py", line 71, in main
    rabbit_post()
  File "./brimstone_post.py", line 58, in rabbit_post
    connection = pika.BlockingConnection(connectionParams)
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in __init__
    self._process_io_for_connection_setup()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
    raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
I had read a few other posts and tried some things like the following to troubleshoot:
Viewing the RabbitMQ logs with docker logs some-rabbit. The log didn't show any connections.
Capturing traffic on the Raspberry Pi to see if it was even trying to send traffic to the server: sudo tcpdump port 5672.
Making sure the firewall was turned off and the ports were open on the Mac.
I finally realized that my docker run command had no -p option to forward ports into the container. I changed the command line to publish port 5672, and now it's working. Hope this helps someone else who is using the documentation from hub.docker.com for RabbitMQ.
This is the example they give for the docker container: $ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
This is the new command I'm using to start my RabbitMQ docker container and it's working well: docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:3
While I think I have solved my problem, I have a nagging feeling that I'm missing something (besides other ports I might need to add), and that there may have been a reason the port was omitted from the example. Or maybe they just left it off, thinking everyone would add the port naturally, because that's how you use Docker containers. Either way, please feel free to correct my mistake.
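With the port published, a quick sanity check from the Pi, before involving pika at all, is to confirm that something is actually listening on 5672. A minimal stdlib-only sketch (the hostname is the one from the question above):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

# Expected to be True only once the container is started with -p 5672:5672:
# port_open('machine.tigelane.com', 5672)
```

If this returns False while docker ps shows the container up, the problem is the port mapping or a firewall, not pika.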
Related
Good morning,
I am facing a weird issue when building my RabbitMQ container with docker-compose.
When I build the container without my Python script (which creates the test structure in RabbitMQ), it works fine: I can access the container, run the script manually, and everything is created perfectly, with no errors.
If I run the script with the same command ("python3 manager.py") but from a RUN entry in the Dockerfile, it seems unable to resolve the hostname (or something like that) while creating the RabbitMQ connection, so it aborts the creation of the container.
I have also tried executing it as a background Linux process; the container is created, but the creation of the RabbitMQ structure keeps failing.
Docker-compose
version: "3.8"
services:
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq
    build: src/server/
    env_file:
      - src/server/server.env
    ports:
      - "15672:15672"
      - "5672:5672"
    hostname: rabbitmq
    networks:
      - rabbitmqnet
networks:
  rabbitmqnet:
    name: rabbitmqnet
    driver: bridge
Dockerfile
FROM rabbitmq:3-management
WORKDIR /app
EXPOSE 15672
COPY . /app
RUN apt-get update -y
RUN apt-get install -y python python3-pip
RUN pip install -r requirements.txt
RUN python3 manager.py
manager.py
import pika
import config

connection = pika.BlockingConnection(pika.ConnectionParameters(config.server, config.port, '/', pika.PlainCredentials(config.user, config.password)))
channel = connection.channel()

def main():
    createQueue("test-queue")
    createExchange("test-exchange")
    createBinding("test-exchange", "test-queue", "test")

# This method creates a queue.
def createQueue(qName):
    channel.queue_declare(queue=qName)

# This method creates an exchange.
def createExchange(eName):
    channel.exchange_declare(
        exchange=eName,
        exchange_type='direct'
    )

# This method creates a binding routing key between an exchange and a queue, so the publisher can send messages to the queue through the exchange.
def createBinding(eName, qName, routingKey):
    channel.queue_bind(
        exchange=eName,
        queue=qName,
        routing_key=routingKey
    )

if __name__ == "__main__":
    main()
config.py
server= 'rabbitmq'
port= 5672
user= 'user'
password= 'password'
Error
[7/7] RUN python3 manager.py:
#0 0.276 Traceback (most recent call last):
#0 0.276 File "manager.py", line 4, in <module>
#0 0.276 connection = pika.BlockingConnection(pika.ConnectionParameters(config.server, config.port, '/', pika.PlainCredentials(config.user, config.password)))
#0 0.276   File "/usr/local/lib/python3.8/dist-packages/pika/adapters/blocking_connection.py", line 360, in __init__
#0 0.276 self._impl = self._create_connection(parameters, _impl_class)
#0 0.276 File "/usr/local/lib/python3.8/dist-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
#0 0.276 raise self._reap_last_connection_workflow_error(error)
#0 0.276 File "/usr/local/lib/python3.8/dist-packages/pika/adapters/utils/selector_ioloop_adapter.py", line 565, in _resolve
#0 0.276 result = socket.getaddrinfo(self._host, self._port, self._family,
#0 0.276 File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
#0 0.276 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
#0 0.276 socket.gaierror: [Errno -2] Name or service not known
failed to solve: executor failed running [/bin/sh -c python3 manager.py]: exit code: 1
I think you need to use CMD or ENTRYPOINT instead of RUN in your Dockerfile.
Please read more about the topic: Difference between RUN and CMD in a Dockerfile
RUN is an image build step, the state of the container after a RUN
command will be committed to the container image. A Dockerfile can
have many RUN steps that layer on top of one another to build the
image.
CMD is the command the container executes by default when you launch
the built image. A Dockerfile will only use the final CMD defined. The
CMD can be overridden when starting a container with docker run $image
$other_command.
ENTRYPOINT is also closely related to CMD and can modify the way a
container starts an image.
Solved! I am quite new to Docker. The issue is that the image's default command is what starts RabbitMQ.
So, if I execute the script with RUN, neither the container nor the server has started yet, so of course it cannot connect.
And when I used CMD instead, I was overriding the default initialization, so RabbitMQ never started either.
Thank you!
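Since RUN executes while the image is being built, when no server process is running, the queue/exchange setup has to happen at container run time instead, for example from a separate one-shot container on the same network. Wherever it runs, it helps to retry until the broker actually accepts connections, because rabbitmq-server needs a few seconds to boot. A minimal, library-agnostic sketch; the connect argument stands in for something like lambda: pika.BlockingConnection(params):

```python
import time

def connect_with_retry(connect, attempts=10, delay=2.0):
    """Call connect() until it succeeds or the attempts run out.

    connect is any zero-argument callable returning a connection object,
    e.g. lambda: pika.BlockingConnection(params). Retrying covers the
    window where the container is up but RabbitMQ is still starting.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # pika raises AMQPConnectionError here
            last_error = exc
            time.sleep(delay)
    raise last_error
```

With this, manager.py could wrap its BlockingConnection call and be run by a second compose service instead of a Dockerfile RUN step.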
I am trying to learn how to use docker and Rabbitmq at the same time now.
#Specifying the base image
FROM python:3.10
ADD ./task.py ./home/
#Here we added the python file that we want to run in docker and define its location.
RUN pip install requests celery pika
#Here we installed the dependencies (requests, celery, pika) that our task.py file needs, using the pip install command.
CMD [ "python3" ,"/home/task.py" ]
#Lastly we specified the entry command; this line simply runs the script with python.
This is what my Dockerfile looks like, and then I set up another container with rabbitmq through the command:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
This is what task.py looks like :
from celery import Celery
import pika
from time import sleep

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
connection.close()

app = Celery('task', broker="localhost")

@app.task()
def reverse(text):
    sleep(5)
    return text[::-1]
And I run the docker run command, but I keep getting this error.
PS C:\Users\xyz\PycharmProjects\Sellerie> docker run sellerie
Traceback (most recent call last):
  File "/home/task.py", line 5, in <module>
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
  File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
    self._impl = self._create_connection(parameters, _impl_class)
  File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
    raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
PS >
Can anyone help me better understand where the problem is? Maybe how to connect rabbitmq with the other docker container where my python file is located?
Thank you so much in advance
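One likely culprit, offered as a guess: inside the container, localhost refers to the container itself, not to the machine where the rabbitmq container published its ports, so host='localhost' in task.py has nothing to connect to. On Docker Desktop the host can be reached as host.docker.internal; with a shared user-defined network, the rabbitmq container's name would work instead. A sketch that keeps the host configurable (RABBIT_HOST is an invented variable name, not something pika reads):

```python
import os

def broker_host(default="host.docker.internal"):
    """Resolve the broker hostname.

    An environment variable override lets the same script run unchanged
    on the host (RABBIT_HOST=localhost) and inside a container.
    """
    return os.environ.get("RABBIT_HOST", default)

# Then, in task.py:
# connection = pika.BlockingConnection(
#     pika.ConnectionParameters(host=broker_host(), port=5672))
```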
I am using dronekit-python in a docker container and am attempting to connect to an instance of MAVProxy running on my host machine (Mac OSX) using the following command:
vehicle = connect('udp:host.docker.internal:14551', wait_ready=True)
but am getting the following error:
File "/usr/local/lib/python3.7/site-packages/pymavlink/mavutil.py", line 1015, in __init__
self.port.bind((a[0], int(a[1])))
OSError: [Errno 99] Cannot assign requested address
Does anyone know what the issue is here? I am able to successfully connect using the above command when I run the python script locally on host but not when I have it running in a docker container.
I found a similar stackoverflow question here but the accepted answer did not work for me. Not sure if I need to be exposing ports or something like that.
Here is the command that I am running on my host machine to kick off MAVProxy:
mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out udp:127.0.0.1:14552
I ended up getting MAVProxy on host and dronekit-python in the docker flask container properly connected.
Seemus790's answer in this gitter thread did the trick.
Working solution:
MAVProxy on host machine (Mac OS in my case)
mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out=tcpin:0.0.0.0:14552
dronekit-python command in docker container:
vehicle = connect('tcp:host.docker.internal:14552', wait_ready=True)
The trick was the --out=tcpin:0.0.0.0:14552 part of the mavproxy command which is documented here
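For context on the original error: the traceback fails inside self.port.bind(...), i.e. a udp: endpoint in pymavlink binds a local listening socket rather than connecting out, so udp:host.docker.internal:14551 asks the container to bind an address that belongs to the host, not to the container. tcp (with tcpin on the MAVProxy side) sidesteps this because the container connects outward instead. The errno is easy to reproduce with the standard library:

```python
import errno
import socket

def try_bind(addr, port):
    """Try to bind a UDP socket; return None on success, the errno on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((addr, port))
        return None
    except OSError as exc:
        return exc.errno
    finally:
        sock.close()

# Binding an address that is not on a local interface (203.0.113.1 is a
# documentation-only IP) fails with EADDRNOTAVAIL, which Linux reports as
# "[Errno 99] Cannot assign requested address".
```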
I have a problem using Celery in Docker.
I configured two Docker containers, web_server and celery_worker. celery_worker includes rabbitmq-server. The web_server calls the task from the celery worker.
I configured the same thing in a VM with Vagrant, and it works there. But in Docker I get the error message below.
Traceback (most recent call last):
File "/web_server/test/test_v1_data_description.py", line 58, in test_create_description
headers=self.get_basic_header()
.........
.........
File "../task_runner/__init__.py", line 31, in run_describe_task
kwargs={})
File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 349, in send_task
self.backend.on_task_call(P, task_id)
File "/usr/local/lib/python3.4/dist-packages/celery/backends/rpc.py", line 32, in on_task_call
maybe_declare(self.binding(producer.channel), retry=True)
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
nose.proxy.OSError: [Errno 111] Connection refused
These are Dockerfiles for two containers.
Dockerfile for web_server.
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
# Create work folder.
RUN mkdir /web_server
WORKDIR /web_server
# Setup web server and celery.
ADD ./install_web_server_conf.sh ./install_web_server_conf.sh
RUN chmod +x ./install_web_server_conf.sh
RUN ./install_web_server_conf.sh
#Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run web server.
CMD ["python3","web_server.py"]
# Expose port.
EXPOSE 5000
Dockerfile for celery_worker.
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y wget build-essential ca-certificates-java
# Setup python environment.
ADD ./bootstrap/install_python_env.sh ./install_python_env.sh
RUN chmod +x ./install_python_env.sh
RUN ./install_python_env.sh
# Install Python libraries including celery.
RUN pip3 install -r ./core/requirements.txt
# Add mlcore user for Celery worker
RUN useradd --uid 1234 -M mlcore
RUN usermod -L mlcore
# Celery configuration for supervisor
ADD celeryd.conf /etc/supervisor/conf.d/celeryd.conf
RUN mkdir -p /var/log/celery
# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run celery server by supervisor.
CMD ["supervisord", "-c", "/ml_core/supervisord.conf"]
# Expose port.
EXPOSE 8080
EXPOSE 8081
EXPOSE 4040
EXPOSE 7070
EXPOSE 5672
EXPOSE 5671
EXPOSE 15672
By default, Docker containers can't reach each other via localhost. My guess is that your connection string is something like localhost:<port>?
There are a couple of ways to let your containers communicate.
1: linking
http://rominirani.com/2015/07/31/docker-tutorial-series-part-8-linking-containers/
Essentially, at runtime, Docker adds an entry to the container's hosts file that points to the internal IP address of the linked container within the same private Docker network stack.
2: docker run --net=host:
this binds your containers to your host network stack, thus, all containers will appear to be running from localhost, and can be accessed as such. You may run into port conflict issues if you are running multiple containers that bind to the same external port, just be aware of that.
3: external HAProxy:
you can bind a DNS entry to an HAProxy, and config the proxy to redirect traffic with a hostheader that matches the DNS entry to the host:port your container is running on, and any calls from other containers will "leave" the private docker network stack, hit the DNS server, and circle back to the HAProxy, which will direct to the proper container.
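Whichever option is chosen, the Python side changes in exactly one place: the broker host stops being localhost and becomes the link alias or resolvable hostname. A small stdlib-only helper for building the amqp:// URL that Celery and kombu accept (the guest defaults match a stock RabbitMQ install):

```python
from urllib.parse import quote

def amqp_url(host, port=5672, user="guest", password="guest", vhost="/"):
    """Build an AMQP broker URL: amqp://user:password@host:port/vhost."""
    # The default vhost "/" must be percent-encoded in the URL, giving %2F.
    return "amqp://{}:{}@{}:{}/{}".format(
        user, password, host, port, quote(vhost, safe=""))

# e.g., with linking, the web_server container could point Celery at
# amqp_url("celery_worker") instead of a localhost-based broker string.
```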
I found the reason. The Docker container for celery_worker doesn't run rabbitmq-server automatically. So I added two lines to the Dockerfile of celery_worker, like below.
# Run rabbitmq server and celery.
ENTRYPOINT service rabbitmq-server start && supervisord -c /ml_core/supervisord.conf
I have an app that uses Tornado and tornado-redis. [image "app" in docker images]
I start redis:
docker run --name some-redis -d redis
Then I want to link my app with redis:
docker run --name some-app --link some-redis:redis app
And I have error:
Traceback (most recent call last):
  File "./app.py", line 41, in <module>
    c.connect()
  File "/usr/local/lib/python3.4/site-packages/tornadoredis/client.py", line 333, in connect
    self.connection.connect()
  File "/usr/local/lib/python3.4/site-packages/tornadoredis/connection.py", line 79, in connect
    raise ConnectionError(str(e))
tornadoredis.exceptions.ConnectionError: [Errno 111] Connection refused
I have tested my code with local Tornado and Redis, and it works. The problem is in
c = tornadoredis.Client()
c.connect()
Why can't my app connect to the redis container? How can I fix that? I use the standard port 6379.
Thanks!
tornadoredis attempts to use redis on localhost. (See source here)
So you need to inform tornadoredis where redis is running (since to the docker image it is not running on localhost).
For example:
c = tornadoredis.Client(host="<hostname>")
c.connect()
In your specific case, substitute "redis" for "<hostname>".