I am trying to learn how to use Docker and RabbitMQ together.
# Specify the base image
FROM python:3.10
# Add the Python file that we want to run in Docker, and define its location
ADD ./task.py /home/
# Install the dependencies; task.py uses the requests, celery and pika
# libraries, so we install them with pip
RUN pip install requests celery pika
# Lastly, specify the start command; this line simply runs the script with Python
CMD [ "python3", "/home/task.py" ]
This is what my Dockerfile looks like. I then set up another container with RabbitMQ through the command:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
This is what task.py looks like:
from celery import Celery
import pika
from time import sleep

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
connection.close()

app = Celery('task', broker="localhost")

@app.task()
def reverse(text):
    sleep(5)
    return text[::-1]
Then I run the docker run command, but I keep getting this error:
PS C:\Users\xyz\PycharmProjects\Sellerie> docker run sellerie
Traceback (most recent call last):
File "/home/task.py", line 5, in <module>
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line
360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
Can anyone help me better understand where the problem is? Maybe how to connect RabbitMQ with the other Docker container where my Python file is located?
Thank you so much in advance.
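One common fix, sketched below, assumes the RabbitMQ container keeps the name rabbitmq from the command above: inside a container, localhost refers to the container itself, so both containers need to share a Docker network, and task.py should use the broker container's name as the host.
import pika

# Put both containers on the same user-defined network so they can reach
# each other by container name (shell commands shown as comments for context):
#   docker network create rabbit-net
#   docker run -d --rm --name rabbitmq --network rabbit-net \
#       -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
#   docker run --network rabbit-net sellerie
#
# Then connect to the broker by its container name instead of localhost:
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='rabbitmq', port=5672))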
Related
I have a test Kubernetes cluster (Minikube) running in a Docker container.
I want to write some Python code to start jobs on that cluster. If I go inside the container:
$ docker exec -it kubernetes /bin/sh
#/ apk add python3
#/ apk add py3-pip
#/ pip install kubernetes
#/ python3
>>> from kubernetes import config, client
>>> config.load_kube_config()
It works, and I can then create a client and do my stuff.
However, if I run the same Python commands from the host machine:
>>> config.load_kube_config()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/python/kubernetes/config/kube_config.py", line 814, in load_kube_config
loader = _get_kube_config_loader(
File "/path/to/python/kubernetes/config/kube_config.py", line 773, in _get_kube_config_loader
raise ConfigException(
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
That sort of makes sense, since Kubernetes is not running on the host machine. How do I get around this? Is there another way to fetch the config file?
The config should be at the ~/.kube/config path:
mkdir -p ~/.kube/
cp <current config path> ~/.kube/config
# The steps below verify that your config is correct
export KUBECONFIG=~/.kube/config
kubectl get pods
Once the above steps work, you can try running the Python script.
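If the file lives somewhere else, the client can also be pointed at an explicit path; a minimal sketch, where the path is an assumption to adjust to wherever you copied the kubeconfig:
from kubernetes import client, config

# Load an explicit kubeconfig instead of relying on the default location
config.load_kube_config(config_file="~/.kube/config")  # path is an assumption

# Quick smoke test: list pods across all namespaces
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)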
I'm trying to configure a debugger for a dockerized Flask application in VS Code. In order to do that, I've attached ptvsd to my app and exposed its port.
from flask import Flask, redirect, url_for

app = Flask(__name__)

if app.debug:
    print("attaching ptvsd")
    import ptvsd
    ptvsd.enable_attach(address=('0.0.0.0', 3000), redirect_output=True)
    ptvsd.wait_for_attach()
    ptvsd.break_into_debugger()
Dockerfile
FROM python:3
ENV FLASK_APP app/main.py
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0" ]
And the command used to run the container:
docker run -it --rm -p 5000:5000 -p 3000:3000 -e FLASK_ENV=development webserver
No error is shown after running the container. When I try to attach the debugger from VS Code on port 3000, nothing happens. When I open any page of my app in a web browser, this message appears:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/src/app/app/main.py", line 9, in <module>
ptvsd.enable_attach(address = ('0.0.0.0', 3000), redirect_output=True)
File "/usr/local/lib/python3.7/site-packages/ptvsd/attach_server.py", line 101, in enable_attach
ptvsd_enable_attach(address)
File "/usr/local/lib/python3.7/site-packages/ptvsd/_remote.py", line 64, in enable_attach
**kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/pydevd_hooks.py", line 128, in install
daemon = Daemon(**kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 503, in __init__
super(Daemon, self).__init__(wait_for_user, **kwargs)
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 100, in __init__
self._install_exit_handlers()
File "/usr/local/lib/python3.7/site-packages/ptvsd/daemon.py", line 425, in _install_exit_handlers
self._exithandlers.install()
File "/usr/local/lib/python3.7/site-packages/ptvsd/exit_handlers.py", line 62, in install
self._install_signal_handler()
File "/usr/local/lib/python3.7/site-packages/ptvsd/exit_handlers.py", line 103, in _install_signal_handler
orig[sig] = signal.signal(sig, self._signal_handler)
File "/usr/local/lib/python3.7/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
Can someone explain to me the meaning of this error message? What is a possible fix for this problem?
I've figured that out thanks to this repo. In short, PTVSD is not compatible with Flask's "auto reload" feature. It needs to be disabled.
Ideally, I would like to keep both of those features (debugger and auto-reload) enabled, but I haven't managed to do that yet. Here is my temporary workaround.
I've created a docker-compose.yml file. It's not necessary, but it simplifies the process of starting the server.
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
      - "3000:3000"
    environment:
      - FLASK_ENV=${FLASK_ENV}
    volumes:
      - .:/server_apps/webserver
Then I've created docker-compose.debugger.yml.
version: '3'
services:
  web:
    # Replace the default run command;
    # --no-reload is necessary to run the PTVSD debugger
    command: ["flask", "run", "--no-reload"]
This replaces the default run command with one that adds the --no-reload argument.
Now, when I'm developing my app, I'm using
$ docker-compose up
which deploys the default config (docker-compose.yml), auto-reloading the app on every code change. When something's wrong and I want to use the debugger, I type
$ docker-compose -f docker-compose.yml -f docker-compose.debugger.yml up
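On the VS Code side, a matching attach configuration is also needed in launch.json; a rough sketch, where the name and the path mapping are assumptions based on the Dockerfile's WORKDIR above:
{
    "name": "Attach to Flask in Docker",
    "type": "python",
    "request": "attach",
    "host": "localhost",
    "port": 3000,
    "pathMappings": [
        { "localRoot": "${workspaceFolder}", "remoteRoot": "/usr/src/app" }
    ]
}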
I'm working on a simple setup for a Raspberry Pi to communicate with a server via RabbitMQ, and I'm not making a connection. Here is the setup:
Client: Raspberry Pi (Raspbian, Debian 8.0) with Python 2.7 and Pika 0.10.0
RabbitMQ Server: Mac Mini running 10.11.6 (OS X El Capitan) with Docker
Docker execution on Mac:
docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Python code to execute on client:
def rabbit_post():
    entry = get_ipaddress()
    post_time = datetime.datetime.now().strftime("%d %B %Y : %I:%M%p")
    rabbit_server = 'machine.tigelane.com'
    credentials = pika.PlainCredentials('machine', 'machine')
    connectionParams = pika.ConnectionParameters(host=rabbit_server, credentials=credentials)
    connection = pika.BlockingConnection(connectionParams)
    channel = connection.channel()
    channel.queue_declare(queue='hello')
    try:
        channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
        print(" [x] Sent 'Hello World!'")
    except:
        print("Unable to post to server: {}".format(rabbit_server))
    connection.close()
I kept seeing this error:
Traceback (most recent call last):
  File "./brimstone_post.py", line 75, in <module>
    main()
  File "./brimstone_post.py", line 71, in main
    rabbit_post()
  File "./brimstone_post.py", line 58, in rabbit_post
    connection = pika.BlockingConnection(connectionParams)
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in __init__
    self._process_io_for_connection_setup()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
    raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
I had read a few other posts and tried some things like the following to troubleshoot:
- Viewing the RabbitMQ logs with docker logs some-rabbit; the log file didn't show any connection.
- Capturing traffic on the Raspberry Pi to see if I was even trying to send traffic to the server: sudo tcpdump port 5672
- Making sure the firewall was turned off / ports were open on the Mac.
I finally realized that my Docker command had no -p option to forward ports into the container. I changed the command to publish port 5672 and now it's working. Hope this can help someone else who might be using the docs from hub.docker.com on RabbitMQ.
This is the example they give for the docker container: $ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
This is the new command I'm using to start my RabbitMQ docker container and it's working well: docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:3
While I think I have solved my problem, I have a nagging feeling that I'm missing something (besides other ports that I might need to add), and that there may have been a reason the port was omitted from the example; or maybe they just left it off, thinking everyone would add the port naturally, because that's how you use Docker containers. Either way, please feel free to correct my mistake.
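As a side note on the "other ports" above: if you also want the web management UI, a variant of the same command using the management image and its UI port might look like this (a sketch; only the image tag and the extra -p differ from the working command):
docker run -v /Users/tigelane/Documents/Development/brimstone_manager:/var/lib/rabbitmq -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management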
I have a problem using Celery in Docker.
I configured two Docker containers, web_server and celery_worker. celery_worker includes rabbitmq-server. The web_server calls tasks from the Celery worker.
I configured the same thing in a VM with Vagrant, and it works. But Docker gives the error message below.
Traceback (most recent call last):
File "/web_server/test/test_v1_data_description.py", line 58, in test_create_description
headers=self.get_basic_header()
.........
.........
File "../task_runner/__init__.py", line 31, in run_describe_task
kwargs={})
File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 349, in send_task
self.backend.on_task_call(P, task_id)
File "/usr/local/lib/python3.4/dist-packages/celery/backends/rpc.py", line 32, in on_task_call
maybe_declare(self.binding(producer.channel), retry=True)
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 194, in _get_channel
channel = self._channel = channel()
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 425, in __call__
value = self.__value__ = self.__contract__()
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
nose.proxy.OSError: [Errno 111] Connection refused
These are the Dockerfiles for the two containers.
Dockerfile for web_server:
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
# Create work folder.
RUN mkdir /web_server
WORKDIR /web_server
# Setup web server and celery.
ADD ./install_web_server_conf.sh ./install_web_server_conf.sh
RUN chmod +x ./install_web_server_conf.sh
RUN ./install_web_server_conf.sh
#Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run web server.
CMD ["python3","web_server.py"]
# Expose port.
EXPOSE 5000
Dockerfile for celery_worker:
FROM ubuntu:14.04
MAINTAINER Jinho Yoo
# Update packages.
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y wget build-essential ca-certificates-java
# Setup python environment.
ADD ./bootstrap/install_python_env.sh ./install_python_env.sh
RUN chmod +x ./install_python_env.sh
RUN ./install_python_env.sh
# Install Python libraries including celery.
RUN pip3 install -r ./core/requirements.txt
# Add mlcore user for Celery worker
RUN useradd --uid 1234 -M mlcore
RUN usermod -L mlcore
# Celery configuration for supervisor
ADD celeryd.conf /etc/supervisor/conf.d/celeryd.conf
RUN mkdir -p /var/log/celery
# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*
# Run celery server by supervisor.
CMD ["supervisord", "-c", "/ml_core/supervisord.conf"]
# Expose port.
EXPOSE 8080
EXPOSE 8081
EXPOSE 4040
EXPOSE 7070
EXPOSE 5672
EXPOSE 5671
EXPOSE 15672
Docker containers can't talk to each other out of the box. My guess is that your connection string is something like localhost:<port>?
There are a couple of ways to let your containers communicate:
1: linking
http://rominirani.com/2015/07/31/docker-tutorial-series-part-8-linking-containers/
Essentially, at runtime, Docker adds an entry to your hosts file that points to the internal IP address of the other container within the same private Docker network stack.
2: docker run --net=host:
This binds your containers to your host network stack, so all containers will appear to be running from localhost and can be accessed as such. You may run into port-conflict issues if you run multiple containers that bind to the same external port, so be aware of that.
3: external HAProxy:
You can bind a DNS entry to an HAProxy and configure the proxy to redirect traffic with a host header that matches the DNS entry to the host:port your container is running on; any calls from other containers will "leave" the private Docker network stack, hit the DNS server, and circle back to the HAProxy, which will direct them to the proper container.
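For example, option 1 applied to this setup might look like the sketch below (the image and alias names are assumptions); web_server can then reach the broker through the alias instead of localhost:
# Link the worker (which runs rabbitmq-server) into the web_server
# container under the alias "rabbitmq":
docker run -d --name celery_worker celery_worker
docker run -d --name web_server --link celery_worker:rabbitmq -p 5000:5000 web_server
Inside web_server, the Celery broker URL would then be amqp://rabbitmq rather than amqp://localhost.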
I found the reason. The Docker container for celery_worker wasn't running rabbitmq-server. So I added two lines to the Dockerfile of celery_worker, like below.
# Run rabbitmq server and celery.
ENTRYPOINT service rabbitmq-server start && supervisord -c /ml_core/supervisord.conf
I have an app that uses Tornado and tornado-redis (image "app" in docker images).
I start redis:
docker run --name some-redis -d redis
Then I want to link my app with redis:
docker run --name some-app --link some-redis:redis app
And I get this error:
Traceback (most recent call last):
  File "./app.py", line 41, in <module>
    c.connect()
  File "/usr/local/lib/python3.4/site-packages/tornadoredis/client.py", line 333, in connect
    self.connection.connect()
  File "/usr/local/lib/python3.4/site-packages/tornadoredis/connection.py", line 79, in connect
    raise ConnectionError(str(e))
tornadoredis.exceptions.ConnectionError: [Errno 111] Connection refused
I have tested my code with local Tornado and Redis, and it works. The problem is in
c = tornadoredis.Client()
c.connect()
Why can't my app connect to the Redis container? How do I fix that? I use the standard port 6379.
Thanks!
tornadoredis attempts to use Redis on localhost (see the source here).
So you need to tell tornadoredis where Redis is running, since from inside the Docker container it is not running on localhost.
For example:
c = tornadoredis.Client(host="<hostname>")
c.connect()
In your specific case, use "redis" as the hostname, since that is the alias you gave the linked container.