I am following the "hello world" tutorial at http://www.rabbitmq.com/tutorials/tutorial-two-python.html.
worker.py looks like this:
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    time.sleep(body.count('.'))
    print " [x] Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='task_queue')
channel.start_consuming()
I have used this code in my work. Everything runs smoothly until, at some point in the queue, an exception is raised after printing [x] Done:
Traceback (most recent call last):
File "hullworker2.py", line 242, in <module>
channel.basic_consume(callback,queue='test_queue2')
File "/usr/local/lib/python2.7/dist-packages/pika/channel.py", line 211, in basic_consume
{'consumer_tag': consumer_tag})])
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 904, in _rpc
self.connection.process_data_events()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 88, in process_data_events
if self._handle_read():
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 184, in _handle_read
super(BlockingConnection, self)._handle_read()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 300, in _handle_read
return self._handle_error(error)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 264, in _handle_error
self._handle_disconnect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 181, in _handle_disconnect
self._on_connection_closed(None, True)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 232, in _on_connection_closed
self._channels[channel]._on_close(method_frame)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 817, in _on_close
self._send_method(spec.Channel.CloseOk(), None, False)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 920, in _send_method
self.connection.send_method(self.channel_number, method_frame, content)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 120, in send_method
self._send_method(channel_number, method_frame, content)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1331, in _send_method
self._send_frame(frame.Method(channel_number, method_frame))
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 245, in _send_frame
super(BlockingConnection, self)._send_frame(frame_value)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1312, in _send_frame
raise exceptions.ConnectionClosed
pika.exceptions.ConnectionClosed
I don't understand how the connection closes on its own in the middle of processing. The process runs fine for hundreds of messages in the queue, then suddenly this error comes up.
Any help appreciated.
There is a concept of heartbeats. It's basically a way for the server to make sure that the client is still connected.
When you do
time.sleep( body.count('.') )
you block the code for N seconds. That means that if the server sends a heartbeat frame to check whether your client is still alive, it will not get a response back, because your code is blocked and never notices that the heartbeat arrived.
Instead of time.sleep() you should use connection.sleep(). This will also make the code "sleep" for N seconds, but it will keep communicating with the server and respond to heartbeats.
sleep(duration)
A safer way to sleep than calling time.sleep() directly which will keep the adapter from ignoring frames sent from RabbitMQ. The connection will “sleep” or block the number of seconds specified in duration in small intervals.
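For example, here is a minimal version of the worker above using connection.sleep(), a sketch assuming the same pika 0.x BlockingConnection API used in the question:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    # connection.sleep() keeps processing I/O (including heartbeat frames)
    # while blocking, so the broker does not see the client as dead
    connection.sleep(body.count('.'))
    print " [x] Done"
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='task_queue')
channel.start_consuming()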
I've been trying to write code that collects crypto data from Binance. Binance automatically disconnects after 24 hours. Is there any way for me to reconnect after a disconnection? I believe run_forever should take care of that for me, but it dies when an error is thrown. I will be running this program on a server 24/7. I will also need a way to be notified when it disconnects, maybe via a Telegram/Discord bot that I can build; where do I put the code that sends that notification?
This is the error I get:
Traceback (most recent call last):
File "exchanges/binance/binance_ticker.py", line 97, in <module>
start()
File "exchanges/binance/binance_ticker.py", line 94, in start
rel.dispatch()
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/rel.py", line 205, in dispatch
registrar.dispatch()
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/registrar.py", line 72, in dispatch
if not self.loop():
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/registrar.py", line 81, in loop
e = self.check_events()
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/registrar.py", line 232, in check_events
self.callback('read', fd)
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/registrar.py", line 125, in callback
self.events[etype][fd].callback()
File "/home/pyjobs/.local/lib/python3.8/site-packages/rel/listener.py", line 108, in callback
if not self.cb(*self.args) and not self.persist and self.active:
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_app.py", line 349, in read
op_code, frame = self.sock.recv_data_frame(True)
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_core.py", line 401, in recv_data_frame
frame = self.recv_frame()
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_core.py", line 440, in recv_frame
return self.frame_buffer.recv_frame()
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_abnf.py", line 352, in recv_frame
payload = self.recv_strict(length)
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_abnf.py", line 373, in recv_strict
bytes_ = self.recv(min(16384, shortage))
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_core.py", line 524, in _recv
return recv(self.sock, bufsize)
File "/home/pyjobs/.local/lib/python3.8/site-packages/websocket/_socket.py", line 122, in recv
raise WebSocketConnectionClosedException(
websocket._exceptions.WebSocketConnectionClosedException: Connection to remote host was lost.
My code:
import websocket
import rel

uri = "wss://stream.binance.com:9443/ws/!ticker@arr"

def on_message(ws, message):
    print(message)

def on_error(ws, error):
    print(error)
    write_logs(error)

def on_close(ws, close_status_code, close_msg):
    print("### closed ###")
    write_logs(str(close_status_code) + str(close_msg))

def on_open(ws):
    print("Opened connection")

def start():
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp(uri,
                                on_open=on_open,
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    ws.run_forever(dispatcher=rel)  # Set the dispatcher to automatic reconnection.
    rel.signal(2, rel.abort)  # Keyboard Interrupt
    rel.dispatch()

start()
The comment on the line ws.run_forever(dispatcher=rel)  # Set the dispatcher to automatic reconnection. suggests that automatic reconnection depends on the rel module. Does it? And how do the rel module and the dispatcher argument work together?
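If you don't want to depend on rel at all, one way to guarantee reconnection is a plain retry loop around run_forever(), which blocks and returns when the connection drops. A minimal sketch (the 5-second delay and the notify() hook are illustrative assumptions, not part of the original code):

import time
import websocket

def start():
    while True:
        ws = websocket.WebSocketApp(uri,
                                    on_open=on_open,
                                    on_message=on_message,
                                    on_error=on_error,
                                    on_close=on_close)
        ws.run_forever()  # blocks until the connection drops or errors out
        # We only get here after a disconnect; this is where a hypothetical
        # notify() could push an alert to a Telegram/Discord bot.
        print("Disconnected; reconnecting in 5 seconds...")
        time.sleep(5)  # illustrative pause before reconnecting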
Before you tell me, yes I am aware that selfbots can get you banned. My selfbot is for work purposes in a server with me and three others. I'm doing nothing shady or weird over here.
I'm using the following selfbot code: https://github.com/Supersebi3/Selfbot
Upon logging in, since I'm in about 50 servers, I experience the following:
This carries on for several minutes, until I eventually get a MemoryError:
File "main.py", line 96, in <module>
bot.run(token, bot=False)
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 519, in run
self.loop.run_until_complete(self.start(*args, **kwargs))
File "D:\Python\Python36-32\lib\asyncio\base_events.py", line 468, in run_until_complete
return future.result()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 491, in start
yield from self.connect()
File "D:\Python\Python36-32\lib\site-packages\discord\client.py", line 448, in connect
yield from self.ws.poll_event()
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 431, in poll_event
yield from self.received_message(msg)
File "D:\Python\Python36-32\lib\site-packages\discord\gateway.py", line 327, in received_message
log.debug('WebSocket Event: {}'.format(msg))
MemoryError
Can anyone explain why this is happening and how I can fix it? Is there any way I can skip the chunk processing of members for every server my selfbot account is in?
I can use KafkaConsumer to consume messages in separate threads.
However, when I use multiprocessing.Process instead of threading.Thread, I get an error:
OSError: [Errno 9] Bad file descriptor
This question and the documentation suggest that using multiprocessing to consume messages in parallel is possible. Would someone please share a working example?
Edit
Here's some sample code. Sorry, the original code is too involved, so I created a sample that I hope communicates what is happening. This code works fine if I use threading.Thread instead of multiprocessing.Process.
from multiprocessing import Process

from kafka import KafkaConsumer

class KafkaWrapper():
    def __init__(self):
        self.consumer = KafkaConsumer(bootstrap_servers='my.server.com')

    def consume(self, topic):
        self.consumer.subscribe([topic])
        for message in self.consumer:
            print(message.value)

class ServiceInterface():
    def __init__(self):
        self.kafka_wrapper = KafkaWrapper()

    def start(self, topic):
        self.kafka_wrapper.consume(topic)

class ServiceA(ServiceInterface):
    pass

class ServiceB(ServiceInterface):
    pass

def main():
    serviceA = ServiceA()
    serviceB = ServiceB()
    jobs = []
    # The code works fine if I use threading.Thread here instead of Process
    jobs.append(Process(target=serviceA.start, args=("my-topic",)))
    jobs.append(Process(target=serviceB.start, args=("my-topic",)))
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()

if __name__ == "__main__":
    main()
And here's the error I see (again, my actual code is different from the sample above, and it works fine if I use threading.Thread but not with multiprocessing.Process):
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "service_interface.py", line 58, in start
self._kafka_wrapper.start_consuming(self.service_object_id)
File "kafka_wrapper.py", line 141, in start_consuming
for message in self._consumer:
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1082, in __next__
return next(self._iterator)
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1022, in _message_generator
self._client.poll(timeout_ms=poll_ms, sleep=True)
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 556, in poll
responses.extend(self._poll(timeout, sleep=sleep))
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 573, in _poll
ready = self._selector.select(timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "service_interface.py", line 58, in start
self._kafka_wrapper.start_consuming(self.service_object_id)
File "kafka_wrapper.py", line 141, in start_consuming
for message in self._consumer:
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1082, in __next__
return next(self._iterator)
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1022, in _message_generator
self._client.poll(timeout_ms=poll_ms, sleep=True)
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 556, in poll
responses.extend(self._poll(timeout, sleep=sleep))
OSError: [Errno 9] Bad file descriptor
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 573, in _poll
ready = self._selector.select(timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
OSError: [Errno 9] Bad file descriptor
Kafka consumers can be either multi-process or multi-threaded (make sure the client library you use correctly supports Kafka consumer groups, which is required with early versions of Kafka); the choice is up to you.
However, if you want to use processes, the Kafka client library needs to guarantee that it is fork-safe: the underlying TCP connections (to the Kafka servers) must not be shared by more than one process. This is why you got a connection error.
As a workaround, you should not create the KafkaConsumer before spawning the processes. Instead, move its creation into each process.
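For example, a minimal sketch of your sample restructured so that each process creates its own consumer (reusing the bootstrap server and topic names from your sample):

from multiprocessing import Process

from kafka import KafkaConsumer

def consume(topic):
    # The consumer (and its TCP sockets) is created inside the child process,
    # so no connection is shared across the fork
    consumer = KafkaConsumer(bootstrap_servers='my.server.com')
    consumer.subscribe([topic])
    for message in consumer:
        print(message.value)

def main():
    jobs = [Process(target=consume, args=("my-topic",)) for _ in range(2)]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()

if __name__ == "__main__":
    main()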
Another way is to use a single thread/process to fetch messages, and a separate process pool to do the actual work.
I have a RabbitMQ (version 3.2.4) async consumer (as described here) that listens to a queue/routing key; it ran without any issues until I recently made some changes.
Certain tasks are time-consuming, so I decided to use the multiprocessing library to spin off sub-processes that handle these intensive tasks, using a multiprocessing Queue/Pool design so that my main task proceeds without waiting.
my_queue = multiprocessing.Queue()
my_pool = multiprocessing.Pool(2, my_method, (my_queue,))
Once the queue and pool are initialised, I pass the queue as an argument while initializing the consumer (ExampleConsumer's __init__ method, as in the example linked above). Then, within the on_message method, I push messages onto my_queue for the time-intensive tasks.
Edit:
Some sample code:
def main():
    logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
    my_queue = multiprocessing.Queue()
    my_pool = multiprocessing.Pool(2, my_class().my_method, (my_queue,))

    example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F', my_queue)
    try:
        example.run()
        my_pool.close()
        my_pool.join()
    except KeyboardInterrupt:
        my_pool.terminate()
        example.stop()
The __init__ and on_message methods of the consumer:
def __init__(self, amqp_url, queue):
    """Create a new instance of the consumer class, passing in the AMQP
    URL used to connect to RabbitMQ.

    :param str amqp_url: The AMQP url to connect with

    """
    self._connection = None
    self._channel = None
    self._closing = False
    self._consumer_tag = None
    self._url = amqp_url
    self.queue = queue

def on_message(self, unused_channel, basic_deliver, properties, body):
    """Invoked by pika when a message is delivered from RabbitMQ. The
    channel is passed for your convenience. The basic_deliver object that
    is passed in carries the exchange, routing key, delivery tag and
    a redelivered flag for the message. The properties passed in is an
    instance of BasicProperties with the message properties and the body
    is the message that was sent.

    :param pika.channel.Channel unused_channel: The channel object
    :param pika.Spec.Basic.Deliver: basic_deliver method
    :param pika.Spec.BasicProperties: properties
    :param str|unicode body: The message body

    """
    LOGGER.info('Received message # %s from %s: %s',
                basic_deliver.delivery_tag, properties.app_id, body)
    self.acknowledge_message(basic_deliver.delivery_tag)
    self.queue.put(str(body))
After making these changes I have started seeing exceptions of the following type:
File "consumer_new.py", line 500, in run
self._connection.ioloop.start()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 355, in start
self.process_timeouts()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 283, in process_timeouts
timer['callback']()
File "consumer_new.py", line 290, in reconnect
self._connection.ioloop.start()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 354, in start
self.poll()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 602, in poll
self._process_fd_events(fd_event_map, write_only)
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 443, in _process_fd_events
handler(fileno, events, write_only=write_only)
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 364, in _handle_events
self._handle_read()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 415, in _handle_read
self._on_data_available(data)
File "/usr/local/lib/python2.7/site-packages/pika/connection.py", line 1347, in _on_data_available
self._process_frame(frame_value)
File "/usr/local/lib/python2.7/site-packages/pika/connection.py", line 1427, in _process_frame
self._deliver_frame_to_channel(frame_value)
File "/usr/local/lib/python2.7/site-packages/pika/connection.py", line 1028, in _deliver_frame_to_channel
return self._channels[value.channel_number]._handle_content_frame(value)
File "/usr/local/lib/python2.7/site-packages/pika/channel.py", line 896, in _handle_content_frame
self._on_deliver(*response)
File "/usr/local/lib/python2.7/site-packages/pika/channel.py", line 983, in _on_deliver
header_frame.properties, body)
File "consumer_new.py", line 452, in on_message
self.acknowledge_message(basic_deliver.delivery_tag)
File "consumer_new.py", line 463, in acknowledge_message
self._channel.basic_ack(delivery_tag)
File "/usr/local/lib/python2.7/site-packages/pika/channel.py", line 159, in basic_ack
return self._send_method(spec.Basic.Ack(delivery_tag, multiple))
File "/usr/local/lib/python2.7/site-packages/pika/channel.py", line 1150, in _send_method
self.connection._send_method(self.channel_number, method_frame, content)
File "/usr/local/lib/python2.7/site-packages/pika/connection.py", line 1569, in _send_method
self._send_frame(frame.Method(channel_number, method_frame))
File "/usr/local/lib/python2.7/site-packages/pika/connection.py", line 1554, in _send_frame
self._flush_outbound()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 282, in _flush_outbound
self._handle_write()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 452, in _handle_write
return self._handle_error(error)
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 338, in _handle_error
self._handle_disconnect()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/base_connection.py", line 288, in _handle_disconnect
self._adapter_disconnect()
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 94, in _adapter_disconnect
self.ioloop.remove_handler(self.socket.fileno())
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 579, in remove_handler
super(PollPoller, self).remove_handler(fileno)
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 328, in remove_handler
self.update_handler(fileno, 0)
File "/usr/local/lib/python2.7/site-packages/pika/adapters/select_connection.py", line 571, in update_handler
self._poll.modify(fileno, events)
IOError: [Errno 9] Bad file descriptor
The run() method keeps running in the main process without any intervention. If that's the case, I don't understand why a Bad file descriptor error would arise, since nobody else could close the RabbitMQ connection. Also, the consumer runs without any issues for 3-4 hours before failing with the above error.
I checked the RabbitMQ UI for an insufficient number of file descriptors, but that doesn't seem to be the problem. I can't get a lead on what the problem might be.
Any help is appreciated! Thanks.
Pika is not thread safe. It says so clearly in the documentation. All sorts of things will eventually go wrong, and your program will crash with weird and uninformative errors, if you touch your connections or channels from threads or subprocesses. It may seem to work for a while, but eventually Pika's internal structures will get corrupted.
If you need multiprocessing and RabbitMQ, you have a few options:
1. Use rabbitpy instead of Pika. I have not used it, so I cannot comment on its suitability for you, but it is thread safe.
2. If you can, separate tasks so that your subprocesses open their own Pika connections. This does not work if your main program receives a request, has a subprocess process it, and then sends a result: if you need to send an ack, for example, your subprocesses cannot ack messages that were received in the main process.
3. Remove Pika from the subprocesses. If the idea of your subprocesses is to offload calculations or time-consuming tasks, you can create two queues: one for subprocess input and one for output, and have your subprocesses return results to the main program through a queue. The main program then handles all RabbitMQ traffic itself (see the sketch after this list).
4. If your program is a server of some kind that processes requests, split everything into subprocesses (the "work queue" model, https://www.rabbitmq.com/tutorials/tutorial-two-python.html) and have every subprocess subscribe independently as a consumer to the queue. RabbitMQ takes care of round-robin dispatch, and by limiting prefetch you can ensure that a subprocess picks up exactly one task and takes nothing more until that task is completed, so tasks sent immediately after the first one are picked up by idle threads or subprocesses. In this model your main process does not need a Pika connection at all, and every subprocess has an independent connection, as in option 2.
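To illustrate option 3, here is a minimal sketch. It assumes pika's BlockingConnection for brevity (your consumer uses the async SelectConnection, but the structure is the same), the 0.x basic_consume signature used elsewhere on this page, and a hypothetical do_heavy_work() standing in for your time-consuming task:

import multiprocessing

import pika

def do_heavy_work(body):
    # Hypothetical placeholder for the actual time-consuming task
    return body

def worker(work_queue, result_queue):
    # No Pika objects in here: only plain Python data crosses the process boundary
    for body in iter(work_queue.get, None):  # None acts as a shutdown sentinel
        result_queue.put(do_heavy_work(body))

def main():
    work_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue()
    pool = [multiprocessing.Process(target=worker, args=(work_queue, result_queue))
            for _ in range(2)]
    for p in pool:
        p.start()

    # All RabbitMQ traffic stays in the main process
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='task_queue', durable=True)

    def on_message(ch, method, properties, body):
        work_queue.put(body)  # hand the payload to a subprocess
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack in the main process only

    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(on_message, queue='task_queue')
    channel.start_consuming()

if __name__ == '__main__':
    main()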
Hope this helps.
Hannu
I need to listen for a particular message from a rabbit queue.
For example, the JSON message in the queue notifications.info is as follows:
{"event_type": "compute.instance.create.end",
"timestamp": "2012-03-12 17:00:24.156710",
"message_id": "00004e00-8da5-4c39-8ffb-c94ed0b5278c",
"priority": "INFO",
"publisher_id": "compute.compute-1-5-6-7",
.
.
.
So this is the JSON message I am getting in the queue notifications.info.
I need to listen synchronously for this particular message and then perform certain operations on it.
Could anyone let me know how to do this?
EDIT
Here I have elaborated in detail. This is what I have done so far.
My aim is to get a notification when a new instance is created.
To that end I have set up notifications.info to receive a message during instance creation.
Now I have written the basic script, which is as follows (using the RabbitMQ site guide):
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='nova',
                         type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    print >> sys.stderr, "Usage: %s [info] [warning] [error]" % \
        (sys.argv[0],)
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='nova',
                       queue=queue_name,
                       routing_key=severity)

print ' [*] Waiting for logs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] %r:%r" % (method.routing_key, body,)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
More modifications still need to be made to the script.
But right now the issue is that executing the script gives the following error:
Traceback (most recent call last):
File "new1.py", line 6, in <module>
host='localhost'))
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in __init__
self._process_io_for_connection_setup()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
self._open_error_result.is_ready)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 410, in _flush_output
self._impl.ioloop.poll()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/select_connection.py", line 602, in poll
self._process_fd_events(fd_event_map, write_only)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/select_connection.py", line 443, in _process_fd_events
handler(fileno, events, write_only=write_only)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 364, in _handle_events
self._handle_read()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 407, in _handle_read
return self._handle_error(error)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 338, in _handle_error
self._handle_disconnect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 288, in _handle_disconnect
self._adapter_disconnect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/select_connection.py", line 95, in _adapter_disconnect
super(SelectConnection, self)._adapter_disconnect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 154, in _adapter_disconnect
self._check_state_on_disconnect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 173, in _check_state_on_disconnect
raise exceptions.ProbableAuthenticationError
pika.exceptions.ProbableAuthenticationError
The log shows the following error:
{handshake_error,starting,0,
{amqp_error,access_refused,
"PLAIN login refused: user 'guest' - invalid credentials",
'connection.start_ok'}}
Could someone let me know what needs to be done here to fix this?
Note: I am able to access the RabbitMQ front end with the guest user.
I have found the issue.
There was a password conflict; that is why it resulted in this error.
When I configured a new password for RabbitMQ instead of the default one, I got this error,
so I switched back to the default password 'guest' for the 'guest' user.
That fixed the issue.
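For anyone who does need non-default credentials, a minimal sketch of passing them to pika explicitly (the username and password here are placeholders):

import pika

# Placeholder credentials: substitute the user you created, e.g. with
#   rabbitmqctl add_user myuser mypassword
credentials = pika.PlainCredentials('myuser', 'mypassword')
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost', credentials=credentials))
channel = connection.channel()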