Implementing threading to consume queues from RabbitMQ - Python

This is an update to my previous question. I realized that I should have added some code to explain my issue further. I am currently trying to implement threading for queues consumed from a RabbitMQ exchange. As I am new to both RabbitMQ and threading, I am finding it difficult to combine the two and apply them. I was wondering if anyone could provide a template I could start from.
I am coding in Visual Studio, where a simulator is used to generate data, emulating the producer (a smart device).
The first part of the code assigns a few variables relevant to the project. I have imported the required libraries and a few extra scripts I have written myself. The imports on the second line are scripts that assist with communicating with the smart device.
import pika, sys, os
import SockAlertMessage_pb2, SockDataProcessedMessage_pb2, SockDataRawMessage_pb2, SockDataSessionEndMessage_pb2, SockDataSessionStartMessage_pb2, SockMessage_pb2
import numpy as np
import scipy
import ampd
import python_file_3
import heartpy
import time
import threading
from scipy.signal import detrend
from python_file_3 import filter_signal, get_hrv, get_rmssd, get_std, heart_rate
from ampd import find_peaks_original
The second part of the code declares the queues and establishes the connection with the RabbitMQ server:
sock_data_session_start_queue = 'sock_data_session_start_queue'
sock_data_session_end_queue = 'sock_data_session_end_queue'
sock_data_raw_queue = 'sock_data_raw_queue'
tx_queue = 'tbd'

# establish connection with rabbitmq server
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# create rx message queues (the raw queue must also be declared before it is consumed below)
channel.queue_declare(queue=sock_data_session_start_queue, durable=True)
channel.queue_declare(queue=sock_data_session_end_queue, durable=True)
channel.queue_declare(queue=sock_data_raw_queue, durable=True)

# create tx message queue
channel.queue_declare(queue=tx_queue, durable=True)
new_dict = {}  # holds active data session messages, keyed by session

def sock_data_session_start_callback(ch, method, properties, body):
    message = SockDataSessionStartMessage_pb2.SockDataSessionStartMessage()
    message.ParseFromString(body)
    new_dict['1'] = message
    print(new_dict)
    # todo: create thread w/ state
    print(" [x] Start session %r" % message)
    # send message

def sock_data_session_end_callback(ch, method, properties, body):
    message = SockDataSessionEndMessage_pb2.SockDataSessionEndMessage()
    message.ParseFromString(body)
    # todo: destroy thread w/ state
    print(" [x] End session %r" % message)

def sock_data_raw_callback(ch, method, properties, body):
    message = SockDataRawMessage_pb2.SockDataRawMessage()
    message.ParseFromString(body)
    print(message)
    # todo: process raw data on the session's thread
    print(" [x] Sock data raw %r" % message)
if __name__ == '__main__':
    try:
        channel.basic_consume(queue=sock_data_session_start_queue, auto_ack=True, on_message_callback=sock_data_session_start_callback)
        channel.basic_consume(queue=sock_data_session_end_queue, auto_ack=True, on_message_callback=sock_data_session_end_callback)
        channel.basic_consume(queue=sock_data_raw_queue, auto_ack=True, on_message_callback=sock_data_raw_callback)
        print(' [*] Waiting for messages. To exit press CTRL+C')
        channel.start_consuming()
    except KeyboardInterrupt:
        print('Interrupted')
        # close connection
        connection.close()
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
The start and end data callbacks refer to data sessions: a data session connection is acknowledged and started, and later ended. I believe the raw data callback is where I need to implement my threads, where data will be processed and then sent back to another queue. The challenge is to make each data session a thread, and to process the data within it.
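As a starting template (this is a sketch, not code from the original project; the SessionWorker class, the session-id handling, and the use of queue.Queue are my assumptions), one common pattern is to keep all Pika callbacks on the connection's thread and give each data session its own worker thread fed through a thread-safe queue:

import queue
import threading

class SessionWorker(threading.Thread):
    """One thread per data session, fed raw messages through its own queue."""
    def __init__(self, session_id):
        super().__init__(daemon=True)
        self.session_id = session_id
        self.inbox = queue.Queue()
        self._stop_event = threading.Event()

    def submit(self, body):
        self.inbox.put(body)

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            try:
                body = self.inbox.get(timeout=0.5)
            except queue.Empty:
                continue
            # placeholder: parse the protobuf and run the heartpy/ampd
            # processing here, then queue the result for publishing
            print(" [t] session %s processed %d bytes" % (self.session_id, len(body)))

workers = {}  # session_id -> SessionWorker

def on_session_start(session_id):
    worker = SessionWorker(session_id)
    workers[session_id] = worker
    worker.start()

def on_raw_data(session_id, body):
    worker = workers.get(session_id)
    if worker is not None:
        worker.submit(body)

def on_session_end(session_id):
    worker = workers.pop(session_id, None)
    if worker is not None:
        worker.stop()
        worker.join()

The session start/end/raw callbacks above would call on_session_start, on_raw_data, and on_session_end with whatever session identifier the protobuf messages carry. Note that Pika's BlockingConnection is not thread-safe: if a worker needs to publish results to tx_queue, hand the basic_publish back to the connection's thread with connection.add_callback_threadsafe rather than calling the channel directly from the worker.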

Related

Multiple consumer Rabbitmq through multiprocessing

New to Python.
I am trying to create multiple consumers for a RabbitMQ client.
I am using Pika and trying to do this with multiprocessing.
It seems to connect, but the loop is not sustained.
Can you please help?
This part of the code should also take care of the writer via the callback;
it should start the loop and consume continuously.
import multiprocessing
import time
import pika

# this is the writer part
def callback(ch, method, properties, body):
    print(" [x] %r received %r" % (multiprocessing.current_process(), body,))
    time.sleep(body.count('.'))
    # print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume():
    credentials = pika.PlainCredentials(userid, password)
    parameters = pika.ConnectionParameters(url, port, '/', credentials)
    connection = pika.BlockingConnection(parameters=parameters)
    channel = connection.channel()
    channel.queue_declare(queue='queuename', durable=True)
    channel.basic_consume('queuename', callback)
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

userid = "user"
password = "pwd"
url = "localhost"
port = 5672

if __name__ == "__main__":
    workers = 5
    pool = multiprocessing.Pool(processes=workers)
    for i in range(0, workers):
        pool.apply_async(consume)
    # Stay alive
    try:
        while True:
You aren't doing any exception handling in your sub-processes, so my guess is that exceptions are being thrown that you don't expect. This code works fine in my environment, using Pika 1.1.0 and Python 3.7.3.
Before I added exception handling, a TypeError was thrown by body.count('.') because body is bytes, not a str, in that case.
Please note that I'm using the correct method to wait for sub-processes, according to these docs.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
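For reference, a sketch of the corrected shape the answer implies (the traceback printing and the exact placement of the try/except are my assumptions; the pool.close()/pool.join() wait and the bytes-aware body.count(b'.') follow the answer's remarks):

import multiprocessing
import time
import traceback
import pika

def callback(ch, method, properties, body):
    # body is bytes under Python 3, so count a byte string, not '.'
    time.sleep(body.count(b'.'))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume():
    try:
        connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='queuename', durable=True)
        channel.basic_consume('queuename', callback)
        channel.start_consuming()
    except Exception:
        # without a handler, exceptions in a pool worker disappear silently
        traceback.print_exc()

if __name__ == "__main__":
    workers = 5
    pool = multiprocessing.Pool(processes=workers)
    for _ in range(workers):
        pool.apply_async(consume)
    pool.close()  # no more tasks will be submitted to the pool
    pool.join()   # block until the consumer processes exit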

Tornado Asynchronous Writes

The code below is a simplified version of a Tornado based TCP server that is currently used to host a Videotex system. This code was derived from the Tornado documentation and the server has been running in a live environment for some time without issue, however, there is a feature I need to add.
The system currently blocks until a character is received from the client before returning the data via the stream.write. As the system typically runs at 1200 baud at the client end (via a telnet modem), this means that the user has to wait until all stream writes have completed before the next 'user entered' character is processed.
What I would like to do is find a way to abandon writing data to stream.write if another character is received from the client.
I am new to Tornado and fairly new to Python; however, I have coded asynchronous functions and threaded solutions in the past using C#.
From the documentation, the stream.write operation is asynchronous. I am assuming, therefore, that the call may return before the data is completely written, which leaves me thinking that I need a method to abandon/empty/advance the write buffer to stop the write operation if a new char is detected on the stream.read.
One option that would seem to give me what I need is to somehow perform the stream.writes on another thread; however, this approach seems inappropriate when using Tornado's IOLoop etc.
Is there a way to give me the facility I am after? I have full control of the code and am happy to restructure the app if needed.
import logging
import struct
import os
import traceback
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

# Configure logging.
logger = logging.getLogger(os.path.basename(__file__))
logger.setLevel(logging.INFO)

# Cache this struct definition; important optimization.
int_struct = struct.Struct("<i")
_UNPACK_INT = int_struct.unpack
_PACK_INT = int_struct.pack

class TornadoServer(TCPServer):
    def start(self, port):
        self.port = port
        self.listen(port)

    @gen.coroutine
    def handle_stream(self, stream, address):
        logging.info("[viewdata] Connection from client address {0}.".format(address))
        try:
            while True:
                char = yield stream.read_bytes(1)  # this call blocks
                asc = ord(char)
                logger.info('[viewdata] Byte Received {0} ({1})'.format(hex(asc), asc))
                # Do some processing using the received char and return the appropriate page of data
                stream.write('This is the data you asked for...'.encode())
        except StreamClosedError as ex:
            logger.info("[viewdata] {0} Disconnected: {1} Message: {2}".format(address, type(ex), str(ex)))
        except Exception as ex:
            logger.error("[viewdata] {0} Exception: {1} Message: {2}".format(address, type(ex), str(ex)))
            logger.error(traceback.format_exc())

if __name__ == '__main__':
    server = TornadoServer()
    server.start(25232)
    loop = IOLoop.current()
    loop.start()
The main idea is that you move the long processing into a separate task.
When you receive new data, you choose what to do; in the case below, I cancel the current operation.
import logging
import os
import traceback
import threading
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

# Configure logging.
logger = logging.getLogger(os.path.basename(__file__))
logger.setLevel(logging.INFO)

class TornadoServer(TCPServer):
    def start(self, port):
        self.port = port
        self.listen(port)

    async def process_stream(self, stream, char, cancel_event):
        asc = ord(char)
        logger.info('[viewdata] Byte Received {0} ({1})'.format(hex(asc), asc))
        N = 5
        for i in range(N):
            if cancel_event.is_set():
                logger.info('[viewdata] Abort streaming')
                break
            # Do some processing using the received char and return the appropriate page of data
            msg = 'This is the {0} data you asked for...'.format(i)
            logger.info(msg)
            await stream.write('This is the part {0} of {1} you asked for...'.format(i, N).encode())
            await gen.sleep(1.0)  # make this processing longer..

    async def handle_stream(self, stream, address):
        process_stream_future = None
        cancel_event = None
        logging.info("[viewdata] Connection from client address {0}.".format(address))
        while True:
            try:
                char = await stream.read_bytes(1)  # this call blocks
                # when client input is received, cancel the running job
                if process_stream_future:
                    process_stream_future.cancel()
                if cancel_event:
                    cancel_event.set()
                cancel_event = threading.Event()
                process_stream_future = gen.convert_yielded(
                    self.process_stream(stream, char, cancel_event))
                IOLoop.current().add_future(process_stream_future, lambda f: f.result())
            except StreamClosedError as ex:
                logger.info("[viewdata] {0} Disconnected: {1} Message: {2}".format(address, type(ex), str(ex)))
                break  # the stream is gone; leave the read loop
            except Exception as ex:
                logger.error("[viewdata] {0} Exception: {1} Message: {2}".format(address, type(ex), str(ex)))
                logger.error(traceback.format_exc())

if __name__ == '__main__':
    server = TornadoServer()
    server.listen(25232)
    loop = IOLoop.current()
    loop.start()

Receive multiple amqp queues in python / pika

I'm trying to receive from multiple queues. I tried the code from https://stackoverflow.com/a/42351395/3303330, but it's necessary to declare the queue with queue_declare. Hope you can help me, guys. Here is my code:
import pika
import time
from zeep import Client

parameters = pika.URLParameters('amqp://user:pass@theurl:5672/%2F')
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='queue1', passive=True, durable=True, exclusive=False, auto_delete=False)
print(' [*] Waiting for messages. To exit press CTRL+C')

def callback(ch, method, header, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback, queue='queue1')
channel.start_consuming()
It is not necessary to declare a queue more than once as long as you declare it as durable. You can declare more than one queue in your client code or via the RabbitMQ admin interface.
You can use your channel to consume messages from more than one queue. Just execute channel.basic_consume more than once, using different queue parameter values.
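As a concrete sketch of that advice (the queue names, localhost broker, and the pika 1.x on_message_callback signature are my assumptions):

import pika

def callback(ch, method, properties, body):
    print(" [x] %s: %r" % (method.routing_key, body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# one declaration per durable queue is enough
for queue_name in ('queue1', 'queue2'):
    channel.queue_declare(queue=queue_name, durable=True)
    # one basic_consume per queue, all on the same channel
    channel.basic_consume(queue=queue_name, on_message_callback=callback)

channel.start_consuming()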

Trying to receive messages from a publish/subscribe socket in the background with Python

I'm working on a tank battle AI, and as such I'm trying to create a function that constantly updates the map and player information in the background by receiving messages from a socket with a ZeroMQ PUB/SUB pattern.
comm is a communication object in my main client.
This movement_strategy() function is called from another module, but I keep getting an error:
Exception in thread Thread-1, ZMQError: Socket operation on non-socket.
Any ideas on what's causing it or how to fix it?
import communication
import gamestate
import json
import threading
import time

def movement_strategy(comm):
    def get_gamestate():
        [token, msg] = comm.pub_socket.recv_multipart()
        msg = json.loads(msg)
        if msg["comm_type"] == "GAMESTATE":
            game_info = gamestate.GameState(msg["comm_type"],
                                            msg["timestamp"],
                                            msg["timeRemaining"],
                                            msg["map"],
                                            msg["players"])
            print(game_info.timeRemaining)
            print(game_info.timestamp)
        time.sleep(1)
    thread = threading.Thread(target=get_gamestate, args=())
    thread.start()
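No answer is recorded for this question, but a common cause of "Socket operation on non-socket" is using a ZeroMQ socket from a thread other than the one that created it; ZeroMQ sockets are not thread-safe. A hedged sketch of the usual remedy (the endpoint, the two-frame message layout, and the function name are hypothetical): create the SUB socket inside the background thread itself and loop there.

import json
import threading
import zmq

def start_gamestate_listener(endpoint="tcp://127.0.0.1:5556"):
    def listen():
        # create the context/socket in the thread that uses them;
        # ZeroMQ sockets must not be shared across threads
        context = zmq.Context.instance()
        socket = context.socket(zmq.SUB)
        socket.connect(endpoint)
        socket.setsockopt_string(zmq.SUBSCRIBE, "")
        while True:
            token, msg = socket.recv_multipart()  # assumes two-frame messages
            data = json.loads(msg)
            if data.get("comm_type") == "GAMESTATE":
                print(data["timeRemaining"], data["timestamp"])
    thread = threading.Thread(target=listen, daemon=True)
    thread.start()
    return thread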

Attaching ZMQStream with existing tornado ioloop

I have an application where every websocket connection (within the Tornado open callback) creates a zmq.SUB socket to an existing zmq.FORWARDER device. The idea is to receive data from zmq as callbacks, which can then be relayed to frontend clients over the websocket connection.
https://gist.github.com/abhinavsingh/6378134
ws.py
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()

from tornado.websocket import WebSocketHandler
from tornado.web import Application
from tornado.ioloop import IOLoop

ioloop = IOLoop.instance()

class ZMQPubSub(object):
    def __init__(self, callback):
        self.callback = callback

    def connect(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect('tcp://127.0.0.1:5560')
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.callback)

    def subscribe(self, channel_id):
        self.socket.setsockopt(zmq.SUBSCRIBE, channel_id)

class MyWebSocket(WebSocketHandler):
    def open(self):
        self.pubsub = ZMQPubSub(self.on_data)
        self.pubsub.connect()
        self.pubsub.subscribe("session_id")
        print('ws opened')

    def on_message(self, message):
        print(message)

    def on_close(self):
        print('ws closed')

    def on_data(self, data):
        print(data)

def main():
    application = Application([(r'/channel', MyWebSocket)])
    application.listen(10001)
    print('starting ws on port 10001')
    ioloop.start()

if __name__ == '__main__':
    main()
forwarder.py
import logging
import zmq

logger = logging.getLogger(__name__)  # logger was used below but never defined

def main():
    try:
        context = zmq.Context(1)
        frontend = context.socket(zmq.SUB)
        frontend.bind('tcp://*:5559')
        frontend.setsockopt(zmq.SUBSCRIBE, '')
        backend = context.socket(zmq.PUB)
        backend.bind('tcp://*:5560')
        print('starting zmq forwarder')
        zmq.device(zmq.FORWARDER, frontend, backend)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        logger.exception(e)
    finally:
        frontend.close()
        backend.close()
        context.term()

if __name__ == '__main__':
    main()
publish.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://127.0.0.1:5559')
    socket.send('session_id helloworld')
    print('sent data for channel session_id')
However, my ZMQPubSub class doesn't seem to be receiving any data at all.
I further experimented and realized that I need to call ioloop.IOLoop.instance().start() after registering the on_recv callback within ZMQPubSub. But that just blocks execution.
I also tried passing the main ioloop instance to the ZMQStream constructor, but that doesn't help either.
Is there a way I can bind ZMQStream to the existing main ioloop instance without blocking the flow within MyWebSocket.open?
In your now complete example, simply change frontend in your forwarder to a PULL socket and your publisher socket to PUSH, and it should behave as you expect.
The general principles of socket choice that are relevant here:
use PUB/SUB when you want to send a message to everyone who is ready to receive it (may be no one)
use PUSH/PULL when you want to send a message to exactly one peer, waiting for them to be ready
It may appear initially that you just want PUB-SUB, but once you start looking at each socket pair, you realize that they are very different. The frontend-websocket connection is definitely PUB-SUB - you may have zero-to-many receivers, and you just want to send messages to everyone who happens to be available when a message comes through. But the backend side is different - there is only one receiver, and it definitely wants every message from the publishers.
So there you have it - backend should be PULL and frontend PUB. All your sockets:
PUSH -> [PULL-PUB] -> SUB
publisher.py: socket is PUSH, connected to backend in device.py
forwarder.py: backend is PULL, frontend is PUB
ws.py: SUB connects and subscribes to forwarder.frontend.
The relevant behavior that makes PUB/SUB fail on the backend in your case is the slow joiner syndrome, which is described in The Guide. Essentially, subscribers take a finite time to tell publishers about their subscriptions, so if you send a message immediately after opening a PUB socket, the odds are it hasn't been told that it has any subscribers yet, so it's just discarding messages.
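A sketch of the suggested change against the gist's code (variable names and endpoints kept from forwarder.py and publish.py; only the socket types change):

import zmq

def forwarder():
    # the publisher-facing socket becomes PULL (was zmq.SUB with SUBSCRIBE '')
    context = zmq.Context(1)
    frontend = context.socket(zmq.PULL)
    frontend.bind('tcp://*:5559')
    backend = context.socket(zmq.PUB)  # unchanged; fans out to the websocket SUBs
    backend.bind('tcp://*:5560')
    zmq.device(zmq.FORWARDER, frontend, backend)

def publish():
    # PUSH waits until the forwarder is ready to receive, so the message is
    # not silently dropped the way a subscriber-less PUB would drop it
    context = zmq.Context()
    socket = context.socket(zmq.PUSH)  # was zmq.PUB
    socket.connect('tcp://127.0.0.1:5559')
    socket.send(b'session_id helloworld')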
ZeroMQ subscribers have to subscribe to the messages they wish to receive; I don't see that in your code. I believe the Python way is this:
self.socket.setsockopt(zmq.SUBSCRIBE, "")
