Using Kombu ConsumerMixin, how to declare multiple bindings? - python

I have a RabbitMQ topic exchange named experiment. I'm building a consumer where I'd like to receive all messages whose routing key begins with "foo" and all messages whose routing key begins with "bar".
According to the RabbitMQ docs, and based on my own experimentation in the management UI, it should be possible to have one exchange, one queue, and two bindings (foo.# and bar.#) that connect them.
I can't figure out how to express this using Kombu's ConsumerMixin. I feel like I should be able to do:
q = Queue(exchange=exchange, routing_key=['foo.#', 'bar.#'])
...but it does not like that at all. I've also tried:
q.bind_to(exchange=exchange, routing_key='foo.#')
q.bind_to(exchange=exchange, routing_key='bar.#')
...but every time I try I get:
kombu.exceptions.NotBoundError: Can't call method on Queue not bound to a channel
...which I guess makes sense. However, I can't see a place in the mixin's interface where I can easily hook into the queues once they are bound to the channel. Here's the base (working) code:
from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin

class Worker(ConsumerMixin):
    exchange = Exchange('experiment', type='topic')
    q = Queue(exchange=exchange, routing_key='foo.#', exclusive=True)

    def __init__(self, connection):
        self.connection = connection

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=[self.q], callbacks=[self.on_task])]

    def on_task(self, body, message):
        print(body)
        message.ack()

if __name__ == '__main__':
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        worker = Worker(conn)
        worker.run()
...which works, but only gives me foo messages. Other than creating a new Queue for each routing key I'm interested in and passing them all to the Consumer, is there a clean way to do this?

After digging a little bit, I found a way to accomplish this that is fairly close to the first idea I had. Instead of passing a routing_key string to the Queue, pass a bindings list. Each element in the list is an instance of a binding object that specifies the exchange and the routing key.
An example is worth a thousand words:
from kombu import Exchange, Queue, binding

exchange = Exchange('experiment', type='topic')

q = Queue(exchange=exchange, bindings=[
    binding(exchange, routing_key='foo.#'),
    binding(exchange, routing_key='bar.#')
], exclusive=True)
And it works great!

Here is a small adjustment of the answer by smitelli. When the bindings parameter is used for defining bindings, the exchange parameter is ignored.
Adjusted example:
from kombu import Exchange, Queue, binding

exchange = Exchange('experiment', type='topic')

q = Queue(bindings=[
    binding(exchange, routing_key='foo.#'),
    binding(exchange, routing_key='bar.#'),
])
The exchange parameter is discarded during the Queue init:
if self.bindings:
    self.exchange = None


How can I subscribe a consumer and notify them of any changes in Django Channels

I'm currently building an application that allows users to collaborate and create things together, which requires a sort of Discord-like group chat feed. I need to be able to subscribe logged-in users to a project for notifications.
I have a method open_project that retrieves details from a project that has been selected by the user, which I use to subscribe him to any updates for that project.
So I can think of 2 ways of doing this. I have created an instance variable in my connect function, like this:
def connect(self):
    print("connected to projectconsumer...")
    self.accept()
    self.projectSessions = {}
And here is the open_project method:
def open_project(self, message):
    p = Project.objects.values("projectname").get(id=message)
    if len(self.projectSessions) == 0:
        self.projectSessions[message] = []
    self.projectSessions[message].append(self)
    print(self.projectSessions[message])
    self.userjoinedMessage(self.projectSessions[message])
    message = {}
    message["command"] = "STC-openproject"
    message["message"] = p
    self.send_message(json.dumps(message))
Then when the user opens a project, he is added to the projectSessions list. This however doesn't work (I think), because whenever a new user connects to the websocket, he gets his own ProjectConsumer instance.
The second way I thought of doing this is to create a managing class that only has 1 instance and keeps track of all the users connected to a project. I have not tried this yet as I would like some feedback on if I'm even swinging in the right ball park. Any and all feedback is appreciated.
EDIT 1:
I forgot to add the userjoinedMessage method to the question. This method is simply there to mimic future mechanics and to check whether my solution actually works, but here it is:
def userjoinedMessage(self, pointer):
    message = {}
    message["command"] = "STC-userjoinedtest"
    message["message"] = ""
    pointer.send_message(json.dumps(message))
Note that I attempt to reference the instance of the consumer.
I will also attempt to implement a consumer manager that keeps track of which consumers are browsing which projects and sends updates to the relevant channels.
From the question, the issue is how to save projectSessions and have it accessible across multiple instances of the consumer. Instead of trying to keep it in memory, you can save it in a database. It is a dictionary keyed by project, so you can model it as a table with a ForeignKey to the Project model.
That way it is persisted, and there would be no issue retrieving it even across multiple Channels server instances, if you ever decide to scale your Channels across multiple servers.
Also, if you feel that a traditional database will slow down retrieval of the sessions, you can use a faster storage system such as Redis.
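To make the idea concrete, here is a minimal sketch of database-backed subscriptions using stdlib sqlite3 in place of a Django model; the table and column names are illustrative only:

```python
import sqlite3

# A sketch of persisting subscriptions in a database instead of in memory.
# In Django this would be a model with a ForeignKey to Project; sqlite3 is
# used here only to keep the example self-contained.
conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE project_session (project_id INTEGER, channel_name TEXT)'
)

def subscribe(project_id, channel_name):
    # One row per (project, channel) pair replaces the in-memory dict
    conn.execute('INSERT INTO project_session VALUES (?, ?)',
                 (project_id, channel_name))

def subscribers(project_id):
    rows = conn.execute(
        'SELECT channel_name FROM project_session WHERE project_id = ?',
        (project_id,))
    return [name for (name,) in rows]

subscribe(1, 'channel-abc')
subscribe(1, 'channel-def')
print(sorted(subscribers(1)))  # ['channel-abc', 'channel-def']
```

Because the rows live in the database rather than in any one consumer instance, every consumer (and every worker process) sees the same subscription list.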
Right, this is probably a horrible way of doing things, and I should be taken out back and shot for doing it, but I have a fix for my problem. I have made a ProjectManager class that handles subscriptions and updates for the users of a project:
import json

class ProjectManager():
    def __init__(self):
        if not hasattr(self, 'projectSessions'):
            self.projectSessions = {}

    def subscribe(self, projectid, consumer):
        print(projectid not in self.projectSessions)
        if projectid not in self.projectSessions:
            self.projectSessions[projectid] = []
        self.projectSessions[projectid].append(consumer)
        self.update(projectid)

    def unsubscribe(self, projectid, consumer):
        pass

    def update(self, projectid):
        if projectid in self.projectSessions:
            print(self.projectSessions[projectid])
            for consumer in self.projectSessions[projectid]:
                message = {}
                message["command"] = "STC-userjoinedtest"
                message["message"] = ""
                consumer.send_message(json.dumps(message))
In my apps.py file I initialize the above ProjectManager class and assign it to a class attribute:
from django.apps import AppConfig
from .manager import ProjectManager

class ProjectConfig(AppConfig):
    name = 'project'
    manager = ProjectManager()
I then use this in my consumers.py file. I import the manager from the ProjectConfig class and assign it to an instance variable inside the created consumer whenever it connects:
def connect(self):
    print("connected to projectconsumer...")
    self.accept()
    self.manager = ProjectConfig.manager
Whenever I call open_project, I subscribe to that project with the given project id received from the front-end:
def open_project(self, message):
    p = Project.objects.values("projectname").get(id=message)
    self.manager.subscribe(message, self)
    message = {}
    message["command"] = "STC-openproject"
    message["message"] = p
    self.send_message(json.dumps(message))
As I said, I in no way claim that this is the correct way of doing it, and I am also aware that channel_layers supposedly does this for you in a neat way. However, I don't really have the time to get into channel_layers and will therefore be using this.
I am still open to suggestions of course and am always happy to learn more.
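For what it's worth, the reason the AppConfig-attribute approach shares state is that the attribute is created once at import time, making it a per-process singleton: every consumer in the same process sees the same manager object, while consumers in other worker processes do not. A stripped-down sketch (with stand-in classes, not the real Django/Channels ones) shows the mechanism:

```python
# Illustrative stand-ins for the classes in the answer above; the point is
# that a class attribute built at class-definition time is shared by every
# instance in the same process.
class ProjectManager:
    def __init__(self):
        self.projectSessions = {}

    def subscribe(self, projectid, consumer):
        self.projectSessions.setdefault(projectid, []).append(consumer)

class ProjectConfig:
    manager = ProjectManager()   # created once, at import/definition time

class Consumer:
    def connect(self):
        # every consumer instance picks up the same manager object
        self.manager = ProjectConfig.manager

a, b = Consumer(), Consumer()
a.connect()
b.connect()
a.manager.subscribe(42, a)
print(b.manager is a.manager)              # True: shared in this process
print(len(b.manager.projectSessions[42]))  # 1
```

This also shows the limitation: the sharing stops at the process boundary, which is exactly what the database/Redis suggestion above addresses.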

Why is the perspective argument in a pb.Viewable passed as None?

I am trying to find out how to allow a server to know which client is making remote requests in Twisted's Perspective Broker. I think I'm supposed to use twisted.spread.pb.Viewable for this, but when I try, the perspective argument in the Viewable's view_* methods is None.
I run this server
import twisted.spread.pb as pb
import twisted.internet.reactor as reactor

class Server(pb.Root):
    def __init__(self):
        self.v = MyViewable()

    def remote_getViewable(self):
        return self.v

class MyViewable(pb.Viewable):
    def view_foo(self, perspective):
        print("Perspective %s" % perspective)

if __name__ == "__main__":
    reactor.listenTCP(54321, pb.PBServerFactory(Server()))
    print("Starting reactor")
    reactor.run()
and this client
import twisted.spread.pb as pb
import twisted.internet.reactor as reactor
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def gotRoot(root):
    v1 = yield root.callRemote("getViewable")
    v2 = yield root.callRemote("getViewable")
    print(v1)
    print(v2)
    yield v1.callRemote("foo")
    yield v2.callRemote("foo")

factory = pb.PBClientFactory()
reactor.connectTCP("localhost", 54321, factory)
d = factory.getRootObject()
d.addCallback(gotRoot)
reactor.run()
The output from the server is
Starting reactor
Perspective None
Perspective None
Why are the perspective arguments None?
Through experimentation I believe I have determined the answer.
In order for remote invocations of a view_* method on a pb.Viewable to properly receive the perspective argument, the reference to that Viewable held by the client must have been obtained as the return value from a perspective_* method called on an instance of pb.Avatar (or subclass). The perspective argument passed into the view_* methods then corresponds to the Avatar that originally gave the client the reference to the Viewable.
The example code in the original posting doesn't work properly because the remote references to the Viewable are passed to the client from a pb.Root, not as return values from a perspective_* method on a pb.Avatar.
I note here that while this information is implied by the way the examples in the official documents are written, it does not seem to be explicitly stated there.
EDIT: I've figured out the right way to do this. One of the arguments to the Realm's requestAvatar method is the user's mind. All you have to do is set mind.perspective to the new Avatar instance, and all subsequent remote calls work how you'd expect. For example:
class SimpleRealm:
    implements(IRealm)

    def requestAvatar(self, avatarId, mind, *interfaces):
        avatar = MyAvatarSubclass()
        mind.perspective = avatar
        return pb.IPerspective, avatar, avatar.logout
OLD EDIT: A (crummy) way to make this work is to explicitly construct a pb.ViewPoint and pass that as an argument to the remote client. For example, if p is an instance of an Avatar subclass and v is a Viewable on the server side, we can do this on the server:
referenceToClient.callRemote("take", ViewPoint(p, v))
where on the client side we have something like
def remote_take(self, objToReceive):
    self.myView = objToReceive
Subsequent invocations of self.myView.callRemote(...) by the client will then work properly.

Celery dynamic queue creation and routing

I'm trying to call a task, create a queue for that task if it doesn't exist, and then immediately insert the called task into that queue. I have the following code:
@task
def greet(name):
    return "Hello %s!" % name

def run():
    result = greet.delay(args=['marc'], queue='greet.1',
                         routing_key='greet.1')
    print(result.ready())
Then I have a custom router:
class MyRouter(object):
    def route_for_task(self, task, args=None, kwargs=None):
        if task == 'tasks.greet':
            return {'queue': kwargs['queue'],
                    'exchange': 'greet',
                    'exchange_type': 'direct',
                    'routing_key': kwargs['routing_key']}
        return None
This creates an exchange called greet.1 and a queue called greet.1, but the queue is empty. The exchange should just be called greet, and it should know how to route a routing key like greet.1 to the queue called greet.1.
Any ideas?
When you do the following:
task.apply_async(queue='foo', routing_key='foobar')
Then Celery will take default values from the 'foo' queue in CELERY_QUEUES, or if it does not exist then automatically create it using (queue=foo, exchange=foo, routing_key=foo).
So if 'foo' does not exist in CELERY_QUEUES, you will end up with:
queues['foo'] = Queue('foo', exchange=Exchange('foo'), routing_key='foo')
The producer will then declare that queue, but since you override the routing_key, it will actually send the message using routing_key = 'foobar'.
This may seem strange, but the behavior is actually useful for topic exchanges, where you publish to different topics.
It's harder to do what you want, though: you can create and declare the queue yourself, but that won't work well with automatic message publish retries.
It would be better if the queue argument to apply_async could support a custom kombu.Queue instead, which would be both declared and used as the destination. Maybe you could open an issue for that at http://github.com/celery/celery/issues

Writing a blocking wrapper around twisted's IRC client

I'm trying to write a dead-simple interface for an IRC library, like so:
import re
import simpleirc

connection = simpleirc.Connect('irc.freenode.net', 6667)
channel = connection.join('foo')

find_command = re.compile(r'google ([a-z]+)').findall
for msg in channel:
    for t in find_command(msg):
        channel.say("http://google.com/search?q=%s" % t)
Working from their example, I'm running into trouble (code is a bit lengthy, so I pasted it here). Since the call to channel.__next__ needs to be returned when the callback <IRCClient instance>.privmsg is called, there doesn't seem to be a clean option. Using exceptions or threads seems like the wrong thing here, is there a simpler (blocking?) way of using twisted that would make this possible?
In general, if you're trying to use Twisted in a "blocking" way, you're going to run into a lot of difficulties, because that's neither the way it's intended to be used, nor the way in which most people use it.
Going with the flow is generally a lot easier, and in this case, that means embracing callbacks. The callback-style solution to your question would look something like this:
import re

from twisted.internet import reactor, protocol
from twisted.words.protocols import irc

find_command = re.compile(r'google ([a-z]+)').findall

class Googler(irc.IRCClient):
    def privmsg(self, user, channel, message):
        for text in find_command(message):
            self.say(channel, "http://google.com/search?q=%s" % (text,))

def connect():
    cc = protocol.ClientCreator(reactor, Googler)
    return cc.connectTCP(host, port)

def run(proto):
    proto.join(channel)

def main():
    d = connect()
    d.addCallback(run)
    reactor.run()
This isn't absolutely required (but I strongly suggest you consider trying it). One alternative is inlineCallbacks:
import re

from twisted.internet import reactor, protocol, defer
from twisted.words.protocols import irc

find_command = re.compile(r'google ([a-z]+)').findall

class Googler(irc.IRCClient):
    def privmsg(self, user, channel, message):
        for text in find_command(message):
            self.say(channel, "http://google.com/search?q=%s" % (text,))

@defer.inlineCallbacks
def run():
    cc = protocol.ClientCreator(reactor, Googler)
    proto = yield cc.connectTCP(host, port)
    proto.join(channel)

def main():
    run()
    reactor.run()
Notice no more addCallbacks. It's been replaced by yield in a decorated generator function. This could get even closer to what you asked for if you had a version of Googler with a different API (the one above should work with IRCClient from Twisted as it is written - though I didn't test it). It would be entirely possible for Googler.join to return a Channel object of some sort, and for that Channel object to be iterable like this:
@defer.inlineCallbacks
def run():
    cc = protocol.ClientCreator(reactor, Googler)
    proto = yield cc.connectTCP(host, port)
    channel = proto.join(channel)
    for msg in channel:
        msg = yield msg
        for text in find_command(msg):
            channel.say("http://google.com/search?q=%s" % (text,))
It's only a matter of implementing this API on top of the ones already present. Of course, the yield expressions are still there, and I don't know how much this will upset you. ;)
It's possible to go still further away from callbacks and make the context switches necessary for asynchronous operation to work completely invisible. This is bad for the same reason it would be bad for sidewalks outside your house to be littered with invisible bear traps. However, it's possible. Using something like corotwine, itself based on a third-party coroutine library for CPython, you can have the implementation of Channel do the context switching itself, rather than requiring the calling application code to do it. The result might look something like:
from corotwine import protocol

def run():
    proto = Googler()
    transport = protocol.gConnectTCP(host, port)
    proto.makeConnection(transport)
    channel = proto.join(channel)
    for msg in channel:
        for text in find_command(msg):
            channel.say("http://google.com/search?q=%s" % (text,))
with an implementation of Channel that might look something like:
from corotwine import defer

class Channel(object):
    def __init__(self, ircClient, name):
        self.ircClient = ircClient
        self.name = name

    def __iter__(self):
        while True:
            d = self.ircClient.getNextMessage(self.name)
            message = defer.blockOn(d)
            yield message
This in turn depends on a new Googler method, getNextMessage, which is a straightforward feature addition based on existing IRCClient callbacks:
from twisted.internet import defer

class Googler(irc.IRCClient):
    def connectionMade(self):
        irc.IRCClient.connectionMade(self)
        self._nextMessages = {}

    def getNextMessage(self, channel):
        if channel not in self._nextMessages:
            self._nextMessages[channel] = defer.DeferredQueue()
        return self._nextMessages[channel].get()

    def privmsg(self, user, channel, message):
        if channel not in self._nextMessages:
            self._nextMessages[channel] = defer.DeferredQueue()
        self._nextMessages[channel].put(message)
To run this, you create a new greenlet for the run function and switch to it, and then start the reactor.
from greenlet import greenlet

def main():
    greenlet(run).switch()
    reactor.run()
When run gets to its first asynchronous operation, it switches back to the reactor greenlet (which is the "main" greenlet in this case, but it doesn't really matter) to let the asynchronous operation complete. When it completes, corotwine turns the callback into a greenlet switch back into run. So run is granted the illusion of running straight through, like a "normal" synchronous program. Keep in mind that it is just an illusion, though.
So, it's possible to get as far away from the callback-oriented style that is most commonly used with Twisted as you want. It's not necessarily a good idea, though.
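The DeferredQueue mechanism that the Googler above relies on can be sketched in pure Python to show how it bridges the two styles: get() returns a deferred that fires immediately if a message is already waiting, and otherwise fires later when the next put() happens. This is an illustrative miniature, not Twisted's actual implementation:

```python
# Minimal stand-ins for Deferred and DeferredQueue, to show the mechanism.
class MiniDeferred:
    def __init__(self):
        self.callbacks, self.result, self.fired = [], None, False

    def addCallback(self, fn):
        if self.fired:
            fn(self.result)          # already fired: run callback now
        else:
            self.callbacks.append(fn)

    def callback(self, result):
        self.result, self.fired = result, True
        for fn in self.callbacks:
            fn(result)

class MiniDeferredQueue:
    def __init__(self):
        self.waiting, self.pending = [], []

    def get(self):
        d = MiniDeferred()
        if self.pending:
            d.callback(self.pending.pop(0))  # message already waiting
        else:
            self.waiting.append(d)           # fire on a future put()
        return d

    def put(self, msg):
        if self.waiting:
            self.waiting.pop(0).callback(msg)
        else:
            self.pending.append(msg)

q = MiniDeferredQueue()
out = []
q.get().addCallback(out.append)  # get before put: fires later
q.put("hello")
q.put("world")
q.get().addCallback(out.append)  # get after put: fires immediately
print(out)  # ['hello', 'world']
```

The greenlet approach in the answer simply hides the addCallback/switch plumbing behind blockOn, so the calling code looks sequential.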

How to wait for messages on multiple queues using py-amqplib

I'm using py-amqplib to access RabbitMQ in Python. The application receives requests to listen on certain MQ topics from time to time.
The first time it receives such a request it creates an AMQP connection and a channel and starts a new thread to listen for messages:
connection = amqp.Connection(host = host, userid = "guest", password = "guest", virtual_host = "/", insist = False)
channel = connection.channel()
listener = AMQPListener(channel)
listener.start()
AMQPListener is very simple:
class AMQPListener(threading.Thread):
    def __init__(self, channel):
        threading.Thread.__init__(self)
        self.__channel = channel

    def run(self):
        while True:
            self.__channel.wait()
After creating the connection it subscribes to the topic of interest, like this:
channel.queue_declare(queue = queueName, exclusive = False)
channel.exchange_declare(exchange = MQ_EXCHANGE_NAME, type = "direct", durable = False, auto_delete = True)
channel.queue_bind(queue = queueName, exchange = MQ_EXCHANGE_NAME, routing_key = destination)
def receive_callback(msg):
    self.queue.put(msg.body)
channel.basic_consume(queue = queueName, no_ack = True, callback = receive_callback)
The first time this all works fine. However, it fails on a subsequent request to subscribe to another topic. On subsequent requests I re-use the AMQP connection and AMQPListener thread (since I don't want to start a new thread for each topic) and when I call the code block above the channel.queue_declare() method call never returns. I've also tried creating a new channel at that point and the connection.channel() call never returns, either.
The only way I've been able to get it to work is to create a new connection, channel and listener thread per topic (ie. routing_key), but this is really not ideal. I suspect it's the wait() method that's somehow blocking the entire connection, but I'm not sure what to do about it. Surely I should be able to receive messages with several routing keys (or even on several channels) using a single listener thread?
A related question is: how do I stop the listener thread when that topic is no longer of interest? The channel.wait() call appears to block forever if there are no messages. The only way I can think of is to send a dummy message to the queue that would "poison" it, ie. be interpreted by the listener as a signal to stop.
If you want more than one consumer per channel, just attach another one using basic_consume() and then use channel.wait(). It will listen on all queues attached via basic_consume(). Make sure you define a different consumer tag for each basic_consume().
Use channel.basic_cancel(consumer_tag) if you want to cancel a specific consumer on a queue (i.e. to stop listening to a specific topic).
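The "dummy message" idea from the question is the classic poison-pill pattern for stopping a blocked listener thread. A stdlib sketch, with queue.Queue standing in for the AMQP channel, shows the shape of it:

```python
import queue
import threading

# A unique sentinel object tells the listener loop to exit instead of
# blocking forever in its wait-for-message call.
STOP = object()
received = []

def listener(q):
    while True:
        msg = q.get()
        if msg is STOP:        # poison pill: exit the loop cleanly
            break
        received.append(msg)

q = queue.Queue()
t = threading.Thread(target=listener, args=(q,))
t.start()
q.put('hello')
q.put(STOP)
t.join()
print(received)  # ['hello']
```

With AMQP the equivalent is publishing a recognizable shutdown message to the queue the listener consumes from, so that channel.wait() returns and the thread can check for it and exit.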
