In our application we have enabled exactly-once on both the Producer and the Consumer.
The Producer is a Python component. We have enabled:
idempotence
transactions (a new transactionId is used every time we send messages)
The Consumer is a Spring Boot application. We have enabled:
the read_committed isolation level
manual acknowledgement of messages
We have a multi-partition Kafka topic (let's say 3 partitions) on Confluent Cloud.
Our application design is as follows:
multiple Producer app instances
for performance, we have many Consumer app instances (currently around 24)
Problem:
We noticed that sometimes the same Kafka message is consumed more than once in the Consumer. We detected this with the consumer code below: we keep the previously consumed Kafka message id (with its offset) in Redis and compare it with each newly consumed message.
Consumer code:
@KafkaListener(topics = "${datalake.datasetevents.topic}", groupId = "${spring.kafka.consumer.group-id}")
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                   @Header(KafkaHeaders.OFFSET) String offset,
                   @Payload InputEvent inputEvent, Acknowledgment ack) {

    Event event = new Event();
    event.setCorrId(inputEvent.getCorrId());
    event.setQn(inputEvent.getQn());
    event.setCreatedTs(new Date());
    event.setEventTs(inputEvent.getEventTs());
    event.setMeta(inputEvent.getMeta() != null ? inputEvent.getMeta() : new HashMap<>());
    event.setType(inputEvent.getType());
    event.setUlid(key);

    // detect message duplications
    try {
        String eventRedisKey = "tg_e_d_" + key.toLowerCase();
        String redisVal = offset;
        String tmp = redisTemplateString.opsForValue().get(eventRedisKey);
        if (tmp != null) {
            dlkLogging.error("kafka_event_dup", "Event consumed more than once ulid:" + event.getUlid()
                    + " redis offset: " + tmp + " event offset:" + offset);
            redisTemplateString.delete(eventRedisKey);
        }
        redisTemplateString.opsForValue().set(eventRedisKey, redisVal, 30, TimeUnit.SECONDS);
    } catch (Exception e) {
        dlkLogging.error("kafka_consumer_redis", "Redis error at kafka consumer", e);
    }

    // process the message and ack
    try {
        eventService.saveEvent(persistEvent, event);
        ack.acknowledge();
    } catch (Exception ee) {
        // Refer: https://stackoverflow.com/questions/62413270/kafka-what-is-the-point-of-using-acknowledgment-nack-if-i-can-simply-not-ack
        ack.nack(1);
        dlkLogging.error("event_sink_error", "error sinking kafka event. Will retry", ee);
    }
}
Behavior:
We notice that "kafka_event_dup" is logged several times per day.
Error Message: Event consumed more than once ulid:01G77G8KNTSM2Q01SB1MK60BTH redis offset: 659238 event offset:659238
Question:
Why does the consumer read the same message even though we have configured exactly-once in both the Producer and the Consumer?
Update: after reading several SO posts, it seems we still need to implement deduplication logic on the Consumer side even though exactly-once is configured?
Additional Info:
Consumer configuration:
public DefaultKafkaConsumerFactory kafkaDatasetEventConsumerFactory(KafkaProperties properties) {
    Map<String, Object> props = properties.buildConsumerProperties();
    props.put(ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
    props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, CustomJsonDeserializer.class.getName());
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.fr.det.datalake.eventdriven.model.kafka.InputEvent");
    return new DefaultKafkaConsumerFactory(props);
}
Producer code (python):
def __get_producer(self):
    conf = {
        'bootstrap.servers': self.server,
        'enable.idempotence': True,
        'acks': 'all',
        'retry.backoff.ms': self.sleep_seconds * 100
    }
    if self.sasl_mechanism:
        conf['sasl.mechanisms'] = self.sasl_mechanism
    if self.security_protocol:
        conf['security.protocol'] = self.security_protocol
    if self.sasl_username:
        conf['sasl.username'] = self.sasl_username
    if self.sasl_password:
        conf['sasl.password'] = self.sasl_password
    if self.transaction_prefix:
        conf['transactional.id'] = self.__get_transaction_id()
    producer = Producer(conf)
    return producer
@_retry_on_error
def send_messages(self, messages, *args, **kwargs):
    ts = time.time()
    producer = kwargs.get('producer', None)
    if producer is not None:
        for message in messages:
            key = message.get('key', str(ulid.from_timestamp(ts)))
            value = message.get('value', None)
            topic = message.get('topic', self.topic)
            producer.produce(topic=topic,
                             value=value,
                             key=key,
                             on_delivery=self.acked)
        producer.commit_transaction(30)
def _retry_on_error(func, *args, **kwargs):
    def inner(self, messages, *args, **kwargs):
        attempts = 0
        while True:
            attempts += 1
            sleep_time = attempts * self.sleep_seconds
            try:
                producer = self.__get_producer()
                self.logger.info(f"Producer: {producer}, Attempt: {attempts}")
                producer.init_transactions(30)
                producer.begin_transaction()
                res = func(self, messages, *args, producer=producer, **kwargs)
                return res
            except KafkaException as e:
                if attempts <= self.retry_count:
                    if e.args[0].txn_requires_abort():
                        producer.abort_transaction(30)
                    time.sleep(sleep_time)
                    continue
                self.logger.error(str(e), exc_info=True, extra=extra)
                break
    return inner
Kafka exactly-once is essentially a Kafka Streams feature, although it can be used with regular consumers and producers as well.
Exactly-once can only be achieved in a context where your applications interact only with Kafka: there is no XA nor any other kind of distributed transaction across technologies that would let a Kafka consumer interact with some other storage (like Redis) in an exactly-once manner.
In a distributed world, we have to acknowledge that this is not desirable, since it introduces locking, contention, and exponentially degrading performance under load. If we don't need to be in a distributed world, then we don't need Kafka and many things become easier.
Transactions in Kafka are meant to be used within one application that is only interacting with Kafka: they let you guarantee that the app will 1) read from some topic partitions, 2) write some result to some other topic partitions and 3) commit the read offsets related to 1, or do none of those things. If several apps are put back-to-back and interact through Kafka in such a manner, then you could achieve exactly-once if you're very careful. If your consumer also needs to 4) interact with Redis and 5) interact with some other storage or perform some side effect somewhere (like sending an email), then there is in general no way to perform steps 1 to 5 atomically as part of a distributed application. You can achieve this kind of thing with other storage technologies (yes, Kafka is essentially a storage), but neither the storage nor your application can be distributed then. That's essentially what the CAP theorem tells us.
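For illustration only, here is a rough sketch of that read/process/write/commit-offsets pattern with the confluent-kafka Python client; the broker address, topic names, transactional.id and transform() are placeholder assumptions, not taken from the question:

from confluent_kafka import Consumer, Producer

consumer = Consumer({'bootstrap.servers': 'localhost:9092',   # placeholder broker
                     'group.id': 'etl',
                     'isolation.level': 'read_committed',
                     'enable.auto.commit': False})
producer = Producer({'bootstrap.servers': 'localhost:9092',
                     'transactional.id': 'etl-1'})            # placeholder transactional id

consumer.subscribe(['input'])
producer.init_transactions()

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    producer.begin_transaction()
    producer.produce('output', transform(msg.value()))        # 2) write the result (transform() is hypothetical)
    producer.send_offsets_to_transaction(                     # 3) commit the offsets read in 1)
        consumer.position(consumer.assignment()),
        consumer.consumer_group_metadata())
    producer.commit_transaction()                             # steps 1-3 succeed or fail together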
That's also why exactly-once is essentially a Kafka Streams thing: Kafka Streams is just a smart wrapper around the Kafka consumer/producer clients that lets you build applications that interact only with Kafka.
You can also achieve exactly-once stream processing with other data-processing frameworks, like Spark Streaming or Flink.
In practice it's often much simpler to not bother with transactions and just de-duplicate in the consumer. You have the guarantee that at most one consumer of the consumer group is connected to each partition at any point in time, so duplicates will always happen in the same instance of your app (until it re-scales), and, depending on your config, the duplication should typically only happen within one single Kafka consumer buffer, so you don't need to store much state in your consumer to de-duplicate. If you use some kind of event-id that can only increase, for example (which is essentially what the Kafka offset is, by the way, and that's no coincidence), then you just need to keep, in the state of each instance of your app, the maximum event-id per partition that you've successfully processed.
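A minimal sketch of that per-partition de-duplication with a plain confluent-kafka consumer; the broker, topic, group id and process() are placeholders, and the state is kept in memory here but could live in Redis instead:

from confluent_kafka import Consumer

consumer = Consumer({'bootstrap.servers': 'localhost:9092',   # placeholder broker
                     'group.id': 'my-group',
                     'enable.auto.commit': False,
                     'isolation.level': 'read_committed'})
consumer.subscribe(['my-topic'])

max_processed = {}   # highest offset successfully processed, per partition

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    if msg.offset() <= max_processed.get(msg.partition(), -1):
        continue                                  # duplicate delivery: already processed, skip
    process(msg)                                  # hypothetical side effect (DB write, etc.)
    max_processed[msg.partition()] = msg.offset()
    consumer.commit(msg)                          # commit only after processing succeeds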
I can see you have set ENABLE_AUTO_COMMIT_CONFIG to false, which means you have a manual commit process in place. If we do not commit the offsets of the messages we have read reliably, we will end up processing duplicate messages.
Kindly refer to section 4.6 of https://www.baeldung.com/kafka-exactly-once
Also, with processing.guarantee set to exactly_once, you do not need to set the following parameters explicitly:
isolation.level=read_committed
enable.idempotence=true
MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION=5
I'm writing an asyncio application to monitor prices of crypto markets and trade/order events, but for an unknown reason some streams stop receiving data after a few hours. I'm not familiar with the asyncio package and I would appreciate help in finding a solution.
Basically, the code below establishes websocket connections with a crypto exchange to listen to streams of six symbols (ETH/USD, BTC/USD, BNB/USD, ...) and to trade events from two accounts (user1, user2). The application uses the ccxtpro library. The public method watch_ohlcv gets price streams, while the private methods watchMyTrades and watchOrders get new order and trade events at the account level.
The problem is that one or several streams are interrupted after a few hours, and the response object becomes empty or None. I would like to detect and restart these streams after they stop working. How can I do that?
# tasks.py
@app.task(bind=True, name='Start websocket loops')
def start_ws_loops(self):
    ws_loops()

# methods.py
def ws_loops():

    async def method_loop(client, exid, wallet, method, private, args):
        exchange = Exchange.objects.get(exid=exid)
        if private:
            account = args['account']
        else:
            symbol = args['symbol']
        while True:
            try:
                if private:
                    response = await getattr(client, method)()
                    if method == 'watchMyTrades':
                        do_stuff(response)
                    elif method == 'watchOrders':
                        do_stuff(response)
                else:
                    response = await getattr(client, method)(**args)
                    if method == 'watch_ohlcv':
                        do_stuff(response)
                # await asyncio.sleep(3)
            except Exception as e:
                print(str(e))
                break
        await client.close()

    async def clients_loop(loop, dic):
        exid = dic['exid']
        wallet = dic['wallet']
        method = dic['method']
        private = dic['private']
        args = dic['args']
        exchange = Exchange.objects.get(exid=exid)
        parameters = {'enableRateLimit': True, 'asyncio_loop': loop, 'newUpdates': True}
        if private:
            log.info('Initialize private instance')
            account = args['account']
            client = exchange.get_ccxt_client_pro(parameters, wallet=wallet, account=account)
        else:
            log.info('Initialize public instance')
            client = exchange.get_ccxt_client_pro(parameters, wallet=wallet)
        mloop = method_loop(client, exid, wallet, method, private, args)
        await gather(mloop)
        await client.close()

    async def main(loop):
        lst = []
        private = ['watchMyTrades', 'watchOrders']
        public = ['watch_ohlcv']
        for exid in ['binance']:
            for wallet in ['spot', 'future']:
                # Private
                for method in private:
                    for account in ['user1', 'user2']:
                        lst.append(dict(exid=exid,
                                        wallet=wallet,
                                        method=method,
                                        private=True,
                                        args=dict(account=account)
                                        ))
                # Public
                for method in public:
                    for symbol in ['ETH/USD', 'BTC/USD', 'BNB/USD']:
                        lst.append(dict(exid=exid,
                                        wallet=wallet,
                                        method=method,
                                        private=False,
                                        args=dict(symbol=symbol,
                                                  timeframe='5m',
                                                  limit=1
                                                  )
                                        ))
        loops = [clients_loop(loop, dic) for dic in lst]
        await gather(*loops)

    loop = asyncio.new_event_loop()
    loop.run_until_complete(main(loop))
Let me share my experience, since I am dealing with the same problem.
In theory, CCXT is not expected to produce stalled streams after running for some time.
Unfortunately practice and theory are different, and error 1006 happens quite often. I am using Binance, OKX, Bitmex and BTSE (BTSE is not supported by CCXT) and my code runs on an AWS server, so I should not have any connection issues. Binance and OKX are the worst as far as error 1006 is concerned. Honestly, after researching it on Google, I have only understood that 1006 is a NetworkError and that CCXT tries to resubscribe the channel automatically. None of the other explanations I found online convinced me. If somebody could give me more info about this error I would appreciate it.
In any case, every time an exception is raised, I put it into an exception_list as a dictionary containing info like the time in ms, method, exchange, description, etc. The exception_list is then passed to a handle_exception method. If the list contains two 1006 exceptions within X time, handle_exception decides we are no longer in sync with market data and trading must stop: I cancel all my limit orders and emit a beep (calling for human intervention).
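A rough sketch of that bookkeeping, assuming a time window and a stop_trading_and_alert() routine that are purely illustrative:

import time

exception_list = []

def record_exception(exc, exchange, method):
    exception_list.append({
        'ts_ms': int(time.time() * 1000),
        'code': getattr(exc, 'code', None),   # e.g. 1006 for an abnormal websocket closure
        'exchange': exchange,
        'method': method,
        'description': str(exc),
    })
    handle_exception(exception_list)

def handle_exception(exceptions, window_ms=60_000):
    now = int(time.time() * 1000)
    recent_1006 = [e for e in exceptions
                   if e['code'] == 1006 and now - e['ts_ms'] < window_ms]
    if len(recent_1006) >= 2:
        # we are no longer in sync with market data: cancel limit orders, beep, stop trading
        stop_trading_and_alert()                  # hypothetical helper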
As for your second question:
restart these streams after they stop working, how can I do that
remember that you are Running Tasks Concurrently
If return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won't be cancelled and will continue to run.
Here you can find info about restarting an individual task in a gather().
In your case, since you are using a single exchange (Binance) and unsubscribe is not implemented in CCXT, you will have to close the connection and restart all the tasks. You can still use the example in the link above to automate it. In case you are using more than one exchange, you can design your code in a way that lets you close and restart only the exchange that failed.
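One hedged way to automate the restart is to wrap each stream in a small supervisor coroutine that rebuilds the client and reruns the loop whenever it exits, so gather() itself never has to be restarted; make_client and run_stream below are placeholders for your own factory and loop:

import asyncio

async def supervised(make_client, run_stream, retry_delay=5):
    # Keep one stream alive: rebuild the client and restart the loop if it stops.
    while True:
        client = make_client()
        try:
            await run_stream(client)      # returns or raises only when the stream stops
        except Exception as e:
            print(f"stream stopped: {e}, restarting in {retry_delay}s")
        finally:
            await client.close()
        await asyncio.sleep(retry_delay)

# usage sketch: instead of gathering method_loop(...) directly, gather
# supervised(<client factory for dic>, <coroutine running method_loop for dic>) for each dic in lst.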
Another option would be defining the tasks with more granularity in main, so that every task is related to a single, well defined exchange/user/method/symbol and every task subscribes to a single channel. This will result in more verbose and less elegant code, but it will help you catch the exception and restart only a specific coroutine.
I am obviously assuming that after error 1006 the channel status is unsubscribed.
final thought:
never leave a robot unattended
Professional market makers with a team of engineers working in London do not go to the pub while their algos (usually co-located within the exchange) execute thousands of trades.
I hope this helps you or, at least, points you in the right direction for handling exceptions and restarting tasks.
You need to use callbacks.
For example:
ws = self.ws = await websockets.connect(END_POINTS, compression=None)  # step 1
await self.ws.send(SEND_YOUR_SUBSCRIPTION_MESSAGES)                    # step 2
while True:
    response = await self.ws.recv()
    if response:
        await handler(response)
In the last line, await handler(response), you are sending the response to the handler().
This handler() is the callback; it is the function that actually consumes the data you receive from the exchange server.
In this handler(), you check whether the response is your desired data (bid/ask price etc.) or whether an exception like ConnectionClosedError was raised, in which case you restart the websocket by doing step 1 and step 2 again from within your handler.
So basically, in the callback method you either process the data or restart the websocket and pass the handler to it again to receive the responses.
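A hedged sketch of that loop-plus-restart idea, reusing END_POINTS, SEND_YOUR_SUBSCRIPTION_MESSAGES and handler() from the snippet above as placeholders:

import websockets

async def run(self):
    while True:
        try:
            self.ws = await websockets.connect(END_POINTS, compression=None)   # step 1
            await self.ws.send(SEND_YOUR_SUBSCRIPTION_MESSAGES)                # step 2
            while True:
                response = await self.ws.recv()
                if response:
                    await handler(response)        # the callback that consumes the data
        except websockets.exceptions.ConnectionClosed:
            continue                               # reconnect and resubscribe (steps 1 and 2 again)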
Hope this helps. I could not share the complete code as I need to clean it of sensitive business logic.
I am looking at RabbitPy for consuming and publishing messages. The consumption part is fine; however, I want to send messages back to a specific queue after doing something with the inbound message, and I cannot find any way to do this in the documentation.
https://rabbitpy.readthedocs.io/en/latest/api/message.html
Here is my consume code:
with rabbitpy.Connection('amqp://guest:guest@localhost:5672/%2f') as conn:
    with conn.channel() as channel:
        queue_read = rabbitpy.Queue(channel, QUEUE_NAME)
        queue_write = rabbitpy.Queue(channel, QUEUE_NAME_RESPONSE)
        # Exit on CTRL-C
        try:
            # Consume the message
            for body in queue_read:
                # do something on inbound...
Here is what it seems I can set when creating and sending a message:
message = rabbitpy.Message(channel, result)
message.publish(EXCHANGE, RESPONSE_ROUTING_KEY, mandatory=False)
There is nothing related to a queue. How can I send a message to a specific queue with a routing key?
In Pika I would use the following:
channel = cnn.channel()
channel.exchange_declare(exchange=EXCHANGE, durable=True)
channel.exchange_declare(exchange=EXCHANGE, type='direct', durable=True)
channel.queue_declare(queue=QUEUE_NAME_RESPONSE, durable=True)
channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME_RESPONSE)
channel.basic_publish(exchange=EXCHANGE, routing_key=RESPONSE_ROUTING_KEY, body=return_JSON,
                      properties=pika.BasicProperties(content_type='text/plain', delivery_mode=2))
channel.close()
How can I do this with RabbitPy?
I may have solved this by binding the routing key to the exchange and the queue. I have the following:
queue_write.bind(EXCHANGE, routing_key=RESPONSE_ROUTING_KEY, arguments=None)
Using EXCHANGE and RESPONSE_ROUTING_KEY, the queue is bound.
For example
We have two queues, Queue1 and Queue2, one exchange named Exchange, and routing keys
routingkey1 and routingkey2.
Exchange + routingkey1 ==> Queue1
Exchange + routingkey2 ==> Queue2
Assume that we have the above bindings.
When your publisher sends a message using Exchange + routingkey1, it will be sent to Queue1. If the publisher uses routingkey2, the message will be sent to Queue2.
You can create the necessary bindings using the RabbitMQ Management UI or from your code. More details can be found here.
In rabbitpy you can do something like this:
amqp = rabbitpy.AMQP(channel)
amqp.queue_bind(queue='Queue1', exchange='Exchange', routing_key='routingkey1')
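If you just want the message to land in one specific queue, another option is to publish to the default exchange ('') with the queue name as the routing key, since the default exchange routes on queue names; a minimal rabbitpy sketch with an illustrative queue name and payload:

import rabbitpy

with rabbitpy.Connection('amqp://guest:guest@localhost:5672/%2f') as conn:
    with conn.channel() as channel:
        # no explicit binding needed: the default exchange routes on the queue name
        rabbitpy.Queue(channel, 'QUEUE_NAME_RESPONSE').declare()
        message = rabbitpy.Message(channel, '{"result": "ok"}')
        message.publish('', 'QUEUE_NAME_RESPONSE')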
What is the easiest way to create a delay (or parking) queue with Python, Pika and RabbitMQ? I have seen similar questions, but none for Python.
I find this a useful idea when designing applications, as it allows us to throttle messages that need to be re-queued.
There is always the possibility that you will receive more messages than you can handle; maybe the HTTP server is slow, or the database is under too much stress.
I also found it very useful in scenarios where something goes wrong and there is zero tolerance for losing messages. Re-queuing messages that could not be handled may solve that, but it can also cause problems where the message is re-queued over and over again, potentially causing performance issues and log spam.
I found this extremely useful when developing my applications, as it gives you an alternative to simply re-queuing your messages. This can easily reduce the complexity of your code, and it is one of many powerful hidden features in RabbitMQ.
Steps
First we need to set up two basic channels, one for the main queue, and one for the delay queue. In my example at the end, I include a couple of additional flags that are not required but make the code more reliable, such as confirm delivery, delivery_mode and durable. You can find more information on these in the RabbitMQ manual.
After we have set up the channels, we add a binding to the main channel that we can use to send messages from the delay channel to our main queue.
channel.queue_bind(exchange='amq.direct',
                   queue='hello')
Next we need to configure our delay channel to forward messages to the main queue once they have expired.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
    'x-message-ttl': 5000,
    'x-dead-letter-exchange': 'amq.direct',
    'x-dead-letter-routing-key': 'hello'
})
x-message-ttl (Message - Time To Live)
This is normally used to automatically remove old messages in the queue after a specific duration, but by adding two optional arguments we can change this behaviour, and instead have this parameter determine how long messages will stay in the delay queue, in milliseconds.
x-dead-letter-routing-key
This variable allows us to transfer the message to a different queue once it has expired, instead of the default behaviour of removing it completely.
x-dead-letter-exchange
This variable determines which exchange is used to transfer the message from the hello_delay queue to the hello queue.
Publishing to the delay queue
When we are done setting up all the basic Pika parameters, you simply send a message to the delay queue using basic_publish.
delay_channel.basic_publish(exchange='',
                            routing_key='hello_delay',
                            body="test",
                            properties=pika.BasicProperties(delivery_mode=2))
Once you have executed the script you should see the following queues created in your RabbitMQ management module.
Example.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))

# Create normal 'Hello World' type channel.
channel = connection.channel()
channel.confirm_delivery()
channel.queue_declare(queue='hello', durable=True)

# We need to bind this channel to an exchange, that will be used to transfer
# messages from our delay queue.
channel.queue_bind(exchange='amq.direct',
                   queue='hello')

# Create our delay channel.
delay_channel = connection.channel()
delay_channel.confirm_delivery()

# This is where we declare the delay, and routing for our delay channel.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
    'x-message-ttl': 5000,  # Delay until the message is transferred, in milliseconds.
    'x-dead-letter-exchange': 'amq.direct',  # Exchange used to transfer the message from A to B.
    'x-dead-letter-routing-key': 'hello'  # Name of the queue we want the message transferred to.
})

delay_channel.basic_publish(exchange='',
                            routing_key='hello_delay',
                            body="test",
                            properties=pika.BasicProperties(delivery_mode=2))

print " [x] Sent"
You can use the official RabbitMQ plugin: x-delayed-message.
Firstly, download and copy the ez file into Your_rabbitmq_root_path/plugins
Secondly, enable the plugin (you do not need to restart the server):
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
Finally, publish your message with "x-delay" headers like:
headers.put("x-delay", 5000);
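The line above is Java; a rough pika equivalent, assuming the plugin is enabled and using illustrative exchange/queue names:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# the plugin adds the 'x-delayed-message' exchange type; 'x-delayed-type'
# tells it how to route once the delay has elapsed
channel.exchange_declare(exchange='delayed_exchange',
                         exchange_type='x-delayed-message',
                         arguments={'x-delayed-type': 'direct'})
channel.queue_declare(queue='hello', durable=True)
channel.queue_bind(exchange='delayed_exchange', queue='hello', routing_key='hello')

channel.basic_publish(exchange='delayed_exchange',
                      routing_key='hello',
                      body='test',
                      properties=pika.BasicProperties(headers={'x-delay': 5000}))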
Notice:
It does not ensure your message's safety: if your message expires while your rabbitmq-server is down, the message is unfortunately lost. So be careful when you use this scheme.
Enjoy it, and find more info in rabbitmq-delayed-message-exchange.
FYI, how to do this in Spring 3.2.x.
<rabbit:queue name="delayQueue" durable="true" queue-arguments="delayQueueArguments"/>

<rabbit:queue-arguments id="delayQueueArguments">
    <entry key="x-message-ttl">
        <value type="java.lang.Long">10000</value>
    </entry>
    <entry key="x-dead-letter-exchange" value="finalDestinationTopic"/>
    <entry key="x-dead-letter-routing-key" value="finalDestinationQueue"/>
</rabbit:queue-arguments>

<rabbit:fanout-exchange name="finalDestinationTopic">
    <rabbit:bindings>
        <rabbit:binding queue="finalDestinationQueue"/>
    </rabbit:bindings>
</rabbit:fanout-exchange>
NodeJS implementation.
Everything is pretty clear from the code.
I hope it will save somebody some time.
var ch = channel;

ch.assertExchange("my_intermediate_exchange", 'fanout', {durable: false});
ch.assertExchange("my_final_delayed_exchange", 'fanout', {durable: false});

// setup intermediate queue which will never be listened.
// all messages are TTLed so when they are "dead", they come to another exchange
ch.assertQueue("my_intermediate_queue", {
  deadLetterExchange: "my_final_delayed_exchange",
  messageTtl: 5000, // 5sec
}, function (err, q) {
  ch.bindQueue(q.queue, "my_intermediate_exchange", '');
});

ch.assertQueue("my_final_delayed_queue", {}, function (err, q) {
  ch.bindQueue(q.queue, "my_final_delayed_exchange", '');
  ch.consume(q.queue, function (msg) {
    console.log("delayed - [x] %s", msg.content.toString());
  }, {noAck: true});
});
A message in a RabbitMQ queue can be delayed in 2 ways
- using queue TTL
- using message TTL
If all messages in the queue are to be delayed for a fixed time, use queue TTL.
If each message has to be delayed by a varying time, use message TTL.
I have explained it below using Python 3 and the pika module.
The pika BasicProperties argument 'expiration' (in milliseconds) has to be set to delay the message in the delay queue.
After setting the expiration time, publish the message to a delayed_queue (not the actual queue where consumers are waiting to consume). Once the message in delayed_queue expires, it will be routed to the actual queue using the exchange 'amq.direct'.
def delay_publish(self, messages, queue, headers=None, expiration=0):
    """
    Connect to RabbitMQ and publish messages to the queue
    Args:
        queue (string): queue name
        messages (list or single item): messages to publish to rabbit queue
        expiration(int): TTL in milliseconds for message
    """
    delay_queue = "".join([queue, "_delay"])
    logging.info('Publishing To Queue: {queue}'.format(queue=delay_queue))
    logging.info('Connecting to RabbitMQ: {host}'.format(
        host=self.rabbit_host))
    credentials = pika.PlainCredentials(
        RABBIT_MQ_USER, RABBIT_MQ_PASS)
    parameters = pika.ConnectionParameters(
        rabbit_host, RABBIT_MQ_PORT,
        RABBIT_MQ_VHOST, credentials, heartbeat_interval=0)
    connection = pika.BlockingConnection(parameters)

    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(exchange='amq.direct',
                       queue=queue)

    delay_channel = connection.channel()
    delay_channel.queue_declare(queue=delay_queue, durable=True,
                                arguments={
                                    'x-dead-letter-exchange': 'amq.direct',
                                    'x-dead-letter-routing-key': queue
                                })

    properties = pika.BasicProperties(
        delivery_mode=2, headers=headers, expiration=str(expiration))

    if type(messages) not in (list, tuple):
        messages = [messages]

    try:
        for message in messages:
            try:
                json_data = json.dumps(message)
            except Exception as err:
                logging.error(
                    'Error Jsonify Payload: {err}, {payload}'.format(
                        err=err, payload=repr(message)), exc_info=True
                )
                if (type(message) is dict) and ('data' in message):
                    message['data'] = {}
                    message['error'] = 'Payload Invalid For JSON'
                    json_data = json.dumps(message)
                else:
                    raise
            try:
                delay_channel.basic_publish(
                    exchange='', routing_key=delay_queue,
                    body=json_data, properties=properties)
            except Exception as err:
                logging.error(
                    'Error Publishing Data: {err}, {payload}'.format(
                        err=err, payload=json_data), exc_info=True
                )
                raise
    except Exception:
        raise
    finally:
        logging.info(
            'Done Publishing. Closing Connection to {queue}'.format(
                queue=delay_queue
            )
        )
        connection.close()
Depending on your scenario and needs, I would recommend the following approaches:
Use the official plugin, https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/, but it has a capacity issue if the total count of delayed messages exceeds a certain number (https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/issues/72), it does not have a high-availability option, and it can lose data when a message's delay elapses during an MQ restart.
Implement a set of cascading delayed queues just like NServiceBus did (https://docs.particular.net/transports/rabbitmq/delayed-delivery).
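A highly simplified, single-level sketch of that idea in pika (NServiceBus chains multiple levels; the queue names, TTL tiers and target queue below are illustrative): declare a few delay queues with fixed TTLs that all dead-letter into the real queue, then publish to the tier closest to the delay you need.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='work', durable=True)
channel.queue_bind(exchange='amq.direct', queue='work', routing_key='work')

# one delay tier per fixed TTL, all dead-lettering into the 'work' queue
for ttl_ms in (1000, 10000, 60000):
    channel.queue_declare(queue='work_delay_%d' % ttl_ms, durable=True, arguments={
        'x-message-ttl': ttl_ms,
        'x-dead-letter-exchange': 'amq.direct',
        'x-dead-letter-routing-key': 'work',
    })

def publish_delayed(body, delay_ms):
    # pick the largest tier that does not exceed the requested delay
    tier = max([t for t in (1000, 10000, 60000) if t <= delay_ms] or [1000])
    channel.basic_publish(exchange='', routing_key='work_delay_%d' % tier, body=body,
                          properties=pika.BasicProperties(delivery_mode=2))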
As a simple example, I'm adding 5 items to a new RabbitMQ (v2.6.1) queue:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='not.my.real.server.net'))
channel = connection.channel()
channel.queue_declare(queue='dw.neil', durable=True)

# add 5 messages to the queue, the numbers 1-5
for x in range(5):
    message = x + 1
    channel.basic_publish(exchange='', routing_key='dw.neil', body=str(message))
    print " [x] Sent '%s'" % message

connection.close()
I purge my queue and then run the above code to add the 5 items:
nkodner@hadoop4 sports_load_v2$ python send_5.py
[x] Sent '1'
[x] Sent '2'
[x] Sent '3'
[x] Sent '4'
[x] Sent '5'
Now, I'm trying to simulate failed processing, given the following code to consume from the queue. Notice that I have the call to basic_ack commented out:
#!/usr/bin/env python
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(host='not.my.real.server.net'))
channel = connection.channel()
channel.queue_declare(queue='dw.neil',durable=True)
method_frame, header_frame, body = channel.basic_get(queue='dw.neil')
print method_frame, header_frame
print "body: %s" % body
#channel.basic_ack(delivery_tag=method_frame.delivery_tag)
connection.close()
I run the receiving code to grab an item off the queue. As I'd expect, I get item #1:
nkodner@hadoop4 sports_load_v2$ python r5.py
<Basic.GetOk(['message_count=9', 'redelivered=False', 'routing_key=dw.neil', 'delivery_tag=1', 'exchange='])>
<BasicProperties([])>
body: 1
Since the call to channel.basic_ack() is commented out, I would expect the unacknowledged message to be placed on the queue so that the next consumer will get it. I would hope that message #1 is the first message (again) out of the queue, with the Redelivered property set to True. Instead, message #2 is received:
nkodner@hadoop4 sports_load_v2$ python r5.py
<Basic.GetOk(['message_count=9', 'redelivered=False', 'routing_key=dw.neil', 'delivery_tag=1', 'exchange='])>
<BasicProperties([])>
body: 2
And all of the other messages in the queue are received before #1 comes back with the Redelivered flag set to True:
...
nkodner@hadoop4 sports_load_v2$ python r5.py
<Basic.GetOk(['message_count=9', 'redelivered=False', 'routing_key=dw.neil', 'delivery_tag=1', 'exchange='])>
<BasicProperties([])>
body: 5
nkodner@hadoop4 sports_load_v2$ python r5.py
<Basic.GetOk(['message_count=9', 'redelivered=True', 'routing_key=dw.neil', 'delivery_tag=1', 'exchange='])>
<BasicProperties([])>
body: 1
Are there any properties or options I could be setting so that I keep getting #1 delivered until it's acknowledged?
My use-case is loading a data warehouse with sequentially-generated files. We're using message-based processing to let my program know some new files are ready and are to be loaded into the DW. We have to process the files in the order that they're generated.
This has been addressed in RabbitMQ 2.7.0 - we were running 2.6.1.
From the release notes:
New features in this release include:
order preserved of messages re-queued for a consumer
Try using channel.basic_reject - this should push the unacknowledged message back to RabbitMQ, which will treat it as a new message. Also, if you have a failed message stuck, you can use channel.basic_recover to tell RabbitMQ to redeliver all non-acknowledged messages.
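A minimal sketch of that flow with pika against the dw.neil queue; load_into_dw() is a placeholder for your own processing:

method_frame, header_frame, body = channel.basic_get(queue='dw.neil')
if method_frame:
    try:
        load_into_dw(body)                    # hypothetical processing step
        channel.basic_ack(delivery_tag=method_frame.delivery_tag)
    except Exception:
        # requeue=True puts the message back on the queue instead of discarding it
        channel.basic_reject(delivery_tag=method_frame.delivery_tag, requeue=True)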
http://pika.github.com/communicating.html#channel.Channel.basic_reject
http://pika.github.com/communicating.html#channel.Channel.basic_recover
http://www.rabbitmq.com/extensions.html#negative-acknowledgements provides distinguishing information on Basic.Reject vs Basic.Nack.
Message ordering semantics are explained at http://www.rabbitmq.com/semantics.html