Queue-Based Messaging in Windows Azure

A typical messaging solution exchanges data between its distributed components through message queues: publishers place messages on queues, and subscribers receive them for processing. A subscriber can be implemented as a single-threaded or multi-threaded process, either running continuously or started on demand.

At a high level, there are two primary queuing mechanisms that enable a queue listener (receiver) to receive messages stored on a queue:

Polling or poll-based model: A listener monitors a queue by checking it at regular intervals. The listener is typically part of a worker role instance, and its main processing logic is a loop in which messages are dequeued and dispatched for processing. The listener polls the queue periodically until it is notified to exit the loop. Note that the Windows Azure pricing model counts storage transactions for every request performed against the queue, regardless of whether the queue is empty.
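The poll-based loop can be sketched as follows. `QueueClient` here is a hypothetical in-memory stand-in for the Windows Azure queue API, not a real SDK type:

```python
import time

class QueueClient:
    """Hypothetical stand-in for a Windows Azure queue, backed by a list."""
    def __init__(self, messages=None):
        self._messages = list(messages or [])

    def get_message(self):
        # In the real service, each call is a billable storage transaction,
        # whether or not a message is returned.
        return self._messages.pop(0) if self._messages else None

def poll(queue, handle, should_stop, interval=1.0):
    """Dequeue and dispatch messages until told to exit the loop."""
    while not should_stop():
        msg = queue.get_message()
        if msg is not None:
            handle(msg)
        else:
            time.sleep(interval)  # queue empty: wait before the next poll
```

The `should_stop` callable models the exit notification mentioned above; a production listener would typically check a cancellation flag set during role shutdown.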

Triggering or push-based model: A listener subscribes to an event that is triggered, either by the publisher or by the queue service manager, whenever a message arrives on a queue. The listener then dispatches the message for processing, so it does not have to poll the queue to determine whether new work is available. A notification can be pushed to the queue listeners for every new message, only when the first message arrives on an empty queue, or when the queue depth reaches a certain level. When using Windows Azure Service Bus, the number of messaging entities involved (such as queues and topics) should be taken into account.
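The push-based model can be sketched with a simple in-process callback registration. The `QueueService` class below is an illustrative stand-in for a queue service manager that raises the trigger; it is not an Azure API:

```python
class QueueService:
    """Illustrative queue service that notifies subscribers on arrival."""
    def __init__(self):
        self._messages = []
        self._subscribers = []

    def subscribe(self, callback):
        # The listener registers for the trigger instead of polling.
        self._subscribers.append(callback)

    def publish(self, message):
        self._messages.append(message)
        # Push a notification for every new message; a variant might
        # notify only when the first message arrives on an empty queue.
        for notify in self._subscribers:
            notify()

    def get_message(self):
        return self._messages.pop(0) if self._messages else None

received = []
service = QueueService()
# On notification, the listener dequeues and dispatches the message.
service.subscribe(lambda: received.append(service.get_message()))
service.publish("order-42")
```

In a real deployment the notification would cross process boundaries (for example, via a Service Bus subscription) rather than being an in-process callback.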

Best Practices for Optimizing Transaction Costs

In a queue-based messaging solution, the volume of storage transactions can be reduced using a combination of the following methods:

  1. Group related messages into a single larger batch, compress the batch, and store the compressed image in blob storage, keeping a reference to the blob in the queue.
  2. Batch multiple messages together in a single storage transaction. The GetMessages method in the Queue Service API enables dequeuing a specified number of messages in a single transaction.
  3. While polling, avoid aggressive polling intervals and implement a back-off delay that increases the time between polling requests if a queue remains continuously empty.
  4. Reduce the number of queue listeners: when using a poll-based model, use only one queue listener per role instance when a queue is empty. To reduce the number of queue listeners per role instance to zero, use a notification mechanism to instantiate queue listeners only when the queue receives work items.
  5. If queues remain empty for most of the time, automatically scale down the number of role instances and continue to monitor relevant system metrics to determine if and when the application should scale up the number of role instances to handle increasing workload.
  6. Use a combination of polling and push-based notifications, enabling the listeners to subscribe to a notification event (trigger) that is raised under certain conditions to indicate that new work has been placed on the queue.
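Method 1 can be sketched with Python's standard gzip module; the dict and list below are stand-ins for blob storage and the queue, and the naming scheme is an illustrative choice:

```python
import gzip
import json
import uuid

blob_store = {}   # stand-in for Windows Azure blob storage
queue = []        # stand-in for the message queue

def enqueue_batch(messages):
    """Compress a batch of related messages into one blob and
    enqueue only a small reference to it."""
    blob_name = str(uuid.uuid4())
    blob_store[blob_name] = gzip.compress(json.dumps(messages).encode())
    queue.append(blob_name)  # the queue carries just the blob reference

def dequeue_batch():
    """Resolve the queued reference and decompress the batch."""
    blob_name = queue.pop(0)
    return json.loads(gzip.decompress(blob_store[blob_name]))
```

One queue transaction now covers the whole batch, and the bulk of the payload lives in cheaper blob storage.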
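Batched dequeuing (method 2) can be sketched as follows. The `get_messages` method mirrors the batch semantics described for GetMessages but is implemented here against an in-memory list:

```python
class BatchQueue:
    """In-memory sketch of a queue that supports batched dequeues."""
    def __init__(self, messages):
        self._messages = list(messages)

    def get_messages(self, count):
        # A single storage transaction retrieves up to `count` messages,
        # instead of one transaction per message.
        batch = self._messages[:count]
        self._messages = self._messages[count:]
        return batch

q = BatchQueue(["m1", "m2", "m3", "m4", "m5"])
batch = q.get_messages(3)  # one transaction, up to three messages
```

Dequeuing 32 messages per call (the maximum the Queue Service API allows per request) rather than one cuts the transaction count for a busy queue by a factor of up to 32.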
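The back-off delay in method 3 might be computed as follows; the minimum, cap, and multiplier are illustrative choices, not values prescribed by Windows Azure:

```python
def next_delay(current, queue_was_empty, minimum=0.5, maximum=30.0, factor=2.0):
    """Grow the polling interval while the queue stays empty,
    and reset it as soon as a message is seen."""
    if not queue_was_empty:
        return minimum                     # work found: poll eagerly again
    return min(current * factor, maximum)  # empty: back off, up to a cap
```

The cap keeps worst-case latency bounded once the queue receives work again, while the exponential growth quickly reduces transaction charges during long idle periods.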