Federated queues in 3.2.0
We've added support for federated queues in RabbitMQ 3.2.0. This blog post explains what they're for and how to use them.
So we've talked about how RabbitMQ 3.0 can break things, but that's not very positive. Let's have a look at some of the new features! Just some of them - quite a lot changed in 3.0, and we don't have all day...
RabbitMQ 3.0 includes a bunch of cool new features. But in order to implement some of them we needed to change a few things, so in this blog post I'm going to list those changes in case you need to do anything about them.
I've written a plugin for RabbitMQ that adds support for the MQTT 3.1 protocol. MQ Telemetry Transport is a light-weight PUB/SUB protocol designed for resource-constrained devices and limited bandwidth situations, making it ideally suited to sensors and mobile devices. The implementation is a protocol adapter plugin, allowing MQTT clients to connect to a RabbitMQ broker simultaneously with clients implementing other protocols. We encourage projects that demand the combination of a low-overhead protocol on a robust, scalable broker with high reliability and enterprise features to consider this option.
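Since the plugin is a standard MQTT 3.1 endpoint, any conforming client library can talk to it. Here is a minimal sketch using the MQTT.js client; the broker address, the default MQTT port 1883 and the topic name are illustrative assumptions, not taken from the post.

```typescript
import mqtt from "mqtt";

// Connect to the broker's MQTT listener (1883 is the plugin's usual default).
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  // Subscribe and publish at QoS 1 (at-least-once delivery).
  client.subscribe("sensors/temperature", { qos: 1 });
  client.publish("sensors/temperature", "21.5", { qos: 1 });
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});
```

Because the adapter maps MQTT topics onto the broker's AMQP routing (by default via the amq.topic exchange), messages published this way are also visible to AMQP clients on the same broker.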
For quite a while here at RabbitMQ headquarters, we were struggling to find a good way to expose messaging in a web browser. In the past we tried many things, ranging from the old-and-famous JsonRPC plugin (which basically exposes AMQP via AJAX), to Rabbit-Socks (an attempt to create a generic protocol hub), to the management plugin (which can be used for basic things like sending and receiving messages from the browser).
Over time we've learned that messaging on the web is very different from what we're used to. None of our attempts really addressed that, and it is likely that messaging on the web will not be a fully solved problem for some time yet.
That said, there is a simple thing RabbitMQ users keep asking about, and although not perfect, it's far from the worst way to do messaging in the browser: exposing STOMP over WebSockets.
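To give a flavour of what that looks like from the browser side, here is a minimal sketch using the @stomp/stompjs library; the WebSocket endpoint, credentials and destination are assumptions that depend on how the Web-STOMP plugin is configured.

```typescript
import { Client } from "@stomp/stompjs";

// Point a STOMP client at the Web-STOMP plugin's WebSocket endpoint
// (ws://localhost:15674/ws is a common default, but check your config).
const client = new Client({
  brokerURL: "ws://localhost:15674/ws",
  connectHeaders: { login: "guest", passcode: "guest" },
});

client.onConnect = () => {
  // "/queue/test" is an illustrative destination.
  client.subscribe("/queue/test", (message) => {
    console.log("received:", message.body);
  });
  client.publish({ destination: "/queue/test", body: "hello from the browser" });
};

client.activate();
```

The appeal is that the browser speaks plain STOMP frames over the WebSocket, so the same broker-side queues and exchanges serve both web and non-web clients.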
You have a queue in Rabbit. You have some clients consuming from that queue. If you don't set a QoS setting at all (basic.qos), then Rabbit will push all the queue's messages to the clients as fast as the network and the clients will allow. The consumers will balloon in memory as they buffer all the messages in their own RAM. The queue may appear empty if you ask Rabbit, but there may be millions of messages unacknowledged, sitting in the clients, ready for processing by the client application. If you add a new consumer, there are no messages left in the queue to send to it. Messages are just being buffered in the existing clients, and may sit there for a long time, even if other consumers become available that could process them sooner. This is rather suboptimal.
So, the default QoS prefetch setting gives clients an unlimited buffer, and that can result in poor behaviour and performance. But what should you set the QoS prefetch buffer size to? The goal is to keep the consumers saturated with work, but to minimise the client's buffer size so that more messages stay in Rabbit's queue and are thus available for new consumers or to just be sent out to consumers as they become free.
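To make the knob concrete, here is a sketch of setting a finite prefetch with the amqplib Node.js client; the queue name and the value 10 are placeholders, not a recommendation (the right value depends on your processing and round-trip times).

```typescript
import amqp from "amqplib";

async function main(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("work");

  // basic.qos: Rabbit will hold at most 10 unacknowledged messages
  // in flight to this consumer before pausing delivery.
  await ch.prefetch(10);

  await ch.consume(
    "work",
    (msg) => {
      if (msg === null) return;
      // ... process the message ...
      ch.ack(msg); // each ack frees a slot, so Rabbit sends the next message
    },
    { noAck: false }
  );
}

main().catch(console.error);
```

With an unbounded prefetch the `ch.prefetch(10)` line would simply be absent, and Rabbit would push everything it has.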
The previous release of RabbitMQ (2.7.0) brought with it a better way of managing plugins, one-stop URI connection for clients, thread-safe consumers in the Java client, and a number of performance improvements and bug-fixes. The latest release (2.7.1) is essentially a bug-fix release, though it also makes RabbitMQ compatible with Erlang R15B and enhances parts of the management interface. The previous release didn't get a blog post, so I've combined both releases in this one. (These are my own personal remarks and are NOT binding; errors of commission or omission are entirely my own -- Steve Powell.)
Since the new persister arrived in RabbitMQ 2.0.0 (yes, it's not so new anymore), Rabbit has had a relatively good story to tell about coping with queues that grow and grow and grow and reach sizes that prevent them from being held in RAM. Rabbit starts writing out messages to disk fairly early on, and continues to do so at a gentle rate so that by the time RAM gets really tight, we've done most of the hard work already and thus avoid sudden bursts of writes. Provided your message rates aren't too high or too bursty, this should all happen without any real impact on any connected clients.
Some recent discussion with a customer made us return to what we'd thought was a fairly solved problem, and prompted us to make some changes.
In RabbitMQ 2.6.0 we introduced Highly Available queues. These necessitated a new extension to AMQP, and a fair amount of documentation, but to date little has been written about how they work.
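For the record, the AMQP extension in question is the x-ha-policy queue argument: a queue declared with it is mirrored across cluster nodes. Here is a minimal declaration sketch using the amqplib Node.js client; the queue name is illustrative.

```typescript
import amqp from "amqplib";

async function declareMirroredQueue(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // x-ha-policy: "all" asks Rabbit to mirror the queue on every node
  // in the cluster; "nodes" (with x-ha-nodes) picks specific nodes instead.
  await ch.assertQueue("ha.test", {
    durable: true,
    arguments: { "x-ha-policy": "all" },
  });

  await ch.close();
  await conn.close();
}

declareMirroredQueue().catch(console.error);
```

Publishes go to the queue's master copy and are propagated to the mirrors, so a mirror can be promoted if the node hosting the master fails.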