Message Queue supports the use of broker clusters: groups of brokers working together to provide message delivery services to clients. Clusters enable an administrator to scale messaging operations with the volume of message traffic by distributing client connections among multiple brokers.
This chapter discusses the architecture and internal functioning of such broker clusters. It covers the following topics:
For complete information about administering broker clusters, see Chapter 8, Broker Clusters, in Sun Java System Message Queue 4.1 Administration Guide. For information about the effect of reconnection on the client, see Connection Event Notification and Client Connection Failover (Auto-Reconnect) in Sun Java System Message Queue 4.1 Developer’s Guide for Java Clients.
Message Queue offers two cluster architectures, depending on the degree of availability desired.
Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster, but they may be unable to access some data (such as persistent messages held in the failed broker's store) while connected to the alternate broker.
High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to that broker are automatically reconnected to another broker in the cluster, which takes over the failed broker's store. Clients continue to operate, with all persistent data available to the new broker at all times.
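In both models, whether and how a client reconnects after a broker failure is governed by attributes on the client's connection factory. The following sketch shows typical attribute settings (host names, ports, and values are illustrative; check the Administration Guide for the exact attributes supported by your release):

```
imqAddressList = mq://host1:7676/jms,mq://host2:7676/jms
imqReconnectEnabled = true
imqReconnectAttempts = 3
imqReconnectInterval = 5000
```

With imqReconnectEnabled set to true, the client runtime attempts to reconnect to the brokers listed in imqAddressList when its connection fails.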
The ability of brokers to work together in a cluster is provided by a cluster connection service. This service is configured using broker properties; which properties you configure depends on the model you want to use.
Each cluster model is described next and the sections that follow describe additional concerns and tasks that you need to consider when working with clusters. The chapter ends with a summary of the differences between the two models.
Figure 4–1 shows Message Queue’s architecture for conventional broker clusters. Each broker within a cluster is directly connected to all the others. Each client (message producer or consumer) has a single home broker with which it communicates directly, sending and receiving messages as if that broker were the only one in the cluster. Behind the scenes, the home broker works in concert with the other brokers to provide delivery services for all connected clients.
In a cluster, service availability depends on brokers being able to share information about destinations and durable subscribers. If a clustered broker fails, it is possible that this state information gets out of sync. To guard against this possibility, you can designate one broker within the cluster as the master broker. The master broker maintains a configuration change record to track changes to the cluster’s persistent entities (destinations and durable subscriptions). This record is used to propagate such change information to brokers that were offline when the changes occurred.
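For example, a conventional cluster with a designated master broker might be configured with broker properties along these lines (host names and ports are illustrative):

```
# Brokers participating in the conventional cluster
imq.cluster.brokerlist = host1:7676,host2:7676,host3:7676
# Broker that maintains the configuration change record
imq.cluster.masterbroker = mq://host1:7676
```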
Following a discussion of the high availability model, this chapter explains how message delivery takes place within a cluster and how the brokers are configured and synchronized.
High availability clusters provide both service and data availability.
As in a conventional cluster, each broker is directly connected to all the others, and each client (message producer or consumer) has a single home broker with which it communicates directly, sending and receiving messages as if that broker were the only one in the cluster.
All brokers in a high availability cluster share a common JDBC-based persistent data store that holds dynamic state information (destinations, persistent messages, durable subscriptions, open transactions, and so on) for each broker. If a broker in the cluster fails, another broker takes over the failed broker's lock in the persistent store. Clients connected to the failed broker are reconnected to the broker that has taken over the failed broker's store. The broker that takes over the connection becomes the client's new home broker.
Figure 4–2 shows three brokers connected into a high availability cluster. The dotted line represents the cluster connection service. If Broker 1 fails, or the connection (C1) between the clients and Broker 1 is broken, clients are reconnected to Broker 3 over a new connection (C2). Note that all brokers belonging to the high availability cluster are connected to the same highly available database.
To configure a high availability cluster, you set cluster configuration properties for each broker in the cluster. These properties specify the cluster identifier, the broker's identifier within the cluster, and the protocol governing the failover process.
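A sketch of the broker properties involved (identifiers and the database vendor shown are illustrative; consult the Administration Guide for the exact property set in your release):

```
# Enable high availability clustering
imq.cluster.ha = true
# Same value for every broker in the cluster
imq.cluster.clusterid = haCluster1
# Unique to each broker instance
imq.brokerid = broker1
# All brokers share one JDBC-based persistent store
imq.persist.store = jdbc
imq.persist.jdbc.dbVendor = hadb
```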
In a cluster configuration using either model, brokers share information about destinations and message consumers. Each broker knows the following:
The name, type, and attributes of all physical destinations in the cluster
The name, location, and interests of each message consumer
Updates (deletions, additions, or reconfiguration) to the above
This allows each broker to route messages from its own directly connected message producers to remote message consumers. The home broker of a producer has different responsibilities from the home broker of the consumer:
The producer’s home broker is responsible for persisting and routing messages originating from that producer, for logging, for managing transactions, and for processing acknowledgements from consuming clients.
The consumer’s home broker is responsible for persisting information about consumers, for forwarding the message to the consumer, and for letting the producer’s broker know whether the consumer is still available and whether the message was successfully consumed.
Clustered brokers work together to minimize message traffic within the cluster; for example, if a remote broker has two identical subscriptions for the same topic destination, the message is sent over the wire only once. You can further reduce traffic by setting a destination property specifying that delivery to local consumers has priority over delivery to remote consumers.
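For example, local delivery preference might be set on a queue destination with the imqcmd administration utility (the destination name here is illustrative):

```
imqcmd update dst -t q -n MyQueue -o localDeliveryPreferred=true
```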
If secure, encrypted message delivery between client and broker is required, you can configure a cluster to provide secure delivery of messages between brokers.
Attributes set for a physical destination on a clustered broker apply to all instances of that destination in the cluster; however, some limits specified by these attributes apply to the cluster as a whole and others to individual destination instances. This behavior is the same for both clustering models. Table 4–1 lists the attributes you can set for a physical destination and specifies their scope.
Table 4–1 Properties for Physical Destinations on Clustered Brokers
Property Name | Scope
---|---
maxNumMsgs | Per broker. Distributing producers across a cluster therefore allows you to raise the limit on total unconsumed messages.
maxTotalMsgBytes | Per broker. Distributing producers across a cluster therefore allows you to raise the limit on total memory reserved for unconsumed messages.
limitBehavior | Global
maxBytesPerMsg | Global
maxNumProducers | Per broker
maxNumActiveConsumers | Global
maxNumBackupConsumers | Global
consumerFlowLimit | Global
localDeliveryPreferred | Global
isLocalOnly | Global
useDMQ | Per broker
How a destination is created (by an administrator, automatically, or as a temporary destination) determines how the destination is propagated in a cluster and how it is handled in the event of connection or broker failure. This behavior is the same for both cluster models. The following subsections examine a few use cases showing when a destination is created and how it is replicated.
The figure below shows how destinations are created and replicated when a client produces to a queue and uses the reply-to model.
The administrator creates the physical destination QW. The queue is replicated throughout the cluster at creation time.
Producer ProdQW sends a message to queue QW and uses the reply-to model, directing replies to temporary queue TempQ1W. (The temporary queue is created and replicated when an application creates a temporary destination and adds a consumer.)
The home broker, BrokerW, persists the message sent to QW and routes the message to the first active consumer that meets the selection criteria for this message. Depending on which consumer is ready to receive the message, the message is delivered either to consumer C1QW (on BrokerX) or to consumer C2QW (on BrokerZ). The consumer receiving the message sends a reply to the destination TempQ1W.
The next figure shows how destinations are created and replicated in the case of a producer that sends a message to a destination that does not exist and has to be automatically created.
Producer ProdAutoQY sends a message to a destination AutoQY that does not exist on the broker.
The broker persists the message and creates destination AutoQY.
Auto-created destinations are not automatically replicated across the cluster. Only when a consumer elects to receive messages from the queue AutoQY does that consumer’s home broker create the destination AutoQY and convey the message to the consumer. At the point where a consumer causes the destination to be auto-created, the destination is replicated across the cluster. In this example, replication takes place when the consumer CAutoQY creates the destination.
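Whether brokers auto-create destinations at all is controlled by broker configuration properties; a sketch follows (property names as commonly documented for Message Queue; defaults may vary by release):

```
# Allow brokers to auto-create queues and topics on first use
imq.autocreate.queue = true
imq.autocreate.topic = true
```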
The following figure shows how destinations are created and replicated in a cluster when a client publishes a message to a topic destination that is created by the administrator.
The administrator creates the physical topic destination TY. The admin-created destination TY is replicated throughout the broker cluster (before the destination is used).
Publisher PubTY sends a message to topic TY.
The home broker, BrokerY, persists any messages published to TY and routes them to all topic subscribers that match the selection criteria for this message. In this example, C1TY and C2TY are subscribed to topic TY.
Table 4–2 explains how different kinds of destinations are replicated and deleted in a cluster.
Table 4–2 Handling Destinations in a Cluster
Depending on the clustering model used, you must specify appropriate broker properties to enable the Message Queue service to manage the cluster. This information is specified by a set of cluster configuration properties. Some of these properties must have the same value for all brokers in the cluster; others must be specified for each broker individually. It is recommended that you place all configuration properties that must be the same for all brokers in one central cluster configuration file that each broker references at startup. This ensures that all brokers share the same cluster configuration information.
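For example, each broker can be pointed at the shared file with the imq.cluster.url property (the file location shown is illustrative):

```
# In each broker's instance configuration file
imq.cluster.url = file:/space/mq/cluster.properties
```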
See Configuring Clusters in Sun Java System Message Queue 4.1 Administration Guide for detailed information on cluster configuration properties.
Although the cluster configuration file was originally intended for configuring clusters, it is also a convenient place to store other (non-cluster-related) properties that are shared by all brokers in a cluster.
Whenever a cluster’s configuration is changed, information about the change is automatically propagated to all brokers in the cluster. A cluster configuration changes when one of the following events occurs:
A destination on one of the cluster’s brokers is created or destroyed.
The properties of a destination are changed.
A message consumer is registered with its home broker.
A message consumer is disconnected from its home broker (whether explicitly or through failure of the client, the broker, or the network).
A message consumer establishes a durable subscription to a topic.
Information about these changes is propagated immediately to all brokers in the cluster that are online at the time of the change. However, a broker that is offline (one that has crashed, for example) will not receive notice of the change when it occurs. How such a broker is resynchronized with the cluster depends on the clustering model used.
Using high availability clustering, synchronization is enabled by the shared persistent store. When a broker that has been offline rejoins the cluster (or when a new broker is added to the cluster) it is able to access the most current information simply by accessing the shared persistent database.
Using conventional clustering, to accommodate offline brokers, the Message Queue service maintains a configuration change record for the cluster, recording all persistent entities (destinations and durable subscriptions) that have been created or destroyed. When an offline broker comes back online (or when a new broker is added to the cluster), it consults this record for information about destinations and durable subscribers, then exchanges information with other brokers about currently active message consumers.
One broker in the cluster, designated as the master broker, is responsible for maintaining the configuration change record. Because other brokers cannot complete their initialization without the master broker, it should always be the first broker started within the cluster. If the master broker goes offline, configuration information cannot be propagated throughout the cluster, because other brokers cannot access the configuration change record. Under these conditions, you will get an exception if you try to create, reconfigure, or destroy a destination or a durable subscription or attempt a related operation such as reactivating a durable subscription. (Non-administrative message delivery continues to work normally, however.) The use of a master broker and a configuration change record is optional. They are only required if you are concerned with cluster synchronization after cluster configuration changes or a broker failure.
The following table summarizes the differences between the two models. Use this information in deciding which model to use or in switching from one model to another.
Table 4–3 Clustering Model Differences
Functionality | Conventional | High Availability
---|---|---
Performance | Slightly faster than the high availability model. | Slightly slower than the conventional model.
Service availability | Yes, but some operations are not possible while the master broker is down. | Yes.
Data availability | No; data held by a broker is unavailable while that broker is down. | Yes, at all times.
Transparent failover recovery | May not be possible if failover occurs during a commit; this is rare. | May not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster; this is extremely rare.
Configuration | Done by setting appropriate cluster configuration broker properties. | Done by setting appropriate cluster configuration broker properties.
Additional requirements | None. | A highly available database.