As the number of clients or the number of connections grows, you may need to scale the message service to eliminate bottlenecks or to improve performance. The Message Queue message service offers a number of scaling options, depending on your needs. These may be conveniently sorted into the following categories:
Vertical scaling is achieved by adding more processing power and by expanding available resources. You can do this by adding more processors or memory, by switching to a shared thread model, or by running the Java VM in 64-bit mode.
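For example, you can give the broker a larger Java heap, or run it under a 64-bit VM, by passing arguments through to the broker's Java VM when you start it. The following command line is only a sketch: the heap sizes are illustrative, and the -d64 flag applies to VMs that select 64-bit mode at startup.

    imqbrokerd -vmargs "-d64 -Xms1024m -Xmx4096m"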
If you are using the point-to-point domain, you can scale the consumer side by allowing multiple consumers to access a queue. Using this approach, you can specify the maximum number of active and backup consumers for the queue. The load-balancing mechanism also takes into account each consumer's current capacity and message processing rate. Queue delivery to multiple consumers is a Message Queue-specific feature. (The JMS specification defines messaging behavior only when a single consumer accesses a queue; behavior for queues that allow more than one consumer is provider-specific. The Message Queue developer guides provide more information about this scaling option.)
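As a simple illustration, the following JMS client sketch attaches two consumers to the same queue so that the broker can distribute the queue's messages between them. The queue name and the listener bodies are placeholders, and the connection factory is assumed to be supplied by the caller; the maximum numbers of active and backup consumers themselves are set administratively on the destination, not in client code.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Session;

    public class QueueWorkers {
        // Attach two consumers to the same queue. With multiple consumers
        // allowed on the destination, the broker distributes the queue's
        // messages among whichever consumers are currently active.
        public static void start(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            Session session1 = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session session2 = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            MessageConsumer consumer1 =
                session1.createConsumer(session1.createQueue("ordersQueue"));
            MessageConsumer consumer2 =
                session2.createConsumer(session2.createQueue("ordersQueue"));

            consumer1.setMessageListener(new MessageListener() {
                public void onMessage(Message message) { /* process the message */ }
            });
            consumer2.setMessageListener(new MessageListener() {
                public void onMessage(Message message) { /* process the message */ }
            });

            connection.start();   // begin delivery to both consumers
        }
    }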
Stateless horizontal scaling is achieved by using additional brokers and redistributing existing clients to these brokers. This approach is easy to implement, but it is appropriate only if your messaging operations can be divided into independent work groups.
Stateful horizontal scaling is achieved by connecting brokers into a cluster. In a broker cluster, each broker is connected to every other broker in the cluster as well as to its local application clients. Brokers can be on the same host or distributed across a network. Information about destinations and consumers is replicated on all the brokers in the cluster, and updates to destinations or subscribers are also propagated. Each broker can therefore route messages from the producers to which it is directly connected to consumers that are connected to other brokers in the cluster. In situations where backup consumers are used, if one broker or connection fails, messages sent to inaccessible consumers can be forwarded to a backup consumer on another broker.
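Because any broker in the cluster can deliver messages that originate elsewhere in the cluster, clients are typically given the addresses of several brokers so that they can connect, or reconnect after a failure, to whichever broker is reachable. The following sketch uses the Message Queue client runtime's address-list and reconnect connection factory properties; the host names and port numbers are hypothetical.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import com.sun.messaging.ConnectionConfiguration;
    import com.sun.messaging.ConnectionFactory;

    public class ClusterAwareClient {
        public static Connection connect() throws JMSException {
            ConnectionFactory factory = new ConnectionFactory();
            // List more than one broker in the cluster; the client runtime
            // tries the addresses in turn when connecting or reconnecting.
            factory.setProperty(ConnectionConfiguration.imqAddressList,
                                "mq://hostA:7676,mq://hostB:7676");
            factory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");

            Connection connection = factory.createConnection();
            connection.start();
            return connection;
        }
    }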
In the event of broker or connection failure, state information about persistent entities (destinations and durable subscriptions) can get out of sync. For example, if a clustered broker goes down and a destination is created on another broker in the cluster, when the first broker restarts, it will not know about the new destination. Message Queue uses two different models to resolve this problem: conventional clustering and high availability clustering.
Using conventional clustering, you set broker properties to designate one broker in the cluster to be the master broker. This broker is responsible for tracking all changes to destinations and durable subscriptions in a master configuration file and for updating brokers in the cluster that are temporarily offline.
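For illustration, a conventional two-broker cluster with a designated master broker might be configured with broker properties along the following lines. The host names and port numbers are hypothetical, and the property names follow the imq.cluster conventions described in the Administration Guide.

    # In each broker's instance configuration (config.properties):
    # list the brokers in the cluster and designate the master broker.
    imq.cluster.brokerlist=hostA:7676,hostB:7676
    imq.cluster.masterbroker=hostA:7676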
When using a master broker, Message Queue provides only service availability, not data availability, in the case of broker or connection failure. For example, if a clustered broker becomes unavailable, any persistent messages held by that broker remain unavailable until that broker recovers. To get data availability, you can use a SunCluster Message Queue agent or you can use high availability clustering, described next. (In the SunCluster case, a persistent store is kept on a shared file system. If a broker fails, the Message Queue agent on a second node starts a broker that takes over the shared store. Clients are reconnected to that broker, thereby getting both continuous service and access to persistent data.)
Using high availability clustering, you set broker properties to specify a highly available database that is shared by all brokers in the cluster. The shared store holds updated information about the state of each broker in the cluster. If one broker fails, another broker assumes ownership of the failed broker's persistent state (in the shared store) and provides uninterrupted service to the failed broker's clients.
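For illustration, broker properties for a high availability cluster might look like the following sketch. The cluster ID, broker IDs, and the JDBC settings for the shared, highly available database are hypothetical and vendor-specific.

    # In each broker's instance configuration (config.properties).
    # imq.brokerid must be unique on each broker; the JDBC vendor and its
    # connection properties depend on the highly available database used.
    imq.cluster.ha=true
    imq.cluster.clusterid=exampleCluster
    imq.brokerid=brokerA
    imq.persist.store=jdbc
    imq.persist.jdbc.dbVendor=hadb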
For additional information, see Chapter 4, Broker Clusters.