In pull mode, notifications are not immediately sent to their remote listeners. Rather, they are stored in the connector server's internal buffer until the connector client requests that they be forwarded. Instead of being sent individually, the notifications are grouped to reduce the load on the communication layer. Pull mode has the following settings that let the manager define the notification forwarding policy:
A period for automatic pulling
The size of the agent-side notification buffer (also called the cache)
A policy for discarding notifications when this buffer is full
For a given connection, there is one cache for all listeners, not one cache per listener. This cache therefore has one buffering policy whose settings are controlled through the methods exposed by the connector client. The cache buffer contains an unpredictable mix of notifications in transit to all manager-side listeners added through a given connector client or through one of its bound proxy MBeans. The buffer operations such as pulling or overflowing apply to this mix of notifications, not to any single listener's notifications.
Pull mode forwarding is necessarily a compromise between receiving notifications in a timely manner, not saturating the communication layer, and not overflowing the buffer. Notifications are stored temporarily in the agent-side buffer, but the manager-side listeners still need to receive them. Pull mode includes automatic pulling that retrieves all buffered notifications regularly.
The frequency of the pull forwarding is controlled by the pull period expressed in milliseconds. By default, when pull mode is enabled, the manager will automatically begin pulling notifications once per second. Whether or not there are any notifications to receive depends upon events in the agent.
Our manager application sets a half-second pull period and then triggers the notification broadcaster.
System.out.println(">>> Set notification forward mode to PULL.");
connectorClient.setMode(ClientNotificationHandler.PULL_MODE);

// Retrieve buffered notifications from the agent twice per second
System.out.println(">>> Set the forward period to 500 milliseconds.");
connectorClient.setPeriod(500);

System.out.println(">>> Have our MBean broadcast 20 notifications...");
params[0] = new Integer(20);
signatures[0] = "java.lang.Integer";
connectorClient.invoke(mbean, "sendNotifications", params, signatures);
System.out.println(">>> Done.");

// Wait for the handler to process all notifications
System.out.println(">>> Receiving notifications...\n");
Thread.sleep(2000);
When notifications are pulled, all notifications in the agent-side buffer are forwarded to the manager and the registered listeners. It is not possible to set a limit on the number of notifications that are forwarded except by limiting the size of the buffer (see 22.3.3 Agent-Side Buffering). Even in a controlled example such as ours, the number of notifications in the agent-side buffer at each pull period is completely dependent upon the agent's execution paths, and therefore unpredictable from the manager-side.
You can disable automatic pulling by setting the pull period to zero. In this case, the connector client will not pull any notifications from the agent until instructed to do so. Use the getNotifications method of the connector client to pull all notifications when desired. This method will immediately forward all notifications in the agent-side buffer. Again, it is not possible to limit the number of notifications that are forwarded except by limiting the buffer size.
In this example, we disable the automatic pulling and then trigger the notification broadcaster. The notifications will not be received until we request that the connector server pull them. Then, all of the notifications will be received at once.
System.out.println(">>> Use pull mode with period set to zero.");
connectorClient.setMode(ClientNotificationHandler.PULL_MODE);
connectorClient.setPeriod(0);

System.out.println(">>> Have our MBean broadcast 30 notifications...");
params[0] = new Integer(30);
signatures[0] = "java.lang.Integer";
connectorClient.invoke(mbean, "sendNotifications", params, signatures);
System.out.println(">>> Done.");

// Call getNotifications to pull all buffered notifications from the agent
System.out.println("\n>>> Press Enter to pull the notifications.");
System.in.read();
connectorClient.getNotifications();

// Wait for the handler to process all notifications
Thread.sleep(100);
In the rest of our example, we use the on-demand forwarding mechanism to control how many notifications are buffered on the agent-side and thereby test the different caching policies.
In pull mode, notifications are stored by the connector server in a buffer until they are pulled by the connector client. Any one of the pull operations, whether on-demand or periodic, empties this buffer, and it fills up again as new notifications are triggered.
By default, this buffer will grow to contain all notifications. The ClientNotificationHandler interface defines the static NO_CACHE_LIMIT field to represent an unlimited buffer size. If notifications are allowed to accumulate indefinitely in the cache, this can lead to an “out of memory” error in the agent application, saturation of the communication layer, or an overload of the manager's listeners when the notifications are finally pulled.
To change the size of the agent's cache, call the connector client's setCacheSize method. The size of the cache is expressed as the number of notifications that can be stored in its buffer. When a cache buffer of limited size is full, new notifications will overflow and be lost. Therefore, you should also choose an overflow mode when using a limited cache size. The two overflow modes are defined by static fields of the ClientNotificationHandler interface:
DISCARD_OLD: The oldest notifications will be lost and the buffer will always be renewed with the latest notifications that have been triggered. This is the default value when a limit is first set for the cache size.
DISCARD_NEW: Once the notification buffer is full, any new notifications will be lost until the buffer is emptied by forwarding the messages. The buffer will always contain the first notifications triggered after the previous pull operation.
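To make the two policies concrete, the following stand-alone sketch mimics the discard semantics with a simple bounded buffer. This is illustrative code of our own, not the JDMK implementation: it pushes a sequence of notification numbers into a cache of a given size and applies one of the two overflow rules.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OverflowDemo {
    // Illustrative only: mimics the agent-side cache discard policies.
    static Deque<Integer> fill(int cacheSize, boolean discardOld, int count) {
        Deque<Integer> cache = new ArrayDeque<>();
        for (int n = 1; n <= count; n++) {
            if (cache.size() < cacheSize) {
                cache.addLast(n);        // room left: buffer the notification
            } else if (discardOld) {
                cache.removeFirst();     // DISCARD_OLD: drop the oldest...
                cache.addLast(n);        // ...and keep the newest
            }                            // DISCARD_NEW: drop n itself
        }
        return cache;
    }

    public static void main(String[] args) {
        // 5 notifications into a cache of 3
        System.out.println(fill(3, true, 5));   // DISCARD_OLD -> [3, 4, 5]
        System.out.println(fill(3, false, 5));  // DISCARD_NEW -> [1, 2, 3]
    }
}
```

With 5 notifications and a cache of 3, DISCARD_OLD retains the 3 most recent and DISCARD_NEW retains the 3 earliest, matching the behavior described above.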
We demonstrate each of these modes in our sample manager, first by setting the cache size and the overflow mode, then by triggering more notifications than the cache buffer can hold.
System.out.println(">>> Use pull mode with period set to zero, " +
    "buffer size set to 10, and overflow mode set to DISCARD_OLD.");
connectorClient.setMode(ClientNotificationHandler.PULL_MODE);
connectorClient.setPeriod(0);
connectorClient.setCacheSize(10, true); // see "Buffering Specifics"
connectorClient.setOverflowMode(ClientNotificationHandler.DISCARD_OLD);

System.out.println(">>> Have our MBean broadcast 30 notifications...");
params[0] = new Integer(30);
signatures[0] = "java.lang.Integer";
connectorClient.invoke(mbean, "sendNotifications", params, signatures);
System.out.println(">>> Done.");

// Call getNotifications to pull all buffered notifications from the agent
System.out.println("\n>>> Press Enter to get notifications.");
System.in.read();
connectorClient.getNotifications();

// Wait for the handler to process the 10 notifications
// These should be the 10 most recent notifications
// (the greatest sequence numbers)
Thread.sleep(100);

System.out.println("\n>>> Press Enter to continue.");
System.in.read();

// We should see that the 20 other notifications overflowed the agent buffer
System.out.println(">>> Get overflow count = " +
    connectorClient.getOverflowCount());
The overflow count gives the total number of notifications that have been discarded because the buffer has overflowed. The number is cumulative from the first manager-side listener registration until all of the manager's listeners have been unregistered. The manager application can modify or reset this value by calling the setOverflowCount method.
In our example application, we repeat the actions above, to cause the buffer to overflow again, but this time using the DISCARD_NEW policy. Again, the buffer size is ten and there are thirty notifications. In this mode, the first ten sequence numbers remain in the cache to be forwarded when the manager pulls them from the agent, and twenty more will overflow.
When the buffer is full and notifications need to be discarded, the time reference for applying the overflow mode is the order in which notifications arrived in the buffer. Neither the time stamps nor the sequence numbers of the notifications are considered, because neither is necessarily absolute; even the sequence of notifications from the same broadcaster can be non-deterministic. In any case, broadcasters are free to set both time stamps and sequence numbers as they see fit, or even to leave them null.
The second parameter of the setCacheSize method is a boolean that determines whether or not the potential overflow of the cache is discarded when reducing the cache size. If the currently buffered notifications do not fit into the new cache size and this parameter is true, excess notifications are discarded according to the current overflow mode. The overflow count is also updated accordingly.
In the same situation with the parameter set to false, the cache is not resized. Check the return value of the method when you set this parameter to false: if the cache cannot be resized because doing so would discard notifications, you need to empty the cache and then set the cache size again. To empty the cache, either pull the buffered notifications with the getNotifications method or discard them all by calling the connector client's clearCache method.
When the existing notifications fit within the new cache size or when increasing the cache size, the second parameter of setCacheSize has no effect.
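The resize rule can be modeled with another stand-alone sketch. Again, this is our own illustration of the semantics just described, not JDMK code: shrinking the cache only takes effect if it would not discard notifications, unless the caller explicitly allows discarding, in which case the excess is dropped and the overflow count is updated.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ResizeDemo {
    // Illustrative model of the resize rule (not JDMK code).
    static int cacheSize = Integer.MAX_VALUE;   // analogue of NO_CACHE_LIMIT
    static Deque<Integer> cache = new ArrayDeque<>();
    static int overflowCount = 0;

    // Returns the cache size actually in effect after the call.
    static int setCacheSize(int size, boolean discardExcess) {
        if (cache.size() > size) {
            if (!discardExcess) {
                return cacheSize;               // refused: size unchanged
            }
            while (cache.size() > size) {       // drop oldest first, as in
                cache.removeFirst();            // the DISCARD_OLD policy
                overflowCount++;
            }
        }
        cacheSize = size;
        return cacheSize;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++) cache.addLast(n);  // 5 buffered
        System.out.println(setCacheSize(3, false));  // refused: 2147483647
        cache.clear();                               // like clearCache()
        System.out.println(setCacheSize(3, false));  // now succeeds: 3
    }
}
```

The main method shows the pattern recommended above: when the call is refused, empty the cache and try again.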
Because several managers can connect through the same connector server object, it must handle the notifications for each separately. This implies that each connected manager has its own notification buffer and its own settings for controlling this cache. The overflow count is specific to each manager as well.
Here we have demonstrated each setting of the forwarding mechanism independently by controlling the notification broadcaster. In practice, periodic pulling, agent-side buffering and buffer overflow can all be happening at once. And you can call getNotifications at any time to do an on-demand pull of the notifications in the agent-side buffer. You should adjust the settings to fit the known or predicted behavior of your management architecture, depending upon communication constraints and your acceptable notification loss rate.
The caching policy is completely determined by the manager application. If notification loss is unacceptable, it is the manager's responsibility to configure the mechanism so that notifications are pulled as often as necessary. All of these settings can also be updated dynamically. For example, the manager can compute the notification emission rate and update any of the settings (buffer size, pull period, and overflow mode) to minimize the risk of losing notifications.
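As a sketch of that kind of dynamic adjustment, the helper below chooses a pull period short enough that, at an observed emission rate, the cache cannot fill between two pulls. The method name and the safety factor of 2 are our own assumptions, not part of the JDMK API; the result would be fed to the connector client's setPeriod method.

```java
public class PullPeriodDemo {
    // Hypothetical helper: pick a pull period (in milliseconds) so that,
    // at the observed emission rate, the cache cannot fill between pulls.
    // The factor of 2 leaves headroom for bursts.
    static int safePullPeriod(double notifsPerSecond, int cacheSize) {
        if (notifsPerSecond <= 0) return 0;     // 0 disables automatic pulling
        double secondsToFill = cacheSize / notifsPerSecond;
        return (int) (1000 * secondsToFill / 2);
    }

    public static void main(String[] args) {
        // 10 notifications/s into a cache of 10: pull at least every 500 ms
        System.out.println(safePullPeriod(10.0, 10));  // prints 500
        // then, e.g.: connectorClient.setPeriod(safePullPeriod(rate, size));
    }
}
```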