In pull mode, notifications are stored by the connector server in a buffer until they are pulled by the connector client. Any one of the pull operations, whether by-request or automatic, empties this buffer, and it fills up again as new notifications are sent.
By default, this buffer grows to contain all notifications. In the worst-case scenario of an overfull buffer, this can lead to an "out of memory" error in the agent application, saturation of the communication layer, or an overload of the manager's listeners when the notifications are finally pulled. The ClientNotificationHandler interface defines the static NO_CACHE_LIMIT field to represent an unlimited buffer size.
To change the size of the agent's cache, call the connector client's setCacheSize method. The size of the cache is expressed as the number of notifications that can be stored in its buffer. When a cache buffer of limited size is full, new notifications cause an overflow, so you should also choose an overflow mode when using a limited cache size. The two overflow modes are defined by static fields of the ClientNotificationHandler interface:
DISCARD_OLD - The oldest notifications will be lost and the buffer will always be renewed with the latest notifications which have been sent; this is the default value when a limit is first set for the cache size
DISCARD_NEW - Once the notification buffer is full, any new notifications will be lost until the buffer is emptied by forwarding the messages; the buffer will always contain the first notifications sent after the previous pull operation
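The difference between the two modes can be sketched with a small stand-alone simulation. This is an illustrative stand-in, not the JDMK implementation; the class and method names are invented, and the sequence numbers simply count the order in which notifications arrive in the buffer:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical stand-in illustrating the two overflow policies; not JDMK code.
public class OverflowDemo {

    // Returns the sequence numbers left in a cache of the given size after
    // the broadcaster has emitted notifications 1..sent without any pull.
    static List<Integer> buffer(int cacheSize, int sent, boolean discardOld) {
        Deque<Integer> cache = new ArrayDeque<>();
        for (int seq = 1; seq <= sent; seq++) {
            if (cache.size() < cacheSize) {
                cache.addLast(seq);              // room left: keep it
            } else if (discardOld) {
                cache.removeFirst();             // DISCARD_OLD: drop the oldest
                cache.addLast(seq);
            }                                    // DISCARD_NEW: drop the newcomer
        }
        return new ArrayList<>(cache);
    }

    public static void main(String[] args) {
        // Cache of 10, 30 notifications sent, as in the sample manager below.
        System.out.println("DISCARD_OLD keeps: " + buffer(10, 30, true));   // 21..30
        System.out.println("DISCARD_NEW keeps: " + buffer(10, 30, false));  // 1..10
    }
}
```

With a cache of 10 and 30 notifications, DISCARD_OLD retains the last 10 sent and DISCARD_NEW the first 10, matching the behavior the sample manager demonstrates next.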
We demonstrate each of these modes in our sample manager, by first setting the cache size and the overflow mode, then by triggering more notifications than the cache buffer can hold.
System.out.println(">>> Use pull mode with period set to zero, " +
                   "buffer size set to 10, and overflow mode set to DISCARD_OLD.");
connectorClient.setMode(ClientNotificationHandler.PULL_MODE);
connectorClient.setPeriod(0);
connectorClient.setCacheSize(10, true);
connectorClient.setOverflowMode(ClientNotificationHandler.DISCARD_OLD);

System.out.println(">>> Have our MBean broadcast 30 notifications...");
params[0] = new Integer(30);
signatures[0] = "java.lang.Integer";
connectorClient.invoke(mbean, "sendNotifications", params, signatures);
System.out.println(">>> Done.");

// Call getNotifications to pull all buffered notifications from the agent
System.out.println("\n>>> Press <Enter> to get notifications.");
System.in.read();
connectorClient.getNotifications();

// Wait for the handler to process the 10 notifications.
// These should be the 10 most recent notifications
// (the greatest sequence numbers).
Thread.sleep(100);

System.out.println("\n>>> Press <Enter> to continue.");
System.in.read();

// We should see that the 20 other notifications overflowed the agent buffer
System.out.println(">>> Get overflow count = " +
                   connectorClient.getOverflowCount());
The overflow count gives the total number of notifications that have been discarded because the buffer has overflowed. The count is cumulative from the manager's first listener registration until all of the manager's listeners have been unregistered. The manager application can reset this value by calling the setOverflowCount method.
In our example application, we repeat the actions above to make the buffer overflow again, this time using the DISCARD_NEW policy. Again, the buffer size is 10 and there are 30 notifications. In this mode, the notifications with the first 10 sequence numbers remain in the cache to be forwarded when the manager pulls them from the agent, and the 20 that follow will have overflowed.
When the buffer is full and notifications must be discarded, the time reference for applying the overflow mode is the order in which notifications arrived in the buffer. Neither the time stamps nor the sequence numbers of the notifications are considered, since neither of these is necessarily absolute; even the sequence of notifications from the same broadcaster can be non-deterministic. In any case, broadcasters are free to set both time stamps and sequence numbers as they see fit, or even to set them to null.
The second parameter of the setCacheSize method is a boolean that determines whether buffered notifications may be discarded when the cache size is reduced. If the currently buffered notifications do not fit into the new cache size and this parameter is true, excess notifications are discarded according to the current overflow mode, and the overflow count is updated accordingly.
In the same situation with the parameter set to false, the cache is not resized. Check the return value of the method when you set this parameter to false: if the cache cannot be resized because notifications would be discarded, pull the waiting notifications with the getNotifications method and then try resizing the cache again. When the existing notifications fit within the new cache size, or when the cache size is being increased, the second parameter of setCacheSize has no effect.
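The resize semantics can be sketched with another stand-alone simulation. Again, this is an invented stand-in rather than JDMK code; like the real setCacheSize, the sketch returns the cache size actually set, so a refused shrink returns the old size:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the setCacheSize(size, discardOverflow) contract;
// not the JDMK implementation. Uses DISCARD_OLD as the overflow mode.
public class ResizeDemo {
    final Deque<Integer> cache = new ArrayDeque<>();
    int capacity;
    int overflowCount = 0;

    ResizeDemo(int capacity) { this.capacity = capacity; }

    void add(int seq) {
        if (cache.size() < capacity) {
            cache.addLast(seq);
        } else {
            cache.removeFirst();                 // DISCARD_OLD on overflow
            cache.addLast(seq);
            overflowCount++;
        }
    }

    // Returns the cache size actually set. With discardOverflow == false,
    // the shrink is refused if it would force notifications to be discarded.
    int setCacheSize(int newSize, boolean discardOverflow) {
        if (cache.size() > newSize && !discardOverflow) {
            return capacity;                     // refused: pull first, then retry
        }
        while (cache.size() > newSize) {
            cache.removeFirst();                 // discard per the overflow mode
            overflowCount++;                     // and count the loss
        }
        capacity = newSize;
        return capacity;
    }

    public static void main(String[] args) {
        ResizeDemo d = new ResizeDemo(10);
        for (int i = 1; i <= 8; i++) d.add(i);   // 8 notifications buffered
        System.out.println("Shrink to 5 refused, size stays " + d.setCacheSize(5, false));
        System.out.println("Shrink to 5 with discard, size now " + d.setCacheSize(5, true)
                           + ", overflow count " + d.overflowCount);
    }
}
```

The refused-shrink branch is where a real manager would call getNotifications to empty the buffer before retrying.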
Because several managers may connect through the same protocol, the connector server object must handle the notifications for each separately. This implies that each connected manager has its own notification buffer and its own settings for controlling this cache. The overflow count is specific to each manager as well.
Here we have demonstrated each setting independently by controlling the notification broadcaster. In practice, periodic pulling, agent-side buffering, and buffer overflow can all happen at once, and you can call getNotifications at any time to do an on-demand pull of any notifications in the agent-side buffer. You should adjust the settings to fit the known or predicted behavior of your management solution, depending upon communication constraints and your acceptable notification loss.
The caching policy is completely determined by the manager application. If notification loss is unacceptable, it is the manager's responsibility to configure the mechanism so that notifications are pulled as often as necessary. The mechanism can also be updated dynamically: for example, by checking the overflow count with every pull operation, the manager can know the number of lost notifications, allowing it to estimate the notification emission rate. Using this rate, the manager can dynamically update any of the controls (buffer size, pull period, and overflow mode) to keep up with the notification rate.
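One such dynamic adjustment can be sketched as simple arithmetic. The heuristic below (the class name, method, and 25% headroom factor are all invented for illustration, not part of the JDMK API) sizes the cache to hold everything emitted between two pulls:

```java
// Hypothetical sizing heuristic for a manager that checks the overflow
// count at every pull; not part of the JDMK API.
public class RateDemo {

    // Given the notifications pulled and the overflow count accumulated
    // since the previous pull, estimate how many were emitted in that
    // interval and suggest a cache size that would have held them all,
    // with roughly 25% headroom. Never suggests shrinking the cache.
    static int suggestedCacheSize(int pulled, int overflowSinceLastPull,
                                  int currentSize) {
        int emitted = pulled + overflowSinceLastPull; // total sent between pulls
        return Math.max(currentSize, emitted + emitted / 4);
    }

    public static void main(String[] args) {
        // 10 pulled, 20 lost, cache of 10: the whole burst was 30 notifications,
        // so a cache of about 37 would have avoided the loss.
        System.out.println(suggestedCacheSize(10, 20, 10));
        // No overflow: keep the current size.
        System.out.println(suggestedCacheSize(5, 0, 10));
    }
}
```

A manager using this would reset the overflow count with setOverflowCount after each pull so the next reading covers only the latest interval; shortening the pull period is the complementary adjustment when growing the cache is undesirable.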