You will want to test your data service implementation thoroughly before putting it into a production environment. This section provides suggestions about how to test your implementation in the HA environment. The test cases are suggestions and are not exhaustive. For testing, you need to have access to a test-bed Sun Cluster configuration, so that your work will not impact production machines.
Test that your HA data service behaves properly in all cases where a logical host is moved between physical hosts. These include system crashes and the use of the haswitch(1M) and scadm(1M) stopnode commands. Test that client machines continue to get service after these events.
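For example, a switchover test can be scripted. The following is a minimal sketch, assuming a logical host named hahost1, a destination physical host named phys-host2, and a client-side probe command; these names and the probe are illustrative, not part of the Sun Cluster interfaces.

  #!/bin/sh
  # Move the logical host to the other physical host (haswitch takes the
  # destination physical host followed by one or more logical hosts).
  haswitch phys-host2 hahost1 || exit 1

  # Allow the start and start_net methods time to bring the service up.
  sleep 60

  # Check reachability through the logical host name, as a client would.
  # /opt/mytest/bin/client_probe stands in for a real client of your service.
  if ping hahost1 5 && /opt/mytest/bin/client_probe hahost1; then
      echo "service reachable on hahost1 after switchover"
  else
      echo "service NOT reachable on hahost1 after switchover" >&2
      exit 1
  fi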
Test the idempotency of the methods. An important way to do this is to configure logical hosts with manual mode ON and repeatedly abort and rejoin one physical host, without ever doing an haswitch(1M) of a logical host to it. Let the rejoining host complete cluster reconfiguration before aborting it again. Note that when a host rejoins the cluster, cluster reconfiguration runs, but no logical host moves between physical hosts during that reconfiguration.
Another way to test idempotency is to replace each method temporarily with a short shell script that calls the original method twice.
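For instance, a temporary wrapper for a start method might look like the following sketch; the path /opt/SUNWmyds/bin/start_svc.real (the original method, renamed) is an example only.

  #!/bin/sh
  # Temporary idempotency wrapper registered in place of the real method.
  REAL=/opt/SUNWmyds/bin/start_svc.real

  # First invocation, with the arguments Sun Cluster passed in.
  $REAL "$@"
  first=$?

  # Second invocation with the same arguments; if the method is idempotent,
  # this must also succeed and leave the service in the same healthy state.
  $REAL "$@"
  second=$?

  echo "idempotency wrapper: first=$first second=$second" >> /var/tmp/idemp.log
  exit $second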
To test that your data service properly implements the abort and abort_net methods, make one physical host look very sick to Sun Cluster, without crashing the host outright, so that Sun Cluster takes it out of the cluster along the aborting "last wishes" path. First, do an haswitch(1M) of all logical hosts to that physical host. Then make that host appear sick by unplugging all of its public network connections. The Sun Cluster network fault monitoring will notice the problem and take the physical host out of the cluster, using the aborting "last wishes" path.
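To confirm afterward that the abort and abort_net methods actually ran on the sick host, rather than the ordinary stop methods, it can help to have each abort method leave a timestamped marker while you are testing. The log file below is only an example location.

  #!/bin/sh
  # Example instrumentation at the top of an abort or abort_net method;
  # the rest of the method's cleanup work follows this line.
  echo "abort method ran on `hostname` at `date` with args: $*" \
      >> /var/tmp/abort_trace.log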