public class StockQuotes extends java.lang.Object
Be sure to read the
Example Overview first to put this
example into context.
The program can be used to start up multiple stock quote servers by supplying the following arguments:

    java je.rep.quote.StockQuotes -env <environment home> \
                                  -nodeName <nodeName> \
                                  -nodeHost <hostname:port> \
                                  -helperHost <hostname:port>

The argument names resemble the ReplicationConfig names to draw attention to the connection between the program argument names and the ReplicationConfig APIs.
    -env         a pre-existing directory for the replicated JE environment
    -nodeName    the name used to uniquely identify this node in the replication group
    -nodeHost    the unique hostname:port pair for this node
    -helperHost  the hostname:port pair for the helper node. It's the same as the
                 nodeHost only if this node is intended to become the initial Master
                 during the formation of the replication group.
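For orientation, the sketch below shows roughly how arguments of this kind map onto the ReplicationConfig and ReplicatedEnvironment APIs. It is a minimal sketch, not the example's actual code; the group name and the hard-coded values stand in for the command line arguments.

    import java.io.File;

    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.rep.ReplicatedEnvironment;
    import com.sleepycat.je.rep.ReplicationConfig;

    public class StartNodeSketch {

        public static void main(String[] args) {
            // Values that would normally arrive via -env, -nodeName,
            // -nodeHost and -helperHost (hard-coded for illustration).
            File envHome = new File("dir1");

            ReplicationConfig repConfig = new ReplicationConfig();
            repConfig.setGroupName("StockQuotesGroup");      // illustrative
            repConfig.setNodeName("n1");                     // -nodeName
            repConfig.setNodeHostPort("node.acme.com:5001"); // -nodeHost
            repConfig.setHelperHosts("node.acme.com:5001");  // -helperHost

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true); // replication requires transactions

            // With nodeHost == helperHost and an empty environment
            // directory, this call creates a brand new one-node group
            // with this node as the Master.
            ReplicatedEnvironment repEnv =
                new ReplicatedEnvironment(envHome, repConfig, envConfig);
            System.out.println("Joined group in state: " + repEnv.getState());
            repEnv.close();
        }
    }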
A typical demo session begins with a set of commands such as the following to start each node. The first node can be started as below:

    java je.rep.quote.StockQuotes -env dir1 -nodeName n1 \
                                  -nodeHost node.acme.com:5001 \
                                  -helperHost node.acme.com:5001

Note that the nodeHost and helperHost are the same, since it's the first node in the group. HA uses this fact to start a brand new replication group of size one, with this node as the master, if there is no existing environment in the environment directory.
Nodes can be added to the group by using a variation of the above. The second and third nodes can be started as follows:

    java je.rep.quote.StockQuotes -env dir2 -nodeName n2 \
                                  -nodeHost node.acme.com:5002 \
                                  -helperHost node.acme.com:5001

    java je.rep.quote.StockQuotes -env dir3 -nodeName n3 \
                                  -nodeHost node.acme.com:5003 \
                                  -helperHost node.acme.com:5002

Note that each node has its own unique node name and a distinct directory for its replicated environment. These and any subsequent nodes can use the first node as a helper to get themselves going; in fact, you can pick any node already in the group to serve as a helper, as the sketch after this paragraph illustrates. So, for example, when adding the third node, node 2 or node 1 could serve as the helper node. The helper nodes simply provide a mechanism to help a new node get itself admitted into the group. The helper node is not needed once a node becomes part of the group.
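In API terms, the only difference for a joining node is that its helper hosts name an existing group member rather than the node itself. A minimal sketch, reusing the illustrative group name from the earlier snippet:

    import com.sleepycat.je.rep.ReplicationConfig;

    public class JoinGroupSketch {

        // Node n2 joins the existing group by naming n1 as its helper.
        // Once admitted, n2 no longer depends on the helper.
        static ReplicationConfig secondNodeConfig() {
            ReplicationConfig repConfig = new ReplicationConfig();
            repConfig.setGroupName("StockQuotesGroup");     // illustrative
            repConfig.setNodeName("n2");
            repConfig.setNodeHostPort("node.acme.com:5002");
            repConfig.setHelperHosts("node.acme.com:5001"); // existing member
            return repConfig;
        }
    }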
When initially running the example, please use a group of at least three nodes. A two node group is a special case, and it is best to learn how to run larger groups first. For more information, see Two-Node Replication Groups. When initially creating the nodes, it is also important to start the master first.
But once the nodes have been created, the order in which the nodes are started up does not matter. Starting the master (the node whose helperHost and nodeHost are the same) first minimizes the initial overall group startup time, since the master initializes the replicated environment and is ready to start accepting and processing commands even as the other nodes concurrently join the group.
The above commands start up a group with three nodes all running locally on the same machine. You can start up nodes on different machines connected by a TCP/IP network by executing the above commands on the respective machines. It's important in this case that the clocks on these machines be reasonably synchronized; that is, they should be within a couple of seconds of each other. You can do this manually, but it's best to use a protocol like NTP for this purpose.
Upon subsequent restarts the nodes will automatically hold an election and select one of the nodes in the group to be the master. The choice of master is made visible by the master or replica prompt that the application displays at each node. Note that at least a simple majority of nodes must be started before the application will respond with a prompt, because it's only after a simple majority of nodes is available that an election can be held and a master elected. For a two-node group, both nodes must be started before an election can be held.
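One way an application can make such state transitions visible is a StateChangeListener registered on the replicated environment. The sketch below is illustrative rather than the example's actual prompt handling:

    import com.sleepycat.je.rep.ReplicatedEnvironment;
    import com.sleepycat.je.rep.StateChangeEvent;
    import com.sleepycat.je.rep.StateChangeListener;

    // Reports the node's new state whenever it changes, e.g. after an
    // election; an application could switch its prompt here.
    public class PromptStateListener implements StateChangeListener {

        public void stateChange(StateChangeEvent event) {
            ReplicatedEnvironment.State state = event.getState();
            if (state.isMaster()) {
                System.out.println("now the master");
            } else if (state.isReplica()) {
                System.out.println("now a replica");
            } else {
                // UNKNOWN or DETACHED, e.g. while an election is in progress
                System.out.println("state: " + state);
            }
        }
    }

Given an open ReplicatedEnvironment repEnv, the listener is registered with repEnv.setStateChangeListener(new PromptStateListener()).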
Commands are submitted directly at the command prompt in the console established by the application at each node. Update commands are only accepted at the console associated with the current master, identified by the master prompt as below:
    StockQuotes-2 (master)>

After issuing a few commands, you may want to experiment with shutting down or killing some number of the replicated environments and bringing them back up to see how the application behaves.
If you type stock updates at an application that is currently running as a replica node, the update is refused and you must manually re-enter the updates on the console associated with the master. This is of course quite cumbersome and serves as motivation for the subsequent examples.
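In code terms, a write attempted while the node is a replica fails with ReplicaWriteException. A minimal sketch of detecting the refusal (the database, key, and value parameters are just placeholders):

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Transaction;
    import com.sleepycat.je.rep.ReplicatedEnvironment;
    import com.sleepycat.je.rep.ReplicaWriteException;

    public class UpdateSketch {

        // Attempts an update and reports the refusal when this node is
        // currently a replica rather than the master.
        static void tryUpdate(ReplicatedEnvironment repEnv,
                              Database db,
                              DatabaseEntry key,
                              DatabaseEntry value) {
            Transaction txn = null;
            try {
                txn = repEnv.beginTransaction(null, null);
                db.put(txn, key, value);
                txn.commit();
            } catch (ReplicaWriteException e) {
                // Writes are only permitted on the master; the update
                // must be re-entered there (or routed, as in the later
                // examples).
                if (txn != null) {
                    txn.abort();
                }
                System.err.println("Update refused: node is a replica");
            }
        }
    }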
As shown below, there is no routing of requests between nodes in this example, which is why write requests fail when they are issued on a Replica node.
     -----------------------
    |      StockQuotes      |   Read and Write requests both succeed,
    |  Instance 1: Master   |   because this is the Master.
     -----------------------

     -----------------------
    |      StockQuotes      |   Read requests succeed,
    |  Instance 2: Replica  |   but Write requests fail on a Replica.
     -----------------------

     -----------------------
    |      StockQuotes      |   Read requests succeed,
    |  Instance 3: Replica  |   but Write requests fail on a Replica.
     -----------------------

    ...more Replica instances...
See RouterDrivenStockQuotes, along with HARouter, for an example that uses an external router built using the Monitor to route write requests externally to the master and provide primitive load balancing across the nodes in the replication group.
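For reference, here is a rough sketch of how such an external router might use the Monitor API to discover the current master; the group, node, and host names are illustrative:

    import com.sleepycat.je.rep.monitor.Monitor;
    import com.sleepycat.je.rep.monitor.MonitorConfig;

    public class RouterSketch {

        public static void main(String[] args) throws Exception {
            // The router joins the group as a monitor-only node.
            MonitorConfig config = new MonitorConfig();
            config.setGroupName("StockQuotesGroup");        // illustrative
            config.setNodeName("monitor1");
            config.setNodeHostPort("router.acme.com:6001"); // this process
            config.setHelperHosts("node.acme.com:5001");    // any member

            Monitor monitor = new Monitor(config);
            monitor.register();

            // Write requests would be forwarded to the master; read
            // requests can be spread across the group for load balancing.
            System.out.println("Current master: " +
                               monitor.getMasterNodeName());
            monitor.shutdown();
        }
    }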
Method Summary

    Modifier and Type    Method and Description
                         Implements the "quit" command.
Copyright (c) 2002, 2017 Oracle and/or its affiliates. All rights reserved.