Single Master with Multiple Replica Strategy

You can select a high-availability (HA) strategy that uses either a two-node cluster or a larger multi-node cluster.

A two-node cluster has a single master with one replica database, as shown below:

A three-node cluster has a single master with two replica databases for a higher level of HA:

The following characteristics apply to both two-node and multi-node clusters:

  • Each host runs a back-end server with an embedded, in-memory Berkeley XML database.
  • In the replicated group, there is only one master database and one or more replicas.
  • The master database is responsible for distributing transactional modifications to the replicas in the cluster.
  • All back-end server components interact with the database through a database proxy.
  • The database proxy determines whether a service request is a transactional modification or a data retrieval. All data retrievals are performed on the local database, regardless of whether it is a replica or the master database. Requests for transactional modifications (inserts, updates, or deletes) are forwarded by the database proxy to the master database in the cluster.
  • The master database commits transactions on a quorum basis. In a two-node cluster, one node must be up; in a three-node cluster, two nodes must be up; and so on. A majority of active members must reply that they have received the replicated datasets before the master returns success on the transaction.
  • User transactional latency is handled by detecting the late arrival of replicated data. Replication is best effort, which means a call can return before the dataset appears on the replicated databases. The database transactional layer provides additional support for this replication latency.

    For example, a user on Host 3 starts a local transaction with the database proxy to insert content into the database. The database proxy in turn starts a transaction with the master database on Host 1. Each transaction started with the master database has a transactional ID associated with it. The master database makes a best effort to ensure that the datasets are replicated to the other members of its replication group.

    However, if the best-effort time is exceeded but the master database has received replies from a quorum (the other replicas), the master database returns success. Returning success guarantees that replication will occur at some point. The database proxy on Host 3 waits until the required transactional ID appears in its local replicated database before returning success on the transaction to the user on Host 3. This guarantees that the content inserted on the master database has reached the replicated database. Users that initiate transactions are guaranteed to see the outcome of those transactions in their local database, regardless of the host on which the original transaction was initiated.
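The quorum rule described above (one node of two, two nodes of three, and so on) can be sketched in a few lines of Python. This is a minimal illustration of the arithmetic only; the function names are illustrative and do not correspond to the product's actual replication API:

```python
def quorum_size(cluster_size: int) -> int:
    """Nodes that must be up, per the rule above: 1 of 2, 2 of 3, 3 of 5, ..."""
    return (cluster_size + 1) // 2

def can_commit(members_acked: int, cluster_size: int) -> bool:
    # The master returns success on a transaction only after a quorum
    # of members has confirmed receipt of the replicated dataset.
    return members_acked >= quorum_size(cluster_size)
```

For example, in a three-node cluster `can_commit(2, 3)` is true, so the master can return success even if the third replica has not yet acknowledged the dataset.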
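The proxy behavior walked through in the example, forwarding writes to the master and then blocking until the returned transactional ID is visible in the local database, might look like the following sketch. `DatabaseProxy`, `apply`, and `last_applied_txn` are hypothetical names used for illustration, not the product's actual API:

```python
import time

class DatabaseProxy:
    """Hypothetical sketch of the database proxy described above."""

    def __init__(self, local_db, master):
        self.local_db = local_db   # the replica (or master) on this host
        self.master = master       # handle to the cluster's master database

    def get(self, key):
        # Reads always go to the local database, replica or master alike.
        return self.local_db.get(key)

    def put(self, key, value, timeout=5.0):
        # Writes are forwarded to the master, which returns a transactional
        # ID once a quorum of members has acknowledged the dataset.
        txn_id = self.master.apply(key, value)
        # Block until that transactional ID has been replayed locally, so
        # the caller is guaranteed to see its own write on this host.
        deadline = time.monotonic() + timeout
        while self.local_db.last_applied_txn() < txn_id:
            if time.monotonic() >= deadline:
                raise TimeoutError("replicated data did not arrive in time")
            time.sleep(0.01)
        return txn_id
```

The wait loop in `put` is what gives the read-your-own-writes guarantee described above: success is only returned to the user once the local replica has caught up to the transaction's ID.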