5 Shard-Level Replication with Oracle Data Guard

Oracle Globally Distributed Database is tightly integrated with Oracle Data Guard, which provides shard-level replication.

The availability of a sharded database is not affected by an outage or slowdown of one or more shards. Oracle Active Data Guard replication is used to provide individual shard-level high availability. Replication is automatically configured and deployed when the sharded database is created.

Using Oracle Data Guard with a Sharded Database

Oracle Data Guard replication maintains one or more synchronized copies (standbys) of a shard (the primary) for high availability and data protection. Standbys may be deployed locally or remotely, and when using Oracle Active Data Guard can also be open for read-only access.
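As a quick check of this arrangement, a standby shard's role and open mode can be queried directly. This is a generic Data Guard check, not specific to sharding; connection details depend on your deployment. On an Oracle Active Data Guard standby, OPEN_MODE reports READ ONLY WITH APPLY:

```
-- Run while connected to a standby shard.
-- DATABASE_ROLE distinguishes the primary from its standbys;
-- READ ONLY WITH APPLY indicates an Active Data Guard standby.
SELECT database_role, open_mode, protection_mode
  FROM v$database;
```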

Oracle Data Guard can be used as the replication technology for sharded databases using the system-managed, user-defined, or composite method of sharding.

Using Oracle Data Guard with a System-Managed Sharded Database

In system-managed and composite sharding, the logical unit of replication is a group of shards called a shardgroup. In system-managed sharding, a shardgroup contains all of the data stored in the sharded database. The data is sharded by consistent hash across shards that make up the shardgroup. Shards that belong to a shardgroup are usually located in the same data center. An entire shardgroup can be fully replicated to one or more shardgroups in the same or different data centers.

The following figure illustrates how Data Guard replication is used with system-managed sharding. In this example there is one primary shardgroup, Shardgroup 1, and two standby shardgroups, Shardgroup 2 and Shardgroup 3. Shardgroup 1 consists of Data Guard primary databases (shards 1-3). Shardgroup 2 consists of local standby databases (shards 4-6), which are located in the same data center and configured for synchronous replication. Shardgroup 3 consists of remote standbys (shards 7-9), which are located in a different data center and configured for asynchronous replication. Oracle Active Data Guard is enabled in this configuration, so each standby is open read-only.

Figure 5-1 System-Managed Sharding with Data Guard Replication


The shardgroup, as the logical unit of replication, hides the implementation details of replication from the user. With Data Guard, replication is done at the shard (database) level. The sharded database in the figure above consists of three sets of replicated shards: {1, 4, 7}, {2, 5, 8} and {3, 6, 9}. Each set of replicated shards is managed as a Data Guard Broker configuration with fast-start failover (FSFO) enabled.
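Each such broker configuration can be examined with DGMGRL, the Data Guard Broker command-line interface, by connecting to any member of a replicated set such as {1, 4, 7}:

```
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;
```

SHOW CONFIGURATION lists the primary and its standbys with their state, and SHOW FAST_START FAILOVER reports the FSFO target and observer status.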

To deploy replication, specify the properties of the shardgroups (region, role, and so on) and add shards to them. Oracle Globally Distributed Database automatically configures Data Guard and starts an FSFO observer for each set of replicated shards. It also provides load balancing of the read-only workload, role-based global services, and routing based on replication lag and locality.

For high availability, Data Guard standby shards can be placed in the same region where the primary shards are placed. For disaster recovery, the standby shards can be located in another region.

Run the following GDSCTL commands to deploy the example configuration shown in the figure above.

CREATE SHARDCATALOG -database host00:1521:shardcat -region dc1,dc2

ADD GSM -gsm gsm1 -listener 1571 -catalog host00:1521:shardcat -region dc1
ADD GSM -gsm gsm2 -listener 1571 -catalog host00:1521:shardcat -region dc2
START GSM -gsm gsm1
START GSM -gsm gsm2

ADD SHARDGROUP -shardgroup shardgroup1 -region dc1 -deploy_as primary 
ADD SHARDGROUP -shardgroup shardgroup2 -region dc1 -deploy_as active_standby 
ADD SHARDGROUP -shardgroup shardgroup3 -region dc2 -deploy_as active_standby 

ADD CDB -connect cdb1
ADD CDB -connect cdb2
...
ADD CDB -connect cdb9

ADD SHARD -connect shard1 -CDB cdb1 -shardgroup shardgroup1
ADD SHARD -connect shard2 -CDB cdb2 -shardgroup shardgroup2
...
ADD SHARD -connect shard9 -CDB cdb9 -shardgroup shardgroup3

DEPLOY
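After DEPLOY completes, the resulting configuration can be verified from GDSCTL. CONFIG SHARD lists each shard with its shardgroup, region, and Data Guard status (the exact output shape varies by release):

```
GDSCTL> CONFIG SHARD
GDSCTL> CONFIG SHARD -shard shard1
```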

Using Oracle Data Guard with a User-Defined Sharded Database

With user-defined sharding, the logical (and physical) unit of replication is a shard; shards are not combined into shardgroups. Each shard and its replicas make up a shardspace, which corresponds to a single Data Guard Broker configuration. Replication can be configured individually for each shardspace: shardspaces can have different numbers of standbys, which can be located in different data centers. An example of user-defined sharding with Data Guard replication is shown in the following figure.

Figure 5-2 User-Defined Sharding with Data Guard Replication


Run the following GDSCTL commands to deploy the example configuration shown in the figure above.

CREATE SHARDCATALOG -sharding user -database host00:1521:cat -region dc1,dc2,dc3

ADD GSM -gsm gsm1 -listener 1571 -catalog host00:1521:cat -region dc1
ADD GSM -gsm gsm2 -listener 1571 -catalog host00:1521:cat -region dc2
ADD GSM -gsm gsm3 -listener 1571 -catalog host00:1521:cat -region dc3
START GSM -gsm gsm1
START GSM -gsm gsm2
START GSM -gsm gsm3

ADD SHARDSPACE -shardspace shardspace_a 
ADD SHARDSPACE -shardspace shardspace_b
ADD SHARDSPACE -shardspace shardspace_c

ADD CDB -connect cdb1
ADD CDB -connect cdb2
...
ADD CDB -connect cdb10

ADD SHARD -connect shard1 -CDB cdb1 -shardspace shardspace_a
ADD SHARD -connect shard2 -CDB cdb2 -shardspace shardspace_b
...
ADD SHARD -connect shard10 -CDB cdb10 -shardspace shardspace_c

DEPLOY
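Because each shardspace corresponds to its own Data Guard Broker configuration, replication attributes can be tuned per shardspace after deployment. As a sketch, assuming shard1 is the primary of shardspace_a (credentials are placeholders), its configuration's protection mode could be raised independently of the other shardspaces:

```
DGMGRL> CONNECT sys/password@shard1
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
```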

Using Oracle Data Guard with a Composite Sharded Database

In composite sharding, similar to user-defined sharding, a sharded database consists of multiple shardspaces. However, each shardspace, instead of replicated shards, contains replicated shardgroups.

Figure 5-3 Composite Sharding with Data Guard Replication


Run the following GDSCTL commands to deploy the example configuration shown in the figure above.

CREATE SHARDCATALOG -sharding composite -database host00:1521:cat -region dc1,dc2,dc3

ADD GSM -gsm gsm1 -listener 1571 -catalog host00:1521:cat -region dc1
ADD GSM -gsm gsm2 -listener 1571 -catalog host00:1521:cat -region dc2
ADD GSM -gsm gsm3 -listener 1571 -catalog host00:1521:cat -region dc3
START GSM -gsm gsm1
START GSM -gsm gsm2
START GSM -gsm gsm3

ADD SHARDSPACE -shardspace shardspace_a 
ADD SHARDSPACE -shardspace shardspace_b

ADD SHARDGROUP -shardgroup shardgroup_a1 -shardspace shardspace_a -region dc1 -deploy_as primary
ADD SHARDGROUP -shardgroup shardgroup_a2 -shardspace shardspace_a -region dc1 -deploy_as active_standby
ADD SHARDGROUP -shardgroup shardgroup_a3 -shardspace shardspace_a -region dc3 -deploy_as active_standby
ADD SHARDGROUP -shardgroup shardgroup_b1 -shardspace shardspace_b -region dc1 -deploy_as primary
ADD SHARDGROUP -shardgroup shardgroup_b2 -shardspace shardspace_b -region dc1 -deploy_as active_standby
ADD SHARDGROUP -shardgroup shardgroup_b3 -shardspace shardspace_b -region dc2 -deploy_as active_standby

ADD CDB -connect cdb1
ADD CDB -connect cdb2
...

ADD SHARD -connect shard1 -cdb cdb1 -shardgroup shardgroup_a1
ADD SHARD -connect shard2 -cdb cdb2 -shardgroup shardgroup_a2
...

DEPLOY

Considerations and Limitations

Role switchback is user dependent: Unlike with Oracle RAC, if a single-instance primary fails over to its standby, you must intervene to reinstate the former primary by starting its database in mount state. The broker then automatically completes the reinstatement.
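For example, after a fast-start failover, the failed former primary only needs to be mounted; with FSFO enabled, the observer and broker then reinstate it as a standby:

```
$ sqlplus / as sysdba
SQL> STARTUP MOUNT
-- With FSFO enabled, the observer detects the mounted database
-- and the broker automatically reinstates it as a standby.
```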

Per-PDB feature is not supported: The database feature Per-PDB Data Guard integration is not supported in a sharded database architecture.