4 Sharded Database Deployment
Sharded database deployment includes the prerequisites and instructions for installing the required software components; creating the catalog, roles, and the sharded database; configuring replication for high availability; and creating the schema for the sharded database.
The following topics contain the concepts and tasks you need to deploy a sharded database:
Introduction to Sharded Database Deployment
Oracle Sharding provides the capability to automatically deploy the sharded database, which includes both the shards and the replicas.
The sharded database administrator defines the topology (regions, shard hosts, replication technology) and invokes the DEPLOY command with a declarative specification using the GDSCTL command-line interface.
Before You Begin
Note that there are many different configurations and topologies that can be used for a sharded database. Your particular sharded database may employ a variety of Oracle software components such as Oracle Data Guard, Oracle GoldenGate, and Oracle Real Application Clusters (Oracle RAC) along with different sharding methodologies including system-managed, composite, and user-defined sharding.
Depending on your application's particular architecture and system requirements, you may have several options when designing your system. See Sharding Methods and Shard-Level High Availability for information about the various sharding methodologies and the disaster recovery and high availability options.
Choosing a Shard Creation Method
When deploying a sharded configuration, there are two different GDSCTL commands, ADD SHARD and CREATE SHARD, that can be used to add a shard.
Before you start to configure the sharding topology, decide which shard creation method to use because this decision affects some of the configuration steps.
The differences between the ADD SHARD and CREATE SHARD methods are explained where necessary in the configuration instructions.
ADD SHARD Method
The GDSCTL ADD SHARD command can be used to add a shard to an Oracle Sharding configuration. When using this command, you are responsible for creating the Oracle databases that will become shards during deployment. You can use whatever method you want to create the databases as long as the databases meet the prerequisites for inclusion in an Oracle Sharding configuration.
Some of the benefits of using the ADD SHARD method include:
- You have complete control over the process used to create the databases.
- It is straightforward to customize database parameters, naming, and storage locations.
- Both PDB and non-CDB shards are supported.
- There is less Oracle software to configure on the shard hosts.
- There is much less complexity in the deployment process because the shard databases are created before you run any GDSCTL commands.
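As a rough sketch, the ADD SHARD flow simply registers pre-created databases with the catalog. The host, database, and shard group names below are hypothetical, and the exact parameters depend on your topology (this example assumes PDB shards in a CDB):

```
GDSCTL> add cdb -connect shard1host:1521/cdb1 -pwd gsmrootuser_password
GDSCTL> add shard -connect shard1host:1521/pdb1 -cdb cdb1
        -shardgroup shardgroup1 -pwd gsmuser_password
```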
CREATE SHARD Method
The GDSCTL CREATE SHARD command can be used to create a shard in an Oracle Sharding configuration. With CREATE SHARD, the shard catalog leverages the Oracle Remote Scheduler Agent to run the Database Configuration Assistant (DBCA) remotely on each shard host to create a database for you. This method does not support PDBs, so any shard databases added must be non-CDBs.
Some of the benefits of using the CREATE SHARD method include:
- It is easier for non-database administrators to create shard databases.
- It provides a standard way to provision a new database when no standard is currently in practice.
- Any database created with CREATE SHARD is automatically configured correctly for Oracle Sharding without the need to run SQL statements against the database or otherwise adjust database parameters.
- You can create standby databases automatically.
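By contrast, a CREATE SHARD flow might be sketched as follows, where the Remote Scheduler Agent on the destination host runs DBCA to build the database. The host, credential, and shard group names are hypothetical:

```
GDSCTL> add credential -credential os_credential
        -osaccount oracle -ospassword os_password
GDSCTL> add invitednode shard1host
GDSCTL> create shard -shardgroup shardgroup1 -destination shard1host
        -credential os_credential
```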
Sharded Database Deployment Roadmap
Follow this roadmap to set up hosts, install the required software, and configure and deploy a sharded database.
At a high level, the deployment steps are:
- Set up the components.
- Provision and configure the hosts that will be needed for the sharding configuration and topology selected (see Provision and Configure Hosts and Operating Systems).
- Install Oracle Database software on the selected catalog and shard nodes (see Install the Oracle Database Software).
- Install global service manager (GSM) software on the shard director nodes (see Install the Shard Director Software).
- Create databases needed to store the sharding metadata and the application data.
- Create a database that will become the shard catalog along with any desired replicas for disaster recovery (DR) and high availability (HA) (see Create the Shard Catalog Database).
- If you are using the ADD SHARD method to deploy shards, create databases that will become the shards in the configuration, including any standby databases needed for DR and HA (see Create the Shard Databases).
- Specify the sharding topology using some or all of the following commands from the GDSCTL command-line utility, among others (see Configure the Sharded Database Topology).
CREATE SHARDCATALOG
ADD GSM
START GSM
ADD SHARDSPACE
ADD SHARDGROUP
ADD CDB
ADD SHARD
ADD CREDENTIAL
ADD FILE
CREATE SHARD
ADD INVITEDNODE
- Run DEPLOY to deploy the sharding topology configuration (see Deploy the Sharding Configuration).
- Add the global services needed to access any shard in the sharded database (see Create and Start Global Database Services).
- Verify the status of each shard (see Verify Shard Status).
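As an illustration only, the topology steps above might come together in a GDSCTL session like the following sketch for a system-managed configuration with Data Guard replication. All names, ports, and connect strings are hypothetical, and the exact commands depend on the sharding method, replication technology, and shard creation method you choose:

```
GDSCTL> create shardcatalog -database cathost:1521/catpdb
        -user mysdbadmin/mysdbadmin_password -sharding system -repl DG
GDSCTL> add gsm -gsm sharddirector1 -listener 1522
        -catalog cathost:1521/catpdb -pwd gsmcatuser_password
GDSCTL> start gsm -gsm sharddirector1
GDSCTL> add shardgroup -shardgroup shardgroup1 -deploy_as primary -region region1
GDSCTL> add cdb -connect shard1host:1521/cdb1 -pwd gsmrootuser_password
GDSCTL> add shard -connect shard1host:1521/pdb1 -cdb cdb1
        -shardgroup shardgroup1 -pwd gsmuser_password
GDSCTL> deploy
GDSCTL> add service -service oltp_rw_srvc -role primary
GDSCTL> start service -service oltp_rw_srvc
```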
When the sharded database configuration deployment is complete and successful, you can create the sharded schema objects needed for your application. See Sharded Database Schema Objects.
The topics that follow describe each of the deployment tasks in more detail along with specific requirements for various components in the system. These topics can act as a reference for the setup and configuration of each particular step in the process. However, by themselves, they will not produce a fully functional sharding configuration because they do not implement a complete sharding scenario; they only provide the requirements for each step.
Example Sharded Database Deployment walks you through a specific deployment scenario of a representative reference configuration. This section provides examples of every command needed to produce a fully functional sharded database after all of the steps are completed.
Provision and Configure Hosts and Operating Systems
Before you install any software, review these hardware, network, and operating system requirements for Oracle Sharding.
-
Oracle Database Enterprise Edition is required to run an Oracle sharded database.
-
Hardware and operating system requirements for shards are the same as those for Oracle Database. See your Oracle Database installation documentation for these requirements.
-
Hardware and operating system requirements for the shard catalog and shard directors are the same as those for the Global Data Services catalog and global service manager. See Oracle Database Global Data Services Concepts and Administration Guide for these requirements.
-
A low-latency GigE network is required.
-
Port communication requirements are as follows.
-
Every shard must be able to reach the listener and ONS ports of every shard director. The shard director listener ports and the ONS ports must also be opened to the application/client tier, all of the shards, the shard catalog, and all other shard directors.
The default listener port of the shard director is 1522, and the default ONS ports on most platforms are 6123 for the local ONS and 6234 for remote ONS.
-
Every shard must be able to reach the TNS Listener port (default 1521) of the shard catalog (both primary and standbys).
-
The TNS Listener port of each shard must be opened to all shard directors and the shard catalog.
-
All of the port numbers listed above are modifiable during the deployment configuration. However, the port numbers to be used must be known before setting up the host software.
-
Host name resolution must be successful between all of the shard catalog, shards, and shard director hosts. Operating system commands such as ‘ping’ must succeed from a given host to any other host when specifying any host names provided during sharded database configuration commands.
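Beyond name resolution, it can be useful to verify TCP reachability of each required port before installing any software. The following is a minimal sketch using bash's /dev/tcp support; the host names and ports are placeholders for your own shard director, ONS, and catalog endpoints:

```shell
#!/bin/bash
# Report whether a TCP port on a remote host is reachable.
check_port() {
  local host=$1 port=$2
  # Attempt a TCP connection with a 3-second timeout.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Placeholder hosts/ports; substitute your own before use.
check_port sharddirector1 1522   # shard director listener (default 1522)
check_port sharddirector1 6123   # local ONS port
check_port catalog-host 1521     # shard catalog TNS listener (default 1521)
```

Run this from each host toward every other host it must reach, per the port matrix above.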
Number and Sizing of Host Systems
Depending on your specific configuration, the hosts that are needed may include the following:
-
Shard catalog host. The shard catalog host runs the Oracle Database that serves as the shard catalog. This database contains a small amount of sharding topology metadata and any duplicated tables that are created for your application. In addition, this database acts as a query coordinator for cross-shard queries and services connections for applications that have not been written to be sharding-aware. In general, the transaction workload and size of this database are not particularly large.
-
Shard catalog database standbys (replicas). At least one more host to contain a replica or standby of the primary shard catalog database is recommended. This host is necessary in case of a failure of the primary catalog host. In addition, while acting as a standby database, this host can also be configured to be a query coordinator for cross-shard queries.
-
Shard director host. The shard director (global service manager) software can reside on a separate host, or it can be co-located on the same host as the shard catalog. This component of the sharding system comprises a network listener and several background processes used to monitor and configure a sharded configuration. If the shard director is co-located on the same host as the catalog database, it must be installed in a separate Oracle Home from the catalog database because the installation package is different from the one used for Oracle Database.
-
Multiple shard directors. For high-availability purposes, it is recommended that you have more than one shard director running in a sharded system. Any additional shard directors can run on their own hosts or on the hosts running the standby shard catalog databases.
-
Shards. In addition to the above hosts, each shard that is configured in the system should also run on its own separate host. The hosts and their configurations chosen for this task should be sized in the same way as a typical Oracle Database host depending on how much load is put on each particular shard.
-
Shard standbys (replicas). Again, for high-availability and disaster recovery purposes, replication technology such as Oracle Data Guard or Oracle GoldenGate should be used, and replicas created for all sharded data. Additional hosts will be needed to run these replica or standby databases.
Note:
Oracle GoldenGate replication support for Oracle Sharding High Availability is deprecated in Oracle Database 21c.
Once the number of hosts and capacity requirements for each host have been determined, provision your hardware resources as appropriate for your environment using whatever methodologies you choose. Before installing any software, you must confirm that the hosts can communicate with each other through the ports as described above. Because a sharding configuration is inherently a distributed system, it is crucial that this connectivity between and among all of the hosts is confirmed before moving on to the next steps in the deployment process. Failure to set up port access correctly will lead to failures in subsequent commands.
Install the Oracle Database Software
Install Oracle Database on each system that will host the shard catalog, a database shard, or their replicas.
Aside from the requirement that the shard catalog and all of the shards in an Oracle Sharding configuration use Oracle Database Enterprise Edition, there are no other special installation considerations needed for sharding as long as the installation is successful and all post-install scripts have been run successfully.
See your platform’s installation guide at https://docs.oracle.com/en/database/oracle/oracle-database/ for information about configuring operating system users.
If you will be using the CREATE SHARD method to add shards to your configuration, you must also install the Remote Scheduler Agent software on each shard host. The agent does not need to be installed on the shard catalog hosts. See Installing and Configuring the Scheduler Agent on a Remote Host for more information.
The CREATE SHARD method also requires the creation of two directories on each shard host, $ORACLE_BASE/oradata and $ORACLE_BASE/fast_recovery_area. Create these two directories while logged into the shard host as the owner of the Oracle Database software. Permissions should be set the same as for the directories that will hold the data files for your shard database, which is typically full access for the software owner only.
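The directory setup can be sketched as follows, run as the Oracle software owner on each shard host. The fallback ORACLE_BASE value below is illustrative only; use your actual ORACLE_BASE:

```shell
#!/bin/bash
# Use the environment's ORACLE_BASE; the fallback path is an example value.
: "${ORACLE_BASE:=$HOME/app/oracle}"

# Create the two directories required by CREATE SHARD.
mkdir -p "$ORACLE_BASE/oradata" "$ORACLE_BASE/fast_recovery_area"

# Restrict access to the software owner only (typical data file permissions).
chmod 700 "$ORACLE_BASE/oradata" "$ORACLE_BASE/fast_recovery_area"
```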
Install the Shard Director Software
Install the global service manager software on each system that you want to host a shard director.
Note that this software installation is distinct from an Oracle Database installation. If you choose to co-locate the shard director software on the same host as the shard catalog database, it must be installed in a separate Oracle Home.
See Oracle Database Global Data Services Concepts and Administration Guide for information about installing the global service manager software.
Create the Shard Catalog Database
Use the following information and guidelines to create the shard catalog database.
The shard catalog database contains a small amount of sharding topology metadata and also contains all the duplicated tables that will be created for use by your sharded application. The catalog database also acts as a query coordinator to run cross-shard queries that select and aggregate data from more than one shard.
From a sharding perspective, the way in which you create or provision the catalog database is irrelevant. The database can be created with the Database Configuration Assistant (DBCA), manually using SQL*Plus, or provisioned from cloud infrastructure tools.
As long as you have a running Oracle Database Enterprise Edition instance on the shard catalog host with the following characteristics, it can be used as the shard catalog.
-
Create a legacy database or a pluggable database (PDB) for use as the shard catalog database. Using the root container (CDB$ROOT) of a container database (CDB) as the shard catalog database is not supported.
-
Your shard catalog database must use a server parameter file (SPFILE). This is required because the sharding infrastructure uses internal database parameters to store configuration metadata, and that data needs to persist across database startup and shutdown operations.

$ sqlplus / as sysdba
SQL> show parameter spfile

NAME     TYPE      VALUE
-------- --------- ------------------------------------
spfile   string    /u01/app/oracle/dbs/spfilecat.ora
-
The database character set and national character set must be the same for the shard catalog and all of the shard databases. This means that the character set chosen must contain all possible characters that will be inserted into the shard catalog or any of the shards.
This requirement arises from the fact that Oracle Data Pump is used internally to move transportable tablespaces from one shard to another during sharding MOVE CHUNK commands. A requirement of that mechanism is that character sets must match on the source and destination.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=catalog_pdb_name;
SQL> select * from nls_database_parameters
  2  where parameter like '%CHARACTERSET';

PARAMETER                                VALUE
---------------------------------------- --------------------
NLS_NCHAR_CHARACTERSET                   AL16UTF16
NLS_CHARACTERSET                         WE8DEC
-
Because the shard catalog database can run multi-shard queries which connect to shards over database links, the OPEN_LINKS and OPEN_LINKS_PER_INSTANCE database initialization parameter values must be greater than or equal to the number of shards that will be part of the sharded database configuration.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=catalog_pdb_name;
SQL> show parameter open_links

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
open_links                           integer     20
open_links_per_instance              integer     20
-
Set the DB_FILES database initialization parameter greater than or equal to the total number of chunks and/or tablespaces in the system.
Each data chunk in a sharding configuration is implemented as a tablespace partition and resides in its own operating system data file. As a result, the DB_FILES database initialization parameter must be greater than or equal to the total number of chunks (as specified in the CREATE SHARDCATALOG or ADD SHARDSPACE commands) and/or tablespaces in the system.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=catalog_pdb_name;
SQL> show parameter db_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
db_files                             integer     1024
-
If you are planning to use CREATE SHARD to add shards to the sharding configuration, then the SHARED_SERVERS and DISPATCHERS database initialization parameters must be set to allow the Remote Scheduler Agent to connect to the catalog over an XDB connection. This is not necessary if ADD SHARD will be used.
Specifically, SHARED_SERVERS must be greater than 0 (zero) to allow shared server connections to the shard catalog from the Remote Scheduler Agent processes running on the shard hosts. In addition, the value of DISPATCHERS must contain a service for XDB, based on the Oracle SID value.

$ sqlplus / as sysdba
SQL> show parameter shared_servers

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
shared_servers                       integer     5

SQL> show parameter dispatchers

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------
dispatchers                          string      (PROTOCOL=TCP), (PROTOCOL=TCP)(SERVICE=mysidXDB)

After setting the parameter values appropriately, run the ALTER SYSTEM REGISTER command to ensure that the XDB service is available for incoming connection requests.
-
To support Oracle Managed Files, which is used by the sharding chunk management infrastructure, the DB_CREATE_FILE_DEST database parameter must be set to a valid value.
This location is used during chunk movement operations (for example, MOVE CHUNK or automatic rebalancing) to store the transportable tablespaces holding the chunk data. In addition, files described in Oracle Database Administrator's Guide, "Using Oracle Managed Files," are also stored in this location as is customary for any Oracle database using Oracle Managed Files.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=catalog_pdb_name;
SQL> show parameter db_create_file_dest

NAME                  TYPE      VALUE
--------------------- --------- -----------------------------
db_create_file_dest   string    /u01/app/oracle/oradata
-
If a standby catalog database will be part of the sharding configuration, the STANDBY_FILE_MANAGEMENT database parameter should be set to AUTO in order to automatically create new database files on any standby catalog databases.
If this parameter is set to MANUAL (which is the default), then new database files created during CREATE TABLESPACE commands, for example, will not be created on the standby. This will cause data unavailability and application errors if the standby ever becomes a primary database.

$ sqlplus / as sysdba
SQL> alter session set container=catalog_pdb_name;
SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
standby_file_management              string      AUTO
-
An Oracle-provided user account named GSMCATUSER must be unlocked and assigned a password inside the legacy database or PDB designated for the shard catalog. This account is used by the shard director processes to connect to the shard catalog database and perform administrative tasks in response to sharding commands.
If you are using a PDB as the shard catalog, note that GSMCATUSER is a common user in the container database. As a result, its password is the same for CDB$ROOT and all PDBs in the CDB. If multiple PDBs in a single CDB are to be used as catalog databases for different sharding configurations, they will all share the same GSMCATUSER password, which can be a security concern. To avoid this, host only one shard catalog PDB per CDB, and do not unlock the GSMCATUSER account in any other PDBs.
The password you specify is used later during sharding topology creation in any ADD GSM commands that are issued. It never needs to be specified again because the shard director stores it securely in an Oracle Wallet and decrypts it only when necessary.
The MODIFY GSM command can be used to update the stored password if it is later changed on the shard catalog database.

$ sqlplus / as sysdba
SQL> alter user gsmcatuser account unlock;

User altered.

SQL> alter user gsmcatuser identified by gsmcatuser_password;

User altered.
If you are using a PDB as the shard catalog, also run the following commands.
SQL> alter session set container=catalog_pdb_name;
SQL> alter user gsmcatuser account unlock;

User altered.
-
A shard catalog administrator account must be created, assigned a password, and granted privileges inside the legacy database or PDB designated as the shard catalog.
This account is the administrator account for the sharding metadata in the shard catalog database. It is used to access the shard catalog with the GDSCTL utility when an administrator needs to make changes to the sharded database topology or perform other administrative tasks. GDSCTL connects as this user to the shard catalog database when GDSCTL commands are run. The user name and password specified are used later in the CREATE SHARDCATALOG command. As with the GSMCATUSER account above, the user name and password are stored securely in an Oracle Wallet for later use. The stored credentials can be updated by issuing an explicit CONNECT command from GDSCTL to reset the values in the wallet.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=catalog_pdb_name;
SQL> create user mysdbadmin identified by mysdbadmin_password;

User created.

SQL> grant gsmadmin_role to mysdbadmin;

Grant succeeded.
-
Set up and run an Oracle Net TNS Listener at your chosen port (default is 1521) that can service incoming connection requests for the shard catalog legacy database or PDB.
The TNS Listener can be created and configured in whatever way you wish. If the shard catalog is a PDB, depending on how the database was created, it may be necessary to explicitly create a database service that allows direct connection requests to the PDB without the need to use ALTER SESSION SET CONTAINER.
To validate that the listener is configured correctly when using a PDB for the shard catalog, connect using your newly created mysdbadmin account above and an appropriate connect string. Running LSNRCTL SERVICES lists all services currently available using the listener.

$ sqlplus mysdbadmin/mysdbadmin_password@catalog_connect_string
SQL> show con_name

CON_NAME
-----------------------
catalog_pdb_name

Once you confirm connectivity, make note of the catalog_connect_string above. It is used later in the configuration process in the GDSCTL CREATE SHARDCATALOG command. Typically, it is of the form host:port/service_name (for example, cathost.example.com:1521/catalog_pdb.example.com).
After all of the above requirements have been met, the newly created database can become the target of a GDSCTL CREATE SHARDCATALOG command.
For high availability and disaster recovery purposes, it is highly recommended that you also create one or more standby shard catalog databases. From a sharding perspective, as long as the above requirements are also met on the standby databases, and all changes to the primary shard catalog database are consistently applied to the standbys, there are no further sharding-specific configuration steps required.
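For example, once the prerequisites above are met, registering the catalog might be sketched as follows, reusing the hypothetical mysdbadmin account and connect string from above (the sharding method and replication options shown are illustrative):

```
GDSCTL> create shardcatalog
        -database cathost.example.com:1521/catalog_pdb.example.com
        -user mysdbadmin/mysdbadmin_password -sharding system -repl DG
```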
Create the Shard Databases
If you are using the CREATE SHARD method to add shards to your configuration, skip this topic because it does not apply to CREATE SHARD. Otherwise, create the databases that will be used as shards on their respective hosts.
As with the shard catalog database, the way in which you create or provision the shard databases is irrelevant from a sharding perspective. The database can be created with the Database Configuration Assistant (DBCA), manually using SQL*Plus, or provisioned from cloud infrastructure tools.
As long as you have a running Oracle Database Enterprise Edition instance on each shard host, with the following characteristics, it can be used as a shard.
-
If the shard will be a PDB in a CDB, then an Oracle-provided user account named GSMROOTUSER must be unlocked and assigned a password inside CDB$ROOT of the database designated for a shard. In addition, this user must be granted the SYSDG and SYSBACKUP system privileges.
The GSMROOTUSER account is used by GDSCTL and the shard director processes to connect to the shard database to perform administrative tasks in response to sharding commands. The password specified is used by GDSCTL during sharding topology creation in any ADD CDB commands that are issued. It is also used by the shard director during the DEPLOY command to configure Oracle Data Guard (as necessary) on the shard databases. It never needs to be specified again by the user, because GDSCTL and the shard director store it securely in an Oracle Wallet and decrypt it only when necessary. The MODIFY CDB command can be used to update the stored password if it is later changed on the shard database.

$ sqlplus / as sysdba
SQL> alter user gsmrootuser account unlock;

User altered.

SQL> alter user gsmrootuser identified by gsmrootuser_password;

User altered.

SQL> grant SYSDG, SYSBACKUP to gsmrootuser;

Grant succeeded.
-
If the shard will be a PDB, then create a PDB for use as the shard database. Using the root container (CDB$ROOT) of a CDB as a shard is not supported.
-
Your shard database must use a server parameter file (SPFILE). The SPFILE is required because the sharding infrastructure uses internal database parameters to store configuration metadata, and that data must persist through database startup and shutdown operations.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> show parameter spfile

NAME     TYPE      VALUE
-------- --------- ------------------------------------
spfile   string    /u01/app/oracle/dbs/spfileshard.ora
-
The database character set and national character set of the shard database must be the same as those used for the shard catalog database and all other shard databases. This means that the character set you choose must contain all possible characters that will be inserted into the shard catalog or any of the shards.
This requirement arises from the fact that Oracle Data Pump is used internally to move transportable tablespaces from one shard to another during sharding MOVE CHUNK commands. A requirement of that mechanism is that character sets must match on the source and destination.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> select * from nls_database_parameters
  2  where parameter like '%CHARACTERSET';

PARAMETER                                VALUE
---------------------------------------- --------------------
NLS_NCHAR_CHARACTERSET                   AL16UTF16
NLS_CHARACTERSET                         WE8DEC
-
The COMPATIBLE initialization parameter must be set to at least 12.2.0.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> show parameter compatible

NAME                   TYPE        VALUE
---------------------- ----------- -----------------
compatible             string      19.0.0
-
Enable Flashback Database if your sharded database will use standby shard databases.
$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES
-
FORCE LOGGING mode must be enabled if your shard database will use standby shard databases.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES
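If either of the two checks above shows that Flashback Database or FORCE LOGGING mode is disabled, they can typically be enabled with statements such as the following, run while connected as SYSDBA. Note that Flashback Database requires a fast recovery area to already be configured, and the exact steps depend on your database configuration:

```
SQL> alter database flashback on;
SQL> alter database force logging;
```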
-
Set the DB_FILES database initialization parameter greater than or equal to the total number of chunks and/or tablespaces in the system.
Each data chunk in a sharding configuration is implemented as a tablespace partition and resides in its own operating system data file. As a result, the DB_FILES database initialization parameter must be greater than or equal to the total number of chunks (as specified in the CREATE SHARDCATALOG or ADD SHARDSPACE commands) and/or tablespaces in the system.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> show parameter db_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
db_files                             integer     1024
-
To support Oracle Managed Files, which is used by the sharding chunk management infrastructure, the DB_CREATE_FILE_DEST database parameter must be set to a valid value.
This location is used during chunk movement operations (for example, MOVE CHUNK or automatic rebalancing) to store the transportable tablespaces holding the chunk data. In addition, files described in Oracle Database Administrator's Guide, "Using Oracle Managed Files," are also stored in this location as is customary for any Oracle database using Oracle Managed Files.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> show parameter db_create_file_dest

NAME                  TYPE      VALUE
--------------------- --------- -----------------------------
db_create_file_dest   string    /u01/app/oracle/oradata
-
A directory object named DATA_PUMP_DIR must be created and accessible in the shard database from the GSMADMIN_INTERNAL account.
GSMADMIN_INTERNAL is an Oracle-supplied account that owns all of the sharding metadata tables and PL/SQL packages. It should remain locked and is never used to log in interactively. Its only purpose is to own and control access to the sharding metadata and PL/SQL.

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> create or replace directory DATA_PUMP_DIR as '/u01/app/oracle/oradata';

Directory created.

SQL> grant read, write on directory DATA_PUMP_DIR to gsmadmin_internal;

Grant succeeded.
-
To support file movement from shard to shard, the DB_FILE_NAME_CONVERT database parameter must be set to a valid value. This location is used when standby databases are in use, as is typical with non-sharded databases, and the location can also be used during chunk movement operations. For regular file system locations, it is recommended that this parameter end with a trailing slash (/).

$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> show parameter db_file_name_convert

NAME                   TYPE      VALUE
---------------------- --------- -----------------------------
db_file_name_convert   string    /dbs/SHARD1/, /dbs/SHARD1S/
-
If standby shard databases will be part of the sharding configuration, the STANDBY_FILE_MANAGEMENT database parameter should be set to AUTO to automatically create new database files on any standby shard databases.
If this parameter is set to MANUAL (which is the default), then new database files created during CREATE TABLESPACE commands, for example, will not be created on the standby. This will cause data unavailability and application errors if the standby ever becomes a primary database.

$ sqlplus / as sysdba
SQL> alter session set container=shard_pdb_name;
SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
standby_file_management              string      AUTO
-
An Oracle-provided user account named GSMUSER must be unlocked and assigned a password inside the PDB or legacy database designated as the shard database. In addition, this user must be granted the SYSDG and SYSBACKUP system privileges.

If the shards are PDBs, note that GSMUSER is a common user in the CDB. As a result, its password is the same for CDB$ROOT and all PDBs in the CDB, which can be a security concern. To avoid this, host only one shard PDB per CDB, and do not unlock the GSMUSER account in any other PDBs.

This account is used by the shard director processes to connect to the shard database and perform administrative tasks in response to sharding commands. The password specified is used later during sharding topology creation in any ADD SHARD commands that are issued. The password never needs to be specified again because the shard director stores it securely in an Oracle Wallet and only decrypts it when necessary. You can update the stored password using the MODIFY SHARD command if the password is later changed on the shard database.

$ sqlplus / as sysdba
SQL> alter user gsmuser account unlock;

User altered.

SQL> alter user gsmuser identified by gsmuser_password;

User altered.

SQL> REM run the following commands if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> alter user gsmuser account unlock;

User altered.

SQL> REM in all cases run the following command
SQL> grant SYSDG, SYSBACKUP to gsmuser;

Grant succeeded.
-
Set up and run an Oracle Net TNS Listener at your chosen port (default is 1521) that can service incoming connection requests for the shard PDB.
The TNS Listener can be created and configured in whatever way you wish. If the shard is a PDB, depending on how the database was created, it may be necessary to explicitly create a database service that allows direct connection requests to the PDB without the need to use ALTER SESSION SET CONTAINER.

To validate that the listener is configured correctly when using a PDB for the shard, run the following command using your newly unlocked GSMUSER account and an appropriate connect string. Running LSNRCTL SERVICES lists all services currently available using the listener.

$ sqlplus gsmuser/gsmuser_password@shard_connect_string
SQL> show con_name

CON_NAME
-----------------------
shard_pdb_name
Once you confirm connectivity, make note of the shard_connect_string above. It is used later in the configuration process in the GDSCTL ADD SHARD command. Typically, the connect string is in the form host:port/service_name (for example, shardhost.example.com:1521/shard_pdb.example.com).
Validate the Shard Database
To validate that all of the above requirements have been met, you can
run an Oracle-supplied procedure, validateShard
, that inspects the
shard database and reports any issues encountered. This procedure is read-only and
makes no changes to the database configuration.
The validateShard
procedure can and should be run
against primary, mounted (unopened) standby, and Active Data Guard standby databases
that are part of the sharded database configuration. You can run
validateShard
multiple times and at any time during the sharded
database life cycle, including after upgrades and patching.
To run the validateShard procedure, do the following:
$ sqlplus / as sysdba
SQL> REM run the following command if using a CDB
SQL> alter session set container=shard_pdb_name;
SQL> set serveroutput on
SQL> execute dbms_gsm_fix.validateShard
This procedure will produce output similar to the following:
INFO: Data Guard shard validation requested.
INFO: Database role is PRIMARY.
INFO: Database name is SHARD1.
INFO: Database unique name is shard1.
INFO: Database ID is 4183411430.
INFO: Database open mode is READ WRITE.
INFO: Database in archivelog mode.
INFO: Flashback is on.
INFO: Force logging is on.
INFO: Database platform is Linux x86 64-bit.
INFO: Database character set is WE8DEC. This value must match the character set of the catalog database.
INFO: 'compatible' initialization parameter validated successfully.
INFO: Database is a multitenant container database.
INFO: Current container is SHARD1_PDB1.
INFO: Database is using a server parameter file (spfile).
INFO: db_create_file_dest set to: '/u01/app/oracle/dbs'
INFO: db_recovery_file_dest set to: '/u01/app/oracle/dbs'
INFO: db_files=1000. Must be greater than the number of chunks and/or
tablespaces to be created in the shard.
INFO: dg_broker_start set to TRUE.
INFO: remote_login_passwordfile set to EXCLUSIVE.
INFO: db_file_name_convert set to: '/dbs/SHARD1/, /dbs/SHARD1S/'
INFO: GSMUSER account validated successfully.
INFO: DATA_PUMP_DIR is '/u01/app/oracle/dbs/9830571348DFEBA8E0537517C40AF64B'.
All output lines marked INFO
are for informational
purposes and should be validated as correct for your configuration.
All output lines marked ERROR
must be fixed before moving on to the
next deployment steps. These issues will cause errors for certain sharding
operations if they are not resolved.
All output lines marked WARNING
may or may not be applicable for
your configuration. For example, if standby databases will not be used for this
particular deployment, then any warnings related to standby databases or recovery
can be ignored. This is especially true for non-production, proof-of-concept, or
application development deployments. Review all warnings and resolve as
necessary.
Once all of the above steps have been completed, the newly created
database can now be the target of a GDSCTL ADD SHARD
command.
For high availability and disaster recovery purposes, it is highly
recommended that you also create one or more standby shard databases. From a
sharding perspective, as long as the above requirements are also met on the standby
databases, and all changes to the primary shard database are applied to the
standbys, the standby database only needs to be added to the sharding configuration
with an ADD SHARD
command.
Configure the Sharded Database Topology
The sharded database topology is described by the sharding metadata in the
shard catalog database. Use GDSCTL
to configure the sharded database
topology.
The sharded database topology consists of the sharding method, replication (high availability) technology, the default number of chunks to be present in the sharded database, the location and number of shard directors, the numbers of shardgroups, shardspaces, regions, and shards in the sharded database, and the global services that will be used to connect to the sharded database.
Keep the Global Data Services Control Utility (GDSCTL)
Command Reference in the Oracle Database Global Data Services Concepts and Administration Guide on hand for
information about usage and options for the GDSCTL
commands used in
the configuration procedures.
Create the Shard Catalog
Use the GDSCTL CREATE SHARDCATALOG
command to create
metadata describing the sharded database topology in the shard catalog
database.
Note that once you run CREATE SHARDCATALOG
, and the rest
of the sharding metadata has been created, there are several metadata properties
that cannot be modified without recreating the entire sharded database from scratch.
These include the sharding method (system-managed, composite, user-defined),
replication technology (Oracle Data Guard, Oracle GoldenGate), default number of
chunks in the database, and others. Make sure that you consult the
GDSCTL
reference documentation for the complete list of
possible command options and their defaults.
Consult the GDSCTL
documentation or run GDSCTL HELP CREATE
SHARDCATALOG
for more details about the command usage.
Shard Catalog Connect String
When you run the CREATE SHARDCATALOG
command,
GDSCTL
connects to the shard catalog database with the user
name and connect string specified.
If your shard catalog database has an associated standby database for high availability or disaster recovery purposes, the connection string, catalog_connect_string, in the examples that follow, should specify all of the primary and standby databases. If you don't include the standby databases in the connect string, then the shard director processes will not be able to connect to the standby if the primary shard catalog is unavailable.
If the shard catalog database is a PDB, note that catalog_connect_string should specify the PDB for the shard catalog
database, not the CDB$ROOT
.
The following is a simple tnsnames.ora
entry.
CATALOG_CONNECT_STRING=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = tcp)(HOST = primary_catalog)(PORT = 1521))
(ADDRESS = (PROTOCOL = tcp)(HOST = standby_catalog)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = catpdb.example.com)
)
)
If you are using the ADD SHARD
method to create shards, do only the
first step. If you are using the CREATE SHARD
method, do both
steps.
-
Run CREATE SHARDCATALOG with the settings appropriate for your planned sharding topology.

Additional Parameters Required for CREATE SHARD Method

If you will use the CREATE SHARD method to add shards to the configuration, then when you run CREATE SHARDCATALOG you must set the following additional parameters, which are required for Remote Scheduler Agent registration in the next step.

-agent_password specifies the password that the Remote Scheduler Agent uses to register with the shard catalog.

-agent_port specifies the port number that the Agent uses to create an XDB connection to the shard catalog. The default for this parameter is 8080.
System-Managed Sharding Method

In the following example, the sharded database metadata is created for a system-managed sharding configuration with two regions named region1 and region2. Because system-managed is the default sharding method, it does not need to be specified with the -sharding parameter.

GDSCTL> create shardcatalog -database catalog_connect_string -user mysdbadmin/mysdbadmin_password -repl DG -region region1,region2

Note also that if -shardspace is not specified, a default shardspace named shardspaceora is created. If -region is not specified, the default region named regionora is created. If the single default region is created along with the default shardspace, then a default shardgroup named shardspaceora_regionora is also created in the shardspace.

Composite Sharding Method

The following example shows you how to create shard catalog metadata for a composite sharded database with Data Guard replication in MaxAvailability protection mode, 60 chunks per shardspace, and two shardspaces.

GDSCTL> create shardcatalog -database catalog_connect_string -user mysdbadmin/mysdbadmin_password -sharding composite -chunks 60 -protectmode maxavailability -shardspace shardspace1,shardspace2
User-Defined Sharding Method
The next example shows you how to create shard catalog metadata for a user-defined sharded database with Data Guard replication.
GDSCTL> create shardcatalog -database catalog_connect_string -user mysdbadmin/mysdbadmin_password -sharding user -protectmode maxperformance
-
For CREATE SHARD method only: Register the Remote Scheduler Agents with the shard catalog, and start the agents on each shard host.

Go to each shard host, log in as the owner of the Oracle software installation, and run the following schagent commands in the Oracle Home from which the shard database will run.

schagent -registerdatabase catalog_hostname agent_port
schagent -start

In the schagent commands above, replace catalog_hostname with the name of the shard catalog host, and replace agent_port with the port number you configured in CREATE SHARDCATALOG above.

For example:

$ $ORACLE_HOME/bin/schagent -registerdatabase cathost.example.com 8080
$ $ORACLE_HOME/bin/schagent -start

After successful agent registration, the shard host can receive remote job requests from the shard catalog during GDSCTL DEPLOY. After a successful deployment on a given host, the Remote Scheduler Agent is no longer used during a sharded database's life cycle and can be stopped safely using the following command.

$ $ORACLE_HOME/bin/schagent -stop
Future Connections to the Shard Catalog
GDSCTL
stores the credentials for the shard catalog
administrator in a wallet on the local host. However, for subsequent
GDSCTL
sessions on other hosts, it may be necessary to
explicitly connect to the shard catalog in order to perform administrative tasks by
running the GDSCTL CONNECT
command, as shown here.
GDSCTL> connect mysdbadmin/mysdbadmin_password@catalog_connect_string
Add and Start Shard Directors
Add the shard directors to the configuration and start them. Shard directors monitor the sharding system and run background tasks in response to GDSCTL commands and other events.
The following commands must be run on the host where the shard director processes are to run. This can be the shard catalog host or a dedicated host for the shard director processes.
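As a sketch (the director name, region, and password placeholders below are illustrative, not prescribed by this procedure), a shard director is typically added with the ADD GSM command, supplying the shard catalog connect string and the GSMCATUSER password, and then started with START GSM:

```
GDSCTL> add gsm -gsm sharddirector1 -catalog catalog_connect_string -pwd gsmcatuser_password -region region1
GDSCTL> start gsm -gsm sharddirector1
```

Repeat these commands on each host that will run a shard director process.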
For later GDSCTL
sessions, you might need to explicitly
specify the shard director to be administered. If an error message is shown
referencing the default GSMORA shard director, run
GDSCTL SET GSM
before continuing, as shown here.
GDSCTL> set gsm -gsm sharddirector1
Add Shardspaces If Needed
If you are using composite or user-defined sharding, and you need to add
more shardspaces to complete your desired sharding topology, use the ADD
SHARDSPACE
command to add additional shardspaces.
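For example, in a composite topology you might add an additional pair of shardspaces as follows (the shardspace names and chunk count are illustrative):

```
GDSCTL> add shardspace -shardspace shardspace3 -chunks 60
GDSCTL> add shardspace -shardspace shardspace4 -chunks 60
```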
Add Shardgroups If Needed
If your sharded database topology uses the system-managed or composite sharding method, you can add any necessary additional shardgroups for your application.
Each shardspace must contain at least one primary shardgroup and may contain any number or type of standby shardgroups. Shardgroups are not used in the user-defined sharding method.
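As an illustration (the shardgroup and region names are examples only), a system-managed topology with a primary region and an Active Data Guard standby region might define one shardgroup for each:

```
GDSCTL> add shardgroup -shardgroup shardgroup_primary -deploy_as primary -region region1
GDSCTL> add shardgroup -shardgroup shardgroup_standby -deploy_as active_standby -region region2
```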
Verify the Sharding Topology
Before adding information about your shard databases to the catalog, verify that your sharding topology is correct by using the various GDSCTL CONFIG commands.
Once shards are added and deployed, it is no longer possible to change much of the shard catalog metadata, so validating your configuration is an important task at this point.
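For example, the following commands display the currently defined topology objects so that you can check them against your planned configuration before adding any shards:

```
GDSCTL> config
GDSCTL> config region
GDSCTL> config shardspace
GDSCTL> config shardgroup
```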
Add the Shard CDBs
If your shards will be PDBs inside CDBs, then add the CDBs containing the
shard PDBs to the sharding configuration with the ADD CDB
command. If you
will be using non-CDBs as your shards, or will be using CREATE SHARD
to add
shards, skip to the next section.
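A sketch of ADD CDB usage follows; the connect strings are placeholders, and the password supplied is the GSMUSER password configured earlier (GSMUSER is a common user in the CDB, as noted in the shard database requirements):

```
GDSCTL> add cdb -connect cdb1_connect_string -pwd gsmuser_password
GDSCTL> add cdb -connect cdb2_connect_string -pwd gsmuser_password
```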
Add the Shards
Depending on whether you use ADD SHARD
or CREATE
SHARD
to add shards to your configuration, follow the appropriate instructions
below.
Add Shards Using GDSCTL ADD SHARD
Use the GDSCTL ADD SHARD
command to add the shard
information to the shard catalog.
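For example (the shardgroup name, CDB name, and connect string are placeholders), a shard PDB prepared as described earlier can be added with its GSMUSER credentials and, if applicable, its containing CDB:

```
GDSCTL> add shard -connect shard_connect_string -pwd gsmuser_password -shardgroup shardgroup_primary -cdb cdb1
```

The connect string is the same one you validated earlier when testing the GSMUSER connection to the shard.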
Add Shards Using GDSCTL CREATE SHARD
Use the GDSCTL CREATE SHARD
command to create the shard
database and add the shard information to the shard catalog.
Run CREATE SHARD
with the parameters appropriate to your
sharding method, as shown in the following examples.
For system-managed or composite sharding, run
CREATE SHARD
with the parameters shown here.
GDSCTL> create shard -shardgroup shardgroup_name -destination shard_hostname
-osaccount account_name -ospassword account_password

For user-defined sharding, the command usage is slightly different.

GDSCTL> create shard -shardspace shardspace_name -deploy_as db_mode
-destination shard_hostname -osaccount account_name -ospassword account_password
The -shardgroup or -shardspace parameters specify the location of the shard in your sharding topology, and -deploy_as specifies the intended open mode (primary, standby, active_standby) of the shard.
The -destination parameter specifies which Remote Scheduler Agent the shard catalog contacts to spawn NETCA and DBCA to create the shard database. This value is typically the host name of the shard host. To see the list of available destinations, select from the ALL_SCHEDULER_EXTERNAL_DESTS view on the shard catalog database.
The -osaccount and -ospassword parameters specify the operating system user name and password to be used when spawning the NETCA and DBCA processes on the shard host. Typically, the user name is the owner of the Oracle Database software.
Password Encryption
To avoid specifying the cleartext password for the account on each CREATE SHARD command, you can store the encrypted password in the shard catalog for later use by using the GDSCTL ADD CREDENTIAL command. In the CREATE SHARD command, specify the credential name in the command parameters instead of -osaccount and -ospassword, as shown below.

GDSCTL> add credential -credential credential_name
-osaccount account_name -ospassword account_password

GDSCTL> create shard -shardgroup shardgroup_name -destination shard_hostname
-credential credential_name
What Happens When You Run CREATE SHARD
When you run CREATE SHARD
, GDSCTL
validates the
input parameters and the shard host setup and then adds shard metadata to the shard
catalog, which in turn causes the following operations to be performed during
GDSCTL DEPLOY
:
-
Create and start a TNS Listener process on shard hosts at port 1521 with a listener name of "LISTENER" (by default)
-
For primary shards, create the shard database using the default DBCA template located on the shard host at $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc (by default)

The primary shards will have the following characteristics, by default:

- Randomly generated passwords for SYS, SYSTEM, and GSMUSER
- A db_unique_name, db_name, and SID of the form 'shNN', where NN is a sequence-based number that uniquely identifies added shards
- A db_domain value the same as the domain found in the ALL_SCHEDULER_EXTERNAL_DESTS.HOSTNAME column corresponding to the specified destination. If no domain is found, then db_domain is set to the db_domain of the shard catalog database.
- NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET values the same as those for the shard catalog database
- The db_file_name_convert parameter set to '*','$ORACLE_BASE/oradata/'
- The db_create_file_dest parameter set to $ORACLE_BASE/oradata
- The remote_login_passwordfile parameter set to EXCLUSIVE
- The database is in archivelog mode
- Force logging is enabled
- Database flashback is on
- If Oracle Data Guard replication was specified on CREATE SHARDCATALOG, the following parameters are set:
  - dg_broker_start set to TRUE
  - db_recovery_file_dest set to $ORACLE_BASE/fast_recovery_area
  - db_recovery_file_dest_size set to 51200 MB
  - standby_file_management set to AUTO
  - db_flashback_retention_target set to 60
-
For standby shards, use DBCA and RMAN to create the standby database based on the existing primary. In general, all primary database parameters are inherited by the standbys.
CREATE SHARD Usage Tips For Database Customization
The GDSCTL CREATE SHARD
command has several parameters that let you
customize the shard databases.
-
The -sys_password and -system_password parameters let you specify the passwords for the SYS and SYSTEM accounts on your new shards.

Note that the GSMUSER password is always created randomly because this account is not intended to be used for interactive logins. To change the GSMUSER password after deployment, change the password on the database with ALTER USER, and then use the GDSCTL MODIFY SHARD command to update the sharding metadata with the new password.
-
The -netparam and -netparamfile parameters let you customize the TNS listener name and port number when they are created on the shard host.

The value specified for these parameters is the file name of a NETCA response file. Examples of response files can be found in $ORACLE_HOME/assistants/netca on the shard hosts.
-
Likewise, you can use the -dbtemplate and -dbtemplatefile parameters to specify the DBCA template file to be used when creating the shard database.

The default template is $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc on the shard host.
-
You can use the -dbparam and -dbparamfile parameters to directly pass DBCA command line parameters to the DBCA process on the shard host, just as you would enter them when running DBCA from the command line.

To see all of the possible parameters for primary database creation, run dbca -help -createDatabase. To see parameters for standby database creation, run dbca -help -createDuplicateDB. For example, to change the global database name and SID for a primary shard, create a file with the single line shown below, and specify that file name in -dbparam or -dbparamfile.

-gdbName mydb.example.com -sid mysid
-
When specifying any of these parameters, you can use -netparamfile, -dbtemplatefile, or -dbparamfile with an operating system file name, as seen from GDSCTL.

Alternatively, you can save the contents of the file in the shard catalog database with the ADD FILE command and then use -netparam, -dbtemplate, or -dbparam on your CREATE SHARD command.

GDSCTL> create shard -dbtemplatefile /home/user/mytemplate.dbc -netparamfile /home/user/mynetca.rsp ...

or

GDSCTL> add file -file mytemplate -source /home/user/mytemplate.dbc
GDSCTL> add file -file mynetca -source /home/user/mynetca.rsp
GDSCTL> create shard -dbtemplate mytemplate -netparam mynetca ...
If you wish to ensure that a standby shard on a particular host is in the same Data Guard configuration as a specific primary shard, it is recommended that you create the primary shard first, followed by the standby shard, using the desired -destination value in CREATE SHARD. If you create several primary shards sequentially, and then create several standby shards, the primaries and standbys are matched into Data Guard configurations in a non-deterministic way.
Verify the Shard Configuration
Run GDSCTL CONFIG SHARD
to verify that the shard metadata on the
shard catalog is as expected.
GDSCTL> config shard
Name Shard Group Status State Region Availability
--------- ------------------- ------ ----- ------ ------------
sh1 shardgroup_primary U none region1 -
sh2 shardgroup_primary U none region1 -
sh3 shardgroup_standby U none region2 -
sh4 shardgroup_standby U none region2 -
Note that the value for Status is U for “undeployed”, and State and Availability are
none and - until the GDSCTL DEPLOY
command is successfully run.
Add Host Metadata
Add all of the host names and IP addresses of your shard hosts to the shard catalog.
As part of the deployment process, the shard director contacts the shards and directs them to register with the shard director’s TNS listener process. This listener process only accepts incoming registration requests from trusted sources and will reject registration requests from unknown hosts.
If your shard hosts have multiple host names or network interfaces
assigned to them, it is possible that the incoming registration request to the shard
director may come from a host that was not automatically added during ADD
SHARD
or CREATE SHARD
. In this case, the registration
request is rejected and the shard will not deploy correctly. The visible symptom of
this problem will be that CONFIG SHARD
shows
PENDING
for the shard’s Availability after
DEPLOY
has completed.
To avoid this issue, use the GDSCTL ADD INVITEDNODE
command to manually add all host names and IP addresses of your shard hosts to the
shard catalog metadata.
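For example (the host name and address below are illustrative), you can invite additional shard host identities and then confirm the resulting list with CONFIG VNCR:

```
GDSCTL> add invitednode shardhost2.example.com
GDSCTL> add invitednode 192.0.2.21
GDSCTL> config vncr
```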
Deploy the Sharding Configuration
When the sharded database topology has been fully configured with
GDSCTL
commands, run the GDSCTL DEPLOY
command to
deploy the sharded database configuration.
When you run the GDSCTL DEPLOY
command, the output looks
like the following if you used the ADD SHARD
command to configure the
shards.
GDSCTL> deploy
deploy: examining configuration...
deploy: requesting Data Guard configuration on shards via GSM
deploy: shards configured successfully
The operation completed successfully
If you used CREATE SHARD
to configure the shards, the GDSCTL
DEPLOY
command output will look similar to the following.
GDSCTL> deploy
deploy: examining configuration...
deploy: deploying primary shard 'sh1' ...
deploy: network listener configuration successful at destination 'shard1'
deploy: starting DBCA at destination 'shard1' to create primary shard 'sh1' ...
deploy: deploying primary shard 'sh2' ...
deploy: network listener configuration successful at destination 'shard2'
deploy: starting DBCA at destination 'shard2' to create primary shard 'sh2' ...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: waiting for 2 DBCA primary creation job(s) to complete...
deploy: DBCA primary creation job succeeded at destination 'shard1' for shard 'sh1'
deploy: DBCA primary creation job succeeded at destination 'shard2' for shard 'sh2'
deploy: deploying standby shard 'sh3' ...
deploy: network listener configuration successful at destination 'shard3'
deploy: starting DBCA at destination 'shard3' to create standby shard 'sh3' ...
deploy: deploying standby shard 'sh4' ...
deploy: network listener configuration successful at destination 'shard4'
deploy: starting DBCA at destination 'shard4' to create standby shard 'sh4' ...
deploy: waiting for 2 DBCA standby creation job(s) to complete...
deploy: waiting for 2 DBCA standby creation job(s) to complete...
deploy: DBCA standby creation job succeeded at destination 'shard3' for shard 'sh3'
deploy: DBCA standby creation job succeeded at destination 'shard4' for shard 'sh4'
deploy: requesting Data Guard configuration on shards via GSM
deploy: shards configured successfully
The operation completed successfully
What Happens During Deployment
As you can see, when you run DEPLOY
several things happen.
- GDSCTL calls a PL/SQL procedure on the shard catalog that examines the sharded database topology configuration to determine if there are any undeployed shards present that are able to be deployed.
- If the CREATE SHARD method is used to create shards, PL/SQL code on the shard catalog schedules a Remote Scheduler Agent job on each shard host that runs NETCA, which creates and starts a TNS Listener. Then, a second job is scheduled to run DBCA on the shard host, which creates the shard database. If standbys are to be deployed, then another set of NETCA and DBCA jobs is run, which creates the standby databases on their respective hosts after the primary databases are successfully created.
- For shards that are being deployed, the shard catalog sends requests to the shard director to update database parameters on the shards, populate topology metadata on the shard, and direct the shard to register with the shard director.
- If Oracle Data Guard replication is in use, and standby databases are present to deploy, then the shard director calls PL/SQL APIs on the primary shards to create a Data Guard configuration, or to validate an existing configuration on the primary and standby sets. Fast Start Failover functionality is enabled on all of the shards and, in addition, the shard director starts a Data Guard observer process on its host to monitor the Data Guard configuration.
- If new shards are being added to an existing sharded database that already contains deployed shards (called an incremental deployment), then any DDL statements that have been run previously are run on the new shards to ensure that the application schemas are identical across all of the shards.
- Finally, in the case of an incremental deployment on a sharded database using system-managed or composite sharding methods, automatic chunk movement is scheduled in the background, which is intended to balance the number of chunks distributed among the shards now in the configuration. This process can be monitored using the GDSCTL CONFIG CHUNKS command after the DEPLOY command returns control to GDSCTL.
What Does a Successful Deployment Look Like?
Following a successful deployment using PDBs and the ADD
SHARD
method, the output from CONFIG SHARD
should look
similar to the following, if Data Guard active standby shards are in use.
GDSCTL> config shard
Name Shard Group Status State Region Availability
--------- ------------------- ------- -------- ------- ------------
cdb1_pdb1 shardgroup_primary Ok Deployed region1 ONLINE
cdb2_pdb1 shardgroup_standby Ok Deployed region2 READ ONLY
cdb3_pdb2 shardgroup_primary Ok Deployed region1 ONLINE
cdb4_pdb2 shardgroup_standby Ok Deployed region2 READ ONLY
If you used the CREATE SHARD method, or used ADD SHARD with non-CDBs, then the shard names are the db_unique_name values of the shard databases.
If mounted, non-open standbys are in use, the output will be similar to the following, because the shard director is unable to log in to check the status of a mounted database.
GDSCTL> config shard
Name Shard Group Status State Region Availability
--------- ------------------ ------------- -------- ------- ------------
cdb1_pdb1 shardgroup_primary Ok Deployed region1 ONLINE
cdb2_pdb1 shardgroup_standby Uninitialized Deployed region2 -
cdb3_pdb2 shardgroup_primary Ok Deployed region1 ONLINE
cdb4_pdb2 shardgroup_standby Uninitialized Deployed region2 -
What To Do If Something Is Not Right
If any shards are showing an availability of PENDING
, confirm that all
steps related to ADD INVITEDNODE
and CONFIG VNCR
from
the topology configuration were completed. If not, complete them now and run
GDSCTL SYNC DATABASE -database shard_name
to
complete shard deployment.
If you used the CREATE SHARD
method to add shards to your configuration,
and errors from NETCA or DBCA are returned during GDSCTL DEPLOY
from
Remote Scheduler Agent jobs, then do the following steps to resolve the errors and retry
the deployment.
-
Resolve the issues.
The error message returned by GDSCTL DEPLOY should contain enough information for you to locate the output from the failed job on the shard host.

Typically, there will be trace and log files from the NETCA or DBCA execution in $ORACLE_BASE/cfgtoollogs on the shard host. Resolve any underlying issues that caused the failure (bad parameters, resource issues on the host, and the like).
Reset the shard host.

- Stop any running TNS listeners created during the attempted deployment.
- Stop any running shard databases started during the attempted deployment.
- Delete $ORACLE_HOME/network/admin/listener.ora.
- Delete all files associated with the failed shard creation from $ORACLE_BASE/oradata and $ORACLE_BASE/fast_recovery_area.
-
Run
GDSCTL DEPLOY
again.
Create and Start Global Database Services
After the shards are successfully deployed, and the correct status is confirmed, create and start global database services on the shards to service incoming connection requests from your application.
As an example, the commands in the following examples create read-write services on the primary shards in the configuration and read-only services on the standby shards. These service names can then be used in connect strings from your application to appropriately route requests to the correct shards.
After the services are started, your sharded database is ready for application schema creation and incoming client connection requests.
Example 4-1 Add and start a global service that runs on all of the primary shards
The following commands create and start a global service named
oltp_rw_srvc
that a client can use to connect to the sharded
database. The oltp_rw_srvc
service runs read/write transactions on
the primary shards.
GDSCTL> add service -service oltp_rw_srvc -role primary
GDSCTL> start service -service oltp_rw_srvc
Example 4-2 Add and start a global service for the read-only workload to run on the standby shards
The oltp_ro_srvc
global service is created and started
to run read-only workloads on the standby shards. This assumes that the standby
shards are Oracle Active Data Guard standby shards which are open for read-only
access. Mounted, non-open standbys cannot service read-only connections, and exist
for disaster recovery and high availability purposes only.
GDSCTL> add service -service oltp_ro_srvc -role physical_standby
GDSCTL> start service -service oltp_ro_srvc
Example 4-3 Verify the status of the global services
GDSCTL> config service
Name Network name Pool Started Preferred all
---- ------------ ---- ------- -------------
oltp_rw_srvc oltp_rw_srvc.orasdb.oradbcloud orasdb Yes Yes
oltp_ro_srvc oltp_ro_srvc.orasdb.oradbcloud orasdb Yes Yes
GDSCTL> status service
Service "oltp_rw_srvc.orasdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
Instance "orasdb%1", name: "cdb1_pdb1", db: "cdb1_pdb1", region: "region1", status: ready.
Instance "orasdb%21", name: "cdb3_pdb2", db: "cdb3_pdb2", region: "region1", status: ready.
Service "oltp_ro_srvc.orasdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
Instance "orasdb%11", name: "cdb2_pdb1", db: "cdb2_pdb1", region: "region2", status: ready.
Instance "orasdb%31", name: "cdb4_pdb2", db: "cdb4_pdb2", region: "region2", status: ready.
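When automating a deployment check, output like the above can be parsed to confirm that every service instance reports a ready status. A minimal sketch (Python; the regular expression is an assumption based on the output format shown above):

```python
import re

# Sample GDSCTL "status service" output, abbreviated from the example above.
output = """\
Service "oltp_rw_srvc.orasdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
   Instance "orasdb%1", name: "cdb1_pdb1", db: "cdb1_pdb1", region: "region1", status: ready.
   Instance "orasdb%21", name: "cdb3_pdb2", db: "cdb3_pdb2", region: "region1", status: ready.
"""

def all_ready(text):
    # Each instance line reports a status; require every one to be "ready".
    statuses = re.findall(r'status:\s*(\w+)', text)
    return bool(statuses) and all(s == "ready" for s in statuses)

print(all_ready(output))  # True
```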
Verify Shard Status
Once you complete the DEPLOY step in your sharding configuration deployment, verify the detailed status of each shard.
Run GDSCTL CONFIG SHARD to see the detailed status of each shard.
GDSCTL> config shard -shard cdb1_pdb1
Name: cdb1_pdb1
Shard Group: shardgroup_primary
Status: Ok
State: Deployed
Region: region1
Connection string: shard_connect_string
SCAN address:
ONS remote port: 0
Disk Threshold, ms: 20
CPU Threshold, %: 75
Version: 19.0.0.0
Failed DDL:
DDL Error: ---
Management error:
Failed DDL id:
Availability: ONLINE
Rack:
Supported services
------------------------
Name Preferred Status
---- --------- ------
oltp_ro_srvc Yes Enabled
oltp_rw_srvc Yes Enabled
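Checking each shard one at a time is tedious in a larger topology, so the CONFIG SHARD commands can be generated in a loop. A small sketch (Python; the shard names come from the example output above, and how you feed the generated commands to GDSCTL in batch mode depends on your environment):

```python
# Shard names from the example deployment; adjust for your own topology.
shards = ["cdb1_pdb1", "cdb2_pdb1", "cdb3_pdb2", "cdb4_pdb2"]

# One CONFIG SHARD command per shard; write these to a script file and
# run them through GDSCTL (batch invocation details vary by platform).
commands = [f"config shard -shard {s}" for s in shards]
print("\n".join(commands))
```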
Example Sharded Database Deployment
This example explains how to deploy a typical system-managed sharded database with multiple replicas, using Oracle Data Guard for high availability. The shard catalog and the shards in this example are PDBs, and the shards are added to the configuration with the ADD SHARD command.
To deploy a system-managed sharded database, you create shardgroups and shards, create and configure the databases to be used as shards, run the DEPLOY command, and create role-based global services.
You are not required to map data to shards in system-managed sharding, because the data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards. For more conceptual information about the system-managed sharded database, see System-Managed Sharding.
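The idea behind partitioning by consistent hash can be illustrated with a toy model: each sharding key hashes to a stable integer, the integer maps into a fixed set of chunks, and chunks are spread across shards. This is only a conceptual sketch; the chunk count, hash function, and chunk-to-shard assignment below are simplifications, not Oracle's internal algorithm.

```python
import hashlib

NUM_CHUNKS = 12  # illustrative only; Oracle manages the real chunk count
PRIMARY_SHARDS = ["cdb1_pdb1", "cdb3_pdb2"]  # primaries from this example

def chunk_for(key):
    # Hash the sharding key to a stable integer, then map it into a chunk.
    digest = hashlib.sha256(str(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CHUNKS

def shard_for(key):
    # Chunks are distributed across shards; here, a simple modulo assignment.
    return PRIMARY_SHARDS[chunk_for(key) % len(PRIMARY_SHARDS)]

print(shard_for("customer-42"))
```

Because the mapping depends only on the key's hash, any node can compute the same answer without consulting a central directory, which is what makes automatic, even distribution possible.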
Example Sharded Database Topology
Consider the following system-managed sharded database configuration, where shardgroup 1 contains the primary shards, while shardgroups 2 and 3 contain standby replicas.
In addition, let’s assume that the replicas in shardgroup 2 are Oracle Active Data Guard standbys (that is, databases open for read-only access), while the replicas in shardgroup 3 are mounted databases that have not been opened.
Table 4-1 Example System-Managed Topology Host Names

Topology Object | Description
---|---
Shard catalog database | Every sharded database topology requires a shard catalog. In this example, the shard catalog database has two standbys, one in each data center (primary, active standby, and standby).
Regions | Because there are two data centers in this configuration, two corresponding regions are created in the shard catalog database (data center 1 and data center 2).
Shard directors (global service managers) | Each region requires a shard director running on a host within that data center. This example uses two shard directors per region, which is the best practice recommendation (data center 1 and data center 2).
Shardgroups | Data center 1 and data center 2.
Shards |