Importing Using MySQL Shell

Use MySQL Shell on an Oracle Cloud Infrastructure compute instance to import data to a standalone MySQL DB system. To import data to a high availability DB system, first import the data to a standalone DB system and then enable high availability.

Note

For a faster import, disable crash recovery or use the data import feature. See Enabling or Disabling Crash Recovery and Importing Using the Data Import Feature.

This task requires the following:
  • SSH access to a compute instance with access to the MySQL DB system, enabling you to run MySQL Shell on the compute instance. See Networking.
  • MySQL Shell 8.0.27 or higher.
  • Enough storage to accommodate imported data, and enough network bandwidth for the import.
Do the following to import data to a MySQL DB system:
  1. Run MySQL Shell on the Oracle Cloud Infrastructure compute instance.
  2. Switch to the JavaScript input type by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the endpoint of the DB system:
    \c <UserName>@<DBSystemEndpointIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • <UserName>: Specifies the username for the DB system.
    • <DBSystemEndpointIPAddress>: Specifies the IP address of the endpoint of the DB system.
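    For example, assuming a hypothetical administrative user named admin and a DB system endpoint at 10.0.1.8, the connection command might look like the following (3306 is the classic MySQL protocol port, and MySQL Shell prompts for the password):
    \c admin@10.0.1.8:3306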
  4. If you are importing a dump that is located on the Oracle Cloud Infrastructure compute instance, run the following command to import data to the MySQL HeatWave Service DB system:
    util.loadDump("<URL>", {waitDumpTimeout: <TimeinSeconds> , updateGtidSet: "append" })
    • util.loadDump: Specifies the command to import data to the MySQL DB system.
    • <URL>: Specifies the path to a local directory containing the dump files. For example, /home/opc/dump.
    • waitDumpTimeout: (Optional) Enables you to apply a dump that is still in the process of being created. Tables are loaded as they become available, and the utility waits for the specified number of seconds after new data stops arriving in the dump location. When the timeout elapses, the utility aborts the import.
    • updateGtidSet: "append": (Optional) Adds the transaction IDs from the source's gtid_executed GTID set to the replica's gtid_purged GTID set. See the description of this option in the next step.
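    For example, assuming the dump files are in the local directory /home/opc/dump and you want the utility to wait up to 300 seconds for additional dump data, the command might look like this:
    util.loadDump("/home/opc/dump", {waitDumpTimeout: 300, updateGtidSet: "append"})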
  5. If you are importing a dump from an Oracle Cloud Infrastructure Object Storage bucket, run the following command to import data to the MySQL HeatWave Service DB system:
    util.loadDump("<URL>", {threads: <Number>, osBucketName: "<MDSBucket>", waitDumpTimeout: <TimeinSeconds> , updateGtidSet: append })
    • util.loadDump: Specifies the command to import data to the MySQL DB system.
    • <URL>: Specifies the path prefix that the dump files have in the bucket, which was assigned using the outputUrl parameter when the dump was created.
    • threads: (Optional) Specifies the number of parallel threads to use to upload chunks of data to the target MySQL instance. The default value is 4.
    • osBucketName: Specifies the name of the Object Storage bucket.
    • waitDumpTimeout: (Optional) Enables you to apply a dump that is still in the process of being created. Tables are loaded as they become available, and the utility waits for the specified number of seconds after new data stops arriving in the dump location. When the timeout elapses, the utility aborts the import.
    • updateGtidSet: "append": (Optional) For inbound replication, adds the transaction IDs from the source's gtid_executed GTID set to the replica's gtid_purged GTID set. This lets the replica begin replication from the source without re-executing every past transaction. Adding the GTIDs to gtid_purged tells the replica that those transactions have already been applied, even though they are not present in the replica's binary log. See GTID Format and Storage.
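    For example, assuming a hypothetical bucket named mds-dump-bucket containing a dump created with the path prefix mydump, a load using 8 parallel threads might look like this:
    util.loadDump("mydump", {threads: 8, osBucketName: "mds-dump-bucket", waitDumpTimeout: 300, updateGtidSet: "append"})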
  6. Retrieve the value of gtid_executed from the dump. The gtid_executed GTID set is included in the dump metadata produced by MySQL Shell's instance dump utility, schema dump utility, or table dump utility, as the gtidExecuted field in the @.json dump file.
  7. Connect to the DB system replica and append the gtid_executed GTID set to the replica's gtid_purged GTID set using the following command:
    CALL sys.SET_GTID_PURGED("+<gtidSet>")

    This command runs a stored procedure on the replica to change the value of the gtid_purged system variable. <gtidSet> is the value of the gtidExecuted field from the @.json dump file, and the leading + sign appends it to the replica's existing gtid_purged GTID set.
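
    For example, assuming the gtidExecuted field in the @.json dump file contains the hypothetical value 3e11fa47-71ca-11e1-9e33-c80aa9429562:1-61, switch to SQL mode with \sql and run:
    CALL sys.SET_GTID_PURGED("+3e11fa47-71ca-11e1-9e33-c80aa9429562:1-61");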

The data is imported into the DB system.
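
To spot-check the result, you can run a quick verification from MySQL Shell; for example, list the schemas and inspect the replica's gtid_purged value (the output depends on your data):
    \sql SHOW SCHEMAS;
    \sql SELECT @@global.gtid_purged;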
