Exporting a MySQL Instance

Export a MySQL instance to an Object Storage bucket using the MySQL Shell dump utilities. You can then use the data import feature to import the data from the Object Storage bucket into a DB system in the same region.

Use one of the following dump utilities (example calls follow the list):

  • util.dumpInstance(outputUrl[, options]): MySQL instance export utility that exports all compatible schemas to an Object Storage bucket or to local files. By default, this utility exports users, events, routines, and triggers. See Dump Utilities.
  • util.dumpSchemas(schemas, outputUrl[, options]): MySQL schema export utility that exports selected schemas to an Object Storage bucket or to local files.
  • util.dumpTables(schema, tables, outputUrl[, options]): MySQL table export utility that exports selected tables of a schema to an Object Storage bucket or to local files.
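
The following calls are a minimal sketch of how each utility might be invoked against an Object Storage bucket; the bucket name (mds-bucket), the prefixes, and the schema and table names (sales, hr, orders, customers) are hypothetical placeholders.

// Export the entire instance to the bucket under the prefix "full-dump".
util.dumpInstance("full-dump", {osBucketName: "mds-bucket", ocimds: true})

// Export only the sales and hr schemas.
util.dumpSchemas(["sales", "hr"], "schema-dump", {osBucketName: "mds-bucket", ocimds: true})

// Export the orders and customers tables of the sales schema.
util.dumpTables("sales", ["orders", "customers"], "table-dump", {osBucketName: "mds-bucket", ocimds: true})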

While exporting data, the dump utility performs compatibility checks on the schemas. If any issues are found, the utility aborts the export, produces a detailed list of the issues, and suggests steps to correct them. Also, if the connection is interrupted while exporting data, you must re-run the dump utility; you cannot pause and resume an export.

Using MySQL Shell

Use the MySQL Shell dumpInstance utility to export a MySQL instance to an Object Storage bucket.

This task requires the following:
  • MySQL Shell 8.0.27, or higher. Exports created by MySQL Shell 8.0.27, or higher, cannot be imported by earlier versions of MySQL Shell. The latest version of MySQL Shell is recommended.
  • Access to Object Storage and an existing bucket.
  • A valid configuration file. If you have installed and configured the CLI in the default location, you already have a valid configuration file. If you have not installed and configured the CLI, you must either install it or create a configuration file manually (a minimal example follows this list). See SDK and CLI Configuration File.
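
A minimal sketch of such a configuration file, assuming the default ~/.oci/config location; the OCIDs, fingerprint, key path, and region are hypothetical placeholders.

[DEFAULT]
user=ocid1.user.oc1..<UserOCID>
fingerprint=<KeyFingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<TenancyOCID>
region=us-ashburn-1
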
Do the following to export a MySQL instance to an Object Storage bucket:
  1. Run MySQL Shell on the client machine that contains the CLI configuration file.
  2. Switch to the JavaScript input type by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the MySQL instance:
    \c <UserName>@<MySQLIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • <UserName>: Specifies the user name of the MySQL instance.
    • <MySQLIPAddress>: Specifies the IP address or host name of the MySQL instance.
  4. (Recommended optional step) Run the following command to perform a dry run of exporting the entire MySQL instance. It checks the data for DB system compatibility issues and lists those issues, together with suggested solutions, in the output.
    util.dumpInstance("", {dryRun: true, ocimds: true})
    Identify the compatibility options that resolve the issues found. You must include these compatibility options to export the MySQL instance successfully when the ocimds option is enabled.
  5. Run the following command to export the entire MySQL instance to an Object Storage bucket:
    util.dumpInstance("<BucketPrefix>", {osBucketName: "<MDSBucket>", threads: <ThreadSize>, ocimds: true, 
        compatibility: ["<comma separated list of compatibility options>], bytesPerChunk: "<ChunkSize>"})
    • util.dumpInstance: Specifies the command to export an entire MySQL instance.
    • <BucketPrefix>: (Optional) Adds a prefix to the files uploaded to the bucket. If this is specified, the files are uploaded to the defined bucket with the prefix in the following format: <BucketPrefix>/filename, similar to a file path. For example, if <BucketPrefix> is set to test, every file uploaded to the defined bucket, <MDSBucket>, is uploaded as test/filename. If you download a file, the prefix is treated as a folder in the download. For local exports, this parameter is the path to the local directory you want to export to.

      Although the contents of this parameter are optional, the quotation marks are not. Even if you do not intend to use a prefix, you must include the quotation marks in the syntax of the command.

    • osBucketName: Specifies the case-sensitive name of the Object Storage bucket to export to. MySQL Shell uses the tenancy and user information defined in the config file.
    • threads: (Optional) Specifies the number of processing threads to use for this task. The default value is 4. For best performance, it is recommended to set this parameter to the number of CPU cores available on the database server.
    • ocimds: Checks the data for compatibility with HeatWave Service. When this is set to true, you cannot export an instance if it is incompatible with HeatWave Service.
      Note

      Importing the data into a HeatWave DB system with the util.loadDump utility requires the dump to be created with the ocimds option.
    • compatibility: Specifies the list of options that define which modifications are applied to the exported data. Specify the compatibility options suggested by the dry run. See Compatibility Checks.
    • bytesPerChunk: (Optional) For large datasets, it is recommended to use this parameter to define larger chunks. The default chunk size is 64 MB. For example, bytesPerChunk: "128M" specifies a chunk size of 128 MB.
The data is uploaded to the specified bucket.
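
For illustration, the following session sketches the complete procedure end to end. The user name (admin), host (198.51.100.25), bucket name (mds-bucket), prefix (instance-dump), thread count, chunk size, and compatibility options are hypothetical; use the values and the compatibility options that your own dry run reports.

\js
\c admin@198.51.100.25

// Dry run: check compatibility with HeatWave Service without writing any data.
util.dumpInstance("", {dryRun: true, ocimds: true})

// Export the instance to the mds-bucket Object Storage bucket.
util.dumpInstance("instance-dump", {osBucketName: "mds-bucket", threads: 16, ocimds: true,
    compatibility: ["strip_definers", "strip_restricted_grants"], bytesPerChunk: "128M"})

A dump created with the ocimds option can then be loaded into the target DB system with the util.loadDump utility.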

Compatibility Checks

HeatWave Service has several security-related restrictions that are not present in a MySQL instance. Use the ocimds option of the dump utility to perform compatibility checks on the dumped data. If any issues are found, the utility aborts the dump, produces a detailed list of the issues, and suggests steps to correct them.

The following command shows how to perform compatibility checks using the ocimds option in a dry run. Some issues found by the ocimds option may require you to manually edit the schema before it can be loaded into HeatWave Service.

util.dumpInstance("", {mode: "dryrun", ocimds: true})

After you have identified the compatibility issues and compatibility options, you can specify the options in the command that exports the data.

util.dumpInstance("<BucketPrefix>", {osBucketName: "<MDSBucket>", ocimds: true, 
    compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants", 
    "skip_invalid_accounts", "strip_tablespaces", "ignore_missing_pks"] } )

You can use the following comma-separated compatibility options to automatically modify the exported data, resolving some of these compatibility issues:

  • force_innodb: HeatWave Service supports the InnoDB storage engine only. This option modifies the ENGINE clause of CREATE TABLE statements that use incompatible storage engines and replaces them with InnoDB.
  • strip_definers: Strips the "DEFINER=account" clause from views, routines, events, and triggers. HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects are created with the default definer, which is the user loading the schema. Views and routines have their SQL SECURITY clause changed from DEFINER to INVOKER, which ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. If your database security model requires that views and routines have more privileges than their invoker, manually modify the schema before loading it. See DEFINER and SQL Security.
  • strip_restricted_grants: Certain privileges, such as RELOAD, FILE, SUPER, BINLOG_ADMIN, and SET_USER_ID, are restricted in HeatWave Service; you cannot create users granted these privileges. This option strips these privileges from dumped GRANT statements.
  • skip_invalid_accounts: You cannot export a user that has no password defined. This option skips any such users.
  • strip_tablespaces: HeatWave Service has some restrictions on tablespaces. This option strips the TABLESPACE option from CREATE TABLE statements, so all tables are created in their default tablespaces.
  • Primary key flags:
    • create_invisible_pks: Primary keys are required by high availability DB systems. If you intend to export data for use in a highly available DB system, add primary keys to any tables that do not define them. This compatibility flag adds invisible primary keys to each table that requires them (see the example after this list). See Prerequisites.
    • ignore_missing_pks: If you do not intend to import into a high availability DB system, this compatibility flag ignores missing primary keys in your dump.
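
For example, a dump intended for a high availability DB system might use create_invisible_pks instead of ignore_missing_pks; the prefix and bucket name below are hypothetical placeholders.

util.dumpInstance("ha-dump", {osBucketName: "mds-bucket", ocimds: true,
    compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants",
    "skip_invalid_accounts", "strip_tablespaces", "create_invisible_pks"]})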

Additionally, DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are always commented out in DDL scripts if the ocimds option is enabled.

Note

If you intend to export from an older version of MySQL, such as 5.7.9, and you are using a MySQL Shell version older than 8.0.30, it is recommended to run the MySQL Shell Upgrade Checker Utility to generate a report of all potential issues with your migration. See Upgrade Checker Utility.
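
The following is a minimal sketch of running the Upgrade Checker Utility from MySQL Shell in JavaScript mode; the user name and host are hypothetical placeholders.

// Report potential upgrade and migration issues for the source instance.
util.checkForServerUpgrade("admin@198.51.100.25:3306")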