Exporting and Importing Data

Check the compatibility of your on-premise MySQL with MySQL Database Service, use MySQL Shell to export data from MySQL servers, and use MySQL Shell or the Console to import data to MySQL DB systems.

Exporting and Importing Data Overview

Check the compatibility of your on-premise MySQL with MySQL Database Service and use the dump and load utilities of MySQL Shell to export data to, and import data from, Object Storage.

The dump and load utilities of MySQL Shell support all types of exports and imports. The minimum supported source version for each major version of MySQL is as follows:

Note

It is recommended to use the latest version of MySQL Shell.
  • MySQL 8.0.11: Fully supported by MySQL Shell.
  • MySQL 5.7.9: Fully supported by MySQL Shell.
  • MySQL 5.6.10: Fully supported by MySQL Shell 8.0.26, or higher. As of MySQL Shell 8.0.22, you can dump instances, schemas, and tables from a MySQL 5.6.10, or higher, instance, but you cannot dump user accounts. To dump user accounts, use MySQL Shell 8.0.26, or higher.

MySQL Shell provides the following utilities:

  • dumpInstance(): MySQL instance export utility that exports all compatible schemas to an Object Storage bucket or to local files. By default, this utility exports users, events, routines, and triggers.
  • dumpSchemas(): Schema export utility that exports selected schemas to an Object Storage bucket or to local files. See MySQL Shell Instance and Schema Dump Utilities.
  • loadDump(): An import utility that imports schemas to a DB system. See MySQL Shell Dump Loading Utility. To import a schema to a MySQL DB system, install MySQL Shell on a machine with access to the DB system. This can be a local machine using a VPN connection to the VCN, or a compute instance. See Connecting to a DB System, Bastion Session, and VPN Connection.

The dump files are exported as DDL files specifying the schema structure and tab-separated value (.tsv) files containing the data. By default, the .tsv files are compressed using zstd, but gzip is also available as an option. You can also choose no compression, but if you are uploading to Object Storage, the default is recommended.

To further improve performance, large tables are chunked by default. The default chunk size is 32MB. You can disable chunking, but this is not recommended for large databases. You can import the chunks using parallel threads, which can greatly improve import performance.
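As a rough sketch of the arithmetic, the chunk count for a table falls out of its size and the chunk size (the utility's real chunking is row-based around a target byte size, so treat this as an approximation):

```javascript
// Sketch: estimate how many chunks a table produces at a given chunk size.
// Assumption: chunks split purely on uncompressed byte size; the utility's
// actual chunking works on rows around a target size, so this is approximate.
function estimateChunks(tableBytes, bytesPerChunk = 32 * 1024 * 1024) {
  return Math.max(1, Math.ceil(tableBytes / bytesPerChunk));
}

// A 1 GiB table at the default 32 MiB chunk size yields 32 chunks,
// which 8 parallel import threads can process 4 apiece.
console.log(estimateChunks(1024 * 1024 * 1024)); // 32
```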

Compatibility Between On-Premise MySQL and MySQL Database Service

MySQL Database Service has several security-related restrictions that are not present in an on-premise MySQL instance. Use the ocimds option of the dump utility to perform compatibility checks on the schemas and abort the dump if any issues are found. The utility produces a detailed list of issues and suggests steps to correct them.

The dump utility in MySQL Shell makes it easier to load existing databases into MySQL Database Service. The loadDump utility only allows the import of dumps created with the ocimds option enabled.

The following command shows how to perform compatibility checks using the ocimds option. Some issues found by the ocimds option may require you to manually edit your schema before it can be loaded into MySQL Database Service.

util.dumpInstance("<BucketPrefix>", {osBucketName: "<mds-bucket>", ocimds: true,
compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks",
"skip_invalid_accounts"]})

You can use the compatibility options to automatically modify the dumped schemas, which resolves some of these compatibility issues. You can pass one or more of the following comma-separated options to compatibility:

  • force_innodb: MySQL Database Service supports only the InnoDB storage engine. This option modifies the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines, replacing them with InnoDB.
  • strip_definers: Strips the "DEFINER=account" clause from views, routines, events, and triggers. MySQL Database Service requires special privileges to create these objects with a definer other than the user loading the schema. With the DEFINER clause stripped, these objects are created with the default definer, the user loading the schema. Views and routines have their SQL SECURITY clause changed from DEFINER to INVOKER, which ensures that the access permissions of the account querying or calling them are applied, instead of those of the user that created them. If your database security model requires that views and routines have more privileges than their invoker, manually modify the schema before loading it. See DEFINER and SQL Security.
  • strip_restricted_grants: Certain privileges, such as RELOAD, FILE, SUPER, BINLOG_ADMIN, and SET_USER_ID, are restricted in MySQL Database Service; you cannot create users granting these privileges. This option strips these privileges from dumped GRANT statements.
  • skip_invalid_accounts: You cannot export a user that has no password defined. This option skips any such users.
  • strip_tablespaces: Tablespaces have some restrictions in MySQL Database Service. This option strips the TABLESPACE= option from CREATE TABLE statements, so that tables are created in their default tablespaces.
  • Primary key flags:
    • create_invisible_pks: High availability requires a primary key on every table. If you intend to export data for use in a highly available DB system and primary keys are not defined on some tables, this compatibility flag adds an invisible primary key to each table that requires one. See Prerequisites.
    • ignore_missing_pks: If you do not intend to use high availability on your DB system, this compatibility flag ignores missing primary keys in your dump.
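The choice between the two primary key flags can be sketched as a small helper; the other flags shown here mirror the examples in this topic:

```javascript
// Sketch: assemble the compatibility list for util.dumpInstance based on
// whether the target DB system is highly available. Only the primary-key
// flag differs; the remaining flags match the examples in this topic.
function compatibilityFlags(highAvailability) {
  return [
    "strip_restricted_grants",
    "strip_definers",
    "skip_invalid_accounts",
    highAvailability ? "create_invisible_pks" : "ignore_missing_pks",
  ];
}

// For a standalone DB system:
console.log(compatibilityFlags(false));
```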

Additionally, DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are always commented out in DDL scripts if the ocimds option is enabled.

Note

If you intend to export data from an older version of MySQL, such as 5.7.9, the minimum supported source version, it is recommended to run the MySQL Shell Upgrade Checker Utility to generate a report of all potential issues with your migration. See MySQL Shell User Guide - Upgrade Checker Utility.

Migrating from an On-Premise MySQL to MySQL Database Service

Use the Console and MySQL Shell to migrate from an on-premise MySQL to MySQL Database Service.

This task requires the following:
  • Permissions to access the Console, create a DB system, and view and create a VCN configuration.
Do the following to migrate from an on-premise MySQL to MySQL Database Service:
  1. (Optional) Create a VCN with a public and private subnet. See Creating a Virtual Cloud Network.
  2. Create a MySQL DB system. See Creating a DB System Using the Console.
  3. Create a VPN connection. See VPN Connection.
  4. Create an Object Storage bucket to transfer the source data. See the To create a bucket section in Using the Console.
  5. Export your source data to Object Storage. See Exporting Data Using MySQL Shell.
  6. Import the source data to your replica DB system. See Importing Data Using Object Storage Bucket and MySQL Shell.
  7. Populate your DB system using MySQL Shell. See Populating the DB System Replica Using MySQL Shell.

Exporting Data Using MySQL Shell

Use the MySQL Shell dumpInstance utility to export data from a supported MySQL Server source to an Object Storage bucket.

This task requires the following:
  • MySQL Shell 8.0.27, or higher. The commands in this task use the JS execution mode of MySQL Shell.
    Note

    Exports created by MySQL Shell 8.0.27, or higher, cannot be imported by earlier versions of MySQL Shell. The latest version of MySQL Shell is recommended.
  • Access to Object Storage and an existing bucket.
  • A valid configuration file. If you have installed and configured the CLI in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually. See SDK and CLI Configuration File.
  • You have run the dumpInstance command with the dryRun and ocimds parameters set to true. This performs a test run of the export, checking for compatibility issues, and listing those issues in the output. See MySQL Shell Instance and Schema Dump Utilities.
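The dry-run prerequisite can be sketched as follows. The options object is plain JavaScript; util.dumpInstance itself only runs inside MySQL Shell, and the bucket name is a placeholder:

```javascript
// Sketch: options for a compatibility-only test run. dryRun performs no
// upload; it only reports the issues that ocimds would flag.
// "mds-bucket" is a placeholder bucket name.
const dryRunOptions = { osBucketName: "mds-bucket", ocimds: true, dryRun: true };

// Inside mysqlsh (JS mode) you would then run:
//   util.dumpInstance("", dryRunOptions);
console.log(JSON.stringify(dryRunOptions));
```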
To export a local MySQL instance to an Object Storage bucket, you can either export the entire MySQL instance, using util.dumpInstance, or specific schemas using util.dumpSchemas. The syntax for each command is:
  • util.dumpInstance(outputUrl[, options])
  • util.dumpSchemas(schemas, outputUrl[, options])

This task uses util.dumpInstance with compatibility options.

  1. The following command exports an entire instance, stripping any grants which cannot be used in a MySQL Database Service DB system:
    util.dumpInstance("bucketPrefix", {osBucketName: "mds-bucket", threads: n, ocimds: true, 
        compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks", 
        "skip_invalid_accounts"]})
    • util.dumpInstance: Exports all data in the MySQL instance.
    • bucketPrefix: (Optional) Adds a prefix to the files uploaded to the bucket. If this is specified, the files are uploaded to the defined bucket with the prefix, in the format bucketPrefix/filename, similar to a file path. For example, if bucketPrefix is set to test, every file uploaded to the defined bucket, mds-bucket, is uploaded as test/filename. If you download the file, the prefix is treated as a folder in the download. For local exports, this parameter is the path to the local directory you want to export to.
      Note

      Although the contents of this parameter are optional, the quotation marks are not. Even if you do not intend to use a prefix, you must include the quotation marks in the syntax of the command. For example:
      util.dumpInstance("", {osBucketName: "mds-bucket", threads: n, ocimds: true, 
          compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks", 
          "skip_invalid_accounts"]})
    • osBucketName: Specifies the case-sensitive name of the Object Storage bucket to export to. MySQL Shell uses the tenancy and user information defined in the config file.
    • threads: Specifies the number of processing threads to use for this task. Default is 4. For best performance, it is recommended to set this parameter to the number of CPU cores available on the database server.
    • ocimds: Checks the data for compatibility with MySQL Database Service. When this is set to true, you cannot export an instance if it is incompatible with MySQL Database Service. If you are using MySQL Shell 8.0.22 to 8.0.26, setting this to true automatically enables ociParManifest. It is not enabled automatically with MySQL Shell 8.0.27.
    • ociParManifest: true: (Optional) Generates a PAR for read access for each item in the dump, and a manifest file (@.manifest.json) listing all the PAR URLs. The PARs expire after a week by default, which you can change using the ociParExpireTime option.
      Note

      Importing using a manifest file (@.manifest.json) is a deprecated functionality. Import from a bucket or bucket prefix PAR instead.
    • compatibility: Lists the options that specify which modifications are performed on the exported data. The compatibility options used here assume that the exported data will be used in a standalone DB system, not a highly available DB system. See Compatibility Between On-Premise MySQL and MySQL Database Service.
    Note

    For large datasets, it is recommended to use the bytesPerChunk parameter to define larger chunks. The default chunk size is 32MB. To increase the size of the individual chunks, add the bytesPerChunk parameter to the command. For example: bytesPerChunk: 128M specifies a chunk size of 128MB.

    For more information on the options available to the dumpInstance and dumpSchemas utilities, see Instance and Schema Dump Utilities.

  2. The command generates output similar to the following:
    Checking for compatibility with MySQL Database Service 8.0.27
    NOTE: User root@localhost had restricted privileges (RELOAD, FILE, SUPER, BINLOG_ADMIN, SET_USER_ID) removed
    NOTE: Database world had unsupported ENCRYPTION option commented out
    NOTE: Database world_x had unsupported ENCRYPTION option commented out
    Compatibility issues with MySQL Database Service 8.0.27 were found and repaired. 
    Please review the changes made before loading them.
    Acquiring global read lock
    All transactions have been started
    Locking instance for backup
    Global read lock has been released
    Writing global DDL files
    Writing users DDL
    Writing DDL for schema `world`
    Writing DDL for table `world`.`city`
    Preparing data dump for table `world`.`city`
    .....
    .....
    Preparing data dump for table `world_x`.`countrylanguage`
    Data dump for table `world_x`.`countrylanguage` will be chunked using column `CountryCode`
    Running data dump using 8 threads.
    NOTE: Progress information uses estimated values and may not be accurate.
    Writing DDL for table `world_x`.`countryinfo`
    Writing DDL for table `world_x`.`countrylanguage`
    Data dump for table `world_x`.`country` will be written to 1 file
    Data dump for table `world_x`.`city` will be written to 1 file
    Data dump for table `world`.`city` will be written to 1 file
    Data dump for table `world`.`countrylanguage` will be written to 1 file
    Data dump for table `world`.`country` will be written to 1 file
    Data dump for table `world_x`.`countryinfo` will be written to 1 file
    Data dump for table `world_x`.`countrylanguage` will be written to 1 file
    2 thds dumping - 100% (10.84K rows / ~10.81K rows), 1.33K rows/s, 71.70 KB/s uncompressed, 15.01 KB/s compressed
    Duration: 00:00:08s
    Schemas dumped: 3
    Tables dumped: 7
    Uncompressed data size: 514.22 KB
    Compressed data size: 106.78 KB
    Compression ratio: 4.8
    Rows written: 10843
    Bytes written: 106.78 KB
    Average uncompressed throughput: 62.96 KB/s
    Average compressed throughput: 13.07 KB/s
The data is uploaded to the specified bucket.

Importing Data

Use MySQL Shell or the Console to import data to the MySQL DB system.

Note

Ensure your DB system has enough storage space for import.
  • Using Console: Use the Console to import data using an existing PAR URL, or generate one using the wizard provided. The wizard can only generate PAR URLs for buckets and bucket prefixes. It cannot generate PAR URLs for manifest files. See Importing Data Using Pre-Authenticated Requests.
    Note

    You can only import to a DB system in the same region as the Object Storage bucket.
  • Using MySQL Shell: Import data using a PAR for a bucket or bucket prefix, a PAR for a manifest file (deprecated), or direct access to the Object Storage bucket, as described in the following sections.

Importing Data Using Pre-Authenticated Requests

Use the Console to import data from a MySQL Shell dump to the MySQL DB system using Pre-Authenticated Request (PAR).

  1. Open the navigation menu, and select Databases. Under MySQL, click DB Systems.
  2. Click Create DB System.
  3. Configure the DB system, and then click Show Advanced Options.
  4. Click the Data Import tab and provide the following information:
    • PAR Source URL: (Optional) Specify the Pre-Authenticated Request (PAR) URL for the bucket or bucket prefix.
      Note

      Importing using a manifest file (@.manifest.json) is a deprecated functionality.
    • Click here to create a PAR URL for an existing bucket: (Optional) Click the link to create a PAR URL for an existing bucket, and provide the following information:
      • Select a bucket in <CompartmentName>: Select the Object Storage bucket that contains your dump.
      • Configure Prefix:
        • Select the prefix: Select the prefix from the list of valid prefixes.
        • Enter a prefix: Select this option to define a bucket prefix, similar to a folder name. The prefix must exist in the selected bucket. Prefix names take the format prefixName/. Omitting the forward slash delimiter in the PAR results in an invalid URL. You can specify paths with multiple prefixes: prefixName/prefixName1/prefixName2/.
        Note

        MySQL Database Service supports only the folder-type prefix. The filename-matching prefix type is not supported.
      • Specify an expiration time for the PAR: Select an expiration time for the PAR. The default value is one week.
  5. Click Create and set PAR URL to generate the PAR URL.
  6. Click Create.

Importing Data Using Pre-Authenticated Requests and MySQL Shell

Use MySQL Shell and Pre-Authenticated Requests (PAR) to import data from an Object Storage bucket or bucket prefix to a MySQL DB system.

This task requires the following:
  • One of the following network access types to the target MySQL DB system. See Networking Setup.
    • A bridged connection using FastConnect or VPN, enabling you to run MySQL Shell locally.
    • SSH access to a compute instance with access to the MySQL DB system, enabling you to run MySQL Shell on the compute instance.
  • MySQL Shell 8.0.27, or higher.
  • Access to Object Storage and an existing bucket that contains the exported files.
  • PAR URL for the bucket or prefix with the access types Permit object reads and Enable Object Listing. See Working with Pre-Authenticated Requests.
    Note

    MySQL Shell supports only the folder-type prefix. The filename-matching prefix type is not supported.
  • A valid CLI configuration file. If you have installed and configured the CLI in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually. See SDK and CLI Configuration File.
  • Enough storage space in the DB system for importing data.
Do the following to import data from Object Storage to a MySQL DB system:
  1. Run MySQL Shell.
  2. Switch to the JavaScript input type, by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the endpoint of the DB system:
    \c <UserName>@<DBSystemEndpointIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • UserName: Specifies the username for the DB System.
    • DBSystemEndpointIPAddress: Specifies the IP address of the endpoint of the DB system.
  4. Run the following command to import data from an Object Storage bucket, mds-bucket, to the MySQL Database Service DB system:
    util.loadDump("PARURL", {progressFile: "nameOfProgressFile.json"})
    • util.loadDump: Specifies the command to import data from the specified Object Storage bucket to MySQL DB System.
    • PARURL: Specifies the PAR URL of the bucket or bucket prefix. If you are using a PAR for a bucket prefix, manually define the prefix name in the URL. For example, if your generated PAR URL is: objectstorage.region.com/p/secret/n/namespace/b/bucketName/o/ and the prefix you used to generate the PAR is Prefix001, edit the PAR to produce the following: objectstorage.region.com/p/secret/n/namespace/b/bucketName/o/Prefix001/.
    • progressFile: Specifies the filename of the local progress file. This option is mandatory; the command fails if it is not present. You can specify the following values:
      • progressFile: "": No progress file is generated.
      • progressFile: "progressFile.json": A file named progressFile.json is written to your home or user directory, depending on your operating system.
      • progressFile: "C:/temp/progressFile.json": A file named progressFile.json is written to the temp directory on your C: drive.
The data is imported into the DB system and the progress is written to the progress file.
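The manual prefix edit described above amounts to appending the prefix, with its trailing slash, to a PAR URL that ends in /o/. A small sketch with placeholder values:

```javascript
// Sketch: append a bucket prefix to a generated PAR URL. The PAR must end
// with /o/ and the prefix must keep its trailing slash, or the resulting
// URL is invalid. All values here are placeholders, not real endpoints.
function parWithPrefix(parUrl, prefix) {
  if (!parUrl.endsWith("/o/")) throw new Error("expected a PAR URL ending in /o/");
  return parUrl + (prefix.endsWith("/") ? prefix : prefix + "/");
}

console.log(parWithPrefix(
  "https://objectstorage.region.com/p/secret/n/namespace/b/bucketName/o/",
  "Prefix001"
));
```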

Importing Data Using Manifest File Pre-Authenticated Requests and MySQL Shell

Use MySQL Shell and Pre-Authenticated Request (PAR) of a manifest file to import data from Object Storage to a MySQL DB system.

Note

Importing using a manifest file (@.manifest.json) is a deprecated functionality. Import from a bucket or bucket prefix PAR instead.
This task requires the following:
  • One of the following network access types to the target MySQL DB system. See Networking Setup.
    • A bridged connection using FastConnect or VPN, enabling you to run MySQL Shell locally.
    • SSH access to a compute instance with access to the MySQL DB system, enabling you to run MySQL Shell on the compute instance.
  • MySQL Shell 8.0.27, or higher.
  • Access to Object Storage and an existing bucket that contains the exported files.
  • Generated read-only PAR URL for the manifest file (@.manifest.json).
  • A valid CLI configuration file. If you have installed and configured the CLI in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually. See SDK and CLI Configuration File.
  • Enough storage space in the DB system for importing data.
  • A MySQL command-line client such as MySQL Shell.
Do the following to import data from Object Storage to a MySQL DB system:
  1. Run MySQL Shell.
  2. Switch to the JavaScript input type, by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the endpoint of the DB system:
    \c <UserName>@<DBSystemEndpointIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • UserName: Specifies the username for the DB System.
    • DBSystemEndpointIPAddress: Specifies the IP address of the endpoint of the DB system.
  4. Run the following command to import data from an Object Storage bucket, mds-bucket, to the MySQL Database Service DB system:
    util.loadDump("PARURLofManifest", {osBucketName: "mds-bucket", threads: n, 
        progressFile: "path/to/progressFile.json"})
    • util.loadDump: Specifies the command to import data from the specified Object Storage bucket to the DB system.
    • PARURLofManifest: Specifies the PAR URL of the manifest file.
    • osBucketName: Specifies the name of the Object Storage bucket to import from. MySQL Shell uses the tenancy and user information that you define in the config file.
    • threads: Specifies the number of processing threads to use for this task. The default value is 4. For best performance, it is recommended to set this parameter to twice the number of OCPUs used by the target DB system.
    • progressFile: Specifies the path to the JSON file you want to use to record the progress of the import.
    Note

    If you are importing data that was exported from a MySQL 5.7 server (5.7.9 or higher), specify the "ignoreVersion": true option. If you do not specify this option, the import of data exported from 5.7 fails.
The data is imported into the DB system and the progress is written to the progress file.
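For a dump produced by a MySQL 5.7 server, the options object for util.loadDump can be sketched as follows; the bucket name, thread count, and file name are placeholders:

```javascript
// Sketch: loadDump options for importing a dump produced by MySQL 5.7.
// ignoreVersion: true suppresses the major-version mismatch failure.
// osBucketName, threads, and progressFile values are placeholders.
const options57 = {
  osBucketName: "mds-bucket",
  threads: 8,
  progressFile: "progress.json",
  ignoreVersion: true,
};

// Inside mysqlsh (JS mode) you would then run:
//   util.loadDump("PARURLofManifest", options57);
console.log(JSON.stringify(options57));
```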
Note

You can import data from Object Storage while the export is still uploading that data. See Simultaneously Importing and Exporting Data Using MySQL Shell.

Importing Data Using Object Storage Bucket and MySQL Shell

Use MySQL Shell to import a MySQL Shell dump from Object Storage bucket or bucket prefix to a MySQL DB system.

This task requires the following:
  • One of the following network access types to the target MySQL DB system. See Networking Setup.
    • A bridged connection using FastConnect or VPN, enabling you to run MySQL Shell locally.
    • SSH access to a compute instance with access to the MySQL DB system, enabling you to run MySQL Shell on the compute instance.
  • MySQL Shell 8.0.27, or higher.
    Note

    Exports created by MySQL Shell 8.0.27, or higher, cannot be imported by earlier versions of MySQL Shell. The latest version of MySQL Shell is recommended.
  • Access to Object Storage and an existing bucket that contains the exported files.
  • A valid CLI configuration file. If you have installed and configured the CLI in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually. See SDK and CLI Configuration File.
  • Enough storage space in the DB system for importing data.
Do the following to import a MySQL Shell dump from Object Storage to a MySQL DB system:
  1. Run MySQL Shell.
  2. Switch to the JavaScript input type, by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the endpoint of the DB system:
    \c <UserName>@<DBSystemEndpointIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • UserName: Specifies the username for the DB System.
    • DBSystemEndpointIPAddress: Specifies the IP address of the endpoint of the DB system.
  4. Run the following command to import data from an Object Storage bucket, mds-bucket, to the MySQL Database Service DB system:
    util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", threads: n})
    • util.loadDump: Specifies the command to import data from the specified Object Storage bucket to MySQL DB System.
    • bucketPrefix: (Optional) If the data was uploaded to Object Storage with a prefix, specifies that prefix in the import command.
    • osBucketName: Specifies the name of the Object Storage bucket to import from.
    • threads: Specifies the number of processing threads to use for this task. The default value is 4. For best performance, it is recommended to set this parameter to twice the number of OCPUs used by the target DB system.

    For more information on the options available to the loadDump utility, see Dump Loading Utility.

    Note

    If you are importing data that was exported from a MySQL 5.7 server (5.7.9 or higher), specify the "ignoreVersion": true option. If you do not specify this option, the import of data exported from 5.7 fails.
  5. The command generates output similar to the following:
    Loading DDL and Data from OCI ObjectStorage bucket=Shell-Bucket, prefix='dump1' using 12 threads.
    Target is MySQL 8.0.21-cloud (MySQL Database Service). Dump was produced from MySQL 8.0.21
    Checking for pre-existing objects...
    Executing common preamble SQL
    Executing DDL script for schema `world_x`
    Executing DDL script for `world_x`.`countrylanguage`
    Executing DDL script for `world_x`.`country`
    Executing DDL script for `world_x`.`countryinfo`
    Executing DDL script for `world_x`.`city`
    Executing DDL script for schema `world`
    Executing DDL script for `world`.`countrylanguage`
    Executing DDL script for `world`.`country`
    Executing DDL script for `world`.`city`
    Executing DDL script for schema `imdb`
    [Worker006] world_x@countryinfo@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker002] world@country@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker003] world_x@country@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker005] world_x@countrylanguage@@0.tsv.zst: Records: 984  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker008] world@countrylanguage@@0.tsv.zst: Records: 984  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker001] world_x@city@@0.tsv.zst: Records: 4079  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker007] world@city@@0.tsv.zst: Records: 4079  Deleted: 0  Skipped: 0  Warnings: 0
    Executing common postamble SQL                           
                                            
    7 chunks (10.84K rows, 514.22 KB) for 7 tables in 3 schemas were loaded in 5 sec 
    (avg throughput 102.84 KB/s)
    0 warnings were reported during the load.
    Note

    If you cancel the process while it is running (by pressing Ctrl+c once), all existing threads are allowed to complete, and Shell writes a progress file to Object Storage, recording the progress of the import. This enables you to pick up where you left off when you restart the import, assuming no external changes were made to the data. If you cancel the import by pressing Ctrl+c twice, the progress file is still written, and the InnoDB engine rolls back the ongoing transactions, which can take some time.
The data is imported into the DB system.
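The thread-count guidance above (import threads set to twice the target DB system's OCPU count) can be sketched as:

```javascript
// Sketch: recommended thread counts per the guidance in this topic.
// Import: twice the target DB system's OCPUs. Export: the source server's
// CPU core count. The OCPU value here is a placeholder.
function importThreads(targetOcpus) {
  return 2 * targetOcpus;
}

console.log(importThreads(8)); // 16
```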
Note

You can import data from Object Storage while the export is still uploading that data. See Simultaneously Importing and Exporting Data Using MySQL Shell.

MySQL Shell Progress File

The MySQL Shell progress file records the progress of your imports and exports and enables you to restart operations in the event of a network or service disruption.

The progress file is defined by the progressFile parameter.

The following example loads data from Object Storage and creates a progress file called progressfile.json in your home directory:
util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", threads: 4, progressFile: "~/progressfile.json"})
Note

If you specify the progressFile: parameter, but leave the value blank, progressFile: "", no progress file is written. In that case, you cannot resume the import in the event of a problem.
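Resuming an interrupted operation reuses the same progressFile value; to discard recorded progress and restart the load from the beginning, loadDump also accepts a resetProgress option. A sketch with placeholder values:

```javascript
// Sketch: resume vs. restart an interrupted load. Resuming reuses the same
// progressFile; resetProgress: true discards the recorded progress so the
// load starts over. Bucket and file names are placeholders.
const resumeOptions  = { osBucketName: "mds-bucket", progressFile: "progressfile.json" };
const restartOptions = { ...resumeOptions, resetProgress: true };

// Inside mysqlsh (JS mode) you would then run one of:
//   util.loadDump("bucketPrefix", resumeOptions);
//   util.loadDump("bucketPrefix", restartOptions);
console.log(JSON.stringify(restartOptions));
```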

Simultaneously Importing and Exporting Data Using MySQL Shell

You can load a dump while it is still being created in the Object Storage bucket.

The loadDump utility enables you to load a dump while it is still being created in the Object Storage bucket. When all uploaded chunks are processed, the command waits for more data until either the dump is marked as complete or the defined timeout passes.

The following example specifies a five-minute timeout (300 seconds):

util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", threads: n, waitDumpTimeout: 300})