8 Configuring the Storage Appliance

This chapter describes how to configure the storage appliance, which is included in the Exalogic machine.

This chapter contains the following topics:

8.1 Prerequisites

The following are the prerequisites for configuring the storage appliance:

  • Powering on the storage appliance by pressing the switches on the storage controllers, as described in Section 3.3.3, "Powering On the Exalogic Machine"

  • Gathering information, such as the IP address, IP netmask, host name, Domain Name System (DNS) domain name, DNS server IP address, default router IP address, and password for configuring an Ethernet interface on the storage controllers

  • Running the Oracle Exalogic Configuration Utility to reconfigure IP addresses and other network parameters for the storage appliance

8.2 Getting Started

You can access the storage appliance over Ethernet via the Cisco Ethernet Management Switch.

The storage controllers are configured in an active-passive cluster, by default. The software propagates the configuration to the peer controller during cluster initialization. After the cluster is initialized, you can administer the system from either storage controller.

Tip:

Refer to the Cluster documentation in the Oracle ZFS Storage Appliance Administration Guide, available at http://docs.oracle.com/cd/E27998_01/html/E48433/index.html, for more information.

Complete the following steps:

  1. Verify that the storage appliance is powered up and on the network.

  2. Connect an Ethernet cable from your network to the NET0 port on the back panel of the controller (storage server head).

  3. Open a terminal window and use an SSH client to connect to the service processor (ILOM) of the storage controller (ssh root@<SP_IP_address>, substituting the IP address assigned to the controller's service processor). When prompted, enter the administrative password for the storage appliance that you set when running the Oracle Exalogic Configuration Utility to configure the Exalogic machine. (An example session appears after this procedure.)

    1. After login, at the command prompt, type start /SP/console.

    2. Type y to confirm that you want to start the console.

    3. Press any key to begin configuring the appliance. The shell interface configuration screen appears. NET-0 at the top of the screen should be underlined.

    4. Verify the information on the screen, and enter any values that are missing.

    5. Apply the values by pressing ESC-1 or the F1 key, or by pressing Enter after confirming the password. The final shell configuration screen appears, confirming that your appliance is ready for further configuration using the browser user interface (BUI).

  4. Configure the remaining system parameters through a browser running on any client on the same network as the initial interface. The management software is designed to be fully featured and functional on the following supported web browsers: Firefox 2.x and 3.x, Internet Explorer 7, Internet Explorer 8, Safari 3.1 or later, and WebKit 525.13 or later.

  5. Direct your browser to the storage system using either the IP address or host name you assigned to the NET0 port as follows:

    https://ipaddress:215

    or

    https://hostname:215

    The login screen appears.

  6. Type root in the Username field, type the administrative password that you entered in the appliance shell interface, and then press Enter. The Welcome screen appears.

  7. To begin configuring the system, click Start on the Welcome screen. You are guided through the Initial Configuration of the remaining network interfaces, DNS, time settings, directory service, and storage.
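
The following is a minimal sketch of the SSH session described in step 3, shown here for illustration only; the address 192.0.2.10 is a placeholder for the service processor IP address in your environment, and the exact prompts and messages may differ by ILOM version.

    # Connect to the service processor of a storage controller (placeholder address)
    $ ssh root@192.0.2.10
    Password:

    # At the service processor prompt, start the appliance console and confirm with 'y'
    -> start /SP/console
    Are you sure you want to start /SP/console (y/n)? y

From this point, the shell interface configuration screen described above is used to verify or enter the network values, and ESC-1 or F1 applies them.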

8.3 Storage Appliance Overview

This section introduces projects and shares.

8.3.1 Introduction to Projects

All file systems and LUNs are grouped into projects. A project defines a common administrative control point for managing shares. All shares within a project can share common settings, and quotas can be enforced at the project level in addition to the share level. Projects can also be used solely for grouping logically related shares together, so their common attributes (such as accumulated space) can be accessed from a single point.

By default, the appliance creates node-level projects based on the number of compute nodes in your Exalogic machine when a storage pool is first configured. For example, for a compute node with the host name abc, the default project abc_1 is created. You can create all shares within this default project. However, Oracle recommends that you create additional projects for organizational purposes.

8.3.2 Introduction to Shares

Shares are file systems and LUNs that are exported over supported data protocols to clients of the appliance. File systems export a file-based hierarchy and, in the case of Exalogic machines, can be accessed over NFS over IPoIB. The project/share tuple is a unique identifier for a share within a pool. Different projects can contain shares with the same name, but share names must be unique within a project. A single project can contain both file systems and LUNs, and they share the same namespace.

For a list of default shares created in the Exalogic machine, see Default Storage Configuration.
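
As an illustration of access over NFS, a compute node can mount one of these shares with the standard Linux NFS client. This is a sketch only: the storage host name exalogic-storage, the project name NODE_1, and the local mount point are placeholders to adapt to your environment.

    # On a compute node: mount the node-level 'general' share over NFS (paths follow the
    # /export/<project>/<share> convention; substitute names from your own configuration)
    mkdir -p /mnt/node_general
    mount -t nfs -o vers=3 exalogic-storage:/export/NODE_1/general /mnt/node_general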

8.4 Configuration Overview

The storage appliance in the Exalogic machine is configured at different stages of the Exalogic machine setup and enterprise deployment.

The following are the configuration stages:

8.4.1 Initial Configuration

The initial configuration involves networking configuration for the NET0 interface, configuration of ILOM IP addresses, launch of service processor console, launch of several client network services, and the layout of the storage pool for standalone operation. When completed, the appliance in the Exalogic machine is ready for use, and it will have default shares configured for Exalogic compute nodes to access.

Note:

When you run the Oracle Exalogic Configuration Utility set of tools and scripts, the initial configuration for the storage appliance is completed.

For more information, see the Shares and Configuration sections of the Oracle ZFS Storage Appliance Administration Guide (http://docs.oracle.com/cd/E27998_01/html/E48433/toc.html). Alternatively, see the Oracle Fusion Middleware Exalogic Enterprise Deployment Guide for the recommended storage configuration in the Oracle Exalogic environment.

8.4.2 Connecting Storage Heads to the Management Network and Accessing the Web Interface

Figure 8-1 shows the physical network connections for the storage appliance.

Figure 8-1 Network Ports on the Storage Appliance

Description of Figure 8-1 follows
Description of "Figure 8-1 Network Ports on the Storage Appliance"

By default, the NET0 (igb0), NET1 (igb1), and NET2 (igb2) ports on the storage heads are connected to the Cisco management switch, which is included in the Exalogic machine. The igb0 and igb1 interfaces are reserved for administrative access, such as access via a web browser or via command line. This configuration ensures that the storage heads are always reachable, independent of the load on the network data interfaces, and independent of which head is active. One end of a free hanging cable is connected to NET3 (igb3). You can use the other end of this cable to connect to your data center network directly. Typically, for high availability purposes, this cable is connected to a data center switch other than the one that Exalogic's Cisco Management Switch is connected to.

To view the default network configuration of the storage appliance included in your Exalogic machine, do the following:

  1. In a web browser, enter the IP address or host name you assigned to the NET0 port of either storage head as follows:

    https://ipaddress:215

    or

    https://hostname:215

    The login screen appears.

  2. Type root in the Username field, type the administrative password that you entered in the appliance shell interface, and then press Enter. The Welcome screen is displayed.

  3. Click the Configuration tab, and click NETWORK. The default networking configuration is displayed, as shown in Figure 8-2.

    Note:

    The interface names and IP addresses shown on the screens in this chapter are examples only. You must verify the interface names in your environment and use them accordingly.

    Figure 8-2 Network Configuration Screen

    Description of Figure 8-2 follows
    Description of "Figure 8-2 Network Configuration Screen"

    The Interfaces section shows the configured network interfaces. The green icon indicates that an interface is active on the storage head whose IP address or host name is used to access the administration console. The blue icon indicates that an interface is not active on the storage head. To view or edit the network settings for an interface, click the pencil icon. The interface settings are displayed in a screen, as in Figure 8-3.

    Figure 8-3 Network Interface Settings

    Description of Figure 8-3 follows
    Description of "Figure 8-3 Network Interface Settings"

    Note:

    The interface names and IP addresses shown on the screens in this chapter are examples only. You must verify the interface names in your environment and use them accordingly.

8.4.3 Cluster Network Configuration

The cluster is set up in an active-passive configuration. All resources, data interface links, and the storage pool are owned by the active storage head. If the active head fails, all resources (except those locked to the active head) are taken over by the passive storage head.

In the example configuration for an active head, igb0 is used as the administrative network interface for the active storage head, such as storagenode1. The lock symbol indicates that igb0 is locked to this storage head. To access this active storage head in a browser, use one of the following URLs, with either the host name or the IP address:

https://storagenode1:215

or

https://<IP_storagenode1>:215

In the example configuration for a passive head, igb1 is used as the administrative network interface for the passive storage head, such as storagenode2. The lock symbol indicates that igb1 is locked to this storage head. To access this passive storage head in a browser, use one of the following URLs, with either the host name or the IP address:

https://storagenode2:215

or

https://<IP_storagenode2>:215
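
As a quick check from an administration host, you can confirm that the BUI of each storage head responds before logging in. This is a sketch using standard curl options; the -k flag skips certificate verification, which may be needed if the appliance presents a self-signed certificate, and the host names are the examples used above.

    # Expect an HTTP status code (for example, 200 or a redirect) from each storage head
    curl -k -s -o /dev/null -w "%{http_code}\n" https://storagenode1:215/
    curl -k -s -o /dev/null -w "%{http_code}\n" https://storagenode2:215/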

Note:

For more information about network configuration for the storage appliance, see the "Network" topic in the Oracle ZFS Storage Appliance Administration Guide.

8.4.4 Network Configuration Options

You can choose any of the following network configuration options for the storage appliance, based on your specific requirements:

8.4.4.1 Option 1: ASR Support and Separate Paths for Management and Disaster Recovery

In this default configuration, the igb0 port on your active storage head (head 1) is used, and the management option is enabled. The igb0 port on your stand-by storage head (head 2) is not used. The igb1 port on your stand-by storage head (head 2) is used, and the management option is disabled. The igb2 and igb3 ports are bonded with IP Multipathing (IPMP), and the management option is disabled on both igb2 and igb3.

Tip:

Administrators should remember to use two different management URLs for the storage heads.

This default configuration option offers the following benefits:

  • Supports Automated Service Request (ASR) for the storage appliance included in the Exalogic machine, using ports igb0 and igb1

  • Supports disaster recovery for the Exalogic machine, using ports igb2 and igb3

  • Supports the Exalogic Configuration Utility (used to reconfigure the Exalogic machine based on your specific requirements), using ports igb0 and igb1

  • Separates the disaster recovery path from the management path

Note:

Ensure that the free hanging cable from the igb3 port is connected to your data center network switch. Typically, for high availability purposes, this cable is connected to a data center switch other than the one that Exalogic's Cisco Management Switch is connected to.

The bonded interface is a new interface, such as dr-repl-interface, with igb2 and igb3 configured as an IPMP group. For example, the network settings of the dr-repl-interface are shown in Figure 8-4.

Figure 8-4 igb2 and igb3 in an IPMP Group

Description of Figure 8-4 follows
Description of "Figure 8-4 igb2 and igb3 in an IPMP Group"

Note:

The interface names and IP addresses shown on the screens in this chapter are examples only. You must verify the interface names in your environment and use them accordingly.

In the Properties section, if you select the Allow Administration option, management is enabled on the interface. To create an IPMP Group with two interfaces, such as igb2 and igb3, you must click the + icon (next to Interfaces) on the Network Configuration Screen. The Network Interface screen is displayed, as shown in Figure 8-5.

Figure 8-5 Creating a New IPMP Group Interface

Description of Figure 8-5 follows
Description of "Figure 8-5 Creating a New IPMP Group Interface"

Enter a name for the new interface. In the Properties section, select the Enable Interface option. Select the IP MultiPathing Group option to configure two interfaces, such as igb2 and igb3, in an IPMP group.

8.4.4.2 Option 2: ASR Support and Shared Path for Management and Disaster Recovery, with Single Management URL

In this custom configuration, the igb0 port on your active storage head (head 1) is used, and the management option is enabled. The igb0 port on your stand-by storage head (head 2) is not used. The igb1 port on your stand-by storage head (head 2) is used, and the management option is disabled. The igb2 and igb3 ports are bonded with IP Multipathing (IPMP), and the management option is enabled on both igb2 and igb3.

This configuration option offers the following benefits:

  • Supports Automated Service Request (ASR) for the storage appliance included in the Exalogic machine, using ports igb0 and igb1

  • Supports disaster recovery for the Exalogic machine, using ports igb2 and igb3

  • Supports the Exalogic Configuration Utility (used to reconfigure the Exalogic machine based on your specific requirements), using ports igb0 and igb1

  • Provides single management URL for both storage heads, using ports igb2 and igb3

Note:

This option does not separate the management path from the disaster recovery path.

To configure this option, complete the following steps:

  1. Ensure that the physical connections are correct, as shown in Figure 8-1. Ensure that the free hanging cable from the igb3 port is connected to your data center network switch.

  2. In a web browser, enter the IP address or host name you assigned to the NET0 port of either storage head as follows:

    https://ipaddress:215

    or

    https://hostname:215

    The login screen appears.

  3. Type root in the Username field, type the administrative password that you entered in the appliance shell interface, and then press Enter. The Welcome screen is displayed.

  4. Click the Configuration tab, and click NETWORK. The default networking configuration is displayed.

  5. On the network configuration screen (Figure 8-2), click the pencil symbol next to the IPMP interface, such as dr-repl-interface (the bonded interface of igb2 and igb3). The Network Interface screen for dr-repl-interface is displayed, as in Figure 8-6.

    Figure 8-6 IPMP Network Interface Settings

    Description of Figure 8-6 follows
    Description of "Figure 8-6 IPMP Network Interface Settings"

    Note:

    The interface names and IP addresses shown on the screens in this chapter are examples only. You must verify the interface names in your environment and use them accordingly.

  6. Select the Allow Administration option to enable management traffic on both igb2 and igb3 interfaces.

  7. Click APPLY.

8.4.4.3 Option 3: ASR Support and No Disaster Recovery, But with Single Management URL

In this custom configuration, the igb0 port on your active storage head (head 1) is used, and the management option is enabled. The igb0 port on your stand-by storage head (head 2) is not used. The igb1 port on your stand-by storage head (head 2) is used, and the management option is disabled. The igb2 port uses a virtual IP, and the management option is enabled. The igb3 port is not used.

This configuration option offers the following benefits:

  • Supports Automated Service Request (ASR) for the storage appliance included in the Exalogic machine, using ports igb0 and igb1

  • Supports the Exalogic Configuration Utility (used to reconfigure the Exalogic machine based on your specific requirements), using ports igb0 and igb1

  • Provides single management URL for both storage heads, using the port igb2

Note:

This option does not offer disaster recovery support. When you use this configuration option, you may connect the free hanging cable from igb3 to the Cisco Management switch.

To configure this option, complete the following steps:

  1. Ensure that the physical connections are correct, as shown in Figure 8-1.

  2. In a web browser, enter the IP address or host name you assigned to the NET0 port of either storage head as follows:

    https://ipaddress:215

    or

    https://hostname:215

    The login screen appears.

  3. Type root in the Username field, type the administrative password that you entered in the appliance shell interface, and then press Enter. The Welcome screen is displayed.

  4. Click the Configuration tab, and click NETWORK. The default networking configuration is displayed.

  5. On the network configuration screen (Figure 8-2), click the delete symbol next to the IPMP interface, such as dr-repl-interface (the bonded interface of igb2 and igb3). Delete this IPMP interface.

  6. On the network configuration screen (Figure 8-2), click the pencil symbol next to the igb3 interface. The Network Interface screen for igb3 is displayed. Deselect the Enable Interface option to disable the interface, which is enabled by default.

  7. Click APPLY.

8.4.5 Default Storage Configuration

By default, a single storage pool is configured. Active-passive clustering for the server heads is configured. Data is mirrored, which yields a highly reliable and high-performing system.

The default storage configuration is done at the time of manufacturing, and it includes the following shares:

  • Two exclusive NFS shares for each of the Exalogic compute nodes - one for crash dumps, and another for general purposes

    In this scenario, you can implement access control for these shares, based on your requirements.

  • Two common NFS shares to be accessed by all compute nodes - one for patches, and another for general purposes

Table 8-1 Default Configuration of the Storage Appliance

Storage pool

  • exalogic

Projects

  • Projects at the compute node level: NODE_1 to NODE_N, where N represents the number of compute nodes in your Exalogic machine rack configuration.

  • Common project: common

Shares

  • NODE_SHARES (shares at the compute node level): dumps and general

  • COMMON_SHARES (shares common to all compute nodes): common/patches, common/general, and common/images


Note:

This table represents the default configuration of the storage appliance before the Exalogic machine rack configuration is modified at the customer's site. Oracle Exalogic Configuration Utility does not alter this configuration.
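
From any compute node, you can list the exported default shares with standard NFS client tools and compare them with Table 8-1. This is a sketch only; exalogic-storage is a placeholder for the host name or IP address of the storage appliance.

    # List the file systems exported by the storage appliance
    showmount -e exalogic-storage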

8.4.6 Custom Configuration

You can create and configure a number of projects and shares on the storage appliance to meet your specific storage requirements in the enterprise.

You can implement custom configuration, such as the following:

  • Custom projects, such as Dept_1, Dept_2.

  • Custom shares, such as jmslogs, jtalogs.

  • Creation and administration of users.

  • Access control for custom shares.

Note:

For information about the recommended directory structure and shares, see the Oracle Fusion Middleware Exalogic Enterprise Deployment Guide.

8.5 Creating Custom Projects

Shares are grouped together as Projects. For example, you can create a project for Dept_1. Dept_1 will contain department-level shares.

To create the Dept_1 project, do the following:

  1. In the Browser User Interface (BUI), click the Shares tab.

    The shares page is displayed.

  2. Click the Projects panel.

  3. Click the + button above the list of projects in the project panel.

  4. Enter a name for the project, such as Dept_1. The new project Dept_1 is listed on the Project Panel, which is on the left navigation pane.

  5. Click the General tab on the Dept_1 project page to set project properties. This section of the BUI controls overall settings for the project that are independent of any particular protocol and are not related to access control or snapshots. While the CLI groups all properties in a single list, this section describes the behavior of the properties in both contexts.

    The project settings page contains three sections: Space Usage (Users and Groups), Inherited Properties, and Default Settings (File systems and LUNs). Table 8-2 describes the project settings.

    Table 8-2 Project Settings

    Section and Setting | Description

    Space Usage

    Space within a storage pool is shared between all shares. File systems can grow or shrink dynamically as needed, though it is also possible to enforce space restrictions on a per-share basis.

    • Quota - Sets a maximum limit on the total amount of space consumed by all file systems and LUNs within the project.

    • Reservation - Guarantees a minimum amount of space for use across all file systems and LUNs within the project.

    Inherited Properties

    Standard properties that can be inherited by shares within the project. The behavior of these properties is identical to their behavior at the share level.

    • Mountpoint - The location where the file system is mounted. This property is only valid for file systems.

      Oracle recommends that you specify /export/<project_name> as the default mountpoint. Using this convention consistently groups all shares under the relevant project and prevents multiple shares from using the same mount point. Note that the same storage appliance is used by multiple departments (15 in the case of an Exalogic machine full rack configuration). The departments will have a similar share structure, such as /export/dept_1/<share1>, /export/dept_2/<share1>, and so on.

    • Read only - Controls whether the file system contents are read only. This property is only valid for file systems.

    • Update access time on read - Controls whether the access time for files is updated on read. This property is only valid for file systems.

    • Non-blocking mandatory locking - Controls whether CIFS locking semantics are enforced over POSIX semantics. This property is only valid for file systems.

    • Data deduplication - Controls whether duplicate copies of data are eliminated.

    • Data compression - Controls whether data is compressed before being written to disk.

    • Checksum - Controls the checksum used for data blocks.

    • Cache device usage - Controls whether cache devices are used for the share.

    • Synchronous write bias - Controls the behavior when servicing synchronous writes. By default, the system optimizes synchronous writes for latency, which leverages the log devices to provide fast response times.

    • Database record size - Controls the block size used by the file system. This property is only valid for file systems.

      By default, file systems will use a block size just large enough to hold the file, or 128K for large files. This means that any file over 128K in size will be using 128K blocks. If an application then writes to the file in small chunks, it will necessitate reading and writing out an entire 128K block, even if the amount of data being written is comparatively small. The property can be set to any power of 2 from 512 to 128K.

    • Additional replication - Controls the number of copies stored of each block, above and beyond any redundancy of the storage pool.

    • Virus scan - Controls whether this file system is scanned for viruses. This property is only valid for file systems.

    • Prevent destruction - When set, the share or project cannot be destroyed. This includes destroying a share through dependent clones, destroying a share within a project, or destroying a replication package.

    • Restrict ownership change - By default, this check box is selected and the ownership of files can only be changed by a root user. This property can be removed on a per-filesystem or per-project basis by deselecting this check box. When deselected, file ownership can be changed by the owner of the file or directory.

    Default Settings

    Custom settings for file systems, to be used as default, include the following:

    • User - User that is the current owner of the directory.

    • Group - Group that is the current owner of the directory.

    • Permissions - Permissions include Read (R), Write (W), or Execute (X).

    Custom settings for LUNs, to be used as default, include the following:

    • Volume Size - Controls the size of the LUN. By default, LUNs reserve enough space to completely fill the volume.

    • Thin provisioned - Controls whether space is reserved for the volume. This property is only valid for LUNs.

      By default, a LUN reserves exactly enough space to completely fill the volume. This ensures that clients will not get out-of-space errors at inopportune times. This property allows the volume size to exceed the amount of available space. When set, the LUN will consume only the space that has been written to the LUN. While this allows for thin provisioning of LUNs, most file systems do not expect to get "out of space" errors from underlying devices, and if the share runs out of space, it may cause instability or corruption on clients, or both.

    • Volume block size - The native block size for LUNs. This can be any power of 2 from 512 bytes to 128K, and the default is 8K.


  6. After entering your choices, click Apply.
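
If you prefer the appliance command-line interface (CLI) to the BUI, the project can also be created from the shares context of the CLI. The following is a rough sketch only: the command names, prompts, and property values shown are assumptions to verify against the Shares section of the Oracle ZFS Storage Appliance Administration Guide.

    storagenode1:> shares
    storagenode1:shares> project Dept_1
    storagenode1:shares Dept_1 (uncommitted)> set mountpoint=/export/Dept_1
    storagenode1:shares Dept_1 (uncommitted)> set quota=100G
    storagenode1:shares Dept_1 (uncommitted)> commit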

8.6 Creating Custom Shares

Shares are file systems and LUNs that are exported over supported data protocols to compute nodes. File systems export a file-based hierarchy and can be accessed over NFS over IPoIB in Exalogic machines.

To create a custom share, such as domain_home under the Dept_1 project, do the following:

  1. In the Browser User Interface (BUI), click the Shares tab.

    The shares page is displayed.

  2. Click the + button next to Filesystems to add a file system. The Create Filesystem screen is displayed.

    Figure 8-7 Create Filesystem

    Description of Figure 8-7 follows
    Description of "Figure 8-7 Create Filesystem"

  3. In the Create Filesystem screen, choose the target project from the Project pull-down menu. For example, choose Dept_1.

  4. In the Name field, enter a name for the share. For example, enter domain_home.

  5. From the Data migration source pull-down menu, choose None.

  6. Select the Permissions option. Table 8-3 lists the access types and permissions.

    Table 8-3 File System Access Types and Permissions

    Access Type | Description | Permissions to Grant

    User

    User that is the current owner of the directory.

    The following permissions can be granted:

    • R - Read - Permission to list the contents of the directory.

    • W - Write - Permission to create files in the directory.

    • X - Execute - Permission to look up entries in the directory. If users have execute permissions but not read permissions, they can access files explicitly by name but not list the contents of the directory.

    Group

    Group that is the current group of the directory.

    Other

    All other accesses.


    You can use this feature to control access to the file system, based on the access types (users and groups) in Dept_1.

  7. You can either inherit a mountpoint by selecting the Inherit mountpoint option or set a mountpoint.

    Note:

    The mount point must be under /export. The mount point for one share cannot conflict with that of another share. In addition, it cannot conflict with a share on the cluster peer, to allow for proper failover.

    When inheriting the mountpoint property, the current dataset name is appended to the project's mountpoint setting, joined with a slash ('/'). For example, if the Dept_1 project has the mountpoint setting /export/Dept_1, then the domain_home share inherits the mountpoint /export/Dept_1/domain_home.

  8. To enforce UTF-8 encoding for all files and directories in the file system, select the Reject non UTF-8 option. When set, any attempts to create a file or directory with an invalid UTF-8 encoding will fail.

    Note:

    This option can be set only when you are creating the file system.

  9. From the Case sensitivity pull-down menu, select Mixed, Insensitive, or Sensitive to control whether directory lookups are case-sensitive or case-insensitive.

    Table 8-4 Case Sensitivity Values

    BUI Value | Description

    Mixed

    Case sensitivity depends on the protocol being used. For NFS, FTP, and HTTP, lookups are case-sensitive. This is the default, and it prioritizes conformance of the various protocols over cross-protocol consistency.

    Insensitive

    All lookups are case-insensitive, even over protocols (such as NFS) that are traditionally case-sensitive. This setting should only be used where CIFS is the primary protocol and alternative protocols are considered second-class, where conformance to expected standards is not an issue.

    Sensitive

    All lookups are case-sensitive. In general, do not use this setting.


    Note:

    This option can be set only when you are creating the file system.

  10. From the Normalization pull-down menu, select None, Form C, Form D, Form KC, or Form KD to control what Unicode normalization, if any, is performed on file names and directories. Unicode supports the ability to have the same logical name represented by different encodings. Without normalization, the on-disk name stored will be different, and lookups using one of the alternative forms will fail depending on how the file was created and how it is accessed. If this property is set to anything other than None (the default), the Reject non UTF-8 property must also be selected.

    Table 8-5 Normalization Settings

    BUI Value | Description

    None

    No normalization is done.

    Form C

    Normalization Form Canonical Composition (NFC) - Characters are decomposed and then recomposed by canonical equivalence.

    Form D

    Normalization Form Canonical Decomposition (NFD) - Characters are decomposed by canonical equivalence.

    Form KC

    Normalization Form Compatibility Composition (NFKC) - Characters are decomposed by compatibility equivalence, then recomposed by canonical equivalence.

    Form KD

    Normalization Form Compatibility Decomposition (NFKD) - Characters are decomposed by compatibility equivalence.


    Note:

    This option can be set only when you are creating the file system.

  11. After entering the values, click Apply.
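
After the share is created, a compute node can mount it like any other NFS export. The following sketch assumes that the Inherit mountpoint option was used and that the Dept_1 project has the mountpoint /export/Dept_1; the storage host name exalogic-storage and the local mount point are placeholders.

    # On a compute node: mount the new domain_home share and verify it
    mkdir -p /u01/Dept_1/domain_home
    mount -t nfs -o vers=3 exalogic-storage:/export/Dept_1/domain_home /u01/Dept_1/domain_home
    df -h /u01/Dept_1/domain_home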

8.7 Using the Phone Home Service to Manage the Storage Appliance

You can use the PhoneHome service screen in the BUI to manage the appliance registration as well as the PhoneHome remote support service. Registering the storage appliance connects your appliance with the inventory portal of Oracle, through which you can manage your Sun gear. Registration is also a prerequisite for using the PhoneHome service.

The PhoneHome service communicates with Oracle support to provide:

  • Fault reporting - the system reports active problems to Oracle for automated service response. Depending on the nature of the fault, a support case may be opened. Details of these events can be viewed in Problems.

  • Heartbeats - daily heartbeat messages are sent to Oracle to indicate that the system is up and running. Oracle support may notify the technical contact for an account when one of the activated systems fails to send a heartbeat for too long.

  • System configuration - periodic messages are sent to Oracle describing current software and hardware versions and configuration as well as storage configuration. No user data or metadata is transmitted in these messages.

Note:

You need a valid Oracle Single Sign-On account user name and password to use the fault reporting and heartbeat features of the Phone Home service. Go to http://support.oracle.com and click Register to create your account.

8.7.1 Registering Your Storage Appliance

To register the appliance for the first time, you must provide an Oracle Single Sign-On account and specify one of that account's inventory teams into which to register the appliance.

Using the BUI:

  1. Enter your Oracle Single Sign-On user name and password. A privacy statement will be displayed for your review. It can be viewed at any time later in both the BUI and CLI.

  2. The appliance will validate the credentials and allow you to choose which of your inventory teams to register with. The default team for each account is the same as the account user name, prefixed with a '$'.

  3. Commit your changes.

    Note:

    You can see a log of PhoneHome events in Maintenance->Logs->PhoneHome.

    If the phone home service is enabled before a valid Oracle Single Sign-On account has been entered, it will appear in the maintenance state. You must enter a valid Oracle Single Sign-On account to use the phone home service.