The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

1.4 Upgrade

Limited support is provided for offline upgrades from Ceph Storage for Oracle Linux Release 1.0 to the current release. Additional configuration specific to your deployment may be required after the upgrade is complete.

Where an upgrade is required, it is recommended that the components are upgraded in the following order:

  1. Ceph Deploy Package

  2. Ceph Monitors

  3. Ceph OSD Daemons

  4. Ceph Metadata Servers

  5. Ceph Object Gateways

Oracle recommends that all daemons of a specific type are upgraded together to ensure that they are all on the same release, and that all of the components within a cluster are upgraded before you attempt to configure or use any new functionality in the current release.

The following instructions provide a brief outline of some of the common steps required to perform an upgrade for a storage cluster. Remember that the cluster must go offline during the upgrade.

  1. To begin the upgrade process, the Yum configuration on all systems that are part of the Ceph Storage Cluster must be updated to provide access to the appropriate yum repositories and channels as described in Section 1.2, “Enabling Access to the Ceph Packages”.
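
    Once the repository configuration is in place, a quick sanity check is to list the enabled repositories on each node. The exact repository names depend on how you enabled access in Section 1.2:

    # yum repolist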

  2. Upgrade the Ceph Deploy package on the deployment node within your environment:

    # yum upgrade ceph-deploy
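
    You can optionally confirm that the updated tool is in place before continuing:

    # ceph-deploy --version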
  3. Stop any running Ceph services on each node in the environment:

    # /etc/init.d/ceph stop mon
    # /etc/init.d/ceph stop osd
    # /etc/init.d/ceph stop mds

    Check the status of the cluster and ensure that it is not running:

    # ceph status
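
    Note that ceph status relies on the monitors being available, so it may stop responding once all of the monitors are down. As an additional check, confirm on each node that no Ceph daemon processes remain:

    # pgrep -l ceph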
  4. Use ceph-deploy to install the package updates on each node in the storage cluster:

    # ceph-deploy install ceph-node{1..4}
    Note

    The upstream documentation mentions the --release switch, which is meant to allow you to control the release that you upgrade to. This switch has no effect in an Oracle Linux environment; the packages are simply installed at the latest version available via Yum.
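
    Once the installation completes, you can optionally confirm that every node received the same package version, for example by querying the RPM database on each node:

    # rpm -q ceph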

  5. Add the following lines to the end of /etc/ceph/ceph.conf on each node:

    rbd default features = 3
    setuser match path = /var/lib/ceph/$type/$cluster-$id
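
    If you maintain the master copy of the cluster configuration on the deployment node, one possible alternative to editing every node by hand is to make the change once and push it out with ceph-deploy, assuming the copy on the deployment node is otherwise current:

    # ceph-deploy --overwrite-conf config push ceph-node{1..4}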
  6. On each node, check that the ceph user and group have ownership of the directories used for Ceph:

    # chown -R ceph:ceph /var/lib/ceph
    # chown -R ceph:ceph /etc/ceph
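
    To confirm that nothing was missed, you can search for any remaining files that are not owned by the ceph user; the command should return no output:

    # find /var/lib/ceph /etc/ceph ! -user ceph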
  7. On each Ceph monitor node, you must manually enable the systemd service for the Ceph monitor. For example:

    # systemctl enable ceph-mon@ceph-node1.service

    Start the service once you have enabled it and check its status to make sure that it is running properly:

    # systemctl start ceph-mon@ceph-node1.service
    # systemctl status ceph-mon@ceph-node1.service
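
    Once the monitor service has been enabled and started on every monitor node, you can optionally confirm that the monitors have formed a quorum:

    # ceph quorum_status --format json-pretty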
  8. From the administration node in the cluster, set the noout option to prevent the CRUSH algorithm from attempting to rebalance the cluster during the upgrade:

    # ceph osd set noout

    Also mark all OSDs as down within the cluster:

    # ceph osd down $(seq 0 1000)
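
    You can verify that the noout flag is set and review the state of the OSDs with:

    # ceph osd stat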
  9. On each node that runs the Ceph OSD daemon, enable the systemd service, restart the service and check its status to make sure that it is running properly:

    # systemctl enable ceph-osd@0.service
    # systemctl restart ceph-osd@0.service
    # systemctl status ceph-osd@0.service
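
    The example uses OSD instance 0. Repeat these commands for every OSD ID hosted on the node; one way to determine the IDs is to list the OSD data directories, which are named after the cluster and OSD ID:

    # ls /var/lib/ceph/osd/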
  10. Reboot each Ceph OSD node once the ceph-osd services have been enabled.

  11. From an admin node within the cluster, often the same as the deployment node, run the following command to set the CRUSH tunables profile:

    # ceph osd crush tunables default

    Setting the CRUSH tunables profile to default requires that any Ceph client connecting to the cluster is also up to date. Older Ceph clients built for previous releases may be unable to connect to the cluster once the tunables have been updated.
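
    You can review the tunables that are currently in effect with:

    # ceph osd crush show-tunables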

  12. Check the cluster status with the following command:

    # ceph status

    While the noout flag is set, the cluster health is reported as HEALTH_WARN. Once all of the OSDs are up again, re-enable the CRUSH algorithm's ability to rebalance the cluster by unsetting the noout option:

    # ceph osd unset noout
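
    If the cluster does not return to HEALTH_OK once rebalancing completes, you can list the warnings that are still outstanding:

    # ceph health detail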
  13. On each node running the Ceph MDS daemon, enable the systemd service and restart the service:

    # systemctl enable ceph-mds@ceph-node1.service
    # systemctl start ceph-mds@ceph-node1.service
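
    You can confirm that the metadata server has returned to service by checking the MDS map:

    # ceph mds stat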
  14. Comment out or remove the setuser match path parameter in /etc/ceph/ceph.conf on each node:

    # setuser match path = /var/lib/ceph/$type/$cluster-$id

    This parameter is only required during the upgrade. If it remains enabled, it can cause problems when the Ceph Object Gateway service, radosgw, is started.

  15. On each node running the Ceph Object Gateway service, perform the following steps to complete the upgrade:

    1. Stop any running instances of the legacy Ceph Object Gateway service:

      # /etc/init.d/ceph-radosgw stop
    2. Stop the running Apache instance and disable it, so that it does not start at boot:

      # systemctl stop httpd
      # systemctl disable httpd
    3. Edit the existing Ceph configuration for the Ceph Object Gateway by opening /etc/ceph/ceph.conf in an editor.

      An existing gateway configuration entry for a previous release should be similar to the following:

      [client.radosgw.gateway]
      host = ceph-node4
      keyring = /etc/ceph/ceph.client.radosgw.keyring
      rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
      log file = /var/log/radosgw/client.radosgw.gateway.log
      rgw print continue = false

      Modify this entry to comment out the rgw socket path and rgw print continue parameters and to add an entry for the rgw frontends parameter that defines the port that the Civetweb web server should use. For example:

      [client.radosgw.gateway]
      host = ceph-node4
      keyring = /etc/ceph/ceph.client.radosgw.keyring
      # rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
      log file = /var/log/radosgw/client.radosgw.gateway.log
      # rgw print continue = false
      rgw_frontends = civetweb port=80
      Tip

      Take note of the configuration entry name. In the example, this is client.radosgw.gateway. You use this name to specify the systemd service instance that must be enabled and run for this configuration: the instance name is the entry name without the client. prefix, in this case radosgw.gateway, and the gateway name is gateway.

    4. Enable and restart the systemd service and target for the gateway configuration. For example:

      # systemctl enable ceph-radosgw@radosgw.gateway.service
      # systemctl enable ceph-radosgw.target
      # systemctl restart ceph-radosgw@radosgw.gateway.service
      Important

      Make sure that you specify the correct gateway name for the service. This should match the name in the configuration entry. In our example, the gateway name is gateway.

    5. Check the running status of the gateway service to make sure that the gateway is running and there are no errors:

      # systemctl status ceph-radosgw@radosgw.gateway.service
    6. Check that a client is able to access the gateway and list any existing buckets. See Section 1.6.1, “Simple Ceph Object Gateway” for an example test script.
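
      As a minimal connectivity check, assuming the gateway is listening on port 80 on ceph-node4 as configured above, an anonymous request to the gateway should return an XML response confirming that Civetweb is serving requests:

      # curl http://ceph-node4:80/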