The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Limited support for offline upgrades from Ceph Storage for Oracle Linux Release 1.0 to the current release is provided. Some additional configuration specific to your deployment may be required when you have completed the upgrade.
Where an upgrade is required, it is recommended that components are upgraded in the following order:
Ceph Deploy Package
Ceph Monitors
Ceph OSD Daemons
Ceph Metadata Servers
Ceph Object Gateways
Oracle recommends that all daemons of a specific type are upgraded together to ensure that they are all on the same release, and that all of the components within a cluster are upgraded before you attempt to configure or use any new functionality in the current release.
The following instructions provide a brief outline of some of the common steps required to perform an upgrade for a storage cluster. Remember that the cluster must go offline during the upgrade.
To begin the upgrade process, the Yum configuration on all systems that are part of the Ceph Storage Cluster must be updated to provide access to the appropriate yum repositories and channels as described in Section 1.2, “Enabling Access to the Ceph Packages”.
Upgrade the Ceph Deploy package on the deployment node within your environment:
# yum upgrade ceph-deploy
Stop any running Ceph services on each of the different nodes within the environment:
# /etc/init.d/ceph stop mon
# /etc/init.d/ceph stop osd
# /etc/init.d/ceph stop mds
Check the status of the cluster and ensure that it is not running:
# ceph status
Use ceph-deploy to install the package updates on each node in the storage cluster:
# ceph-deploy install ceph-node{1..4}
Note
The upstream documentation mentions the --release switch, which is meant to allow you to control which release you are upgrading to; however, this switch has no effect when used in an Oracle Linux environment and the packages are simply installed at the latest version available via Yum.
Add the following lines to the end of /etc/ceph/ceph.conf on each node:
rbd default features = 3
setuser match path = /var/lib/ceph/$type/$cluster-$id
On each node, check that the ceph user and group have ownership of the directories used for Ceph:
# chown -R ceph:ceph /var/lib/ceph
# chown -R ceph:ceph /etc/ceph
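Because the ownership change must be made on every node, it can be scripted from the administration node. The following is a minimal sketch, assuming the hypothetical node names ceph-node1 through ceph-node4 used in this chapter's examples and passwordless root ssh; adjust both for your cluster.

```shell
# Hypothetical node list; substitute the names of your own cluster nodes.
nodes="ceph-node1 ceph-node2 ceph-node3 ceph-node4"

for node in $nodes; do
    echo "Fixing Ceph directory ownership on $node"
    # Uncomment on a real cluster (assumes passwordless root ssh):
    # ssh root@"$node" 'chown -R ceph:ceph /var/lib/ceph /etc/ceph'
done
```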
On each Ceph monitor node, you must manually enable the systemd service for the Ceph monitor service. For example:
# systemctl enable ceph-mon@ceph-node1.service
Start the service once you have enabled it and check its status to make sure that it is running properly:
# systemctl start ceph-mon@ceph-node1.service
# systemctl status ceph-mon@ceph-node1.service
From the administration node in the cluster, set the noout option to prevent the CRUSH algorithm from attempting to rebalance the cluster during the upgrade:
# ceph osd set noout
Also mark all OSDs as down within the cluster:
# ceph osd down $(seq 0 1000)
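The fixed range of 0 to 1000 is a generous upper bound; ids that do not exist are simply reported as such. A dry-run sketch of the same loop, assuming a hypothetical count of four OSDs; on a real cluster you would derive the count from ceph osd ls and remove the echo:

```shell
# Hypothetical OSD count; on a real cluster use: ceph osd ls | wc -l
osd_count=4

# Dry run: print the 'ceph osd down' command for each OSD id.
# Remove 'echo' to actually execute the commands on the admin node.
for id in $(seq 0 $((osd_count - 1))); do
    echo ceph osd down "$id"
done
```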
On each node that runs the Ceph OSD daemon, enable the systemd service, restart the service and check its status to make sure that it is running properly:
# systemctl enable ceph-osd@0.service
# systemctl restart ceph-osd@0.service
# systemctl status ceph-osd@0.service
Reboot each Ceph OSD node once the ceph-osd services have been enabled.
From an admin node within the cluster, often the same as the deployment node, run the following command to tune the OSD instances:
# ceph osd crush tunables default
Updating the OSD tunables profile to the default setting requires that any Ceph client connecting to the cluster is up to date as well. Older Ceph clients built for previous releases may be unable to connect to a cluster that uses the updated tunables profile.
Check the cluster status with the following command:
# ceph status
If the health of the cluster is set to HEALTH_OK, re-enable the CRUSH algorithm's ability to balance the cluster by unsetting the noout option:
# ceph osd unset noout
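Rather than polling ceph status by hand, the wait for a healthy cluster can be scripted. A sketch, written as a function whose health command is passed in as arguments so that it can be tried without a live cluster; on the admin node you would invoke it as wait_until_ok ceph health:

```shell
# Poll the given command until its output contains HEALTH_OK.
wait_until_ok() {
    until "$@" | grep -q HEALTH_OK; do
        sleep 5
    done
}

# On a real admin node (assumes 'ceph health' reports HEALTH_OK when done):
# wait_until_ok ceph health && ceph osd unset noout
```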
On each node running the Ceph MDS daemon, enable the systemd service and restart the service:
# systemctl enable ceph-mds@ceph-node1.service
# systemctl start ceph-mds@ceph-node1.service
Comment out or remove the setuser match path parameter in /etc/ceph/ceph.conf on each node:
# setuser match path = /var/lib/ceph/$type/$cluster-$id
This parameter is only required during the upgrade. If it remains enabled, it can cause problems when the Ceph Object Gateway service, radosgw, is started.
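Commenting the parameter out on every node is easy to automate. A sketch, written as a function that takes the configuration file path as an argument so that it can be tried on a copy; the real target is /etc/ceph/ceph.conf:

```shell
# Comment out the 'setuser match path' line in the given ceph.conf file.
comment_setuser() {
    sed -i 's/^[[:space:]]*setuser match path/# setuser match path/' "$1"
}

# On each node after the upgrade:
# comment_setuser /etc/ceph/ceph.conf
```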
On each node running the Ceph Object Gateway service, perform the following steps to complete the upgrade:
Stop any running instances of the legacy Ceph Object Gateway service:
# /etc/init.d/ceph-radosgw stop
Stop the running Apache instance and disable it, so that it does not start at boot:
# systemctl stop httpd
# systemctl disable httpd
Edit the existing Ceph configuration for the Ceph Object Gateway by opening /etc/ceph/ceph.conf in an editor.
An existing gateway configuration entry for a previous release should be similar to the following:
[client.radosgw.gateway]
host = ceph-node4
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw print continue = false
Modify this entry to comment out the rgw socket path and rgw print continue parameters and to add an entry for the rgw frontends parameter that defines the port that the Civetweb web server should use. For example:
[client.radosgw.gateway]
host = ceph-node4
keyring = /etc/ceph/ceph.client.radosgw.keyring
# rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
# rgw print continue = false
rgw_frontends = civetweb port=80
Tip
Take note of the configuration entry name. In the example, this is client.radosgw.gateway. You use this name to specify the appropriate systemd service that should be enabled and run for this configuration. In this case, the gateway name is gateway.
Enable and restart the systemd service and target for the gateway configuration. For example:
# systemctl enable ceph-radosgw@radosgw.gateway.service
# systemctl enable ceph-radosgw.target
# systemctl restart ceph-radosgw@radosgw.gateway.service
Important
Make sure that you specify the correct gateway name for the service. This should match the name in the configuration. In our example, the gateway name is gateway.
Check the running status of the gateway service to make sure that the gateway is running and there are no errors:
# systemctl status ceph-radosgw@radosgw.gateway.service
Check that a client is able to access the gateway and list any existing buckets. See Section 1.6.1, “Simple Ceph Object Gateway” for an example test script.