The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
A multisite Ceph Object Gateway configuration can be deployed to achieve synchronization between different zones within a zonegroup. The multisite configuration replaces the federated Ceph Object Gateway configuration described in previous releases of Ceph.
Oracle has tested a multisite configuration consisting of a single zonegroup containing multiple zones distributed across separate storage clusters. No sync agent is configured to mirror data changes between gateways, which allows for a simpler active-active configuration. All metadata operations, such as the creation of new users, must be made via the master zone; however, data operations, such as the creation of buckets and objects, can be handled by any zone in the deployment.
The following configuration steps describe a basic multisite configuration consisting of two Ceph Storage Clusters in a single zone group containing three separate zones that actively sync data between them. The example setup is deployed on three servers with local storage available. It is assumed that the systems do not have an existing Ceph configuration and that they have access to the Ceph packages and their dependencies, as described in Section 1.2, “Enabling Access to the Ceph Packages”.
The following naming conventions are used in the example configuration:
Realm: gold
Master Zonegroup: us
Master Zone: us-east-1
Secondary Zone: us-east-2
Secondary Zone: us-west
The zones us-east-1 and us-east-2 are part of the same Ceph Storage Cluster. The zone us-west is installed on a second Ceph Storage Cluster.
Install the ceph-deploy tool on one of the systems that will be part of the first cluster:
# yum install ceph-deploy
This system is referred to as the 'first cluster deployment node' throughout the rest of these instructions.
Create a clean working Ceph configuration directory for the storage cluster and change to this directory, for example:
# rm -rf /var/mydom_ceph
# mkdir /var/mydom_ceph
# cd /var/mydom_ceph
Clear the systems of any pre-existing Ceph configuration information or data:
# ceph-deploy purge ceph-node1 ceph-node2
# ceph-deploy purgedata ceph-node1 ceph-node2
Replace ceph-node1 and ceph-node2 with the hostnames of the systems taking part in the cluster.
Deploy the Ceph cluster configuration:
# ceph-deploy new ceph-node1 ceph-node2
Replace ceph-node1 and ceph-node2 with the hostnames of the systems taking part in the cluster.
Update the configuration template with required configuration variables:
# echo "osd pool default size = 2" >> ceph.conf
# echo "rbd default features = 3" >> ceph.conf
Install the Ceph cluster packages on the nodes:
# ceph-deploy install ceph-node1 ceph-node2
Replace ceph-node1 and ceph-node2 with the hostnames of the systems taking part in the cluster.
Create a cluster monitor on one of the nodes:
# ceph-deploy mon create-initial
# ceph-deploy mon create ceph-node1
# ceph-deploy gatherkeys ceph-node1
Replace ceph-node1 with the hostname of the node that you wish to designate as a cluster monitor.
Prepare an available disk on each node to function as an Object Storage Device (OSD):
# ceph-deploy osd create --zap-disk --fs-type xfs ceph-node1:sdb
# ceph-deploy osd create --zap-disk --fs-type xfs ceph-node2:sdc
Replace ceph-node1 and ceph-node2 with the hostnames of the systems taking part in the cluster. Replace xfs with your preferred filesystem type, either xfs or btrfs. Replace sdb and sdc with the appropriate device names for available disks on each host. Note that these disks are repartitioned and formatted, destroying any existing data on them.
Check the Ceph status to make sure that the cluster is healthy and that the OSDs are available:
# ceph status
From the first cluster deployment node, install the Ceph Object Gateway software on each of the nodes in the cluster:
# ceph-deploy install --rgw ceph-node1 ceph-node2
# ceph-deploy rgw create ceph-node1 ceph-node2
Replace ceph-node1 and ceph-node2 with the hostnames of the systems taking part in the cluster where you wish to install the Ceph Object Gateway software.
Edit the template configuration in /var/mydom_ceph/ceph.conf on the first cluster deployment node and add the following lines to the end of the configuration file:
osd pool default pg num = 100
osd pool default pgp num = 100
mon pg warn max per osd = 2100
[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80"
[client.rgw.ceph-node2]
rgw_frontends = "civetweb port=80"
Replace ceph-node1 and ceph-node2 with the hostnames of the gateway systems.
Push the configuration to each of the nodes in the cluster:
# ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2
On each of the nodes, restart the Ceph Object Gateway service and check its status to ensure that it is running correctly:
# systemctl restart ceph-radosgw@*
# systemctl status ceph-radosgw@*
Ceph Object Gateways require several pools to store gateway related data. Where gateways are configured as zones, it is typical to create pools particular to a zone, using the naming convention zone.pool-name. For this reason, it is best to manually create all of the required pools for each of the zones within the cluster.
On the first cluster deployment node, run the following commands to create all of the pools required for the zones that are hosted in this cluster:
# ceph osd pool create ceph-us-east-1.rgw.control 16 16
# ceph osd pool create ceph-us-east-1.rgw.data.root 16 16
# ceph osd pool create ceph-us-east-1.rgw.gc 16 16
# ceph osd pool create ceph-us-east-1.rgw.log 16 16
# ceph osd pool create ceph-us-east-1.rgw.intent-log 16 16
# ceph osd pool create ceph-us-east-1.rgw.usage 16 16
# ceph osd pool create ceph-us-east-1.rgw.users.keys 16 16
# ceph osd pool create ceph-us-east-1.rgw.users.email 16 16
# ceph osd pool create ceph-us-east-1.rgw.users.swift 16 16
# ceph osd pool create ceph-us-east-1.rgw.users.uid 16 16
# ceph osd pool create ceph-us-east-1.rgw.buckets.index 32 32
# ceph osd pool create ceph-us-east-1.rgw.buckets.data 32 32
# ceph osd pool create ceph-us-east-1.rgw.meta 16 16
# ceph osd pool create ceph-us-east-2.rgw.control 16 16
# ceph osd pool create ceph-us-east-2.rgw.data.root 16 16
# ceph osd pool create ceph-us-east-2.rgw.gc 16 16
# ceph osd pool create ceph-us-east-2.rgw.log 16 16
# ceph osd pool create ceph-us-east-2.rgw.intent-log 16 16
# ceph osd pool create ceph-us-east-2.rgw.usage 16 16
# ceph osd pool create ceph-us-east-2.rgw.users.keys 16 16
# ceph osd pool create ceph-us-east-2.rgw.users.email 16 16
# ceph osd pool create ceph-us-east-2.rgw.users.swift 16 16
# ceph osd pool create ceph-us-east-2.rgw.users.uid 16 16
# ceph osd pool create ceph-us-east-2.rgw.buckets.index 32 32
# ceph osd pool create ceph-us-east-2.rgw.buckets.data 32 32
# ceph osd pool create ceph-us-east-2.rgw.meta 16 16
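The pool list above is repetitive, so a small shell loop can generate the same commands for a zone. This is a sketch, not part of the original procedure: it only prints the commands, so you can review the output and pipe it to sh once you are satisfied. The zone name is set to ceph-us-east-1 here; adjust it for each zone.

```shell
# Sketch: print the "ceph osd pool create" commands for one zone.
# Review the output, then pipe it to sh to execute.
ZONE=ceph-us-east-1
for POOL in control data.root gc log intent-log usage \
            users.keys users.email users.swift users.uid meta; do
  echo "ceph osd pool create ${ZONE}.rgw.${POOL} 16 16"
done
# The bucket index and data pools use a higher placement-group count:
for POOL in buckets.index buckets.data; do
  echo "ceph osd pool create ${ZONE}.rgw.${POOL} 32 32"
done
```

Run once with ZONE=ceph-us-east-1 and again with ZONE=ceph-us-east-2 to cover both zones hosted on this cluster.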
While configuring zones, each gateway instance requires a system user with credentials set up to allow for S3-like access. This allows each gateway instance to pull the configuration remotely using the access and secret keys. To make sure that the same keys are configured on each gateway instance, it is best to define these keys beforehand and to set them manually when the zones and users are created.
It is good practice to set these as reusable environment variables while you are setting up your configuration and to randomize the keys as much as possible:
# SYSTEM_ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1)
# SYSTEM_SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1)
You can check that these keys are set and contain good content:
# echo SYSTEM_ACCESS_KEY=$SYSTEM_ACCESS_KEY
# echo SYSTEM_SECRET_KEY=$SYSTEM_SECRET_KEY
Keep a record of the output from these commands. You need to export the same environment variables when you set up the second cluster and the secondary zone that it hosts.
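Since the rest of the multisite configuration depends on these keys, it is worth sanity-checking them before continuing. A minimal sketch, assuming the keys were generated as described above:

```shell
# Generate the keys, then confirm their lengths before using them.
SYSTEM_ACCESS_KEY=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 20 | head -n 1)
SYSTEM_SECRET_KEY=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 40 | head -n 1)
# S3-style access keys are 20 characters long; secret keys are 40.
[ ${#SYSTEM_ACCESS_KEY} -eq 20 ] || echo "access key has wrong length"
[ ${#SYSTEM_SECRET_KEY} -eq 40 ] || echo "secret key has wrong length"
```

If either message prints, regenerate the keys before proceeding.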
The multisite configuration is built around a single realm, named gold, with a single zonegroup called us. Within this zonegroup is a master zone named ceph-us-east-1. The following steps describe what must be done to create and configure these components:
Create the realm and make it the default:
# radosgw-admin realm create --rgw-realm=gold --default
Delete the default zonegroup, which is created as part of the simple installation of the Ceph Object Gateway software:
# radosgw-admin zonegroup delete --rgw-zonegroup=default
Create a new master zonegroup. The master zonegroup is in control of the zonegroup map and propagates changes across the system. This zonegroup should be set as the default zonegroup, so that you can run commands for it in future without having to explicitly identify it using the --rgw-zonegroup switch.
# radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://ceph-node1.example.com:80 --master --default
Create the master zone and make it the default zone. Note that for metadata operations, such as user creation, you must use this zone. You can also add the zone to the zonegroup when you create it, and specify the access and secret key that should be used for this zone:
# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=ceph-us-east-1 \
    --endpoints=http://ceph-node1.example.com:80 --access-key=$SYSTEM_ACCESS_KEY \
    --secret=$SYSTEM_SECRET_KEY --default --master
Create a system user that can be used to access the zone pools. The keys for this user must match the keys used by each of the zones that are being configured:
# radosgw-admin user create --uid=zone.user --display-name="Zone User" \
    --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --system
The realm period holds the entire configuration structure for the current state of the realm. When realm information, such as the configuration for the zonegroups and zones, is modified, the changes must be updated for the period. This is achieved by committing the changes:
# radosgw-admin period update --commit
The following commands can be executed on the first cluster deployment node; they update the zonegroup and realm configuration to add the secondary zone hosted on the other node within the cluster.
Create the secondary zone, making sure that you specify the same access and secret key as used for the master zone:
# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=ceph-us-east-2 \
    --endpoints=http://ceph-node2.example.com:80 \
    --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
Update the realm period with the new configuration information:
# radosgw-admin period update --commit
Edit the template configuration in the working directory on the first cluster deployment node to map the zone names to each gateway configuration. This is done by adding a line for the rgw_zone variable to the gateway configuration entry for each node:
[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80"
rgw_zone=ceph-us-east-1
[client.rgw.ceph-node2]
rgw_frontends = "civetweb port=80"
rgw_zone=ceph-us-east-2
When you have updated the template configuration, push the changes to each of the nodes in the cluster:
# ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2
On each of the nodes, restart the Ceph Object Gateway service and check its status to ensure that it is running correctly:
# systemctl restart ceph-radosgw@*
# systemctl status ceph-radosgw@*
Install and deploy a second cluster in much the same way as you did the first. In this example, the second cluster consists of a single node, although you may add more nodes if you require. This node ultimately hosts the gateway for the ceph-us-west zone.
The following commands recap the steps to deploy the cluster and to configure an OSD that can be used for storage. These commands must be issued on a new server, ceph-node3, outside of the first cluster:
# mkdir -p /var/mydom_ceph; cd /var/mydom_ceph
# yum install ceph-deploy
# ceph-deploy new ceph-node3
# echo "osd pool default size = 2" >> ceph.conf
# echo "rbd default features = 3" >> ceph.conf
# ceph-deploy install ceph-node3
# ceph-deploy mon create-initial
# ceph-deploy mon create ceph-node3
# ceph-deploy gatherkeys ceph-node3
# ceph-deploy osd create --zap-disk --fs-type xfs ceph-node3:sdb
Install the Ceph Object Gateway software on the newly deployed node in the cluster:
# ceph-deploy install --rgw ceph-node3
# ceph-deploy rgw create ceph-node3
Replace ceph-node3 with the hostname of the node where you wish to install the Ceph Object Gateway software.
Edit the template configuration in /var/mydom_ceph/ceph.conf on the second cluster deployment node and add the following lines to the end of the configuration file:
osd pool default pg num = 100
osd pool default pgp num = 100
mon pg warn max per osd = 2100
[client.rgw.ceph-node3]
rgw_frontends = "civetweb port=80"
Replace ceph-node3 with the hostname of the gateway system.
Push the configuration to each of the nodes in the cluster:
# ceph-deploy --overwrite-conf config push ceph-node3
Restart the Ceph Object Gateway service on the gateway node and check its status to ensure that it is running correctly:
# systemctl restart ceph-radosgw@*
# systemctl status ceph-radosgw@*
Create the required pools for the Ceph Object Gateway on the second cluster by running the following commands:
# ceph osd pool create ceph-us-west.rgw.control 16 16
# ceph osd pool create ceph-us-west.rgw.data.root 16 16
# ceph osd pool create ceph-us-west.rgw.gc 16 16
# ceph osd pool create ceph-us-west.rgw.log 16 16
# ceph osd pool create ceph-us-west.rgw.intent-log 16 16
# ceph osd pool create ceph-us-west.rgw.usage 16 16
# ceph osd pool create ceph-us-west.rgw.users.keys 16 16
# ceph osd pool create ceph-us-west.rgw.users.email 16 16
# ceph osd pool create ceph-us-west.rgw.users.swift 16 16
# ceph osd pool create ceph-us-west.rgw.users.uid 16 16
# ceph osd pool create ceph-us-west.rgw.buckets.index 32 32
# ceph osd pool create ceph-us-west.rgw.buckets.data 32 32
# ceph osd pool create ceph-us-west.rgw.meta 16 16
Export the same SYSTEM_ACCESS_KEY and SYSTEM_SECRET_KEY environment variables that you set up on the first cluster. For example:
# SYSTEM_ACCESS_KEY=OJywnXPrAA4uSCgv1UUs
# SYSTEM_SECRET_KEY=dIpf1FRPwUYcXfswYx6qjC0eSuHEeHy0I2f9vHFf
Using these keys, pull the realm configuration directly from the first cluster, via the node running the master zone, by issuing the following command:
# radosgw-admin realm pull --url=http://ceph-node1.example.com:80 \
    --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
Pull the period state directly from the first cluster, via the node running the master zone:
# radosgw-admin period pull --url=http://ceph-node1.example.com:80 \
    --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
Set the default realm for the gateway instance to gold:
# radosgw-admin realm default --rgw-realm=gold
Set the default zonegroup to us:
# radosgw-admin zonegroup default --rgw-zonegroup=us
Create the new secondary zone, ceph-us-west, and add it to the us zonegroup. Make sure that when you create the zone you use the same access and secret keys as were used on the original configuration on the first cluster:
# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=ceph-us-west \
    --endpoints=http://ceph-node3.example.com:80 \
    --default --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
Commit the zonegroup changes to update the period state:
# radosgw-admin period update --commit --rgw-zone=ceph-us-west
Edit the template configuration in the working directory at /var/mydom_ceph/ceph.conf to map the zone name to the gateway configuration. This is done by adding a line for the rgw_zone variable to the gateway configuration entry:
[client.rgw.ceph-node3]
rgw_frontends = "civetweb port=80"
rgw_zone=ceph-us-west
When you have updated the template configuration, push the changes to each of the nodes in the cluster:
# ceph-deploy --overwrite-conf config push ceph-node3
Restart the Ceph Object Gateway service and check its status to ensure that it is running correctly:
# systemctl restart ceph-radosgw@*
# systemctl status ceph-radosgw@*
At this point all zones should be running and synchronizing.
To test synchronization, you can create a bucket in any of the zones and then list the buckets on any alternative zone. You should discover that the newly created bucket is visible within any of the zones.
This test can be performed using a simple Python script. Copy the example script, included below, into a file called ~/s3zone_test.py on any host that is able to access each of the nodes where the zones are running:
#!/usr/bin/env python
import boto
import boto.s3.connection
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--access_key", dest="access_key", default="OJywnXPrAA4uSCgv1UUs")
parser.add_option("--secret_key", dest="secret_key", default="dIpf1FRPwUYcXfswYx6qjC0eSuHEeHy0I2f9vHFf")
parser.add_option("-H", "--host", dest="host", default="ceph-node1.example.com")
parser.add_option("-s", "--secure", dest="is_secure", action="store_true", default=False)
parser.add_option("-c", "--create", dest="bucket")
(options, args) = parser.parse_args()

conn = boto.connect_s3(
    aws_access_key_id=options.access_key,
    aws_secret_access_key=options.secret_key,
    host=options.host,
    is_secure=options.is_secure,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

if options.bucket:
    bucket = conn.create_bucket(options.bucket)

for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
Substitute the default values for the access_key, secret_key, and host in the script to better suit your own environment.
To test the script, make sure that it is executable by modifying the permissions on the script:
# chmod 775 ~/s3zone_test.py
Check that you have all the appropriate libraries installed to properly run the script:
# yum install python-boto
You can use the options in the script to set different variables such as which zone host you wish to run the script against, and to determine whether or not to create a new bucket. Create a new bucket in the first zone by running the following command:
# ~/s3zone_test.py --host ceph-node1 -c my-bucket-east-1
my-bucket-east-1 2016-09-21T09:16:14.894Z
Now check the other two nodes to see that the new bucket is synchronized:
# ~/s3zone_test.py --host ceph-node2
my-bucket-east-1 2016-09-21T09:16:16.932Z
# ~/s3zone_test.py --host ceph-node3
my-bucket-east-1 2016-09-21T09:16:15.145Z
Note that the timestamps are different due to the time that it took to synchronize the data. You may also test creating a bucket in the zone located on the second cluster:
# ~/s3zone_test.py --host ceph-node3 -c my-bucket-west-1
my-bucket-east-1 2016-09-21T09:16:15.145Z
my-bucket-west-1 2016-09-21T09:22:15.456Z
Check that this bucket is synchronized into the other zones:
# ~/s3zone_test.py --host ceph-node1
my-bucket-east-1 2016-09-21T09:16:14.894Z
my-bucket-west-1 2016-09-21T09:22:15.428Z
# ~/s3zone_test.py --host ceph-node2
my-bucket-east-1 2016-09-21T09:16:16.932Z
my-bucket-west-1 2016-09-21T09:22:17.488Z
When configuring SSL for a multisite Ceph Object Gateway deployment, it is critical that each zone is capable of validating and verifying the SSL certificates. This means that if you choose to use self-signed certificates, each zone must have a copy of all of the certificates already within its recognized CA bundle. Alternatively, make sure that you use certificates signed by a recognized Certificate Authority.
Note that you may wish to use the instructions provided in the workaround described for Section 1.8.5, "SSL SecurityWarning: Certificate has no subjectAltName", when creating your certificates, to avoid the warning message mentioned.
The steps provided in this example show how to create self-signed certificates for each gateway node, how to share the generated certificates between the nodes to ensure that they can be validated, and how to change the existing multisite configuration to enable SSL.
Create certificates for each gateway node in the deployment.
On each gateway node, run the following commands:
# cd /etc/ceph
# openssl genrsa -out ca.key 2048
# openssl req -new -key ca.key -out ca.csr \
    -subj "/C=US/ST=California/L=RedWoodShore/O=Oracle/OU=Linux/CN=`hostname`"
# openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt
# cp -f ca.crt /etc/pki/tls/certs/`hostname`.crt
# cp -f ca.key /etc/pki/tls/private/`hostname`.key
# cp -f ca.csr /etc/pki/tls/private/`hostname`.csr
# cp ca.crt /etc/pki/tls/`hostname`.pem
# cat ca.key >> /etc/pki/tls/`hostname`.pem
You may replace the values for the certificate subject line with values that are more appropriate to your organization, but ensure that the CommonName (CN) of the certificate resolves to the hostname that you use for the endpoint URLs in your zone configuration.
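Because certificate validation hinges on the CN, it can help to double-check that the CN embedded in your subject string really is the endpoint hostname before generating the certificate. A small standalone sketch; the subject and hostname values here are examples, not live data:

```shell
# Extract the CN from a certificate subject string and compare it with the
# hostname used in the zone endpoint URLs. Example values only.
SUBJ="/C=US/ST=California/L=RedWoodShore/O=Oracle/OU=Linux/CN=ceph-node1.example.com"
ENDPOINT_HOST="ceph-node1.example.com"
# Strip everything up to and including the last "CN=" to isolate the CN.
CN=${SUBJ##*CN=}
if [ "$CN" = "$ENDPOINT_HOST" ]; then
  echo "CN matches endpoint hostname"
else
  echo "CN mismatch: $CN vs $ENDPOINT_HOST"
fi
```

In practice, substitute the output of `hostname` for ENDPOINT_HOST, matching the subject line used in the openssl commands above.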
Copy the certificates from each gateway node to the collection of recognized CAs in /etc/pki/tls/certs/ca-bundle.crt on each node.
For example, on ceph-node1:
# cat /etc/ceph/ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
# cat /etc/ceph/ca.crt | ssh root@ceph-node2 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
# cat /etc/ceph/ca.crt | ssh root@ceph-node3 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
On ceph-node2:
# cat /etc/ceph/ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
# cat /etc/ceph/ca.crt | ssh root@ceph-node1 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
# cat /etc/ceph/ca.crt | ssh root@ceph-node3 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
On ceph-node3:
# cat /etc/ceph/ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
# cat /etc/ceph/ca.crt | ssh root@ceph-node1 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
# cat /etc/ceph/ca.crt | ssh root@ceph-node2 "cat >> /etc/pki/tls/certs/ca-bundle.crt"
If you are running a firewall service on any of the nodes in your environment, make sure that traffic is permitted on port 443. For example:
# firewall-cmd --zone=public --add-port=443/tcp --permanent
# systemctl restart firewalld.service
Modify the existing zonegroup information to use HTTPS to access any zone endpoints.
To do this run the following commands:
# radosgw-admin zonegroup get | sed -r 's/http(.*):80/https\1:443/g' > /tmp/zonegroup.json
# radosgw-admin zonegroup set --infile /tmp/zonegroup.json
# radosgw-admin period update --commit
Redeploy the Ceph Object Gateway on each node in your environment, to reset any previous configuration and to ensure that the nodes are deployed using the full hostname matching the CN used for the certificates.
Run the following command from the first cluster deployment node:
# ceph-deploy --overwrite-conf rgw create ceph-node1.example.com ceph-node2.example.com
Substitute ceph-node1.example.com and ceph-node2.example.com with the full hostnames of the nodes that are running the gateway software, so that these match the CN used on their certificates.
Run the following command from the second cluster deployment node:
# ceph-deploy --overwrite-conf rgw create ceph-node3.example.com
Substitute ceph-node3.example.com with the full hostname of the node that is running the gateway software, so that it matches the CN used on the certificate that you generated for this node.
At this point, all of the gateway services should have restarted, running on the default port 7480.
On the deployment node on each cluster, edit the template configuration to change the port number and to identify the location of the SSL certificate PEM file for each gateway.
For example, on ceph-node1:
# cd /var/mydom_ceph
Edit ceph.conf and modify the gateway configuration entries. Make sure that the full hostname is used in the entry label, and that you modify the port and add an entry pointing to the SSL certificate path:
...
osd pool default pg num = 100
osd pool default pgp num = 100
mon pg warn max per osd = 2100
[client.rgw.ceph-node1.example.com]
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/pki/tls/ceph-node1.example.com.pem"
rgw_zone=ceph-us-east-1
[client.rgw.ceph-node2.example.com]
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/pki/tls/ceph-node2.example.com.pem"
rgw_zone=ceph-us-east-2
Push the modified configuration to each gateway node:
#
ceph-deploy --overwrite-conf config push
ceph-node1
ceph-node2
On each of the nodes, restart the Ceph Object Gateway service and check its status to ensure that it is running correctly:
# systemctl restart ceph-radosgw@*
# systemctl status ceph-radosgw@*
Repeat this step on the second cluster deployment node, to update the configuration for ceph-node3.example.com.
At this point, each gateway entry should be configured to use SSL. You can test that the zones are continuing to synchronize correctly and are using SSL, by using the test script at ~/s3zone_test.py, remembering to use the -s switch to enable SSL. For example:
# ~/s3zone_test.py -s --host ceph-node2.example.com -c my-bucket-east-2
my-bucket-east-1 2016-09-21T09:16:16.932Z
my-bucket-east-2 2016-09-21T14:09:51.287Z
my-bucket-west-1 2016-09-21T09:22:17.488Z
# ~/s3zone_test.py -s --host ceph-node3.example.com
my-bucket-east-1 2016-09-21T09:16:15.145Z
my-bucket-east-2 2016-09-21T14:09:58.783Z
my-bucket-west-1 2016-09-21T09:22:15.456Z
If you attempt to use the script without the -s switch set, the script attempts to connect without SSL on port 80, fails to connect, and ultimately terminates with a socket error:
socket.error: [Errno 111] Connection refused