The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
A Ceph Object Gateway provides a REST interface to the Ceph Storage Cluster to facilitate Amazon S3 and OpenStack Swift client access. The Ceph Object Gateway is described in more detail in the upstream documentation.
There are two different deployment configurations for Ceph Object Gateways, depending on requirements.
Simple Ceph Object Gateway
A simple Ceph Object Gateway configuration is used where the Ceph Object Storage service runs in a single data center and there is no requirement to define regions and zones.
Multisite Ceph Object Gateway
A multisite Ceph Object Gateway configuration is used where the Ceph Object Storage service is geographically distributed within a federated architecture, and provides the facility to configure storage for different regions and to further distinguish separate zones per region. Data synchronization agents allow the service to maintain multiple copies of the data across a widely distributed environment, helping to provide better data resilience for failover, backup, and disaster recovery. The feature also makes it possible to write to non-master zones and to keep data changes synchronized between the different zones.
Ceph Storage for Oracle Linux Release 1.0 required that you manually configure an external web server and the FastCGI module for use with the Ceph Object Gateway. In this release, the software includes an embedded version of the lightweight Civetweb Web Server that simplifies installation and configuration of the Ceph Object Gateway service.
The Ceph Object Gateway is a client of the Ceph Storage Cluster, but may be hosted on a node within the cluster if required. The Ceph Object Gateway has the following requirements:
A running Ceph Storage Cluster
A public facing network interface that allows traffic on the network port used to serve HTTP or HTTPS requests (7480 by default)
A name for the Ceph Object Gateway instance
A storage cluster user name with appropriate permissions in a keyring
Pools to store its data
A data directory for the gateway instance
An instance entry in the Ceph Configuration file
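The instance entry in the Ceph configuration file might look similar to the following sketch. The instance name ceph-node1 and the paths shown are assumptions for illustration only; when you deploy with ceph-deploy, most of these values are supplied or defaulted for you:

```ini
[client.rgw.ceph-node1]
host = ceph-node1
keyring = /etc/ceph/ceph.client.rgw.ceph-node1.keyring
rgw_data = /var/lib/ceph/radosgw/ceph-rgw.ceph-node1
rgw_frontends = "civetweb port=7480"
log_file = /var/log/ceph/client.rgw.ceph-node1.log
```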
The following sections describe installation and deployment steps to get you started using Ceph Object Gateway.
To install the Ceph Object Gateway software on a node within your storage cluster, you can run the following command from the deployment node within your environment:
# ceph-deploy install --rgw ceph-node1
Substitute ceph-node1 with the resolvable hostname of the node where you wish to install the software. Note that the target node must have the appropriate Yum channels configured, as described in Section 1.2, “Enabling Access to the Ceph Packages”.
To create a Ceph Object Gateway within the Ceph configuration, use the following command:
# ceph-deploy --overwrite-conf rgw create ceph-node1
If you are running a firewall service, make sure that the port where the Ceph Object Gateway is running is open. For example:
# firewall-cmd --zone=public --add-port=7480/tcp --permanent
# firewall-cmd --reload
Note that if you change the default port for the Ceph Object Gateway at a later stage, you may need to remove this firewall rule and add a new rule for the new port number.
Before you can use the Ceph Object Gateway, you must create users to allow access to the different APIs exposed by the gateway.
To create a user for S3 access, run the following command on the gateway host:
# radosgw-admin user create --uid="testuser" --display-name="First User"
The command returns JSON-formatted output describing the newly created user. This output includes a listing of the keys that are automatically generated when you create a new user. Take note of the access_key and secret_key for the user that you have created. You require these when connecting to the gateway from an S3 client application.
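If you script user provisioning, the keys can also be read programmatically from the command's JSON output. The following is a minimal sketch that assumes the abbreviated output shape shown in the sample; the credential values are illustrative placeholders, not real keys:

```python
import json

# Abbreviated sample of the JSON printed by `radosgw-admin user create`.
# The values here are placeholders for illustration only.
sample_output = '''
{
    "user_id": "testuser",
    "display_name": "First User",
    "keys": [
        {
            "user": "testuser",
            "access_key": "SZUP3NC5P7452N1HQT4B",
            "secret_key": "v0Ok4YK0MtcSZURk6vwCwRtQnB3vAW2G8TjrAlIj"
        }
    ]
}
'''

def extract_s3_keys(output):
    """Return the (access_key, secret_key) pair for the user's first S3 key."""
    user = json.loads(output)
    key = user["keys"][0]
    return key["access_key"], key["secret_key"]

access_key, secret_key = extract_s3_keys(sample_output)
print(access_key)  # SZUP3NC5P7452N1HQT4B
```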
If you wish to provide access to the Swift API, a subuser can be created against the original user. To create a user for Swift access, run the following command on the gateway host:
# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift
The output should display the details for the updated user, with the added subuser. Swift makes use of its own keys, so you need to create a specific key for this subuser before you are able to use it with a Swift client. To do this, run the following command:
# radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
Once again, the details for the updated user are displayed. This time, the swift_keys key in the JSON output is updated to display a user and secret_key that can be used for Swift access validation.
At any point, if you need to see this user information again to obtain keys or to check other information, such as permissions, you can use the radosgw-admin user info command.
To test S3 access, you require the python-boto package and you must create a simple Python script that can be used to create a new bucket.
Install the python-boto package if it is not already installed:
# yum install python-boto
Create a Python script that can be used to test S3 access. Using a text editor, create a file called s3test.py and insert the following code:

#!/usr/bin/env python
import boto
import boto.s3.connection

access_key = 'SZUP3NC5P7452N1HQT4B'
secret_key = 'v0Ok4YK0MtcSZURk6vwCwRtQnB3vAW2G8TjrAlIj'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'ceph-node1.example.com',
    port = 7480,
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

Replace the access_key value, SZUP3NC5P7452N1HQT4B, with the access key for the testuser user that you created for S3 access. Replace the secret_key value, v0Ok4YK0MtcSZURk6vwCwRtQnB3vAW2G8TjrAlIj, with the secret key that was generated for the testuser user. Replace ceph-node1.example.com with the hostname or fully qualified domain name of the gateway host. Replace the port number 7480 if you have configured an alternate port to the default.

Change permissions on the script so that you can run it:
# chmod 776 ./s3test.py
Run the script:
# ./s3test.py
my-new-bucket 2016-09-19T09:25:17.056Z
The script should return the name of the new bucket and the date and timestamp for when it was created.
If you need to test Swift access, install the Swift command-line client and use the secret key that was generated for the subuser that you created for this purpose.
Install the python-swiftclient package and its dependencies:
# yum install python-swiftclient
Run the client from the command line, providing the appropriate credentials and connection information:
# swift -A http://ceph-node1.example.com:7480/auth/1.0 -U testuser:swift \
  -K '2DHaQknPsc5XsYEmHQ0mWCGLnoGnaCr4VUd62czm' list
my-new-bucket
Replace ceph-node1.example.com with the hostname or fully qualified domain name of the gateway host. Replace the port number 7480 if you have configured an alternate port to the default. Replace testuser:swift with the subuser that you created for Swift access. Replace 2DHaQknPsc5XsYEmHQ0mWCGLnoGnaCr4VUd62czm with the secret Swift key that was generated for the Swift subuser. Run man swift for more information about this command and the options available.
The command should list any existing buckets within Ceph, including the bucket created when you tested S3 access.
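The -A argument in the command above points at the gateway's version 1 authentication endpoint. As a sketch of how that URL is composed (the hostname and port below are illustrative placeholders, not part of the gateway tooling):

```python
# Assemble the /auth/1.0 URL that is passed to `swift -A`.
# A plain HTTP gateway on the default port 7480 is assumed here.

def swift_auth_url(host, port=7480, secure=False):
    """Build the v1 auth URL for a Ceph Object Gateway."""
    scheme = "https" if secure else "http"
    return "{0}://{1}:{2}/auth/1.0".format(scheme, host, port)

print(swift_auth_url("ceph-node1.example.com"))
# http://ceph-node1.example.com:7480/auth/1.0
```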
The port number that is used for the Ceph Object Gateway HTTP interface can be changed in the configuration file on the admin node within the Storage Cluster. To change the port number used by the Ceph Object Gateway, edit the ceph.conf file in your working directory on the deployment node of your cluster. Add the following lines to the end of the configuration file:

[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80"
Replace ceph-node1 with the resolvable hostname that you used when you deployed the gateway. Replace 80 with the port number that you wish to use for the HTTP port.
To push the configuration change to the gateway node and to the other nodes in the cluster, run the following command:
# ceph-deploy --overwrite-conf config push ceph-node{1..4}.example.com
Note that in the above command, ceph-node{1..4}.example.com is equivalent to a space-separated list that includes all of the nodes within the storage cluster, and specifically includes the gateway node.
On the gateway node, you should restart the Ceph Object Gateway for the settings to take effect:
# systemctl restart ceph-radosgw@*
To enable SSL on the Ceph Object Gateway service, you must install the OpenSSL packages on the gateway host, if they are not installed already:
# yum install -y openssl mod_ssl
Create an SSL certificate and key that can be used by the Ceph Object Gateway service. Instructions are provided in the workaround described for Section 1.8.5, “SSL SecurityWarning: Certificate has no subjectAltName”. Ideally, the certificate should be signed by a recognized Certificate Authority (CA).
If you configure your Ceph Object Gateway to use a self-signed certificate, you may encounter SSL certificate verification or validation errors when you attempt to access the service in SSL mode, particularly when using the example Python script provided in this document.
If you choose to use a self-signed certificate, you can copy the CA certificate to the client system's certificate bundle to avoid any errors. For example:
# cat custom.crt >> /etc/pki/tls/certs/ca-bundle.crt
Alternatively, use the SSL_CERT_FILE and SSL_CERT_DIR environment variables in the client program or script's environment to specify the path to additional trusted CA certificates in PEM format. For example:
# SSL_CERT_FILE=/root/ceph/custom.pem python script.py
Note that Oracle does not recommend the use of self-signed certificates in production environments.
Update the ceph.conf file in your working directory on the deployment node of your cluster. If there is an existing entry for [client.rgw.gateway], you may need to modify it to look similar to the following example. Alternatively, add an entry that looks similar to the following:

[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/pki/tls/ceph-node1.example.com.pem"
Replace ceph-node1 with the resolvable hostname that you used when you deployed the gateway. Replace 443 with the port number that you wish to use for the HTTPS port. Note that the port number must have the letter s affixed to indicate to the embedded Civetweb web server that HTTPS should be used on this port. Replace /etc/pki/tls/ceph-node1.example.com.pem with the path to a PEM-formatted file that contains both the certificate and the key.
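The port and SSL convention in the rgw_frontends value can be illustrated with a short parser. This is a sketch of the civetweb convention only, not part of the gateway tooling, and the frontends strings shown are examples:

```python
# In a civetweb rgw_frontends string, the `port` value encodes both the
# port number and whether SSL is used: a trailing "s" (for example "443s")
# selects HTTPS on that port.

def parse_civetweb_port(frontends):
    """Return (port, ssl) parsed from a civetweb rgw_frontends string."""
    for token in frontends.split():
        if token.startswith("port="):
            value = token[len("port="):]
            ssl = value.endswith("s")
            return int(value.rstrip("s")), ssl
    raise ValueError("no port= setting found")

print(parse_civetweb_port("civetweb port=443s ssl_certificate=/etc/pki/tls/gw.pem"))
# (443, True)
```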
To push the configuration change to the gateway node and to the other nodes in the cluster, run the following command:
# ceph-deploy --overwrite-conf config push ceph-node{1..4}.example.com
Note that in the above command, ceph-node{1..4}.example.com is equivalent to a space-separated list that includes all of the nodes within the storage cluster, and specifically includes the gateway node.
On the gateway node, you should restart the Ceph Object Gateway for the settings to take effect:
# systemctl restart ceph-radosgw@*
If you are running firewall software on the gateway node, make sure that a rule exists to allow traffic on the port defined in your configuration file. For instance:
# firewall-cmd --zone=public --add-port=443/tcp --permanent