The Cinder block storage service provides persistent block storage for OpenStack instances. The Cinder service is enabled by default.
Cinder requires some form of back-end storage. By default, Cinder uses volumes in a local volume group managed by the Linux Logical Volume Manager (LVM). The local volume group must be named cinder-volumes and must be created manually. You can also enable Cinder to use specialized storage appliances by configuring vendor-specific volume drivers.
Cinder also provides a backup service, which enables you to automatically back up volumes to external storage. By default, the external storage is an NFS share.
By default, the Cinder block storage service uses local volumes
managed by the Linux Logical Volume Manager (LVM). The Cinder
service creates and manages the volumes in an LVM volume group
called cinder-volumes on the storage node.
You have to manually create this volume group.
Perform the following steps on each storage node:
Install the LVM tools.
The LVM tools are usually installed by default. If they are not installed, install them:
# yum install lvm2
Use the pvcreate command to set up the devices that you want to use as physical volumes with LVM.
If the devices contain any existing data, back up the data.
# pvcreate [options] device ...

For example, to set up the /dev/sdb and /dev/sdc devices as physical volumes:

# pvcreate -v /dev/sd[bc]
  Set up physical volume for "/dev/sdb" with 41943040 available sectors
  Zeroing start of device /dev/sdb
  Writing physical volume data to disk "/dev/sdb"
  Physical volume "/dev/sdb" successfully created
  ...
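Optionally, before creating the volume group, you can confirm that the devices were initialized as physical volumes. The following is a minimal check using the standard LVM pvs command; the output shown here is only illustrative and depends on your devices:

$ sudo pvs /dev/sd[bc]
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdb        lvm2 ---  20.00g 20.00g
  /dev/sdc        lvm2 ---  20.00g 20.00g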
Use the vgcreate command to create the cinder-volumes volume group.

# vgcreate [options] cinder-volumes physical_volume ...

For example, to create the volume group from the physical volumes /dev/sdb and /dev/sdc:

# vgcreate -v cinder-volumes /dev/sd[bc]
  Adding physical volume '/dev/sdb' to volume group 'cinder-volumes'
  Adding physical volume '/dev/sdc' to volume group 'cinder-volumes'
  Archiving volume group "cinder-volumes" metadata (seqno 0).
  Creating volume group backup "/etc/lvm/backup/cinder-volumes" (seqno 1).
  Volume group "cinder-volumes" successfully created
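You can also verify the new volume group with the vgs command before configuring the Cinder service to use it; again, the sizes and attributes in the output are only illustrative:

$ sudo vgs cinder-volumes
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   2   0   0 wz--n- 39.99g 39.99g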
For more information, see the lvm(8),
pvcreate(8), vgcreate(8),
and other LVM manual pages.
If you have dedicated storage appliances, you do not have to use the default Cinder LVM volume driver.
To use a different volume driver, you add the configuration
settings for the driver to the
/etc/kolla/config/cinder.conf file on the
master node. If
this file does not exist, create it.
The volume_driver configuration setting is
used to specify the volume driver:
volume_driver = cinder.volume.drivers.driver_name

Using an NFS driver for Cinder volumes is not supported.
For the Oracle ZFS Storage Appliance iSCSI driver, use:
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
For more information about this driver, see http://docs.openstack.org/juno/config-reference/content/zfssa-volume-driver.html.
For Oracle Flash Storage Systems, use:
volume_driver = cinder.volume.drivers.ofs.ofs.OracleFSFibreChannelDriver
To download the Cinder volume driver for Oracle Flash Storage Systems, go to:
http://www.oracle.com/technetwork/server-storage/san-storage/downloads/index.html
For more information about the available Cinder volume drivers and their configuration settings, see the OpenStack Configuration Reference:
http://docs.openstack.org/kilo/config-reference/content/section_volume-drivers.html
The following is an example configuration for the Oracle ZFS Storage Appliance iSCSI driver:
[DEFAULT]
enabled_backends = zfsISCSIdriver-1

[zfsISCSIdriver-1]
volume_backend_name = zfsISCSIdriver-1
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
san_ip = 10.10.10.10
san_login = cinder
san_password = password
zfssa_pool = mypool
zfssa_project = myproject
zfssa_initiator_username = iqn.name
zfssa_initiator_group = default
zfssa_target_portal = 10.10.10.11:3260
zfssa_target_interfaces = e1000g0
The Cinder service can back up Cinder volumes to external storage. By default, the backup driver is configured to use NFS. However, you must configure the NFS share to use as the location for the backups by setting a Kolla property.
On the master node, run the following command:
$ kollacli property set cinder_backup_share host_name:path
where host_name is the fully
qualified DNS name or IP address of the host that exports the
NFS share, and path is the full path
to the share on that host.
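For example, assuming the NFS share is exported by a host named nfs.example.com at the path /export/cinder-backup (both values are placeholders for your own environment), the command would be:

$ kollacli property set cinder_backup_share nfs.example.com:/export/cinder-backup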
You can configure separate shares for individual hosts or groups; see Section 4.4, “Setting Properties for Groups or Hosts” for details.
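As a sketch, assuming a deployment group named storage and the --groups option described in that section, a group-specific share might be set as follows (the group name, host name, and path are placeholders):

$ kollacli property set --groups storage cinder_backup_share nfs.example.com:/export/storage-backup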
By default, Cinder supports NFS version 4.1 or higher. If the
NFS host uses an earlier version of NFS, this can cause errors.
To configure Cinder to downgrade the NFS version, you can add
configuration settings to the
/etc/kolla/config/cinder.conf file on the
master node. If this file
does not exist, create it.
If the NFS host supports NFS version 4, add the following to the configuration file:
[DEFAULT] backup_mount_options="vers=4,minorversion=0"
Otherwise, add the following to the configuration file:
[DEFAULT] backup_mount_options="vers=3"

