4 Creating and Managing Volumes
WARNING:
Gluster on Oracle Linux 8 is no longer supported. See Oracle Linux: Product Life Cycle Information for more information.
Oracle Linux 7 is now in Extended Support. See Oracle Linux Extended Support and Oracle Open Source Support Policies for more information. Gluster on Oracle Linux 7 is excluded from extended support.
This chapter discusses Gluster volume types and how to create, manage, and monitor volumes.
Creating Volumes
On each node in the trusted storage pool, storage must be allocated for volumes. In the
examples in this guide, a file system is mounted on
/data/glusterfs/myvolume/mybrick
on each node. For information on setting
up storage on nodes, see Preparing Oracle Linux Nodes.
Gluster uses a directory on this file system on each node as a brick when it creates the volume.
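As a reminder of the storage layout that these examples assume (see Preparing Oracle Linux Nodes for the full procedure), each node has an XFS file system created on a spare disk and mounted at the brick parent directory. The following is a minimal sketch only; the device name /dev/sdb is an assumption, and the device and mkfs options on your nodes might differ.
sudo mkfs.xfs -f -i size=512 -L glusterfs /dev/sdb
sudo mkdir -p /data/glusterfs/myvolume/mybrick
echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /data/glusterfs/myvolume/mybrick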
You can use several volume types:
-
Distributed: Distributes files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and redundancy is either not required or is provided by other hardware or software layers. A disk or server failure can result in serious data loss because files are spread across the bricks in the volume.
-
Replicated: Replicates files across bricks in the volume. You can use replicated volumes when high-availability is required.
-
Distributed Replicated: Distributes files across replicated bricks in the volume. You can use distributed replicated volumes to scale storage and for high-availability and high-reliability. Distributed replicated volumes offer improved read performance.
-
Dispersed: Provides space efficient protection against disk or server failures (based on erasure codes). This volume type stripes the encoded data of files, with some redundancy added, across several bricks in the volume. Dispersed volumes provide a configurable level of reliability with minimum space waste.
-
Distributed Dispersed: Distributes files across dispersed bricks in the volume. This volume type has the same advantages as distributed replicated volumes, but uses dispersed subvolumes instead of replicated ones to store the data on bricks.
The generally accepted naming convention for creating bricks and volumes is:
/data/glusterfs/volume_name/brick_name/brick
In this convention, brick_name is the mount point of the file system on the node, and brick is a directory within that file system that Gluster uses as the brick. For information on mounting a Gluster file system from a client, see Accessing Volumes.
This section describes the basic steps to set up each of these volume types. When creating volumes, include all nodes in the trusted storage pool, including the node on which you're performing the step to create the volume.
The node and brick information in the examples that create and manage volumes is often provided in Bash brace expansion notation. For example:
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
This notation is equivalent to providing the node information in the following longer format:
node1:/data/glusterfs/myvolume/mybrick/brick node2:/data/glusterfs/myvolume/mybrick/brick node3:/data/glusterfs/myvolume/mybrick/brick
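Brace expansion is performed by the shell before the gluster command runs. If you're unsure how a pattern expands, you can preview it with echo, for example:
echo node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
node1:/data/glusterfs/myvolume/mybrick/brick node2:/data/glusterfs/myvolume/mybrick/brick node3:/data/glusterfs/myvolume/mybrick/brick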
When a volume is configured, you can enable TLS on the volume to authenticate and encrypt connections between nodes that serve data for the volume, and for client systems that connect to the pool to access the volume. See Setting Up Transport Layer Security for more information.
For more detailed information, see the Gluster upstream documentation.
Creating Distributed Volumes
This section provides an example of creating a pool using a distributed volume.
Example 4-1 Creating a distributed volume
This example creates a distributed volume over three nodes, with one brick on each node.
sudo gluster volume create myvolume node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distribute
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Creating Replicated Volumes
This section discusses creating a pool using replicated volumes. The replica count sets the number of copies of files across bricks in the volume. Two or three copies are typically used. To protect against server and disk failures, the bricks of the volume must be on different nodes.
Split-brain is a situation where two or more replicated copies of a file become divergent, and not enough information exists to select a copy as being pristine and to self-heal any bad copies. Split-brain situations occur mostly because of network issues with clients connecting to the files in the volume.
If you set replica to an even number (say, 2), split-brain might occur because both bricks think they have the latest and correct version. Use an odd number for the replica count (say, 3) to prevent split-brain.
By using an arbiter brick, you can avoid split-brain without needing the extra storage required by a replica 3 volume, which stores three copies of the files. An arbiter brick contains metadata about the files on the other bricks in the volume, but not the files themselves, so it can be much smaller. The last brick in each replica subvolume is used as the arbiter brick; for example, if you use replica 3 arbiter 1, every third brick is used as an arbiter brick.
Note:
Volumes using an arbiter brick can only be created using the replica 3 arbiter 1 option.
Example 4-2 Creating a replicated volume
This example creates a replicated volume with one brick on three nodes.
sudo gluster volume create myvolume replica 3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Example 4-3 Creating a replicated volume with an arbiter
This example creates a replicated volume with one brick on three nodes, and sets one arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Creating Distributed Replicated Volumes
This section discusses creating a pool using distributed replicated volumes. The number of bricks must be a multiple of the replica count. For example, six nodes with one brick each, or three nodes with two bricks on each node.
The order in which bricks are specified affects data protection. Each set of replica count consecutive bricks forms a replica set, and all replica sets are combined into a volume-wide distribute set. To ensure that replica sets aren't placed on the same node, list the first brick on each node, then the second brick on each node, in the same order.
Example 4-4 Creating a distributed replicated volume with one brick on six nodes
This example creates a distributed replicated volume with one brick on six nodes.
sudo gluster volume create myvolume replica 3 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Example 4-5 Creating a distributed replicated volume with one brick on six nodes with an arbiter
This example creates a distributed replicated volume with one brick on six nodes. Each third brick is used as an arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Example 4-6 Creating a distributed replicated volume with two bricks over three nodes
This example creates a distributed replicated volume with two bricks over three nodes.
sudo gluster volume create myvolume replica 3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Example 4-7 Creating a distributed replicated volume with two bricks over three nodes with an arbiter
This example creates a distributed replicated volume with two bricks over three nodes. Each third brick is used as an arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1 (arbiter)
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Creating Dispersed Volumes
This section discusses creating a pool using dispersed volumes.
You set the volume redundancy level when you create a dispersed volume. The redundancy value sets how many bricks can be lost without interrupting the operation of the volume. The redundancy value must be greater than 0, and the total number of bricks must be greater than 2*redundancy. A dispersed volume must have a minimum of three bricks.
All bricks of a disperse set must have the same capacity; otherwise, when the smallest brick becomes full, no further data can be written to the disperse set.
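The usable capacity of a disperse set is roughly the brick size multiplied by (disperse count - redundancy), because each block is stored as that many data fragments plus the redundancy fragments. As an illustrative sketch that isn't part of the examples that follow, a six-brick dispersed volume that tolerates the loss of any two bricks could be created over six prepared nodes as follows:
sudo gluster volume create myvolume disperse 6 redundancy 2 node{1..6}:/data/glusterfs/myvolume/mybrick/brick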
Example 4-8 Creating a dispersed volume with one brick on three nodes
This example creates a dispersed volume with one brick on three nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Creating Distributed Dispersed Volumes
This section discusses creating a pool using distributed dispersed volumes. Distributed dispersed volumes consist of two or more dispersed subvolumes, which are then distributed. The number of bricks must be a multiple of the disperse count. Because a dispersed volume must have a minimum of three bricks, a distributed dispersed volume must have at least six bricks. For example, six nodes with one brick each, or three nodes with two bricks on each node, are needed for this volume type.
The order in which bricks are specified affects data protection. Each set of disperse count consecutive bricks forms a disperse set, and all disperse sets are combined into a volume-wide distribute set. To ensure that disperse sets aren't placed on the same node, list the first brick on each node, then the second brick on each node, in the same order. The redundancy value is used in the same way as for a dispersed volume.
Example 4-9 Creating a distributed dispersed volume with one brick on six nodes
This example creates a distributed dispersed volume with one brick on six nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Example 4-10 Creating a distributed dispersed volume with two bricks on three nodes
This example creates a distributed dispersed volume with two bricks on three nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 \
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 \
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Managing Volumes
This section provides some basic volume management operations. For more information on volume management, see the upstream documentation.
Setting Volume Options
Several options are available to configure and tune volumes. These options are set with the following command:
gluster volume set volume_name option value
For example, to restrict mounting of the volume to clients with IP addresses on a particular network:
sudo gluster volume set myvolume auth.allow 192.168.10.*
To set access to volume subdirectories, type:
sudo gluster volume set myvolume auth.allow "/(192.168.10.*),/mysubdir1(192.168.1.*),/mysubdir2(192.168.2.*)"
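To review the options currently set on a volume, or to return an option to its default value, you can use the get and reset subcommands. For example, using the volume and option from the previous example:
sudo gluster volume get myvolume all
sudo gluster volume reset myvolume auth.allow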
Stopping a Volume
To stop a volume, use the command:
gluster volume stop volume_name
At the prompt, confirm the operation to stop the volume.
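For example, stopping the volume used in this guide looks similar to the following; the exact prompt wording might vary between Gluster releases:
sudo gluster volume stop myvolume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: myvolume: success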
Self Healing a Replicated Volume
The self-heal daemon runs in the background and diagnoses issues with bricks and automatically starts a self-healing process every 10 minutes on the files that require healing. To see the files that require healing, use:
sudo gluster volume heal myvolume info
You can start a self-heal manually using:
sudo gluster volume heal myvolume
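To heal all files in the volume, not only those flagged as needing healing, you can trigger a full heal. For example, using the same volume name:
sudo gluster volume heal myvolume full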
To list the files in a volume which are in split-brain state, use:
sudo gluster volume heal myvolume info split-brain
See the upstream documentation for the methods available to avoid and recover from split-brain issues.
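For example, one of the upstream methods resolves a file in split-brain by keeping the copy with the latest modification time. The following is a sketch only, where /path/to/file is a hypothetical path relative to the volume root; check the upstream documentation for the policies available in your Gluster release:
sudo gluster volume heal myvolume split-brain latest-mtime /path/to/file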
Expanding a Volume
You can increase the number of bricks in a volume to expand available storage. When expanding distributed replicated and distributed dispersed volumes, you need to add a number of bricks that's a multiple of the replica or disperse count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 2, 4, 6, and so on).
-
Prepare the new node with the same configuration and storage as all existing nodes in the trusted storage pool.
-
Add the node to the pool.
sudo gluster peer probe node4
-
Add the bricks.
sudo gluster volume add-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick
-
Rebalance the volume to distribute files to the new brick(s).
sudo gluster volume rebalance myvolume start
To check the status of the volume rebalance, type:
sudo gluster volume rebalance myvolume status
Example 4-11 Creating a distributed replicated volume and adding a node
This example creates a distributed replicated volume with three nodes and two bricks on each node. The volume is then extended by adding a new node with two bricks on that node. Note that when you add a new node to a replicated volume in this way, you need to increase the replica count to the new number of nodes in the pool.
sudo gluster volume create myvolume replica 3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)
sudo gluster peer probe node4
peer probe: success.
sudo gluster peer status
Number of Peers: 3

Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: ...
State: Peer in Cluster (Connected)
sudo gluster volume add-brick myvolume replica 4 node4:/data/glusterfs/myvolume/mybrick/brick1 node4:/data/glusterfs/myvolume/mybrick/brick2
volume add-brick: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume rebalance myvolume start
volume rebalance: myvolume: success: Rebalance on myvolume has been started successfully. Use rebalance status command to check status of the rebalance process. ID: ...
sudo gluster volume rebalance myvolume status
... volume rebalance: myvolume: success
Example 4-12 Adding bricks to nodes in a distributed replicated volume
This example adds two bricks to an existing distributed replicated volume. The steps to create this volume are shown in Example 4-11.
sudo gluster volume add-brick myvolume node{1,2,3,4}:/data/glusterfs/myvolume/mybrick/brick3 node{1,2,3,4}:/data/glusterfs/myvolume/mybrick/brick4
volume add-brick: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 4 = 16
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Brick9: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick11: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick12: node4:/data/glusterfs/myvolume/mybrick/brick3
Brick13: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick14: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick15: node3:/data/glusterfs/myvolume/mybrick/brick4
Brick16: node4:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume rebalance myvolume start
volume rebalance: myvolume: success: Rebalance on myvolume has been started successfully. Use rebalance status command to check status of the rebalance process. ID: ...
sudo gluster volume rebalance myvolume status
... volume rebalance: myvolume: success
Shrinking a Volume
You can decrease the number of bricks in a volume. This might be useful if a node in the Gluster pool fails due to a hardware or network fault.
When shrinking distributed replicated and distributed dispersed volumes, you need to remove a number of bricks that's a multiple of the replica or disperse count. For example, to shrink a distributed replicated volume with a replica count of 2, you need to remove bricks in multiples of 2 (such as 2, 4, 6, and so on). The bricks you remove must be from the same replica or disperse set.
-
Remove the bricks.
sudo gluster volume remove-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick start
The start option automatically triggers a volume rebalance operation to migrate data from the removed bricks to other bricks in the volume.
-
To check the status of the brick removal, type:
sudo gluster volume remove-brick myvolume status
-
When the brick removal status is completed, commit the remove-brick operation.
sudo gluster volume remove-brick myvolume commit
At the prompt, confirm the operation to delete the bricks.
The data on the brick is migrated to other bricks in the pool. The data on the removed brick is no longer accessible at the Gluster mount point. Removing the brick removes the configuration information and not the data. You can continue to access the data directly from the brick if required.
Example 4-13 Removing a node from a distributed replicated volume
This example removes a node from a pool with four nodes. The replica count for this volume is 4. As a node is removed, the replica count must be reduced to 3. The start option isn't needed when removing bricks from replicated volumes. Instead, the force option is used, which means you don't need to check the remove-brick process status or perform the remove-brick commit step.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 4 = 16
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Brick9: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick11: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick12: node4:/data/glusterfs/myvolume/mybrick/brick3
Brick13: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick14: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick15: node3:/data/glusterfs/myvolume/mybrick/brick4
Brick16: node4:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume remove-brick myvolume replica 3 node4:/data/glusterfs/myvolume/mybrick/brick1 node4:/data/glusterfs/myvolume/mybrick/brick2 node4:/data/glusterfs/myvolume/mybrick/brick3 node4:/data/glusterfs/myvolume/mybrick/brick4 force
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick8: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick9: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick11: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick12: node3:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster peer detach node4
peer detach: success
sudo gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)
Example 4-14 Removing bricks from a distributed replicated volume
This example removes two bricks from a distributed replicated volume.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick8: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick9: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick11: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick12: node3:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume remove-brick myvolume node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 start
volume remove-brick start: success ID: ...
sudo gluster volume remove-brick myvolume \
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 \
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 \
status
Node       ...        status  run time in h:m:s
---------  ...  ------------  -----------------
localhost  ...     completed            0:00:00
node2      ...     completed            0:00:00
node3      ...     completed            0:00:01
sudo gluster volume remove-brick myvolume node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount
point before re-purposing the removed brick.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Deleting a Volume
Deleting a volume removes the volume from the pool and makes its data inaccessible through Gluster; the files remain on the bricks of each node until you remove them. To delete a volume, first stop it, then use the following command:
sudo gluster volume delete volume_name
At the prompt, confirm the operation to delete the volume.
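A minimal end-to-end sketch for the volume used in this guide follows; the prompt wording might differ between Gluster releases:
sudo gluster volume stop myvolume
sudo gluster volume delete myvolume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: myvolume: success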
To reuse the storage, remove all directories on each node. For example:
sudo rm -rf /data/glusterfs/myvolume/mybrick/*
Monitoring Volumes
You can monitor volumes to help with performance tuning, planning storage capacity, and troubleshooting.
These are the main commands you use for monitoring volumes:
-
gluster volume status
-
gluster volume profile
-
gluster volume top
These commands display information about brick and volume status and performance.
This section contains information on using these monitoring commands.
Using the Volume Status Command
The gluster volume status command displays information on the status of bricks and volumes. Use the following syntax:
gluster volume status volume_name options
The following examples show basic use of the gluster volume status command that you can use to perform common tasks. For more information, see the upstream documentation.
-
List status information for each brick in the volume.
gluster volume status volume_name
-
List more detailed status information for each brick in the volume.
gluster volume status volume_name detail
-
List the clients connected to the volume.
gluster volume status volume_name clients
-
List the memory usage and memory pool details for each brick in the volume.
gluster volume status volume_name mem
-
List the inode tables of the volume.
gluster volume status volume_name inode
-
List the open file descriptor tables of the volume.
gluster volume status volume_name fd
-
List the pending calls for the volume.
gluster volume status volume_name callpool
The following are detailed examples of the use of the volume status
command
and their corresponding output.
Example 4-15 Showing brick status
This example shows how to display information about bricks in a volume.
sudo gluster volume status myvolume
Status of volume: myvolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/data/glusterfs/myvolume/mybrick/brick     49154     0      Y       13553
Brick node2:/data/glusterfs/myvolume/mybrick/brick     49154     0      Y       10212
Brick node3:/data/glusterfs/myvolume/mybrick/brick     49152     0      Y       27358
Brick node4:/data/glusterfs/myvolume/mybrick/brick     49152     0      Y       30502
Brick node5:/data/glusterfs/myvolume/mybrick/brick     49152     0      Y       16282
Brick node6:/data/glusterfs/myvolume/mybrick/brick     49152     0      Y       8913
Self-heal Daemon on localhost               N/A       N/A        Y       13574
Self-heal Daemon on node3                   N/A       N/A        Y       27379
Self-heal Daemon on node5                   N/A       N/A        Y       16303
Self-heal Daemon on node2                   N/A       N/A        Y       10233
Self-heal Daemon on node6                   N/A       N/A        Y       8934
Self-heal Daemon on node4                   N/A       N/A        Y       30523

Task Status of Volume myvolume
------------------------------------------------------------------------------
There are no active volume tasks
Example 4-16 Showing detailed brick status information
This example shows how to display detailed information about bricks in a volume.
sudo gluster volume status myvolume detail
Status of volume: myvolume
------------------------------------------------------------------------------
Brick                : Brick node1:/data/glusterfs/myvolume/mybrick/brick
TCP Port             : 49154
RDMA Port            : 0
Online               : Y
Pid                  : 13553
File System          : xfs
Device               : /dev/vdb
Mount Options        : rw,relatime,attr2,inode64,noquota
Inode Size           : N/A
Disk Space Free      : 98.9GB
Total Disk Space     : 100.0GB
Inode Count          : 104857600
Free Inodes          : 104857526
------------------------------------------------------------------------------
...
Brick                : Brick node6:/data/glusterfs/myvolume/mybrick/brick
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 8913
File System          : xfs
Device               : /dev/vdb
Mount Options        : rw,relatime,attr2,inode64,noquota
Inode Size           : N/A
Disk Space Free      : 99.9GB
Total Disk Space     : 100.0GB
Inode Count          : 104857600
Free Inodes          : 104857574
Example 4-17 Showing brick memory usage
This example shows how to display information about memory usage for bricks in a volume.
sudo gluster volume status myvolume mem
Memory status for volume : myvolume
----------------------------------------------
Brick : node1:/data/glusterfs/myvolume/mybrick/brick
Mallinfo
--------
Arena    : 9252864
Ordblks  : 150
Smblks   : 11
Hblks    : 9
Hblkhd   : 16203776
Usmblks  : 0
Fsmblks  : 976
Uordblks : 3563856
Fordblks : 5689008
Keepcost : 30848
----------------------------------------------
...
Brick : node6:/data/glusterfs/myvolume/mybrick/brick
Mallinfo
--------
Arena    : 9232384
Ordblks  : 184
Smblks   : 43
Hblks    : 9
Hblkhd   : 16203776
Usmblks  : 0
Fsmblks  : 4128
Uordblks : 3547696
Fordblks : 5684688
Keepcost : 30848
----------------------------------------------
Using the Volume Profile Command
The gluster volume profile command displays brick I/O information for each File Operation (FOP) for a volume. The information provided by this command helps you identify where bottlenecks might be in a volume.
Note:
Turning on volume profiling might affect system performance. Use this command only for troubleshooting and monitoring performance.
Use the following syntax:
gluster volume profile volume_name options
Use the gluster volume profile -help command to show the full syntax.
The following examples show basic use of the gluster volume profile command that you can use to perform common tasks. For more information, see the upstream documentation.
-
Start the profiling service for a volume.
gluster volume profile volume_name start
-
Display the profiling I/O information of each brick in a volume.
gluster volume profile volume_name info
-
Stop the profiling service for a volume.
gluster volume profile volume_name stop
The following is a detailed example of how to use the volume profile command to monitor a volume.
Example 4-18 Monitoring a volume
The example proceeds in three phases: turning on profiling for the volume, displaying the profiling information that has been gathered, and turning off profiling.
sudo gluster volume profile myvolume start
Starting volume profile on myvolume has been successful
sudo gluster volume info myvolume
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume profile myvolume info
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
---------------------------------------------------
Cumulative Stats:
   %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
   ---------   -----------   -----------   -----------   ------------        ----
        0.00       0.00 us       0.00 us       0.00 us            871  RELEASEDIR
        0.17       2.00 us       2.00 us       2.00 us              3     OPENDIR
        3.07      36.67 us      31.00 us      48.00 us              3      LOOKUP
       10.68      95.75 us      15.00 us     141.00 us              4    GETXATTR
       86.08     514.33 us     246.00 us     908.00 us              6     READDIR

    Duration: 173875 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Interval 5 Stats:

    Duration: 45 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
...
sudo gluster volume profile myvolume stop
Stopping volume profile on myvolume has been successful
Using the Volume Top Command
The gluster volume top command displays brick performance metrics (read, write, file open calls, file read calls, and so on). Use the following syntax:
gluster volume top volume_name options
To display the full syntax of the command, type gluster volume top -help.
The following examples show basic use of the gluster volume top command that you can use to perform common tasks. For more information, see the upstream documentation.
-
List the files with the highest read calls on each brick in the volume.
gluster volume top volume_name read
-
List the files with the highest write calls on each brick in the volume.
gluster volume top volume_name write
-
List the files with the highest open calls on each brick in the volume.
gluster volume top volume_name open
-
List the directories with the highest open calls on each brick in the volume.
gluster volume top volume_name opendir
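You can limit the number of entries that are returned by adding the list-cnt option. For example, to list the ten files with the highest open calls on the volume used elsewhere in this guide (a sketch only):
sudo gluster volume top myvolume open list-cnt 10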
The following are more detailed examples of the gluster volume top
command
with corresponding output:
Example 4-19 Showing the performance for all bricks in a volume
This example shows how to display the read and write performance for all bricks in a volume.
sudo gluster volume top myvolume read-perf bs 2014 count 1024
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 1776.34 MBps time 0.0012 secs
Brick: node2:/data/glusterfs/myvolume/mybrick/brick
Throughput 1694.61 MBps time 0.0012 secs
Brick: node6:/data/glusterfs/myvolume/mybrick/brick
Throughput 1640.68 MBps time 0.0013 secs
Brick: node5:/data/glusterfs/myvolume/mybrick/brick
Throughput 1809.07 MBps time 0.0011 secs
Brick: node4:/data/glusterfs/myvolume/mybrick/brick
Throughput 1438.17 MBps time 0.0014 secs
Brick: node3:/data/glusterfs/myvolume/mybrick/brick
Throughput 1464.73 MBps time 0.0014 secs
sudo gluster volume top myvolume write-perf bs 2014 count 1024
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 779.42 MBps time 0.0026 secs
Brick: node4:/data/glusterfs/myvolume/mybrick/brick
Throughput 759.61 MBps time 0.0027 secs
Brick: node5:/data/glusterfs/myvolume/mybrick/brick
Throughput 763.26 MBps time 0.0027 secs
Brick: node6:/data/glusterfs/myvolume/mybrick/brick
Throughput 736.02 MBps time 0.0028 secs
Brick: node2:/data/glusterfs/myvolume/mybrick/brick
Throughput 751.85 MBps time 0.0027 secs
Brick: node3:/data/glusterfs/myvolume/mybrick/brick
Throughput 713.61 MBps time 0.0029 secs
Example 4-20 Showing the performance for a specific brick
This example shows how to display the read and write performance for a single brick in a volume.
sudo gluster volume top myvolume read-perf bs 2014 count 1024 brick node1:/data/glusterfs/myvolume/mybrick/brick
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 1844.67 MBps time 0.0011 secs
sudo gluster volume top myvolume write-perf bs 2014 count 1024 brick node1:/data/glusterfs/myvolume/mybrick/brick
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 612.88 MBps time 0.0034 secs