Chapter 4 Creating and Managing Volumes
This chapter discusses Gluster volume types and how to create, manage and monitor volumes.
4.1 Creating Volumes
On each node in the trusted storage pool, storage should be allocated for volumes. In the examples in this guide, a file system is mounted on /data/glusterfs/myvolume/mybrick on each node. For information on setting up storage on nodes, see Section 2.4.1, “Preparing Oracle Linux Nodes”. Gluster creates the bricks for volumes on this file system.
There are a number of volume types you can use:
- Distributed: Distributes files randomly across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not required or is provided by other hardware or software layers. A disk or server failure can result in a serious loss of data, because files are spread randomly across the bricks in the volume.
- Replicated: Replicates files across bricks in the volume. Use replicated volumes when high availability is required.
- Distributed Replicated: Distributes files across replicated bricks in the volume. Use distributed replicated volumes to scale storage and for high availability and high reliability. Distributed replicated volumes offer improved read performance.
- Dispersed: Provides space-efficient protection against disk or server failures, based on erasure codes. This volume type stripes the encoded data of files, with some redundancy added, across multiple bricks in the volume. Dispersed volumes provide a configurable level of reliability with minimal space waste.
- Distributed Dispersed: Distributes files across dispersed bricks in the volume. This has the same advantages as distributed replicated volumes, but uses dispersed instead of replicated subvolumes to store the data on bricks.
The generally accepted naming convention for creating bricks and volumes is:
/data/glusterfs/volume_name/brick_name/brick
In this example, brick_name is the file system that can be mounted from a client. For information on mounting a Gluster file system, see Chapter 5, Accessing Volumes.
This section describes the basic steps to set up each of these volume types. When creating volumes, you should include all nodes in the trusted storage pool, including the node on which you are performing the step to create the volume.
The examples that create and manage volumes may use Bash brace expansion notation. For example:
node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
This is equivalent to providing the node information in the longer form of:
node1:/data/glusterfs/myvolume/mybrick/brick
node2:/data/glusterfs/myvolume/mybrick/brick
node3:/data/glusterfs/myvolume/mybrick/brick
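You can check what a brace expansion produces before passing it to the gluster command by echoing it in a Bash shell:

```shell
# Bash expands the braces into one argument per element, which is
# exactly what the gluster command-line tool receives.
echo node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
# prints:
# node1:/data/glusterfs/myvolume/mybrick/brick node2:/data/glusterfs/myvolume/mybrick/brick node3:/data/glusterfs/myvolume/mybrick/brick
```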
When a volume is configured, you can enable TLS on the volume to authenticate and encrypt connections between nodes that serve data for the volume, and for client systems that connect to the pool to access the volume. See Section 2.4.4, “Setting Up Transport Layer Security” for more information.
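As a sketch of what enabling TLS looks like once certificates are in place (client.ssl and server.ssl are standard Gluster volume options, but see Section 2.4.4 for the complete procedure, including certificate creation):

```shell
# Encrypt and authenticate the I/O path between clients and bricks
sudo gluster volume set myvolume client.ssl on
# Encrypt and authenticate traffic between the brick processes themselves
sudo gluster volume set myvolume server.ssl on
```

These commands must be run against a stopped or freshly created volume in some releases; follow the procedure in Section 2.4.4.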
For more detailed information, see the Gluster upstream documentation.
4.1.1 Creating Distributed Volumes
This section provides an example of creating a pool using a distributed volume.
This example creates a distributed volume over three nodes, with one brick on each node.
sudo gluster volume create myvolume node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distribute
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4.1.2 Creating Replicated Volumes
This section discusses creating a pool using replicated volumes.
The replica count sets the number of copies of files across bricks in the volume. Generally, two or three copies are used. To protect against server and disk failures, the bricks of the volume should be on different nodes.
Split-brain is a situation where two or more replicated copies of a file become divergent, and there is not enough information to select a copy as being pristine and to self-heal any bad copies. Split-brain situations occur mostly due to network issues with clients connecting to the files in the volume.
If you set replica to an even number (say, 2), you may encounter split-brain, because both bricks can believe they have the latest and correct version of a file. Use an odd replica count (say, 3) to prevent split-brain.
Using an arbiter brick also enables you to avoid split-brain, without the extra storage required by a replica 3 volume, which stores three copies of the files. An arbiter brick contains metadata about the files (but not the files themselves) on the other bricks in the volume, so it can be much smaller. The last brick in each replica subvolume is used as the arbiter brick; for example, if you use replica 3 arbiter 1, every third brick is used as an arbiter brick. Volumes using an arbiter brick can only be created using the replica 3 arbiter 1 option.
This example creates a replicated volume with one brick on three nodes.
sudo gluster volume create myvolume replica 3 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
This example creates a replicated volume with one brick on three nodes, and sets one arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4.1.3 Creating Distributed Replicated Volumes
This section discusses creating a pool using distributed replicated volumes. The number of bricks must be a multiple of the replica count. For example, six nodes with one brick each, or three nodes with two bricks each.
The order in which bricks are specified affects data protection. Every replica count of consecutive bricks forms a replica set, and all replica sets are combined into a volume-wide distribute set. To make sure that replica sets are not placed on the same node, list the first brick on each node, then the second brick on each node, in the same order.
This example creates a distributed replicated volume with one brick on six nodes.
sudo gluster volume create myvolume replica 3 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
This example creates a distributed replicated volume with one brick on six nodes. Every third brick is used as an arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
This example creates a distributed replicated volume with two bricks over three nodes.
sudo gluster volume create myvolume replica 3 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
This example creates a distributed replicated volume with two bricks over three nodes. Every third brick is used as an arbiter brick.
sudo gluster volume create myvolume replica 3 arbiter 1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1 (arbiter)
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2 (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4.1.4 Creating Dispersed Volumes
This section discusses creating a pool using dispersed volumes.
You set the volume redundancy level when you create a dispersed volume. The redundancy value sets how many bricks can be lost without interrupting the operation of the volume. The redundancy value must be greater than 0, and the total number of bricks must be greater than 2*redundancy. A dispersed volume must have a minimum of three bricks.
All bricks of a disperse set should have the same capacity; otherwise, when the smallest brick becomes full, no additional data is allowed in the disperse set.
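In other words, each disperse set stores the equivalent of bricks minus redundancy bricks of data. These constraints can be sketched with a small, hypothetical shell helper (check_disperse is illustrative only, not part of the gluster CLI):

```shell
# Validate disperse/redundancy values and report the usable capacity
# per disperse set: redundancy > 0 and bricks > 2*redundancy must hold.
check_disperse() {
  bricks=$1
  redundancy=$2
  if [ "$redundancy" -le 0 ] || [ "$bricks" -le $((2 * redundancy)) ]; then
    echo "invalid: need redundancy > 0 and bricks > 2*redundancy"
    return 1
  fi
  echo "usable data bricks per set: $((bricks - redundancy))"
}

check_disperse 4 2 || true   # rejected: 4 is not greater than 2*2
check_disperse 3 1           # prints "usable data bricks per set: 2"
```

For example, disperse 3 redundancy 1 (as used below) tolerates the loss of one brick and stores two bricks' worth of data per set.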
This example creates a dispersed volume with one brick on three nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4.1.5 Creating Distributed Dispersed Volumes
This section discusses creating a pool using distributed dispersed volumes. Distributed dispersed volumes consist of two or more dispersed subvolumes, which are then distributed. The number of bricks must be a multiple of the disperse count. As a dispersed volume must have a minimum of three bricks, a distributed dispersed volume must have at least six bricks. For example, six nodes with one brick each, or three nodes with two bricks each, are needed for this volume type.
The order in which bricks are specified affects data protection. Every disperse count of consecutive bricks forms a disperse set, and all disperse sets are combined into a volume-wide distribute set. To make sure that disperse sets are not placed on the same node, list the first brick on each node, then the second brick on each node, in the same order. The redundancy value is used in the same way as for a dispersed volume.
This example creates a distributed dispersed volume with one brick on six nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 node{1..6}:/data/glusterfs/myvolume/mybrick/brick
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
This example creates a distributed dispersed volume with two bricks on three nodes.
sudo gluster volume create myvolume disperse 3 redundancy 1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Disperse
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4.2 Managing Volumes
This section provides some basic volume management operations. For more information on volume management, see the upstream documentation.
4.2.1 Setting Volume Options
There are a number of options you can set to configure and tune volumes. Options are set with the command:
gluster volume set volume_name option value
For example, to restrict access to mounting the volume to the IP addresses on a network:
sudo gluster volume set myvolume auth.allow 192.168.10.*
Likewise, to set access to volume subdirectories, type:
sudo gluster volume set myvolume auth.allow "/(192.168.10.*),/mysubdir1(192.168.1.*),/mysubdir2(192.168.2.*)"
4.2.2 Starting a Volume
To start a volume, use the command:
gluster volume start volume_name
4.2.3 Stopping a Volume
To stop a volume, use the command:
gluster volume stop volume_name
You are requested to confirm the operation. Enter y to confirm that you want to stop the volume.
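For example, stopping the myvolume volume used in this chapter (the exact wording of the confirmation prompt may vary between Gluster releases):

```shell
sudo gluster volume stop myvolume
# Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
# volume stop: myvolume: success
```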
4.2.4 Self Healing a Replicated Volume
The self-heal daemon runs in the background and diagnoses issues with bricks and automatically initiates a self-healing process every 10 minutes on the files that require healing. To see the files that require healing, use:
sudo gluster volume heal myvolume info
You can start self-healing manually using:
sudo gluster volume heal myvolume
To list the files in a volume which are in split-brain state, use:
sudo gluster volume heal myvolume info split-brain
See the upstream documentation for the methods available to avoid and recover from split-brain issues.
4.2.5 Expanding a Volume
You can increase the number of bricks in a volume to expand available storage. When expanding distributed replicated and distributed dispersed volumes, you need to add a number of bricks that is a multiple of the replica or disperse count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2, such as 4, 6, 8, and so on.
- Prepare the new node with the same configuration and storage as all existing nodes in the trusted storage pool.
- Add the node to the pool:
  sudo gluster peer probe node4
- Add the brick(s):
  sudo gluster volume add-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick
- Rebalance the volume to distribute files to the new brick(s):
  sudo gluster volume rebalance myvolume start
  To check the status of the volume rebalance, type:
  sudo gluster volume rebalance myvolume status
This example creates a distributed replicated volume with three nodes and two bricks on each node. The volume is then extended to add a new node with an additional two bricks on the node. Note that when you add a new node to a replicated volume, you need to increase the replica count to the new number of nodes in the pool.
sudo gluster volume create myvolume replica 3 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick1 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick2
volume create: myvolume: success: please start the volume to access data
sudo gluster volume start myvolume
volume start: myvolume: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)
sudo gluster peer probe node4
peer probe: success.
sudo gluster peer status
Number of Peers: 3
Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: ...
State: Peer in Cluster (Connected)
sudo gluster volume add-brick myvolume replica 4 \
  node4:/data/glusterfs/myvolume/mybrick/brick1 \
  node4:/data/glusterfs/myvolume/mybrick/brick2
volume add-brick: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume rebalance myvolume start
volume rebalance: myvolume: success: Rebalance on myvolume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: ...
sudo gluster volume rebalance myvolume status
...
volume rebalance: myvolume: success
This example adds two bricks to an existing distributed replicated volume. The steps to create this volume are shown in Example 4.11, “Creating a distributed replicated volume and adding a node”.
sudo gluster volume add-brick myvolume \
  node{1,2,3,4}:/data/glusterfs/myvolume/mybrick/brick3 \
  node{1,2,3,4}:/data/glusterfs/myvolume/mybrick/brick4
volume add-brick: success
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 4 = 16
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Brick9: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick11: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick12: node4:/data/glusterfs/myvolume/mybrick/brick3
Brick13: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick14: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick15: node3:/data/glusterfs/myvolume/mybrick/brick4
Brick16: node4:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume rebalance myvolume start
volume rebalance: myvolume: success: Rebalance on myvolume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: ...
sudo gluster volume rebalance myvolume status
...
volume rebalance: myvolume: success
4.2.6 Shrinking a Volume
You can decrease the number of bricks in a volume. This may be useful if a node in the Gluster pool encounters a hardware or network fault.
When shrinking distributed replicated and distributed dispersed volumes, you need to remove a number of bricks that is a multiple of the replica or disperse count. For example, to shrink a distributed replicated volume with a replica count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, and so on). The bricks you remove must be from the same replica or disperse set.
- Remove the brick(s):
  sudo gluster volume remove-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick start
  The start option automatically triggers a volume rebalance operation to migrate data from the removed brick(s) to other bricks in the volume.
- To check the status of the brick removal, type:
  sudo gluster volume remove-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick status
- When the remove-brick status is completed, commit the remove-brick operation:
  sudo gluster volume remove-brick myvolume node4:/data/glusterfs/myvolume/mybrick/brick commit
  You are requested to confirm the operation. Enter y to confirm that you want to delete the brick(s).
  The data on the brick is migrated to other bricks in the pool. The data on the removed brick is no longer accessible at the Gluster mount point. Removing the brick removes the configuration information and not the data. You can continue to access the data directly from the brick if required.
This example removes a node from a pool with four nodes. The replica count for this volume is 4. As a node is removed, the replica count must be reduced to 3. The start option is not needed in replicated volumes; instead, use the force option. With the force option you do not need to check the remove-brick process status, or perform the remove-brick commit steps.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 4 = 16
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick1
Brick5: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick8: node4:/data/glusterfs/myvolume/mybrick/brick2
Brick9: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick11: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick12: node4:/data/glusterfs/myvolume/mybrick/brick3
Brick13: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick14: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick15: node3:/data/glusterfs/myvolume/mybrick/brick4
Brick16: node4:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume remove-brick myvolume replica 3 \
  node4:/data/glusterfs/myvolume/mybrick/brick1 \
  node4:/data/glusterfs/myvolume/mybrick/brick2 \
  node4:/data/glusterfs/myvolume/mybrick/brick3 \
  node4:/data/glusterfs/myvolume/mybrick/brick4 \
  force
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick8: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick9: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick11: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick12: node3:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster peer detach node4
peer detach: success
sudo gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: ...
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: ...
State: Peer in Cluster (Connected)
This example removes two bricks from a distributed replicated volume.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Brick7: node1:/data/glusterfs/myvolume/mybrick/brick3
Brick8: node2:/data/glusterfs/myvolume/mybrick/brick3
Brick9: node3:/data/glusterfs/myvolume/mybrick/brick3
Brick10: node1:/data/glusterfs/myvolume/mybrick/brick4
Brick11: node2:/data/glusterfs/myvolume/mybrick/brick4
Brick12: node3:/data/glusterfs/myvolume/mybrick/brick4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume remove-brick myvolume \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 \
  start
volume remove-brick start: success
ID: ...
sudo gluster volume remove-brick myvolume \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 \
  status
Node       ... status     run time in h:m:s
---------  ... ---------  -----------------
localhost  ... completed  0:00:00
node2      ... completed  0:00:00
node3      ... completed  0:00:01
sudo gluster volume remove-brick myvolume \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick3 \
  node{1,2,3}:/data/glusterfs/myvolume/mybrick/brick4 \
  commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
sudo gluster volume info
Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick1
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick1
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick1
Brick4: node1:/data/glusterfs/myvolume/mybrick/brick2
Brick5: node2:/data/glusterfs/myvolume/mybrick/brick2
Brick6: node3:/data/glusterfs/myvolume/mybrick/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4.2.7 Deleting a Volume
Deleting a volume erases all data on the volume. To delete a volume, first stop it, then use the command:
gluster volume delete volume_name
You are requested to confirm the operation. Enter y to confirm that you want to delete the volume and erase all data.
If you want to reuse the storage, you should remove all directories on each node. For example:
sudo rm -rf /data/glusterfs/myvolume/mybrick/*
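Putting these steps together for the example volume used in this chapter, a typical sequence is (both gluster commands ask for confirmation):

```shell
# Stop the volume, delete its configuration, then clear the brick
# directories on every node so the storage can be reused.
sudo gluster volume stop myvolume
sudo gluster volume delete myvolume
sudo rm -rf /data/glusterfs/myvolume/mybrick/*
```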
4.3 Monitoring Volumes
You can monitor volumes to help with performance tuning, planning storage capacity, and troubleshooting.
These are the main commands you use for monitoring volumes:
- gluster volume status
- gluster volume profile
- gluster volume top
These commands display information about brick and volume status and performance.
This section contains information on using these monitoring commands.
4.3.1 Using the Volume Status Command
The gluster volume status command displays information on the status of bricks and volumes. Use the following syntax:
gluster volume status volume_name options
The following examples show basic use of the gluster volume status command for common tasks. For more information, see the upstream documentation.
- gluster volume status volume_name
  Lists status information for each brick in the volume.
- gluster volume status volume_name detail
  Lists more detailed status information for each brick in the volume.
- gluster volume status volume_name clients
  Lists the clients connected to the volume.
- gluster volume status volume_name mem
  Lists the memory usage and memory pool details for each brick in the volume.
- gluster volume status volume_name inode
  Lists the inode tables of the volume.
- gluster volume status volume_name fd
  Lists the open file descriptor tables of the volume.
- gluster volume status volume_name callpool
  Lists the pending calls for the volume.
Some more detailed examples that include output follow.
This example displays status information about bricks in a volume.
sudo gluster volume status myvolume
Status of volume: myvolume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/data/glusterfs/myvolume/mybric
k/brick 49154 0 Y 13553
Brick node2:/data/glusterfs/myvolume/mybric
k/brick 49154 0 Y 10212
Brick node3:/data/glusterfs/myvolume/mybric
k/brick 49152 0 Y 27358
Brick node4:/data/glusterfs/myvolume/mybric
k/brick 49152 0 Y 30502
Brick node5:/data/glusterfs/myvolume/mybric
k/brick 49152 0 Y 16282
Brick node6:/data/glusterfs/myvolume/mybric
k/brick 49152 0 Y 8913
Self-heal Daemon on localhost N/A N/A Y 13574
Self-heal Daemon on node3 N/A N/A Y 27379
Self-heal Daemon on node5 N/A N/A Y 16303
Self-heal Daemon on node2 N/A N/A Y 10233
Self-heal Daemon on node6 N/A N/A Y 8934
Self-heal Daemon on node4 N/A N/A Y 30523
Task Status of Volume myvolume
------------------------------------------------------------------------------
There are no active volume tasks
This example displays more detailed status information about bricks in a volume.
sudo gluster volume status myvolume detail
Status of volume: myvolume
------------------------------------------------------------------------------
Brick : Brick node1:/data/glusterfs/myvolume/mybrick/brick
TCP Port : 49154
RDMA Port : 0
Online : Y
Pid : 13553
File System : xfs
Device : /dev/vdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : N/A
Disk Space Free : 98.9GB
Total Disk Space : 100.0GB
Inode Count : 104857600
Free Inodes : 104857526
------------------------------------------------------------------------------
...
Brick : Brick node6:/data/glusterfs/myvolume/mybrick/brick
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 8913
File System : xfs
Device : /dev/vdb
Mount Options : rw,relatime,attr2,inode64,noquota
Inode Size : N/A
Disk Space Free : 99.9GB
Total Disk Space : 100.0GB
Inode Count : 104857600
Free Inodes : 104857574
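The detail output is useful for capacity planning. As a sketch, the free and total disk space figures can be paired with their brick by matching the label at the start of each line; /tmp/detail.txt stands in for the real gluster volume status myvolume detail output, in the format shown above.

```shell
#!/bin/sh
# Summarize free space per brick from `gluster volume status ... detail`
# output. /tmp/detail.txt is sample data standing in for the real command.
cat <<'EOF' > /tmp/detail.txt
Brick                : Brick node1:/data/glusterfs/myvolume/mybrick/brick
Disk Space Free      : 98.9GB
Total Disk Space     : 100.0GB
EOF
# The value of interest is the last field on each labelled line.
awk '
  /^Brick /           { brick = $NF }
  /^Disk Space Free/  { free = $NF }
  /^Total Disk Space/ { printf "%s: %s free of %s\n", brick, free, $NF }
' /tmp/detail.txt
```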
This example displays information about memory usage for bricks in a volume.
sudo gluster volume status myvolume mem
Memory status for volume : myvolume
----------------------------------------------
Brick : node1:/data/glusterfs/myvolume/mybrick/brick
Mallinfo
--------
Arena : 9252864
Ordblks : 150
Smblks : 11
Hblks : 9
Hblkhd : 16203776
Usmblks : 0
Fsmblks : 976
Uordblks : 3563856
Fordblks : 5689008
Keepcost : 30848
----------------------------------------------
...
Brick : node6:/data/glusterfs/myvolume/mybrick/brick
Mallinfo
--------
Arena : 9232384
Ordblks : 184
Smblks : 43
Hblks : 9
Hblkhd : 16203776
Usmblks : 0
Fsmblks : 4128
Uordblks : 3547696
Fordblks : 5684688
Keepcost : 30848
----------------------------------------------
4.3.2 Using the Volume Profile Command
The gluster volume profile command displays brick I/O information for each File Operation (FOP) for a volume. The information provided by this command helps you identify where bottlenecks may be in a volume.
Turning on volume profiling may affect system performance, so it should be used for troubleshooting and performance monitoring only.
Use the following syntax:
gluster volume profile volume_name options
Use the gluster volume profile -help command to show the full syntax.
The following examples show basic use of the gluster volume profile command for common tasks. For more information, see the upstream documentation.
- gluster volume profile volume_name start - Starts the profiling service for a volume.
- gluster volume profile volume_name info - Displays the profiling I/O information of each brick in a volume.
- gluster volume profile volume_name stop - Stops the profiling service for a volume.
A more detailed example of using volume profiling follows.
This example turns on profiling for a volume, shows the volume profiling information, then turns profiling off. When profiling is started for a volume, two new diagnostic properties are enabled and displayed when you show the volume information (diagnostics.count-fop-hits and diagnostics.latency-measurement).
sudo gluster volume profile myvolume start
Starting volume profile on myvolume has been successful
sudo gluster volume info myvolume

Volume Name: myvolume
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1:/data/glusterfs/myvolume/mybrick/brick
Brick2: node2:/data/glusterfs/myvolume/mybrick/brick
Brick3: node3:/data/glusterfs/myvolume/mybrick/brick
Brick4: node4:/data/glusterfs/myvolume/mybrick/brick
Brick5: node5:/data/glusterfs/myvolume/mybrick/brick
Brick6: node6:/data/glusterfs/myvolume/mybrick/brick
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume profile myvolume info
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
---------------------------------------------------
Cumulative Stats:
 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us           871  RELEASEDIR
      0.17      2.00 us      2.00 us      2.00 us             3  OPENDIR
      3.07     36.67 us     31.00 us     48.00 us             3  LOOKUP
     10.68     95.75 us     15.00 us    141.00 us             4  GETXATTR
     86.08    514.33 us    246.00 us    908.00 us             6  READDIR

Duration: 173875 seconds
Data Read: 0 bytes
Data Written: 0 bytes

Interval 5 Stats:

Duration: 45 seconds
Data Read: 0 bytes
Data Written: 0 bytes
...
sudo gluster volume profile myvolume stop
Stopping volume profile on myvolume has been successful
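When a volume serves many operations, it helps to pull out the most expensive FOP automatically. The following sketch sorts the cumulative stats rows by the %-latency column; /tmp/profile.txt stands in for the stats rows of the real gluster volume profile myvolume info output, in the format shown above.

```shell
#!/bin/sh
# Find the FOP with the highest %-latency in `gluster volume profile ... info`
# output. /tmp/profile.txt is sample data standing in for the real command.
cat <<'EOF' > /tmp/profile.txt
 3.07  36.67 us  31.00 us  48.00 us  3 LOOKUP
10.68  95.75 us  15.00 us 141.00 us  4 GETXATTR
86.08 514.33 us 246.00 us 908.00 us  6 READDIR
EOF
# Column 1 is %-latency; the FOP name is the last field on each row.
sort -rn /tmp/profile.txt | head -1 | awk '{ print $NF, "("$1"% of latency)" }'
```

Here READDIR dominates the latency, which would point at directory listing as the first thing to investigate.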
4.3.3 Using the Volume Top Command
The gluster volume top command displays brick performance metrics (read, write, file open calls, file read calls, and so on). Use the following syntax:
gluster volume top volume_name options
To display the full syntax of the command, type gluster volume top -help.
The following examples show basic use of the gluster volume top command for common tasks. For more information, see the upstream documentation.
- gluster volume top volume_name read - Lists the files with the highest read calls on each brick in the volume.
- gluster volume top volume_name write - Lists the files with the highest write calls on each brick in the volume.
- gluster volume top volume_name open - Lists the files with the highest open calls on each brick in the volume.
- gluster volume top volume_name opendir - Lists the directories with the highest directory open calls on each brick in the volume.
Some more detailed examples that include output follow.
This example shows how to display the read and the write performance for all bricks in a volume.
sudo gluster volume top myvolume read-perf bs 2014 count 1024
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 1776.34 MBps time 0.0012 secs
Brick: node2:/data/glusterfs/myvolume/mybrick/brick
Throughput 1694.61 MBps time 0.0012 secs
Brick: node6:/data/glusterfs/myvolume/mybrick/brick
Throughput 1640.68 MBps time 0.0013 secs
Brick: node5:/data/glusterfs/myvolume/mybrick/brick
Throughput 1809.07 MBps time 0.0011 secs
Brick: node4:/data/glusterfs/myvolume/mybrick/brick
Throughput 1438.17 MBps time 0.0014 secs
Brick: node3:/data/glusterfs/myvolume/mybrick/brick
Throughput 1464.73 MBps time 0.0014 secs
sudo gluster volume top myvolume write-perf bs 2014 count 1024
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 779.42 MBps time 0.0026 secs
Brick: node4:/data/glusterfs/myvolume/mybrick/brick
Throughput 759.61 MBps time 0.0027 secs
Brick: node5:/data/glusterfs/myvolume/mybrick/brick
Throughput 763.26 MBps time 0.0027 secs
Brick: node6:/data/glusterfs/myvolume/mybrick/brick
Throughput 736.02 MBps time 0.0028 secs
Brick: node2:/data/glusterfs/myvolume/mybrick/brick
Throughput 751.85 MBps time 0.0027 secs
Brick: node3:/data/glusterfs/myvolume/mybrick/brick
Throughput 713.61 MBps time 0.0029 secs
This example shows how to display the read and the write performance for a brick.
sudo gluster volume top myvolume read-perf bs 2014 count 1024 brick \
node1:/data/glusterfs/myvolume/mybrick/brick
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 1844.67 MBps time 0.0011 secs
sudo gluster volume top myvolume write-perf bs 2014 count 1024 brick \
node1:/data/glusterfs/myvolume/mybrick/brick
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 612.88 MBps time 0.0034 secs
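When comparing bricks, the slowest one is usually the first candidate for investigation. As a sketch, the Brick/Throughput pairs in the read-perf output can be scanned for the minimum; /tmp/topperf.txt stands in for the real gluster volume top myvolume read-perf output, in the format shown above.

```shell
#!/bin/sh
# Report the brick with the lowest throughput from `gluster volume top
# ... read-perf` output. /tmp/topperf.txt is sample data standing in for
# the real command.
cat <<'EOF' > /tmp/topperf.txt
Brick: node1:/data/glusterfs/myvolume/mybrick/brick
Throughput 1776.34 MBps time 0.0012 secs
Brick: node4:/data/glusterfs/myvolume/mybrick/brick
Throughput 1438.17 MBps time 0.0014 secs
EOF
# Pair each Throughput line with the Brick line that precedes it,
# keeping the smallest value seen.
awk '
  /^Brick:/     { brick = $2 }
  /^Throughput/ { if (min == "" || $2 + 0 < min) { min = $2 + 0; slow = brick } }
  END           { print "slowest:", slow, min " MBps" }
' /tmp/topperf.txt
```

A brick that is consistently much slower than its peers may indicate failing media or a saturated network path on that node.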