Oracle ZFS Storage Appliance Analytics Guide, Release OS8.8.x
Working with Analytics

When to Add the First Write Log Device (CLI)
Use this procedure to determine whether you need a first write log device for Oracle ZFS Storage Appliance. Fibre Channel and iSCSI writes are synchronous and benefit from log devices when the write cache is enabled. Be sure to thoroughly read and understand the documented implications of enabling write caching on LUNs before proceeding. For more information, see Space Management for Shares in Oracle ZFS Storage Appliance Administration Guide, Release OS8.8.x.
To determine if you need to add more than one write log device, see When to Add More Write Log Devices (CLI).
1. Create a worksheet as described in Creating a Worksheet (CLI), select that worksheet, and then enter dataset.

   hostname:analytics worksheets> select worksheet-000
   hostname:analytics worksheet-000> dataset
2. Enter set name=nfs2.ops[op], and then enter commit to add NFSv2 operations per second, broken down by type of operation, to your worksheet.

   hostname:analytics worksheet-000 dataset (uncommitted)> set name=nfs2.ops[op]
                          name = nfs2.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
3. Enter dataset.
4. Repeat steps 2 and 3 to add the following datasets:

   - NFSv3 operations per second, broken down by type of operation (nfs3.ops[op])
   - NFSv4 operations per second, broken down by type of operation (nfs4.ops[op])
   - NFSv4.1 operations per second, broken down by type of operation (nfs4-1.ops[op])
   - iSCSI operations per second, broken down by type of operation (iscsi.ops[op])
   - Fibre Channel operations per second, broken down by type of operation (fc.ops[op])
   - SMB operations per second, broken down by type of operation (smb.ops[op])

   hostname:analytics worksheet-000 dataset (uncommitted)> set name=nfs3.ops[op]
                          name = nfs3.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
   hostname:analytics worksheet-000> dataset
   hostname:analytics worksheet-000 dataset (uncommitted)> set name=nfs4.ops[op]
                          name = nfs4.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
   hostname:analytics worksheet-000> dataset
   hostname:analytics worksheet-000 dataset (uncommitted)> set name=nfs4-1.ops[op]
                          name = nfs4-1.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
   hostname:analytics worksheet-000> dataset
   hostname:analytics worksheet-000 dataset (uncommitted)> set name=iscsi.ops[op]
                          name = iscsi.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
   hostname:analytics worksheet-000> dataset
   hostname:analytics worksheet-000 dataset (uncommitted)> set name=fc.ops[op]
                          name = fc.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
   hostname:analytics worksheet-000> dataset
   hostname:analytics worksheet-000 dataset (uncommitted)> set name=smb.ops[op]
                          name = smb.ops[op]
   hostname:analytics worksheet-000 dataset (uncommitted)> commit
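The repeated dataset/set/commit sequence can be generated rather than typed by hand. The sketch below builds the command batch as text; driving the appliance CLI non-interactively (for example, by piping a saved command file over an ssh session) is an assumption here, and the worksheet name worksheet-000 is carried over from the example above — adapt both to your environment.

```python
# Sketch: build the step-4 command sequence for every statistic.
# The worksheet name and any non-interactive transport are assumptions.
stats = [
    "nfs2.ops[op]", "nfs3.ops[op]", "nfs4.ops[op]", "nfs4-1.ops[op]",
    "iscsi.ops[op]", "fc.ops[op]", "smb.ops[op]",
]

lines = ["analytics worksheets", "select worksheet-000"]  # worksheet from step 1
for stat in stats:
    lines += ["dataset", f"set name={stat}", "commit"]    # steps 2 and 3, per statistic
lines += ["done", "done"]                                 # exit the context, as in step 5

print("\n".join(lines))
```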
5. Enter done, and then enter done again to exit the context.

   hostname:analytics worksheet-000> done
   hostname:analytics worksheets> done
6. Wait at least 15 minutes, and then go to analytics datasets.

   Note: Fifteen minutes is a general guideline. This amount of time may be adjusted if you have a performance-sensitive, short-duration synchronous write workload.

   hostname:> analytics datasets
7. Enter show to view a list of available datasets.

   hostname:analytics datasets> show
   Datasets:

   DATASET     STATE  INCORE ONDISK NAME
   dataset-000 active  1.27M  15.5M arc.accesses[hit/miss]
   dataset-001 active   517K  9.21M arc.accesses[hit/miss=metadata hits][L2ARC eligibility]
   ...
   dataset-030 active   290K  7.80M nfs2.ops[op]

   hostname:analytics datasets>
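On a busy appliance the show listing can be long, so it can help to map statistic names to dataset IDs programmatically. The sketch below parses a saved copy of the listing, assuming the five-column layout shown above (the sample text is abridged from that listing); it is not an appliance API, just offline text processing.

```python
# Sketch: map statistic names (e.g. "nfs2.ops[op]") to dataset IDs from a
# saved copy of the `show` output. Assumes the five-column layout shown above.
listing = """\
DATASET     STATE  INCORE ONDISK NAME
dataset-000 active  1.27M  15.5M arc.accesses[hit/miss]
dataset-001 active   517K  9.21M arc.accesses[hit/miss=metadata hits][L2ARC eligibility]
dataset-030 active   290K  7.80M nfs2.ops[op]
"""

def dataset_ids(show_output):
    ids = {}
    for line in show_output.splitlines():
        parts = line.split(None, 4)           # NAME may contain spaces; keep it whole
        if len(parts) == 5 and parts[0].startswith("dataset-"):
            ids[parts[4]] = parts[0]          # statistic name -> dataset ID
    return ids

print(dataset_ids(listing)["nfs2.ops[op]"])   # → dataset-030, selected in step 8
```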
8. Enter select and the dataset with the name nfs2.ops[op].

   In this example, dataset name nfs2.ops[op] corresponds to dataset-030.

   hostname:analytics datasets> select dataset-030
9. Enter read 900 to read the last 900 seconds (15 minutes) of the dataset, and copy and save the data to your environment for future reference.

   hostname:analytics dataset-030> read 900
10. Enter done.

    hostname:analytics dataset-030> done
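To compare the saved data against the thresholds in the final step, it is enough to reduce each 900-second window to an average and a peak rate. The sketch below assumes you have transcribed the per-second operation counts from the saved read 900 output into a plain list of integers; that input format is an assumption about how you saved the data, not the appliance's native output.

```python
# Sketch: reduce one dataset's 900 one-second samples to summary rates.
# The sample values are illustrative, not measured data.
samples = [12, 0, 7, 1543, 988, 3]   # per-second operation counts (abridged)

avg = sum(samples) / len(samples)    # average ops/sec over the window
peak = max(samples)                  # worst-case burst ops/sec

print(f"average {avg:.1f} ops/sec, peak {peak} ops/sec")
# → average 425.5 ops/sec, peak 1543 ops/sec
```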
11. Repeat steps 7 through 10 for the following datasets:

    - NFSv3 operations per second, broken down by type of operation (nfs3.ops[op])
    - NFSv4 operations per second, broken down by type of operation (nfs4.ops[op])
    - NFSv4.1 operations per second, broken down by type of operation (nfs4-1.ops[op])
    - iSCSI operations per second, broken down by type of operation (iscsi.ops[op])
    - Fibre Channel operations per second, broken down by type of operation (fc.ops[op])
    - SMB operations per second, broken down by type of operation (smb.ops[op])

    Note: Remember to copy and save the data for each dataset to your environment for future reference.

    hostname:analytics datasets> show
    ...
    hostname:analytics datasets> select dataset-032
    hostname:analytics dataset-032> read 900
    ...
    hostname:analytics dataset-032> done
    hostname:analytics datasets> show
    ...
    hostname:analytics datasets> select dataset-034
    ...
    hostname:analytics datasets> select dataset-27
    ...
    hostname:analytics datasets> select dataset-13
    ...
    hostname:analytics datasets> select dataset-07
    ...
    hostname:analytics datasets> select dataset-40
12. Examine the data.

    You might want to add the first write log device when one or both of the following conditions are present:

    - The sum of iSCSI writes, Fibre Channel writes, and NFS/SMB synchronous operations is at least 1000 per second.
    - There are at least 100 NFS commits per second.
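The sizing rule above can be written down directly. In the sketch below, the function name and its arguments are illustrative; the rates would come from the dataset summaries you saved in steps 9 through 11, expressed in operations per second.

```python
# Sketch of the step-12 decision rule. All arguments are rates in ops/sec;
# the function and parameter names are illustrative, not an appliance API.
def needs_first_log_device(iscsi_writes, fc_writes, sync_fs_ops, nfs_commits):
    heavy_sync_load = (iscsi_writes + fc_writes + sync_fs_ops) >= 1000
    frequent_commits = nfs_commits >= 100
    return heavy_sync_load or frequent_commits   # either condition suffices

print(needs_first_log_device(iscsi_writes=600, fc_writes=300,
                             sync_fs_ops=250, nfs_commits=20))
# → True (the synchronous-write sum is 1150 ops/sec)
```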