Oracle® Hierarchical Storage Manager and StorageTek QFS Software Maintenance and Administration Guide, Release 6.0, E42064-03
This chapter covers file-system maintenance and reconfiguration tasks. The first section, Managing Oracle HSM File Systems, addresses maintenance of all Oracle HSM file systems, archiving and non-archiving, shared and unshared (standalone). The second section, Managing Oracle HSM Shared File Systems, deals with special considerations that affect shared file systems.
This section outlines the following tasks:
Set file system quotas to control the online and total storage space that a given user or collection of users can consume within the file system. You can set quotas by user ID, by group ID, or by an administrator-defined admin set ID that groups users by common characteristics, such as participation in a particular project. The admin set ID is especially useful when a project includes users from several groups and spans multiple directories and files.
You enable quotas by mounting a file system with the quota mount option (set by default) and disable quotas by mounting it with the noquota mount option. You define the quotas by placing one or more quota files in the file-system root directory: .quota_u, .quota_g, and .quota_a, which set quotas for users, groups, and admin sets, respectively. The first record in each file, record 0, sets the default values. Subsequent records set values specific to particular users, groups, or admin sets.
Quotas allocate usable file-system space, not simply storage space. They thus set upper limits on both the number of 512-byte blocks allocated on the media and the number of inodes allocated in the file system. The block count measures storage space proper; the inode count measures the resources available for accessing that storage. A single file that occupies a great many blocks but only one inode can therefore consume as much file-system space as a great many zero-length files that occupy many inodes but no blocks.
Each quota can include both soft and hard limits. A hard limit sets the maximum file-system resources that all of a given owner's files can ever use. A soft limit sets the maximum that the owner's files can use indefinitely. Usage can rise above the soft limit, up to but never beyond the hard limit, only for a limited time, the grace period defined in the quota.
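The relationship between the limits can be sketched as a simple allocation check. This is an illustrative model only; the check_alloc helper and its block values are hypothetical stand-ins for logic that Oracle HSM applies inside the file system:

```shell
# Illustrative model of quota enforcement (hypothetical; Oracle HSM does
# this inside the file system). Usage is counted in 512-byte blocks.
SOFT_BLOCKS=2013265920   # soft limit: usable indefinitely
HARD_BLOCKS=3019898880   # hard limit: never exceeded

check_alloc() {
    # $1 = blocks currently used, $2 = blocks requested
    new=$(( $1 + $2 ))
    if [ "$new" -gt "$HARD_BLOCKS" ]; then
        echo "denied"    # the hard limit can never be exceeded
    elif [ "$new" -gt "$SOFT_BLOCKS" ]; then
        echo "grace"     # allowed only until the grace period expires
    else
        echo "ok"
    fi
}

check_alloc 2013265919 1    # exactly at the soft limit -> ok
check_alloc 2013265920 1    # above soft, below hard -> grace
check_alloc 3019898880 1    # above hard -> denied
```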
This section describes the following administrative tasks:
Characterize the Storage Requirements of Users, Groups, and Organizational Units
Create Admin Sets for Projects and for Directories Used by Multiple Groups
To set sustainable quotas, you have to set limits that accommodate user requirements in a way that is both manageable and scalable. So, before setting quotas, estimate the storage requirements of your users. To keep the process manageable, start by classifying user requirements as broadly as possible, so that you address the greatest number of requirements with the smallest amount of administrative effort. You can then specially assess the small number of user requirements that do not fit into the broader categories. The results will provide the broad outlines of the quotas and types of limits that you will set.
The approach outlined below starts by identifying the file-system requirements of access-control groups, since most organizations already define these groups. Then it defines special sets of users whose needs do not align well with those of the standard groups. Then and only then does it begin to address any requirements that are unique to individual users. Proceed as follows:
Since your existing access-control groups already gather together users with similar resource requirements, start by defining the average storage requirements of any groups that will use the file system. Estimate both the average amount of storage space used (in 512-byte blocks) and the average number of files stored, which is equivalent to the average number of inodes used.
Since group members typically have similar organizational roles and work responsibilities, they frequently need access to the same directories and files and generally make similar demands on storage. In the example, we identify three groups that will use the file system /samqfs1: dev (Product Development), cit (Corporate Information Technologies), and pmgt (Program Management). We list the groups, the number of members in each, and their average individual and group requirements in a simple spreadsheet:
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group
---|---|---|---|---|---
dev | 30 | 67108864 | 500 | 2013265920 | 15000
cit | 15 | 10485760 | 50 | 157286400 | 750
pmgt | 6 | 20971520 | 200 | 125829120 | 1200
Total Blocks/Files (Average) | | | | 2296381440 | 16950
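The group columns in the table are straightforward arithmetic: Users multiplied by the per-user averages. A quick sketch of the calculation, with values taken from the table above:

```shell
# Per-group totals from the table: users * per-user blocks and files.
# These group totals are what the group-level quotas must accommodate.
group_total() {
    # $1 = users, $2 = average blocks per user, $3 = average files per user
    echo "blocks=$(( $1 * $2 )) files=$(( $1 * $3 ))"
}

group_total 30 67108864 500    # dev  -> blocks=2013265920 files=15000
group_total 15 10485760 50     # cit  -> blocks=157286400 files=750
group_total 6 20971520 200     # pmgt -> blocks=125829120 files=1200
```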
Next, carry out the same calculations for the maximum amount of storage space and the maximum number of files that group members will store at any given time. Record the results.
In the example, we record the results in a new spreadsheet:
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group
---|---|---|---|---|---
dev | 30 | 100663296 | 1000 | 3019898880 | 30000
cit | 15 | 15728640 | 100 | 235929600 | 1500
pmgt | 6 | 31457280 | 400 | 188743680 | 2400
Total Blocks/Files (Maximum) | | | | 3444572160 | 33900
Now identify any sets of users that belong to different groups but share distinct storage requirements that cannot be addressed on the basis of group membership. Make the same estimates and carry out the same calculations for each identified organization as you did for each access-control group.
In the example, we identify two company projects that will require storage allocations, code-named portal and lockbox. Members of the engineering, marketing, compliance, test, and documentation groups will work together on these projects and will use the same directories and many of the same files. So we add them to our requirements spreadsheets:
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group
---|---|---|---|---|---
dev | 30 | 67108864 | 500 | 2013265920 | 15000
cit | 15 | 10485760 | 50 | 157286400 | 750
pmgt | 6 | 20971520 | 200 | 125829120 | 1200
portal | 10 | 31457280 | 400 | 314572800 | 4000
lockbox | 12 | 31457280 | 500 | 377487360 | 6000
Total Blocks/Files (Average) | | | | 2988441600 | 26950
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group
---|---|---|---|---|---
dev | 30 | 100663296 | 1000 | 3019898880 | 30000
cit | 15 | 15728640 | 100 | 235929600 | 1500
pmgt | 6 | 31457280 | 400 | 188743680 | 2400
portal | 10 | 37748736 | 700 | 377487360 | 7000
lockbox | 12 | 45613056 | 600 | 547356672 | 7200
Total Blocks/Files (Maximum) | | | | 4369416192 | 48100
Now identify any individual users whose requirements have not yet been addressed. Make the same estimates and carry out the same calculations for each user as you did for each access-control group and non-group organization.
Where possible, address user requirements collectively, so that policies are uniform and management overhead is at a minimum. However, when individual requirements are unique, you need to address them individually. In the example, we identify jr23547 in the pmgt group as a user whose special responsibilities require special storage allocations. So we add him to our requirements spreadsheets:
Group | Users Per Set | Average Blocks Per User | Average Files Per User | Average Blocks | Average Files
---|---|---|---|---|---
dev | 30 | 67108864 | 500 | 2013265920 | 15000
cit | 15 | 10485760 | 50 | 157286400 | 750
pmgt | 6 | 20971520 | 200 | 125829120 | 1200
portal | 10 | 31457280 | 400 | 314572800 | 4000
lockbox | 12 | 31457280 | 500 | 377487360 | 6000
jr23547 | 1 | 10485760 | 600 | 10485760 | 600
Total Blocks/Files (Average) | | | | 2998927360 | 27550
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group
---|---|---|---|---|---
dev | 30 | 100663296 | 1000 | 3019898880 | 30000
cit | 15 | 15728640 | 100 | 235929600 | 1500
pmgt | 6 | 31457280 | 400 | 188743680 | 2400
portal | 10 | 37748736 | 700 | 377487360 | 7000
lockbox | 12 | 45613056 | 600 | 547356672 | 7200
jr23547 | 1 | 100663296 | 2000 | 100663296 | 2000
Total Blocks/Files (Maximum) | | | | 4470079488 | 50100
Finally, calculate the average and maximum blocks and files that all users require.
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group
---|---|---|---|---|---
dev | 30 | 67108864 | 500 | 2013265920 | 15000
cit | 15 | 10485760 | 50 | 157286400 | 750
pmgt | 6 | 20971520 | 200 | 125829120 | 1200
portal | 10 | 31457280 | 400 | 314572800 | 4000
lockbox | 12 | 31457280 | 500 | 377487360 | 6000
jr23547 | 1 | 10485760 | 600 | 10485760 | 600
Total Blocks/Files (Average) | | | | 2998927360 | 27550
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group
---|---|---|---|---|---
dev | 30 | 100663296 | 1000 | 3019898880 | 30000
cit | 15 | 15728640 | 100 | 235929600 | 1500
pmgt | 6 | 31457280 | 400 | 188743680 | 2400
portal | 10 | 37748736 | 700 | 377487360 | 7000
lockbox | 12 | 45613056 | 600 | 547356672 | 7200
jr23547 | 1 | 100663296 | 2000 | 100663296 | 2000
Total Blocks/Files (Maximum) | | | | 4470079488 | 50100
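Before committing to quotas, it is worth comparing these grand totals with the capacity of the file system. A short sketch that converts the 512-byte block totals above into gibibytes (the blocks_to_gib helper is illustrative, not an Oracle HSM command):

```shell
# Grand totals from the tables above, and a conversion from 512-byte
# blocks to GiB so they can be compared with file-system capacity.
AVG_BLOCKS=2998927360
MAX_BLOCKS=4470079488

blocks_to_gib() {
    # 512-byte blocks -> GiB (1 GiB = 2097152 blocks of 512 bytes)
    echo $(( $1 / 2097152 ))
}

echo "average demand: $(blocks_to_gib $AVG_BLOCKS) GiB"
echo "maximum demand: $(blocks_to_gib $MAX_BLOCKS) GiB"
```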
If you need to administer project-based quotas or other quotas that cannot be defined by access-control group and user IDs, Create Admin Sets for Projects and for Directories Used by Multiple Groups.
If you are setting quotas on an empty, newly created file system, go to Configure a New File System to Use Quotas.
If you are setting quotas on a file system that already holds files, go to "Configure an Existing File System to Use Quotas".
An admin set is a directory hierarchy or an individual directory or file that is identified for quota purposes by an admin set ID. All files created with a specified admin set ID or stored in a directory with a specified admin set ID have the same quotas, regardless of the user or group IDs that actually own the files. To define admin sets, proceed as follows:
Log in to the file-system server as root.
In the example, the server is named server1:
[server1]root@solaris:~#
If you are using an admin set to configure storage quotas for a new project or team, create a new directory somewhere within the file system for this project or team.
In the example, we create the directory in the /samqfs1 file system and name it portalproject/ for the project of the same name:
[server1]root@solaris:~# mkdir /samqfs1/portalproject
Assign an admin set ID to the directory or file on which you need to set a quota. Use the command samchaid [-fhR] admin-set-id directory-or-file-name, where:
-f forces the assignment and does not report errors.
-h assigns the admin set ID to symbolic links themselves. Without this option, the admin set ID of the file referenced by the symbolic link is changed.
-R assigns the admin set ID recursively to subdirectories and files.
admin-set-id is a unique integer value.
directory-or-file-name is the name of the directory or file to which you are assigning the admin set ID.
In the example, we assign the admin set ID 1 to the directory /samqfs1/portalproject/ and all of its subdirectories and files:
[server1]root@solaris:~# samchaid -R 1 /samqfs1/portalproject/
You can check the assignment, if desired. Use the command sls -D directory-path, where -D specifies a detailed Oracle HSM directory listing for files and directories in directory-path:
[server1]root@solaris:~# sls -D /samqfs1/
/portalproject:
  mode: drwxr-xr-x  links: 2  owner: root  group: root
  length: 4096  admin id: 1  inode: 1047.1
  project: user.root(1)
  access:      Feb 24 12:49  modification: Feb 24 12:44
  changed:     Feb 24 12:49  attributes:   Feb 24 12:44
  creation:    Feb 24 12:44  residence:    Feb 24 12:44
If you are setting quotas on an empty, newly created file system, go to Configure a New File System to Use Quotas.
If you are setting quotas on a file system that already holds files, go to "Configure an Existing File System to Use Quotas".
Use this procedure if you are creating a new file system and if no files currently reside in the file system.
Log in to the file-system server as root.
In the example, the server is named server2:
[server2]root@solaris:~#
If the new file system is not currently mounted, mount it before proceeding.
If you have to set up quotas for groups, create a group quota file, .quota_g, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_g bs=4096 count=number-blocks, where:
if=/dev/zero specifies null characters from the UNIX special file /dev/zero as the input.
of=mountpoint/.quota_g specifies the output file, where mountpoint is the mount point directory for the file system.
bs=4096 sets the block size for the write to 4096 bytes.
count=number-blocks specifies the number of blocks to write. This value depends on the number of records that the file will hold. There is one 128-byte record for each specified quota, so one block can accommodate 32 records.
In the example, we create the group quota file for the file system newsamfs mounted at /newsamfs. During the requirements-gathering phase, we identified three groups that need quotas on the file system: dev, cit, and pmgt. We do not anticipate adding any other group quotas, so we size the file at one block:
[server2]root@solaris:~# dd if=/dev/zero of=/newsamfs/.quota_g bs=4096 count=1
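The sizing rule above (one 128-byte record per quota, 32 records per 4096-byte block) and the dd invocation can be sketched in a scratch directory; /tmp/quotademo below is a hypothetical stand-in for the real file-system mount point:

```shell
# Sizing a quota file: one 128-byte record per quota, 32 records per
# 4096-byte block. Round the record count up to whole blocks to get
# dd's count= operand.
records=3                            # e.g. quotas for dev, cit, pmgt
count=$(( (records + 31) / 32 ))     # 3 records still fit in 1 block

# /tmp/quotademo stands in for the file-system mount point.
mkdir -p /tmp/quotademo
dd if=/dev/zero of=/tmp/quotademo/.quota_g bs=4096 count=$count 2>/dev/null
wc -c < /tmp/quotademo/.quota_g      # file size is count * 4096 bytes
```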
If you have to set up quotas for admin sets, create an admin set quota file, .quota_a, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_a bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_a is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the admin sets quota file for the file system newsamfs mounted at /newsamfs. During the requirements-gathering phase, we identified two projects that need quotas on the file system: portal (admin set ID 1) and lockbox (admin set ID 2). We do not anticipate adding any other admin set quotas, so we size the file at one block:
[server2]root@solaris:~# dd if=/dev/zero of=/newsamfs/.quota_a bs=4096 count=1
If you have to set up quotas for users, create a user quota file, .quota_u, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_u bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_u is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the user quota file for the file system newsamfs mounted at /newsamfs. During the requirements-gathering phase, we identified one user that needed specific quotas on the file system, jr23547. We do not anticipate adding any other individual user quotas, so we size the file at one block:
[server2]root@solaris:~# dd if=/dev/zero of=/newsamfs/.quota_u bs=4096 count=1
Unmount the file system.
You must unmount the file system before you can remount it and enable the quota files.
[server2]root@solaris:~# umount /newsamfs
Perform a file system check.
[server2]root@solaris:~# samfsck -F newsamfs
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
[server2]root@solaris:~# mount /newsamfs
Next, set or update quotas as needed. See "Set Quotas for Groups, Projects, Directories, and Users".
Use this procedure if you are creating quotas for a file system that already holds files.
Log in to the file-system server as root.
In the example, the server is named server1:
[server1]root@solaris:~#
Open the /etc/vfstab file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has been set:
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samqfs1    -        /samqfs1  samfs   -     no       noquota
If the noquota mount option has been set in the /etc/vfstab file, delete it and save the file.
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samqfs1    -        /samqfs1  samfs   -     no       -
:wq
[server1]root@solaris:~#
Open the /etc/opt/SUNWsamfs/samfs.cmd file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has not been set:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/samfs.cmd
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
#
#inodes = 0
#fs = samqfs1
#  forcedirectio (default no forcedirectio)
#  high = 80
#  low = 70
#  weight_size = 1.
#  weight_age = 1.
#  readahead = 128
...
#  dio_wr_ill_min = 0
#  dio_wr_consec = 3
#  qwrite (ma filesystem, default no qwrite)
#  shared_writer (ma filesystem, default no shared_writer)
#  shared_reader (ma filesystem, default no shared_reader)
If the noquota mount option has been set in the /etc/opt/SUNWsamfs/samfs.cmd file, delete it and save the file.
If you deleted the noquota mount option from the /etc/vfstab file and/or the /etc/opt/SUNWsamfs/samfs.cmd file, unmount the file system.
When you remove a noquota mount option, you must unmount the file system so that you can remount it with quotas enabled.
[server1]root@solaris:~# umount /samqfs1
If the file system is not currently mounted, mount it now.
The file system must be mounted before you can enable quotas.
[server1]root@solaris:~# mount /samqfs1
Change to the root directory of the file system and check for any existing quota files. Use the Solaris command ls -a, and look for the files .quota_g, .quota_a, and/or .quota_u.
In the example, no quota files currently exist:
[server1]root@solaris:~# cd /samqfs1
[server1]root@solaris:~# ls -a /samqfs1
.          .archive   .fuid      .stage       portalproject
..         .domain    .inodes    lost+found
If quota files exist, do not modify them.
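Because existing quota files hold live records, a cautious way to create only the missing files is to test for each one first. A sketch in a scratch directory (/tmp/qfsdemo is a hypothetical stand-in for the real mount point):

```shell
# Create each quota file only if it does not already exist; existing
# quota files hold live records and must not be overwritten.
mp=/tmp/qfsdemo            # stand-in for the file-system mount point
mkdir -p "$mp"

for qf in .quota_g .quota_a .quota_u; do
    if [ -f "$mp/$qf" ]; then
        echo "$qf exists, leaving it alone"
    else
        dd if=/dev/zero of="$mp/$qf" bs=4096 count=1 2>/dev/null
        echo "$qf created"
    fi
done
```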
If you have to set up quotas for groups and the group quota file, .quota_g, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_g bs=4096 count=number-blocks, where:
if=/dev/zero specifies null characters from the UNIX special file /dev/zero as the input.
of=mountpoint/.quota_g specifies the output file, where mountpoint is the mount point directory for the file system.
bs=4096 sets the block size for the write to 4096 bytes.
count=number-blocks specifies the number of blocks to write. This value depends on the number of records that the file will hold. There is one 128-byte record for each specified quota, so one block can accommodate 32 records.
In the example, we create the group quota file for the file system samqfs1 mounted at /samqfs1. During the requirements-gathering phase, we identified three groups that need quotas on the file system: dev, cit, and pmgt. We do not anticipate adding any other group quotas, so we size the file at one block:
[server1]root@solaris:~# dd if=/dev/zero of=/samqfs1/.quota_g bs=4096 count=1
If you have to set up quotas for admin sets and the admin sets quota file, .quota_a, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_a bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_a is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the admin sets quota file for the file system samqfs1 mounted at /samqfs1. During the requirements-gathering phase, we identified two projects that need quotas on the file system: portal (admin set ID 1) and lockbox (admin set ID 2). We do not anticipate adding any other admin set quotas, so we size the file at one block:
[server1]root@solaris:~# dd if=/dev/zero of=/samqfs1/.quota_a bs=4096 count=1
If you have to set up quotas for users and the user quota file, .quota_u, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_u bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_u is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the user quota file for the file system samqfs1 mounted at /samqfs1. During the requirements-gathering phase, we identified one user that needed specific quotas on the file system, jr23547. We do not anticipate adding any other individual user quotas, so we size the file at one block:
[server1]root@solaris:~# dd if=/dev/zero of=/samqfs1/.quota_u bs=4096 count=1
Unmount the file system.
You must unmount the file system before you can remount it and enable the quota files.
[server1]root@solaris:~# umount /samqfs1
Perform a file system check.
[server1]root@solaris:~# samfsck -F samqfs1
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
[server1]root@solaris:~# mount /samqfs1
Next, Set Quotas for Groups, Projects, Directories, and Users.
You set new quotas and adjust existing ones using the samquota command. Follow the procedure below:
Once you have characterized storage requirements, decide on the appropriate quotas for each group, user, and non-group organization. Consider the following factors and make adjustments as necessary:
the size of the file system compared to the average and maximum number of blocks that all users require
the number of inodes in the file system compared to the average and maximum number of inodes that all users require
the numbers and types of users that are likely to be close to their maximum requirement at any given time.
Log in to the file-system server as root.
In the example, the server is named server1:
[server1]root@solaris:~#
Set limits for each group that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -G groupID [directory-or-file], where:
-b number-blocks sets the maximum number of 512-byte blocks that can be stored in the file system to number-blocks, an integer (see the samquota man page for alternative ways of specifying size). A value of 0 (zero) specifies an unlimited number of blocks.
: is a field separator.
type specifies the kind of limit, either h for a hard limit or s for a soft limit.
scope (optional) identifies the type of storage that is subject to the limit. It can be either o for online (disk-cache) storage only or t for total storage, which includes both disk-cache and archival storage (the default).
-f number-files sets the maximum number of files that can be stored in the file system to number-files, an integer. A value of 0 (zero) specifies an unlimited number of files.
-t interval sets the grace period, the time during which soft limits can be exceeded, to interval, an integer number of seconds (see the samquota man page for alternative ways of specifying time).
-G groupID specifies a group name or integer identifier for the group. A value of 0 (zero) sets the default limits for all groups.
directory-or-file (optional) is the mount point directory for a specific file system or a specific directory or file on which you need to set a quota.
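Two details of this syntax are easy to get wrong: the -t interval counts seconds by default, and each limit is written as value:type[:scope]. A small sketch (hours_to_seconds and build_limit are hypothetical helpers, shown only to illustrate the operand formats, not part of Oracle HSM):

```shell
# The -t operand counts seconds by default: compute it from hours.
hours_to_seconds() { echo $(( $1 * 3600 )); }
hours_to_seconds 12    # twelve hours -> 43200 seconds

# samquota limit operands follow the pattern value:type[:scope],
# e.g. 3019898880:h:t = hard limit on total (online + archive) storage.
build_limit() {
    # $1 = value, $2 = h|s (hard/soft), $3 = o|t (optional scope)
    if [ -n "$3" ]; then echo "$1:$2:$3"; else echo "$1:$2"; fi
}
build_limit 3019898880 h t    # -> 3019898880:h:t
```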
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /samqfs1 file system that group dev can use and on the number of files that it can store. We set a grace period of 4320 seconds (72 minutes) for online storage only (note that each command below is entered as a single line; the line breaks are escaped by the backslash character):
[server1]root@solaris:~# samquota -b 3019898880:h:t -f 30000:h:t -t 4320:o \
-G dev /samqfs1
[server1]root@solaris:~# samquota -b 2013265920:s:t -f 15000:s:t -t 4320 \
-G dev /samqfs1
Set limits for each admin set that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -A adminsetID [directory-or-file], where -A adminsetID is the integer value that uniquely identifies the admin set.
Setting adminsetID to 0 (zero) sets the default limits for all admin sets.
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /samqfs1 file system that the portal project (admin set ID 1) can use and on the number of files that it can store. We set a grace period of 4320 seconds (72 minutes) for total storage used, which is the default scope (note that the first command below is entered as a single line; the line break is escaped by the backslash character):
[server1]root@solaris:~# samquota -b 377487360:h:t -f 7000:h:t -t 4320 \
-A 1 /samqfs1
[server1]root@solaris:~# samquota -b 314572800:s:t -f 4000:s:t -A 1 /samqfs1
Set limits for each individual user that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -U userID [directory-or-file], where -U userID is a user name or integer identifier for the user.
Setting userID to 0 (zero) sets the default limits for all users.
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /samqfs1 file system that user jr23547 can use and on the number of files that jr23547 can store. We set a longer grace period of 120960 seconds for total storage used, which is the default scope (note that the commands below are entered as single lines; the line breaks are escaped by the backslash character):
[server1]root@solaris:~# samquota -b 100663296:h:t -f 2000:h:t -t 120960 \
-U jr23547 /samqfs1
[server1]root@solaris:~# samquota -b 10485760:s:t -f 600:s:t -t 120960 \
-U jr23547 /samqfs1
Stop here.
If you mount an Oracle HSM file system with the noquota mount option when there are quota files in the root directory, quota records become inconsistent as blocks or files are allocated or freed. In this situation, proceed as follows:
Log in to the file-system server as root.
In the example, the server is named server1:
[server1]root@solaris:~#
Unmount the affected file system.
In the example, we unmount file system samfs2:
[server1]root@solaris:~# umount samfs2
[server1]root@solaris:~#
Open the /etc/vfstab file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has been set:
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samfs2     -        /samfs2   samfs   -     no       noquota
If the noquota mount option has been set in the /etc/vfstab file, delete it and save the file.
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samfs2     -        /samfs2   samfs   -     no       -
:wq
[server1]root@solaris:~#
Open the /etc/opt/SUNWsamfs/samfs.cmd file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has not been set:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/samfs.cmd
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
#
#inodes = 0
#fs = samqfs1
#  forcedirectio (default no forcedirectio)
#  high = 80
#  low = 70
#  weight_size = 1.
#  weight_age = 1.
#  readahead = 128
...
#  dio_wr_ill_min = 0
#  dio_wr_consec = 3
#  qwrite (ma filesystem, default no qwrite)
#  shared_writer (ma filesystem, default no shared_writer)
#  shared_reader (ma filesystem, default no shared_reader)
If the noquota mount option has been set in the /etc/opt/SUNWsamfs/samfs.cmd file, delete it and save the file.
Repair the inconsistent quota records. Use the command samfsck -F family-set-name, where family-set-name is the family set name for the file system in the /etc/opt/SUNWsamfs/mcf file.
[server1]root@solaris:~# samfsck -F samfs2
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
[server1]root@solaris:~# mount /samfs2
[server1]root@solaris:~#
Stop here.
Both administrators and users can monitor quotas and resource usage. The root user can generate quota reports on users, groups, or admin sets with the samquota command. File-system users can check their own quotas using the squota command.
See the procedures below:
Log in to the file-system server as root.
In the example, the server is named server1:
[server1]root@solaris:~#
To display quota statistics for all groups, use the command samquota -g [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -g /samqfs1
To display quota statistics for all admin sets, use the command samquota -a [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -a /samqfs1
To display quota statistics for all users, use the command samquota -u [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -u /samqfs1
To display quota statistics for a specific group, use the command samquota -G groupID [directory-or-file], where groupID specifies a group name or integer identifier for the group and the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on quotas for the dev group in the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -G dev /samqfs1
To display quota statistics for a specific admin set, use the command samquota -A adminsetID [directory-or-file], where adminsetID specifies an integer identifier for the admin set and where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on quotas for admin set 1 in the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -A 1 /samqfs1
To display quota statistics for a specific user, use the command samquota -U userID [directory-or-file], where userID specifies a user name or integer identifier for the user and where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on the quotas for user jr23547 in the samqfs1 file system, which is mounted at /samqfs1:
[server1]root@solaris:~# samquota -U jr23547 /samqfs1
Stop here.
Log in to a file-system host using your user ID.
In the example, we log in to host server1 as user od447:
[server1]od447@solaris:~#
To display your own quota statistics, use the command squota [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for all file systems:
[server1]od447@solaris:~# squota
Limits
Type ID In Use Soft Hard
/samqfs1
Files group 101 1 1000 1200
Blocks group 101 8 20000 30000
Grace period 25920
No user quota entry.
[server1]od447@solaris:~#
Stop here.
When you need to extend a grace period temporarily or when you need to cut a grace period short, you can do so:
If a group, user, or admin set has exceeded the specified soft limit for its quota and needs to remain above the soft limit temporarily but for a period that is longer than the current grace period allows, you can grant the extension as follows:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Check the quota that requires an extension. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
In the example, the dev group is significantly over the soft limit and has only a couple of hours left in its grace period:
[server1]root@solaris:~# samquota -G dev /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  101       323      15000      30000            323      15000      30000
Blocks  group  101 3109330961 2013265920 3019898880     3109330961 2013265920 3019898880
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 2h21m16s
[server1]root@solaris:~#
Extend the grace period, if warranted. Use the command samquota -quota-type ID -x number-seconds [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
number-seconds is an integer representing the number of seconds in the extension (see the samquota man page for alternative ways of specifying time).
Enter y (yes) when prompted to continue.
In the example, we extend the grace period for the dev group to 2678400 seconds (31 days) for files in the samqfs1 file system:
[server1]root@solaris:~# samquota -G dev -x 2678400 /samqfs1
Setting Grace Timer:  continue? y
When we recheck the dev group quota, the grace period has been extended:
[server1]root@solaris:~# samquota -G dev /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  101       323      15000      30000            323      15000      30000
Blocks  group  101     43208 2013265920 3019898880          43208 2013265920 3019898880
Grace period            2678400                                  2678400
---> Warning:  soft limits to be enforced in 31d
[server1]root@solaris:~#
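The number-seconds argument to samquota -x is easier to get right when it is computed rather than typed. A minimal sketch (the to_seconds helper is hypothetical; samquota itself also accepts suffixed forms such as w, d, and h, per its man page):

```shell
# Hypothetical helper: convert days, hours, and minutes into the
# plain seconds value passed to samquota -x.
to_seconds() {
  echo $(( $1 * 86400 + $2 * 3600 + $3 * 60 ))
}

to_seconds 31 0 0   # a 31-day grace period
to_seconds 0 12 0   # a 12-hour grace period
```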
If a group, admin set, or user regularly needs extensions, re-evaluate storage requirements and/or consider increasing the grace period permanently. Use the procedure "Set Quotas for Groups, Projects, Directories, and Users".
Stop here.
If a group, user, or admin set has exceeded the specified soft limit for its quota and cannot free space quickly enough to get below the soft limit before the current grace period expires, you can restart the grace period. Proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Check the quota that requires an extension. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file for which you need to extend a grace period.
In the example, the cit group is over the soft limit for the samqfs1 file system and has just over an hour left in its grace period:
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       762        750       1500            762        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 1h11m23s
[server1]root@solaris:~#
To reset the grace period to its full starting size the next time that a file or block is allocated, clear the grace period timer. Use the command samquota -quota-type ID -x clear [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
Enter y (yes) when prompted to continue.
In the example, we clear the grace-period timer for the cit group's quota on the samqfs1 file system:
[server1]root@solaris:~# samquota -G cit -x clear /samqfs1
Setting Grace Timer:  continue? y
[server1]root@solaris:~#
When we recheck the cit group quota, a file has been allocated and the grace period has been reset to 12h (43200 seconds):
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       763        750       1500            763        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 12h
[server1]root@solaris:~#
Alternatively, to reset the grace period to its full starting size immediately, reset the grace period timer. Use the command samquota -quota-type ID -x reset [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
Enter y (yes) when prompted to continue.
In the example, we reset the grace-period timer for the cit group's quota on the samqfs1 file system:
[server1]root@solaris:~# samquota -G cit -x reset /samqfs1
Setting Grace Timer:  continue? y
[server1]root@solaris:~#
When we recheck the cit group quota, the grace period has been reset to 12h (43200 seconds):
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       762        750       1500            762        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 12h
[server1]root@solaris:~#
Stop here.
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Check the grace period that you need to cut short. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file whose grace period you need to cut short.
In the example, the cit group is over the soft limit and has eleven hours left in its grace period, but we need to end the grace period early:
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       822        750       1500            822        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 11h
[server1]root@solaris:~#
Expire the grace period. Use the command samquota -quota-type ID -x expire [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file whose grace period you need to expire.
In the example, we expire the grace period for the cit group:
[server1]root@solaris:~# samquota -G cit -x expire /samqfs1
Setting Grace Timer:  continue? y
When we recheck quotas, soft limits for the cit group are being enforced as hard limits:
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       762        750       1500            762        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Online soft limits under enforcement (since 6s ago)
[server1]root@solaris:~#
Stop here.
You can inhibit file-system resource allocations by creating inconsistent quota values. When the file system detects that quota values are not consistent for a user, group, or admin set, it prevents that user, group, or admin set from using any more system resources. So setting the hard limit for a quota lower than the corresponding soft limit stops further allocations. To use this technique, proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Back up the quota so that you can restore it later. Export the current configuration, and redirect the information to a file. Use the command samquota -quota-type ID -e [directory-or-file] > file, where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file to which the quota applies.
file is the name of the output file.
In the example, we export the quota for the cit group to the file restore.samqfs1.quota_g.cit in the root user's home directory (note that the command below is entered as a single line; the line break is escaped by the backslash character):
[server1]root@solaris:~# samquota -G cit -e /samqfs1 > \
/root/restore.samqfs1.quota_g.cit
[server1]root@solaris:~#
Check the output. Use the Solaris command more < file, where file is the name of the output file.
[server1]root@solaris:~# more < /root/restore.samqfs1.quota_g.cit
# Type ID
# Online Limits Total Limits
# soft hard soft hard
# Files
# Blocks
# Grace Periods
samquota -G 119 \
    -f 750:s:o -f 1500:h:o -f 750:s:t -f 1500:h:t \
    -b 157286400:s:o -b 235929600:h:o -b 157286400:s:t -b 235929600:h:t \
    -t 43200:o -t 43200:t
[server1]root@solaris:~#
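Because the exported file is itself a runnable samquota command, it is worth confirming that a backup looks like a restore script before handing it to sh. A hedged sketch using a sample export written to /tmp (the file name and contents are illustrative, not from a live system):

```shell
# Write a sample exported quota, then sanity-check its first line.
backup=/tmp/restore.quota.example
cat > "$backup" <<'EOF'
samquota -G 119 \
    -f 750:s:o -f 1500:h:o -f 750:s:t -f 1500:h:t \
    -b 157286400:s:o -b 235929600:h:o -b 157286400:s:t -b 235929600:h:t \
    -t 43200:o -t 43200:t
EOF

# A valid backup begins with a samquota invocation.
if head -1 "$backup" | grep -q '^samquota '; then
  echo "backup looks like a quota restore script"
fi
```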
Set the hard limits for the quota to 0 (zero) and set the soft limits to 1 (or any non-zero value). Use the command samquota -quota-type ID -f 1:s -f 0:h -b 1:s -b 0:h [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is the mount point directory for a specific file system or a specific directory or file for which you need to stop allocations.
In the example, we make the quota settings for the cit group in the /samqfs1 file system inconsistent, and thereby stop new resource allocations:
[server1]root@solaris:~# samquota -G cit -f 1:s -f 0:h -b 1:s -b 0:h /samqfs1
[server1]root@solaris:~#
When we check the quota for the cit group, zero quotas are in effect. The exclamation point characters (!) show all current use as over-quota, so no further allocations will be made:
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       822!         1          0            822!         1          0
Blocks  group  119 3109330961!        1          0     3109330961!         1          0
Grace period              43200                                    43200
---> Quota values inconsistent; zero quotas in effect.
[server1]root@solaris:~#
When you are ready to resume normal allocations, restore the modified quota to its original state by executing the backup file that you created as a shell script. Use the Solaris command sh file, where file is the name of the backup file.
In the example, we restore the quota for the cit group by executing the file /root/restore.samqfs1.quota_g.cit:
[server1]root@solaris:~# sh /root/restore.samqfs1.quota_g.cit
Setting Grace Timer:  continue? y
Setting Grace Timer:  continue? y
[server1]root@solaris:~#
When we check the quota, normal limits have been restored and allocations are no longer blocked:
[server1]root@solaris:~# samquota -G cit /samqfs1
                          Online Limits                        Total Limits
        Type    ID    In Use       Soft       Hard         In Use       Soft       Hard
/samqfs1
Files   group  119       822        750       1500            822        750       1500
Blocks  group  119 3109330961 2013265920 3019898880      120096782  157286400  235929600
Grace period              43200                                    43200
---> Warning:  soft limits to be enforced in 11h
[server1]root@solaris:~#
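The rule this procedure relies on can be stated compactly: allocations stop when the hard limit falls below the soft limit. A sketch of that check (quota_state is a hypothetical illustration of the rule, not part of the samquota interface):

```shell
# Hypothetical illustration of the consistency rule: a hard limit
# below the soft limit makes the quota inconsistent, and the file
# system then blocks further allocations for that ID.
quota_state() {
  soft=$1; hard=$2
  if [ "$hard" -lt "$soft" ]; then
    echo "inconsistent: zero quotas in effect"
  else
    echo "consistent"
  fi
}

quota_state 1 0       # the -f 1:s -f 0:h setting used in this procedure
quota_state 750 1500  # a normal soft/hard pair
```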
Stop here.
To remove or disable quotas for a file system, disable quotas in the mount process.
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Open the /etc/vfstab file in a text editor, add the noquota mount option to the mount options column of the file system row, and save the file.
In the example, we open the file in the vi text editor, and set the noquota mount option for the samqfs1 file system:
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device     Device    Mount       System  fsck  Mount    Mount
#to Mount   to fsck   Point       Type    Pass  at Boot  Options
#---------  --------  ----------  ------  ----  -------  -------
/devices    -         /devices    devfs   -     no       -
/proc       -         /proc       proc    -     no       -
...
samqfs1     -         /samqfs1    samfs   -     no       noquota
:wq
[server1]root@solaris:~#
If the file system is mounted, unmount it.
You must unmount and then remount a file system so that the operating system reloads the /etc/vfstab file and makes the specified changes. In the example, we unmount the samqfs1 file system:
[server1]root@solaris:~# umount samqfs1
[server1]root@solaris:~#
Mount the file system.
In the example, we mount the samqfs1 file system:
[server1]root@solaris:~# mount samqfs1
[server1]root@solaris:~#
If you expect to reinstate quotas later, leave the quota files in place.
When you are ready to reinstate quotas, you can simply unmount the file system, run the command samfsck -F on the file system, remove the noquota mount option, and then remount the file system.
If you do not expect to reinstate quotas or if you need to reclaim the space consumed by quota files, use the Solaris command rm to delete the files .quota_g, .quota_a, and/or .quota_u from the root directory of the file system.
In the example, we remove all quota files from the /samqfs1 file system root directory:
[server1]root@solaris:~# rm /samqfs1/.quota_g
[server1]root@solaris:~# rm /samqfs1/.quota_a
[server1]root@solaris:~# rm /samqfs1/.quota_u
[server1]root@solaris:~#
Stop here.
In general, you manage archiving file systems in much the same way as you would non-archiving file systems. However, you must stop the archiving process before carrying out most file-system management tasks. When active, the archiving processes make changes to the file system's primary disk cache, so you must quiesce these processes before you do maintenance work on the disk cache. This section covers the following tasks:
Log in to the file system host as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Idle all archiving processes. Use the command samcmd aridle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
[server1]root@solaris:~# samcmd aridle
[server1]root@solaris:~#
Idle all staging processes. Use the command samcmd stidle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
[server1]root@solaris:~# samcmd stidle
[server1]root@solaris:~#
Wait for active archiving jobs to complete. Check on the status of the archiving processes using the command samcmd a.
When archiving processes are Waiting for :arrun, the archiving process is idle:
[server1]root@solaris:~# samcmd a
Archiver status samcmd 5.4 10:20:34 May 20 2014
samcmd on samfs-mds
sam-archiverd:  Waiting for :arrun
sam-arfind: ...
Waiting for :arrun
Wait for active staging jobs to complete. Check on the status of the staging processes using the command samcmd u.
When staging processes are Waiting for :strun, the staging process is idle:
[server1]root@solaris:~# samcmd u
Staging queue samcmd 5.4 10:20:34 May 20 2014
samcmd on solaris.demo.lan
Staging queue by media type: all
sam-stagerd:  Waiting for :strun
root@solaris:~#
To fully quiesce the system, Stop Archiving and Staging Processes as well.
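The wait steps above lend themselves to scripting: poll samcmd a until the archiver reports idle. A hedged sketch in which sample output stands in for a live samcmd call (archiver_idle is a hypothetical helper, not an Oracle HSM command):

```shell
# Hypothetical helper: true when samcmd a output shows the archiver
# waiting for :arrun, that is, idle.
archiver_idle() {
  echo "$1" | grep -q 'Waiting for :arrun'
}

# Sample samcmd a output; in practice you would capture "$(samcmd a)".
sample='sam-archiverd:  Waiting for :arrun
sam-arfind: ...
Waiting for :arrun'

if archiver_idle "$sample"; then
  echo "archiver is idle"
fi
```

The same pattern applies to the stager: check samcmd u output for the Waiting for :strun marker.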
If you have not already done so, Idle Archiving and Staging Processes.
If you have not already done so, log in to the file system host as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Idle all removable media drives before proceeding further. For each drive, use the command samcmd equipment-number idle, where equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.
This command will allow current archiving and staging jobs to complete before turning drives off, but will not start any new work. In the example, we idle four drives, with ordinal numbers 801, 802, 803, and 804:
[server1]root@solaris:~# samcmd 801 idle
[server1]root@solaris:~# samcmd 802 idle
[server1]root@solaris:~# samcmd 803 idle
[server1]root@solaris:~# samcmd 804 idle
[server1]root@solaris:~#
Wait for running jobs to complete.
We can check on the status of the drives using the command samcmd r. When all drives are notrdy and empty, we are ready to proceed:
[server1]root@solaris:~# samcmd r
Removable media samcmd 5.4 18:37:09 Feb 17 2014
samcmd on samqfs1host
ty   eq   status       act  use  state   vsn
li   801  ---------p     0   0%  notrdy  empty
li   802  ---------p     0   0%  notrdy  empty
li   803  ---------p     0   0%  notrdy  empty
li   804  ---------p     0   0%  notrdy  empty
[server1]root@solaris:~#
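Rather than eyeballing the samcmd r display, the drive states can be counted. A hedged sketch over sample output (busy_drives is a hypothetical helper; the li rows mimic the display format shown above):

```shell
# Hypothetical helper: count tape drives that are not yet in the
# "notrdy ... empty" state in samcmd r output.
busy_drives() {
  echo "$1" | awk '/^li/ && !(/notrdy/ && /empty/) { n++ } END { print n+0 }'
}

sample='li  801 ---------p  0  0%  notrdy  empty
li  802 ---------p  0  0%  notrdy  empty
li  803 ---------p  1  2%  idle
li  804 ---------p  0  0%  notrdy  empty'

busy_drives "$sample"   # drive 803 is still busy, so this prints 1
```

Proceed to samd stop only when the count reaches zero.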
When the archiver and stager processes are idle and the tape drives are all notrdy, stop the library-control daemon. Use the command samd stop:
[server1]root@solaris:~# samd stop
[server1]root@solaris:~#
Proceed with file-system maintenance.
When maintenance is complete, Restart Archiving and Staging Processes.
When you restart operations, pending stages are reissued and archiving is resumed.
Stop here.
When you are ready to resume normal, automatic operation, proceed as follows:
Log in to the file system host as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Restart the Oracle HSM library-control daemon. Use the command samd start:
[server1]root@solaris:~# samd start
[server1]root@solaris:~#
Stop here.
Renaming a file system is a two-step process. First you change the family set name for the file system by editing the /etc/opt/SUNWsamfs/mcf file. Then you have the samfsck -R -F command read the new name and update the superblock on the corresponding disk devices. To rename a file system, use the procedure below:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
If you are repairing an archiving file system, carry out the procedure "Idle Archiving and Staging Processes" before proceeding further.
Unmount the file system that you need to rename.
In the example, we unmount file system samqfs1:
[server1]root@solaris:~# umount samqfs1
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the file system that you need to rename.
In the example, we use the vi editor. We need to change the name of the samqfs1 file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#-----------------    ---------  ---------  ---------  ------  ----------
samqfs1               100        ms         samqfs1    on
/dev/dsk/c1t3d0s3     101        md         samqfs1    on
/dev/dsk/c1t4d0s5     102        md         samqfs1    on
In the fourth column of the file, change the family set name of the file system to the new value. You may also change the file-system equipment identifier in the first column, but do not change anything else. Save the file and close the editor.
In the example, we change both the equipment identifier and the family set name of the file system from samqfs1 to samqfs-hpcc:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family       Device  Additional
# Identifier          Ordinal    Type       Set          State   Parameters
#-----------------    ---------  ---------  -----------  ------  ----------
samqfs-hpcc           100        ms         samqfs-hpcc  on
/dev/dsk/c1t3d0s3     101        md         samqfs-hpcc  on
/dev/dsk/c1t4d0s5     102        md         samqfs-hpcc  on
:wq
root@solaris:~#
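The family-set edit above can also be made mechanically. A hedged sketch that rewrites a copy of the mcf file with sed, so the result can be reviewed before it replaces /etc/opt/SUNWsamfs/mcf (the path and file contents below are illustrative):

```shell
# Build a sample mcf fragment, then rewrite the family set name.
mcf=/tmp/mcf.example
cat > "$mcf" <<'EOF'
samqfs1               100  ms  samqfs1    on
/dev/dsk/c1t3d0s3     101  md  samqfs1    on
/dev/dsk/c1t4d0s5     102  md  samqfs1    on
EOF

# Replace every occurrence of the old name with the new one;
# review the output before installing it as the real mcf file.
sed 's/samqfs1/samqfs-hpcc/g' "$mcf"
```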
Rewrite the file-system super block to reflect the new family set name. Use the command samfsck -R -F family-set-name, where family-set-name is the family set name that you just specified in the /etc/opt/SUNWsamfs/mcf file.
When issued with the -R and -F options, the samfsck command reads the new family set name and the corresponding disk-storage equipment identifiers from the /etc/opt/SUNWsamfs/mcf file. It then rewrites the super block on the specified disk devices with the new family set name. In the example, we run the command with the new samqfs-hpcc family set name:
[server1]root@solaris:~# samfsck -R -F samqfs-hpcc
Open the /etc/vfstab file in a text editor, and locate the entry for the file system that you are renaming.
In the example, we open the file in the vi text editor. We need to change the samqfs1 file system entry to use the new name:
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device     Device    Mount       System  fsck  Mount    Mount
#to Mount   to fsck   Point       Type    Pass  at Boot  Options
#---------  --------  ----------  ------  ----  -------  -------
/devices    -         /devices    devfs   -     no       -
/proc       -         /proc       proc    -     no       -
...
samqfs1     -         /samqfs1    samfs   -     no       -
In the /etc/vfstab entry for the file system that you have renamed, change the file system name in the first column and the mount-point directory name in the third column (if required), and save the file.
In the example, we change the name of the samqfs1 file system to samqfs-hpcc and change the mount point to match:
[server1]root@solaris:~# vi /etc/vfstab
#File
#Device      Device    Mount          System  fsck  Mount    Mount
#to Mount    to fsck   Point          Type    Pass  at Boot  Options
#----------  --------  -------------  ------  ----  -------  -------
/devices     -         /devices       devfs   -     no       -
/proc        -         /proc          proc    -     no       -
...
samqfs-hpcc  -         /samqfs-hpcc   samfs   -     no       -
:wq
[server1]root@solaris:~#
Create the new mount-point directory for the new file system, if required, and set the access permissions for the mount point.
Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /samqfs-hpcc mount-point directory and set permissions to 755 (-rwxr-xr-x):
[server1]root@solaris:~# mkdir /samqfs-hpcc
[server1]root@solaris:~# chmod 755 /samqfs-hpcc
[server1]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any that are detected.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It will stop if it encounters an error:
[server1]root@solaris:~# sam-fsd
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly. Use the command samd config:
[server1]root@solaris:~# samd config
If samd config reports errors, correct them and re-issue the command until no errors are found.
Mount the file system.
In the example, we use the new mount point directory:
[server1]root@solaris:~# mount /samqfs-hpcc
Stop here.
When file systems report errors via samu, Oracle HSM Manager, or the /var/adm/sam-log file, follow the procedure below:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
If you are repairing an archiving file system, carry out the procedure "Idle Archiving and Staging Processes" before proceeding further.
Unmount the affected file system.
You may need to try more than once if you are waiting for archiving to stop. In the example, we unmount file system samqfs1:
[server1]root@solaris:~# umount samqfs1
samfs umount: /samqfs1: is busy
[server1]root@solaris:~# umount samqfs1
[server1]root@solaris:~#
Repair the file system. Use the command samfsck -F -V family-set-name, where family-set-name is the family set name specified for the file system in the /etc/opt/SUNWsamfs/mcf file.
It is often a good idea to save the repair results to a date-stamped file for later reference and for diagnostic purposes, when necessary. So in the example, we save the results by piping the samfsck output to the command tee /var/tmp/samfsck-FV.family-set-name.`date '+%Y%m%d.%H%M%S'` (note that the command below is entered as a single line; the line break is escaped by the backslash character):
[server1]root@solaris:~# samfsck -F -V samqfs1 | tee \
/var/tmp/samfsck-FV.samqfs1.`date '+%Y%m%d.%H%M%S'`
name:     /samqfs1       version:     2A
First pass
Second pass
Third pass
NOTICE: ino 2.2,  Repaired link count from 8 to 14
Inodes processed: 123392
total data kilobytes          = 1965952
total data kilobytes free     = 1047680
total meta kilobytes          = 131040
total meta kilobytes free     = 65568
INFO:  FS samqfs1 repaired:
        start:  May 19, 2014 10:57:13 AM MDT
        finish: May 19, 2014 10:57:37 AM MDT
NOTICE: Reclaimed 70057984 bytes
NOTICE: Reclaimed 9519104 meta bytes
[server1]root@solaris:~#
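The date-stamped log name used with tee above can be built and verified on its own. A small sketch (the path and file-system name follow the example; adjust to suit):

```shell
# Build the samfsck log name with an embedded timestamp, matching the
# /var/tmp/samfsck-FV.family-set-name.YYYYMMDD.HHMMSS pattern.
fsname=samqfs1
log="/var/tmp/samfsck-FV.${fsname}.$(date '+%Y%m%d.%H%M%S')"
echo "$log"
```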
Remount the file system.
[server1]root@solaris:~# mount /samqfs1
[server1]root@solaris:~#
Stop here.
Before you add devices to an existing file system, you should consider your requirements and your alternatives. Make sure that enlarging the existing file system is the best way to meet growing capacity requirements. If you need more physical storage space to accommodate new projects or user communities, creating one or more new Oracle HSM file systems may be a better choice. Multiple, smaller file systems will generally offer better performance than one much larger file system, and the smaller file systems may be easier to create and maintain.
Once you have decided that you need to enlarge a file system, take either of the following approaches:
Proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host server1:
[server1]root@solaris:~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the file system that you need to enlarge.
In the examples, we use the vi editor. We need to enlarge two file systems, the general-purpose samqfsms file system and the high-performance samqfs2ma file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#-----------------    ---------  ---------  ---------  ------  ----------
samqfsms              100        ms         samqfsms   on
/dev/dsk/c1t3d0s3     101        md         samqfsms   on
/dev/dsk/c1t4d0s5     102        md         samqfsms   on
samqfs2ma             200        ma         samqfs2ma  on
/dev/dsk/c1t3d0s3     201        mm         samqfs2ma  on
/dev/dsk/c1t3d0s5     202        md         samqfs2ma  on
/dev/dsk/c1t4d0s5     203        md         samqfs2ma  on
If you are adding devices to a general-purpose ms file system, add additional data/metadata devices to the end of the file system definition in the mcf file. Then save the file, and close the editor.
You can add up to 252 logical devices. In the example, we add two devices, 103 and 104, to the samqfsms file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#-----------------    ---------  ---------  ---------  ------  ----------
samqfsms              100        ms         samqfsms   on
/dev/dsk/c1t3d0s3     101        md         samqfsms   on
/dev/dsk/c1t4d0s5     102        md         samqfsms   on
/dev/dsk/c1t3d0s7     103        md         samqfsms   on
/dev/dsk/c1t4d0s7     104        md         samqfsms   on
:wq
[server1]root@solaris:~#
If you are adding devices to a high-performance ma file system, add data devices and one or more mm disk devices to the end of the file system definition in the mcf file. Then save the file, and close the editor.
Always add new devices at the end of the list of existing devices. You can add up to 252, adding metadata devices proportionately as you add data devices. In the example, we add one mm metadata device, 204, and two md data devices, 205 and 206, to the samqfs2ma file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#-----------------    ---------  ---------  ---------  ------  ----------
...
samqfs2ma             200        ma         samqfs2ma  on
/dev/dsk/c1t3d0s3     201        mm         samqfs2ma  on
/dev/dsk/c1t3d0s5     202        md         samqfs2ma  on
/dev/dsk/c1t4d0s5     203        md         samqfs2ma  on
/dev/dsk/c1t5d0s6     204        mm         samqfs2ma  on
/dev/dsk/c1t3d0s7     205        md         samqfs2ma  on
/dev/dsk/c1t4d0s7     206        md         samqfs2ma  on
:wq
[server1]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any that are detected.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It will stop if it encounters an error:
[server1]root@solaris:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.
In the example below, sam-fsd reports an unspecified problem with a device:
[server1]root@solaris:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem samqfsms
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed a letter o instead of a 0 in the equipment name for device 104, the second new md device:
samqfsms              100        ms         samqfsms   on
/dev/dsk/c1t3d0s3     101        md         samqfsms   on
/dev/dsk/c1t4d0s5     102        md         samqfsms   on
/dev/dsk/c1t3d0s7     103        md         samqfsms   on
/dev/dsk/c1t4dos7     104        md         samqfsms   on
              ^
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step.
The example is a partial listing of error-free output:
[server1]root@solaris:~# sam-fsd
Trace file controls:
sam-amld /var/opt/SUNWsamfs/trace/sam-amld
cust err fatal ipc misc proc date
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
[server1]root@solaris:~#
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly. Use the command samd config:
[server1]root@solaris:~# samd config
Configuring SAM-FS
[server1]root@solaris:~#
Make sure that samd config has updated the Oracle HSM file-system configuration to include the new devices. Use the command samcmd f.
The new devices should be in the off state. In the example, samcmd f shows the new devices, 103 and 104, and both are off:
[server1]root@solaris:~# samcmd f
File systems samcmd 5.4 16:57:35 Feb 27 2014
samcmd on server1
ty      eq  state   device_name        status      high low mountpoint server
ms      100 on      samqfsms           m----2----- 80%  70% /samqfsms
 md     101 on      /dev/dsk/c1t3d0s3
 md     102 on      /dev/dsk/c1t4d0s5
 md     103 off     /dev/dsk/c1t3d0s7
 md     104 off     /dev/dsk/c1t4d0s7
[server1]root@solaris:~#
Enable the newly added devices. For each device, use the command samcmd add equipment-number, where equipment-number is the equipment ordinal number assigned to the device in the /etc/opt/SUNWsamfs/mcf file.
In the example, we enable the new devices, 103 and 104:
[server1]root@solaris:~# samcmd add 103
[server1]root@solaris:~# samcmd add 104
If you are adding devices to a shared file system, go to "Finish Configuring New Devices Added to a Shared File System".
If you are adding devices to an unshared, standalone file system, make sure that the devices were added and are ready for use by the file system. Use the command samcmd m, and check the results.
When a device is in the on state, it has been added successfully and is ready to use. In the example, we have successfully added devices 103 and 104:
[server1]root@solaris:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on server1
ty      eq  status       use  state ord capacity  free     ra  part high low
ms      100 m----2-----  13%  on        3.840G    3.588G   1M  16   80%  70%
 md     101              31%  on    0   959.938M  834.250M
 md     102              13%  on    1   959.938M  834.250M
 md     103               0%  on    2   959.938M  959.938M
 md     104               0%  on    3   959.938M  959.938M
[server1]root@solaris:~#
Stop here.
When you add devices to a shared file system, you must carry out a few more steps before the devices are configured on all file-system hosts. Proceed as follows:
Log in to the file-system metadata server host as root.
In the example, the metadata server host is named metadata-server:
[metadata-server]root@solaris:~#
Make sure that the new devices were added to the metadata server. Use the command samcmd m.
When a device is in the unavail state, it has been added successfully but is not yet ready for use. In the example, we have successfully added devices 103 and 104:
[metadata-server]root@solaris:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on metadata-server
ty      eq  status       use  state   ord capacity  free     ra  part high low
ms      100 m----2-----  13%  on          3.840G    3.588G   1M  16   80%  70%
 md     101              31%  on      0   959.938M  834.250M
 md     102              13%  on      1   959.938M  834.250M
 md     103               0%  unavail 2   959.938M  959.938M
 md     104               0%  unavail 3   959.938M  959.938M
[metadata-server]root@solaris:~#
Log in to each file-system client host as root.
Remember to include potential metadata servers, since they are also clients. In the example, we need to log in to a potential metadata server, named potential-metadata-server, and two clients, client1 and client2Linux. So we open three terminal windows and use secure shell (ssh):
[metadata-server]root@solaris:~# ssh root@potential-metadata-server
Password:
[potential-metadata-server]root@solaris:~#

[metadata-server]root@solaris:~# ssh root@client1
Password:
[client1]root@solaris:~#

[metadata-server]root@solaris:~# ssh root@client2Linux
Password:
[client2Linux]:[root@linux ~]#
If the client is a Linux client, unmount the shared file system:
[client2Linux]:[root@linux ~]# umount /samqfsms
On each client, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and add the new devices to the end of the file-system definition, just as you did on the server.
In the example, we add devices 103 and 104 to the mcf file on client1:
[client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family    Device Additional
# Identifier         Ordinal   Type      Set       State  Parameters
#------------------  --------- --------- --------- ------ ----------
samqfsms             100       ms        samqfsms  on     shared
/dev/dsk/c1t3d0s3    101       md        samqfsms  on
/dev/dsk/c1t4d0s5    102       md        samqfsms  on
/dev/dsk/c1t3d0s7    103       md        samqfsms  on
/dev/dsk/c1t4d0s7    104       md        samqfsms  on
:wq
[client1]root@solaris:~#
On each client, check the mcf file for errors by running the sam-fsd command, and correct any errors that are detected:
[client1]root@solaris:~# sam-fsd
On each client, tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
[client1]root@solaris:~# samd config
If the client is a Linux client, mount the shared file system:
[client2Linux]:[root@linux ~]# mount /samqfsms
Once all clients have been configured, return to the metadata server, and enable storage allocation on the new devices. For each device, use the command samcmd alloc equipment-number, where equipment-number is the equipment ordinal number assigned to the device in the /etc/opt/SUNWsamfs/mcf file.
In the example, we enable storage allocation on devices 103 and 104:
[metadata-server]root@solaris:~# samcmd alloc 103
[metadata-server]root@solaris:~# samcmd alloc 104
Finally, make sure that the devices are ready for use by the file system. Use the command samcmd m, and check the results.
When a device is in the on state, it has been added successfully and is ready to use. In the example, we have successfully added devices 103 and 104:
[metadata-server]root@solaris:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on metadata-server
ty      eq  status       use  state ord capacity  free     ra  part high low
ms      100 m----2-----  13%  on        3.840G    3.588G   1M  16   80%  70%
 md     101              31%  on    0   959.938M  834.250M
 md     102              13%  on    1   959.938M  834.250M
 md     103               0%  on    2   959.938M  959.938M
 md     104               0%  on    3   959.938M  959.938M
[metadata-server]root@solaris:~#
Stop here.
Proceed as follows:
Log in to the file-system server host as root.
In the example, the metadata server host is named server1:
[server1]root@solaris:~#
Before you unmount an archiving file system, you must carry out the procedure "Idle Archiving and Staging Processes".
Unmount the file system. Do not proceed until you have unmounted it. In the example, we unmount file system samqfs1:
[server1]root@solaris:~# umount samqfs1
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the file system that you need to enlarge.
In the example, we use the vi editor. We need to enlarge the samqfs1 file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family   Device Additional
# Identifier         Ordinal   Type      Set      State  Parameters
#------------------  --------- --------- -------- ------ ----------
samqfs1              100       ms        samqfs1  on
/dev/dsk/c1t3d0s3    101       md        samqfs1  on
/dev/dsk/c1t4d0s5    102       md        samqfs1  on
If you are adding devices to a high-performance ma file system, you must add metadata storage along with the data storage. Add enough additional mm disk devices to store the metadata for the data devices that you add. Then save the file, and close the editor.
You can add up to 252 logical devices. In the example, we add one mm metadata device and two md data devices to the samqfs2ma file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family    Device Additional
# Identifier         Ordinal   Type      Set       State  Parameters
#------------------  --------- --------- --------- ------ ----------
samqfs2ma            200       ma        samqfs2ma on
/dev/dsk/c1t3d0s3    201       mm        samqfs2ma on
/dev/dsk/c1t5d0s6    204       mm        samqfs2ma on
/dev/dsk/c1t3d0s5    202       md        samqfs2ma on
/dev/dsk/c1t4d0s5    203       md        samqfs2ma on
/dev/dsk/c1t3d0s7    205       md        samqfs2ma on
/dev/dsk/c1t4d0s7    206       md        samqfs2ma on
:wq
[server1]root@solaris:~#
If you are adding devices to a general-purpose ms file system, add additional data/metadata devices to the file-system definition in the mcf file. Then save the file, and close the editor.
You can add up to 252 logical devices. In the example, we add two devices to the samqfs1 file system:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family   Device Additional
# Identifier         Ordinal   Type      Set      State  Parameters
#------------------  --------- --------- -------- ------ ----------
samqfs1              100       ms        samqfs1  on
/dev/dsk/c1t3d0s3    101       md        samqfs1  on
/dev/dsk/c1t4d0s5    102       md        samqfs1  on
/dev/dsk/c1t3d0s7    103       md        samqfs1  on
/dev/dsk/c1t4d0s7    104       md        samqfs1  on
:wq
[server1]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any errors that are detected.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It stops if it encounters an error:
[server1]root@solaris:~# sam-fsd
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
[server1]root@solaris:~# samd config
Incorporate the new devices into the file system. Use the command samgrowfs family-set-name, where family-set-name is the family set name specified for the file system in the /etc/opt/SUNWsamfs/mcf file.
In the example, we grow the samqfs1 file system:
[server1]root@solaris:~# samgrowfs samqfs1
Remount the file system:
[server1]root@solaris:~# mount /samqfs1
If you added devices to an archiving file system, restart the Oracle HSM library-management daemon. Use the command samd start:
[server1]root@solaris:~# samd start
If you neglected to unmount the file system before making changes and if, consequently, the file system will not mount, restore the original mcf file by deleting the references to the added devices. Then run samd config to restore the configuration, unmount the file system, and start over.
Stop here.
When required, you can remove data devices from mounted Oracle HSM file systems. Typically this becomes necessary when you need to replace a failed unit or when you need to free up under-utilized devices for other uses. There are, however, some limitations.
You can only remove data devices. You cannot remove any devices used to hold metadata, since metadata defines the organization of the file system itself. This means that you can remove md, mr, and striped-group devices from high-performance ma file systems only. You cannot remove mm metadata devices from ma file systems. Nor can you remove md devices from general-purpose ms file systems, since these devices store both data and metadata.
To remove devices, you must also have somewhere to move any valid data files that reside on the target device. This means that you cannot remove all the devices. One device must always remain available in the file system and it must have enough free capacity to hold all files residing on the devices that you remove. So, if you need to remove a striped group, you must have another available striped group configured with an identical number of member devices.
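The capacity prerequisite above can be checked mechanically before you start a removal. The sketch below is not an Oracle HSM tool: it parses captured samcmd m-style output (the report text and numbers are illustrative) and succeeds only if the free space on the remaining data devices can absorb the data used on the device to be removed. On a live system you would pipe in real samcmd m output after normalizing the unit suffixes.

```shell
#!/bin/sh
# can_remove EQ -- reads a "samcmd m"-style report on stdin and succeeds
# only if the free space on the other data devices can hold the data
# currently used on device EQ. Capacities here are plain megabyte numbers;
# a real report would need its unit suffixes (M, G) normalized first.
can_remove() {
  awk -v target="$1" '
    $1 == "md" || $1 == "mr" {
      cap = $6 + 0; free = $7 + 0
      if ($2 == target) used_on_target = cap - free
      else              free_elsewhere += free
    }
    END { exit !(free_elsewhere >= used_on_target) }'
}

# Illustrative report: device 103 holds 125 MB of data.
report='md 101 27% on 0 959 703
md 102 28% on 1 899 646
md 103 13% on 2 959 834
md 104  0% on 3 959 959'

if printf '%s\n' "$report" | can_remove 103; then
  result="safe to remove 103"
else
  result="not enough free space for 103"
fi
echo "$result"
```

The same function run against the device you actually plan to remove gives a quick go/no-go answer before you commit to samcmd release or samcmd remove.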
To remove devices, proceed as follows:
Carry out the following tasks:
Log in to the file-system server host as root.
In the example, the metadata server host is named server1:
[server1]root@solaris:~#
Create a samexplorer report. Use the command samexplorer path/hostname.YYYYMMDD.hhmmz.tar.gz, where:
path is the path to the chosen directory.
hostname is the name of the Oracle HSM file-system host.
YYYYMMDD.hhmmz is a date and time stamp.
By default, the file is called /tmp/SAMreport.hostname.YYYYMMDD.hhmmz.tar.gz. In the example, we use the directory /zfs1/sam_config/explorer/, where /zfs1 is a file system that has no components in common with the Oracle HSM file system (note that the command below is entered as a single line; the line break is escaped by the backslash character):
[server1]root@solaris:~# samexplorer \
/zfs1/sam_config/explorer/samhost1.20140130.1659MST.tar.gz

Report name:     /zfs1/sam_config/explorer/samhost1.20140130.1659MST.tar.gz
Lines per file:  1000
Output format:   tar.gz (default) Use -u for unarchived/uncompressed.
Please wait.............................................
Please wait.............................................
Please wait......................................
The following files should now be ftp'ed to your support provider
as ftp type binary.
/zfs1/sam_config/explorer/samhost1.20140130.1659MST.tar.gz
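The report-name pattern can be reproduced with ordinary shell date formatting when you want to generate the name yourself. This is a sketch, not part of the product; the report directory is an assumption, and the final samexplorer call is shown as a comment:

```shell
#!/bin/sh
# Build a samexplorer-style report name:
#   path/SAMreport.hostname.YYYYMMDD.hhmmz.tar.gz
REPORT_DIR=/zfs1/sam_config/explorer        # illustrative location
HOST=$(hostname)
STAMP=$(date '+%Y%m%d.%H%M%Z')              # e.g. 20140130.1659MST
REPORT="$REPORT_DIR/SAMreport.$HOST.$STAMP.tar.gz"
echo "$REPORT"
# Then generate the report with:
#   samexplorer "$REPORT"
```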
Log in to the file-system server host as root.
In the example, the metadata server host is named server1:
[server1]root@solaris:~#
Select the location where the recovery point file will be stored. The selected location must share no devices with the file system that you are backing up and must have room to store an unusually large file.
The devices that we intend to remove may contain files that have not been archived. Since such files exist only as single copies, we will have to create a recovery point file that stores at least some data as well as metadata. This can substantially increase the size of the recovery point file.
In the example, we create a subdirectory, tmp/, in a file system that has no components in common with the Oracle HSM file system, /zfs1:
[server1]root@solaris:~# mkdir /zfs1/tmp/
[server1]root@solaris:~#
Change to the file system's root directory.
In the example, we change to the mount-point directory /samqfs1:
[server1]root@solaris:~# cd /samqfs1
[server1]root@solaris:~#
Back up the file-system metadata and any unarchived data. Use the command samfsdump -f -u recovery-point, where recovery-point is the path and file name of the finished recovery-point file.
Note that the -u option adds the data portion of unarchived files to the recovery point. This can greatly increase the size of the file.
In the example, we create a recovery-point file for the samqfs1 file system called samqfs1-20140313.025215 in the directory /zfs1/tmp/. We check the result using the command ls -l (note that the samfsdump command below is entered as a single line; the line break is escaped by the backslash character):
[server1]root@solaris:~# cd /samqfs1
[server1]root@solaris:~# samfsdump -f \
/zfs1/tmp/samqfs1-`date '+%Y%m%d.%H%M%S'` -T /samqfs1
samfsdump statistics:
    Files:                            10010
    Directories:                      2
    Symbolic links:                   0
    Resource files:                   0
    Files as members of hard links :  0
    Files as first hard link :        0
    File segments:                    0
    File archives:                    10010
    Damaged files:                    0
    Files with data:                  0
    File warnings:                    0
    Errors:                           0
    Unprocessed dirs:                 0
    File data bytes:                  0
[server1]root@solaris:~# ls -l /zfs1/tmp/samqfs1*
-rw-r--r-- 1 root other 5376517 Mar 13 02:52 /zfs1/tmp/samqfs1-20140313.025215
[server1]root@solaris:~#
Now go to "Remove Devices from a Mounted High-Performance File System".
You must remove devices one at a time. For each device, proceed as follows:
Log in to the file-system server host as root.
In the example, the metadata server host is named server1:
[server1]root@solaris:~#
Open the /etc/opt/SUNWsamfs/mcf file, and note the equipment ordinal number for the device that you need to remove.
In the example, we use the vi editor. We need to remove device /dev/dsk/c1t4d0s7 from the equipment list for the samqfs1 file system. Its equipment ordinal number is 104:
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family   Device Additional
# Identifier         Ordinal   Type      Set      State  Parameters
#------------------  --------- --------- -------- ------ --------------
samqfs1              100       ms        samqfs1  on
/dev/dsk/c1t3d0s3    101       md        samqfs1  on
/dev/dsk/c1t4d0s5    102       md        samqfs1  on
/dev/dsk/c1t3d0s7    103       md        samqfs1  on
/dev/dsk/c1t4d0s7    104       md        samqfs1  on
:q
[server1]root@solaris:~#
Before you try to remove a device, make sure that the remaining devices in the file system can accept any files that have to be moved from the device that you intend to delete.
Make sure that the remaining devices have adequate capacity.
If the device is a striped group, make sure that the file system contains another striped group with an equivalent configuration.
For example, if the striped group that you plan to remove has four equipment numbers, you must have another striped group that is in the ON state and has four equipment numbers.
Make sure that the file system that you plan to modify has a version 2A superblock. Use the command samfsinfo filesystem-name, where filesystem-name is the name of the file system.
In the example, file system samqfs1 uses a version 2A superblock:
[server1]root@solaris:~# /opt/SUNWsamfs/sbin/samfsinfo samqfs1
samfsinfo: filesystem samqfs1 is mounted.
name:     samqfs1       version:     2A
time:     Tuesday, June 28, 2011 6:07:36 AM MDT
feature:  Aligned Maps
count:    4
...
[server1]root@solaris:~#
If the file system does not have a version 2A superblock, stop here. You cannot remove devices while this file system is mounted.
If you are removing devices from an Oracle HSM archiving file system, release all archived files from the disk device that you are removing. Use the command samcmd release equipment-number, where equipment-number is the equipment ordinal number that identifies the device in the /etc/opt/SUNWsamfs/mcf file.
If the device is a striped group, provide the equipment number of any device in the group.
The Oracle HSM software changes the state of the specified device to noalloc (no allocations) so that no new files are stored on it, and starts releasing previously archived files. Once the device contains no unarchived files, the software removes the device from the file-system configuration and changes its state to off.
In the example, we release files from device 104 in the archiving file system samqfs1:
[server1]root@solaris:~# samcmd release 104
If you are removing a device from an Oracle HSM non-archiving file system, move all remaining valid files off the disk device that you are removing. Use the command samcmd remove equipment-number, where equipment-number is the equipment ordinal number that identifies the device in the /etc/opt/SUNWsamfs/mcf file.
The Oracle HSM software changes the state of the specified device to noalloc (no allocations) so that no new files are stored on it, and starts moving files that contain valid data to the remaining devices in the file system. When all files have been moved, the software removes the device from the file-system configuration and changes its state to off.
In the example, we move files off of device 104:
[server1]root@solaris:~# samcmd remove 104
Monitor the progress of the selected process, samcmd remove or samcmd release. Use the command samcmd m, and/or watch the log file and the /var/opt/SUNWsamfs/trace/sam-shrink file.
The release process completes fairly quickly if all files have been archived, because it merely releases the space associated with files that have already been copied to archival media. Depending on the amount of data and the number of files, the remove process takes considerably longer, because it must move files between disk devices.
[server1]root@solaris:~# samcmd m
ty      eq  status       use  state    ord capacity  free     ra  part high low
ms      100 m----2-----  27%  on           3.691G    2.628G   1M  16   80%  70%
 md     101              27%  on       0   959.938M  703.188M
 md     102              28%  on       1   899.938M  646.625M
 md     103              13%  on       2   959.938M  834.250M
 md     104               0%  noalloc  3   959.938M  959.938M
[server1]root@solaris:~#
If you are using samcmd release and the target device does not enter the off state, there are unarchived files on the device. Wait for the archiver to run and for archiving to complete. Then use the command samcmd release again. You can check the progress of archiving by using the command samcmd a.
The release process cannot free the disk space until unarchived files are archived.
[server1]root@solaris:~# samcmd a
Archiver status samcmd 5.4 14:12:14 Mar 1 2014
samcmd on server1
sam-archiverd:  Waiting for resources
sam-arfind: samqfs1 mounted at /samqfs1
Files waiting to start 4  schedule 2  archiving 2
[server1]root@solaris:~#
If samcmd release fails because one or more unarchived files cannot be archived, move the unarchived files to another device. Use the command samcmd remove equipment-number, just as you would when removing devices from a non-archiving, standalone file system.
In the example, we move files off of device 104:
[server1]root@solaris:~# samcmd remove 104
Once the device state has been changed to off, open the /etc/opt/SUNWsamfs/mcf file in a text editor, locate the file system, and update the equipment list to reflect the changes. Save the file, and close the editor.
In the example, samcmd m shows that 104 is off. So we use the vi editor to open the mcf file, remove the entry for device 104 from the equipment list for the samqfs1 file system, and save our changes:
[server1]root@solaris:~# samcmd m
ty      eq  status       use  state ord capacity  free     ra  part high low
ms      100 m----2-----  27%  on        3.691G    2.628G   1M  16   80%  70%
 md     101              27%  on    0   959.938M  703.188M
 md     102              28%  on    1   899.938M  646.625M
 md     103              13%  on    2   959.938M  834.250M
 md     104               0%  off   3   959.938M  959.938M
[server1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment          Equipment Equipment Family   Device Additional
# Identifier         Ordinal   Type      Set      State  Parameters
#------------------  --------- --------- -------- ------ ---------
samqfs1              100       ms        samqfs1  on
/dev/dsk/c1t3d0s3    101       md        samqfs1  on
/dev/dsk/c1t4d0s5    102       md        samqfs1  on
/dev/dsk/c1t3d0s7    103       md        samqfs1  on
:wq
[server1]root@solaris:~#
Check the modified mcf file for errors by running the sam-fsd command, and correct any errors that are detected.
The sam-fsd command stops if it encounters an error:
[server1]root@solaris:~# sam-fsd
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
[server1]root@solaris:~# samd config
Stop here.
This section outlines the following tasks:
When you mount or unmount a shared file system, the order in which you mount or unmount the metadata server and the clients is important.
For failover purposes, the mount options should be the same on the metadata server and all potential metadata servers. For example, you can create a samfs.cmd file that contains the mount options and copy that file to all of the hosts.
For more information about mounting shared file systems, see the mount_samfs man page.
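The ordering rule above, metadata server first on mount and clients first on unmount, can be captured in a small wrapper. This sketch is not part of Oracle HSM; the host names are illustrative, and each command is echoed through run() rather than executed, so you can inspect the order before switching run() to real execution:

```shell
#!/bin/sh
MDS=sharefs-mds
CLIENTS="sharefs-client1 sharefs-client2"
MP=/sharefs

run() { echo "$@"; }   # dry run: print the command instead of executing it

mount_all() {          # metadata server first, then clients in any order
  run ssh root@"$MDS" mount "$MP"
  for c in $CLIENTS; do run ssh root@"$c" mount "$MP"; done
}

umount_all() {         # clients first, then the server with a grace period
  for c in $CLIENTS; do run ssh root@"$c" umount "$MP"; done
  run ssh root@"$MDS" umount -o await_clients=60 "$MP"
}

mount_all
umount_all
```

Changing run() to execute its arguments turns the dry run into the real sequence; the function bodies enforce the ordering either way.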
Log in to the Oracle HSM metadata server and client hosts as root.
In the example, we log in to the metadata server host for the sharefs file system, sharefs-mds. Then we open a terminal window for each client, sharefs-client1 and sharefs-client2, and use ssh (Secure Shell) to log in:
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~#

[sharefs-mds]root@solaris:~# ssh root@sharefs-client2
Password:
[sharefs-client2]root@solaris:~#
If the file system has an entry in the Solaris /etc/vfstab file, mount the shared file system on the metadata server host using the command mount mountpoint, where mountpoint is the mount-point directory on the host's root file system.
Always mount the file system on the metadata server host first, before mounting the file system on clients.
In the example, the sharefs file system has the following entry in the /etc/vfstab file:
sharefs   -   /sharefs   samfs   -   no   shared
So we can mount the file system by supplying only the mount-point parameter:
[sharefs-mds]root@solaris:~# mount /sharefs
[sharefs-mds]root@solaris:~#
If the file system does not have an entry in the Solaris /etc/vfstab file, mount the shared file system on the metadata server host using the command mount -F samfs -o shared mountpoint, where mountpoint is the mount-point directory on the host's root file system.
Always mount the file system on the metadata server host first, before mounting the file system on clients.
In the example, the sharefs file system has no entry in the /etc/vfstab file:
[sharefs-mds]root@solaris:~# mount -F samfs -o shared /sharefs
[sharefs-mds]root@solaris:~#
If the file system has an entry in the Solaris /etc/vfstab file, mount the shared file system on each client host using the command mount mountpoint, where mountpoint is the mount-point directory on the host's root file system.
You can mount the file system on the client hosts in any order:
[sharefs-client1]root@solaris:~# mount /sharefs
[sharefs-client1]root@solaris:~#

[sharefs-client2]root@solaris:~# mount /sharefs
[sharefs-client2]root@solaris:~#
If the file system does not have an entry in the Solaris /etc/vfstab file, mount the shared file system on each client host using the command mount -F samfs -o shared mountpoint, where mountpoint is the mount-point directory on the host's root file system.
You can mount the file system on the client hosts in any order:
[sharefs-client1]root@solaris:~# mount -F samfs -o shared /sharefs
[sharefs-client1]root@solaris:~#

[sharefs-client2]root@solaris:~# mount -F samfs -o shared /sharefs
[sharefs-client2]root@solaris:~#
Stop here.
Log in to the Oracle HSM metadata server and client hosts as root.
In the example, we log in to the metadata server host for the sharefs file system, sharefs-mds. Then we open a terminal window for each client, sharefs-client1 and sharefs-client2, and use ssh (Secure Shell) to log in:
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~#

[sharefs-mds]root@solaris:~# ssh root@sharefs-client2
Password:
[sharefs-client2]root@solaris:~#
If the file system is shared through NFS or SAMBA, unshare the file system before you unmount it. On the metadata server, use the command unshare mount-point, where mount-point is the mount-point directory of the Oracle HSM file system:
[sharefs-mds]root@solaris:~# unshare /sharefs
[sharefs-mds]root@solaris:~#
Unmount the Oracle HSM shared file system from each client. Use the command umount mount-point, where mount-point is the mount-point directory of the Oracle HSM file system.
See the umount_samfs man page for further details. In the example, we unmount /sharefs from our two clients, sharefs-client1 and sharefs-client2:
[sharefs-client1]root@solaris:~# umount /sharefs
[sharefs-client1]root@solaris:~# exit
[sharefs-mds]root@solaris:~#

[sharefs-client2]root@solaris:~# umount /sharefs
[sharefs-client2]root@solaris:~# exit
[sharefs-mds]root@solaris:~#
Unmount the Oracle HSM shared file system from the metadata server. Use the command umount -o await_clients=interval mount-point, where mount-point is the mount-point directory of the Oracle HSM file system and interval is the number of seconds by which the -o await_clients option delays execution.
When the umount command is issued on the metadata server of an Oracle HSM shared file system, the -o await_clients option makes umount wait the specified number of seconds so that clients have time to unmount the share. It has no effect if you unmount an unshared file system or issue the command on an Oracle HSM client. See the umount_samfs man page for further details.
In the example, we unmount the /sharefs file system from the server, allowing 60 seconds for clients to unmount:
[sharefs-mds]root@solaris:~# umount -o await_clients=60 /sharefs
[sharefs-mds]root@solaris:~#
Stop here.
This section provides instructions for configuring additional hosts as clients of a shared file system and for de-configuring existing clients. It includes the following sections:
There are three parts to the process of adding a client host to a shared file system:
First, you Add the Host Information to the Shared File System Configuration.
Then you configure the shared file system on the host, using the procedure specific to the host operating system, either Configure the Shared File System on a Solaris Client or Configure the Shared File System on a Linux Client Host.
Finally, you mount the shared file system on the host, using the procedure specific to the host operating system, either Mount the Shared File System on a Solaris Host or Mount the Shared File System on a Linux Client Host.
Log in to the Oracle HSM metadata server as root.
In the example, the Oracle HSM shared file system is sharefs, and the metadata server host is sharefs-mds:
[sharefs-mds]root@solaris:~#
Back up the file /etc/opt/SUNWsamfs/hosts.filesystem, where filesystem is the name of the file system to which you are adding the client host.
Note that the command below is entered as a single line; the line break is escaped by the backslash character:
[sharefs-mds]root@solaris:~# cp /etc/opt/SUNWsamfs/hosts.sharefs \
/etc/opt/SUNWsamfs/hosts.sharefs.bak
If the shared file system is mounted, run the command samsharefs filesystem from the active metadata server, redirecting output to a file, /etc/opt/SUNWsamfs/hosts.filesystem, where filesystem is the name of the file system to which you are adding the client host.
The samsharefs command displays the host configuration for an Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file (note that the command below is entered as a single line; the line break is escaped by the backslash character):
[sharefs-mds]root@solaris:~# samsharefs sharefs > \
/etc/opt/SUNWsamfs/hosts.sharefs
If the shared file system is not mounted, run the command samsharefs -R filesystem from an active or potential metadata server, redirecting output to the file /etc/opt/SUNWsamfs/hosts.filesystem, where filesystem is the name of the file system to which you are adding the client host.
The samsharefs -R command can be run only from an active or potential metadata server (see the samsharefs man page for more details). The samsharefs command displays the host configuration for an Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file. In the example, we run the command from the metadata server sharefs-mds (note that the command below is entered as a single line; the line break is escaped by the backslash character):
[sharefs-mds]root@solaris:~# samsharefs -R sharefs > \
/etc/opt/SUNWsamfs/hosts.sharefs
Open the newly created hosts file in a text editor.
In the example, we use the vi editor. The host configuration includes the active metadata server, sharefs-mds, one client that is also a potential metadata server, sharefs-mds_alt, and two other clients, sharefs-client1 and sharefs-client2:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs
#                                            Server  On/  Additional
#Host Name           Network Interface       Ordinal Off  Parameters
#------------------  ----------------------  ------- ---  ----------
sharefs-mds          10.79.213.117           1       0    server
sharefs-mds_alt      10.79.213.217           2       0
sharefs-client1      10.79.213.133           0       0
sharefs-client2      10.79.213.47            0       0
In the hosts file, add a line for the new client host, save the file, and close the editor.
In the example, we add an entry for the host sharefs-client3:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs
#                                            Server  On/  Additional
#Host Name           Network Interface       Ordinal Off  Parameters
#------------------  ----------------------  ------- ---  ----------
sharefs-mds          10.79.213.117           1       0    server
sharefs-mds_alt      10.79.213.217           2       0
sharefs-client1      10.79.213.133           0       0
sharefs-client2      10.79.213.47            0       0
sharefs-client3      10.79.213.49            0       0
:wq
[sharefs-mds]root@solaris:~#
If the file system is mounted, update the file system from the active metadata server. Use the command samsharefs -u filesystem, where filesystem is the name of the file system to which you are adding the client host.
The samsharefs command re-reads the revised hosts file and updates the configuration:
[sharefs-mds]root@solaris:~# samsharefs -u sharefs
If the file system is not mounted, update the file system from an active or potential metadata server. Use the command samsharefs -R -u filesystem, where filesystem is the name of the file system to which you are adding the client host.
The samsharefs command re-reads the revised hosts file and updates the configuration:
[sharefs-mds]root@solaris:~# samsharefs -R -u sharefs
If you are adding a Solaris host as a client, go to "Configure the Shared File System on a Solaris Client".
If you are adding a Linux host as a client, go to "Configure the Shared File System on a Linux Client Host".
On the shared file-system client, log in as root.
In the example, the Oracle HSM shared file system is sharefs, and the client host is sharefs-client1:
[sharefs-client1]root@solaris:~#
In a terminal window, retrieve the configuration information for the shared file system. Use the command samfsconfig
device-path
, where device-path
is the location where the command should start to search for file-system disk devices (such as /dev/dsk/*
or /dev/zvol/dsk/rpool/*
).
[sharefs-client1]root@solaris:~# samfsconfig /dev/dsk/*
If the host has access to the metadata devices for the file system and is thus suitable for use as a potential metadata server, the samfsconfig
output closely resembles the mcf
file that you created on the file-system metadata server.
In our example, host sharefs-client1
has access to the metadata devices (equipment type mm
), so the command output shows the same equipment listed in the mcf
file on the server, sharefs-mds
. Only the host-assigned device controller numbers differ:
[sharefs-client1]root@solaris:~# samfsconfig /dev/dsk/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
# Generation 0 Eq count 4 Eq meta count 1
sharefs              300    ma   sharefs   -
/dev/dsk/c1t0d0s0    301    mm   sharefs   -
/dev/dsk/c1t3d0s0    302    mr   sharefs   -
/dev/dsk/c1t3d0s1    303    mr   sharefs   -
If the host does not have access to the metadata devices for the file system, the samfsconfig
command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal
0
—the metadata device—under Missing
Slices
, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, host sharefs-client2
has access to the data devices only. So the samfsconfig
output looks like this:
[sharefs-client2]root@solaris:~# samfsconfig /dev/dsk/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
# Missing slices
# Ordinal 0
#  /dev/dsk/c4t3d0s0    302    mr   sharefs   -
#  /dev/dsk/c4t3d0s1    303    mr   sharefs   -
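The presence or absence of the Missing slices section is therefore a quick way to tell whether a host can serve as a potential metadata server. The check below is only a sketch: it greps saved samfsconfig output (here pasted into a temporary file for illustration) rather than querying the devices directly:

```shell
#!/bin/sh
# Sketch: saved samfsconfig output from the example client (clients-only case).
cat > /tmp/samfsconfig.out <<'EOF'
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
# Missing slices
# Ordinal 0
#  /dev/dsk/c4t3d0s0    302    mr   sharefs   -
EOF
# "Missing slices" means the metadata devices are not visible from this host.
if grep -q 'Missing slices' /tmp/samfsconfig.out; then
  echo "client only: metadata devices not visible from this host"
else
  echo "potential metadata server: metadata devices visible"
fi
```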
Copy the entries for the shared file system from the samfsconfig
output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and paste the copied entries into the file.
In our first example, the host, sharefs-client1
, has access to the metadata devices for the file system, so the mcf
file starts out looking like this:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
sharefs              300        ma         sharefs    -
/dev/dsk/c1t0d0s0    301        mm         sharefs    -
/dev/dsk/c1t3d0s0    302        mr         sharefs    -
/dev/dsk/c1t3d0s1    303        mr         sharefs    -
In the second example, the host, sharefs-client2
, does not have access to the metadata devices for the file system, so the mcf
file starts out looking like this:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment             Equipment  Equipment  Family     Device  Additional
# Identifier            Ordinal    Type       Set        State   Parameters
#----------------       ---------  ---------  ---------  ------  ---------------
#  /dev/dsk/c4t3d0s0    302        mr         sharefs    -
#  /dev/dsk/c4t3d0s1    303        mr         sharefs    -
If the host has access to the metadata devices for the file system, add the shared
parameter to the Additional Parameters
field of the entry for the shared file system.
In the first example, the host, sharefs-client1
, has access to the metadata:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
sharefs              300        ma         sharefs    -       shared
/dev/dsk/c1t0d0s0    301        mm         sharefs    -
/dev/dsk/c1t3d0s0    302        mr         sharefs    -
/dev/dsk/c1t3d0s1    303        mr         sharefs    -
If the host does not have access to the metadata devices for the file system, add a line for the shared file system and include the shared parameter:

[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment             Equipment  Equipment  Family     Device  Additional
# Identifier            Ordinal    Type       Set        State   Parameters
#----------------       ---------  ---------  ---------  ------  ---------------
sharefs                300        ma         sharefs    -       shared
#  /dev/dsk/c4t3d0s0    302        mr         sharefs    -
#  /dev/dsk/c4t3d0s1    303        mr         sharefs    -
If the host does not have access to the metadata devices for the file system, add a line for the metadata device. Set the Equipment
Identifier
field to nodev
(no device) and set the remaining fields to exactly the same values as they have on the metadata server:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment             Equipment  Equipment  Family     Device  Additional
# Identifier            Ordinal    Type       Set        State   Parameters
#----------------       ---------  ---------  ---------  ------  ---------------
sharefs                300        ma         sharefs    on      shared
nodev                  301        mm         sharefs    on
#  /dev/dsk/c4t3d0s0    302        mr         sharefs    -
#  /dev/dsk/c4t3d0s1    303        mr         sharefs    -
If the host does not have access to the metadata devices for the file system, uncomment the entries for the data devices.
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
sharefs              300        ma         sharefs    on      shared
nodev                301        mm         sharefs    on
/dev/dsk/c4t3d0s0    302        mr         sharefs    -
/dev/dsk/c4t3d0s1    303        mr         sharefs    -
Make sure that the Device State
field is set to on
for all devices, save the mcf
file, and close the editor.
In our first example, the host, sharefs-client1
, has access to the metadata devices for the file system, so the mcf
file ends up looking like this:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
sharefs              300        ma         sharefs    on      shared
/dev/dsk/c1t0d0s0    301        mm         sharefs    on
/dev/dsk/c1t3d0s0    302        mr         sharefs    on
/dev/dsk/c1t3d0s1    303        mr         sharefs    on
:wq
[sharefs-client1]root@solaris:~#
In the second example, the host, sharefs-client2, does not have access to the metadata devices for the file system, so the mcf file ends up like this:

[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
sharefs              300        ma         sharefs    on      shared
nodev                301        mm         sharefs    on
/dev/dsk/c4t3d0s0    302        mr         sharefs    on
/dev/dsk/c4t3d0s1    303        mr         sharefs    on
:wq
[sharefs-client2]root@solaris:~#
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It stops if it encounters an error. In the example, we check the mcf
file on sharefs-client1
:
[sharefs-client1]root@solaris:~# sam-fsd
On the shared file-system client, log in as root.
In the example, the Oracle HSM shared file system is sharefs
, and the host is a client named sharefs-client1
:
[sharefs-client1]root@solaris:~#
Back up the operating system's /etc/vfstab
file.
[sharefs-client1]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab
file in a text editor, and add a line for the shared file system.
In the example, we open the file in the vi
text editor and add a line for the sharefs
family set device:
[sharefs-client1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  ------------------------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
sharefs    -        /sharefs  samfs   -     no
To mount the file system on the client as a shared file system, enter the shared
option in the Mount Options
column of the vfstab
entry for the shared file system.
In the example, we mount the shared file system sharefs with the shared option, editing the vfstab entry as shown below:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- ------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - no shared
Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab
file. Then save the /etc/vfstab
file.
In the example, we add no additional mount options:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - no shared
:wq
[sharefs-client1]root@solaris:~#
Create the mount point specified in the /etc/vfstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /sharefs
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
[sharefs-client1]root@solaris:~# mkdir /sharefs
[sharefs-client1]root@solaris:~# chmod 755 /sharefs
[sharefs-client1]root@solaris:~#
Mount the shared file system:
[sharefs-client1]root@solaris:~# mount /sharefs
[sharefs-client1]root@solaris:~#
If you are adding a potential metadata server host as a distributed tape I/O datamover, go to "Configuring Datamover Clients for Distributed Tape I/O".
Stop here.
On the Linux client, log in as root
.
In the example, the Oracle HSM shared file system is sharefs
, and the host is a Linux client named sharefs-clientL
:
[sharefs-clientL][root@linux ~]#
In a terminal window, retrieve the configuration information for the shared file system using the samfsconfig
device-path
command, where device-path
is the location where the command should start to search for file-system disk devices (such as /dev/*
).
Since Linux hosts do not have access to the metadata devices for the file system, the samfsconfig
command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal
0
—the metadata device—under Missing
Slices
, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, the samfsconfig
output for Linux host sharefs-clientL
looks like this:
[sharefs-clientL][root@linux ~]# samfsconfig /dev/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
#
# Missing slices
# Ordinal 0
#  /dev/sda4    302    mr   sharefs   -
#  /dev/sda5    303    mr   sharefs   -
Copy the entries for the shared file system from the samfsconfig
output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and paste the copied entries into the file.
In the example, the mcf file for the Linux host, sharefs-clientL, starts out looking like this:

[sharefs-clientL][root@linux ~]# vi /etc/opt/SUNWsamfs/mcf
# Equipment      Equipment  Equipment  Family     Device  Additional
# Identifier     Ordinal    Type       Set        State   Parameters
#------------------  ---------  ---------  ---------  ------  -------------
#  /dev/sda4     302        mr         sharefs    -
#  /dev/sda5     303        mr         sharefs    -
In the mcf
file, insert a line for the shared file system, and include the shared
parameter.
# Equipment      Equipment  Equipment  Family     Device  Additional
# Identifier     Ordinal    Type       Set        State   Parameters
#------------------  ---------  ---------  ---------  ------  -------------
sharefs          300        ma         sharefs    -       shared
#  /dev/sda4     302        mr         sharefs    -
#  /dev/sda5     303        mr         sharefs    -
In the mcf
file, insert lines for the file system's metadata devices. Since the Linux host does not have access to metadata devices, set the Equipment
Identifier
field to nodev
(no device) and then set the remaining fields to exactly the same values as they have on the metadata server:
# Equipment      Equipment  Equipment  Family     Device  Additional
# Identifier     Ordinal    Type       Set        State   Parameters
#------------------  ---------  ---------  ---------  ------  -------------
sharefs          300        ma         sharefs    on      shared
nodev            301        mm         sharefs    on
#  /dev/sda4     302        mr         sharefs    -
#  /dev/sda5     303        mr         sharefs    -
In the mcf
file, uncomment the entries for the Linux data devices.
# Equipment      Equipment  Equipment  Family     Device  Additional
# Identifier     Ordinal    Type       Set        State   Parameters
#------------------  ---------  ---------  ---------  ------  -------------
sharefs          300        ma         sharefs    on      shared
nodev            301        mm         sharefs    on
/dev/sda4        302        mr         sharefs    -
/dev/sda5        303        mr         sharefs    -
Make sure that the Device State
field is set to on
for all devices, and save the mcf
file.
# Equipment      Equipment  Equipment  Family     Device  Additional
# Identifier     Ordinal    Type       Set        State   Parameters
#------------------  ---------  ---------  ---------  ------  -------------
sharefs          300        ma         sharefs    on      shared
nodev            301        mm         sharefs    on
/dev/sda4        302        mr         sharefs    on
/dev/sda5        303        mr         sharefs    on
:wq
[sharefs-clientL][root@linux ~]#
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It stops if it encounters an error. In the example, we check the mcf
file on the Linux client, sharefs-clientL
:
[sharefs-clientL][root@linux ~]# sam-fsd
On the Linux client, log in as root
.
In the example, the Oracle HSM shared file system is sharefs
, and the host is a Linux client named sharefs-clientL
:
[sharefs-clientL][root@linux ~]#
Back up the operating system's /etc/fstab
file.
[sharefs-clientL][root@linux ~]# cp /etc/fstab /etc/fstab.backup
Open the /etc/fstab
file in a text editor, and start a line for the shared file system.
In the example, we use the vi
text editor and add a line for the sharefs
family set device:
[sharefs-clientL][root@linux ~]# vi /etc/fstab
#File
#Device    Mount     System  Mount                      Dump       Pass
#to Mount  Point     Type    Options                    Frequency  Number
#--------  --------  ------  -------------------------  ---------  ------
...
/proc      /proc     proc    defaults
sharefs    /sharefs  samfs
In the fourth column of the file, add the mandatory shared
mount option.
[sharefs-clientL][root@linux ~]# vi /etc/fstab
#File
#Device Mount System Mount Dump Pass
#to Mount Point Type Options Frequency Number
#-------- ------- -------- ------------------------- --------- ------
...
/proc /proc proc defaults
sharefs /sharefs samfs shared
In the fourth column of the file, add any other desired mount options using commas as separators.
Linux clients support the following additional mount options:
rw
, ro
retry
meta_timeo
rdlease
, wrlease
, aplease
minallocsz
, maxallocsz
noauto
, auto
In the example, we add the option noauto
:
#File
#Device    Mount     System  Mount                      Dump       Pass
#to Mount  Point     Type    Options                    Frequency  Number
#--------  --------  ------  -------------------------  ---------  ------
...
/proc      /proc     proc    defaults
sharefs    /sharefs  samfs   shared,noauto
Enter zero (0
) in each of the two remaining columns in the file. Then save the /etc/fstab
file.
#File
#Device    Mount     System  Mount                      Dump       Pass
#to Mount  Point     Type    Options                    Frequency  Number
#--------  --------  ------  -------------------------  ---------  ------
...
/proc      /proc     proc    defaults
sharefs    /sharefs  samfs   shared,noauto              0          0
:wq
[sharefs-clientL][root@linux ~]#
Create the mount point specified in the /etc/fstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /sharefs
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
[sharefs-clientL][root@linux ~]# mkdir /sharefs
[sharefs-clientL][root@linux ~]# chmod 755 /sharefs
Mount the shared file system. Use the command mount
mountpoint
, where mountpoint
is the mount-point directory specified in the /etc/fstab
file.
As the example shows, the mount
command generates a warning. This is normal and can be ignored:
[sharefs-clientL][root@linux ~]# mount /sharefs
Warning: loading SUNWqfs will taint the kernel: SMI license
See http://www.tux.org/lkml/#export-tainted for information about tainted modules.
Module SUNWqfs loaded with warnings
Stop here.
Removing a host from a shared file system is simply a matter of removing it from the server configuration, as described below (to fully deconfigure the host, uninstall the software and the configuration files):
Log in to the Oracle HSM metadata server as root
.
In the example, the Oracle HSM shared file system is sharefs
, and the metadata server host is sharefs-mds
:
[sharefs-mds]root@solaris:~#
Log in to each client as root
, and unmount the shared file system.
Remember that potential metadata servers are themselves clients. In the example, we have three clients: sharefs-client1
, sharefs-client2
, and sharefs-mds_alt
, a potential metadata server. For each client, we log in using ssh
, unmount the file system sharefs
, and close the ssh
session:
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~# umount sharefs
[sharefs-client1]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-client2
Password:
[sharefs-client2]root@solaris:~# umount sharefs
[sharefs-client2]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-mds_alt
Password:
[sharefs-mds_alt]root@solaris:~# umount sharefs
[sharefs-mds_alt]root@solaris:~# exit
[sharefs-mds]root@solaris:~#
On the metadata server, unmount the shared file system.
[sharefs-mds]root@solaris:~# umount sharefs
On the metadata server, rename the file /etc/opt/SUNWsamfs/hosts.
filesystem
to /etc/opt/SUNWsamfs/hosts.
filesystem
.bak
, where filesystem
is the name of the file system from which you are removing the client host.
Note that the command below is entered as a single line—the line break is escaped by the backslash character:
[sharefs-mds]root@solaris:~# mv /etc/opt/SUNWsamfs/hosts.sharefs \
/etc/opt/SUNWsamfs/hosts.sharefs.bak
Capture the current shared file system host configuration to a file. From the metadata server, run the command samsharefs -R filesystem, redirecting the output to the file /etc/opt/SUNWsamfs/hosts.filesystem, where filesystem is the name of the file system from which you are removing the client host.
The samsharefs
command displays the host configuration for the specified Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file. In the example, we run the command from the metadata server sharefs-mds
:
[sharefs-mds]root@solaris:~# samsharefs -R sharefs > /etc/opt/SUNWsamfs/hosts.sharefs
Open the newly created hosts file in a text editor.
In the example, we use the vi
editor. We need to remove the client sharefs-client3
:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs
#                                             Server   On/   Additional
#Host Name           Network Interface        Ordinal  Off   Parameters
#------------------  ----------------------   -------  ---   ----------
sharefs-mds          10.79.213.117            1        0     server
sharefs-mds_alt      10.79.213.217            2        0
sharefs-client1      10.79.213.133            0        0
sharefs-client2      10.79.213.47             0        0
sharefs-client3      10.79.213.49             0        0
In the hosts file, delete the line that corresponds to the client host that you need to remove. Then save the file, and close the editor.
In the example, we delete the entry for the host sharefs-client3
:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0 server
sharefs-mds_alt 10.79.213.217 2 0
sharefs-client1 10.79.213.133 0 0
sharefs-client2 10.79.213.47 0 0
:wq
[sharefs-mds]root@solaris:~#
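The same edit can be made non-interactively with grep -v. The sketch below works on a copy of the example hosts file in /tmp rather than the live configuration file; the paths are illustrative only:

```shell
#!/bin/sh
# Sketch: remove one client's entry from a copy of the example hosts file.
cat > /tmp/hosts.sharefs <<'EOF'
sharefs-mds       10.79.213.117  1  0  server
sharefs-mds_alt   10.79.213.217  2  0
sharefs-client1   10.79.213.133  0  0
sharefs-client2   10.79.213.47   0  0
sharefs-client3   10.79.213.49   0  0
EOF
# Keep every line except the one naming the host being removed.
grep -v '^sharefs-client3[[:space:]]' /tmp/hosts.sharefs > /tmp/hosts.sharefs.new
cat /tmp/hosts.sharefs.new
```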
Update the file system with the revised hosts file. From the metadata server, use the command samsharefs -R -u filesystem, where filesystem is the name of the file system from which you are removing the client host.

[sharefs-mds]root@solaris:~# samsharefs -R -u sharefs
On the metadata server host, mount the shared file system.
In the examples, the /etc/vfstab
file contains an entry for the sharefs
file system, so we use the simple mounting syntax (see the mount_samfs
man page for full information):
[sharefs-mds]root@solaris:~# mount sharefs
On each client host, mount the shared file system.

Remember that potential metadata servers are themselves clients. In the example, we have three clients: sharefs-client1, sharefs-client2, and sharefs-mds_alt, a potential metadata server. For each client, we log in using ssh, mount the file system sharefs, and close the ssh session:
[sharefs-mds]root@solaris:~# ssh root@sharefs-mds_alt
Password:
[sharefs-mds_alt]root@solaris:~# mount sharefs
[sharefs-mds_alt]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~# mount sharefs
[sharefs-client1]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-client2
Password:
[sharefs-client2]root@solaris:~# mount sharefs
[sharefs-client2]root@solaris:~# exit
[sharefs-mds]root@solaris:~#
Stop here.
Starting with Oracle HSM Release 6.0, any client of a shared archiving file system that runs on Solaris 11 or higher can attach tape drives and carry out tape I/O on behalf of the file system. Distributing tape I/O across these datamover hosts greatly reduces server overhead, improves file-system performance, and allows significantly more flexibility when scaling Oracle HSM implementations. As your archiving needs increase, you now have the option of either replacing Oracle HSM metadata servers with more powerful systems (vertical scaling) or spreading the load across more clients (horizontal scaling).
To configure a client for distributed tape I/O, proceed as follows:
Connect all devices that will be used for distributed I/O to the client.
If you have not already done so, carry out the procedure "Connecting Tape Drives Using Persistent Bindings". Then return here.
Log in to the shared archiving file system's metadata server as root
.
In the example, the host name is samsharefs-mds
:
[samsharefs-mds]root@solaris:~#
Make sure that the metadata server is running Oracle Solaris 11 or higher.

[samsharefs-mds]root@solaris:~# uname -r
5.11
[samsharefs-mds]root@solaris:~#
Make sure that all clients that serve as datamovers are running Oracle Solaris 11 or higher.
In the example, we open a terminal window for each client host, samsharefs-client1
and samsharefs-client2
, and log in remotely using ssh
. The log-in banner displays the Solaris version:
[samsharefs-mds]root@solaris:~# ssh root@samsharefs-client1
...
Oracle Corporation      SunOS 5.11      11.1    September 2013
[samsharefs-client1]root@solaris:~#
[samsharefs-mds]root@solaris:~# ssh root@samsharefs-client2
...
Oracle Corporation      SunOS 5.11      11.1    September 2013
[samsharefs-client2]root@solaris:~#
On the metadata server, open the file /etc/opt/SUNWsamfs/defaults.conf in a text editor, and enable distributed I/O by uncommenting the distio line and setting its value to on.

By default, distio is off (disabled).
In the example, we open the file in the vi
editor and add the line:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
Next, identify the device types that should participate in distributed I/O. To use device type dev with distributed I/O, add the line dev_distio = on to the defaults.conf file. To exclude device type dev from distributed I/O, add the line dev_distio = off. Save the file, and close the editor.
By default, Oracle HSM T10000 drives and LTO drives are allowed to participate in distributed I/O (ti_distio = on and li_distio = on), while all other types are excluded. In the example, we exclude LTO drives:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
li_distio = off
:wq
[samsharefs-mds]root@solaris:~#
On each client that will serve as a datamover, edit the defaults.conf
file so that it matches the file on the server.
In the example, we edit the defaults.conf
file on client samsharefs-client1
using vi
, save the file, and close the editor:
[samsharefs-mds]root@solaris:~# ssh root@samsharefs-client1
Password:
[samsharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
li_distio = off
:wq
[samsharefs-client1]root@solaris:~#
[samsharefs-mds]root@solaris:~#
On each client that will serve as a datamover, open the /etc/opt/SUNWsamfs/mcf
file in a text editor. Add all of the tape devices that the metadata server is using for distributed tape I/O. Make sure that the device order and equipment numbers are identical to those in the mcf
file on the metadata server.
In the example, we edit the mcf
file on client samsharefs-client1
using vi
:
[samsharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment               Equipment  Equipment  Family      Device  Additional
# Identifier              Ordinal    Type       Set         State   Parameters
#-----------------------  ---------  ---------  ----------  ------  -------------
samsharefs               800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/rmt/60cbn           901        ti         on
/dev/rmt/61cbn           902        ti         on
/dev/rmt/62cbn           903        ti         on
/dev/rmt/63cbn           904        ti         on
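Because the device order and equipment ordinals must be identical on the server and on each datamover, it can help to compare just the identifier and ordinal columns of the two mcf tape sections before going further. This is only a sketch using the example's values; in practice you would copy the server's tape-device lines to the client (for instance with scp) before comparing:

```shell
#!/bin/sh
# Sketch: tape-device lines as they appear on the server and on the client.
cat > /tmp/mcf.server.tape <<'EOF'
/dev/rmt/60cbn  901  ti  on
/dev/rmt/61cbn  902  ti  on
EOF
cat > /tmp/mcf.client.tape <<'EOF'
/dev/rmt/60cbn  901  ti  on
/dev/rmt/61cbn  902  ti  on
EOF
# Compare only the identifier and ordinal columns.
awk '{print $1, $2}' /tmp/mcf.server.tape > /tmp/server.cols
awk '{print $1, $2}' /tmp/mcf.client.tape > /tmp/client.cols
if diff /tmp/server.cols /tmp/client.cols > /dev/null; then
  echo "tape device ordinals match"
else
  echo "MISMATCH: review the client mcf"
fi
```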
If the tape library listed in the /etc/opt/SUNWsamfs/mcf
file on the metadata server is configured on the client that will serve as a datamover, specify the library family set as the family set name for the tape devices that are being used for distributed tape I/O. Save the file.
In the example, the library is configured on the host, so we use the family set name library1
for the tape devices:
[samsharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                 Equipment  Equipment  Family      Device  Additional
# Identifier                Ordinal    Type       Set         State   Parameters
#-----------------------    ---------  ---------  ----------  ------  -------------
samsharefs                 800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/scsi/changer/c1t0d5   900        rb         library1    on
/dev/rmt/60cbn             901        ti         library1    on
/dev/rmt/61cbn             902        ti         library1    on
/dev/rmt/62cbn             903        ti         library1    on
/dev/rmt/63cbn             904        ti         library1    on
:wq
[samsharefs-client1]root@solaris:~#
If the tape library listed in the /etc/opt/SUNWsamfs/mcf
file on the metadata server is not configured on the client that will serve as a datamover, use a hyphen (-
) as the family set name for the tape devices that are being used for distributed tape I/O.
In the example, the library is not configured on the host:
[samsharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment               Equipment  Equipment  Family      Device  Additional
# Identifier              Ordinal    Type       Set         State   Parameters
#-----------------------  ---------  ---------  ----------  ------  -------------
samsharefs               800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/rmt/60cbn           901        ti         -           on
/dev/rmt/61cbn           902        ti         -           on
/dev/rmt/62cbn           903        ti         -           on
/dev/rmt/63cbn           904        ti         -           on
:wq
[samsharefs-client2]root@solaris:~#
If you need to enable or disable distributed tape I/O for particular archive set copies, open the server's /etc/opt/SUNWsamfs/archiver.cmd
file in a text editor and add the -distio
parameter to the copy directive. Set -distio
on
to enable or off
to disable distributed I/O. Save the file, and close the editor.
In the example, we use the vi editor to turn distributed I/O off
for copy 1
and on
for copy 2
:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
# archiver.cmd
# Generated by config api Mon Nov 22 14:31:39 2013
...
#
# Copy Parameters Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -startcount 500000 -distio off
allsets.2 -startage 24h -startsize 20G -startcount 500000 -distio on
:wq
[samsharefs-mds]root@solaris:~#
On each host, check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd command is an initialization command that reads Oracle HSM configuration files. It stops if it encounters an error. In the example, we check the mcf file on the client, samsharefs-client1:

[samsharefs-client1]root@solaris:~# sam-fsd
On the server, tell the Oracle HSM software to read the modified configuration files and reconfigure itself accordingly. Use the command samd config
, and correct any errors found.
In the example, we run the samd config command on the server, samsharefs-mds:

[samsharefs-mds]root@solaris:~# samd config
Stop here.
When you add a host that serves as either a potential metadata server or a distributed I/O datamover client, you must configure removable media devices using persistent bindings. The Solaris operating system attaches drives to the system device tree in the order in which it discovers the devices at startup. This order may or may not reflect the order in which devices are discovered by other file system hosts or the order in which they are physically installed in the tape library. So you need to bind the devices to the new host in the same way that they are bound to the other hosts and in the same order in which they are installed in the removable media library.
The procedures below outline the required steps (for full information, see the devfsadm
and devlinks
man pages and the administration documentation for your version of the Solaris operating system):
If you have moved, added, or removed drives in a library or replaced or reconfigured the library associated with an archiving Oracle HSM shared file system, Update Persistent Bindings to Reflect Changes to the Hardware Configuration.
If you are adding a new metadata server or datamover client to an archiving Oracle HSM shared file system, Persistently Bind a New File System Host to Removable Media Devices
Log in to the active metadata server host as root
.
[sharefs-mds]root@solaris:~#
Create a new drive-mapping file as described in "Determining the Order in Which Drives are Installed in the Library".
In the example, the device-mappings.txt
file looks like this:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
LIBRARY  SOLARIS          SOLARIS
DEVICE   LOGICAL          PHYSICAL
NUMBER   DEVICE           DEVICE
-------  -------------    -----------------------------------------------------
   2     /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
   1     /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
   3     /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
   4     /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Open the /etc/devlink.tab
file in a text editor.
In the example, we use the vi
editor:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
# This is the table used by devlinks
# Each entry should have 2 fields; but may have 3.  Fields are separated
# by single tab ('\t') characters.
...
Using the device-mappings.txt
file as a guide, add a line to the /etc/devlink.tab
file that remaps a starting node in the Solaris tape device tree, rmt/
node-number
, to the first drive in the library. The line should be in the form type=ddi_byte:tape;
addr=
device_address
,0;
rmt/
node-number
\M0
, where device_address
is the physical address of the device and node-number
is the device's position in the Solaris device tree. Choose a node number that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0
).
In the example, we note that the device address for the first device in the library, 1, is w500104f0008120fe and see that the device is currently attached to the host at rmt/1:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
LIBRARY   SOLARIS          SOLARIS
DEVICE    LOGICAL          PHYSICAL
NUMBER    DEVICE           DEVICE
-------   -------------    -----------------------------------------------------
2         /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1         /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3         /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4         /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
So we create a line in /etc/devlink.tab that remaps rmt/60 to the number 1 drive in the library, w500104f0008120fe:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
:w
Continue to add lines to the /etc/devlink.tab file for each tape device that is assigned for Oracle HSM archiving, so that the drive order in the device tree on the metadata server matches the installation order on the library. Save the file, and close the editor.
In the example, we note the order and addresses of the three remaining devices—library drive 2 at w500104f00093c438, library drive 3 at w500104f000c086e1, and library drive 4 at w500104f000b6d98d:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
...
2         /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1         /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3         /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4         /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Then we map the device addresses to the next three Solaris device nodes, maintaining the same order as in the library:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-mds]root@solaris:~#
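Because the remapping lines follow a regular pattern, they can be generated rather than typed by hand. The following sketch (the sample data and the starting node number 60 are assumptions drawn from this example) derives one devlink.tab-style line per drive from a device-mappings file, in library order:

```shell
# Hypothetical mappings file in the layout used earlier in this procedure:
cat > /tmp/device-mappings.txt <<'EOF'
2 /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1 /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
EOF
# Sort by library drive number, extract the device address from the
# physical path, and emit one remapping line per drive from node 60 up:
sort -n /tmp/device-mappings.txt | awk '{
  addr = $4
  sub(/.*st@/, "", addr); sub(/,.*/, "", addr)
  printf "type=ddi_byte:tape;addr=%s,0;rmt/%d\\M0\n", addr, 59 + $1
}'
```

The generated lines can then be reviewed and pasted into /etc/devlink.tab; remember that the real file expects its fields to be separated by single tab characters, as its header comment notes.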
Delete all existing links to the tape devices in /dev/rmt.
[sharefs-mds]root@solaris:~# rm /dev/rmt/*
Create new, persistent tape-device links from the entries in the /etc/devlink.tab file. Use the command devfsadm -c tape.
Each time that the devfsadm command runs, it creates new tape device links for devices specified in the /etc/devlink.tab file using the configuration specified by the file. The -c tape option restricts the command to creating new links for tape-class devices only:
[sharefs-mds]root@solaris:~# devfsadm -c tape
Repeat the operation on each potential metadata server and datamover in the shared file system configuration. In each case, add the same lines to the /etc/devlink.tab file, delete the links in /dev/rmt, and run devfsadm -c tape.
In the example, we use ssh to log in to each host in turn, and configure the same four logical devices, rmt/60\M0, rmt/61\M0, rmt/62\M0, and rmt/63\M0:
[sharefs-mds]root@solaris:~# ssh root@sharefs-mds_alt
Password:
[sharefs-mds_alt]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-mds_alt]root@solaris:~# rm /dev/rmt/*
[sharefs-mds_alt]root@solaris:~# devfsadm -c tape
[sharefs-mds_alt]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-client1]root@solaris:~# rm /dev/rmt/*
[sharefs-client1]root@solaris:~# devfsadm -c tape
[sharefs-client1]root@solaris:~# exit
[sharefs-mds]root@solaris:~#
Return to "Configuring Datamover Clients for Distributed Tape I/O" or "Configuring Additional File System Clients".
Log in to the host as root.
[sharefs-mds]root@solaris:~#
If the physical order of the drives in the media library has changed since the existing file-system hosts were configured, create a new mapping file as described in "Determining the Order in Which Drives are Installed in the Library".
In the example, the device-mappings.txt file looks like this:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
LIBRARY   SOLARIS          SOLARIS
DEVICE    LOGICAL          PHYSICAL
NUMBER    DEVICE           DEVICE
-------   -------------    -----------------------------------------------------
2         /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1         /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3         /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4         /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Open the /etc/devlink.tab file in a text editor.
In the example, we use the vi editor:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
# This is the table used by devlinks
# Each entry should have 2 fields; but may have 3. Fields are separated
# by single tab ('\t') characters.
...
Using the device-mappings.txt file as a guide, remap a starting node in the Solaris tape device tree, rmt/node-number, to the first drive in the library. Add a line to the /etc/devlink.tab file of the form type=ddi_byte:tape;addr=device_address,0;rmt/node-number\M0, where device_address is the physical address of the device and node-number is the device's position in the Solaris device tree. Choose a node number that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0).
In the example, we note that the device address for the first device in the library, 1, is w500104f0008120fe and see that the device is currently attached to the host at rmt/1:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
LIBRARY   SOLARIS          SOLARIS
DEVICE    LOGICAL          PHYSICAL
NUMBER    DEVICE           DEVICE
-------   -------------    -----------------------------------------------------
2         /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1         /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3         /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4         /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
So we create a line in /etc/devlink.tab that remaps rmt/60 to the number 1 drive in the library, w500104f0008120fe:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
:w
Continue to add lines to the /etc/devlink.tab file for each tape device that is assigned for Oracle HSM archiving, so that the drive order in the device tree on the metadata server matches the installation order on the library. Save the file.
In the example, we note the order and addresses of the three remaining devices—library drive 2 at w500104f00093c438, library drive 3 at w500104f000c086e1, and library drive 4 at w500104f000b6d98d:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
...
2         /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1         /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3         /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4         /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Then we map the device addresses to the next three Solaris device nodes, maintaining the same order as in the library:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-mds]root@solaris:~#
Delete all existing links to the tape devices in /dev/rmt.
[sharefs-mds]root@solaris:~# rm /dev/rmt/*
Create new, persistent tape-device links from the entries in the /etc/devlink.tab file. Use the command devfsadm -c tape.
Each time that the devfsadm command runs, it creates new tape device links for devices specified in the /etc/devlink.tab file using the configuration specified by the file. The -c tape option restricts the command to creating new links for tape-class devices only:
[sharefs-mds]root@solaris:~# devfsadm -c tape
On each potential metadata server and datamover in the shared file system configuration, add the same lines to the /etc/devlink.tab file, delete the links in /dev/rmt, and run devfsadm -c tape.
In the example, we use ssh to log in to the potential metadata server host sharefs-mds_alt and the client host sharefs-client1. We then configure the same four logical devices, rmt/60\M0, rmt/61\M0, rmt/62\M0, and rmt/63\M0, on each:
[sharefs-mds]root@solaris:~# ssh root@sharefs-mds_alt
Password:
[sharefs-mds_alt]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-mds_alt]root@solaris:~# rm /dev/rmt/*
[sharefs-mds_alt]root@solaris:~# devfsadm -c tape
[sharefs-mds_alt]root@solaris:~# exit
[sharefs-mds]root@solaris:~# ssh root@sharefs-client1
Password:
[sharefs-client1]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;rmt/63\M0
:wq
[sharefs-client1]root@solaris:~# rm /dev/rmt/*
[sharefs-client1]root@solaris:~# devfsadm -c tape
[sharefs-client1]root@solaris:~# exit
[sharefs-mds]root@solaris:~#
Return to "Configuring Datamover Clients for Distributed Tape I/O" or "Configuring Additional File System Clients".
The procedures in this section move the metadata service for the file system from the current host (the active metadata server) to a standby host (the potential metadata server). Which procedure you use depends on the health of the server host that you are replacing:
Activate a Potential Metadata Server to Replace a Faulty Active Metadata Server
Activate a Potential Metadata Server to Replace a Healthy Active Metadata Server
This procedure lets you move the metadata service off of an active metadata server host that has stopped functioning. It activates a potential metadata server, even if a file system is still mounted. Proceed as follows:
Caution: Never activate a potential metadata server until you have stopped, disabled, or disconnected the faulty metadata server! To activate a potential server when a file system is mounted and the active metadata server is down, you have to invoke the samsharefs command with the -R option.
If the active metadata server is faulty, make sure that it cannot access the metadata devices before you do anything else. Power the affected host off, halt the host, or disconnect the failed host from the metadata devices.
Wait at least until the maximum lease time has run out, so that all client read, write, and append leases can expire.
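The wait is bounded by the largest of the read, write, and append lease times, which are set by the rdlease, wrlease, and aplease mount options (30 seconds each by default; see the mount_samfs man page). As a quick sketch, any lease options configured explicitly can be listed from /etc/vfstab; the sample entry below is hypothetical:

```shell
# Sample shared-file-system vfstab entry (hypothetical values, in seconds):
cat > /tmp/vfstab.sample <<'EOF'
sharefs  -  /sharefs  samfs  -  yes  shared,rdlease=30,wrlease=30,aplease=30
EOF
# List the lease-related mount options, if any are set explicitly:
grep -o '[a-z]*lease=[0-9]*' /tmp/vfstab.sample
```

If no lease options appear in /etc/vfstab (or in samfs.cmd), the defaults apply.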
Log in to a potential metadata server as root.
In the example, we log in to the potential metadata server sharefs-mds_alt:
[sharefs-mds_alt]root@solaris:~#
Activate the potential metadata server. From the potential metadata server, issue the command samsharefs -R -s server file-system, where server is the host name of the potential metadata server and file-system is the name of the Oracle HSM shared file system.
In the example, the potential metadata server is sharefs-mds_alt and the file system name is sharefs:
[sharefs-mds_alt]root@solaris:~# samsharefs -R -s sharefs-mds_alt sharefs
If you need to check the integrity of a file system and repair possible problems, unmount the file system now using the procedure "Unmount a Shared File System".
If you have unmounted the file system, perform the file system check. Use the command samfsck -F file-system, where -F specifies repair of errors and file-system is the name of the file system.
In the example, we check and repair the file system named sharefs:
[sharefs-mds_alt]root@solaris:~# samfsck -F sharefs
Stop here.
You can move the metadata service off of a healthy, active metadata server host and on to a newly activated potential metadata server when required. For example, you might transfer metadata services to an alternate host to keep file systems available while you upgrade or replace the original server host or some of its components. Proceed as follows:
Log in to both the active and potential metadata servers as root.
In the example, we log in to the active metadata server, sharefs-mds. Then, in a second terminal window, we use secure shell (ssh) to log in to the potential metadata server sharefs-mds_alt:
[sharefs-mds]root@solaris:~#
[sharefs-mds]root@solaris:~# ssh root@sharefs-mds_alt
Password:
[sharefs-mds-alt]root@solaris:~#
If the active metadata server mounts an Oracle HSM archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further. See "Idle Archiving and Staging Processes".
If the active metadata server mounts an Oracle HSM archiving file system, idle removable media drives and stop the library-control daemon. See "Stop Archiving and Staging Processes".
If you use a crontab entry to run the recycler process, remove the entry and make sure that the recycler is not currently running.
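A sketch of this step, working on a saved copy of the crontab rather than the live one (the recycler entry and its schedule shown here are hypothetical):

```shell
# Save the current crontab with: crontab -l > /tmp/root.cron  (not run here)
cat > /tmp/root.cron <<'EOF'
10 3 * * 0 /etc/opt/SUNWsamfs/scripts/recycler.sh
0 2 * * * /usr/sbin/logadm
EOF
# Strip the recycler line; reinstall afterwards with: crontab /tmp/root.cron.new
grep -v 'recycler' /tmp/root.cron > /tmp/root.cron.new
cat /tmp/root.cron.new
# Confirm that no recycler process is still running ([r] avoids self-match):
ps -ef | grep '[r]ecycler' || echo "recycler not running"
```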
Activate the potential metadata server. From the potential metadata server, issue the command samsharefs -s server file-system, where server is the host name of the potential metadata server and file-system is the name of the Oracle HSM shared file system.
In the example, the potential metadata server is sharefs-mds_alt and the file system name is sharefs:
[sharefs-mds_alt]root@solaris:~# samsharefs -s sharefs-mds_alt sharefs
Load the configuration files and start Oracle HSM processes on the potential metadata server. Use the command samd config.
For archiving shared file systems, the samd config command restarts archiving processes and the library control daemon. But shared file system clients that are waiting for files to be staged from tape to the primary disk cache must reissue the stage requests.
If you still need to use a crontab entry to run the recycler process, restore the entry.
Stop here.
To convert an unshared file system to a shared file system, carry out the following tasks:
On each metadata server, you must create a hosts file that lists network address information for the servers and clients of a shared file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Gather the network host names and IP addresses for the hosts that will share the file system as clients.
In the examples below, we will share the samqfs1 file system with the clients samqfs1-mds_alt (a potential metadata server), samqfs1-client1, and samqfs1-client2.
Log in to the metadata server as root.
In the example, we log in to the host samqfs1-mds:
[samqfs1-mds]root@solaris:~#
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name on the metadata server, replacing family-set-name with the family-set name of the file system that you intend to share.
In the example, we create the file hosts.samqfs1 using the vi text editor. We add some optional headings, starting each line with a hash sign (#), indicating a comment:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
Enter the host name of the metadata server in the first column and the corresponding IP address or domain name in the second. Separate the columns with whitespace characters.
In the example, we enter the host name and IP address of the metadata server, samqfs1-mds and 10.79.213.117, respectively:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds         10.79.213.117
Add a third column, separated from the network address by whitespace characters. In this column, enter the ordinal number of the server (1 for the active metadata server, 2 for the first potential metadata server, and so on).
In this example, there is only one metadata server, so we enter 1:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds 10.79.213.117 1
Add a fourth column, separated from the server ordinal number by whitespace characters. In this column, enter 0 (zero).
A 0, - (hyphen), or blank value in the fourth column indicates that the host is on—configured with access to the shared file system. A 1 (numeral one) indicates that the host is off—configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page).
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds 10.79.213.117 1 0
Add a fifth column, separated from the on/off status column by whitespace characters. In this column, enter the keyword server to indicate the currently active metadata server:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds 10.79.213.117 1 0 server
If you plan to include one or more hosts as potential metadata servers, create an entry for each. Increment the server ordinal each time. But do not include the server keyword (there can be only one active metadata server per file system).
In the example, the host samqfs1-mds_alt is a potential metadata server with the server ordinal 2. Until and unless we activate it as a metadata server, it will be a client:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds 10.79.213.117 1 0 server
samqfs1-mds_alt 10.79.213.217 2 0
Add a line for each client host, each with a server ordinal value of 0. A server ordinal of 0 identifies the host as a client. In the example, we add two clients, samqfs1-client1 and samqfs1-client2.
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds         10.79.213.117          1       0   server
samqfs1-mds_alt     10.79.213.217          2       0
samqfs1-client1     10.79.213.133          0       0
samqfs1-client2     10.79.213.147          0       0
Save the /etc/opt/SUNWsamfs/hosts.family-set-name file, and quit the editor.
In the example, we save the changes to /etc/opt/SUNWsamfs/hosts.samqfs1 and exit the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.samqfs1
# /etc/opt/SUNWsamfs/hosts.samqfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
samqfs1-mds 10.79.213.117 1 0 server
samqfs1-mds_alt 10.79.213.217 2 0
samqfs1-client1 10.79.213.133 0 0
samqfs1-client2 10.79.213.147 0 0
:wq
[samqfs1-mds]root@solaris:~#
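Before distributing the finished file, a quick sanity check can confirm that exactly one non-comment entry carries the server keyword. A sketch (the sample file repeats the example above):

```shell
# Sample hosts file matching the example in this procedure:
cat > /tmp/hosts.samqfs1 <<'EOF'
samqfs1-mds       10.79.213.117  1  0  server
samqfs1-mds_alt   10.79.213.217  2  0
samqfs1-client1   10.79.213.133  0  0
samqfs1-client2   10.79.213.147  0  0
EOF
# Count non-comment lines flagged "server"; there must be exactly one:
awk '!/^#/ && NF && $5 == "server" { n++ } END { printf "%d server entry(ies)\n", n+0 }' /tmp/hosts.samqfs1
```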
Place a copy of the new /etc/opt/SUNWsamfs/hosts.family-set-name file on any potential metadata servers that are included in the shared file-system configuration.
In the example, we place a copy on the host samqfs1-mds_alt:
[samqfs1-mds]root@solaris:~# sftp root@samqfs1-mds_alt
Password:
sftp> cd /etc/opt/SUNWsamfs/
sftp> put /etc/opt/SUNWsamfs/hosts.samqfs1
sftp> bye
[samqfs1-mds]root@solaris:~#
Now Share the Unshared File System and Configure the Clients.
Log in to the metadata server as root.
In the example, we log in to the host samqfs1-mds:
[samqfs1-mds]root@solaris:~#
If you do not have current backup copies of the system files and configuration files, create backups now. See "Backing Up the Oracle HSM Configuration".
If you do not have a current file-system recovery point file and a recent copy of the archive log, create them now. See "Backing Up File Systems".
If you set up an automated backup process for the file system during initial configuration, you may not need additional backups.
If you are converting an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further. See "Idle Archiving and Staging Processes" and "Stop Archiving and Staging Processes".
Unmount the file system. Use the command umount family-set-name, where family-set-name is the family-set name of the file system that you intend to share.
For more information on mounting and unmounting Oracle HSM file systems, see the mount_samfs man page. In the example, we unmount the samqfs1 file system:
[samqfs1-mds]root@solaris:~# umount samqfs1
[samqfs1-mds]root@solaris:~#
Convert the file system to an Oracle HSM shared file system. Use the command samfsck -S -F file-system-name, where file-system-name is the family-set name of the file system.
In the example, we convert the file system named samqfs1:
[samqfs1-mds]root@solaris:~# samfsck -S -F samqfs1
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family  Device Additional
# Identifier          Ordinal   Type      Set     State  Parameters
#------------------   --------- --------- ------- ------ -----------------
samqfs1               200       ma        samqfs1 on
/dev/dsk/c0t0d0s0     201       mm        samqfs1 on
/dev/dsk/c0t3d0s0     202       md        samqfs1 on
/dev/dsk/c0t3d0s1     203       md        samqfs1 on
In the mcf file, add the shared parameter to the additional parameters field in the last column of the file system entry. Then save the file and close the editor.
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family  Device Additional
# Identifier          Ordinal   Type      Set     State  Parameters
#------------------   --------- --------- ------- ------ -----------------
samqfs1               200       ma        samqfs1 on     shared
/dev/dsk/c0t0d0s0     201       mm        samqfs1 on
/dev/dsk/c0t3d0s0     202       md        samqfs1 on
/dev/dsk/c0t3d0s1     203       md        samqfs1 on
:wq
[samqfs1-mds]root@solaris:~#
Open the /etc/vfstab file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System fsck Mount   Mount
#to Mount  to fsck  Point     Type   Pass at Boot Options
#--------  -------  --------  ------ ---- ------- -------------------------
/devices   -        /devices  devfs  -    no      -
/proc      -        /proc     proc   -    no      -
...
samqfs1    -        /samqfs1  samfs  -    yes     -
In the /etc/vfstab file, add the shared mount option to the mount options field in the last column of the file system entry. Then save the file and close the editor.
[samqfs1-mds]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System fsck Mount   Mount
#to Mount  to fsck  Point     Type   Pass at Boot Options
#--------  -------  --------  ------ ---- ------- -------------------------
/devices   -        /devices  devfs  -    no      -
/proc      -        /proc     proc   -    no      -
...
samqfs1    -        /samqfs1  samfs  -    yes     shared
:wq
[samqfs1-mds]root@solaris:~#
Initialize the shared file system and host configuration. Use the command samsharefs -u -R family-set-name, where family-set-name is the family-set name of the file system.
[samqfs1-mds]root@solaris:~# samsharefs -u -R samqfs1
Tell the Oracle HSM software to re-read the mcf
file and reconfigure itself accordingly:
[samqfs1-mds]root@solaris:~# samd config
Mount the shared file system on the metadata server.
[samqfs1-mds]root@solaris:~# mount /samqfs1
If your hosts are configured with multiple network interfaces, see "Use Local Hosts Files to Route Network Communications".
Add any required clients to the newly shared file system, using the procedures outlined in "Configuring Additional File System Clients".
Individual hosts do not require local hosts files. The file system's global file on the metadata server identifies the active metadata server and the network interfaces of active and potential metadata servers for all file system hosts (see "Create a Hosts File on the Active and Potential Metadata Servers"). But local hosts files can be useful when you need to selectively route network traffic between file-system hosts that have multiple network interfaces.
Each file-system host identifies the network interfaces for the other hosts by first checking the /etc/opt/SUNWsamfs/hosts.family-set-name file on the metadata server, where family-set-name is the name of the file system family specified in the /etc/opt/SUNWsamfs/mcf file. Then it checks for its own, specific /etc/opt/SUNWsamfs/hosts.family-set-name.local file. If there is no local hosts file, the host uses the interface addresses specified in the global hosts file in the order specified in the global file. But if there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files in the order specified in the local file. By using different addresses in each file, you can thus control the interfaces used by different hosts.
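The selection rule described above (a host uses only interfaces that appear in both files, in local-file order) can be sketched as follows; the sample files and addresses here are hypothetical, and the second whitespace-delimited column is treated as the network interface:

```shell
# Hypothetical global hosts file and a local hosts file on one host:
cat > /tmp/hosts.demo <<'EOF'
mds      172.16.0.129  1 0 server
mds-alt  172.16.0.130  2 0
client1  client1       0 0
EOF
cat > /tmp/hosts.demo.local <<'EOF'
mds      10.0.0.1      1 0 server
mds      172.16.0.129  1 0 server
mds-alt  172.16.0.130  2 0
EOF
# Interfaces named in the global file:
awk '!/^#/ && NF { print $2 }' /tmp/hosts.demo > /tmp/global-ifaces
# Keep only local-file interfaces that the global file also lists, in
# local-file order; 10.0.0.1 is dropped because it is not in both files:
awk '!/^#/ && NF { print $2 }' /tmp/hosts.demo.local | grep -F -x -f /tmp/global-ifaces
```

This is a simplification of the per-host matching the software performs, but it captures why an address must appear in both files to be used.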
To configure local hosts files, use the procedure outlined below:
On the metadata server host and on each potential metadata server host, create a copy of the global hosts file, /etc/opt/SUNWsamfs/hosts.family-set-name, as described in "Create a Hosts File on the Active and Potential Metadata Servers".
For the examples in this section, the shared file system, sharefs2, includes an active metadata server, sharefs2-mds, and a potential metadata server, sharefs2-mds_alt, each with two network interfaces. There are also two clients, sharefs2-client1 and sharefs2-client2.
We want the active and potential metadata servers to communicate with each other via private network addresses and with the clients via host names that Domain Name Service (DNS) can resolve to addresses on the public, local area network (LAN). So /etc/opt/SUNWsamfs/hosts.sharefs2, the file system's global hosts file, specifies a private network address in the Network Interface field of the entries for the active and potential servers and a host name for the interface address of each client. The file looks like this:
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs2-mds        172.16.0.129           1       0   server
sharefs2-mds_alt    172.16.0.130           2       0
sharefs2-client1    sharefs2-client1       0       0
sharefs2-client2    sharefs2-client2       0       0
Create a local hosts file on each of the active and potential metadata servers, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file. Only include interfaces for the networks that you want the active and potential servers to use.
In the example, we want the active and potential metadata servers to communicate with each other over the private network, so the local hosts file on each server, hosts.sharefs2.local, lists only private addresses for active and potential servers:
[sharefs2-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs2-mds        172.16.0.129           1       0   server
sharefs2-mds_alt    172.16.0.130           2       0
:wq
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-mds_alt
Password:
[sharefs2-mds_alt]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs2-mds        172.16.0.129           1       0   server
sharefs2-mds_alt    172.16.0.130           2       0
:wq
[sharefs2-mds_alt]root@solaris:~# exit
[sharefs2-mds]root@solaris:~#
Create a local hosts file on each of the clients, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file. Only include interfaces for the networks that you want the clients to use.
In our example, we want the clients to communicate with the server only via the public network. So the file includes only the host names of the active and potential metadata servers:
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-client1
Password:
[sharefs2-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs2-mds        sharefs2-mds           1       0   server
sharefs2-mds_alt    sharefs2-mds_alt       2       0
:wq
[sharefs2-client1]root@solaris:~# exit
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-client2
Password:
[sharefs2-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                          Server  On/ Additional
#Host Name          Network Interface      Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs2-mds        sharefs2-mds           1       0   server
sharefs2-mds_alt    sharefs2-mds_alt       2       0
:wq
[sharefs2-client2]root@solaris:~# exit
[sharefs2-mds]root@solaris:~#
If you started this procedure while finishing the configuration of the server, add clients. Go to "Configuring Additional File System Clients".
When you need to unshare a file system, proceed as follows:
Log in to the metadata server as root.
In the example, we log in to the host samqfs1-mds:
[samqfs1-mds]root@solaris:~#
Remove the clients from the metadata server configuration using the procedure "Remove the Host from the File System Hosts File".
If you do not have current backup copies of the system files and configuration files, create backups now. See "Backing Up the Oracle HSM Configuration".
If you do not have a current file-system recovery point file and a recent copy of the archive log, create them now. See "Backing Up File Systems".
If you set up an automated backup process for the file system during initial configuration, you may not need additional backups.
If you are converting an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further. See "Idle Archiving and Staging Processes" and "Stop Archiving and Staging Processes".
Unmount the file system. Use the command umount family-set-name, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file.
For more information on mounting and unmounting Oracle HSM file systems, see the mount_samfs man page. In the example, we unmount the samqfs1 file system:
[samqfs1-mds]root@solaris:~# umount samqfs1
Convert the Oracle HSM shared file system to an unshared file system. Use the command samfsck -F -U file-system-name, where file-system-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file.
In the example, we convert the file system named samqfs1:
[samqfs1-mds]root@solaris:~# samfsck -F -U samqfs1
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family   Device  Additional
# Identifier          Ordinal    Type       Set      State   Parameters
#------------------   ---------  ---------  -------  ------  -----------------
samqfs1               200        ma         samqfs1  on      shared
 /dev/dsk/c0t0d0s0    201        mm         samqfs1  on
 /dev/dsk/c0t3d0s0    202        md         samqfs1  on
 /dev/dsk/c0t3d0s1    203        md         samqfs1  on
In the mcf file, delete the shared parameter from the additional parameters field in the last column of the file system entry. Then save the file and close the editor.
[samqfs1-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family   Device  Additional
# Identifier          Ordinal    Type       Set      State   Parameters
#------------------   ---------  ---------  -------  ------  -----------------
samqfs1               200        ma         samqfs1  on
 /dev/dsk/c0t0d0s0    201        mm         samqfs1  on
 /dev/dsk/c0t3d0s0    202        md         samqfs1  on
 /dev/dsk/c0t3d0s1    203        md         samqfs1  on
:wq
[samqfs1-mds]root@solaris:~#
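Where a scripted change is preferred over an interactive vi session, the same edit can be sketched non-interactively with sed. This is an illustration on a scratch copy, not the live configuration; the sample mcf content and the /tmp path are assumptions:

```shell
# Scratch copy of a shared file-system entry; content is a sample, not
# the real /etc/opt/SUNWsamfs/mcf.
cat > /tmp/mcf.sample <<'EOF'
samqfs1              200  ma  samqfs1  on  shared
 /dev/dsk/c0t0d0s0   201  mm  samqfs1  on
EOF
# Strip the trailing "shared" parameter from the family-set line only;
# the indented device lines do not match and pass through unchanged.
sed 's/^\(samqfs1[^#]*[[:space:]]\)shared[[:space:]]*$/\1/' /tmp/mcf.sample
```

Without the -i option, sed prints the edited result to standard output, which makes it easy to review the change before applying it to the real file.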
Open the /etc/vfstab file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------------------------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samqfs1    -        /samqfs1  samfs   -     yes      shared
In the /etc/vfstab file, delete the shared mount option from the mount options field in the last column of the file system entry. Then save the file and close the editor.
In the example, we use the vi editor:
[samqfs1-mds]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------------------------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
samqfs1    -        /samqfs1  samfs   -     yes
:wq
[samqfs1-mds]root@solaris:~#
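The vfstab edit can likewise be sketched non-interactively. The sample entry and /tmp path below are assumptions; note that writing "-" in the options field is an equivalent way of specifying no mount options:

```shell
# Sample vfstab entry for the shared file system; not the real /etc/vfstab.
cat > /tmp/vfstab.sample <<'EOF'
samqfs1  -  /samqfs1  samfs  -  yes  shared
EOF
# Remove "shared" from the mount-options field (field 7); write "-" when
# no options remain.
awk '$1 == "samqfs1" { sub(/,?shared/, "", $7); if ($7 == "") $7 = "-" } { print }' \
    /tmp/vfstab.sample
# → samqfs1 - /samqfs1 samfs - yes -
```

Because awk rebuilds the record when a field is assigned, the output is re-delimited with single spaces; realign the columns if the surrounding file uses fixed-width formatting.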
Delete the file /etc/opt/SUNWsamfs/hosts.file-system-name.
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
[samqfs1-mds]root@solaris:~# samd config
Mount the file system.
[samqfs1-mds]root@solaris:~# mount /samqfs1
Stop here.