This chapter covers file-system maintenance and reconfiguration tasks. The first section, Managing Oracle HSM File Systems, addresses maintenance of all Oracle HSM file systems, archiving and non-archiving, shared and unshared (standalone). The second section, Managing Oracle HSM Shared File Systems, deals with special considerations that affect shared file systems.
This section outlines the following tasks:
Set file system quotas to control the online and total storage space that a given user or collection of users can consume within the file system. You can set quotas by user ID, by group ID, or by an administrator-defined admin set ID that groups users by common characteristics, such as participation in a particular project. The admin set ID is especially useful when a project includes users from several groups and spans multiple directories and files.
You enable quotas by mounting a file system with the quota mount option (set by default) and disable them by mounting it with the noquota mount option. You define the quotas by placing one or more quota files in the file-system root directory: .quota_u, .quota_g, and .quota_a, which set quotas for users, groups, and admin sets, respectively. The first record in each file, record 0, sets the default values. Subsequent records set values specific to particular users, groups, or admin sets.
Quotas allocate usable file-system space, not simply storage space. They thus set upper limits on both the number of 512-byte blocks allocated on the media and the number of inodes allocated in the file system. The block count measures the storage space itself; the inode count measures the resources available for accessing that storage. A single file that uses a great many blocks of storage but only one inode can thus consume as much file-system space, for quota purposes, as a great many zero-length files that use many inodes and no blocks.
Each quota can include both soft and hard limits. A hard limit sets the maximum file-system resources that all of a given owner's files can ever use. A soft limit sets the maximum that the owner's files can use indefinitely: usage can rise above the soft limit, up to the hard limit, but only for the brief interval defined by the grace period in the quota.
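The interaction of the soft limit, hard limit, and grace period can be sketched as a small decision function. This is a hypothetical illustration of the policy described above, not an Oracle HSM interface; the function name and parameters are invented for the sketch.

```shell
# check_alloc: decide whether an owner's resource usage is acceptable.
#   used    - blocks (or inodes) currently allocated
#   soft    - soft limit
#   hard    - hard limit
#   elapsed - seconds the owner has already been above the soft limit
#   grace   - grace period in seconds
check_alloc() {
  local used=$1 soft=$2 hard=$3 elapsed=$4 grace=$5
  if [ "$used" -gt "$hard" ]; then
    echo denied            # the hard limit can never be exceeded
  elif [ "$used" -le "$soft" ]; then
    echo ok                # at or below the soft limit: no restriction
  elif [ "$elapsed" -le "$grace" ]; then
    echo grace             # above the soft limit, still within the grace period
  else
    echo denied            # grace period has expired
  fi
}

check_alloc 15000 20000 30000 0     43200   # ok
check_alloc 25000 20000 30000 10000 43200   # grace
check_alloc 25000 20000 30000 50000 43200   # denied
check_alloc 35000 20000 30000 0     43200   # denied
```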
This section describes the following administrative tasks:
characterizing the storage requirements of users, groups, and organizational units
creating admin sets for projects and for directories used by multiple groups
To set sustainable quotas, you have to set limits that accommodate user requirements in a way that is both manageable and scalable. So, before setting quotas, estimate the storage requirements of your users. To keep the process manageable, start by classifying user requirements as broadly as possible, so that you address the greatest number of requirements with the smallest amount of administrative effort. You can then separately assess the small number of user requirements that do not fit into the broader categories. The results will provide the broad outlines of the quotas and types of limits that you will set.
The approach outlined below starts by identifying the file-system requirements of access-control groups, since most organizations already define these groups. Then it defines special sets of users whose needs do not align well with those of the standard groups. Then and only then does it begin to address any requirements that are unique to individual users. Proceed as follows:
Since your existing access-control groups already gather together users with similar resource requirements, start by defining the average storage requirements of any groups that will use the file system. Estimate both the average amount of storage space used (in 512-kilobyte blocks) and the average number of files stored, which is equivalent to the average number of inodes used.
Since group members typically have similar organizational roles and work responsibilities, they frequently need access to the same directories and files and generally make similar demands on storage. In the example, we identify three groups that will use the file system /hsm/hqfs1: dev (Product Development), cit (Corporate Information Technologies), and pmgt (Program Management). We list the groups, the number of members in each, and their average individual and group requirements in a simple spreadsheet:
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group |
---|---|---|---|---|---|
dev | 30 | 67108864 | 500 | 2013265920 | 15000 |
cit | 15 | 10485760 | 50 | 157286400 | 750 |
pmgt | 6 | 20971520 | 200 | 125829120 | 1200 |
Total Blocks/Files (Average) | | | | | |
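The per-group columns are simply the per-user averages multiplied by the head count. A quick shell check of the dev row from the table above:

```shell
# Per-group totals are users x per-user averages (values from the table above).
dev_blocks=$((30 * 67108864))   # average blocks for the dev group
dev_files=$((30 * 500))         # average files for the dev group
echo "$dev_blocks $dev_files"   # 2013265920 15000
```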
Next, carry out the same calculations for the maximum amount of storage space and the maximum number of files that group members will store at any given time. Record the results.
In the example, we record the results in a new spreadsheet:
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group |
---|---|---|---|---|---|
dev | 30 | 100663296 | 1000 | 3019898880 | 30000 |
cit | 15 | 15728640 | 100 | 235929600 | 1500 |
pmgt | 6 | 31457280 | 400 | 188743680 | 2400 |
Total Blocks/Files (Maximum) | | | | | |
Now identify any sets of users that belong to different groups but share distinct storage requirements that cannot be addressed on the basis of group membership. Make the same estimates and carry out the same calculations for each identified organization as you did for each access-control group.
In the example, we identify two company projects that will require storage allocations, code-named portal and lockbox. Members of the engineering, marketing, compliance, test, and documentation groups will work together on these projects and will use the same directories and many of the same files. So we add them to our requirements spreadsheets:
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group |
---|---|---|---|---|---|
dev | 30 | 67108864 | 500 | 2013265920 | 15000 |
cit | 15 | 10485760 | 50 | 157286400 | 750 |
pmgt | 6 | 20971520 | 200 | 125829120 | 1200 |
portal | 10 | 31457280 | 400 | 314572800 | 4000 |
lockbox | 12 | 31457280 | 500 | 377487360 | 6000 |
Total Blocks/Files (Average) | | | | | |
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group |
---|---|---|---|---|---|
dev | 30 | 100663296 | 1000 | 3019898880 | 30000 |
cit | 15 | 15728640 | 100 | 235929600 | 1500 |
pmgt | 6 | 31457280 | 400 | 188743680 | 2400 |
portal | 10 | 37748736 | 700 | 377487360 | 7000 |
lockbox | 12 | 45613056 | 600 | 547356672 | 7200 |
Total Blocks/Files (Maximum) | | | | | |
Now identify any individual users whose requirements have not yet been addressed. Make the same estimates and carry out the same calculations for each user as you did for each access-control group and non-group organization.
Where possible, address user requirements collectively, so that policies are uniform and management overhead is at a minimum. However, when individual requirements are unique, you need to address them individually. In the example, we identify jr23547 in the pmgt group as a user whose special responsibilities require special storage allocations. So we add him to our requirements spreadsheets:
Group | Users Per Set | Average Blocks Per User | Average Files Per User | Average Blocks | Average Files |
---|---|---|---|---|---|
dev | 30 | 67108864 | 500 | 2013265920 | 15000 |
cit | 15 | 10485760 | 50 | 157286400 | 750 |
pmgt | 6 | 20971520 | 200 | 125829120 | 1200 |
portal | 10 | 31457280 | 400 | 314572800 | 4000 |
lockbox | 12 | 31457280 | 500 | 377487360 | 6000 |
jr23547 | 1 | 10485760 | 600 | 10485760 | 600 |
Total Blocks/Files (Average) | | | | | |
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group |
---|---|---|---|---|---|
dev | 30 | 100663296 | 1000 | 3019898880 | 30000 |
cit | 15 | 15728640 | 100 | 235929600 | 1500 |
pmgt | 6 | 31457280 | 400 | 188743680 | 2400 |
portal | 10 | 37748736 | 700 | 377487360 | 7000 |
lockbox | 12 | 45613056 | 600 | 547356672 | 7200 |
jr23547 | 1 | 100663296 | 2000 | 100663296 | 2000 |
Total Blocks/Files (Maximum) | | | | | |
Finally, calculate the average and maximum blocks and files that all users require.
Group | Users | Average Blocks Per User | Average Files Per User | Average Blocks/Group | Average Files/Group |
---|---|---|---|---|---|
dev | 30 | 67108864 | 500 | 2013265920 | 15000 |
cit | 15 | 10485760 | 50 | 157286400 | 750 |
pmgt | 6 | 20971520 | 200 | 125829120 | 1200 |
portal | 10 | 31457280 | 400 | 314572800 | 4000 |
lockbox | 12 | 31457280 | 500 | 377487360 | 6000 |
jr23547 | 1 | 10485760 | 600 | 10485760 | 600 |
Total Blocks/Files (Average) | | | | 2998927360 | 27550 |
Group | Users | Maximum Blocks Per User | Maximum Files Per User | Maximum Blocks/Group | Maximum Files/Group |
---|---|---|---|---|---|
dev | 30 | 100663296 | 1000 | 3019898880 | 30000 |
cit | 15 | 15728640 | 100 | 235929600 | 1500 |
pmgt | 6 | 31457280 | 400 | 188743680 | 2400 |
portal | 10 | 37748736 | 700 | 377487360 | 7000 |
lockbox | 12 | 45613056 | 600 | 547356672 | 7200 |
jr23547 | 1 | 100663296 | 2000 | 100663296 | 2000 |
Total Blocks/Files (Maximum) | | | | 4470079488 | 50100 |
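The Total rows are column sums of the per-group values from the tables above, which a quick shell check confirms:

```shell
# Sum the Average Blocks/Group and Average Files/Group columns.
avg_blocks=$((2013265920 + 157286400 + 125829120 + 314572800 + 377487360 + 10485760))
avg_files=$((15000 + 750 + 1200 + 4000 + 6000 + 600))
# Sum the Maximum Blocks/Group and Maximum Files/Group columns.
max_blocks=$((3019898880 + 235929600 + 188743680 + 377487360 + 547356672 + 100663296))
max_files=$((30000 + 1500 + 2400 + 7000 + 7200 + 2000))
echo "$avg_blocks $avg_files"   # 2998927360 27550
echo "$max_blocks $max_files"   # 4470079488 50100
```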
If you need to administer project-based quotas or other quotas that cannot be defined by access-control group and user IDs, create admin sets for projects and for directories used by multiple groups.
If you need to set quotas on newly created file systems that do not yet hold files, enable quotas on these new file systems now.
Otherwise, if you need to set quotas on existing file systems that already hold files, enable quotas on these older file systems now.
An admin set is a directory hierarchy or an individual directory or file that is identified for quota purposes by an admin set ID. All files created with a specified admin set ID or stored in a directory with a specified admin set ID have the same quotas, regardless of the user or group IDs that actually own the files. To define admin sets, proceed as follows:
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
If you are using an admin set to configure storage quotas for a new project or team, create a new directory somewhere within the file system for this project or team.
In the example, we create the directory in the /hsm/hqfs1 file system and name it portalproject/ for the project of the same name:
root@mds:~# mkdir /hsm/hqfs1/portalproject
Assign an admin set ID to the directory or file on which you need to set a quota. Use the command samchaid [-fhR] admin-set-id directory-or-file-name, where:
-f forces the assignment and does not report errors.
-h assigns the admin set ID to a symbolic link itself. Without this option, the admin set ID of the file referenced by the symbolic link is changed.
-R assigns the admin set ID recursively to subdirectories and files.
admin-set-id is a unique integer value.
directory-or-file-name is the name of the directory or file to which you are assigning the admin set ID.
In the example, we assign the admin set ID 1 to the directory /hsm/hqfs1/portalproject/ and all of its subdirectories and files:
root@mds:~# samchaid -R 1 /hsm/hqfs1/portalproject/
You can check the assignment, if desired. Use the command sls -D directory-path, where -D specifies a detailed Oracle HSM directory listing for files and directories in directory-path:
root@mds:~# sls -D /hsm/hqfs1/portalproject
/hsm/hqfs1/portalproject:
  mode: drwxr-xr-x  links: 2  owner: root  group: root
  length: 4096  admin id: 1  inode: 1047.1  project: user.root(1)
  access: Feb 24 12:49  modification: Feb 24 12:44
  changed: Feb 24 12:49  attributes: Feb 24 12:44
  creation: Feb 24 12:44  residence: Feb 24 12:44
If you need to set quotas on newly created file systems that do not yet hold files, enable quotas on these new file systems now.
Otherwise, if you need to set quotas on existing file systems that already hold files, enable quotas on these older file systems now.
Use this procedure if you are creating a new file system and if no files currently reside in the file system.
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
If the new file system is not currently mounted, mount it before proceeding.
root@mds:~# mount /hsm/newfs
root@mds:~#
If you have to set up quotas for groups, create a group quota file, .quota_g, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_g bs=4096 count=number-blocks, where:
if=/dev/zero specifies null characters from the UNIX special file /dev/zero as the input.
of=mountpoint/.quota_g specifies the output file, where mountpoint is the mount point directory for the file system.
bs=4096 sets the block size for the write to 4096 bytes.
count=number-blocks specifies the number of blocks to write. This value depends on the number of records that the file will hold. There is one 128-byte record for each specified quota, so one block can accommodate 32 records.
In the example, we create the group quota file for the file system newfs mounted at /hsm/newfs. During the requirements-gathering phase, we identified three groups that need quotas on the file system: dev, cit, and pmgt. We do not anticipate adding any other group quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/newfs/.quota_g bs=4096 count=1
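The sizing arithmetic behind the count= value can be checked in the shell: each quota record is 128 bytes and dd writes 4096-byte blocks, so one block holds 32 records, and three group quotas fit comfortably in a single block.

```shell
# Sizing a quota file: 128-byte records in 4096-byte blocks.
records_per_block=$((4096 / 128))   # 32 records per block
quotas=3                            # dev, cit, and pmgt
# Round up to the number of 4096-byte blocks needed for the dd count= operand.
count=$(( (quotas + records_per_block - 1) / records_per_block ))
echo "$records_per_block $count"    # 32 1
```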
If you have to set up quotas for admin sets, create an admin set quota file, .quota_a, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_a bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_a is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the admin sets quota file for the file system newfs mounted at /hsm/newfs. During the requirements-gathering phase, we identified two projects that need quotas on the file system: portal (admin set ID 1) and lockbox (admin set ID 2). We do not anticipate adding any other admin set quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/newfs/.quota_a bs=4096 count=1
If you have to set up quotas for users, create a user quota file, .quota_u, in the file-system root directory. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_u bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_u is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the user quota file for the file system newfs mounted at /hsm/newfs. During the requirements-gathering phase, we identified one user that needs specific quotas on the file system, jr23547. We do not anticipate adding any other individual user quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/newfs/.quota_u bs=4096 count=1
Unmount the file system.
You must unmount the file system before you can remount it and enable the quota files.
root@mds:~# umount /hsm/newfs
Perform a file system check.
root@mds:~# samfsck -F newfs
samfsck: Configuring file system
samfsck: Enabling the sam-fsd service.
name: newfs  version: 2A
First pass
Second pass
Third pass
...
root@mds:~#
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
root@mds:~# mount /hsm/newfs
If you need to set quotas on existing file systems that already hold files, enable quotas on these older file systems now.
Otherwise, set quotas as needed.
Use this procedure if you are creating quotas for a file system that already holds files.
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
Open the /etc/vfstab file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has been set:
root@mds:~# vi /etc/vfstab
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  -------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     no       noquota
If the noquota mount option has been set in the /etc/vfstab file, delete it and save the file.
root@mds:~# vi /etc/vfstab
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  -------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     no       -
:wq
root@mds:~#
If a samfs.cmd file exists in the directory /etc/opt/SUNWsamfs/, open it in a text editor. In the section fs = filesystem, where filesystem is the name of the file system in which you are enabling quotas, search for the noquota mount option.
In the example, the noquota mount option has been applied to the hqfs1 file system:
root@mds:~# vi /etc/opt/SUNWsamfs/samfs.cmd
#inodes = 0
fs = hqfs1
# forcedirectio (default no forcedirectio)
high = 80
low = 70
# weight_size = 1.
# weight_age = 1.
# readahead = 128
...
noquota
# stripe = 1 (ms filesystem or ma filesystem with mr disks)
# stripe = 0 (ma filesystem with striped groups)
# dio_rd_form_min = 256
If the noquota mount option has been set in the /etc/opt/SUNWsamfs/samfs.cmd file, comment it out or delete it. Save the file, and close the editor.
In the example, we comment out the line:
root@mds:~# vi /etc/opt/SUNWsamfs/samfs.cmd
#inodes = 0
fs = hqfs1
# forcedirectio (default no forcedirectio)
high = 80
low = 70
# weight_size = 1.
# weight_age = 1.
# readahead = 128
...
# noquota
# stripe = 1 (ms filesystem or ma filesystem with mr disks)
# stripe = 0 (ma filesystem with striped groups)
# dio_rd_form_min = 256
:wq
root@mds:~#
If you made changes to the /etc/vfstab and/or /etc/opt/SUNWsamfs/samfs.cmd files, unmount the file system and then remount it.
When you remove or disable the noquota mount option in a configuration file, you must unmount and remount the affected file system to enable quotas.
root@mds:~# umount /hsm/hqfs1
root@mds:~# mount /hsm/hqfs1
root@mds:~#
Change to the root directory of the file system and check for any existing quota files. Use the Solaris command ls -a and look for the files .quota_g, .quota_a, and/or .quota_u.
In the example, no quota files currently exist.
root@mds:~# cd /hsm/hqfs1
root@mds:~# ls -a /hsm/hqfs1
.   .archive  .fuid    .stage      portalproject
..  .domain   .inodes  lost+found
root@mds:~#
If quota files exist, do not modify them.
If you have to set up quotas for groups and the group quota file, .quota_g, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_g bs=4096 count=number-blocks, where:
if=/dev/zero specifies null characters from the UNIX special file /dev/zero as the input.
of=mountpoint/.quota_g specifies the output file, where mountpoint is the mount point directory for the file system.
bs=4096 sets the block size for the write to 4096 bytes.
count=number-blocks specifies the number of blocks to write. This value depends on the number of records that the file will hold. There is one 128-byte record for each specified quota, so one block can accommodate 32 records.
In the example, we create the group quota file for the file system hqfs1 mounted at /hsm/hqfs1. During the requirements-gathering phase, we identified three groups that need quotas on the file system: dev, cit, and pmgt. We do not anticipate adding any other group quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/hqfs1/.quota_g bs=4096 count=1
1+0 records in
1+0 records out
root@mds:~#
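The same dd invocation can be tried safely against a scratch directory (not a real mount point) to confirm that it produces a 4096-byte file of zeros; the scratch path below is created just for the illustration.

```shell
# Run the dd invocation from the procedure against a throwaway directory
# and verify the resulting quota-file size.
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/.quota_g" bs=4096 count=1 2>/dev/null
wc -c < "$scratch/.quota_g"    # reports 4096 bytes
rm -r "$scratch"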
If you have to set up quotas for admin sets and the admin sets quota file, .quota_a, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_a bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_a is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the admin sets quota file for the file system hqfs1 mounted at /hsm/hqfs1. During the requirements-gathering phase, we identified two projects that need quotas on the file system: portal (admin set ID 1) and lockbox (admin set ID 2). We do not anticipate adding any other admin set quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/hqfs1/.quota_a bs=4096 count=1
1+0 records in
1+0 records out
root@mds:~#
If you have to set up quotas for users and the user quota file, .quota_u, does not already exist in the file-system root directory, create the file now. Use the Solaris command dd if=/dev/zero of=mountpoint/.quota_u bs=4096 count=number-blocks, where:
mountpoint is the mount point directory for the file system.
.quota_u is the name of the output file.
4096 is the block size for the write in bytes.
number-blocks is the number of blocks to write.
In the example, we create the user quota file for the file system hqfs1 mounted at /hsm/hqfs1. During the requirements-gathering phase, we identified one user that needs specific quotas on the file system, jr23547. We do not anticipate adding any other individual user quotas, so we size the file at one block:
root@mds:~# dd if=/dev/zero of=/hsm/hqfs1/.quota_u bs=4096 count=1
1+0 records in
1+0 records out
root@mds:~#
Unmount the file system.
root@mds:~# umount /hsm/hqfs1
root@mds:~#
Perform a file system check.
root@mds:~# samfsck -F hqfs1
samfsck: Configuring file system
samfsck: Enabling the sam-fsd service.
name: hqfs1  version: 2A
First pass
Second pass
Third pass
...
root@mds:~#
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
root@mds:~# mount /hsm/hqfs1
root@mds:~#
Otherwise, set quotas as needed.
You set new quotas and adjust existing ones using the samquota command. Follow the procedure below:
Once you have characterized storage requirements, decide on the appropriate quotas for each group, user, and non-group organization. Consider the following factors and make adjustments as necessary:
the size of the file system compared to the average and maximum number of blocks that all users require
the number of inodes in the file system compared to the average and maximum number of inodes that all users require
the numbers and types of users that are likely to be close to their maximum requirement at any given time.
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
Set limits for each group that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -G groupID [directory-or-file], where:
-b number-blocks sets the maximum number of 512-kilobyte blocks that can be stored in the file system to number-blocks, an integer (see the samquota man page for alternative ways of specifying size). A value of 0 (zero) specifies an unlimited number of blocks.
: is a field separator.
type specifies the kind of limit, either h for a hard limit or s for a soft limit.
scope (optional) identifies the type of storage that is subject to the limit: either o for online (disk-cache) storage only or t for total storage, which includes both disk-cache and archival storage (the default).
-f number-files sets the maximum number of files that can be stored in the file system to number-files, an integer. A value of 0 (zero) specifies an unlimited number of files.
-t interval sets the grace period, the time during which soft limits can be exceeded, to interval, an integer number of seconds (see the samquota man page for alternative ways of specifying time).
-G groupID specifies a group name or integer identifier for the group. A value of 0 (zero) sets the default limits for all groups.
directory-or-file (optional) is the mount point directory for a specific file system or a specific directory or file on which you need to set a quota.
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /hsm/mds file system that group dev can use and on the number of files that it can store. We set the grace period to 43200 seconds (twelve hours) for online storage only:
root@mds:~# samquota -b 3019898880:h:t -f 30000:h:t -G dev /hsm/mds
root@mds:~# samquota -b 2013265920:s:t -f 15000:s:t -t 43200:o -G dev /hsm/mds
root@mds:~#
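When many groups need limits, it can help to assemble the samquota command lines from the spreadsheet values. The helper below is hypothetical (not part of Oracle HSM); it only prints the command it composes rather than executing it, and it assumes total-storage scope (:t) for both limits.

```shell
# build_group_quota: compose a samquota group-limit command line.
#   $1 group name, $2 block limit, $3 file limit,
#   $4 limit type (h or s), $5 file-system mount point
build_group_quota() {
  local group=$1 blocks=$2 files=$3 limit_type=$4 fs=$5
  echo "samquota -b ${blocks}:${limit_type}:t -f ${files}:${limit_type}:t -G ${group} ${fs}"
}

# Reproduce the hard-limit command for the dev group shown above.
build_group_quota dev 3019898880 30000 h /hsm/mds
```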
Set limits for each admin set that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -A adminsetID [directory-or-file], where -A adminsetID is the integer value that uniquely identifies the admin set.
Setting adminsetID to 0 (zero) sets the default limits for all admin sets.
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /hsm/mds file system that the portal project (admin set ID 1) can use and on the number of files that it can store. We set the grace period to 43200 seconds (twelve hours) for total storage used, which is the default scope:
root@mds:~# samquota -b 377487360:h:t -f 7000:h:t -A 1 /hsm/mds
root@mds:~# samquota -b 314572800:s:t -f 4000:s:t -t 43200 -A 1 /hsm/mds
root@mds:~#
Set limits for each individual user that requires them. Use the command samquota -b number-blocks:type[:scope] -f number-files:type[:scope] -t interval[:scope] -U userID [directory-or-file], where -U userID is a user name or integer identifier for the user.
Setting userID to 0 (zero) sets the default limits for all users.
In the example, we use our estimates from the requirements-gathering phase to set both hard and soft limits on the amount of storage space in the /hsm/mds file system that user jr23547 can use and on the number of files that jr23547 can store. We set the grace period to 1209600 seconds (two weeks) for total storage used, which is the default scope:
root@mds:~# samquota -b 100663296:h:t -f 2000:h:t -U jr23547 /hsm/mds
root@mds:~# samquota -b 10485760:s:t -f 600:s:t -t 1209600 -U jr23547 /hsm/mds
root@mds:~#
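Since samquota takes grace periods in seconds, a quick conversion confirms the intervals used in these examples:

```shell
# Convert the grace periods used above into readable units.
echo "$((43200 / 3600)) hours"    # 12 hours
echo "$((1209600 / 86400)) days"  # 14 days (two weeks)
```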
Stop here.
If you mount an Oracle HSM file system with the noquota mount option when there are quota files in the root directory, quota records become inconsistent as blocks or files are allocated or freed. In this situation, proceed as follows:
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
Unmount the affected file system.
In the example, we unmount file system hqfs1:
root@mds:~# umount /hsm/hqfs1
root@mds:~#
Open the /etc/vfstab file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has been set:
root@mds:~# vi /etc/vfstab
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  -------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     no       noquota
If the noquota mount option has been set in the /etc/vfstab file, delete it and save the file.
root@mds:~# vi /etc/vfstab
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  -------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     no       -
:wq
root@mds:~#
Open the /etc/opt/SUNWsamfs/samfs.cmd file in a text editor, and make sure that the noquota mount option has not been set.
In the example, we open the file in the vi text editor. The noquota mount option has not been set:
root@mds:~# vi /etc/opt/SUNWsamfs/samfs.cmd
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
#
#inodes = 0
#fs = mds
# forcedirectio (default no forcedirectio)
# high = 80
# low = 70
# weight_size = 1.
# weight_age = 1.
# readahead = 128
...
# dio_wr_ill_min = 0
# dio_wr_consec = 3
# qwrite (ma filesystem, default no qwrite)
# shared_writer (ma filesystem, default no shared_writer)
# shared_reader (ma filesystem, default no shared_reader)
If the noquota mount option has been set in the /etc/opt/SUNWsamfs/samfs.cmd file, delete it and save the file.
Repair the inconsistent quota records. Use the command samfsck -F family-set-name, where family-set-name is the family set name for the file system in the /etc/opt/SUNWsamfs/mcf file.
root@mds:~# samfsck -F hqfs1
samfsck: Configuring file system
samfsck: Enabling the sam-fsd service.
name: hqfs1  version: 2A
First pass
Second pass
Third pass
...
root@mds:~#
Remount the file system.
The system enables quotas when it detects one or more quota files in the root directory of the file system.
You do not need to include the quota mount option in the /etc/vfstab or samfs.cmd file, because file systems are mounted with quotas enabled by default.
root@mds:~# mount /hsm/hqfs1
root@mds:~#
Stop here.
Both administrators and users can monitor quotas and resource usage. The root user can generate quota reports on users, groups, or admin sets with the samquota command. File-system users can check their own quotas using the squota command:
Log in to the file-system server as root.
In the example, the server is named mds:
root@mds:~#
To display quota statistics for all groups, use the command samquota -g [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the hqfs1 file system, which is mounted at /hsm/hqfs1:
root@mds:~# samquota -g /hsm/hqfs1
To display quota statistics for all admin sets, use the command samquota -a [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the hqfs1 file system, which is mounted at /hsm/hqfs1:
root@mds:~# samquota -a /hsm/hqfs1
To display quota statistics for all users, use the command samquota -u [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for the hqfs1 file system, which is mounted at /hsm/hqfs1:
root@mds:~# samquota -u /hsm/hqfs1
To display quota statistics for a specific group, use the command samquota -G groupID [directory-or-file], where groupID specifies a group name or integer identifier for the group and the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on quotas for the dev
group in the hqfs1
file system, which is mounted at /hsm/hqfs1
:
root@mds:~# samquota -G dev /hsm/hqfs1
To display quota statistics for a specific admin set, use the command samquota -A adminsetID [directory-or-file], where adminsetID specifies an integer identifier for the admin set, and where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on quotas for admin set 1 in the hqfs1 file system, which is mounted at /hsm/hqfs1:
root@mds:~# samquota -A 1 /hsm/hqfs1
To display quota statistics for a specific user, use the command samquota -U userID [directory-or-file], where userID specifies a user name or integer identifier for the user, and where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report on the quotas for user jr23547 in the hqfs1 file system, which is mounted at /hsm/hqfs1:
root@mds:~# samquota -U jr23547 /hsm/hqfs1
Stop here.
Log in to a file-system host using your user ID.
In the example, we log in to host mds as user od447:
od447@mds:~#
To display your own quota statistics, use the command squota [directory-or-file], where the optional directory-or-file parameter limits the scope of the report to the file system mounted on the specified directory, the specified directory itself, or the specified file.
In the example, we request a report for all file systems:
od447@mds:~# squota
Limits
Type ID In Use Soft Hard
/hsm/hqfs1
Files group 101 1 1000 1200
Blocks group 101 8 20000 30000
Grace period 25920
No user quota entry.
od447@mds:~#
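Quota block limits, such as the 20000-block soft limit in the report above, are counted in 512-byte blocks. A minimal sketch of the conversion to bytes (the helper variables are ours, not part of Oracle HSM):

```shell
# Quota block limits are expressed in 512-byte blocks.
# Convert the 20000-block soft limit from the report above into bytes:
soft_blocks=20000
soft_bytes=$(( soft_blocks * 512 ))
echo "$soft_bytes"   # prints 10240000, roughly 9.8 MiB
```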
Stop here.
When you need to, you can do any of the following:
If a group, user, or admin set has exceeded the specified soft limit for its quota and needs to remain above the soft limit temporarily but for a period that is longer than the current grace period allows, you can grant the extension as follows:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Check the quota that requires an extension. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
In the example, the dev group is significantly over the soft limit and has only a couple of hours left in its grace period:
root@mds:~# samquota -G dev /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 101 323 15000 30000 323 15000 30000 Blocks group 101 3109330961 2013265920 3019898880 3109330961 2013265920 3019898880 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 2h21m16s root@mds:~#
Extend the grace period, if warranted. Use the command samquota -quota-type ID -x number-seconds [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to extend a grace period.
number-seconds is an integer representing the number of seconds in the extension (see the samquota man page for alternative ways of specifying time).
Enter y (yes) when prompted to continue.
In the example, we extend the grace period for the dev group to 2678400 seconds (31 days) for files in the hqfs1 file system:
root@mds:~# samquota -G dev -x 2678400 /hsm/hqfs1 Setting Grace Timer: continue? y
When we recheck the dev group quota, the grace period has been extended:
root@mds:~# samquota -G dev /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 101 323 15000 30000 323 15000 30000 Blocks group 101 43208 2013265920 3019898880 43208 2013265920 3019898880 Grace period 2678400 2678400 ---> Warning: soft limits to be enforced in 31d root@mds:~#
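The -x option takes its argument in seconds, so it helps to compute the value first. A quick sketch of the arithmetic behind the 31-day figure used above (the variable names are ours):

```shell
# samquota -x expects seconds; compute 31 days' worth:
days=31
seconds=$(( days * 24 * 60 * 60 ))
echo "$seconds"   # prints 2678400
```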
If a group, admin set, or user regularly needs extensions, re-evaluate storage requirements and consider setting new quotas.
Stop here.
If a group, user, or admin set has exceeded the specified soft limit for its quota and cannot free space quickly enough to get below the soft limit before the current grace period expires, you can restart the grace period. Proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Check the quota that requires attention. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file for which you need to restart a grace period.
In the example, the cit group is over the soft limit for the hqfs1 file system and has just over an hour left in its grace period:
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 762 750 1500 762 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 1h11m23s root@mds:~#
To reset the grace period to its full starting size the next time that a file or block is allocated, clear the grace period timer. Use the command samquota -quota-type ID -x clear [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to reset a grace period.
Enter y (yes) when prompted to continue.
In the example, we clear the grace-period timer for the cit group's quota on the hqfs1 file system:
root@mds:~# samquota -G cit -x clear /hsm/hqfs1 Setting Grace Timer: continue? y root@mds:~#
When we recheck the cit group quota, a file has been allocated and the grace period has been reset to its starting size (4320 seconds):
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 763 750 1500 763 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 12h root@mds:~#
Alternatively, to reset the grace period to its full starting size immediately, reset the grace period timer. Use the command samquota -quota-type ID -x reset [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or the specific directory or file for which you need to reset a grace period.
Enter y (yes) when prompted to continue.
In the example, we reset the grace-period timer for the cit group's quota on the hqfs1 file system:
root@mds:~# samquota -G cit -x reset /hsm/hqfs1 Setting Grace Timer: continue? y root@mds:~#
When we recheck the cit group quota, the grace period has been reset to its starting size (4320 seconds):
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 762 750 1500 762 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 12h root@mds:~#
Stop here.
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Check the grace period that you need to cut short. Use the command samquota -quota-type ID [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file for which you need to expire a grace period.
In the example, the cit group is over the soft limit and has eleven hours left in its grace period, but we need to end the grace period early:
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 822 750 1500 822 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 11h root@mds:~#
Expire the grace period. Use the command samquota -quota-type ID -x expire [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file for which you need to expire a grace period.
In the example, we expire the grace period for the cit group:
root@mds:~# samquota -G cit -x expire /hsm/hqfs1 Setting Grace Timer: continue? y
When we recheck quotas, soft limits for the cit group are being enforced as hard limits:
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 762 750 1500 762 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Online soft limits under enforcement (since 6s ago) root@mds:~#
Stop here.
You can inhibit file-system resource allocations by creating inconsistent quota values. When the file system detects that quota values are not consistent for a user, group, or admin set, it prevents that user, group, or admin set from using any more system resources. So setting the hard limit for a quota lower than the corresponding soft limit stops further allocations. To use this technique, proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Back up the quota so that you can restore it later. Export the current configuration, and redirect the information to a file. Use the command samquota -quota-type ID -e [directory-or-file] > file, where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is either the mount point directory for a specific file system or a specific directory or file whose quota you need to export.
file is the name of the output file.
In the example, we export the quota for the cit group to the file restore.hqfs1.quota_g.cit in the root user's home directory:
root@mds:~# samquota -G cit -e /hsm/hqfs1 > /root/restore.hqfs1.quota_g.cit root@mds:~#
Check the output. Use the Solaris command more < file, where file is the name of the output file.
root@mds:~# more < /root/restore.hqfs1.quota_g.cit
# Type ID
# Online Limits Total Limits
# soft hard soft hard
# Files
# Blocks
# Grace Periods
samquota -G 119 \
-f 750:s:o -f 1500:h:o -f 750:s:t -f 1500:h:t \
-b 157286400:s:o -b 235929600:h:o -b 157286400:s:t -b 235929600:h:t \
-t 4320:o -t 4320:t
root@mds:~#
Set the hard limits for the quota to 0 (zero) and the soft limits to 1 (or any non-zero value). Use the command samquota -quota-type ID -f 1:s -f 0:h -b 1:s -b 0:h [directory-or-file], where:
quota-type ID is G plus a group name or ID number, A plus an admin set ID number, or U plus a user name or ID number.
directory-or-file (optional) is the mount point directory for a specific file system or a specific directory or file whose quota you need to change.
In the example, we make the quota settings for the cit group in the /hsm/hqfs1 file system inconsistent, and thereby stop new resource allocations:
root@mds:~# samquota -G cit -f 1:s -f 0:h -b 1:s -b 0:h /hsm/hqfs1 root@mds:~#
When we check the quota for the cit group, zero quotas are in effect. The exclamation point characters (!) show all current use as over-quota, so no further allocations will be made:
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /sam6 Files group 119 822! 1 0 822! 1 0 Blocks group 119 3109330961! 1 0 3109330961! 1 0 Grace period 4320 4320 ---> Quota values inconsistent; zero quotas in effect. root@mds:~#
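The blocking effect rests on a simple rule: a hard limit lower than its soft limit is inconsistent, and inconsistent quotas stop further allocations. A sketch of that comparison in isolation (illustrative shell only, not Oracle HSM code):

```shell
# An inconsistent quota pair: hard limit below soft limit.
soft=1
hard=0
if [ "$hard" -lt "$soft" ]; then
    echo "inconsistent: zero quotas in effect"   # this branch is taken
fi
```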
When you are ready to resume normal allocations, restore the modified quota to its original state. Execute the backup file that you created as a shell script. Use the Solaris command sh file, where file is the name of the backup file.
In the example, we restore the quota for the cit group by executing the file /root/restore.hqfs1.quota_g.cit:
root@mds:~# sh /root/restore.hqfs1.quota_g.cit Setting Grace Timer: continue? y Setting Grace Timer: continue? y root@mds:~#
When we check the quota, normal limits have been restored and allocations are no longer blocked:
root@mds:~# samquota -G cit /hsm/hqfs1 Online Limits Total Limits Type ID In Use Soft Hard In Use Soft Hard /hsm/hqfs1 Files group 119 822 750 1500 822 750 1500 Blocks group 119 3109330961 2013265920 3019898880 120096782 157286400 235929600 Grace period 4320 4320 ---> Warning: soft limits to be enforced in 11h root@mds:~#
Stop here.
To remove or disable quotas for a file system, disable quotas in the mount process.
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Open the /etc/vfstab file in a text editor, add the noquota mount option to the mount-options column of the file-system row, and save the file.
In the example, we open the file in the vi text editor and set the noquota mount option for the hqfs1 file system:
root@mds:~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #------------ ------- --------------- ------ ---- ------- ------------ /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - no noquota :wq root@mds:~#
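If you prefer a scripted edit to vi, the same one-field change can be made with awk. A hedged sketch that operates on a one-line stand-in for the hqfs1 row in a scratch file (test against a copy before touching the real /etc/vfstab):

```shell
# Rewrite the mount-options field (column 7) of the hqfs1 row to "noquota".
# We work on a scratch copy, never the live /etc/vfstab.
cp=/tmp/vfstab.demo
printf 'hqfs1 - /hsm/hqfs1 samfs - no -\n' > "$cp"
awk '$1 == "hqfs1" { $7 = "noquota" } { print }' "$cp"
# prints: hqfs1 - /hsm/hqfs1 samfs - no noquota
```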
If the file system is mounted, unmount it.
You must unmount and then remount a file system so that the operating system reloads the /etc/vfstab file and applies the specified changes. In the example, we unmount the hqfs1 file system:
root@mds:~# umount /hsm/hqfs1 root@mds:~#
Mount the file system.
In the example, we mount the hqfs1 file system:
root@mds:~# mount /hsm/hqfs1 root@mds:~#
If you expect to reinstate quotas later, leave the quota files in place.
If you do not expect to reinstate quotas, or if you need to reclaim the space consumed by quota files, use the Solaris command rm to delete the files .quota_g, .quota_a, and/or .quota_u from the root directory of the file system.
In the example, we remove all quota files from the /hsm/hqfs1 file-system root directory:
root@mds:~# rm /hsm/hqfs1/.quota_g root@mds:~# rm /hsm/hqfs1/.quota_a root@mds:~# rm /hsm/hqfs1/.quota_u root@mds:~#
Stop here.
In general, you manage archiving file systems in much the same way as you would non-archiving file systems. However, you must stop the archiving process before carrying out most file-system management tasks. When active, the archiving processes make changes to the file system's primary disk cache, so you must quiesce these processes before you do maintenance work on the disk cache. This section covers the following tasks:
Log in to the file-system host as root.
In the example, we log in to host mds:
root@mds:~#
Idle all archiving processes. Use the command samcmd aridle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@mds:~# samcmd aridle root@mds:~#
Idle all staging processes. Use the command samcmd stidle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@mds:~# samcmd stidle root@mds:~#
Wait for active archiving jobs to complete. Check the status of the archiving processes using the command samcmd a.
When archiving processes are Waiting for :arrun, the archiving process is idle:
root@mds:~# samcmd a Archiver status samcmd 5.4 10:20:34 May 20 2014 samcmd on samfs-mds sam-archiverd: Waiting for :arrun sam-arfind: ... Waiting for :arrun
Wait for active staging jobs to complete. Check the status of the staging processes using the command samcmd u.
When staging processes are Waiting for :strun, the staging process is idle:
root@mds:~# samcmd u Staging queue samcmd 5.4 10:20:34 May 20 2014 samcmd on solaris.demo.lan Staging queue by media type: all sam-stagerd: Waiting for :strun root@mds:~#
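When quiescing is scripted, the two status checks above reduce to a grep for the idle markers. A sketch under the assumption that samcmd output looks like the transcripts shown (here we grep captured text rather than calling samcmd):

```shell
# Check captured samcmd output for the idle markers shown above.
archiver_status='sam-archiverd:  Waiting for :arrun'
stager_status='sam-stagerd:  Waiting for :strun'

echo "$archiver_status" | grep -q 'Waiting for :arrun' && echo "archiver idle"
echo "$stager_status"   | grep -q 'Waiting for :strun' && echo "stager idle"
```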
To fully quiesce the system, stop archiving and staging processes as well.
If you have not already done so, idle archiving and staging processes.
If you have not already done so, log in to the file-system host as root.
In the example, we log in to host mds:
root@mds:~#
Idle all removable media drives before proceeding further. For each drive, use the command samcmd equipment-number idle, where equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.
This command will allow current archiving and staging jobs to complete before turning drives off, but will not start any new work. In the example, we idle four drives, with ordinal numbers 801, 802, 803, and 804:
root@mds:~# samcmd 801 idle root@mds:~# samcmd 802 idle root@mds:~# samcmd 803 idle root@mds:~# samcmd 804 idle root@mds:~#
Wait for running jobs to complete.
We can check the status of the drives using the command samcmd r. When all drives are notrdy and empty, we are ready to proceed:
root@mds:~# samcmd r Removable media samcmd 5.4 18:37:09 Feb 17 2014 samcmd on hqfs1host ty eq status act use state vsn li 801 ---------p 0 0% notrdy empty li 802 ---------p 0 0% notrdy empty li 803 ---------p 0 0% notrdy empty li 804 ---------p 0 0% notrdy empty root@mds:~#
When the archiver and stager processes are idle and the tape drives are all notrdy, stop the library-control daemon. Use the command samd stop.
root@mds:~# samd stop root@mds:~#
Proceed with file-system maintenance.
When maintenance is complete, restart archiving and staging processes.
When you restart operations, pending stages are reissued and archiving is resumed.
Stop here.
When you are ready to resume normal, automatic operation, proceed as follows:
Log in to the file-system host as root.
In the example, we log in to host mds:
root@mds:~#
Restart the Oracle HSM library-control daemon. Use the command samd start.
root@mds:~# samd start root@mds:~#
Stop here.
Renaming a file system is a two-step process. First you change the family set name for the file system by editing the /etc/opt/SUNWsamfs/mcf file. Then you have the samfsck -R -F command read the new name and update the superblock on the corresponding disk devices. To rename a file system, use the procedure below:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
If you are renaming an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
Unmount the file system that you need to rename.
In the example, we unmount file system hqfs1:
root@mds:~# umount hqfs1 root@mds:~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor and, in the first column of the file, change the equipment identifier of the file system to the new name.
In the example, we use the vi editor to change the name of the file system hqfs1 (equipment ordinal 100) to hpcc:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #----------------- --------- --------- ------------ ------ ---------- hpcc 100 ms hqfs1 on /dev/dsk/c1t3d0s3 101 md hqfs1 on /dev/dsk/c1t4d0s5 102 md hqfs1 on
In the fourth column of the file, change the family set name of the file system to the new value. You may also change the file-system equipment identifier in the first column, but do not change anything else. Save the file and close the editor.
In the example, we change the family set name of the file system from hqfs1 to hpcc:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #----------------- --------- --------- ------------ ------ ---------- hpcc 100 ms hpcc on /dev/dsk/c1t3d0s3 101 md hpcc on /dev/dsk/c1t4d0s5 102 md hpcc on :wq root@mds:~#
If you are renaming an archiving file system, update the corresponding file-system directive in the archiver.cmd file and, if configured, the stager.cmd file.
In the example, we use the vi editor to change the directive fs = hqfs1 to fs = hpcc:
root@mds:~# vi /etc/opt/SUNWsamfs/archiver.cmd # archiver.cmd: configuration file for archiving file systems #----------------------------------------------------------------------- # General Directives archivemeta = off # default examine = noscan # default #----------------------------------------------------------------------- # Archive Set Assignments fs = hpcc ... :wq root@mds:~# vi /etc/opt/SUNWsamfs/stager.cmd # stager.cmd logfile = /var/opt/SUNWsamfs/log/stager drives= hp30 1 copysel = 4:3:2:1 fs = hpcc ... :wq root@mds:~#
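As a scripted alternative to the vi session, the fs directive can be rewritten with sed. A minimal sketch on an inline sample line (the directive names follow the example above; run any real edit against a copy of the file first):

```shell
# Rewrite the fs directive from the old to the new family set name.
printf 'fs = hqfs1\n' | sed 's/^fs = hqfs1$/fs = hpcc/'
# prints: fs = hpcc
```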
Check the mcf file for errors by running the sam-fsd command, and correct any errors that are detected.
sam-fsd is an initialization command that reads Oracle HSM configuration files. It will stop if it encounters an error. In the example, sam-fsd does not report any errors:
root@mds:~# sam-fsd
Trace file controls:
sam-amld /var/opt/SUNWsamfs/trace/sam-amld
cust err fatal ipc misc proc date
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds:~#
Rewrite the file-system superblock to reflect the new family set name. Use the command samfsck -R -F family-set-name, where family-set-name is the family set name that you just specified in the /etc/opt/SUNWsamfs/mcf file.
When issued with the -R and -F options, the samfsck command reads the new family set name and the corresponding disk-storage equipment identifiers from the /etc/opt/SUNWsamfs/mcf file. It then rewrites the superblock on the specified disk devices with the new family set name. In the example, we run the command with the new hpcc family set name:
root@mds:~# samfsck -R -F hpcc ... root@mds:~#
Open the /etc/vfstab file in a text editor, and locate the entry for the file system that you are renaming.
In the example, we open the file in the vi text editor. We need to change the hqfs1 file-system entry to use the new name:
root@mds:~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #------------ ------- --------------- ------ ---- ------- ------------ /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - no -
In the /etc/vfstab entry for the file system that you have renamed, change the file-system name in the first column and the mount-point directory name in the third column (if required), and save the file.
In the example, we change the name of the hqfs1 file system to hpcc and change the mount point to match:
root@mds:~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #------------ ------- --------------- ------ ---- ------- ------------ /devices - /devices devfs - no - /proc - /proc proc - no - ... hpcc - /hsm/hpcc samfs - no - :wq root@mds:~#
Create the new mount-point directory for the new file system, if required, and set the access permissions for the mount point.
Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/hpcc mount-point directory and set permissions to 755 (-rwxr-xr-x):
root@mds:~# mkdir /hsm/hpcc root@mds:~# chmod 755 /hsm/hpcc root@mds:~#
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly. Use the command samd config.
root@mds:~# samd config Configuring SAM-FS ... root@mds:~#
If samd config reports errors, correct them and re-issue the command until no errors are found.
Mount the file system.
In the example, we use the new mount point directory:
root@mds:~# mount /hsm/hpcc
Stop here.
When file systems report errors via samu, Oracle HSM Manager, or the /var/adm/sam-log file, follow the procedure below:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
If you are repairing an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
Unmount the affected file system.
You may need to try more than once if you are waiting for archiving to stop. In the example, we unmount file system hqfs1:
root@mds:~# umount /hsm/hqfs1 samfs umount: /hsm/hqfs1: is busy root@mds:~# umount /hsm/hqfs1 root@mds:~#
Repair the file system. Use the command samfsck -F -V family-set-name, where family-set-name is the family set name specified for the file system in the /etc/opt/SUNWsamfs/mcf file.
It is often a good idea to save the repair results to a date-stamped file for later reference and for diagnostic purposes. So in the example, we save the results by piping the samfsck output to the command tee /var/tmp/samfsck-FV.family-set-name.`date '+%Y%m%d.%H%M%S'`:
root@mds:~# samfsck -F -V hqfs1 | tee /var/tmp/samfsck-FV.hqfs1. `date '+%Y%m%d.%H%M%S'` name: /hsm/hqfs1 version: 2A First pass Second pass Third pass NOTICE: ino 2.2, Repaired link count from 8 to 14 Inodes processed: 123392 total data kilobytes = 1965952 total data kilobytes free = 1047680 total meta kilobytes = 131040 total meta kilobytes free = 65568 INFO: FS samma1 repaired: start: May 19, 2014 10:57:13 AM MDT finish: May 19, 2014 10:57:37 AM MDT NOTICE: Reclaimed 70057984 bytes NOTICE: Reclaimed 9519104 meta bytes root@mds:~#
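The date-stamped log name used in the pipeline above can be built separately. A sketch of just the naming pattern (the path and prefix follow the example; the rest is generic shell):

```shell
# Build a date-stamped log-file name like the one passed to tee above.
stamp=$(date '+%Y%m%d.%H%M%S')
logfile="/var/tmp/samfsck-FV.hqfs1.${stamp}"
echo "$logfile"
# e.g. /var/tmp/samfsck-FV.hqfs1.20140519.105713
```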
Remount the file system.
root@mds:~# mount /hsm/hqfs1 root@mds:~#
Stop here.
Before you add devices to an existing file system, you should consider your requirements and your alternatives. Make sure that enlarging the existing file system is the best way to meet growing capacity requirements. If you need more physical storage space to accommodate new projects or user communities, creating one or more new Oracle HSM file systems may be a better choice. Multiple, smaller file systems will generally offer better performance than one much larger file system, and the smaller file systems may be easier to create and maintain.
Once you have decided that you need to enlarge a file system, you can take either of two approaches:
You can add devices to the mounted file system (recommended)
Proceed as follows:
Log in to the file-system server as root.
In the example, we log in to host mds:
root@mds:~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the file system that you need to enlarge.
In the examples, we use the vi editor. We need to enlarge two file systems: the general-purpose samqfsms file system and the high-performance samqfs2ma file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #----------------- --------- --------- --------- ------ --------------- samqfsms 100 ms samqfsms on /dev/dsk/c1t3d0s3 101 md samqfsms on /dev/dsk/c1t4d0s5 102 md samqfsms on samqfs2ma 200 ma samqfs2ma on /dev/dsk/c1t3d0s3 201 mm samqfs2ma on /dev/dsk/c1t3d0s5 202 md samqfs2ma on /dev/dsk/c1t4d0s5 203 md samqfs2ma on
If you are adding devices to a general-purpose ms file system, add additional data/metadata devices to the end of the file-system definition in the mcf file. Then save the file and close the editor.
You can add up to 252 logical devices. In the example, we add two devices, 103 and 104, to the samqfsms file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #----------------- --------- --------- --------- ------ --------------- samqfsms 100 ms samqfsms on /dev/dsk/c1t3d0s3 101 md samqfsms on /dev/dsk/c1t4d0s5 102 md samqfsms on /dev/dsk/c1t3d0s7 103 md samqfsms on /dev/dsk/c1t4d0s7 104 md samqfsms on :wq root@mds:~#
If you are adding devices to a high-performance ma file system, add data devices and one or more mm disk devices to the end of the file-system definition in the mcf file. Then save the file and close the editor.
Always add new devices at the end of the list of existing devices. You can add up to 252 devices, adding metadata devices proportionately as you add data devices. In the example, we add one mm metadata device, 204, and two md data devices, 205 and 206, to the samqfs2ma file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #----------------- --------- --------- --------- ------ --------------- ... samqfs2ma 200 ma samqfs2ma on /dev/dsk/c1t3d0s3 201 mm samqfs2ma on /dev/dsk/c1t3d0s5 202 md samqfs2ma on /dev/dsk/c1t4d0s5 203 md samqfs2ma on /dev/dsk/c1t5d0s6 204 mm samqfs2ma on /dev/dsk/c1t3d0s7 205 md samqfs2ma on /dev/dsk/c1t4d0s7 206 md samqfs2ma on :wq root@mds:~#
Check the mcf file for errors by running the sam-fsd command, and correct any errors that are detected.
sam-fsd is an initialization command that reads Oracle HSM configuration files. It will stop if it encounters an error:
root@mds:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.
In the example below, sam-fsd reports an unspecified problem with a device:
root@mds:~# sam-fsd Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem samqfsms sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed a letter o instead of a 0 (zero) in the equipment name for device 104, the second new md device:
samqfsms 100 ms samqfsms on /dev/dsk/c1t3d0s3 101 md samqfsms on /dev/dsk/c1t4d0s5 102 md samqfsms on /dev/dsk/c1t3d0s7 103 md samqfsms on /dev/dsk/c1t4dos7 104 md samqfsms on ^
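Typos like this one can be caught mechanically: Solaris disk-slice names follow a strict cNtNdNsN pattern. A hedged sketch that flags the bad name from the listing above with grep (the pattern is our assumption about well-formed slice names, not an Oracle HSM check):

```shell
# Flag device paths that do not match the cNtNdNsN slice-name pattern.
for dev in /dev/dsk/c1t4d0s7 /dev/dsk/c1t4dos7; do
    echo "$dev" | grep -Eq '^/dev/dsk/c[0-9]+t[0-9]+d[0-9]+s[0-9]+$' \
        || echo "suspect device name: $dev"
done
# prints: suspect device name: /dev/dsk/c1t4dos7
```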
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step.
The example is a partial listing of error-free output:
root@mds:~# sam-fsd
Trace file controls:
sam-amld /var/opt/SUNWsamfs/trace/sam-amld
cust err fatal ipc misc proc date
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds:~#
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly. Use the command samd config.
root@mds:~# samd config Configuring SAM-FS root@mds:~#
Make sure that samd config has updated the Oracle HSM file-system configuration to include the new devices. Use the command samcmd f.
The devices should be in the off state. In the example, samcmd f shows the new devices, 103 and 104, and both are off:
root@mds:~# samcmd f File systems samcmd 5.4 16:57:35 Feb 27 2014 samcmd on mds ty eq state device_name status high low mountpoint server ms 100 on samqfsms m----2----- 80% 70% /samqfsms md 101 on /dev/dsk/c1t3d0s3 md 102 on /dev/dsk/c1t4d0s5 md 103 off /dev/dsk/c1t3d0s7 md 104 off /dev/dsk/c1t4d0s7 root@mds:~#
Enable the newly added devices. For each device, use the command samcmd add equipment-number, where equipment-number is the equipment ordinal number assigned to the device in the /etc/opt/SUNWsamfs/mcf file.
In the example, we enable the new devices, 103 and 104:
root@mds:~# samcmd add 103
root@mds:~# samcmd add 104
If you are working on a shared file system, finish configuring the new devices using the procedure for shared file systems.
If you are working on an unshared, standalone file system, make sure that the devices were added and are ready for use by the file system. Use the command samcmd
m
, and check the results.
When the device is in the on
state, it has been added successfully and is ready to use. In the example, we have successfully added devices 103
and 104
:
root@mds:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on mds
ty      eq  status       use state ord capacity  free      ra  part high low
ms      100 m----2-----  13% on        3.840G    3.588G    1M  16   80%  70%
 md     101              31% on    0   959.938M  834.250M
 md     102              13% on    1   959.938M  834.250M
 md     103               0% on    2   959.938M  959.938M
 md     104               0% on    3   959.938M  959.938M
root@mds:~#
Stop here.
When you add devices to a shared file system, you must carry out a few more steps before the devices are configured on all file-system hosts. Proceed as follows:
Log in to the file-system metadata server host as root
.
In the example, the metadata server host is named mds1
:
root@mds1:~#
Make sure that the new devices were added to the metadata server. Use the command samcmd
m
.
When the device is in the unavail
state, it has been added successfully but is not yet ready for use. In the example, we have successfully added devices 103
and 104
:
root@mds1:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on metadata-server
ty      eq  status       use state   ord capacity  free      ra  part high low
ms      100 m----2-----  13% on          3.840G    3.588G    1M  16   80%  70%
 md     101              31% on      0   959.938M  834.250M
 md     102              13% on      1   959.938M  834.250M
 md     103               0% unavail 2   959.938M  959.938M
 md     104               0% unavail 3   959.938M  959.938M
root@mds1:~#
Log in to each file-system client host as root
.
Remember to include potential metadata servers, since they are also clients. In the example, we need to log in to a potential metadata server, named mds2
, and two clients, clnt1
and clnt2L
(a Linux host). So we open three terminal windows and use secure shell (ssh
):
root@mds1:~# ssh root@mds2
Password:
root@mds2:~#

root@mds1:~# ssh root@clnt1
Password:
root@clnt1:~#

root@mds1:~# ssh root@clnt2L
Password:
[root@clnt2L ~]#
If the client is a Linux client, unmount the shared file system.
[root@clnt2L ~]# umount /hsm/shrfs1
[root@clnt2L ~]#
On each client, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and add the new devices to the end of the file system definition, just as you did on the server.
In the example, we add devices 103
and 104
to the mcf
file on clnt1
:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family   Device Additional
# Identifier          Ordinal   Type      Set      State  Parameters
#-----------------    --------- --------- -------- ------ ----------
shrfs1                100       ms        shrfs1   on     shared
/dev/dsk/c1t3d0s3     101       md        shrfs1   on
/dev/dsk/c1t4d0s5     102       md        shrfs1   on
/dev/dsk/c1t3d0s7     103       md        shrfs1   on
/dev/dsk/c1t4d0s7     104       md        shrfs1   on
:wq
root@clnt1:~#
On each client, check the mcf
file for errors by running the sam-fsd
command, and correct any that are detected.
root@clnt1:~# sam-fsd
...
root@clnt1:~#
[root@clnt2L ~]# sam-fsd
...
[root@clnt2L ~]#
On each client, tell the Oracle HSM software to re-read the mcf
file and reconfigure itself accordingly:
root@clnt1:~# samd config
...
root@clnt1:~#
[root@clnt2L ~]# samd config
...
[root@clnt2L ~]#
If the client is a Linux client, mount the shared file system.
[root@clnt2L ~]# mount /hsm/shrfs1
[root@clnt2L ~]#
Once all clients have been configured, return to the metadata server, and enable storage allocation on the new devices. For each device, use the command samcmd
alloc
equipment-number
, where equipment-number
is the equipment ordinal number assigned to the device in the /etc/opt/SUNWsamfs/mcf
file.
In the example, we enable storage allocation on devices 103
and 104
:
root@mds1:~# samcmd alloc 103
root@mds1:~# samcmd alloc 104
Finally, make sure that the devices are ready for use by the file system. Use the command samcmd
m
, and check the results.
When the device is in the on
state, it has been added successfully and is ready to use. In the example, we have successfully added devices 103
and 104
:
root@mds1:~# samcmd m
Mass storage status samcmd 5.4 17:17:08 Feb 27 2014
samcmd on metadata-server
ty      eq  status       use state ord capacity  free      ra  part high low
ms      100 m----2-----  13% on        3.840G    3.588G    1M  16   80%  70%
 md     101              31% on    0   959.938M  834.250M
 md     102              13% on    1   959.938M  834.250M
 md     103               0% on    2   959.938M  959.938M
 md     104               0% on    3   959.938M  959.938M
root@mds1:~#
Stop here.
Proceed as follows:
Log in to the file-system server host as root
.
In the example, the metadata server host is named mds
:
root@mds:~#
If you are adding devices to an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
Unmount the file system.
Do not proceed until you have unmounted the file system. In the example, we unmount file system hqfs1
:
root@mds:~# umount /hsm/hqfs1
root@mds:~#
Open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and locate the file system that you need to enlarge.
In the example, we use the vi
editor. We need to enlarge the hqfs1
file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment Equipment Family  Device Additional
# Identifier           Ordinal   Type      Set     State  Parameters
#------------------    --------- --------- ------- ------ ---------
hqfs1                  100       ms        hqfs1   on
/dev/dsk/c1t3d0s3      101       md        hqfs1   on
/dev/dsk/c1t4d0s5      102       md        hqfs1   on
If you are adding devices to a high-performance ma
file system, you must add metadata storage along with the data storage. Add enough additional mm
disk devices to store the metadata for the data devices that you add. Then save the file, and close the editor.
You can add up to 252 logical devices. In the example, we add one mm
metadata device and two data devices to the hqfs1
file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment Equipment Family  Device Additional
# Identifier           Ordinal   Type      Set     State  Parameters
#------------------    --------- --------- ------- ------ ---------
hqfs1                  200       ma        hqfs1   on
/dev/dsk/c1t3d0s3      201       mm        hqfs1   on
/dev/dsk/c1t5d0s6      204       mm        hqfs1   on
/dev/dsk/c1t3d0s5      202       md        hqfs1   on
/dev/dsk/c1t4d0s5      203       md        hqfs1   on
/dev/dsk/c1t3d0s7      205       md        hqfs1   on
/dev/dsk/c1t4d0s7      206       md        hqfs1   on
:wq
root@mds:~#
If you are adding devices to a general-purpose ms
file system, add additional data/metadata devices to the file system definition in the mcf
file. Then save the file, and close the editor.
You can add up to 252 logical devices. In the example, we add two devices to the hqfs1
file system:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment Equipment Family  Device Additional
# Identifier           Ordinal   Type      Set     State  Parameters
#------------------    --------- --------- ------- ------ ---------
hqfs1                  100       ms        hqfs1   on
/dev/dsk/c1t3d0s3      101       md        hqfs1   on
/dev/dsk/c1t4d0s5      102       md        hqfs1   on
/dev/dsk/c1t3d0s7      103       md        hqfs1   on
/dev/dsk/c1t4d0s7      104       md        hqfs1   on
:wq
root@mds:~#
Check the mcf
file for errors by running the sam-fsd
command, and correct any that are detected.
The sam-fsd
command is an initialization command that reads the Oracle HSM configuration files. It stops if it encounters an error:
root@mds:~# sam-fsd
Trace file controls:
sam-amld /var/opt/SUNWsamfs/trace/sam-amld
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds:~#
Tell the Oracle HSM software to re-read the mcf
file and reconfigure itself accordingly:
root@mds:~# samd config
...
root@mds:~#
Incorporate the new devices into file system. Use the command samgrowfs
family-set-name
, where family-set-name
is the family set name specified for the file system in the /etc/opt/SUNWsamfs/mcf
file.
In the example, we grow the hqfs1
file system:
root@mds:~# samgrowfs hqfs1
...
root@mds:~#
Remount the file system.
root@mds:~# mount /hsm/hqfs1
root@mds:~#
If you added devices to an archiving file system, restart the Oracle HSM library-management daemon. Use the command samd
start
.
root@mds:~# samd start
...
root@mds:~#
If you neglected to unmount the file system before making changes and if, consequently, the file system will not mount, restore the original mcf
file by deleting references to the added devices. Then run samd
config
to restore the configuration, unmount the file system, and start over.
Stop here.
When required, you can remove data devices from mounted Oracle HSM file systems. Typically this becomes necessary when you need to replace a failed unit or when you need to free up under-utilized devices for other uses. There are, however, some limitations.
You can only remove data devices. You cannot remove any devices used to hold metadata, since metadata defines the organization of the file system itself. This means that you can remove md
, mr
, and striped-group devices from high-performance ma
file systems only. You cannot remove mm
metadata devices from ma
file systems. Nor can you remove md
devices from general purpose ms
file systems, since these devices store both data and metadata.
To remove devices, you must also have somewhere to move any valid data files that reside on the target device. This means that you cannot remove all the devices. One device must always remain available in the file system and it must have enough free capacity to hold all files residing on the devices that you remove. So, if you need to remove a striped group, you must have another available striped group configured with an identical number of member devices.
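The capacity requirement above can be sanity-checked before you start. The following is a minimal sketch, not an Oracle HSM command: it runs samcmd m-style output through awk and compares the space in use on a hypothetical removal candidate (equipment 104) with the free space left on the other data devices. The sample output is embedded as a here-document so the sketch is self-contained; on a live system you would pipe the real samcmd m output into the same awk script. It assumes every md row reports capacity and free space in the same unit.

```shell
# Hypothetical pre-check, not an Oracle HSM command: compare the space in
# use on the removal candidate (eq 104) with the free space left on the
# other data devices. Sample 'samcmd m' output is embedded below; on a
# live system, pipe the real 'samcmd m' into the same awk script.
verdict=$(awk -v target=104 '
  $1 == "md" {                      # data-device rows only
    cap = $6 + 0; free = $7 + 0     # capacity and free columns
    if ($2 == target) needed = cap - free
    else available += free
  }
  END {
    msg = (available >= needed) ? "OK to remove" : "not enough space"
    print msg
  }' <<'EOF'
ty  eq  status       use state ord capacity  free      ra part high low
ms  100 m----2-----  27% on        3.691G    2.628G    1M 16   80%  70%
md  101              27% on    0   959.938M  703.188M
md  102              28% on    1   899.938M  646.625M
md  103              13% on    2   959.938M  834.250M
md  104              13% on    3   959.938M  834.250M
EOF
)
echo "$verdict"
```

If the remaining free space cannot absorb the data on the candidate device, free space or choose a different device before proceeding.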
To remove devices, proceed as follows:
Carry out the following tasks:
Log in to the file-system server host as root
.
In the example, the metadata server host is named mds
:
root@mds:~#
Create a samexplorer
report. Use the command samexplorer
path/hostname.YYYYMMDD.hhmmz.tar.gz
, where:
path
is the path to the chosen directory.
hostname
is the name of the Oracle HSM file system host.
YYYYMMDD.hhmmz
is a date and time stamp.
By default, the file is called /tmp/SAMreport.hostname.YYYYMMDD.hhmmz.tar.gz
. In the example, we use the directory /zfs1/hsmcfg/
, where /zfs1
is a file system that has no components in common with the Oracle HSM file system:
root@mds:~# samexplorer /zfs1/hsmcfg/SAMreport.mds.2016013.1659MST.tar.gz

     Report name:     /zfs1/hsmcfg/SAMreport.mds.2016013.1659MST.tar.gz
     Lines per file:  1000
     Output format:   tar.gz (default) Use -u for unarchived/uncompressed.

     Please wait.............................................
     Please wait.............................................
     Please wait......................................

     The following files should now be ftp'ed to your support provider
     as ftp type binary.

     /zfs1/hsmcfg/SAMreport.mds.2016013.1659MST.tar.gz
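The date-and-time stamp in the report name can be generated rather than typed. A minimal sketch, assuming the host name (mds) and directory (/zfs1/hsmcfg/) used in the example above:

```shell
# Build a report name of the documented form
# SAMreport.hostname.YYYYMMDD.hhmmz.tar.gz; the host name and directory
# here are the values from the example above, not required values.
host=mds
stamp=$(date '+%Y%m%d.%H%M%Z')
report="/zfs1/hsmcfg/SAMreport.${host}.${stamp}.tar.gz"
echo "$report"
# samexplorer "$report"    # run on the file-system host to generate it
```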
Log in to the file-system server host as root
.
In the example, the metadata server host is named mds
:
root@mds:~#
Select the location where the recovery point file will be stored. The selected location must share no devices with the file system that you are backing up and must have room to store an unusually large file.
The devices that we intend to remove may contain files that have not been archived. Since such files exist only as single copies, we will have to create a recovery point file that stores at least some data as well as metadata. This can substantially increase the size of the recovery point file.
In the example, we create a subdirectory, tmp/
, in a file system that has no components in common with the Oracle HSM file system, /zfs1
:
root@mds:~# mkdir /zfs1/tmp/
root@mds:~#
Change to the file system's root directory.
In the example, we change to the mount-point directory /hsm/hqfs1
:
root@mds:~# cd /hsm/hqfs1
root@mds:~#
Back up the file-system metadata and any unarchived data. Use the command samfsdump
-f
-u
recovery-point
, where recovery-point
is the path and file name of the finished recovery point file.
Note that the -u
option adds the data portion of unarchived files to the recovery point. This can greatly increase the size of the file.
In the example, we create a recovery point file for the hqfs1
file system called hqfs1-20140313.025215
in the directory /zfs1/hsmcfg/
. We check the result using the command ls
-l
:
root@mds:~# cd /hsm/hqfs1
root@mds:~# samfsdump -f /zfs1/hsmcfg/hqfs1-`date '+%Y%m%d.%H%M%S'` -T /hsm/hqfs1
samfsdump statistics:
    Files:                          10010
    Directories:                    2
    Symbolic links:                 0
    Resource files:                 0
    Files as members of hard links: 0
    Files as first hard link:       0
    File segments:                  0
    File archives:                  10010
    Damaged files:                  0
    Files with data:                0
    File warnings:                  0
    Errors:                         0
    Unprocessed dirs:               0
    File data bytes:                0
root@mds:~# ls -l /zfs1/hsmcfg/hqfs1*
-rw-r--r--   1 root     other    5376517 Mar 13 02:52 /zfs1/hsmcfg/hqfs1-20140313.025215
root@mds:~#
Now remove devices from the mounted, high-performance file system.
You must remove devices one at a time. For each device, proceed as follows:
Log in to the file-system server host as root
.
In the example, the metadata server host is named mds
:
root@mds:~#
Open the /etc/opt/SUNWsamfs/mcf
file, and note the equipment ordinal number for the device that you need to remove.
In the example, we use the vi
editor. We need to remove device /dev/dsk/c1t4d0s7
from the equipment list for the hqfs1
file system. The equipment ordinal number is 104
:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family  Device Additional
# Identifier          Ordinal   Type      Set     State  Parameters
#-----------------    --------- --------- ------- ------ --------------
hqfs1                 100       ms        hqfs1   on
/dev/dsk/c1t3d0s3     101       md        hqfs1   on
/dev/dsk/c1t4d0s5     102       md        hqfs1   on
/dev/dsk/c1t3d0s7     103       md        hqfs1   on
/dev/dsk/c1t4d0s7     104       md        hqfs1   on
:q
root@mds:~#
Before you try to remove a device, make sure that the remaining devices in the file system can accept any files that have to be moved from the device that you intend to delete.
Make sure that the remaining devices have adequate capacity.
If the device is a striped group, make sure that the file system contains another striped group with an equivalent configuration.
For example, if the striped group that you plan to remove has four equipment numbers, you must have another striped group that is in the ON state and has four equipment numbers.
Make sure that the file system that you plan to modify has a version 2A superblock. Use the command samfsinfo
filesystem-name
, where filesystem-name
is the name of the file system.
In the example, file system hqfs1
uses a version 2A
superblock:
root@mds:~# /opt/SUNWsamfs/sbin/samfsinfo hqfs1
samfsinfo: filesystem hqfs1 is mounted.
name:     hqfs1       version:     2A
time:     Tuesday, June 28, 2011 6:07:36 AM MDT
feature:  Aligned Maps
count:    4
...
root@mds:~#
If the file system does not have a version 2A superblock, stop here. You cannot remove devices while this file system is mounted.
If you are removing devices from an Oracle HSM archiving file system, release all archived files from the disk device that you are removing. Use the command samcmd
release
equipment-number
, where equipment-number
is the equipment ordinal number that identifies the device in the /etc/opt/SUNWsamfs/mcf
file.
If the device is a striped group, provide the equipment number of any device in the group.
The Oracle HSM software changes the state of the specified device to noalloc
(no allocations) so that no new files are stored on it, and starts releasing previously archived files. Once the device contains no unarchived files, the software removes the device from the file system configuration and changes its state to off
.
In the example, we release files from device 104
in the archiving file system hqfs1
:
root@mds:~# samcmd release 104
If you are removing a device from an Oracle HSM non-archiving file system, move all remaining valid files off the disk device that you are removing. Use the command samcmd
remove
equipment-number
, where equipment-number
is the equipment ordinal number that identifies the device in the /etc/opt/SUNWsamfs/mcf
file.
The Oracle HSM software changes the state of the specified device to noalloc
(no allocations) so that no new files are stored on it, and starts moving files that contain valid data to the remaining devices in the file system. When all files have been moved, the software removes the device from the file system configuration and changes its state to off
.
In the example, we move files off of device 104
:
root@mds:~# samcmd remove 104
Monitor the progress of the selected process, samcmd
remove
or samcmd
release
. Use the command samcmd
m
and/or watch the log file and the /var/opt/SUNWsamfs/trace/sam-shrink
file.
The release
process completes fairly quickly if all files have been archived, because it merely releases space associated with files that have been copied to archival media. Depending on the amount of data and the number of files, the remove
process takes considerably longer because it must move files between disk devices.
root@mds:~# samcmd m
ty      eq  status       use state   ord capacity  free      ra  part high low
ms      100 m----2-----  27% on          3.691G    2.628G    1M  16   80%  70%
 md     101              27% on      0   959.938M  703.188M
 md     102              28% on      1   899.938M  646.625M
 md     103              13% on      2   959.938M  834.250M
 md     104               0% noalloc 3   959.938M  959.938M
root@mds:~#
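State checks like the one above can also be scripted. In the sketch below, samcmd is stubbed with sample output so the snippet runs anywhere; on a live metadata server you would delete the stub and query the real command. Equipment number 104 is the removal candidate from the example.

```shell
# 'samcmd' is stubbed here with sample output so the sketch is
# self-contained; delete the stub on a live metadata server.
samcmd() {
cat <<'EOF'
ty  eq  status       use state   ord capacity  free
ms  100 m----2-----  27% on          3.691G    2.628G
md  101              27% on      0   959.938M  703.188M
md  104               0% noalloc 3   959.938M  959.938M
EOF
}

# Report the state column for equipment number 104.
state=$(samcmd m | awk '$1 == "md" && $2 == "104" { print $4 }')
echo "device 104 is $state"

# On a live system, a polling loop might wait for the 'off' state:
# while [ "$(samcmd m | awk '$1 == "md" && $2 == "104" {print $4}')" != off ]
# do sleep 60; done
```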
If you are using samcmd
release
and the target device does not enter the off
state, there are unarchived files on the device. Wait for the archiver to run and archiving to complete. Then use the command samcmd
release
again. You can check on the progress of archiving by using the command samcmd
a
.
The release
process cannot free the disk space until unarchived files are archived.
root@mds:~# samcmd a
Archiver status samcmd 5.4 14:12:14 Mar 1 2014
samcmd on mds
sam-archiverd:  Waiting for resources
sam-arfind: hqfs1 mounted at /hsm/hqfs1
Files waiting to start    4    schedule    2    archiving    2
root@mds:~#
If samcmd
release
fails because one or more unarchived files cannot be archived, move the unarchived files to another device. Use the command samcmd
remove
equipment-number
, just as you would when removing devices from a non-archiving, standalone file system.
In the example, we move files off of device 104
:
root@mds:~# samcmd remove 104
Once the device state has been changed to off
, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, locate the file system, and update the equipment list to reflect the changes. Save the file and close the editor.
In the example, samcmd
m
shows that 104
is off
. So we use the vi
editor to open the mcf
file. We remove the entry for device 104
from the equipment list for the hqfs1
file system and save our changes:
root@mds:~# samcmd m
ty      eq  status       use state ord capacity  free      ra  part high low
ms      100 m----2-----  27% on        3.691G    2.628G    1M  16   80%  70%
 md     101              27% on    0   959.938M  703.188M
 md     102              28% on    1   899.938M  646.625M
 md     103              13% on    2   959.938M  834.250M
 md     104               0% off   3   959.938M  959.938M
root@mds:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family  Device Additional
# Identifier          Ordinal   Type      Set     State  Parameters
#-----------------    --------- --------- ------- ------ ---------
hqfs1                 100       ms        hqfs1   on
/dev/dsk/c1t3d0s3     101       md        hqfs1   on
/dev/dsk/c1t4d0s5     102       md        hqfs1   on
/dev/dsk/c1t3d0s7     103       md        hqfs1   on
:wq
root@mds:~#
Check the modified mcf
file for errors by running the sam-fsd
command, and correct any errors that are detected.
The sam-fsd
command will stop if it encounters an error. In the example, it reports no errors:
root@mds:~# sam-fsd
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds:~#
Tell the Oracle HSM software to re-read the mcf
file and reconfigure itself accordingly:
root@mds:~# samd config
Stop here.
This section outlines the following tasks:
When you mount or unmount a shared file system, the order in which you mount or unmount the metadata server and the clients is important.
For failover purposes, the mount options should be the same on the metadata server and all potential metadata servers. For example, you can create a samfs.cmd
file that contains the mount options and copy that file to all of the hosts.
For more information about mounting shared file systems, see the mount_samfs
man page.
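The mount options described above can be kept consistent by pushing one copy of the samfs.cmd file from the metadata server to every other host. A minimal sketch, using the host names from this chapter's examples; the echo makes the loop a dry run that only prints each copy command, so remove it to actually perform the copies:

```shell
# Host names (mds2, clnt1, clnt2) are this chapter's examples; substitute
# your own. 'echo' makes this a dry run that prints each copy command.
cmds=$(for host in mds2 clnt1 clnt2; do
  echo "scp /etc/opt/SUNWsamfs/samfs.cmd root@${host}:/etc/opt/SUNWsamfs/"
done)
printf '%s\n' "$cmds"
```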
Log in to the Oracle HSM metadata server and client hosts as root
.
In the example, we log in to the metadata server host for the shrfs1
file system, mds
. Then we open a terminal window for each client, clnt1
and clnt2
. We use ssh
(Secure Shell) to log in:
root@mds:~# ssh root@clnt1
Password:
root@clnt1:~#

root@mds:~# ssh root@clnt2
Password:
root@clnt2:~#
If the file system has an entry in the Solaris /etc/vfstab
file, mount the shared file system on the metadata server host using the command mount
mountpoint
, where mountpoint
is the mount point directory on the host's root file system.
Always mount the file system on the metadata server host first, before mounting the file system on clients.
In the example, the shrfs1
file system has the following entry in the /etc/vfstab
file:
shrfs1 - /hsm/shrfs1 samfs - no shared
So we can mount the file system by supplying only the mount point parameter:
root@mds:~# mount /hsm/shrfs1
root@mds:~#
If the file system does not have an entry in the Solaris /etc/vfstab
file, mount the shared file system on the metadata server host using the command mount
-F
samfs
-o
shared
mountpoint
, where mountpoint
is the mount point directory on the host's root file system.
Always mount the file system on the metadata server host first, before mounting the file system on clients.
In the example, the shrfs1
file system has no entry in the /etc/vfstab
file:
root@mds:~# mount -F samfs -o shared /hsm/shrfs1
root@mds:~#
If the file system has an entry in the Solaris /etc/vfstab
file, mount the shared file system on each client host using the command mount
mountpoint
, where mountpoint
is the mount point directory on the host's root file system.
You can mount the file system on the client hosts in any order.
root@clnt1:~# mount /hsm/shrfs1
root@clnt1:~#
root@clnt2:~# mount /hsm/shrfs1
root@clnt2:~#
If the file system does not have an entry in the Solaris /etc/vfstab
file, mount the shared file system on each client host using the command mount
-F
samfs
-o
shared
mountpoint
, where mountpoint
is the mount point directory on the host's root file system.
You can mount the file system on the client hosts in any order.
root@clnt1:~# mount -F samfs -o shared /hsm/shrfs1
root@clnt1:~#
root@clnt2:~# mount -F samfs -o shared /hsm/shrfs1
root@clnt2:~#
Stop here.
Log in to the Oracle HSM metadata server and client hosts as root
.
In the example, we log in to the metadata server host for the shrfs1
file system, mds
. Then we open a terminal window for each client, clnt1
and clnt2
and use ssh
(Secure Shell) to log in:
root@mds:~# ssh root@clnt1
Password:
root@clnt1:~#

root@mds:~# ssh root@clnt2
Password:
root@clnt2:~#
If the file system is shared through NFS or SAMBA, unshare the file system before you unmount it. On the metadata server, use the command unshare
mount-point
, where mount-point
is the mount point directory of the Oracle HSM file system.
root@mds:~# unshare /hsm/shrfs1
root@mds:~#
Unmount the Oracle HSM shared file system from each client. Use the command umount
mount-point
, where mount-point
is the mount point directory of the Oracle HSM file system.
See the umount_samfs
man page for further details. In the example, we unmount /hsm/shrfs1
from our two clients, clnt1
and clnt2
:
root@clnt1:~# umount /hsm/shrfs1
root@clnt1:~# exit
root@mds:~#
root@clnt2:~# umount /hsm/shrfs1
root@clnt2:~# exit
root@mds:~#
Unmount the Oracle HSM shared file system from the metadata server. Use the command umount
-o
await_clients=
interval
mount-point
, where mount-point
is the mount point directory of the Oracle HSM file system and interval
is the number of seconds by which the -o
await_clients
option delays execution.
When the umount
command is issued on the metadata server of an Oracle HSM shared file system, the -o
await_clients
option makes umount
wait the specified number of seconds so that clients have time to unmount the share. It has no effect if you unmount an unshared file system or issue the command on an Oracle HSM client. See the umount_samfs
man page for further details.
In the example, we unmount the /hsm/shrfs1
file system from the server, allowing 60
seconds for clients to unmount:
root@mds:~# umount -o await_clients=60 /hsm/shrfs1
root@mds:~#
Stop here.
This section provides instructions for configuring additional hosts as clients of a shared file system and for de-configuring existing clients. It covers the following tasks:
There are three parts to the process of adding a client host to a shared file system:
First, you add the host information to the shared file system configuration.
Then you configure the shared file system on the host, using the procedure specific to the host operating system, Solaris or Linux.
Finally, you mount the shared file system on the host, using the procedure specific to the host operating system, Solaris or Linux.
Log in to the Oracle HSM metadata server as root
.
In the example, the metadata server host is mds1
:
root@mds1:~#
Back up the file /etc/opt/SUNWsamfs/hosts.
filesystem
, where filesystem
is the name of the file system to which you are adding the client host.
In the example, the Oracle HSM shared file system is named shrfs1
:
root@mds1:~# cp /etc/opt/SUNWsamfs/hosts.shrfs1 /etc/opt/SUNWsamfs/hosts.shrfs1.bak
If the shared file system is mounted, run the command samsharefs
filesystem
from the active metadata server, redirecting output to a file, /etc/opt/SUNWsamfs/hosts.
filesystem
, where filesystem
is the name of the file system to which you are adding the client host.
The samsharefs
command displays the host configuration for an Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file:
root@mds1:~# samsharefs shrfs1 > /etc/opt/SUNWsamfs/hosts.shrfs1
If the shared file system is not mounted, run the command samsharefs
-R
filesystem
from an active or potential metadata server, redirecting output to the file /etc/opt/SUNWsamfs/hosts.
filesystem
, where filesystem
is the name of the file system to which you are adding the client host.
The samsharefs
-R
command can only be run from an active or potential metadata server (see the samsharefs
man page for more details). The samsharefs
command displays the host configuration for an Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file. In the example, we run the command from the metadata server mds1
:
root@mds1:~# samsharefs -R shrfs1 > /etc/opt/SUNWsamfs/hosts.shrfs1
Open the newly created hosts file in a text editor.
In the example, we use the vi
editor. The host configuration includes the active metadata server, mds1
, one client that is also a potential metadata server, mds2
, and two other clients, clnt1
and clnt2
:
root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
#                                           Server  On/ Additional
#Host Name          Network Interface       Ordinal Off Parameters
#------------------ ----------------------  ------- --- ----------
mds1                10.79.213.117           1       0   server
mds2                10.79.213.217           2       0
clnt1               10.79.213.133           0       0
clnt2               10.79.213.47            0       0
In the hosts file, add a line for the new client host, save the file, and close the editor.
In the example, we add an entry for the host clnt3
:
root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
#                                           Server  On/ Additional
#Host Name          Network Interface       Ordinal Off Parameters
#------------------ ----------------------  ------- --- ----------
mds1                10.79.213.117           1       0   server
mds2                10.79.213.217           2       0
clnt1               10.79.213.133           0       0
clnt2               10.79.213.47            0       0
clnt3               10.79.213.49            0       0
:wq
root@mds1:~#
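A quick sanity check on the edited hosts file is to confirm that exactly one non-comment entry carries the server keyword. The sketch below is not an Oracle HSM tool; the file content is embedded as a here-document so it runs anywhere, and on a live system you would run the same awk script against /etc/opt/SUNWsamfs/hosts.shrfs1 instead.

```shell
# Hypothetical sanity check: count non-comment lines whose fifth field is
# 'server'; there must be exactly one. Sample hosts-file content follows.
servers=$(awk '!/^#/ && $5 == "server" { n++ } END { print n + 0 }' <<'EOF'
mds1     10.79.213.117   1   0   server
mds2     10.79.213.217   2   0
clnt1    10.79.213.133   0   0
clnt2    10.79.213.47    0   0
clnt3    10.79.213.49    0   0
EOF
)
echo "server entries: $servers"
```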
If the file system is mounted, update the file-system from the active metadata server. Use the command samsharefs
-u
filesystem
, where filesystem
is the name of the file system to which you are adding the client host.
The samsharefs
command re-reads the revised hosts file and updates the configuration:
root@mds1:~# samsharefs -u shrfs1
If the file system is not mounted, update the file-system from an active or potential metadata server. Use the command samsharefs
-R
-u
filesystem
, where filesystem
is the name of the file system to which you are adding the client host.
The samsharefs
command re-reads the revised hosts file and updates the configuration:
root@mds1:~# samsharefs -R -u shrfs1
If you are adding a Solaris host, configure the shared file system as a Solaris client.
If you are adding a Linux host, configure the shared file system as a Linux client.
On the shared file-system client, log in as root
.
In the example, the Oracle HSM shared file system is shrfs1
, and the client host is clnt1
:
root@clnt1:~#
In a terminal window, retrieve the configuration information for the shared file system. Use the command samfsconfig
device-path
, where device-path
is the location where the command should start to search for file-system disk devices (such as /dev/dsk/*
or /dev/zvol/dsk/rpool/*
).
root@clnt1:~# samfsconfig /dev/dsk/*
If the host has access to the metadata devices for the file system and is thus suitable for use as a potential metadata server, the samfsconfig
output closely resembles the mcf
file that you created on the file-system metadata server.
In our example, host clnt1
has access to the metadata devices (equipment type mm
), so the command output shows the same equipment listed in the mcf
file on the server, mds
. Only the host-assigned device controller numbers differ:
root@clnt1:~# samfsconfig /dev/dsk/*
# Family Set 'shrfs1' Created Thu Feb 21 07:17:00 2013
# Generation 0 Eq count 4 Eq meta count 1
shrfs1                300     ma      shrfs1   -
/dev/dsk/c1t0d0s0     301     mm      shrfs1   -
/dev/dsk/c1t3d0s0     302     mr      shrfs1   -
/dev/dsk/c1t3d0s1     303     mr      shrfs1   -
If the host does not have access to the metadata devices for the file system, the samfsconfig
command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal
0
—the metadata device—under Missing
Slices
, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, host clnt2
has access to the data devices only. So the samfsconfig
output looks like this:
root@clnt2:~# samfsconfig /dev/dsk/*
# Family Set 'shrfs1' Created Thu Feb 21 07:17:00 2013
# Missing slices
# Ordinal 0
# /dev/dsk/c4t3d0s0    302     mr      shrfs1   -
# /dev/dsk/c4t3d0s1    303     mr      shrfs1   -
Copy the entries for the shared file system from the samfsconfig
output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and paste the copied entries into the file.
In our first example, the host, clnt1
, has access to the metadata devices for the file system, so the mcf
file starts out looking like this:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family   Device Additional
# Identifier          Ordinal   Type      Set      State  Parameters
#----------------     --------- --------- -------- ------ ---------------
shrfs1                300       ma        shrfs1   -
/dev/dsk/c1t0d0s0     301       mm        shrfs1   -
/dev/dsk/c1t3d0s0     302       mr        shrfs1   -
/dev/dsk/c1t3d0s1     303       mr        shrfs1   -
In the second example, the host, clnt2
, does not have access to the metadata devices for the file system, so the mcf
file starts out looking like this:
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment Equipment Family   Device Additional
# Identifier           Ordinal   Type      Set      State  Parameters
#----------------      --------- --------- -------- ------ ---------------
# /dev/dsk/c4t3d0s0    302       mr        shrfs1   -
# /dev/dsk/c4t3d0s1    303       mr        shrfs1   -
If the host has access to the metadata devices for the file system, add the shared
parameter to the Additional Parameters
field of the entry for the shared file system.
In the first example, the host, clnt1
, has access to the metadata:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment Equipment Family   Device Additional
# Identifier          Ordinal   Type      Set      State  Parameters
#----------------     --------- --------- -------- ------ ---------------
shrfs1                300       ma        shrfs1   -      shared
/dev/dsk/c1t0d0s0     301       mm        shrfs1   -
/dev/dsk/c1t3d0s0     302       mr        shrfs1   -
/dev/dsk/c1t3d0s1     303       mr        shrfs1   -
If the host does not have access to the metadata devices for the file system, add a line for the shared file system and include the shared
parameter:
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
shrfs1                300        ma         shrfs1     -       shared
# /dev/dsk/c4t3d0s0   302        mr         shrfs1     -
# /dev/dsk/c4t3d0s1   303        mr         shrfs1     -
If the host does not have access to the metadata devices for the file system, add a line for the metadata device. Set the Equipment
Identifier
field to nodev
(no device) and set the remaining fields to exactly the same values as they have on the metadata server:
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
# /dev/dsk/c4t3d0s0   302        mr         shrfs1     -
# /dev/dsk/c4t3d0s1   303        mr         shrfs1     -
If the host does not have access to the metadata devices for the file system, uncomment the entries for the data devices.
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
/dev/dsk/c4t3d0s0     302        mr         shrfs1     -
/dev/dsk/c4t3d0s1     303        mr         shrfs1     -
Make sure that the Device State
field is set to on
for all devices, save the mcf
file, and close the editor.
In our first example, the host, clnt1
, has access to the metadata devices for the file system, so the mcf
file ends up looking like this:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
shrfs1                300        ma         shrfs1     on      shared
/dev/dsk/c1t0d0s0     301        mm         shrfs1     on
/dev/dsk/c1t3d0s0     302        mr         shrfs1     on
/dev/dsk/c1t3d0s1     303        mr         shrfs1     on
:wq
root@clnt1:~#
In the second example, the host, clnt2
, does not have access to the metadata devices for the file system, so the mcf
file ends up looking like this:
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#----------------     ---------  ---------  ---------  ------  ---------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
/dev/dsk/c4t3d0s0     302        mr         shrfs1     on
/dev/dsk/c4t3d0s1     303        mr         shrfs1     on
:wq
root@clnt2:~#
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd
command is an initialization command that reads the Oracle HSM configuration files and stops if it encounters an error. In the example, we check the mcf
file on clnt1
and find no errors:
root@clnt1:~# sam-fsd
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@clnt1:~#
On the shared file-system host, log in as root
.
In the example, the Oracle HSM shared file system is shrfs1
, and the host is a client named clnt1
:
root@clnt1:~#
Back up the operating system's /etc/vfstab
file.
root@clnt1:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab
file in a text editor, and add a line for the shared file system.
In the example, we open the file in the vi
text editor and add a line for the shrfs1
family set device:
root@clnt1:~# vi /etc/vfstab
#File
#Device    Device   Mount        System  fsck  Mount    Mount
#to Mount  to fsck  Point        Type    Pass  at Boot  Options
#--------  -------  --------     ------  ----  -------  --------------------
/devices   -        /devices     devfs   -     no       -
/proc      -        /proc        proc    -     no       -
...
shrfs1     -        /hsm/shrfs1  samfs   -     no
To mount the file system on the client as a shared file system, enter the shared
option in the Mount Options
column of the vfstab
entry for the shared file system.
In the example, we edit the vfstab
entry for the shared file system shrfs1
as shown below:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- --------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
shrfs1 - /hsm/shrfs1 samfs - no shared
Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab
file. Then save the /etc/vfstab
file.
In the example, we add no additional mount options:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- --------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
shrfs1 - /hsm/shrfs1 samfs - no shared
:wq
root@clnt1:~#
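Before mounting, the vfstab entry can be sanity-checked non-interactively. The helper below is a hypothetical sketch (not an Oracle HSM command); it assumes the whitespace-separated vfstab layout shown above and confirms that the entry for a given family set carries the shared option in its Mount Options field:

```shell
# Hypothetical helper: confirm that the vfstab entry for the given
# family set includes "shared" in its Mount Options field (the last field).
check_shared_vfstab() {
  # $1 = path to vfstab, $2 = family set name
  awk -v fs="$2" '$1 == fs { print $NF }' "$1" | grep -q shared
}

# Example: check_shared_vfstab /etc/vfstab shrfs1 && echo "shared option present"
```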
Create the mount point specified in the /etc/vfstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/shrfs1
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
root@clnt1:~# mkdir /hsm
root@clnt1:~# mkdir /hsm/shrfs1
root@clnt1:~# chmod 755 /hsm/shrfs1
root@clnt1:~#
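The two mkdir commands above can be collapsed into one step; mkdir -p creates any missing parent directories and is safe to re-run. The helper name below is ours, for illustration only:

```shell
# Hypothetical helper: create a mount-point path and set 755 permissions,
# matching the mkdir/chmod sequence shown above.
ensure_mountpoint() {
  mkdir -p "$1" && chmod 755 "$1"
}

# Example: ensure_mountpoint /hsm/shrfs1
```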
Mount the shared file system:
root@clnt1:~# mount /hsm/shrfs1
root@clnt1:~#
If the new file-system client is connected to tape devices and is to serve as a datamover, configure the client for distributed I/O.
Otherwise, stop here.
On the Linux client, log in as root
.
In the example, the Oracle HSM shared file system is shrfs1
, and the host is a Linux client named clnt2L
:
[root@clnt2L ~]#
In a terminal window, retrieve the configuration information for the shared file system using the samfsconfig
device-path
command, where device-path
is the location where the command should start to search for file-system disk devices (such as /dev/*
).
Since Linux hosts do not have access to the metadata devices for the file system, the samfsconfig
command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal
0
—the metadata device—under Missing
Slices
, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, the samfsconfig
output for Linux host clnt2L
looks like this:
[root@clnt2L ~]# samfsconfig /dev/*
# Family Set 'shrfs1' Created Thu Feb 21 07:17:00 2013
#
# Missing slices
# Ordinal 0
# /dev/sda4            302    mr    shrfs1   -
# /dev/sda5            303    mr    shrfs1   -
Copy the entries for the shared file system from the samfsconfig
output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and paste the copied entries into the file.
In the example, the mcf
file for the Linux host, clnt2L
, starts out looking like this:
[root@clnt2L ~]# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  -------------
#/dev/sda4            302        mr         shrfs1     -
#/dev/sda5            303        mr         shrfs1     -
In the mcf
file, insert a line for the shared file system, and include the shared
parameter.
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  -------------
shrfs1                300        ma         shrfs1     -       shared
#/dev/sda4            302        mr         shrfs1     -
#/dev/sda5            303        mr         shrfs1     -
In the mcf
file, insert lines for the file system's metadata devices. Since the Linux host does not have access to metadata devices, set the Equipment
Identifier
field to nodev
(no device) and then set the remaining fields to exactly the same values as they have on the metadata server:
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  -------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
#/dev/sda4            302        mr         shrfs1     -
#/dev/sda5            303        mr         shrfs1     -
In the mcf
file, uncomment the entries for the Linux data devices.
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  -------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
/dev/sda4             302        mr         shrfs1     -
/dev/sda5             303        mr         shrfs1     -
Make sure that the Device State
field is set to on
for all devices, and save the mcf
file.
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  -------------
shrfs1                300        ma         shrfs1     on      shared
nodev                 301        mm         shrfs1     on
/dev/sda4             302        mr         shrfs1     on
/dev/sda5             303        mr         shrfs1     on
:wq
[root@clnt2L ~]#
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd
command is an initialization command that reads the Oracle HSM configuration files and stops if it encounters an error. In the example, we check the mcf
file on the Linux client, clnt2L
and find no errors:
[root@clnt2L ~]# sam-fsd
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
[root@clnt2L ~]#
On the Linux client, log in as root
.
In the example, the Oracle HSM shared file system is shrfs1
, and the host is a Linux client named clnt2L
:
[root@clnt2L ~]#
Back up the operating system's /etc/fstab
file.
[root@clnt2L ~]# cp /etc/fstab /etc/fstab.backup
Open the /etc/fstab
file in a text editor, and start a line for the shared file system.
In the example, we use the vi
text editor and add a line for the shrfs1
family set device:
[root@clnt2L ~]# vi /etc/fstab
#File
#Device    Mount        System  Mount                      Dump       Pass
#to Mount  Point        Type    Options                    Frequency  Number
#--------  -------      ------  -------------------------  ---------  ------
...
/proc      /proc        proc    defaults
shrfs1     /hsm/shrfs1  samfs
In the fourth column of the file, add the mandatory shared
mount option.
[root@clnt2L ~]# vi /etc/fstab
#File
#Device Mount System Mount Dump Pass
#to Mount Point Type Options Frequency Number
#-------- ------- -------- ------------------------- --------- ------
...
/proc /proc proc defaults
shrfs1 /hsm/shrfs1 samfs shared
In the fourth column of the file, add any other desired mount options using commas as separators.
Linux clients support the following additional mount options:
rw
, ro
retry
meta_timeo
rdlease
, wrlease
, aplease
minallocsz
, maxallocsz
noauto
, auto
In the example, we add the option noauto
:
#File
#Device Mount System Mount Dump Pass
#to Mount Point Type Options Frequency Number
#-------- ------- -------- ------------------------- --------- ------
...
/proc /proc proc defaults
shrfs1 /hsm/shrfs1 samfs shared,noauto
Enter zero (0
) in each of the two remaining columns in the file. Then save the /etc/fstab
file.
#File
#Device    Mount        System  Mount                      Dump       Pass
#to Mount  Point        Type    Options                    Frequency  Number
#--------  -------      ------  -------------------------  ---------  ------
...
/proc      /proc        proc    defaults
shrfs1     /hsm/shrfs1  samfs   shared,noauto              0          0
:wq
[root@clnt2L ~]#
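The finished fstab entry can be validated with a short sketch before mounting. The helper below is hypothetical (not part of Oracle HSM); it assumes the whitespace-separated fstab layout shown above and checks that the entry uses the samfs type, carries the mandatory shared option, and has zeros in the dump and pass columns:

```shell
# Hypothetical helper: validate an fstab entry for a samfs shared file system.
check_fstab_entry() {
  # $1 = path to fstab, $2 = file-system (family set) name
  awk -v fs="$2" '
    $1 == fs && $3 == "samfs" && $4 ~ /(^|,)shared(,|$)/ && $5 == 0 && $6 == 0 { ok = 1 }
    END { exit !ok }
  ' "$1"
}

# Example: check_fstab_entry /etc/fstab shrfs1 && echo "entry looks good"
```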
Create the mount point specified in the /etc/fstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/shrfs1
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
[root@clnt2L ~]# mkdir /hsm
[root@clnt2L ~]# mkdir /hsm/shrfs1
[root@clnt2L ~]# chmod 755 /hsm/shrfs1
Mount the shared file system. Use the command mount
mountpoint
, where mountpoint
is the mount-point directory specified in the /etc/fstab
file.
As the example shows, the mount
command generates a warning. This is normal and can be ignored:
[root@clnt2L ~]# mount /hsm/shrfs1
Warning: loading SUNWqfs will taint the kernel: SMI license
See http://www.tux.org/lkml/#export-tainted for information about tainted modules.
Module SUNWqfs loaded with warnings
Stop here.
Removing a host from a shared file system is simply a matter of removing it from the server configuration, as described below (to fully deconfigure the host, uninstall the software and the configuration files):
Log in to the Oracle HSM metadata server as root
.
In the example, the Oracle HSM shared file system is shrfs1
, and the metadata server host is mds1
:
root@mds1:~#
Log in to each client as root
, and unmount the shared file system.
Remember that potential metadata servers are themselves clients. In the example, we have three clients: clnt1
, clnt2
, and mds2
, a potential metadata server. For each client, we log in using ssh
, unmount the file system shrfs1
, and close the ssh
session:
root@mds1:~# ssh root@clnt1
Password:
root@clnt1:~# umount /hsm/shrfs1
root@clnt1:~# exit
root@mds1:~# ssh root@clnt2
Password:
root@clnt2:~# umount /hsm/shrfs1
root@clnt2:~# exit
root@mds1:~# ssh root@mds2
Password:
root@mds2:~# umount /hsm/shrfs1
root@mds2:~# exit
root@mds1:~#
On the metadata server, unmount the shared file system.
root@mds1:~# umount /hsm/shrfs1
root@mds1:~#
On the metadata server, rename the file /etc/opt/SUNWsamfs/hosts.
filesystem
to /etc/opt/SUNWsamfs/hosts.
filesystem
.bak
, where filesystem
is the name of the file system from which you are removing the client host.
root@mds1:~# mv /etc/opt/SUNWsamfs/hosts.shrfs1 /etc/opt/SUNWsamfs/hosts.shrfs1.bak
root@mds1:~#
Capture the current shared file-system host configuration to a file. From the metadata server, run the command samsharefs -R
filesystem
, redirecting the output to the file /etc/opt/SUNWsamfs/hosts.
filesystem
, where filesystem
is the name of the file system from which you are removing the client host.
The samsharefs
command displays the host configuration for the specified Oracle HSM shared file system. Redirecting the output to a file creates a new hosts file. In the example, we run the command from the metadata server mds1
:
root@mds1:~# samsharefs -R shrfs1 > /etc/opt/SUNWsamfs/hosts.shrfs1
root@mds1:~#
Open the newly created hosts file in a text editor.
In the example, we use the vi
editor. We need to remove the client clnt2
:
root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
#                                            Server   On/  Additional
#Host Name           Network Interface       Ordinal  Off  Parameters
#------------------  ----------------------  -------  ---  ----------
mds                  10.79.213.117           1        0    server
mds2                 10.79.213.217           2        0
clnt1                10.79.213.133           0        0
clnt2                10.79.213.47            0        0
In the hosts file, delete the line that corresponds to the client host that you need to remove. Then save the file, and close the editor.
In the example, we delete the entry for the host clnt2
:
root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds 10.79.213.117 1 0 server
mds2 10.79.213.217 2 0
clnt1 10.79.213.133 0 0
:wq
root@mds1:~#
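The same deletion can be scripted instead of performed in vi. The helper below is a hypothetical sketch (its name is ours); it filters the named host's line out of the hosts file, assuming the whitespace-separated layout shown above:

```shell
# Hypothetical helper: remove a client's entry from a shared-hosts file.
remove_host_entry() {
  # $1 = hosts file, $2 = host name to remove
  grep -v "^$2[[:space:]]" "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Example: remove_host_entry /etc/opt/SUNWsamfs/hosts.shrfs1 clnt2
```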
Update the file system with the revised hosts file. From the metadata server, use the command samsharefs
-R
-u
filesystem
, where filesystem
is the name of the file system from which you are removing the client host.
root@mds1:~# samsharefs -R -u shrfs1
root@mds1:~#
On the metadata server host, mount the shared file system.
In the examples, the /etc/vfstab
file contains an entry for the shrfs1
file system, so we use the simple mounting syntax (see the mount_samfs
man page for full information):
root@mds1:~# mount /hsm/shrfs1
On each client host, mount the shared file system.
Remember that potential metadata servers are themselves clients. In the example, we now have two clients: clnt1
and mds2
, a potential metadata server. For each client, we log in using ssh
, mount the file system shrfs1
, and close the ssh
session:
root@mds1:~# ssh root@mds2
Password:
root@mds2:~# mount /hsm/shrfs1
root@mds2:~# exit
root@mds1:~# ssh root@clnt1
Password:
root@clnt1:~# mount /hsm/shrfs1
root@clnt1:~# exit
root@mds1:~#
Stop here.
Starting with Oracle HSM Release 6.1.4, any client of a shared archiving file system that runs on Solaris 11 or higher can attach tape drives and carry out tape I/O on behalf of the file system. Distributing tape I/O across these datamover hosts greatly reduces server overhead, improves file-system performance, and allows significantly more flexibility when scaling Oracle HSM implementations. As your archiving needs increase, you now have the option of either replacing Oracle HSM metadata servers with more powerful systems (vertical scaling) or spreading the load across more clients (horizontal scaling).
To configure a client for distributed tape I/O, proceed as follows:
Connect all devices that will be used for distributed I/O to the client.
If you have not already done so, connect the tape devices using persistent bindings. Then return here.
Log in to the shared archiving file system's metadata server as root
.
In the example, the host name is mds
:
root@mds:~#
Make sure that the metadata server is running Solaris 11 or higher.
root@mds:~# uname -r
5.11
root@mds:~#
Make sure that all clients that serve as datamovers are running Solaris 11 or higher.
In the example, we open a terminal window for each client host, clnt1
and clnt2
, and log in remotely using ssh
. The log-in banner displays the Solaris version:
root@mds:~# ssh root@clnt1
...
Oracle Corporation      SunOS 5.11      11.1    September 2013
root@clnt1:~#
root@mds:~# ssh root@clnt2
...
Oracle Corporation      SunOS 5.11      11.1    September 2013
root@clnt2:~#
On the metadata server, copy the file /opt/SUNWsamfs/examples/defaults.conf
to the directory /etc/opt/SUNWsamfs/
.
root@mds:~# cp /opt/SUNWsamfs/examples/defaults.conf /etc/opt/SUNWsamfs/
root@mds:~#
On the metadata server, open the file /etc/opt/SUNWsamfs/defaults.conf
in a text editor.
By default, distio
is off
(disabled). In the example, we open the file in the vi
editor:
root@mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
...
#distio = on
In the defaults.conf
file, enable distributed I/O by uncommenting the line distio =
on
.
By default, distio
is off
(disabled).
root@mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
...
distio = on
Next, identify each device type that should participate in distributed I/O by adding a line to the defaults.conf
file of the form dev
_distio
=
on
, where dev
is one of the equipment type identifiers listed in Appendix A, "Glossary of Equipment Types". To exclude device type dev
from distributed I/O, add the line dev
_distio
=
off
.
By default, Oracle StorageTek T10000 drives and LTO drives are allowed to participate in distributed I/O, while all other types are excluded. In the example, we plan to use LTO drives, so we do not need to make any edits:
root@mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
...
distio = on
Next, identify each device type that should not participate in distributed I/O by adding a line to the defaults.conf
file of the form dev
_distio
=
off
, where dev
is one of the equipment type identifiers listed in Appendix A, "Glossary of Equipment Types".
In the example, we do not want to use Oracle StorageTek T10000 drives with distributed I/O. Since, by default, Oracle StorageTek T10000 drives are allowed to participate in distributed I/O, we have to add the line ti_distio
=
off
to the defaults.conf
file:
root@mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
...
distio = on
ti_distio = off
Save the defaults.conf
file, and close the editor.
root@mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
...
distio = on
ti_distio = off
:wq
root@mds:~#
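Once saved, the effective settings can be read back out of the file with a short awk sketch. The helper below is ours, for illustration; it assumes the simple name = value layout shown above:

```shell
# Hypothetical helper: print the global distio setting and any
# per-device-type overrides (e.g. "ti_distio = off") from defaults.conf.
show_distio() {
  awk '
    /^distio[ \t]*=/            { print "global: " $3 }
    /^[a-z0-9]+_distio[ \t]*=/  { split($1, a, "_"); print a[1] ": " $3 }
  ' "$1"
}

# Example: show_distio /etc/opt/SUNWsamfs/defaults.conf
```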
On each client that will serve as a datamover, edit the defaults.conf
file so that it matches the file on the server.
In the example, we use Secure Shell (ssh
) to remotely log in to client clnt1
and edit the defaults.conf
file:
root@mds:~# ssh root@clnt1
Password:
root@clnt1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
ti_distio = off
:wq
root@clnt1:~# exit
root@mds:~#
On each client that will serve as a datamover, open the /etc/opt/SUNWsamfs/mcf
file in a text editor. Add all of the tape devices that the metadata server is using for distributed tape I/O. Make sure that the device order and equipment numbers are identical to those in the mcf
file on the metadata server.
In the example, we edit the mcf
file on client clnt1
using vi
:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family      Device  Additional
# Identifier               Ordinal    Type       Set         State   Parameters
#-----------------------   ---------  ---------  ----------  ------  -------------
shrfs1                     800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/rmt/60cbn             901        li                     on
/dev/rmt/61cbn             902        li                     on
/dev/rmt/62cbn             903        li                     on
/dev/rmt/63cbn             904        li                     on
If the tape library listed in the /etc/opt/SUNWsamfs/mcf
file on the metadata server is configured on the client that will serve as a datamover, specify the library family set as the family set name for the tape devices that are being used for distributed tape I/O. Save the file, and close the editor.
In the example, the library is configured on the host, clnt1
, so we use the family set name lib1
for the tape devices:
root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family      Device  Additional
# Identifier               Ordinal    Type       Set         State   Parameters
#-----------------------   ---------  ---------  ----------  ------  -------------
shrfs1                     800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/scsi/changer/c1t0d5   900        rb         lib1        on
/dev/rmt/60cbn             901        li         lib1        on
/dev/rmt/61cbn             902        li         lib1        on
/dev/rmt/62cbn             903        li         lib1        on
/dev/rmt/63cbn             904        li         lib1        on
:wq
root@clnt1:~#
If the tape library listed in the /etc/opt/SUNWsamfs/mcf
file on the metadata server is not configured on the client that will serve as a datamover, use a hyphen (-
) as the family set name for the tape devices that are being used for distributed tape I/O. Save the file, and close the editor.
In the example, the library is not configured on the host, clnt2
, so we use the hyphen as the family set name for the drives:
root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family      Device  Additional
# Identifier               Ordinal    Type       Set         State   Parameters
#-----------------------   ---------  ---------  ----------  ------  -------------
shrfs1                     800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/rmt/60cbn             901        ti         -           on
/dev/rmt/61cbn             902        ti         -           on
/dev/rmt/62cbn             903        ti         -           on
/dev/rmt/63cbn             904        ti         -           on
:wq
root@clnt2:~#
If you need to enable or disable distributed tape I/O for particular archive set copies, open the server's /etc/opt/SUNWsamfs/archiver.cmd
file in a text editor and add the -distio
parameter to the copy directive. Set -distio
on
to enable or off
to disable distributed I/O. Save the file, and close the editor.
In the example, we use the vi editor to turn distributed I/O off
for copy 1
and on
for copy 2
:
root@mds:~# vi /etc/opt/SUNWsamfs/archiver.cmd
# archiver.cmd
# Generated by config api Mon Nov 22 14:31:39 2013
...
#
# Copy Parameters Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -startcount 500000 -distio off
allsets.2 -startage 24h -startsize 20G -startcount 500000 -distio on
:wq
root@mds:~#
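A quick way to confirm which copies have distributed I/O enabled is to scan archiver.cmd for the -distio parameter. The helper below is a hypothetical sketch, not an Oracle HSM utility; it assumes the one-directive-per-line format shown above:

```shell
# Hypothetical helper: list each copy directive's -distio setting
# from an archiver.cmd-style file.
show_copy_distio() {
  awk '
    /-distio/ {
      for (i = 1; i <= NF; i++)
        if ($i == "-distio") print $1 ": " $(i + 1)
    }
  ' "$1"
}

# Example: show_copy_distio /etc/opt/SUNWsamfs/archiver.cmd
```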
On each host, check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd
command is an initialization command that reads the Oracle HSM configuration files and stops if it encounters an error. In the example, we check the mcf
file on clients clnt1
and clnt2
:
root@mds:~# ssh root@clnt1
Password:
root@clnt1:~# sam-fsd
...
root@clnt1:~# exit
root@mds:~# ssh root@clnt2
Password:
root@clnt2:~# sam-fsd
...
root@clnt2:~# exit
root@mds:~#
On the server, tell the Oracle HSM software to read the modified configuration files and reconfigure itself accordingly. Use the command samd
config
, and correct any errors found.
In the example, we run the samd config
command on the server, mds
:
root@mds:~# samd config
Configuring SAM-FS
...
root@mds:~#
Stop here.
When you add a host that serves as either a potential metadata server or a distributed I/O datamover client, you must configure removable media devices using persistent bindings. The Solaris operating system attaches drives to the system device tree in the order in which it discovers the devices at startup. This order may or may not reflect the order in which devices are discovered by other file system hosts or the order in which they are physically installed in the tape library. So you need to bind the devices to the new host in the same way that they are bound to the other hosts and in the same order in which they are installed in the removable media library.
The procedures below outline the required steps (for full information, see the devfsadm
and devlinks
man pages and the administration documentation for your version of the Solaris operating system):
If you have moved, added, or removed drives in a library or replaced or reconfigured the library associated with an archiving Oracle HSM shared file system, update persistent bindings to reflect the changes.
If you are adding a new metadata server or datamover client to an archiving Oracle HSM shared file system, persistently bind the new file system host to the removable media devices.
Log in to the active metadata server host as root
.
root@mds1:~#
Create a new drive-mapping file as described in "Determining the Order in Which Drives are Installed in the Library".
In the example, the device-mappings.txt
file looks like this:
root@mds1:~# vi /root/device-mappings.txt
LIBRARY  SOLARIS        SOLARIS
DEVICE   LOGICAL        PHYSICAL
NUMBER   DEVICE         DEVICE
-------  -------------  -----------------------------------------------------
2        /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1        /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3        /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4        /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Open the /etc/devlink.tab
file in a text editor.
In the example, we use the vi
editor:
root@mds1:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
# This is the table used by devlinks
# Each entry should have 2 fields; but may have 3.  Fields are separated
# by single tab ('\t') characters.
...
Using the device-mappings.txt
file as a guide, remap a starting node in the Solaris tape device tree to the first drive in the library. In the /etc/devlink.tab
file, add a line of the form type=ddi_byte:tape;
addr=
device_address
,0;
rmt/
node-number
\M0
, where device_address
is the physical address of the device and node-number
is a position in the Solaris device tree that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0
).
In the example, we note that the device address for the first device in the library, 1
, is w500104f0008120fe
and see that the device is currently attached to the host at rmt/1
:
root@mds1:~# cat /root/device-mappings.txt
LIBRARY  SOLARIS        SOLARIS
DEVICE   LOGICAL        PHYSICAL
NUMBER   DEVICE         DEVICE
-------  -------------  -----------------------------------------------------
2        /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1        /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3        /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4        /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
So we create a line in /etc/devlink.tab
that remaps rmt/60
to the number 1
drive in the library, w500104f0008120fe
:
root@mds1:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
:w
Continue to add lines to the /etc/devlink.tab
file for each tape device that is assigned for Oracle HSM archiving, so that the drive order in the device tree on the metadata server matches the installation order on the library. Save the file, and close the editor.
In the example, we note the order and addresses of the three remaining devices—library drive 2
at w500104f00093c438
, library drive 3
at w500104f000c086e1
, and library drive 4
at w500104f000b6d98d
:
root@mds1:~# cat /root/device-mappings.txt
...
2        /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn
1        /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn
3        /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn
4        /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Then we map the device addresses to the next three Solaris device nodes, maintaining the same order as in the library:
root@mds1:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
root@mds1:~#
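Because every host must receive identical devlink.tab lines in library-drive order, the remapping lines can be generated rather than typed by hand. The helper below is a hypothetical sketch; it assumes the device-mappings.txt layout shown above (library drive number in the first column, physical st@w... path in the last) and takes the first free rmt/ node number as an argument:

```shell
# Hypothetical helper: emit devlink.tab remapping lines from a
# device-mappings.txt-style listing, in library-drive order.
gen_devlink_lines() {
  # $1 = mapping file (data lines only), $2 = first free rmt/ node number
  sort -n "$1" | awk -v node="$2" '
    {
      # Pull the w... device address out of ".../st@w500104f...,0:cbn"
      if (match($NF, /st@[^,]+/)) {
        addr = substr($NF, RSTART + 3, RLENGTH - 3)
        printf "type=ddi_byte:tape;addr=%s,0;\trmt/%d\\M0\n", addr, node++
      }
    }'
}

# Example: gen_devlink_lines /root/device-mappings.txt 60 >> /etc/devlink.tab
```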
Delete all existing links to the tape devices in /dev/rmt
.
root@mds1:~# rm /dev/rmt/*
Create new, persistent tape-device links from the entries in the /etc/devlink.tab
file. Use the command devfsadm -c tape
.
Each time that the devfsadm
command runs, it creates new tape device links for devices specified in the /etc/devlink.tab
file using the configuration specified by the file. The -c tape
option restricts the command to creating new links for tape-class devices only:
root@mds1:~# devfsadm -c tape
Repeat the operation on each potential metadata server and datamover in the shared file system configuration. In each case, add the same lines to the /etc/devlink.tab
file, delete the links in /dev/rmt
, and run devfsadm -c tape
.
In the example, the file system configuration includes one potential metadata server, mds2
, and one client, clnt1
. We use ssh
to log in to each host in turn, and configure the same four logical devices: rmt/60\M0
, rmt/61\M0
, rmt/62\M0
, and rmt/63\M0
.
root@mds1:~# ssh root@mds2
Password:
root@mds2:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
root@mds2:~# rm /dev/rmt/*
root@mds2:~# devfsadm -c tape
root@mds2:~# exit
root@mds1:~# ssh root@clnt1
Password:
root@clnt1:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
root@clnt1:~# rm /dev/rmt/*
root@clnt1:~# devfsadm -c tape
root@clnt1:~# exit
root@mds1:~#
Return to the task that you were performing: "Configuring Datamover Clients for Distributed Tape I/O" or "Configuring Additional File System Clients".
Log in to the host as root
.
root@mds1~#
If the physical order of the drives in the media library has changed since the existing file-system hosts were configured, create a new mapping file as described in "Determining the Order in Which Drives are Installed in the Library".
In the example, the device-mappings.txt file looks like this:
root@mds1~# cat /root/device-mappings.txt LIBRARY SOLARIS SOLARIS DEVICE LOGICAL PHYSICAL NUMBER DEVICE DEVICE ------- ------------- ----------------------------------------------------- 2 /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn 1 /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn 3 /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn 4 /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
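If the mapping file is long, it can help to put it into library order before working through it. The following is a minimal sketch using standard POSIX text tools on a scratch copy of the listing above (device paths are shortened, and the scratch file is hypothetical; this is not an Oracle HSM utility):

```shell
# Scratch copy of the device-mappings.txt body (device paths shortened).
cat > /tmp/device-mappings.txt <<'EOF'
2  /dev/rmt/0cbn -> st@w500104f00093c438,0:cbn
1  /dev/rmt/1cbn -> st@w500104f0008120fe,0:cbn
3  /dev/rmt/2cbn -> st@w500104f000c086e1,0:cbn
4  /dev/rmt/3cbn -> st@w500104f000b6d98d,0:cbn
EOF

# Sort numerically on the library device number (first column) so the
# rows appear in physical installation order: library drive 1 first.
sort -n -k1,1 /tmp/device-mappings.txt
```

With the rows in library order, the first line shows the drive that the first remapped node should point to.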
Open the /etc/devlink.tab file in a text editor. In the example, we use the vi editor:
root@mds1~# vi /etc/devlink.tab # Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved. # This is the table used by devlinks # Each entry should have 2 fields; but may have 3. Fields are separated # by single tab ('\t') characters. ...
Using the device-mappings.txt file as a guide, remap a starting node in the Solaris tape device tree, rmt/node-number, to the first drive in the library. Add a line to the /etc/devlink.tab file of the form type=ddi_byte:tape;addr=device_address,0; rmt/node-number\M0, where device_address is the physical address of the device and node-number is the device's position in the Solaris device tree. Choose a node number that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0).
In the example, we note that the device address for the first device in the library, 1, is w500104f0008120fe and see that the device is currently attached to the host at rmt/1:
root@mds1~# cat /root/device-mappings.txt LIBRARY SOLARIS SOLARIS DEVICE LOGICAL PHYSICAL NUMBER DEVICE DEVICE ------- ------------- ----------------------------------------------------- 2 /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn 1 /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn 3 /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn 4 /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
So we create a line in /etc/devlink.tab that remaps rmt/60 to the number 1 drive in the library, w500104f0008120fe:
root@mds1~# vi /etc/devlink.tab # Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved. ... type=ddi_byte:tape;addr=w500104f0008120fe,0; rmt/60\M0 :w
Continue to add lines to the /etc/devlink.tab file for each tape device that is assigned for Oracle HSM archiving, so that the drive order in the device tree on the metadata server matches the installation order in the library. Save the file.
In the example, we note the order and addresses of the three remaining devices: library drive 2 at w500104f00093c438, library drive 3 at w500104f000c086e1, and library drive 4 at w500104f000b6d98d:
root@mds1~# cat /root/device-mappings.txt ... 2 /dev/rmt/0cbn -> ../../devices/pci@8.../st@w500104f00093c438,0:cbn 1 /dev/rmt/1cbn -> ../../devices/pci@8.../st@w500104f0008120fe,0:cbn 3 /dev/rmt/2cbn -> ../../devices/pci@8.../st@w500104f000c086e1,0:cbn 4 /dev/rmt/3cbn -> ../../devices/pci@8.../st@w500104f000b6d98d,0:cbn
Then we map the device addresses to the next three Solaris device nodes, maintaining the same order as in the library:
root@mds1~# vi /etc/devlink.tab ... type=ddi_byte:tape;addr=w500104f0008120fe,0; rmt/60\M0 type=ddi_byte:tape;addr=w500104f00093c438,0; rmt/61\M0 type=ddi_byte:tape;addr=w500104f000c086e1,0; rmt/62\M0 type=ddi_byte:tape;addr=w500104f000b6d98d,0; rmt/63\M0 :wq root@mds1~#
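The four entries above can also be generated mechanically from the mapping file, which avoids transcription errors in the long WWN addresses. The following sketch uses only standard text tools and scratch files (the shortened paths, scratch file names, and starting node 60 match the example; this is not an Oracle HSM utility):

```shell
# Scratch copy of the device-mappings.txt body (device paths shortened).
cat > /tmp/device-mappings.txt <<'EOF'
2  /dev/rmt/0cbn -> st@w500104f00093c438,0:cbn
1  /dev/rmt/1cbn -> st@w500104f0008120fe,0:cbn
3  /dev/rmt/2cbn -> st@w500104f000c086e1,0:cbn
4  /dev/rmt/3cbn -> st@w500104f000b6d98d,0:cbn
EOF

# Emit one devlink.tab line per drive, in library order, numbering the
# Solaris nodes sequentially from rmt/60: sort by library drive number,
# extract the WWN between "st@" and ",0", then format each entry.
sort -n -k1,1 /tmp/device-mappings.txt |
sed -n 's/.*st@\(w[0-9a-f]*\),0.*/\1/p' |
awk '{ printf "type=ddi_byte:tape;addr=%s,0;\trmt/%d\\M0\n", $1, 59+NR }' \
  > /tmp/devlink.additions

cat /tmp/devlink.additions
```

The generated lines can then be reviewed and appended to /etc/devlink.tab on each host.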
Delete all existing links to the tape devices in /dev/rmt.
root@mds1~# rm /dev/rmt/*
Create new, persistent tape-device links from the entries in the /etc/devlink.tab file. Use the command devfsadm -c tape.
Each time that the devfsadm command runs, it creates new tape device links for the devices specified in the /etc/devlink.tab file, using the configuration specified by the file. The -c tape option restricts the command to creating new links for tape-class devices only:
root@mds1~# devfsadm -c tape
On each potential metadata server and datamover in the shared file system configuration, add the same lines to the /etc/devlink.tab file, delete the links in /dev/rmt, and run devfsadm -c tape.
In the example, we use ssh to log in to the potential metadata server host mds2 and the client host clnt1. We then configure the same four logical devices, rmt/60\M0, rmt/61\M0, rmt/62\M0, and rmt/63\M0, on each:
root@mds1~# ssh root@mds2 Password: root@mds2:~# vi /etc/devlink.tab ... type=ddi_byte:tape;addr=w500104f0008120fe,0; rmt/60\M0 type=ddi_byte:tape;addr=w500104f00093c438,0; rmt/61\M0 type=ddi_byte:tape;addr=w500104f000c086e1,0; rmt/62\M0 type=ddi_byte:tape;addr=w500104f000b6d98d,0; rmt/63\M0 :wq root@mds2:~# rm /dev/rmt/* root@mds2:~# devfsadm -c tape root@mds2:~# exit root@mds1~# ssh root@clnt1 Password: root@clnt1:~# vi /etc/devlink.tab ... type=ddi_byte:tape;addr=w500104f0008120fe,0; rmt/60\M0 type=ddi_byte:tape;addr=w500104f00093c438,0; rmt/61\M0 type=ddi_byte:tape;addr=w500104f000c086e1,0; rmt/62\M0 type=ddi_byte:tape;addr=w500104f000b6d98d,0; rmt/63\M0 :wq root@clnt1:~# rm /dev/rmt/* root@clnt1:~# devfsadm -c tape root@clnt1:~# exit root@mds1~#
Return to the task that you were performing: "Configuring Datamover Clients for Distributed Tape I/O" or "Configuring Additional File System Clients".
The procedures in this section move the metadata service for the file system from the current host (the active metadata server) to a standby host (the potential metadata server). How you proceed depends on the health of the current host:
If the current, active host has failed, you move the metadata service from the faulty host to the healthy standby host, even if file systems are still mounted.
If the current, active host is healthy, you unmount the file systems before you move the metadata service from the current to the standby host.
This procedure lets you move the metadata service off of an active metadata server host that has stopped functioning. It activates a potential metadata server, even if a file system is still mounted. Proceed as follows:
Caution:
Never activate a potential metadata server until you have stopped, disabled, or disconnected the faulty metadata server! To activate a potential server when a file system is mounted and the active metadata server is down, you have to invoke the samsharefs command with the -R option, which acts on raw devices rather than on file-system interfaces. So, if you activate a potential metadata server while the faulty server is still connected to the devices, the faulty server can corrupt the file system.
If the active metadata server is faulty, make sure that it cannot access the metadata devices before you do anything else. Power the affected host off, halt the host, or disconnect the failed host from the metadata devices.
Wait at least until the maximum lease time has run out, so that all client read, write, and append leases can expire.
Log in to a potential metadata server as root.
In the example, we log in to the potential metadata server mds2:
root@mds2:~#
Activate the potential metadata server. From the potential metadata server, issue the command samsharefs -R -s server file-system, where server is the host name of the potential metadata server and file-system is the name of the Oracle HSM shared file system.
In the example, the file system name is shrfs1:
root@mds2:~# samsharefs -R -s mds2 shrfs1
If you need to check the integrity of a file system and repair possible problems, unmount the file system now using the procedure "Unmount a Shared File System".
If you have unmounted the file system, perform the file system check. Use the command samfsck -F file-system, where -F specifies repair of errors and file-system is the name of the file system.
In the example, we check and repair the shrfs1 file system:
root@mds2:~# samfsck -F shrfs1 samfsck: Configuring file system samfsck: Enabling the sam-fsd service. name: shrfs1 version: 2A ... root@mds2:~#
Stop here.
You can move the metadata service off of a healthy, active metadata server host and on to a newly activated potential metadata server when required. For example, you might transfer metadata services to an alternate host to keep file systems available while you upgrade or replace the original server host or some of its components. Proceed as follows:
Log in to both the active and potential metadata servers as root.
In the example, we log in to the active metadata server, mds1. Then, in a second terminal window, we use secure shell (ssh) to log in to the potential metadata server mds2:
root@mds1~# ssh root@mds2
Password:
root@mds2:~#
If the active metadata server mounts an Oracle HSM archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
If you use a crontab entry to run the recycler process, remove the entry and make sure that the recycler is not currently running.
Activate the potential metadata server. From the potential metadata server, issue the command samsharefs -s server file-system, where server is the host name of the potential metadata server and file-system is the name of the Oracle HSM shared file system.
In the example, the potential metadata server is mds2 and the file system name is shrfs1:
root@mds2:~# samsharefs -s mds2 shrfs1
Load the configuration files and start Oracle HSM processes on the potential metadata server. Use the command samd config.
For archiving shared file systems, the samd config command restarts archiving processes and the library control daemon. But shared file system clients that are waiting for files to be staged from tape to the primary disk cache must reissue the stage requests.
If you still need to use a crontab entry to run the recycler process, restore the entry.
Stop here.
To convert an unshared file system to a shared file system, carry out the following tasks:
On each metadata server, you must create a hosts file that lists network address information for the servers and clients of a shared file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Gather the network host names and IP addresses for the hosts that will share the file system as clients.
In the examples below, we will share the shrfs1 file system with the clients mds2 (a potential metadata server), clnt1, and clnt2.
Log in to the metadata server as root.
In the example, we log in to the host mds1:
root@mds1~#
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name on the metadata server, replacing family-set-name with the family-set name of the file system that you intend to share.
In the example, we create the file hosts.shrfs1 using the vi text editor. We add some optional headings, starting each line with a hash sign (#), indicating a comment:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
Enter the host name of the metadata server in the first column and the corresponding IP address or domain name in the second. Separate the columns with whitespace characters.
In the example, we enter the host name and IP address of the metadata server, mds1 and 10.79.213.117, respectively:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1 # /etc/opt/SUNWsamfs/hosts.shrfs1 # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 10.79.213.117
Add a third column, separated from the network address by whitespace characters. In this column, enter the ordinal number of the server (1 for the active metadata server, 2 for the first potential metadata server, and so on).
In this example, there is only one metadata server, so we enter 1:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds1 10.79.213.117 1
Add a fourth column, separated from the server ordinal number by whitespace characters. In this column, enter 0 (zero).
A 0, - (hyphen), or blank value in the fourth column indicates that the host is on—configured with access to the shared file system. A 1 (numeral one) indicates that the host is off—configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page).
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds1 10.79.213.117 1 0
Add a fifth column, separated from the on/off status column by whitespace characters. In this column, enter the keyword server to indicate the currently active metadata server:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds1 10.79.213.117 1 0 server
If you plan to include one or more hosts as potential metadata servers, create an entry for each, incrementing the server ordinal each time. But do not include the server keyword (there can be only one active metadata server per file system).
In the example, the host mds2 is a potential metadata server with the server ordinal 2. Until and unless we activate it as a metadata server, it will be a client:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds1 10.79.213.117 1 0 server
mds2 10.79.213.217 2 0
Add a line for each client host, each with a server ordinal value of 0.
In the example, we add two clients, clnt1 and clnt2.
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1 # /etc/opt/SUNWsamfs/hosts.shrfs1 # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 10.79.213.117 1 0 server mds2 10.79.213.217 2 0 clnt1 10.79.213.33 0 0 clnt2 10.79.213.47 0 0
Save the /etc/opt/SUNWsamfs/hosts.family-set-name file, and quit the editor.
In the example, we save the changes to /etc/opt/SUNWsamfs/hosts.shrfs1 and exit the vi editor:
root@mds1~# vi /etc/opt/SUNWsamfs/hosts.shrfs1
# /etc/opt/SUNWsamfs/hosts.shrfs1
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
mds1 10.79.213.117 1 0 server
mds2 10.79.213.217 2 0
clnt1 10.79.213.33 0 0
clnt2 10.79.213.47 0 0
:wq
root@mds1~#
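Before copying the file to other hosts, it can be worth a quick consistency check. The sketch below runs ordinary awk and sort checks on a scratch copy of the example file (the scratch path is hypothetical; these checks are not an Oracle HSM tool): exactly one non-comment entry should carry the server keyword, and no nonzero server ordinal should be duplicated.

```shell
# Scratch copy of the example hosts file.
cat > /tmp/hosts.shrfs1 <<'EOF'
# /etc/opt/SUNWsamfs/hosts.shrfs1
mds1   10.79.213.117   1  0  server
mds2   10.79.213.217   2  0
clnt1  10.79.213.33    0  0
clnt2  10.79.213.47    0  0
EOF

# Count entries marked "server"; there must be exactly one
# (the currently active metadata server).
awk '!/^#/ && $5 == "server"' /tmp/hosts.shrfs1 | wc -l

# Nonzero server ordinals (column 3) must be unique;
# this prints any duplicated ordinal values.
awk '!/^#/ && $3 > 0 { print $3 }' /tmp/hosts.shrfs1 | sort | uniq -d
```

If the first check prints anything other than 1, or the second prints any ordinal, correct the file before distributing it.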
Place a copy of the new /etc/opt/SUNWsamfs/hosts.family-set-name file on any potential metadata servers that are included in the shared file-system configuration.
In the example, we use Secure File Transfer Protocol to place a copy on host mds2:
root@mds1~# sftp root@mds2 Password: sftp> cd /etc/opt/SUNWsamfs/ sftp> put /etc/opt/SUNWsamfs/hosts.shrfs1 sftp> bye root@mds1~#
Now share the unshared file system and configure the clients.
Log in to the metadata server as root.
In the example, we log in to the host mds1:
root@mds1~#
If you do not have current backup copies of the system files and configuration files, create backups now. Use the procedures in "Backing Up the Oracle HSM Configuration".
If you do not have a current file-system recovery point file and a recent copy of the archive log, create them now. Use the procedures in "Backing Up File Systems".
If you set up an automated backup process for the file system during initial configuration, you may not need additional backups.
If you are converting an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
Unmount the file system. Use the command umount family-set-name, where family-set-name is the family-set name of the file system that you intend to share.
For more information on mounting and unmounting Oracle HSM file systems, see the mount_samfs man page. In the example, we unmount the hqfs1 file system:
root@mds1~# umount /hsm/hqfs1 root@mds1~#
Convert the file system to an Oracle HSM shared file system. Use the command samfsck -S -F file-system-name, where file-system-name is the family-set name of the file system.
In the example, we convert the file system named hqfs1:
root@mds1~# samfsck -S -F hqfs1 ... root@mds1~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
root@mds1~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #------------------ --------- --------- ------- ------ ----------------- hqfs1 200 ma hqfs1 on /dev/dsk/c0t0d0s0 201 mm hqfs1 on /dev/dsk/c0t3d0s0 202 md hqfs1 on /dev/dsk/c0t3d0s1 203 md hqfs1 on
In the mcf file, add the shared parameter to the additional parameters field in the last column of the file system entry. Then save the file and close the editor.
root@mds1~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #------------------ --------- --------- ------- ------ ----------------- hqfs1 200 ma hqfs1 on shared /dev/dsk/c0t0d0s0 201 mm hqfs1 on /dev/dsk/c0t3d0s0 202 md hqfs1 on /dev/dsk/c0t3d0s1 203 md hqfs1 on :wq root@mds1~#
Open the /etc/vfstab file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
root@mds1~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #-------- ------- -------- ------ ---- ------- ----------------------- /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - yes
In the /etc/vfstab file, add the shared mount option to the mount options field in the last column of the file system entry. Then save the file and close the editor.
root@mds1~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #-------- ------- -------- ------ ---- ------- ----------------------- /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - yes shared :wq root@mds1~#
Initialize the shared file system and host configuration. Use the command samsharefs -u -R family-set-name, where family-set-name is the family-set name of the file system.
root@mds1~# samsharefs -u -R hqfs1
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
root@mds1~# samd config
Configuring SAM-FS ...
root@mds:~#
Mount the shared file system on the metadata server.
root@mds1~# mount /hsm/hqfs1
If your hosts are configured with multiple network interfaces, use local host files to route network communications.
Add any required clients to the newly shared file system, using the procedures outlined in "Configuring Additional File System Clients".
Individual hosts do not require local hosts files. The file system's global file on the metadata server identifies the active metadata server and the network interfaces of active and potential metadata servers for all file system hosts (see "Create a Hosts File on the Active and Potential Metadata Servers"). But local hosts files can be useful when you need to selectively route network traffic between file-system hosts that have multiple network interfaces.
Each file-system host identifies the network interfaces for the other hosts by first checking the /etc/opt/SUNWsamfs/hosts.family-set-name file on the metadata server, where family-set-name is the name of the file system family specified in the /etc/opt/SUNWsamfs/mcf file. Then it checks for its own, specific /etc/opt/SUNWsamfs/hosts.family-set-name.local file. If there is no local hosts file, the host uses the interface addresses specified in the global hosts file, in the order specified in the global file. But if there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files, in the order specified in the local file. By using different addresses in each file, you can thus control the interfaces used by different hosts.
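The both-files selection rule can be illustrated with ordinary shell tools. In the sketch below, the host and interface names (mds1-priv, mds1) and comma-separated interface lists are hypothetical; this is an illustration of the rule described above, not the actual Oracle HSM resolution code:

```shell
# Hypothetical interfaces listed for host mds1 in the GLOBAL hosts file.
global_ifs="mds1-priv,mds1"
# Hypothetical interfaces listed for mds1 in THIS host's LOCAL hosts file.
local_ifs="mds1-priv"

# Keep only the interfaces present in both lists, preserving
# local-file order, as the selection rule requires.
echo "$local_ifs" | tr ',' '\n' | while read -r ifc; do
  case ",$global_ifs," in
    *",$ifc,"*) echo "$ifc" ;;
  esac
done > /tmp/chosen_ifs

cat /tmp/chosen_ifs
```

Here the host would contact mds1 only via mds1-priv, because that is the only interface that appears in both files.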
To configure local hosts files, use the procedure outlined below:
On the metadata server host and on each potential metadata server host, create a copy of the global hosts file, /etc/opt/SUNWsamfs/hosts.family-set-name.
For the examples in this section, the shared file system, shrfs1, includes an active metadata server, mds1, and a potential metadata server, mds2, each with two network interfaces. There are also two clients, clnt1 and clnt2.
We want the active and potential metadata servers to communicate with each other via private network addresses and with the clients via host names that Domain Name Service (DNS) can resolve to addresses on the public, local area network (LAN). So /etc/opt/SUNWsamfs/hosts.shrfs1, the file system's global hosts file, specifies a private network address in the Network Interface field of the entries for the active and potential servers and a host name for the interface address of each client. The file looks like this:
# /etc/opt/SUNWsamfs/hosts.shrfs1 # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 172.16.0.129 1 0 server mds2 172.16.0.130 2 0 clnt1 clnt1 0 0 clnt2 clnt2 0 0
Create a local hosts file on each of the active and potential metadata servers, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file. Include only interfaces for the networks that you want the active and potential servers to use.
In the example, we want the active and potential metadata servers to communicate with each other over the private network, so the local hosts file on each server, hosts.shrfs1.local, lists only private addresses for the active and potential servers:
root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1.local # /etc/opt/SUNWsamfs/hosts.shrfs1.local # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 172.16.0.129 1 0 server mds2 172.16.0.130 2 0 :wq root@mds1:~# ssh root@mds2 Password:
root@mds2:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1.local # /etc/opt/SUNWsamfs/hosts.shrfs1.local # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 172.16.0.129 1 0 server mds2 172.16.0.130 2 0 :wq root@mds2:~# exit root@mds1:~#
Create a local hosts file on each of the clients, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file. Include only interfaces for the networks that you want the clients to use.
In our example, we want the clients to communicate with the server only via the public network. So the file includes only the host names of the active and potential metadata servers:
root@mds1:~# ssh root@clnt1 Password: root@clnt1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1.local # /etc/opt/SUNWsamfs/hosts.shrfs1.local # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 mds1 1 0 server mds2 mds2 2 0 :wq root@clnt1:~# exit root@mds1:~# ssh root@clnt2 Password:
root@clnt2:~# vi /etc/opt/SUNWsamfs/hosts.shrfs1.local # /etc/opt/SUNWsamfs/hosts.shrfs1.local # Server On/ Additional #Host Name Network Interface Ordinal Off Parameters #------------------ ---------------------- ------- --- ---------- mds1 mds1 1 0 server mds2 mds2 2 0 :wq root@clnt2:~# exit root@mds1:~#
If you started this procedure while finishing the configuration of the server, add clients.
When you need to unshare a file system, proceed as follows:
Log in to the metadata server as root.
In the example, we log in to the host mds:
root@mds:~#
If you do not have current backup copies of the system files and configuration files, create backups now. See "Backing Up the Oracle HSM Configuration".
If you do not have a current file-system recovery point file and a recent copy of the archive log, create them now. See "Backing Up File Systems".
If you set up an automated backup process for the file system during initial configuration, you may not need additional backups.
If you are converting an archiving file system, finish active archiving and staging jobs and stop any new activity before proceeding further.
Unmount the file system. Use the command umount family-set-name, where family-set-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file.
For more information on mounting and unmounting Oracle HSM file systems, see the mount_samfs man page. In the example, we unmount the hqfs1 file system:
root@mds:~# umount /hsm/hqfs1
Convert the Oracle HSM shared file system to an unshared file system. Use the command samfsck -F -U file-system-name, where file-system-name is the name specified for the shared file system in the /etc/opt/SUNWsamfs/mcf file.
In the example, we convert the file system named hqfs1:
root@mds:~# samfsck -F -U hqfs1
samfsck: Configuring file system
samfsck: Enabling the sam-fsd service.
name: hqfs1 version: 2A
First pass
Second pass
Third pass
...
root@mds:~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #------------------ --------- --------- ------- ------ ----------------- hqfs1 200 ma hqfs1 on shared /dev/dsk/c0t0d0s0 201 mm hqfs1 on /dev/dsk/c0t3d0s0 202 md hqfs1 on /dev/dsk/c0t3d0s1 203 md hqfs1 on
In the mcf file, delete the shared parameter from the additional parameters field in the last column of the file system entry. Then save the file and close the editor.
root@mds:~# vi /etc/opt/SUNWsamfs/mcf # Equipment Equipment Equipment Family Device Additional # Identifier Ordinal Type Set State Parameters #------------------ --------- --------- ------- ------ ----------------- hqfs1 200 ma hqfs1 on /dev/dsk/c0t0d0s0 201 mm hqfs1 on /dev/dsk/c0t3d0s0 202 md hqfs1 on /dev/dsk/c0t3d0s1 203 md hqfs1 on :wq root@mds:~#
Open the /etc/vfstab file in a text editor, and locate the line for the file system.
In the example, we use the vi editor:
root@mds:~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #-------- ------- -------- ------ ---- ------- ----------------------- /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - yes shared
In the /etc/vfstab file, delete the shared mount option from the mount options field in the last column of the file system entry. Then save the file and close the editor.
In the example, we use the vi editor:
root@mds:~# vi /etc/vfstab #File #Device Device Mount System fsck Mount Mount #to Mount to fsck Point Type Pass at Boot Options #-------- ------- -------- ------ ---- ------- ----------------------- /devices - /devices devfs - no - /proc - /proc proc - no - ... hqfs1 - /hsm/hqfs1 samfs - yes :wq root@mds:~#
Delete the file /etc/opt/SUNWsamfs/hosts.file-system-name.
Tell the Oracle HSM software to re-read the mcf file and reconfigure itself accordingly:
root@mds:~# samd config
Configuring SAM-FS ...
root@mds:~#
Mount the file system.
root@mds:~# mount /hsm/hqfs1
root@mds:~#
Stop here.