Oracle® Files Administrator's Guide
Release 2 (9.0.4)

Part Number B10872-02
9 Maintenance and Tuning

This chapter provides important information about ongoing system maintenance, tuning, and recovery. As with any production system, your implementation of Oracle Files should include a basic disaster recovery plan. To manage the growth of content in an Oracle Files instance, the system provides some unique capabilities, such as LOB (Large Object) file archiving.

This chapter includes the following topics:

  • Backup and Recovery

  • LOB (Large Objects) Management

  • Performance Tuning

  • Analyzing Performance Problems

Backup and Recovery

Planning for failures is one of the most important jobs of any system administrator or DBA. Implement a daily or weekly backup plan that meets the needs of your business and operations environment, and take advantage of the backup capabilities built into the Oracle database.

Always back up the system before upgrading, migrating new data, or making other major changes. See Oracle9i Backup and Recovery Concepts for additional information.


Note:

In addition to the Oracle Files schema, there are three "special" schemas that ensure secure connectivity to other systems. When you back up your system, make sure to include these schemas.

The special schema names are derived from the Oracle Files schema name. For example, if the Oracle Files schema name is IFSSYS, the additional schemas are IFSSYS$CM, IFSSYS$DR, and IFSSYS$ID.
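For illustration, the naming rule can be sketched in a few lines of Python (the helper function is hypothetical, not part of Oracle Files):

```python
def special_schemas(files_schema):
    # The three connectivity schemas append fixed suffixes
    # to the Oracle Files schema name.
    return [files_schema + suffix for suffix in ("$CM", "$DR", "$ID")]

print(special_schemas("IFSSYS"))  # ['IFSSYS$CM', 'IFSSYS$DR', 'IFSSYS$ID']
```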


LOB (Large Objects) Management

Oracle Files data consists of content and metadata. Most of the data stored in Oracle Files is content, which is stored as LOBs (Large Objects) in database tablespaces. All documents are stored as Binary Large Objects (BLOBs), one of the LOB types provided by the database. See "Provide Adequate Storage to Improve Performance" for more information.

Through BLOB management, Oracle Files provides data archiving. Content in the Archive is periodically moved from BLOBs to BFILEs. The content remains accessible and is visible to the Subscriber Administrator.

Moving Content Off-line or Near-line

As the amount of stored content increases, it may become desirable to move content to a less expensive storage medium. BFILE support provides off-line and near-line storage.

Oracle Files provides transparent access to content stored as either a BLOB (online storage) or a BFILE (near-line storage). A BFILE is a read-only Oracle data type consisting of a directory object and a filename.

Moving Content to BFILEs

Oracle Files moves content from the Archive to BFILEs on a periodic basis. The following two agents control the behavior:

  • FilesArchiveFileToBFileAgent

  • FilesDelayedArchiveFileToBFileAgent

Base Path

The file that a BFILE points to is located in the database's $ORACLE_HOME/ifsbfiles/files_schema directory. On UNIX, you can configure the real location of the BFILEs using symbolic links; on Windows, you must use this BFILE base path.

Relative Path

From the base path, the FilesArchiveFileToBFileAgent and the FilesDelayedArchiveFileToBFileAgent append a relative path. The full path looks like this:

$ORACLE_HOME/ifsbfiles/files_schema/yyyy/dd/mm/hh/ifsbfile_id

ifsbfile_id is the file naming pattern that associates a unique ID to each piece of content.


Note:

If you are using Oracle Collaboration Suite Release 2 (9.0.4.1) or if you have downloaded and applied the Oracle Collaboration Suite Release 2 Patch Set 1 (9.0.4.2.0), the path appears as follows:
$ORACLE_HOME/ifsbfiles/files_schema/yyyy/dd/mm/hh/mm/ss/ifsbfile_id
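As a sketch of how the base path and relative path combine, assuming the base yyyy/dd/mm/hh layout shown above (the helper function and the sample ID are hypothetical):

```python
import os
from datetime import datetime

def bfile_path(oracle_home, files_schema, archived_at, ifsbfile_id):
    # Base path: $ORACLE_HOME/ifsbfiles/files_schema
    # Relative path: yyyy/dd/mm/hh, then the unique BFILE ID.
    return os.path.join(
        oracle_home, "ifsbfiles", files_schema,
        archived_at.strftime("%Y"), archived_at.strftime("%d"),
        archived_at.strftime("%m"), archived_at.strftime("%H"),
        ifsbfile_id)

print(bfile_path("/u01/app/oracle", "ifssys",
                 datetime(2003, 5, 8, 14, 0), "ifsbfile_42"))
# /u01/app/oracle/ifsbfiles/ifssys/2003/08/05/14/ifsbfile_42
```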

Use Symbolic Linking to Change the Storage Location (UNIX only)

You can create a symbolic link to connect the base file location to a different location, such as an HSM system or a tape backup system, rather than limit your BFILE storage to the database's $ORACLE_HOME.

Disabling the BFILE Agents

Some administrators may want to temporarily disable BFILE storage while they formulate an archival strategy and arrange for adequate disk space. To disable BFILE storage, you must ensure that the Oracle Files BFILE agents are not initially started by the node.

To do this, use the Oracle Enterprise Manager Web site to update the node configuration that is used by the node that runs these agents, as follows:

  1. From the Oracle9iAS Farm Home page, click the name of the application server on which Oracle Files is running. The Oracle9iAS Instance Home page appears, listing all the components running on the application server instance. The Oracle Files domain appears in the following format:

    iFS_db_host:port:db_service:files_schema
    
    
  2. Click the name of the Oracle Files domain. The Oracle Files home page appears, listing the Domain Controller and nodes that comprise the domain.

  3. Click Node Configurations, under the Configuration heading.

  4. On the Node Configurations page, click the name of the node configuration that is used by the node that runs the BFILE agents. The Edit page appears.

  5. Under the Servers heading, select the FilesArchiveFileToBFileAgent and click Edit.

  6. Clear Initially Started on the Edit Server page and click OK.

  7. Under the Servers heading, select the FilesDelayedArchiveFileToBFileAgent and click Edit.

  8. Clear Initially Started on the Edit Server page and click OK.

  9. Click OK on the Edit Node Configuration page.

  10. Return to the Oracle Files Home page and restart the affected node.

Performance Tuning

Performance is typically affected by network input/output (I/O), hard-disk drive I/O, memory (RAM) usage, or some combination of these and other factors. Adjusting one factor sometimes moves the bottleneck to a new location, so approach the tuning task in a logical manner.

The performance tips in this section cover the basics and include:

  • Run the Oracle Files analyze.sql Script Frequently

  • Provide Adequate Storage to Improve Performance

See the Oracle9i Database Performance Tuning Guide and Reference for complete information.

Run the Oracle Files analyze.sql Script Frequently

Oracle Files uses Oracle9i Database Server's Cost-Based Optimizer (CBO) to determine the most efficient way to execute SQL statements. For the CBO to work properly, the Oracle Files analyze.sql script should be run as part of regular Oracle Files operations, especially after large volume changes to the data, such as after users have loaded a large number of files into the instance. This script generates statistics about the distribution of data in Oracle Files so that the CBO can choose the most efficient way to execute SQL statements. For more information about the Cost-Based Optimizer, see the Oracle9i Database Performance Tuning Guide and Reference.

Run the script during non-busy periods to avoid impeding performance for users.

The analyze.sql script, which makes calls to the DBMS_STATS package, exports schema statistics to a backup table, so you can restore statistics later, if necessary, as discussed in "Restoring Prior Statistics" in the following section. To run the script, enter the following at the command line:

cd $ORACLE_HOME/ifs/files/admin/sql
sqlplus files_schema/password@connect_string @analyze.sql files_schema

This script may take a while to run, especially if Oracle Files contains a large number of documents.

Restoring Prior Statistics

Before gathering new statistics, the analyze.sql script exports backup statistics to the IFS_BACKUP_STATS table, marking the set of statistics with a timestamp. You can query the table for existing saved sets by executing this SQL statement:

SQL> select distinct statid from ifs_backup_stats;

This query returns a list of all statistics by statid (the date and time stamp). For example:

STATID
------------------------------
01-MAY-02 02:15.36
04-MAY-02 20:00.15
08-MAY-02 02:15.48
11-MAY-02 06:21.40
11-MAY-02 20:15.37

You can then restore the statistics from a day and time when you know performance was better. For example, if performance is worse after using the statistics from your 8:00 p.m. run of analyze.sql, you can restore the statistics from earlier that day using:

SQL> @import_backup_stats.sql user_name '08-MAY-02 06:21.40'

By restoring the statistics, you are directing the CBO to revert to the way it previously executed SQL statements.

Provide Adequate Storage to Improve Performance

The largest consumption of disk space occurs on the disks that actually contain the documents residing in Oracle Files, namely the Indexed Medias and Non-Indexed Medias tablespaces. This section explains how the documents are stored and how to calculate the amount of space those documents will require.

Document Storage and Sizing Issues

BLOBs provide transactional semantics much like the normal data stored in a database. To meet the criteria of transactional semantics, BLOBs must be broken down into smaller pieces, which are individually modifiable and recoverable. These smaller pieces are referred to as chunks. A chunk is a group of one or more sequential database blocks from a tablespace that contains a BLOB column.

Both database blocks and chunk information within those blocks (BlockOverhead) impose some amount of overhead for the stored data. BlockOverhead is presently 60 bytes per block and consists of the block header, the BLOB header, and the block checksum. Oracle Files configures its BLOBs to have a 32 K chunk size. As an example, assume that the DB_BLOCK_SIZE parameter of the database is set to 8192 (8 K). A chunk would require four contiguous blocks and impose an overhead of 240 bytes. The usable space within a chunk would be 32768-240=32528 bytes.

Each document stored in Oracle Files consists of an integral number of chunks. Using the previous example, a 500 K document will actually use roundup(512000/32528) = roundup(15.74) = 16 chunks. Sixteen chunks take up 16*32768 = 524288 bytes. The chunking overhead for storing this document is therefore 524288-512000 = 12288 bytes, which is 2.4 percent of the original document's size. The chunk size used by Oracle Files is set to optimize access times for documents. Note that small documents (less than one chunk) incur a greater disk space percentage overhead, since each must use at least a single chunk.

Another structure required for transactional semantics on BLOBs is the BLOB index. Each BLOB index entry can point to eight chunks of a specific BLOB object (NumLobPerIndexEntry = 8). Continuing the example, a 500 K document takes up 16 chunks, so two index entries are required for that object. Each entry takes 46 bytes (LobIndexEntryOverhead) and is stored in an Oracle B*Tree index, which in turn has its own overhead depending upon how fragmented that index becomes.

The last factor affecting BLOB space utilization is the PCTVERSION parameter used when creating the BLOB column. For information about how PCTVERSION works, please consult the Oracle9i SQL Reference.

Oracle Files uses the default PCTVERSION of 10 percent for the BLOB columns it creates. This reduces the possibility of "ORA-22924 snapshot too old" errors occurring in read consistent views. By default, a minimum of a 10 percent increase in chunking space must be added in to the expected disk usage to allow for persistent PCTVERSION chunks.

For large systems where disk space is an issue, set the PCTVERSION to 1 to reduce disk storage requirements. This may be done at any time in a running system with these SQL commands:

alter table odmm_contentstore modify lob (globalindexedblob) (pctversion 1);
alter table odmm_contentstore modify lob (intermediablob) (pctversion 1);
alter table odmm_contentstore modify lob (intermediablob_t) (pctversion 1);
alter table odmm_nonindexedstore modify lob (nonindexedblob2) (pctversion 1);

The steps for calculating BLOB tablespace usage are as follows:

  1. Calculate the available space per chunk: determine the number of blocks per chunk, then subtract the BlockOverhead (60 bytes for each block) from the chunk size.

  2. Divide the file size by the available space per chunk to get the number of chunks.

    chunks = roundup(FileSize/(ChunkSize-((ChunkSize/BlockSize) * BlockOverhead)))
    
    

    For example, if FileSize = 100,000, ChunkSize = 32768, Blocksize = 8192, and BlockOverhead = 60, then Chunks = roundup (100000 /(32768 - ((32768 / 8192) * 60)))= 4 Chunks.

  3. Calculate the amount of disk space for a file by multiplying the number of chunks by the chunk size and then multiplying that result by the PCTVERSION factor. Then add the BLOB index space: the number of index entries (chunks divided by NumLobPerIndexEntry, which is 8, rounded up) times LobIndexEntryOverhead (46 bytes).

    FileDiskSpaceInBytes = roundup(chunks*ChunkSize*PctversionFactor) +
    (roundup(chunks/NumLobPerIndexEntry) * LobIndexEntryOverhead)
    
    

    Continuing from the preceding example, chunks = 4, ChunkSize = 32768, PctversionFactor = 1.1, NumLobPerIndexEntry = 8, and LobIndexEntryOverhead = 46, so FileDiskSpaceInBytes = roundup(4 * 32768 * 1.1) + (roundup(4/8) * 46) = 144180 + 46 = 144226 bytes.

  4. Calculate the total disk space used for file storage by summing up the application of these formulas for each file to be stored in the BLOB.

    TableSpaceUsage = the total of FileDiskSpaceInBytes for all files stored
    
    

Oracle Files creates multiple BLOB columns. The space calculation must be made for each tablespace based upon the amount of content that will qualify for storage in each tablespace.
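The sizing steps above can be sketched as follows (a minimal illustration using the constants and the worked example from this section; the function names are not part of Oracle Files):

```python
import math

BLOCK_OVERHEAD = 60            # bytes of overhead per database block
NUM_LOB_PER_INDEX_ENTRY = 8    # chunks addressed by one BLOB index entry
LOB_INDEX_ENTRY_OVERHEAD = 46  # bytes per BLOB index entry

def chunks_for_file(file_size, chunk_size=32768, block_size=8192):
    # Steps 1 and 2: usable space per chunk, then divide and round up.
    usable = chunk_size - (chunk_size // block_size) * BLOCK_OVERHEAD
    return math.ceil(file_size / usable)

def file_disk_space(file_size, chunk_size=32768, block_size=8192,
                    pctversion_factor=1.1):
    # Step 3: chunk space scaled by the PCTVERSION factor,
    # plus the BLOB index entries.
    chunks = chunks_for_file(file_size, chunk_size, block_size)
    data = math.ceil(chunks * chunk_size * pctversion_factor)
    index = math.ceil(chunks / NUM_LOB_PER_INDEX_ENTRY) * LOB_INDEX_ENTRY_OVERHEAD
    return data + index

print(chunks_for_file(100000))   # 4
print(file_disk_space(100000))   # 144226
```

Summing file_disk_space over all files to be stored in a BLOB column gives the tablespace usage of step 4.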

Oracle Files Metadata and Infrastructure

The Oracle Files server keeps persistent information about the file system and its contents in database tables. These tables and their associated structures are stored in the Oracle Files Primary tablespace and are required to support both the file system and the various protocols and APIs that make use of it. Administering and planning this space is very similar to the equivalent operations on a normal Oracle database installation.

You should plan for approximately 6 K of overhead per document to be used from this tablespace, or about 2 percent of the overall content. If there is a significant amount of custom metadata, such as attributes, subclasses, or categories, this overhead will be much larger.

The initial disk space allocated for the primary tablespace is approximately 50 MB for a default installation. Of the 50 MB, 16 MB is actually used at the completion of installation. This includes instantiations for all required tables and indexes and the metadata required for the 700+ files that are loaded into Oracle Files as part of the installation. Different tables and indexes within this tablespace will grow at different rates depending on which features of Oracle Files get used in a particular installation.

Analyzing Performance Problems

After ensuring that you have run statistics properly and have enough free hard-disk space to support the tablespaces, you may still have performance problems. If so, you must determine whether the performance bottleneck is caused by the Oracle9i Database Server, by Oracle Files, or by other factors.

To isolate the problem, start by looking at which processes are running and how many resources they are consuming.

  1. Run top (on UNIX) or Task Manager (on Windows platforms) as you reproduce the problem.

  2. Determine whether a Java process, the Oracle shadow process, I/O, or a combination is the bottleneck during that time.

If the Database is the Bottleneck

If the bottleneck is the Oracle shadow process, use the Statspack utility to determine the SQL statement that is causing the largest number of buffer gets, and run Explain Plan on it.

If you see full table scans, that may be the cause of the problem: the optimizer may not be choosing an appropriate plan. Report that problem to Oracle Support; additional work must be done to isolate it.

For more information about the Statspack utility and Explain Plan, see the Oracle9i Database Performance Tuning Guide and Reference.

If the Java Processes are the Bottleneck

You may have too little memory. For example, if you see any java.lang.OutOfMemoryError errors in your log files, increase your Maximum Memory (Xmx) settings for that JVM. For more information about changing the Xmx setting, see the Oracle Files Planning Guide.

If users are complaining about poor response times, and top (on UNIX) or its equivalent (for example, Task Manager on Windows) shows a Java process running at 100 percent of a CPU for a minute or longer, the Xmx setting for Java may be too small.

  1. Turn on verbose garbage collection (verbosegc). To do this, edit the Java Command property of the node configuration. See Table 6-1, "Node Configuration Properties" for more information.

    In the node log file, output related to garbage collection appears as follows:

    GC[1] in 305 ms: (6144kb, 6% free) -> (14Mb, 61% free)
    
    

    GC[1] indicates a major garbage collection. GC[0] indicates a nursery space only garbage collection. In this example, the collection took 305 ms. At the start of the collection, heap size was 6144 kb, with 6 percent free. The heap expanded during collection to a 14 Mb heap with 61 percent free.

    • In JRE 1.3, major GCs show up as GC[1]. In other JVMs, they may show up as major GC.

    • A major GC occurs when the Garbage Collector has exhausted all available memory in the nursery, and has to go into the rest of the heap to reclaim memory.

  2. If major GCs generally occur more than once every 10 minutes (not just after startup), increase your Xmx settings for that JVM.

Obtaining Oracle Files Java Cache Statistics

If the bottleneck is an Oracle Files Java process, start by checking the percentage of cache hits for the Oracle Files service. Starting from the Oracle Enterprise Manager Web site Oracle9iAS Home page (http://host_name:1810):

  1. In the System Components list, click the name of the Oracle Files domain. The Oracle Files Home page appears, listing the components of the Oracle Files installation: Domain Controller, HTTP Node, and Node.

  2. Click the name of the node. The Node page appears, showing the current status of the node (Up or Down), the service running and its status, and a list of server objects running on this node.

  3. Click the name of the service. Typically, this will be IfsDefaultService. The Service page appears.

  4. Scroll to the Performance section and click Committed Data Cache Statistics. The Committed Data Cache Statistics page appears, showing Cache Size, Cache Puts, Cache Removes, Cache Lookups, and Cache Hits.

    The goal is to have a high percentage of Cache Hits; as much as 100 percent is possible. If the percentage of Cache Hits for the service is less than 98 percent, the size of the Committed Data Cache may be too small.

  5. To change the Cache settings for the current session, return to the previous page using the Back button in the browser or the previous link in the path displayed on the page. In the Configuration section, click Committed Data Cache Configuration.

  6. Proportionately increase all Cache settings (Cache Capacity, Normal Purge Trigger, Urgent Purge Trigger, Emergency Purge Trigger, Purge Target), and click Apply when you are finished.

    This will increase your memory usage on the middle-tier computer by approximately 3 KB per object. For example, if you increase cache capacity by 5000, your memory usage will grow by 15 MB.

  7. Run the test again, and observe the results.

To make the changes permanent, update the service configuration. See "Changing a Service Configuration" for more information.

Obtaining Oracle Files Connection Pool Statistics

For the Read-Only or Writable connection pool, increase the "Target max. number of connections" and "Absolute max. number of connections" if any of the following is true:

  • "Failed allocation" is greater than zero.

  • "Total Connections" exceeds "Target max. number of connections" by more than two.

  • "Deferred allocations" is greater than 5 percent and "Average time to allocate" is more than 10 milliseconds.

Each additional Target or Absolute connection will use approximately 8 MB per connection on the middle tier and 1 MB per connection on the database.

Oracle Files Connection Pool Tuning

For the ReadOnly or Writable Connection Pool, increase the "TargetSize" and "MaximumSize" if any of the following is true:

  • "FailedAllocationCount" is greater than zero.

  • "CurrentSize" exceeds "TargetSize" by more than two.

  • "DeferredAllocationCount" is greater than 5 percent of "ImmediateAllocationCount" and "AverageAllocationTime" is more than 10 milliseconds.

Note that each additional CurrentSize connection will use approximately 8 MB per connection on the middle tier and 1 MB per connection on the database.
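As a sketch, the three criteria above can be combined into a single check (the function is hypothetical; the statistic names are the ones listed in this section):

```python
def pool_needs_growth(stats):
    # Criterion 1: any failed allocations at all.
    if stats["FailedAllocationCount"] > 0:
        return True
    # Criterion 2: the pool has grown more than two past its target size.
    if stats["CurrentSize"] > stats["TargetSize"] + 2:
        return True
    # Criterion 3: deferred allocations exceed 5 percent of immediate
    # allocations AND average allocation time exceeds 10 ms.
    deferred_ratio = (stats["DeferredAllocationCount"]
                      / max(stats["ImmediateAllocationCount"], 1))
    return deferred_ratio > 0.05 and stats["AverageAllocationTime"] > 10
```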

The log file will have cache statistics similar to these:

Cache performance for S_LibraryObject cache
    CACHESIZE=409
    OBJECTCOUNT=409
    PUTCOUNT=818
    REMOVECOUNT=0
    FINDCOUNT=14617
    HITCOUNT=13949
    MISSCOUNT=668
    HITRATIO=0.9542997879181775
    MISSRATIO=0.04570021208182254

Cache performance for FolderPath cache
    CACHESIZE=15
    CacheSizeEstimate=15
    ACCESSSEQUENCE=599
    SequenceAtLastPurge=0
    PUTCOUNT=15
    REMOVECOUNT=0
    PURGECOUNT=0
    FINDCOUNT=557
    HITCOUNT=433
    MISSCOUNT=124
    HITRATIO=0.77737881508079
    MISSRATIO=0.22262118491921004

Cache performance for committed S_LibraryObjectData cache
    CACHESIZE=473
    CacheSizeEstimate=576
    ACCESSSEQUENCE=6821
    SequenceAtLastPurge=0
    PUTCOUNT=576
    REMOVECOUNT=0
    PURGECOUNT=0
    FINDCOUNT=27092
    HITCOUNT=26338
    MISSCOUNT=754
    HITRATIO=0.972168905950096   <=== THIS IS THE NUMBER TO WATCH
    MISSRATIO=0.02783109404990403

Cache performance for LibraryObject cache
    CACHESIZE=221
    OBJECTCOUNT=221
    PUTCOUNT=221
    REMOVECOUNT=0
    FINDCOUNT=1473
    HITCOUNT=1252
    MISSCOUNT=221
    HITRATIO=0.8499660556687033
    MISSRATIO=0.1500339443312967
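When reviewing these log excerpts, a small parser can recompute the hit ratio from the KEY=VALUE lines. This is a sketch, not an Oracle Files utility; the sample block below reuses the committed S_LibraryObjectData figures above:

```python
def parse_cache_stats(block):
    # Turn the KEY=VALUE lines of one cache block into a dict of floats.
    stats = {}
    for line in block.splitlines():
        key, sep, value = line.strip().partition("=")
        if sep:
            try:
                stats[key] = float(value)
            except ValueError:
                pass  # skip non-numeric lines such as the block title
    return stats

block = """CACHESIZE=473
FINDCOUNT=27092
HITCOUNT=26338
MISSCOUNT=754"""
stats = parse_cache_stats(block)
print(stats["HITCOUNT"] / stats["FINDCOUNT"])  # ~0.9722, the ratio to watch
```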