D Troubleshooting Oracle WebCenter Content

This appendix describes some problems that you might encounter when using Oracle WebCenter Content, and specifically Content Server, and provides information on methods and tools that can help with the troubleshooting process.

Note:

For details about troubleshooting Oracle WebCenter Content features such as Inbound Refinery, Content Tracker, Records, the WebCenter Content repository, Folios, and so forth, see Troubleshooting in Managing Oracle WebCenter Content.

This appendix includes the following sections:

D.1 Introduction to Troubleshooting Oracle WebCenter Content

This section provides guidelines and a process for using the information in this appendix. Following these guidelines and this process will help you focus your efforts and minimize the time you spend resolving problems.

Guidelines

When using the information in this appendix, Oracle recommends:

  • After performing any of the solution procedures in this appendix, immediately retry the failed task that led you to this troubleshooting information. If the task still fails when you retry it, perform a different solution procedure and then try the failed task again. Repeat this process until you resolve the problem.

  • Make notes about the solution procedures you perform, the symptoms you see, and the data you collect while troubleshooting. If you cannot resolve the problem using the information in this appendix and you must log a service request, these notes will expedite the process of solving the problem.

Process

Follow the process outlined in Table D-1 when using the information in this appendix. If the information in a particular section does not resolve your problem, proceed to the next step in the process.

Table D-1 Process for Using the Information in this Appendix

Step 1: Getting Started with Troubleshooting Basics for Oracle WebCenter Content

Get started troubleshooting Oracle WebCenter Content and the Content Server instance. The procedures in this section quickly address a variety of problems.

Step 2: Troubleshooting Oracle WebCenter Content Archiving

Perform problem-specific troubleshooting procedures for Content Server archiving issues. This section describes:

  • Possible causes of the problems

  • Solution procedures corresponding to each of the possible causes

Step 3: Using My Oracle Support for Additional Troubleshooting Information

Use My Oracle Support to get additional troubleshooting information. My Oracle Support provides access to several useful troubleshooting resources, including Knowledge Base articles and Community Forums and Discussions.

Step 4: Using My Oracle Support for Additional Troubleshooting Information

Log a service request if the information in this appendix and My Oracle Support does not resolve your problem. You can log a service request using My Oracle Support at https://support.oracle.com.

D.2 Getting Started with Troubleshooting Basics for Oracle WebCenter Content

This section describes various sources of detailed information that can be helpful in the troubleshooting process.

For information about Content Server logging, see Monitoring Oracle WebCenter Content Server.

D.2.1 Using Tracing

You can activate Content Server tracing to display detailed system information that may be very useful for troubleshooting and optimizing system performance.

D.2.1.1 Server-Wide Tracing

Server-wide tracing is used to view activities throughout the system. There are two ways to activate server-wide tracing.

Sometimes the exact cause of an issue is difficult to find. An error may appear in the log file, but it may not provide enough information about what went wrong. In such cases, event trap tracing allows you to specify keywords that the Content Server instance looks for as it writes tracing to the server output. If a keyword is found, all of the tracing in the buffer at that time is sent to a separate event tracing output file. For more information on event trap tracing, see the A-Team blog at http://www.ateam-oracle.com/caught-in-the-act/.

D.2.1.1.1 Activating Tracing From the Content Server Administration Interface

You can activate server-wide tracing from the Content Server administration interface.

To activate tracing from the Content Server administration interface:

  1. Choose Administration, then System Audit Information.

  2. Enable Full Verbose Tracing to see in-depth tracing for any active section that supports it.

  3. Specify the traces to activate.

  4. Click Update.

  5. Click View Server Output.

    Note:

    Tracing options are lost on system restart. To ensure your settings are retained after restarting the Content Server instance, enable Save before clicking Update.
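
When saved, the active tracing selections are written to the Content Server configuration file, IntradocDir/config/config.cfg. As a hedged sketch (the variable names and values shown here are assumptions and can vary by release), the persisted entries might look like:

    TraceSectionsList=requestaudit,systemdatabase
    TraceIsVerbose=false

Editing or removing these entries changes which trace sections remain active after a restart.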

D.2.1.1.2 Activating Tracing From an Applet

You can activate server-wide tracing from an applet:

To activate tracing from an applet:

  1. Start an administrative applet.

  2. Choose Options, then Tracing.

  3. Select Server tracing.

  4. Select the tracing sections to activate (or select all of them), and click OK.

The following tracing options are available. Additional tracing sections can be displayed in the list if components are added.

  • applet: This trace contains result sets from initialized applets, such as the Configuration Manager or User Admin.

  • archiver: This trace provides information about archiving activities, including the reading and writing of archiver data files and the time the activities were initiated and finished.

  • archiverlocks: This trace provides information about the locks put on files during archiving activities, including time initiated.

  • chunkedrequest: This trace displays the messages and headers that are created when large requests are 'chunked' into smaller requests.

  • docprofile: This trace displays the computation of content profiles, specifically the evaluation of the rules that determine which fields are labels, hidden, and so on.

  • encoding: This trace provides information about encoding transformations that have occurred and the activities where encoding occurred.

  • filelock: This trace displays information about short-term system locks put on directories (during activities like archiving, for example), with a focus on collisions that occur and timeouts.

  • filelonglock: This trace displays information about the creation, removal, and maintenance of long-term locks imposed by the system.

  • filequeue: This trace displays information about accesses to a file queue.

  • indexer: This trace displays information about index functions that occur when the database is updated, including the steps taken to update the index and the time elapsed for each step.

  • indexermonitor: This trace provides a brief summary of automatic index activities, including time started and ended.

  • indexerprocess: This trace displays information about a manually launched index process and indicates if the process terminated properly.

  • localization: This trace displays information about localization usage and activities.

  • mail: This trace describes mail sent by the Content Server instance.

  • pagecreation: This trace displays information about the creation of displayed pages, including the server thread and the time taken to generate the page.

  • requestaudit: This trace provides summary reports on service requests, including the elapsed time for the requests and the number of requests made. For more information, see the "Expanding on requestaudit – Tracing who is doing what…and for how long" blog.

  • scheduledevents: This trace provides a list of hourly or daily background scheduled events.

  • schema: This trace provides information about schema publishing (tables and views published as .js files) and caching (tables cached into Content Server memory).

  • searchquery: This trace displays information about recent searches, including the fields used to search on and the order of sorting for results.

  • socketrequests: This trace displays the date, time, and thread number of socket requests and the actions during the request.

  • system: This trace displays internal system messages, such as system socket requests and responses.

  • systemdatabase: This trace provides information about database activities, including queries executed, index updates, threads used, and time initiated.

  • transfermonitor: This trace displays information about the archiver and the batch file transfer activities.

  • userstorage: This trace describes the access of external user repositories, including what actions were taken during access.

  • workflow: This trace displays a list of metadata on content items going through workflow, including document title and revision number.

    Note:

    To facilitate international support, most tracing messages are in English and do not have translations.

D.2.1.2 Applet-Specific Tracing

For applet-specific tracing, the output goes to the browser Java console. To perform tracing by applet:

  1. Start the administration applet to be traced.
  2. Choose Options, then Tracing.
  3. Make your selections, and click OK. The output is directed to the browser Java console.

Figure D-1 Applet-Specific Tracing

D.2.2 Using Stack Traces

The stack trace enables you to see what threads are currently running in the Content Server instance. It is a useful troubleshooting tool that provides information about the threads and enables you to monitor Content Server processing.

For instructions to initiate a current stack trace for the Content Server instance, see Oracle WebLogic Server documentation.

D.2.3 Using the Environment Packager

The Environment Packager is a diagnostic tool. It creates a zip file of the targeted state directories, log files, and other component and resource directories.

To create an environment zip file:

  1. Log in to the Content Server instance as an administrator.
  2. Choose Administration, then Environment Packager.
  3. On the Environment Packager page, select which parts of the environment should be packaged.
  4. When you are ready to create the environment zip file, click Start Packaging. A message is displayed while the zip file is being built, with a link to the zip file. The packaging process may take several minutes. The zip file link will not be available until the process has finished.

    Note:

    The packaged zip is named server_environment_*.zip. While the Content Server instance builds the packaged zip file, it will be located in IntradocDir/vault/~temp. When the build of the zip file is complete, it is moved to IntradocDir/weblayout/groups/secure/logs/env.

D.2.4 Using the Content Server Analyzer

The Content Server Analyzer application enables you to confirm the integrity of the Content Server repository components, including the file system, database, and search index. It can also assist system administrators in repairing some problems that are detected in the repository components.

Using the Content Server Analyzer, system administrators can do the following:

  • Confirm the accuracy of synchronization between three important Content Server database tables (Revisions, Documents, and DocMeta).

  • Confirm that the dRevClassID and dDocName fields are consistent across all revisions of content items.

  • Determine if the file system (native and web-viewable file repositories) contains any duplicate or missing files.

  • Ensure the accuracy of synchronization between the search index and the file system.

  • Ensure the accuracy of synchronization between the search index and the Revisions database table.

  • Ensure that the file system contains all necessary files.

  • Remove duplicate files from the Content Server repository either permanently or provisionally by moving them into the logs/ directory.

  • Produce a general report on the state of content items in the Content Server repository.

The method to start the Content Server Analyzer depends on the operating system:

  • Windows: Choose Start, then Oracle Content Server, then instance, then Content Server Analyzer.

  • UNIX: Navigate to the DomainHome/ucm/cs/bin/ directory and run the Content Server Analyzer program.

These sections describe Content Server Analyzer tasks:

D.2.4.1 Accessing the Content Server Analyzer

To display the Content Server Analyzer, use one of the following methods:

  • Windows: Choose Start, then Programs, then Content Server, then instance, then Utilities, then Content Server Analyzer.

  • UNIX: Change to the DomainHome/ucm/cs/bin/ directory, type ./IdcAnalyze in a shell window, and press the RETURN key on your keyboard.

The Content Server Analyzer application is displayed.
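
For example, on a UNIX system the analyzer can be started from a shell as follows, using the directory given above:

    cd DomainHome/ucm/cs/bin
    ./IdcAnalyze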

D.2.4.2 Specifying a Custom Analyzer Log Directory

The logs/ directory is the default logging directory for the Content Server Analyzer. Analysis output files are written to this directory and extra files detected during a file system analysis process can be transferred here as well. Optionally, the default logs/ directory name and path can be changed as desired.

To customize the Analyzer log directory name and path:

  1. On the Content Server Analyzer: Configuration tab, place the cursor in the Analyzer log dir field.
  2. Enter the desired directory path. During the next analysis process, the Content Server Analyzer automatically creates the specified directory or directories in the DomainHome/ucm/cs/bin/ directory hierarchy.

D.2.4.3 Invoking the Analysis Process

To invoke the analysis process:

  1. On the Content Server Analyzer: Configuration tab, select the desired options by checking the corresponding check boxes.
  2. Click Start Analysis.

    Note:

    If this is the very first time the Content Server Analyzer has been run, the output files in the logs/ directory are automatically created. On subsequent analysis processes, a confirmation message asks whether to overwrite the existing log file.

  3. Click Yes to overwrite the existing log file. The Content Server Analyzer: Progress tab automatically opens. A completion message opens when all of the selected analysis processes are finalized.

    Note:

    If you click No, the analysis process is terminated and you are prompted to manually remove files from the logs/ directory before running the Content Server Analyzer again.

  4. Click OK.

    The results are displayed in the console area on the Progress tab.

D.2.4.4 Analyzing the Content Server Database

The Check RevClassIDs and Clean database options are used to check the integrity of the database columns. The available options enable users to examine the three tables that are used to store content item revision information (DocMeta, Documents, and Revisions). The DocMeta table is examined for extra entries that are not found in the Revisions table. Similarly, the Documents table is examined to verify that there are sufficient entries to correspond to the entries in the Revisions table.

Note:

The Check RevClassIDs and Clean database options are activated and selectable only when the Check database option is selected.

To analyze the Content Server database:

  1. On the Content Server Analyzer: Configuration tab, select the applicable options.
  2. Click Start Analysis.

    The results are displayed in the console area on the Content Server Analyzer: Progress tab. For information about the analysis procedure, see Invoking the Analysis Process.

D.2.4.5 Analyzing the Content Server Search Index

The Check search index and csIDCAnalyzeCleanIndex options are used to check the entries in the Revisions table to ensure that all of the documents that belong in the index are properly listed. Additionally, a check can be performed to ensure that there are no duplicate entries in the search index.

Note:

The csIDCAnalyzeCleanIndex option is activated and selectable only when the Check search index option is selected.

To analyze the Content Server search index:

  1. On the Content Server Analyzer: Configuration tab, select the applicable options.
  2. Click Start Analysis. (For information about the analysis procedure, see Invoking the Analysis Process.)

    The results are displayed in the console area on the Content Server Analyzer: Progress tab.

D.2.4.6 Viewing the Analysis Progress and Results

The Content Server Analyzer: Progress tab is displayed automatically when the Start Analysis button is clicked. The progress bars show when the Content Server Analyzer has completed processing the selected analysis options.

When the analysis process is complete, the results are displayed in the console area of the Progress tab. The results depend on which analysis options were selected. Figure D-2 shows the console-area results from selecting the database, search index, and file system options.

Note:

The Generate report option was not selected for this example. For an example of the generated status report, see Generating a Status Report.

Figure D-2 Example Console Display of Results

D.2.4.7 Generating a Status Report

The status report generated by the Content Server Analyzer provides statistics about the content items in the repository. The status report output is displayed in the console area of the Progress tab.

To generate a status report:

  1. On the Content Server Analyzer: Configuration tab, select Generate report.
  2. Click Start Analysis.

    When the analysis process is complete, the status report information is displayed immediately following the standard analysis results in the console area of the Content Server Analyzer: Progress tab.

D.2.4.8 Canceling the Status Report

The report generation feature can be suppressed after the analysis process has already started. To cancel the content item status report during the analysis process:

  1. During the analysis process, click Cancel on the Content Server Analyzer Application. You will be prompted about canceling after the current task is finished.
  2. Click Yes to suppress the status report.

    The status report is not included with the analysis results that are displayed in the console area of the Progress tab.

D.2.5 Using Debug Configuration Variables

The Content Server instance provides the debugging configuration variable IsDevelopmentEnvironment, which, when set, contributes additional diagnostic information. This variable is set in the Content Server instance's configuration file (IntradocDir/config/config.cfg) during installation and when the Content Server instance is updated. IsDevelopmentEnvironment does the following:

  • Defines whether the Content Server instance should run in debug mode.

  • Enables a trace of script errors. If used as a parameter to a service call, script error information can be added to the bottom of the displayed page.

The debugging configuration variable AlwaysReportErrorPageStackTrace, also set in the Content Server instance's configuration file (IntradocDir/config/config.cfg), specifies that whenever an error occurs, the stack trace is reported in the browser that displays the Content Server interface.
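
As a minimal sketch, enabling both variables in IntradocDir/config/config.cfg might look like the following (the boolean values shown are typical, and the Content Server instance must be restarted after editing):

    IsDevelopmentEnvironment=true
    AlwaysReportErrorPageStackTrace=true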

Note:

See Configuration Variables in Configuration Reference for Oracle WebCenter Content.

D.2.6 Analyzing HDA Files

WebCenter Content administrators may need to analyze .hda (HDA) files in the process of troubleshooting Content Server issues. A HyperData File (HDA) is used to define properties and tabular data in a simple, structured ASCII file format. It is a text file (which can be identified by the suffix .hda in a file name) that is used by Content Server to determine which components are enabled and disabled and where to find the definition files for that component. The HDA file format is useful for data that changes frequently because the compact size and simple format make data communication faster and easier for Content Server. Details about HDA structure and use are available in HDA Files in Developing with Oracle WebCenter Content.

One option for reading an HDA file is to add IsPageDebug=1 to a page URL. When IsPageDebug=1 is used, a small gray tab appears at the bottom right of the browser window. When you click the tab, it expands and you can choose several pieces of information to display.

  • idocscript trace: Displays the nested includes you previously could view with the ScriptDebugTrace=1 variable in the 10gR1 release of Oracle Universal Content Management.

  • initial binder: Displays the local data and result sets coming back from the service, just as you would by adding the &IsJava=1 option to a page URL. In this display, it formats the results in easy-to-read tables instead of raw HDA format.

  • final binder: Displays all of the local data and result sets after executing all of the includes for the display of the page (not just from the Service call).

  • javascript log: Reports on the JavaScript functions executed on the page and the time each takes.

For more information, see the Oracle Web page "What do you mean you don't read HDA?" at https://blogs.oracle.com/kyle/entry/what_do_you_mean_you.
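
For example, appending the parameter to a content information page URL might look like the following; the host, web root, and service parameters shown are placeholders for illustration only:

    http://<host>/cs/idcplg?IdcService=DOC_INFO&dID=<dID>&IsPageDebug=1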

D.3 Troubleshooting Oracle WebCenter Content Archiving

This section provides solutions to several common archiving issues for Content Server. Attempt the recommended solutions before contacting Oracle Support.

This section covers the following topics:

D.3.1 Importing Issues

This section covers the following topics:

D.3.1.1 File Extension Errors on Import System

Symptom

I am receiving errors on the importing system indicating that there are transfer and file extension problems with the documents.

Problem

The following errors were issued to the Archiver log:

Error: Event generated by user <user_name> at host <host_name>. File I/O error. Saving to file collection.hda. Write error.
Error: Import error for archive <archive_name> in collection <collection_name>: Content item <item_name> was not successfully checked in. The primary and alternate files must have different extensions.

Recommendation

The I/O error on the export side probably corrupted the batch file and is, in turn, causing the file extension error on the import side. Possible solutions include:

  • Open the batch file in a text editor and check for invalid data. Try deleting the exported collection.hda file and manually re-running the export/import function.

  • In the exporting server, open the applicable collection.hda file and look for the lines associated with the content items that caused the file extension error. Some of the revisions of these content items may have the native file in the vault location listed in the alternate file location. There might also be a format entry for the alternate file. Delete these lines and re-import the files.

  • Add an alternate extensions configuration setting to the Content Server configuration config.cfg file (IntradocDir/config/config.cfg) on the importing server:

    1. Open the IntradocDir/config/config.cfg file in a text editor.

    2. Locate the General Option Variables section.

    3. Enter the following configuration setting:

      AllowSamePrimaryAlternateExtensions=true

      This configuration setting allows checked in content items to use identical document extensions for both the alternate and primary files.

    4. Save and close the config.cfg file.

      Note:

      Although it probably is not necessary to add this configuration setting to the Content Server config.cfg file on the exporting server, it may be worthwhile to do so for general preventative measures.

    5. Restart the Content Server instance.

D.3.1.2 Selecting Specific Batch Files for Import

Question

How can I select and re-run specific batch files from the General tab of the Archiver utility without deleting the remaining files that are required for backup purposes?

Recommendation

The most efficient method would be to create a new collection, copy the desired archives to the new collection, and run the import from there.

D.3.1.3 Import Maps Do Not Work After Archive Import

Symptom

I configured a value map to change metadata values during the import on an archive collection. But after the transfer, the import maps do not work.

Problem

The metadata values didn't reflect the configured metadata value changes.

Recommendation

To ensure that metadata value changes are retained when the files are exported into an archive and then later imported from that archive, the value maps must be configured on both sides of the transfer process. This means that the same value map must be configured on both the source (exporting) server as well as the target (importing) server.

D.3.1.4 Identifying Imported Content Items From Archive

Question

Due to a system crash, I need to import content from the old archive into a new archive without changing the content information (metadata) of the documents. How can I preface each content item using a letter or number to indicate that all the documents with this designation are new imports (but actually originated from the old archive)?

Recommendation

The archived documents can be re-imported and appropriately marked to distinguish them from other imported content items by applying an import map using the Content ID metadata field. An import map allows you to configure how values are copied from one metadata field to another during import. To set up the import map, complete the following steps:

  1. On the Import Maps tab of the Archiver utility, click Edit in the Field Maps section.

  2. On the Edit Value Maps page, select All (leave the Input Value field blank).

  3. Select Content ID from the Field list.

  4. Enter X<$dDocName$> in the Output Value field.

    Where 'X' is the letter or number used to distinguish the re-imported content items and 'dDocName' is the database table field value for the document Content ID.

  5. Click OK.

After you re-import the archive, the letter or number used for 'X' should be added to the content ID of each content item. Be sure to configure the same value map on both the source (exporting) server and the target (importing) server. This ensures that the metadata value changes are retained when the files are imported from the archive.

D.3.1.5 Duplicate Content Items in Content Server

Symptom

When I try to check in or import a content item, the following error message is issued:

Content item already exists.

Recommendation

This error is issued when archiving is done between contribution servers that are using the same autonumbering scheme for content IDs. For example:

  • Content ID 003 is checked in to Content Server instance A and later archived to Content Server instance B. If a file is checked in to Content Server instance B and the next auto-generated number happens to be 003, the error occurs.

  • Content ID 005 is checked in to both Content Server instance A and Content Server instance B. If this same content item is archived from Content Server instance A to Content Server instance B, the error occurs.

Possible solutions include:

  • Set up an import value map that will add a prefix to the content ID of the imported files. For details, see Identifying Imported Content Items From Archive.

  • In each Content Server instance, use the System Properties utility to set up an automatic numbering prefix for checked-in content items (the equivalent configuration entries are sketched after these steps):

    1. Start the System Properties utility.

    2. Open the Options tab.

    3. Select Automatically assign a Content ID on check in.

    4. Enter the desired prefix in the Auto Name Prefix field.

    5. Click OK.

    6. Restart the Content Server instance.
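
The System Properties selections in the preceding steps are stored in the instance's IntradocDir/config/config.cfg file. A hedged sketch of the resulting entries (the prefix value is an example only) might look like:

    IsAutoNumber=true
    AutoNumberPrefix=serverB_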

D.3.1.6 Importing Archived Content to Proxied Server Fails

Symptom

I am trying to import content from an exported archive to my proxied Content Server instance, but the import fails.

Recommendation

For more information about Archiver problems, open and view the Archiver logs (accessible from the Content Server instance's Administration page). These logs provide the type of message along with more descriptive information about the logged messages.

For example, if the Archiver log indicates that an import problem involves a metadata field option value that is unavailable, information about configured option lists for metadata fields can be found on the Information Fields tab of the Configuration Manager utility (accessible from the Administration page).

Using this information, compare the option list for the problem metadata field on both the exporting and importing servers. If there are any differences, correct the option list on one of the servers so that both lists are identical. This resolves the unavailable option discrepancy.

D.3.1.7 No Importing Errors But Documents Are Missing

Symptom

When I run the import function, no errors are issued, but not all of the documents are being imported.

Problem

I exported 428 documents from the development server along with the configuration information (the metadata fields). Then, I transferred the archive to the main production server and ran the import. No errors were issued, so I thought everything had gone well. Unfortunately, when I searched the documents, I discovered that only 198 of the original 428 were actually imported.

Recommendation

Suggestions to resolve this problem include:

  • Make sure that all Microsoft Word documents are included in the search index.

    Particular versions of the search component do not include Microsoft Word documents with embedded links in the search index. Thus, these files will not be found in search queries.

    You can remove all embedded links from the affected documents or add the following configuration setting to the IntradocDir/config/config.cfg file:

    CheckMkvdkDocCount=true
    

    This configuration setting ensures that all Word files are included in the search index. However, only the metadata is included, not the full text.

  • Try exporting the original set of documents and ensure that the source files are deleted. Then re-import the archive that was just exported.

D.3.1.8 Errors About Invalid Choice List Values

Symptom

My imports are failing.

Problem

The system issues error messages indicating that there are invalid choice list values. I am currently using an option list in the Dependent Choice List applet to configure and control the values.

Recommendation

Apparently, a specific metadata taxonomy has been established for your option lists, such that some fields depend on each other. In this case, certain values in option lists are available based on the values selected in a previous option list. Unfortunately, when you use Archiver, the dependencies in your option lists conflict with the Content Server instance's capacity to work with custom metadata fields.

A workaround for the conflict involves using the Content Server instance's Configuration Manager utility rather than the Dependent Choice List applet. This requires that you enter the metadata fields and corresponding option list values on the Information Fields tab of Configuration Manager:

  1. Log in to the Content Server instance as an administrator.

  2. Choose Administration, then Configuration Manager.

  3. In the Configuration Manager window, select the Information Fields tab.

  4. Click the Add button and enter one of your metadata field names in the Add Custom Info Field dialog.

  5. Click OK.

  6. In the Add Custom Info Field window, complete the fields as appropriate.

  7. In the Option List Type field, choose the Select List Not Validated option.

    This option ensures that content whose specified value does not match one currently entered in the Use Option List field is nevertheless checked in with the specified value. The Use Option List field lists the name for the list of values a user may choose from for the specified field.

  8. Click OK.

  9. Click Update Database Design.

  10. Click Rebuild Search Index.

Use this method for the duration of your import process.

D.3.1.9 Import Fails Due to Missing Required Field

Symptom

I used the Archiver to export documents. Now, I'm trying to import them and the process fails.

Problem

When I try to import the previously exported documents, the Content Server issues an error indicating that the 'Company' metadata field is required.

Recommendation

You will need to use the Content Server instance's Configuration Manager utility to edit the Company field and make it a non-required field.

  1. Log in to the Content Server instance as an administrator.

  2. Choose Administration, then Configuration Manager.

  3. In the Configuration Manager window, select the Information Fields tab.

  4. Select the Company metadata field from the Field Info list.

  5. Click Edit.

  6. In the Edit Custom Info Field window, deselect Require Value.

  7. Click OK.

  8. Click Update Database Design.

  9. Click Rebuild Search Index.

You should now be able to successfully re-import the archive.

D.3.1.10 Changed Metadata Field Makes the Archiver Freeze During an Import

Symptom

Some of our product names have changed and we need to update one of the metadata fields in the affected documents. After exporting all the documents with the old product name metadata field, I then attempt to import the documents using the new product name metadata field. But, every time I try this, the Archiver processes only a portion of the total archiving task and then stops.

Problem

Once the Archiver freezes, I am unable to navigate the Content Server user interface and I must shut down all of the open browsers. Also, during the next five minutes after shutting down the browsers, I have no connectivity to the Content Server instance at all. After this five-minute interval, I can access the Content Server instance again.

In addition to this freezing problem, the following error message is issued:

Stream error (299) - SKIPPING

Recommendation

One or more processes seem to be interrupting the import. Possible solutions include the following:

D.3.1.10.1 Checking the Metadata Field Properties

The product name metadata field may not have been properly updated in Configuration Manager. Depending on the type of metadata field that the 'product name' is, changing the value could be the reason for the lock-up problem. Is the product name metadata field a (long) text field only or also an option list? If it is an option list, make sure that the new name value is a selection on the corresponding list.

  1. Log in to the Content Server instance as an administrator.

  2. Choose Administration, then Configuration Manager.

  3. In the Configuration Manager window, select the Information Fields tab.

  4. Select the product name metadata field from the Field Info list.

  5. Click Edit.

  6. In the Edit Custom Info Field window, if the Field Type value is Text or Long Text and Enable Option List is deselected, click OK or Cancel (this should not cause the lock-up problem).

    Otherwise,

    If Enable Option List is selected, then make sure that the new product name metadata field value is included as a selection on the corresponding list:

    1. Locate the Use Option List field and click Edit.

    2. Enter the new product name metadata field value in the Option List dialog.

    3. Click OK.

  7. Click OK again (on the Edit Custom Info Field window).

  8. Click Update Database Design.

  9. Click Rebuild Search Index.

D.3.1.10.2 Checking the Indexing Automatic Update Cycle

The lock-up problem may be due to the indexer's automatic update cycle. The error message indicates that the indexer is failing because it loses connectivity. Every five minutes, the indexer executes an automatic update cycle and could somehow be grabbing the index file and locking it. If so, it might be useful to disable the indexer's automatic update cycle while you run the import.

  1. Log in to the Content Server instance as an administrator.
  2. Choose Administration, then Repository Manager.
  3. In the Repository Manager window, select the Indexer tab.
  4. Click the Configure button in the Automatic Update Cycle section of the tab.
  5. In the Automatic Update Cycle window, deselect Indexer Auto Updates.
  6. Click OK.

    Note:

    Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.

D.3.2 Exporting Issues

This section covers the following topics:

D.3.2.1 Total Export Possible with Blank Export Query

Question

If I do not create an export query to define the content items to export, will the entire contents of my Content Server be exported?

Recommendation

Yes, test exports have confirmed that leaving the Export Query section blank (not defining an export query) will ensure that the Content Server contents are completely exported.

D.3.2.2 New Check-Ins and Batch File Transfers

Question

If I check some documents in to the Content Server after I have initiated a large export (but before it completes), will these documents be included in the export? Or, does the Archiver read the timestamp information and determine that the new files are more recent than those originally allocated for the export and not include them? Also, what happens to the archive export if the connection between the servers is interrupted or lost during the export?

Recommendation

When the export is initiated, Archiver runs a query on the system to build a list of the documents that are to be exported. This information is cached and used to build the export archive. Therefore, any new documents that are checked in during the export process will not be included even if they match the export query definition.

If the connection between servers is disrupted, the export process on the source server continues but the transfer to the target server stops. The source server accumulates a number of batch files. While waiting to transfer these files, the source server continues to ping the target server for a connection at regular intervals. When the connection is reestablished, the accumulated batch files are transferred to the target server.

If you have used an automated (replicated) transfer, the batch files and their associated content files are removed from the source Content Server. If you have used a manual transfer, the batch files and their associated content files remain in the source Content Server.

D.3.2.3 Exporting User Attributes

Question

How can I export users in an archive?

Recommendation

You can export a users.hda file, which contains the user attributes from the Users database table, as follows:

  1. Log in to the Content Server instance as an administrator.

  2. Choose Administration, then Archiver.

  3. In the Archiver window, select the Export Data tab.

  4. Click Edit in the Additional Data section.

  5. In the Edit Additional Data dialog, select Export User Configuration Information.

  6. Click OK.

D.3.2.4 Folder Archive Export Doesn't Work If Collections Table Has Many Records

Symptom

I use the folder archive export feature to move my website hierarchy created by Site Studio. Initially, I can export folders by using the Virtual Folder Administration Configuration page without any problem. However, as my website grows, this function does not work anymore. The following errors are issued during the export procedure:

Error <timestamp> Event generated by user '<user>' at host '<host_name>'. Referred to by http://<host>/intradoc-cgi/nph-idc_cgi.exe?IdcService=COLLECTION_GET_ADMIN_CONFIG. Unable to retrieve content. Unable to execute service method 'loadCollectionArchive'. (System Error: Unknown error.)
Error <timestamp> IdcAdmin: Event generated by user '<user>' at host '<host>'. Unable to obtain the console output. Unable to execute the service 'GET_SERVER_OUTPUT' on Content Server 'contribution'. Unable to receive request. Response from host has been interrupted. Read timed out.

There is also an out-of-memory error in the Content Server output console:

<timestamp> SystemDatabase#Thread-13: SELECT * FROM Collections, ColMeta WHERE Collections.dCollectionID=ColMeta.dCollectionID AND dParentCollectionID=564
java.lang.OutOfMemoryError
Reporting error on thread Thread-13 occurring at <timestamp>.
java.lang.OutOfMemoryError
java.lang.OutOfMemoryError

Problem

Depending on the size of the folder hierarchy that is being exported as an archive file, the default heap size value for the Java Virtual Machine (JVM) may not be adequate.

Recommendation

Modify the heap size setting in the application server to provide more heap memory for Content Server. For details, see the appropriate application server documentation.
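
For example, if the Content Server instance runs on Oracle WebLogic Server, one common approach is to raise the managed server's JVM heap through the USER_MEM_ARGS environment variable before startup. The following is a hedged sketch only; the file used (for example, a setUserOverrides.sh script in the domain's bin directory) and the heap values depend on your installation and available memory:

    # Assumed values; size the heap for your server hardware and folder hierarchy.
    USER_MEM_ARGS="-Xms2g -Xmx4g"
    export USER_MEM_ARGS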

After restarting the Content Server instance, the archive export function should work correctly again.

D.3.3 Transfer Issues

This section covers the following topics:

D.3.3.1 Transfer Stopped When Target Locked Up

Symptom

The automated transfer function stopped when the target server locked up.

Problem

After restarting the target server, the log file listed an error message stating that there was a problem with a security group and that this prevented the import on the target server.

Recommendation

In this case, the security group problem on the target server obviously must be corrected before the transfer can proceed. Two additional procedures that can help include:

D.3.3.1.1 Verifying and Testing the Outgoing Provider

Verifying and testing the outgoing provider ensures that it is set up and working properly:

  1. Log in to the source Content Server instance as an administrator.
  2. Go to the Administration page and click the Providers link.
  3. Click the Info link of the appropriate outgoing provider.
  4. Verify the information on the Outgoing Provider Information page.
  5. Return to the Providers page and click the Test link corresponding to the outgoing provider.

D.3.3.1.2 Restarting the Content Server

In some cases, after problems have been corrected on either the source or the target server, the source server may stop transferring or possibly the automation function no longer works. In either case, restarting the Content Server instance should resolve the problem.

D.3.3.2 Aborting/Deleting a Running Transfer

Question

I accidentally started transferring an excessively large file to the production Content Server instance. What is the most efficient way to stop the transfer process while it is running?

Recommendation

There are several methods to abort or delete a transfer, including:

D.3.3.2.1 Disabling the Outgoing Provider

The fastest method to abort a running transfer is to disable the source server's outgoing provider:

  1. Log in to the source Content Server instance as an administrator.
  2. Go to the Administration page and click the Providers link.
  3. Click the Info link of the appropriate outgoing provider on the Providers page.
  4. In the Outgoing Provider Information page, click the Disable button.

D.3.3.2.2 Deleting a Transfer from the Transfer To Tab

To delete a transfer from the Transfer To tab, complete the following steps:

  1. Log in to the source Content Server instance as an administrator.
  2. Choose Administration, then Archiver. The Archiver utility starts.
  3. Select Options, then Open Archive Collection.
  4. Select the applicable collection from the list.
  5. Click Open.
  6. In the Archiver window, select the source archive in the Current Archives list.
  7. Open the Transfer To tab.
  8. Click Remove in the Transfer Destination section.

    You are prompted to confirm the action.

  9. Click Yes.

D.3.3.2.3 Deleting an Automated Transfer

To delete an automated transfer from the Automation for Instance page:

  1. Log in to the source Content Server instance as an administrator.
  2. Choose Administration, then Archiver. The Archiver utility starts.
  3. Choose Options, then View Automation For Instance.
  4. In the Automation For Instance window, open the Transfers tab.
  5. Select the automated transfer to delete.
  6. Click Remove. The automated transfer is removed from the list.

D.3.3.3 Verifying the Integrity of Transferred Files

Question

What is the best approach to verify the integrity of the files that have been transferred between two servers? Obviously, the documents in the target Content Server instance should be identical to those in the source instance. I need to ensure that all documents were in fact transferred and if some were not transferred, I must determine which ones failed to transfer.

Recommendation

To ensure that the transferred documents are identical to those on the source server, two items can easily be checked.

  • The Revisions table:

    Specifically, match the contents of the dDocName and dRevLabel columns on both instances and check for discrepancies between them (a sample query is shown after this list).

  • The file system:

    Check the native file repository:

    (DomainHome/ucm/cs/vault/content_type)

    and web-viewable file repository:

    (DomainHome/ucm/cs/weblayout/groups/public/documents/content_type)

    on each server and check for discrepancies between them.
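
For the database check, a query such as the following can be used; this is a minimal sketch, to be run against each instance's Content Server schema so that the two result sets can be compared. It lists every revision by content ID and revision label:

    SELECT dDocName, dRevLabel
    FROM Revisions
    ORDER BY dDocName, dRevLabel;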

D.3.3.4 Transfer Process Is Not Working

Symptom

The transfer process is not setting up properly.

Recommendation

If the transfer process is not functioning correctly, check the outgoing provider on the source server and ensure that the information is correct. In particular, make sure that the server host name is correct and matches the HTTP server address (see the configuration example after the following steps).

To verify the server host name on the source server:

  1. Start the System Properties utility:

    ./SystemProperties

  2. Open the Internet tab.

  3. Note the HTTP server address setting.

  4. Go to the Administration page and click Providers.

  5. Click the Info link for the appropriate outgoing provider on the Providers page.

  6. In the Outgoing Provider Information page, check the server host name and make sure it corresponds exactly to the HTTP server address setting in System Properties.

  7. If the server host name setting is different from the HTTP server address, click the Edit button.

  8. Modify the Server Host Name setting as necessary.

  9. Click Update.

  10. Restart the Content Server instance.
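
For reference, the HTTP server address noted in step 3 corresponds to the HttpServerAddress entry in the IntradocDir/config/config.cfg file; the host name and port below are placeholders:

    HttpServerAddress=<host_name>:<port>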

D.3.4 Replication Issues

This section covers the following topic:

D.3.4.1 Stopping the Automatic Import Function

Question

How can I stop the automatic import function?

Recommendation

When content meets the specified criteria, the automatic importer is configured by default to perform an import every five minutes. However, there are two ways to disable the automatic import function:

D.3.4.1.1 Unregistering an Importer from the Replication Tab

To unregister an importer from the Replication tab:

  1. Log in to the source Content Server instance as an administrator.
  2. Choose Administration, then Archiver.

    The Archiver utility starts.

  3. Select the archive in the Current Archives list.
  4. Select the Replication tab.
  5. Click Unregister.

    The automatic import function is disabled from the selected archive.

D.3.4.1.2 Deleting a Registered Importer from the Automation for Instance Page

To delete a registered importer from the Automation for Instance page:

  1. Log in to the source Content Server instance as an administrator.
  2. Choose Administration, then Archiver. The Archiver utility starts.
  3. Choose Options, then View Automation For Instance.
  4. In the Automation For Instance window, open the Importers tab.
  5. Select the registered importer to delete.
  6. Click Remove.

    The registered importer is removed from the list.

D.3.5 Oracle Database Issues

This section covers the following topic:

D.3.5.1 Allotted Tablespace Exceeded

Symptom

I cannot transfer files. Every time I try to transfer files, I get 'max extents' error messages.

Problem

The following error messages (or similar) are issued:

IdcApp: Unable to execute query '<query_name>'. Error: ORA-01631: max # extents (50) reached in table <table_name>.
ORA-01631 max # extents (<text_string>) reached in table <table_name>.

Recommendation

When the Content Server instance creates its database tablespace, it only allocates 50 extents. As the database grows and is re-indexed, it uses more space (extents). Eventually, the 50 extents limit is exceeded. At some point in the transfer, one of your files tried to extend past the 'max extents' limit. In this case, try implementing one or more of the following solutions:

  • Look for weblayout queries that are excessively large, eliminate them, and retry your transfer.

  • Perhaps a Content Server user does not have the right permission grants (resource and connect) to the Content Server schema. That user must have the temporary tablespace and default tablespace set to the Content Server defaults.

  • If the system 'max extents' limit is less than the system maximum, you must increase the number of extents that are available. Refer to your Oracle Database documentation or ask your database administrator for the appropriate Oracle SQL command to increase the tablespace extents (a sample command is shown after this list).

  • You can optionally choose to re-create the database using larger initial, next, or percent-to-grow parameters for the tablespaces. In this case, it is advisable to set the initial extents and next extents to 1 MB. Set the percent-to-grow parameter (PCTINCREASE) to 0% to allow the tables to grow automatically on an as-needed basis.
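
For example, one possible command for raising the extent limit on an affected table is shown below. This is a hedged sketch only; confirm the exact command with your database administrator, and note that locally managed tablespaces handle extent allocation automatically:

    ALTER TABLE <table_name> STORAGE (MAXEXTENTS UNLIMITED);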

D.3.5.2 Slow Oracle WebCenter Content Performance with Oracle Database

Symptom

My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?

Recommendation

An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems, keeping database statistics updated is extremely important.

Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. To maximize the efficiency of the optimizer, it is strongly recommended that you update or re-create the statistics about the physical characteristics of a table and its associated indexes. For more information, see:

http://www.ateam-oracle.com/gathering-statistics-for-an-oracle-webcenter-content-database/
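
For example, statistics for the WebCenter Content schema can be refreshed with the DBMS_STATS package. The schema name below is an assumption (schemas created by the Repository Creation Utility typically use a prefix_OCS name); adjust it for your installation:

    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'DEV_OCS', cascade => TRUE);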

D.3.6 Miscellaneous Issues

This section covers the following topics:

D.3.6.1 Archiving Does Not Work With Shared File System

Symptom

I am trying to transfer between two Content Server instances with access to a shared file system but it is not working.

Recommendation

When transferring between Content Server instances on a shared file system, the mapped or mounted drive must be available to both Content Server instances. This means that the computers must be on and logged in as a user who has system access to both Content Server instances. Make sure that all of the following conditions are met:

  • Both computers are turned on.

  • Both computers are logged in as a user with system access to both Content Server file systems.

  • The shared drive has been properly mapped or mounted so the Content Server instance can 'see' it. Having network access to the computer is not sufficient.

D.3.6.2 Archiving Does Not Work Over Outgoing Provider

Symptom

I am trying to transfer between two Content Server instances over an outgoing provider but it is not working.

Recommendation

The Content Server instance that has an outgoing provider set up is considered the 'local' server, and the target Content Server instance for the outgoing provider is considered the 'proxied' server. Files are always transferred in the direction of the outgoing provider, from the local (source) instance to the proxied (target) instance.

It is possible that when the outgoing provider was added and defined for the source Content Server instance, the Proxied check box was selected. However, because the relative web root is the same for both Content Server instances, the outgoing provider is confused. The Proxied check box should be selected only if the target Content Server instance was installed as an actual proxy of the local (master) Content Server instance. This server option should not be selected if the relative web root is the same for both Content Server instances.

D.4 Using My Oracle Support for Additional Troubleshooting Information

You can use My Oracle Support (formerly MetaLink) to help resolve Oracle Fusion Middleware problems. My Oracle Support contains several useful troubleshooting resources, such as:

  • Knowledge base articles

  • Community forums and discussions

  • Patches and upgrades

  • Certification information

Note:

You can also use My Oracle Support to log a service request.

You can access My Oracle Support at https://support.oracle.com.