Oracle® Fusion Middleware Managing Oracle WebCenter Content, 11g Release 1 (11.1.1)

27 Managing PDF Watermark

A watermark is an image or text superimposed on selected pages in a PDF document. When enabled, the PDF Watermark component can apply a watermark at check-in (static watermark) or when a user requests to view or download a PDF document (dynamic watermark).

PDF Watermark can also add security features to PDF files as they are downloaded for viewing. Password security can be added, and the ability to print or copy the contents of the file can be enabled or disabled.

This chapter provides information about managing PDF Watermark:

27.1 Understanding PDF Watermark

This section discusses the following topics:

27.1.1 Types of Watermark

A static watermark is applied during content check-in as a follow-on step to the Inbound Refinery conversion. To select a watermark for content to be converted to PDF, enter a valid Watermark Template ID during check in. Only documents that Inbound Refinery converts to PDF can receive a static watermark. After a document receives a static watermark, all viewers of the document see the same watermark.

In the same way, content checked in by an automated process such as WebDAV or BatchLoader can also be given a static watermark, provided a valid Watermark Template ID is provided. For more information about creating templates and template IDs, see Section 27.1.2.

Dynamic watermarks are generated as needed when a user requests to view or download a PDF document. Dynamic watermarks can contain variable information (for example, the user name of the requesting user, or the date and time of download). For this reason, different users may see the same content with different watermarks. With dynamic watermarks, only the web layout form is watermarked. The original PDF file is unchanged in its vault location.

Dynamic watermarking is rules-based. If a request for a PDF document satisfies a pre-defined rule, the template associated with that rule is used to watermark a copy of the content before the copy is returned to the requesting user. System administrators define rules and set up specific conditions for determining which requested content gets a dynamic watermark.

For more information about specifying rules for dynamic watermarks, see Section 27.1.3.

The following kinds of watermarks can be used:

  • Text: Specifies the text, which can include metadata values for the content item and special keywords, such as $DATE$, that provide information about the content item at the point it is watermarked.

  • Image: An image in any of the supported bitmap (raster) formats.

  • Signature: If Electronic Signatures is enabled, a watermark can be created from the electronic signature metadata associated with a content item.

Details are specified for each type of watermark as well as its placement. One or more watermarks of any type can be used in a given document. Defined watermarks are stored in a template that is checked in with a content ID and default metadata values.
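The keyword mechanism for text watermarks can be pictured with a small sketch. This is illustrative only: $DATE$ is the keyword named in this chapter, while the token-expansion logic and the metadata field names (dDocAuthor) are assumptions for the example, not PDF Watermark's actual implementation.

```python
import datetime

def expand_watermark_text(template: str, metadata: dict) -> str:
    """Replace $KEY$ tokens in watermark text with metadata values.

    $DATE$ is the keyword mentioned in the documentation; the
    substitution scheme here is a simplified illustration.
    """
    values = dict(metadata)
    # Default the date keyword to the moment of watermarking.
    values.setdefault("DATE", datetime.date.today().isoformat())
    out = template
    for key, value in values.items():
        out = out.replace(f"${key}$", str(value))
    return out

print(expand_watermark_text(
    "Downloaded by $dDocAuthor$ on $DATE$",
    {"dDocAuthor": "jsmith", "DATE": "2013-02-28"}))
# prints: Downloaded by jsmith on 2013-02-28
```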

27.1.2 Templates

Whether a watermark is applied statically at check-in or dynamically when the PDF file is requested, the watermark information is stored in a template which includes information about the text or image watermark itself and any rules defined for its use.

Legacy schema templates are supported for watermarking, but any change to the template results in an upgrade to the new schema. Consequently, any template which is changed in Content Server version 11gR1 may not work correctly with older versions of Content Server.

For information about creating templates, see Section 27.2.3.

A template is checked into the content repository as a managed content item with default metadata and two additional metadata fields:

  • Watermark Template ID: The content ID (dDocName) for the template which can be assigned when the template is created. This is specified for static watermarks when the content item is checked in.

  • Watermark Template Type: A list of supported template types. Currently, a single option is provided, the default PDFW_Template.

In addition to these fields, other metadata values can be specified. This helps ensure that default values are provided for fields that require a value.

27.1.2.1 Template Security

A user password requires the user to provide a password to open and view the PDF. An owner password restricts the ability to change the PDF file or modify the security settings within the PDF file.

These security settings set access restrictions within the associated PDF file itself using PDF security. These access restrictions are independent of access restrictions to the content item defined by Content Server.

User/Owner passwords are encrypted in the PDF Watermark Template with a third-party encryption library. Encryption is performed automatically when the template is saved and decryption is performed automatically when the template is used for a watermark.

Oracle does not provide an encryption library. The bcprov-jdk14-138.jar library, downloadable from BouncyCastle.org, is a recommended third-party encryption library, but any library can be used. For information about specifying an encryption library, see Section 27.2.1.

Passwords in legacy templates are not encrypted until the template is saved. A template cannot be saved unless it is changed. Therefore, to encrypt template passwords, edit each legacy template and make a minor change before saving.

27.1.3 Dynamic Watermark Rules

Rules are used to determine which template is applied for dynamic watermarking. After creating a template, the rules for the template can be defined. The same template can be used for static or dynamic watermarking. Rules are used only for dynamic watermarking.

If a template has multiple rules, the rules are applied in the order listed. Rules should be ordered with the most specific tests earlier in the list, and more general ones after that. All rules must test positive for the watermark to be applied.

Within a rule, criteria can be set based on the values of selected metadata fields. For example, you can test the dDocAuthor field for specific authors or the dDocType for a specific type of document. The order in which you define criteria for a rule does not matter. All criteria must be true for the associated rule to test positive.
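The evaluation described above can be sketched in a few lines. This is a simplified model of the documented semantics, not PDF Watermark code: each rule tests positive only if all of its criteria match exactly (matching is case-sensitive), and a template's rules are evaluated in the order listed. The field names used are standard Content Server metadata fields chosen for illustration.

```python
def rule_matches(rule: dict, metadata: dict) -> bool:
    """A rule tests positive only if every one of its criteria matches.

    Matching is exact and case-sensitive, mirroring the rule-criteria
    behavior described in this chapter.
    """
    return all(metadata.get(field) == value
               for field, value in rule["criteria"].items())

def watermark_applies(template_rules: list, metadata: dict) -> bool:
    """Rules are evaluated in the order listed; all of a template's
    rules must test positive for its watermark to be applied."""
    return all(rule_matches(rule, metadata) for rule in template_rules)

rules = [
    {"criteria": {"dDocType": "Contract"}},
    {"criteria": {"dDocAuthor": "legal"}},
]
print(watermark_applies(rules, {"dDocType": "Contract", "dDocAuthor": "legal"}))
# prints: True
```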

27.1.4 PDF Optimization

PDFs that come from the PDF Converter may have been optimized for faster Web viewing. If a static watermark is applied to that content, the optimization is lost. Post-watermarking optimization requires a third-party optimizer which is not provided with PDF Watermark. To use optimization, a distiller engine/optimizer must be installed and fully operational. The chosen optimizer must be able to execute conversions with a command-line (for example, a script file or a .bat file).


Important:

A PDF optimizer is not provided with PDF Watermark. If using the Optimization feature, install a third-party distiller engine before use and verify it is fully operational. The optimizer must be able to execute conversions on a command-line (for example, a script file or a .bat file).


27.1.5 Watermark Placement

Options are available for placing text or image watermarks at the top (header), center, or bottom (footer) of the page or at a particular location on the page with X-Y coordinates. Multiple watermarks can be used on a given document.

In the image below, the reference point for each position is indicated by a point. The example image in each position shows the orientation relative to the associated reference point. Example text in each position shows one of the available horizontal alignment options (left, right, center). All text alignment options are available at each position.

[Figure: Reference points and orientation for watermark placement (watermark_orientation.gif)]

For standard placement, text and images reference a point at the top-center, middle-center, or bottom-center of the page. Images center horizontally around the associated reference point. With text, you can specify the horizontal alignment of the text with respect to this reference point (left, right, or center aligned).

For explicitly placed watermarks, the coordinates are in points, with each point equal to 1/72". The origin (0, 0) is the lower left corner of the page. For images, these coordinates specify the lower left corner of the image with the image extending up and to the right. For text, these coordinates specify the horizontal reference point for the alignment options, similar to the standard placement options.
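The point arithmetic for explicit placement is simple enough to show directly. The helper below is illustrative (it is not part of PDF Watermark); the 1/72-inch unit and lower-left origin come from the text above.

```python
POINTS_PER_INCH = 72  # explicit watermark coordinates are in points (1/72 inch)

def inches_to_points(inches: float) -> float:
    """Convert a distance in inches to PDF points."""
    return inches * POINTS_PER_INCH

# Place a watermark 1 inch from the left edge and 0.5 inch above the
# bottom of the page; the origin (0, 0) is the lower-left corner.
x = inches_to_points(1.0)
y = inches_to_points(0.5)
print(x, y)
# prints: 72.0 36.0
```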

27.2 Configuring PDF Watermark

This section describes tasks that are used to manage PDF Watermark.

27.2.1 Specifying the Classpath for an Encryption Library

The passwords defined in a template set corresponding passwords within the PDF file itself when the PDF file is rendered. The passwords stored in the template are encrypted using a third-party encryption library. A reference to an encryption library used with PDF Watermark must be provided for the Content Server to encrypt passwords stored in the template.

Oracle does not provide an encryption library for encrypting passwords. This procedure assumes the use of a third-party encryption library. One such library is bcprov-jdk14-138.jar which is downloadable from BouncyCastle.org, but any library can be used.

  1. Download an encryption library .jar file and paste it into the /shared/classes directory.

  2. From the Windows Start menu, choose Programs then Content Server-[Instance Name] then Utilities then System Properties.

  3. On the Systems Properties applet, open the Paths tab.

  4. Specify the following as the classpath:

    JAVA_CLASSPATH;$SHAREDDIR/classes/bcprov-jdk14-138.jar
    
  5. Click OK to save the changes and exit the System Properties utility.
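On installations managed without the Windows System Properties applet, the same classpath addition is made as a configuration entry. The following is a sketch only: it assumes the conventional intradoc.cfg configuration file and a JAVA_CLASSPATH entry, and mirrors the value shown in step 4; verify the exact entry name and file location against your installation's configuration reference.

```cfg
JAVA_CLASSPATH=$SHAREDDIR/classes/bcprov-jdk14-138.jar
```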

27.2.2 Starting PDF Watermark Administration

Templates, rules, and configuration for PDF Watermark are managed using the PDF Watermark Administration page.

  1. Choose Administration from the Main Menu.

  2. Click PDF Watermark Administration.

    The PDF Watermark Administration page opens.

27.2.3 Adding or Editing Templates

Define any required metadata field values before creating a template. After checking in the template, it can be modified as needed.

To define specific metadata fields for template check in:

  1. Choose Administration then PDF Watermark Administration from the Main Menu.

  2. On the PDF Watermark Administration page, click the Configuration tab.

  3. Highlight the Field Name and Value pair to define/edit and click Edit.

  4. On the Edit Default Value page, set the value to be applied for the Field name selected.

  5. Click Apply.

To add a template:

  1. Select the Templates tab on the PDF Watermark Administration page.

  2. To edit an existing template, select the template and click Edit. To create a new template, click Add.

  3. On the Add New/Edit Template page, name the template and assign it a meaningful content ID. The ID cannot be changed after the template is checked in.

  4. To add security to the template, click the Security tab.

    1. Select an encryption bit-depth for Security Level. The higher the number, the stronger the encryption. This encryption level applies to the passwords defined in the PDF file itself, not the watermark template. Oracle does not provide an encryption library for encrypting passwords stored in the template. For more information about adding an encryption library, see Section 27.2.1.

    2. To restrict access to the PDF file associated with the template, specify a User Password. Users are required to specify this password to view or download the PDF.

    3. To restrict the ability to change the PDF file or modify the security settings within the PDF file, specify an Owner Password.

      This password applies to the associated PDF file itself, not the current watermark template. Access to the template is governed by Content Server's security model.

    4. To prevent the user from printing any portion of the PDF file, set Print Allow to No. To permit printing at low resolution, select Degraded.

    5. To prevent the user from copying portions of the PDF file, set Copy Allow to No.

  5. Click OK when done.

27.2.3.1 Add or Edit a Text Watermark

To add or edit a text watermark:

  1. Open the Add New/Edit Template page and click the Text Watermark tab.

  2. To add a watermark, click Add. To edit an existing watermark, click Edit.

  3. On the Add New/Edit Text Watermark page, specify the Text to appear in the watermark. The text can include embedded symbols that are replaced by document information, such as the page count or the document name when the watermark is rendered.

  4. Select the Location on the page where the watermark appears.

  5. If Explicit is selected, specify the X-Coordinate and Y-Coordinate location for the watermark. Values are specified in points. Each point is 1/72 in. measured from the lower-left corner of the page.

  6. Specify the Rotation of text from 0 to 359 degrees, rotated counter clockwise.

  7. Select the horizontal Alignment at the specified location.

  8. Select the font, size, weight and color. Weight is disabled for those fonts that do not have an extended weight.

  9. Select whether to Layer the watermark Over or Under the content in the PDF (including other watermarks).

  10. Specify a Page Range. If left blank, the page range includes the entire document.

    Separate specific pages with commas (1,2,4). Separate the first and last pages in a page range with a colon (12:24). Use the keyword LAST to designate the last page without having to specify the actual page number. For example: 1,2,4,12:24,50:LAST.

  11. Specify a Page Range Modifier to watermark Even Pages Only or Odd Pages Only.

  12. Click OK when done.
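The page-range syntax described in step 10 can be sketched as a small parser. This is an illustration of the documented syntax (commas for individual pages, a colon for ranges, the LAST keyword), not the component's actual parsing code.

```python
def parse_page_range(spec: str, last_page: int) -> list:
    """Expand a page-range string such as '1,2,4,12:24,50:LAST' into a
    sorted list of page numbers. An empty spec means the entire
    document, per the documentation."""
    if not spec.strip():
        return list(range(1, last_page + 1))
    pages = set()
    for part in spec.split(","):
        # LAST stands in for the final page number of the document.
        part = part.strip().replace("LAST", str(last_page))
        if ":" in part:
            first, last = (int(p) for p in part.split(":"))
            pages.update(range(first, last + 1))
        else:
            pages.add(int(part))
    return sorted(pages)

print(parse_page_range("1,2,4,12:14", last_page=20))
# prints: [1, 2, 4, 12, 13, 14]
```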

27.2.3.2 Add or Edit an Image Watermark

Images must be checked in before use. To add or edit an image watermark:

  1. Open the Add New/Edit Template page and click the Image Watermark tab.

  2. To add a watermark, click Add. To edit an existing watermark, click Edit.

  3. On the Add New/Edit Image Watermark page, specify the Content ID of the image to use. The image can be any supported bitmap (raster) image format, such as GIF or JPG.

  4. Optionally specify a Scale Factor to preserve or modify the size of the image when it is used as a watermark. Without specifying a scale factor, images are rendered at 72 dpi.

    The Scale Factor is expressed as a percentage based on the following formula: default_dpi / image_dpi * 100%. For example, to maintain the size and resolution of a 300 dpi image, you calculate the Scale Factor as follows: 72 dpi / 300 dpi * 100% = 24% for a Scale Factor of 24.

  5. Select the Location on the page where the watermark appears.

  6. If Explicit is used for the location, specify the X-Coordinate and Y-Coordinate location for the watermark. Values are specified in points. Each point is 1/72 in. measured from the lower-left corner of the page.

  7. Select whether to Layer the watermark Over or Under the content in the PDF (including other watermarks).

  8. Specify a Page Range. If left blank, the page range includes the entire document.

    Separate specific pages with commas (1,2,4). Separate the first and last pages in a page range with a colon (12:24). Use the keyword LAST to designate the last page without having to specify the actual page number. The following page range includes all of these options: 1,2,4,12:24,50:LAST.

  9. Specify a Page Range Modifier to watermark Even Pages Only or Odd Pages Only.

  10. Click OK when done.
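The Scale Factor formula from step 4 is easy to check with a one-line helper (illustrative only; the function name is not part of the product):

```python
DEFAULT_DPI = 72  # without a Scale Factor, images are rendered at 72 dpi

def scale_factor(image_dpi: float) -> float:
    """Scale Factor (percent) = default_dpi / image_dpi * 100."""
    return DEFAULT_DPI / image_dpi * 100

# A 300 dpi image keeps its original printed size at a Scale Factor of 24.
print(round(scale_factor(300)))
# prints: 24
```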

27.2.3.3 Add or Edit an Electronic Signature Watermark

To add or edit an electronic signature watermark:

  1. Open the Add New/Edit Template page and click the Signature Watermark tab.

  2. To add a watermark, click Add. To edit an existing watermark, click Edit.

  3. On the Add New/Edit Signature Watermark page, specify the Label to appear in the watermark. The text can include embedded symbols that are replaced by document information, such as the page count or the document name when the watermark is rendered.

  4. Specify the Fields to use in the watermark as a comma-delimited list. The fields are the standard or user-defined fields from the Electronic Signatures table.

  5. Select the Location on the page where the watermark appears.

  6. If Explicit is used for the location, specify the X-Coordinate and Y-Coordinate location for the watermark. Values are specified in points. Each point is 1/72 in. measured from the lower-left corner of the page.

  7. Select the font, size, weight and color. Weight is disabled for those fonts that do not have an extended weight.

  8. Select whether to Layer the watermark Over or Under the content in the PDF (including other watermarks).

  9. Specify a Page Range. If left blank, the page range includes the entire document.

    Separate specific pages with commas (1,2,4). Separate the first and last pages in a page range with a colon (12:24). Use the keyword LAST to designate the last page without having to specify the actual page number. For example: 1,2,4,12:24,50:LAST.

  10. Specify a Page Range Modifier to watermark Even Pages Only or Odd Pages Only.

  11. Click OK when done.

27.2.4 Creating and Editing Rules

A template must be created before rules can be defined. For more information, see Section 27.2.3.

To add or edit a rule:

  1. Select the Rules tab on the PDF Watermark Administration page.

  2. To add a new rule, click Add. To change an existing rule, select a rule and click Edit.

  3. For a new rule, on the Add New/Edit Rule page, enter a Name that describes the way a rule is used. The name cannot be changed after saving. To change the name of a rule, delete the rule and create a new one.

  4. Select a Template ID to associate with the rule.

  5. To add a new criterion, click Add. To change an existing criterion, select the criterion and click Edit.

  6. For a new criterion, on the Add New/Edit Criteria page, select the Field Name. After saving, the field selection cannot be modified. To specify a different field name, create a new criterion.

  7. Specify the Value for the selected field. If the field chosen is a list, select a value from the options list.


    Note:

    Rules criteria are case-sensitive. The value must match the case of the returned value. For example, if the title you enter is "foobar" and the value returned is "FooBar", it is considered a mismatch and the rule fails.


  8. To save the criterion, click OK.

  9. To save the criteria for the current rule, click OK.

    Within a rule, the order of criteria does not matter. All criteria must be satisfied for the rule to apply.

  10. To change the order of the rules assigned to the template, select a rule and use the Move Up and Move Down buttons to change the position of the rule in the list.

    The order in which the rules are tested can be significant, depending on the criteria used. In general, order the rules with the most specific tests (number of criteria) high in the list, and the more general ones (fewer criteria) lower down.

  11. Click OK.

27.3 Watermarking Scenarios

This section discusses the following topics:

27.3.1 Static Watermarking Scenario

Content Server receives processed content from Inbound Refinery. Inbound Refinery must have PDF Converter installed, enabled, and configured to convert the necessary file formats into PDF. When the PDF file is presented, the watermark template selected during the content check in is applied.

Watermark elements are pre-defined in the template used to watermark the incoming PDF. The watermark is applied, and the watermarked document is delivered to requesters without regard for dynamic watermarking rules. Rules-based (dynamic) watermarking can also be applied in addition to a static watermark.

[Figure: Static watermarking process flow (flow1.gif)]

27.3.2 Dynamic Watermarking Scenario

When a Web-viewable PDF is requested by a user, a check is performed based on the defined rulesets to determine if a watermark is applied to the Web-viewable PDF delivered to the requester.

If a request for a PDF document satisfies a pre-defined rule, the template associated with that rule is used to watermark a copy of the content before the copy is returned to the requesting user.

[Figure: Dynamic watermarking process flow (flow2.gif)]

Important:

A PDF optimizer is not provided with PDF Watermark. If using the Optimization feature, install a third-party distiller engine before use and verify the optimizer is fully operational. The optimizer must be able to execute conversions on a command-line (for example, a script file or a .bat file).



28 Supported File Formats

Digital Asset Manager and Conversion products support a variety of file formats and conversion options, based on the type of file being converted and the method used for the conversion.

This section discusses these topics:

28.1 File Formats Converted by Outside In Technology

Outside In Technology is used by Inbound Refinery, PDF Export, and XML Converter.

28.1.1 Inbound Refinery

Inbound Refinery includes Outside In Image Export 8.3.2, which can be used for the following:

  • To create thumbnails of files. Thumbnails are small preview images of content.

  • To convert files to multi-page TIFF files, enabling users to view the files through standard web browsers with a TIFF viewer plugin.

28.1.2 PDF Conversion

PDF conversion includes Outside In, which can be used with WinNativeConverter on Windows to create PDF files of some content items. Outside In prints the files to PostScript using the native application, and the PostScript files are then converted to PDF using the configured PostScript distiller engine.

The Convert to PDF using Outside In option is selected on the Primary Web-Viewable Rendition page. When using this option, PDF conversion requires only a PostScript distiller engine.

28.1.2.1 PDF Export

PDF Export uses Outside In to export files directly to PDF without needing to first print to PostScript. By exporting to PDF directly, the use of a third-party distiller engine is not necessary.

28.1.3 XML Converter

Inbound Refinery includes Outside In XML Export and Search Export 8.1.9, which can be used to convert files to XML. This includes support for the following versions of the FlexionDoc and SearchML schemas:

  • FlexionDoc 5.1

  • SearchML 3.1

28.1.4 Outside In Technology

This section lists the file formats that can be converted using Outside In Technology for PDF, XML, and Inbound Refinery on either Windows or UNIX. The file formats are organized into the following categories:

28.1.4.1 Word Processing Formats

The following word processing file formats can be converted.

  • ANSI Text: 7 & 8 bit
  • ASCII Text: 7 & 8 bit
  • DEC WPS Plus (DX): Versions through 3.1
  • DEC WPS Plus (WPL): Versions through 4.1
  • DisplayWrite 2 & 3 (TXT): All versions
  • DisplayWrite 4 & 5: Versions through 2.0
  • DOS character set: All versions
  • EBCDIC: All versions
  • Enable: Versions 3.0, 4.0, and 4.5
  • First Choice: Versions through 3.0
  • Framework: Version 3.0
  • Hangul: Versions 97 and 2002
  • IBM FFT: All versions
  • IBM Revisable Form Text: All versions
  • IBM Writing Assistant: Version 1.01
  • JustSystems Ichitaro: Versions 4.x–6.x, 8.x–13.x, and 2004
  • JustWrite: Versions through 3.0
  • Legacy: Versions through 1.1
  • Lotus AMI/AMI Professional: Versions through 3.1
  • Lotus Manuscript: Version 2.0
  • Lotus Word Pro (non-Windows): Versions SmartSuite 97, Millennium, and Millennium 9.6 (text only)
  • Lotus Word Pro (Windows): Versions SmartSuite 96 and 97, Millennium, and Millennium 9.6
  • MacWrite II: Version 1.1
  • MASS11: Versions through 8.0
  • Microsoft Rich Text Format (RTF): All versions
  • Microsoft Word (DOS): Versions through 6.0
  • Microsoft Word (Mac): Versions 4.0–2004
  • Microsoft Word (Windows): Versions through 2007
  • Microsoft WordPad: All versions
  • Microsoft Works (DOS): Versions through 2.0
  • Microsoft Works (Mac): Versions through 2.0
  • Microsoft Works (Windows): Versions through 4.0
  • Microsoft Windows Write: Versions through 3.0
  • MultiMate: Versions through 4.0
  • Navy DIF: All versions
  • Nota Bene: Version 3.0
  • Novell Perfect Works: Version 2.0
  • Novell/Corel WordPerfect (DOS): Versions through 6.1
  • Novell/Corel WordPerfect (Mac): Versions 1.02 through 3.0
  • Novell/Corel WordPerfect (Windows): Versions through 12.0
  • Office Writer: Versions 4.0–6.0
  • OpenOffice Writer (Windows & UNIX): OpenOffice versions 1.1 and 2.0
  • PC-File Letter: Versions through 5.0
  • PC-File+ Letter: Versions through 3.0
  • PFS:Write: Versions A, B, and C
  • Professional Write (DOS): Versions through 2.1
  • Professional Write Plus (Windows): Version 1.0
  • Q&A (DOS): Version 2.0
  • Q&A Write (Windows): Version 3.0
  • Samna Word: Versions through Samna Word IV+
  • Signature: Version 1.0
  • SmartWare II: Version 1.02
  • Sprint: Versions through 1.0
  • StarOffice Writer: Versions 5.2 (text only) and 6.x–8.x
  • Total Word: Version 1.2
  • Unicode Text: All versions
  • UTF-8: All versions
  • Volkswriter 3 & 4: Versions through 1.0
  • Wang PC (IWP): Versions through 2.6
  • WordMARC: Versions through Composer Plus
  • WordStar (DOS): Versions through 7.0
  • WordStar (Windows): Version 1.0
  • WordStar 2000 (DOS): Versions through 3.0
  • XyWrite: Versions through III Plus


28.1.4.2 Desktop Publishing Formats

The following desktop publishing file formats can be converted.

  • Adobe FrameMaker (MIF): Versions 3.0, 4.0, 5.0, 5.5, and 6.0 and Japanese 3.0, 4.0, 5.0, and 6.0 (text only)


28.1.4.3 Database Formats

The following database file formats can be converted.

  • Access: Versions through 2.0
  • dBASE: Versions through 5.0
  • DataEase: Version 4.x
  • dBXL: Version 1.3
  • Enable: Versions 3.0, 4.0, and 4.5
  • First Choice: Versions through 3.0
  • FoxBase: Version 2.1
  • Framework: Version 3.0
  • Microsoft Works (Windows): Versions through 4.0
  • Microsoft Works (DOS): Versions through 2.0
  • Microsoft Works (Mac): Versions through 2.0
  • Paradox (DOS): Versions through 4.0
  • Paradox (Windows): Versions through 1.0
  • Personal R:BASE: Version 1.0
  • R:BASE 5000: Versions through 3.1
  • R:BASE System V: Version 1.0
  • Reflex: Version 2.0
  • Q & A: Versions through 2.0
  • SmartWare II: Version 1.02


28.1.4.4 Spreadsheet Formats

The following spreadsheet file formats can be converted.

  • Enable: Versions 3.0, 4.0, and 4.5
  • First Choice: Versions through 3.0
  • Framework: Version 3.0
  • Lotus 1-2-3 (DOS & Windows): Versions through 5.0
  • Lotus 1-2-3 (OS/2): Versions through 2.0
  • Lotus 1-2-3 Charts (DOS & Windows): Versions through 5.0
  • Lotus 1-2-3 for SmartSuite: Versions 97–Millennium 9.6
  • Lotus Symphony: Versions 1.0, 1.1, and 2.0
  • Mac Works: Version 2.0
  • Microsoft Excel Charts: Versions 2.x–7.0
  • Microsoft Excel (Mac): Versions 3.0–4.0, 98, 2001, 2002, 2004, and v.X
  • Microsoft Excel (Windows): Versions 2.2 through 2007
  • Microsoft Multiplan: Version 4.0
  • Microsoft Works (Windows): Versions through 4.0
  • Microsoft Works (DOS): Versions through 2.0
  • Microsoft Works (Mac): Versions through 2.0
  • Mosaic Twin: Version 2.5
  • Novell Perfect Works: Version 2.0
  • PFS:Professional Plan: Version 1.0
  • Quattro Pro (DOS): Versions through 5.0 (text only)
  • Quattro Pro (Windows): Versions through 12.0 (text only)
  • SmartWare II: Version 1.02
  • StarOffice/OpenOffice Calc (Windows and UNIX): StarOffice versions 5.2–8.x and OpenOffice versions 1.1 and 2.0 (text only)
  • SuperCalc 5: Version 4.0
  • VP Planner 3D: Version 1.0


28.1.4.5 Presentation Formats

The following presentation file formats can be converted.

  • Corel/Novell Presentations: Versions through 12.0
  • Harvard Graphics (DOS): Versions 2.x and 3.x
  • Harvard Graphics (Windows): Windows versions
  • Freelance (Windows): Versions through Millennium 9.6
  • Freelance (OS/2): Versions through 2.0
  • Microsoft PowerPoint (Windows): Versions 3.0–2007
  • Microsoft PowerPoint (Mac): Versions 4.0–v.X
  • StarOffice/OpenOffice Impress (Windows and UNIX): StarOffice versions 5.2 (text only) and 6.x–8.x (full support) and OpenOffice versions 1.1 and 2.0 (text only)


28.1.4.6 Graphic Formats

The following graphic file formats can be converted.

  • Adobe Photoshop (PSD): All versions
  • Adobe Illustrator: Versions 7.0 and 9.0
  • Adobe FrameMaker graphics (FMV): Vector/raster through 5.0
  • Adobe Acrobat (PDF): Versions 1.0, 2.1, 3.0, 4.0, 5.0, 6.0, and 7.0 (including Japanese PDF)
  • Ami Draw (SDW): Ami Draw
  • AutoCAD Interchange and Native Drawing formats (DXF and DWG): AutoCAD Drawing versions 2.5–2.6, 9.0–14.0, 2000i, and 2002
  • AutoShade Rendering (RND): Version 2.0
  • Binary Group 3 Fax: All versions
  • Bitmap (BMP, RLE, ICO, CUR, OS/2 DIB & WARP): All versions
  • CALS Raster (GP4): Type I and Type II
  • Corel Clipart format (CMX): Versions 5–6
  • Corel Draw (CDR): Versions 3.x–8.x
  • Corel Draw (CDR with TIFF header): Versions 2.x–9.x
  • Computer Graphics Metafile (CGM): ANSI, CALS NIST version 3.0
  • Encapsulated PostScript (EPS): TIFF header only
  • GEM Paint (IMG): All versions
  • Graphics Environment Mgr (GEM): Bitmap and vector
  • Graphics Interchange Format (GIF): All versions
  • Hewlett Packard Graphics Language (HPGL): Version 2
  • IBM Graphics Data Format (GDF): Version 1.0
  • IBM Picture Interchange Format (PIF): Version 1.0
  • Initial Graphics Exchange Spec (IGES): Version 5.1
  • JBIG2: JBIG2 graphic embeddings in PDF files
  • JFIF (JPEG not in TIFF format): All versions
  • JPEG (including EXIF): All versions
  • Kodak Flash Pix (FPX): All versions
  • Kodak Photo CD (PCD): Version 1.0
  • Lotus PIC: All versions
  • Lotus Snapshot: All versions
  • Macintosh PICT1 & PICT2: Bitmap only
  • MacPaint (PNTG): All versions
  • Micrografx Draw (DRW): Versions through 4.0
  • Micrografx Designer (DRW): Versions through 3.1
  • Micrografx Designer (DSF): Windows 95, version 6.0
  • Novell PerfectWorks (Draw): Version 2.0
  • OS/2 PM Metafile (MET): Version 3.0
  • Paint Shop Pro 6 (PSP): Windows only, versions 5.0–6.0
  • PC Paintbrush (PCX and DCX): All versions
  • Portable Bitmap (PBM): All versions
  • Portable Graymap (PGM): No specific version
  • Portable Network Graphics (PNG): Version 1.0
  • Portable Pixmap (PPM): No specific version
  • Postscript (PS): Levels 1–2
  • Progressive JPEG: No specific version
  • Sun Raster (SRS): No specific version
  • StarOffice/OpenOffice Draw (Windows and UNIX): StarOffice versions 5.2–8.x and OpenOffice versions 1.1 and 2.0 (text only)
  • TIFF: Versions through 6
  • TIFF CCITT Group 3 & 4: Versions through 6
  • Truevision TGA (TARGA): Version 2
  • Visio (preview): Version 4
  • Visio: Versions 5, 2000, 2002, and 2003
  • WBMP: No specific version
  • Windows Enhanced Metafile (EMF): No specific version
  • Windows Metafile (WMF): No specific version
  • WordPerfect Graphics (WPG & WPG2): Versions through 2.0
  • X-Windows Bitmap (XBM): x10 compatible
  • X-Windows Dump (XWD): x10 compatible
  • X-Windows Pixmap (XPM): x10 compatible


28.1.4.7 Compressed Formats

The following compressed file formats can be converted.

  • GZIP
  • LZA Self Extracting Compress
  • LZH Compress
  • Microsoft Binder: Versions 7.0–97 (conversion of files contained in the Binder file is supported only on Windows)
  • UUEncode
  • UNIX Compress
  • UNIX TAR
  • ZIP: PKWARE versions through 2.04g


28.1.4.8 E-mail Formats

The following e-mail file formats can be converted.

  • Microsoft Outlook Folder (PST): Microsoft Outlook Folder and Microsoft Outlook Offline Folder files versions 97, 98, 2000, 2002, and 2003
  • Microsoft Outlook Message (MSG): Microsoft Outlook Message and Microsoft Outlook Form Template versions 97, 98, 2000, 2002, and 2003
  • MIME: MIME-encoded mail messages. For details, see the section "MIME Support Notes."


MIME Support Notes

The following is detailed information about support for MIME-encoded mail message formats:

  • MIME formats, including:

    • EML

    • MHT (Web Archive)

    • NWS (Newsgroup single-part and multi-part)

    • Simple Text Mail (defined in RFC 2822)

  • TNEF Format

  • MIME encodings, including:

    • base64 (defined in RFC 1521)

    • binary (defined in RFC 1521)

    • binhex (defined in RFC 1741)

    • btoa

    • quoted-printable (defined in RFC 1521)

    • utf-7 (defined in RFC 2152)

    • uue

    • xxe

    • yenc

Additionally, the body of a message can be encoded in several ways. The following encodings are supported:

  • Text

  • HTML

  • RTF

  • TNEF

  • Text/enriched (defined in RFC1523)

  • Text/richtext (defined in RFC1341)

  • Embedded mail message (defined in RFC 822). This is handled as a link to a new message.

The attachments of a MIME message can be stored in many formats. All supported attachment types are processed.
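As a minimal illustration of the transfer encodings listed above, Python's standard email package can parse a MIME message and reverse its Content-Transfer-Encoding. The message below is invented for the example and is not taken from the product:

```python
from email import message_from_string

# A minimal single-part MIME message with a base64-encoded text body.
# The addresses and content are invented for this example.
raw = (
    "From: sender@example.com\r\n"
    "To: recipient@example.com\r\n"
    "Subject: Greeting\r\n"
    "MIME-Version: 1.0\r\n"
    "Content-Type: text/plain; charset=utf-8\r\n"
    "Content-Transfer-Encoding: base64\r\n"
    "\r\n"
    "SGVsbG8sIHdvcmxkIQ==\r\n"
)

msg = message_from_string(raw)
# get_payload(decode=True) reverses the Content-Transfer-Encoding
# (base64 here; quoted-printable works the same way).
body = msg.get_payload(decode=True).decode("utf-8")
print(body)  # Hello, world!
```

For multipart messages, walking `msg.walk()` visits each body part and attachment in turn, which mirrors how all supported attachment types are processed.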

28.1.4.9 Other Formats

The following other file formats can be converted.

File formats and versions:

Executable (EXE, DLL)


HTML

Versions through 3.0 (with some limitations)

Macromedia Flash

Macromedia Flash 6.x, Macromedia Flash 7.x, and Macromedia Flash Lite (text only)

Microsoft Project

Versions 98–2003 (text only)

MP3

ID3 information

vCard, vCalendar

Version 2.1

Windows Executable


WML

Version 5.2

XML

Text only

Yahoo! Instant Messenger

Versions 6.x and 7.x


28.2 File Formats Converted to PDF Using Third-Party Applications

When running on Windows, Inbound Refinery can use several third-party applications to create PDF files of content items. In most cases, a third-party application that can open and print the file is used to print the file to PostScript, and then the PostScript file is converted to PDF using the configured PostScript distiller engine. In some cases, Inbound Refinery can use a third-party application to convert a file directly to PDF.

The Convert to PDF using third-party applications option is selected on the Primary Web-Viewable Rendition page. When using this option, Inbound Refinery requires the relevant third-party applications and a configured PostScript distiller engine.

The following table lists the common file formats that can be converted to PDF using third-party applications on Windows. Please note the following important considerations:

* Adobe FrameMaker+SGML is not supported. Adobe FrameMaker .book files are not supported.

** Adobe Photoshop CS2 requires manual configuration.


Important:

Review the installation tips and recommended settings for these third-party applications before using them with Inbound Refinery.


28.3 File Formats Converted to PDF by Open Office

When running on either Windows or UNIX, Inbound Refinery can use OpenOffice to convert some file types directly to PDF. The Convert to PDF using OpenOffice option is selected on the Primary Web-Viewable Rendition page.

When using this option, Inbound Refinery requires only OpenOffice and not a third-party distiller engine.

The following table lists the common file formats that can be converted to PDF using OpenOffice on either Windows or UNIX. Please note the following important considerations:


9 Categorizing and Linking Content

Content Categorizer and Link Manager are optional components automatically installed with Oracle WebCenter Content Server. When enabled, Content Categorizer suggests metadata values for new documents checked in to Content Server, and for existing documents that may or may not already have metadata values. When Link Manager is enabled, it evaluates, filters, and parses the URL links of indexed content items, and then extracts them for storage in a database table (ManagedLinks).

9.1 Using Content Categorizer

For Content Categorizer to recognize structural properties, the content must first be converted to XML (eXtensible Markup Language). The conversion method is defined in the sccXMLConversion configuration variable. Content Categorizer uses search rules to suggest metadata values for content.

The Batch Categorizer that is included with the component can search a large number of files and create a Batch Loader control file containing appropriate metadata field values. The Batch Categorizer can also be used to recategorize content checked in to the repository.

9.1.1 XML Conversion


Important:

There is a problem with the XSLT transformation used to post-process PDF content converted using the Flexiondoc schema. When the Flexiondoc schema is used, single words are assigned to individual XML elements, making the final XML unusable. It is necessary to use SearchML for categorizing PDF content.


Regardless of which XML converter is specified, the XML intermediate files are used only by Content Categorizer, so they are discarded after use, and documents are checked in to Content Server in their original source form. The only exception is content that is in XML format, which is not subjected to the translation process.

With each converter, the OutsideIn XML Export technology is used in combination with a custom XSLT style sheet (flexiondoc_to_scc.xsl) to produce XML in a two-stage process. In the first stage, the native document is converted to either Flexiondoc-formatted XML or SearchML-formatted XML.

In the second stage, the style sheet is used to further refine the XML so that it is searchable by Content Categorizer. Native document properties and text segments are isolated in XML elements, which are named after the corresponding document property, paragraph style, or character style (note that character styles are not supported by SearchML).
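The second stage can be pictured with a small sketch. The element and style names below are hypothetical illustrations, not the actual Flexiondoc or SearchML schema: each paragraph element is renamed after its style so that style-based search rules can target it directly:

```python
import xml.etree.ElementTree as ET

# Hypothetical stage-one output: paragraphs tagged with a style
# attribute (an illustration only, not the real Flexiondoc schema).
stage1 = ET.fromstring(
    '<doc><p style="Title">Annual Report</p>'
    '<p style="Body">Revenue grew this year.</p></doc>'
)

# Stage two: rename each element after its paragraph style, which is
# what makes the resulting XML searchable by Content Categorizer rules.
stage2 = ET.Element("doc")
for p in stage1:
    ET.SubElement(stage2, p.get("style")).text = p.text

print(ET.tostring(stage2, encoding="unicode"))
```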

For a list of file formats supported by OutsideIn XML Export, see Chapter 40, "Input File Formats."

9.1.2 Search Rules Overview

Content Categorizer executes search rules depending on the type of rule defined:

  • Pattern Matching and Abstract Rules: Content Categorizer scans a content document looking for "landmarks." A landmark can be specific text, or it can be based on structural properties of the source document, such as styles, fonts, and formatting.

  • Option List Rule: Content Categorizer searches for keywords whose cumulative score determines which option of a list is selected. It does not look for either landmarks or specific XML tags.

  • Categorization Engine Rule: Content Categorizer invokes a third-party categorizer engine and taxonomy to categorize a content item.

  • Filetype Rule: Content Categorizer looks for the document file type (the file name extension).

Normally, a user-entered value on the Content Check In Form prevents Content Categorizer from applying the search rules for that field. This is also true for list fields that have a default value, such as the Type field.


Important:

It is important to instruct contributors to leave any fields blank that they want to have filled by search rules.


For more information about search rules, see Section 9.1.5.

9.1.3 Running Content Categorizer

The following tasks must be done to run Content Categorizer:

  • Define the XML Conversion method. For more information, see Section 9.1.4.1.

  • Define search rules. For more information, see Section 9.1.5.6.

  • Optional: Define field properties, including default values for metadata fields. For more information, see Section 9.1.4.2.


    Important:

    To use the CATEGORY search rule, install, set up and register a categorizer engine before defining the CATEGORY rule for any metadata fields.


9.1.3.1 Operating Modes

Content Categorizer can operate in either Interactive mode or Batch mode. Both modes require conversion of the source documents into an intermediate XML form, but their process flows are distinctly different.

  • Batch mode is used when recategorizing large numbers of documents in the repository. The system administrator uses a standalone utility to run Content Categorizer, then either performs a live update of content metadata or uses the output file from Batch Categorizer as input to the Batch Loader. For more information about the steps used during this process, see Section 9.1.3.1.1.

  • Interactive mode integrates Content Categorizer with the Content Check In Form and Info Update Form. Users click Categorize on the form to run Content Categorizer on a single content item. Any value that is returned by Content Categorizer is a suggested value, because the contributor can edit or replace the returned value. For more information about the steps taken during this process, see Section 9.1.3.1.2.

9.1.3.1.1 Running Batch Mode

The MaxQueryRows configuration variable specifies the maximum number of documents that can be included in a single batch load process, and so it affects how many documents a user sees in Batch Categorizer. The default setting is 200, but it can be decreased or increased as necessary. For more information about the variable, see Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

The system administrator performs the following steps during the batch mode process:

  1. Run the Batch Categorizer application. For more information about running applications on UNIX systems, see Oracle Fusion Middleware Administering Oracle WebCenter Content.

  2. If necessary, on the Batch Categorizer page, define filters and release date information to display a list of content to be categorized. Click Categorize.

  3. On the Categorize Existing page, select Live Update or Batch Loader.

    • The Live Update option updates the data in the repository immediately.

    • The Batch Loader option is used to create a control file, which is the output of the Content Categorizer process. The file contains an entry for each source document, and contains the values for each metadata field based on the search rules defined in Content Categorizer. You can edit this file before submitting it to the Batch Loader.

  4. To run the Batch Loader utility automatically after the Content Categorizer process is complete, select the Run Batch Loader check box.

  5. Enter the location and file name for the log file that contains error information about the Content Categorizer process.

  6. Choose Categorize All to work with all content items or Categorize Selected to use only the highlighted items in the content list.

  7. Choose to categorize the Latest Revision, which works with only the most recent revision of an item, or All Revisions.

  8. Choose to continue or discontinue the categorization process when Batch Categorizer encounters an error.

  9. Click OK. The Progress bar shows the progress as the batch process moves through its steps:

    1. Content Categorizer locates the source content.

    2. If the content is in XML format, no translation occurs, and the process continues at step 4.

    3. If the content is not in XML format, conversion into XML occurs using the selected XML conversion method: Flexiondoc or SearchML.

    4. Content Categorizer applies the search rules to the XML and obtains values for the specified metadata fields.

    5. If Live Update was specified, database records are updated immediately. If Batch Loader was specified, an output control file is created, and the Batch Loader utility is run, if the option to do so after processing was specified.

  10. When the batch process is complete, review the error logs. Errors encountered by Batch Categorizer are displayed on the console and also recorded in the Batch Categorizer log (if specified). Errors encountered by Batch Loader are displayed on the console and also recorded in the system log.

If the optional AddCCToArchiveCheckin component is installed and enabled, all content loaded using the Batch Loader utility is categorized automatically, based on predefined rule sets. For more information about defining rule sets, see Section 9.1.5.6.

9.1.3.1.2 Interactive Mode Process

The following steps occur during the check-in process:

  1. A contributor opens the Content Check In Form or the Info Update Form, selects a primary file (only on Content Check In Form), and clicks Categorize.

  2. The Content Check In Form copies the primary file to the host and calls the Content Categorizer service.

  3. Content Categorizer locates the source content.

  4. If the content is in XML format, no translation occurs, and the process continues at step 6.

  5. If the content is not in XML format, the specified conversion method is used.

  6. Content Categorizer applies the search rules to the XML and obtains suggested values for the specified metadata fields.

  7. Content Categorizer inserts the suggested metadata values into the Content Check In Form or Update Info Form, and returns the form to the contributor.

  8. The contributor can check in or submit the document with the suggested values, revise the metadata values, or cancel the check in or update.

If the optional AddCCToNewCheckin component is installed and enabled, when you click Check In on the Content Check In Form, it performs steps 2 through 6 and completes the check in process, provided the properties for dDocTitle are set to Override Contents.

If the properties of dDocTitle are not set to Override Contents, then an alert is displayed requesting that the required field is completed. Field properties are set using the CC Admin Applet. For more information, see Section 9.1.4.2.

9.1.4 Setting Up Content Categorizer

Before using Content Categorizer, install and configure the necessary software. This section discusses those tasks:

9.1.4.1 Setting XML Conversion Method

To set the XML conversion method in Content Categorizer:

  1. Choose Administration then Content Categorizer Administration from the Main menu.

  2. On the Content Categorizer Administration page, click Configuration.

  3. On the Configuration tab, select the sccXMLConversion property and click Edit or double-click the property.

  4. From the list on the Property Config page, select either Flexiondoc or SearchML as the XML conversion method.

  5. Click OK.

  6. Click Apply to save the changes.

9.1.4.2 Defining Field Properties (Optional)

When any rule for a field succeeds, the found value is used (in either Batch Loader operations or Live Update operations). However, if Override is set to false, the found value does not replace an existing value.

When all rules for a field fail, no value is assigned to the field unless a default value is defined for the field and Use Default is set to true.
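The interaction of the Override and Use Default properties can be sketched as follows. This is a simplified model of the behavior described above, not the product's internal logic:

```python
def resolve_field(found, existing, default, override, use_default):
    """Sketch of how a field value is chosen from the Override and
    Use Default properties (simplified model, not product code)."""
    if found is not None:                     # at least one rule succeeded
        if existing and not override:
            return existing                   # keep the existing value
        return found
    if use_default and default is not None:   # all rules failed
        return default
    return existing or None                   # field left unchanged or blank

# With Override false, an existing value survives a rule match.
print(resolve_field("Accounting", "Sales", None,
                    override=False, use_default=False))  # Sales
```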

To define field properties for the metadata fields:

  1. Choose Administration then Content Categorizer Administration from the Main menu.

  2. On the Content Categorizer Administration page, click the Field Properties tab.

  3. Select a metadata field to be edited and click Edit, or double-click the field.

  4. On the Field Properties page, enter a default value for the field.

    The default value for a list field must match a value available for that field.

  5. Select the Override check box for the value returned by the categorization process to override an existing value for the field.

  6. Select the Use Default check box for the field's default value to be used if all rules fail (or are not defined) when the categorization process runs.

  7. Click OK.

  8. Repeat these steps for each field to be edited.

  9. Click Save Settings to save the changes.

9.1.5 Search Rules

Search rules define how Content Categorizer determines metadata values to return to the Content Check In Form or Info Update Form (for Interactive mode) or the batch file (for Batch mode).

This section discusses the following information regarding search rules:

Every search rule is defined by:

  • A rule type, which determines the method that Content Categorizer uses to search the XML document.

  • A key, which defines the XML element, phrase, or keyword that Content Categorizer looks for in the document, or the categorization engine/taxonomy that Content Categorizer uses to classify the document.

  • A count, which is used to refine the search criteria.

Consider the following guidelines when creating search rules:

  • You can apply search rules to any custom metadata field.

  • You can apply search rules to the Title, Comments, and Type standard metadata fields. You cannot define search rules for any other standard metadata fields (such as Author, Security Group, and Account).

  • You can define multiple search rules for a metadata field. (For a single metadata field, however, multiple CATEGORY rules referring to different taxonomies are not supported.)

  • Multiple search rules are run in the order specified, so that if a search rule does not result in a suggested value, the next rule is run. Arrange the list from most to least specific.

  • You can mix search rule types within a metadata field. For example, you can define an Option List rule, a Pattern Matching rule, and an Abstract rule for the same metadata field.

  • If none of the search rules specified for a metadata field can be satisfied, the field is left blank.
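The ordering behavior in the guidelines above amounts to a first-match chain, sketched here as a simplified model (the rule functions are hypothetical, not a product API):

```python
def run_rules(rules, document):
    """Run a field's search rules in the order specified; the first
    rule that produces a suggested value wins. If none succeed,
    the field is left blank (None)."""
    for rule in rules:
        value = rule(document)
        if value is not None:
            return value
    return None

# Usage: most specific rule first, broader fallbacks after it.
rules = [lambda d: d.get("invoice"), lambda d: d.get("title")]
print(run_rules(rules, {"title": "Q3 Report"}))  # Q3 Report
```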

9.1.5.1 Pattern Matching Search Rules

Pattern Matching search rules look for specific text or a specific XML element and return an associated value. For example, the Invoice # metadata field can contain the value that follows an Invoice: or Invoice Number: label in the source document, or it can contain the value that is within the <Invoice> tag in the XML document.

There are two general types of Pattern Matching rules: Tag Search and Text Search. Within each type are several sub-types.

  • Tag Search searches for the full name of an XML element that matches the key. If such an element is found, the text contained in the element is returned as the result. Tag searches are case sensitive. Sub-types include the following:

    • TAG_TEXT

    • TAG_ALLTEXT

  • Text Search searches for text that matches the key. If such text is found, the text near or following the key is returned as the result. Text searches are not case sensitive. Sub-types include the following:

    • TEXT_REMAINDER

    • TEXT_FULL

    • TEXT_ALLREMAINDER

    • TEXT_ALLFULL

    • TEXT_NEXT

    • TEXT_ALLNEXT

The key for a Pattern Matching search rule is either an XML element (for a Tag Search) or a text phrase (for a Text Search).

The count for a Pattern Matching search rule defines the number of tags or text phrases that must be matched before the rule returns results. For example, a count of 4 looks for the fourth occurrence of the key. If only three occurrences of the key are found in the document, the rule fails. The default count of 1 returns the first occurrence of the key.

The following examples illustrate the use of the Pattern Matching search rules.

Example: TAG_TEXT

This rule searches for the full name of an XML element that matches the key (including case). If such an element is found, all text that belongs to the element is concatenated and returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TAG_TEXT

  • Key: TAG_A

  • Returns: Title: The Big Wolf

Example: TAG_ALLTEXT

This rule searches for the full name of an XML element that matches the key (including case). If such an element is found, all text that belongs to the element, and to all children of the element, is concatenated and returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TAG_ALLTEXT

  • Key: TAG_A

  • Returns: Title: The Big Bad Wolf
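The difference between TAG_TEXT and TAG_ALLTEXT can be sketched with Python's standard XML library; this is an illustrative model of the two behaviors (with whitespace normalized), not the product's implementation:

```python
import xml.etree.ElementTree as ET

# The content from the examples above, wrapped in a root element.
doc = ET.fromstring(
    "<root>"
    "<TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>"
    "<TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>"
    "</root>"
)

def tag_text(root, key):
    # TAG_TEXT: text belonging directly to the matching element
    # (children's tail text included, children's own text excluded).
    el = root.find(key)
    parts = [el.text or ""] + [(c.tail or "") for c in el]
    return " ".join("".join(parts).split())

def tag_alltext(root, key):
    # TAG_ALLTEXT: text of the element and of all of its children.
    el = root.find(key)
    return " ".join("".join(el.itertext()).split())

print(tag_text(doc, "TAG_A"))     # Title: The Big Wolf
print(tag_alltext(doc, "TAG_A"))  # Title: The Big Bad Wolf
```

Note that tag lookups here are case sensitive, matching the Tag Search behavior described above; the Text Search sub-types are case insensitive.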

Example: TEXT_REMAINDER

This rule searches for text that matches the key (except for case). If such text is found, any text following the key that belongs to the same XML element is returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_REMAINDER

  • Key: Title:

  • Returns: The Big Wolf

Example: TEXT_ALLREMAINDER

This rule searches for text that matches the key (except for case). If such text is found, any text following the key that belongs to the same XML element, and to all children of the element, is returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_ALLREMAINDER

  • Key: Title:

  • Returns: The Big Bad Wolf

Example: TEXT_FULL

This rule searches for text that matches the key (except for case). If such text is found, any text that belongs to the same XML element, including the key text, is returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_FULL

  • Key: Title:

  • Returns: Title: The Big Wolf

Example: TEXT_ALLFULL

This rule searches for text that matches the key (except for case). If such text is found, any text that belongs to the same XML element, including the key text and any text belonging to children of the element, is returned as the result.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_ALLFULL

  • Key: Title:

  • Returns: Title: The Big Bad Wolf

Example: TEXT_NEXT

This rule searches for text that matches the key (except for case). If such text is found, any text that belongs to the next non-blank XML element is returned as the result. Blank elements and elements composed of non-printing characters are not selected as the return value.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_NEXT

  • Key: Title:

  • Returns: Subtitle: A Play

Example: TEXT_ALLNEXT

This rule searches for text that matches the key (except for case). If such text is found, any text that belongs to the next non-blank XML element, and to all children of the element, is returned as the result. Blank elements and elements composed of non-printing characters are not selected as the return value.

  • Content: <TAG_A>Title: The Big <TAG_B>Bad</TAG_B> Wolf</TAG_A>

    <TAG_C>Subtitle: A <TAG_D>Morality</TAG_D> Play</TAG_C>

  • Rule: TEXT_ALLNEXT

  • Key: Title:

  • Returns: Subtitle: A Morality Play

9.1.5.2 Abstract Search Rules

Abstract search rules look for an XML element and return a descriptive sentence or paragraph from that element. For example, the Summary metadata field could be filled by a returned value of "Germany is a large country in size, culture, and worldwide economics. One of Germany's largest industries includes the manufacturing of world class automobiles like BMW, Mercedes, and Audi."

The Abstract rule type is useful where there is no readily identifiable or explicitly tagged block of text in the content item. Typically, these rules are used to suggest summary or topic information about the document.

There are two types of abstract search rules: First Paragraph and First Sentence.

  • First Paragraph searches for the full name of an XML element that matches the key. The entire paragraph of the first such element that meets the size criteria (specified by the count) is returned as the result.

  • First Sentence searches for the full name of an XML element that matches the key. If such an element is found, the first sentence of the element is returned as the result.

The key for an Abstract search rule is an XML element.

The count is interpreted differently for the First Paragraph and First Sentence search rules.

  • For a First Paragraph search rule, the count is a size threshold measured in percent:

    1. The rule searches the document for all paragraphs that match the key.

    2. The rule calculates the average size (based on character count) of the paragraphs that match the key.

    3. The rule multiplies the average size by the count percentage (0 = 0%, 100 = 100%).

    4. The rule looks for the first paragraph larger than the resulting number.

    For example, if the count is set to 75 and the average paragraph size is 100 characters, the rule returns the first paragraph larger than 75 characters that matches the key.

    If the count is set to the default of 1, the rule is likely to return the first paragraph that matches the key.

  • For a First Sentence search rule, the count is the number of elements that have their first sentences returned.

    For example, if the count is set to 3, the rule returns the first sentence from each of the first three elements that match the key.
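The two count interpretations above can be sketched in a few lines; this is a simplified model of the stated semantics, not the product's code:

```python
import re

def first_paragraph(paragraphs, count=1):
    """FIRST_PARAGRAPH: count is a percentage of the average paragraph
    length (character count); return the first paragraph larger than
    that threshold, or None if the rule fails."""
    if not paragraphs:
        return None
    threshold = sum(len(p) for p in paragraphs) / len(paragraphs) * count / 100
    return next((p for p in paragraphs if len(p) > threshold), None)

def first_sentence(paragraphs, count=1):
    """FIRST_SENTENCE: return the first sentence of each of the first
    `count` matching elements."""
    sentences = [re.split(r"(?<=[.!?])\s+", p)[0] for p in paragraphs[:count]]
    return " ".join(sentences) if sentences else None

paras = ["See Dick run. See Jane run. See Dick and Jane.",
         "See Spot run. See Puff chase Spot.",
         "See Dick chase Spot and Puff."]
print(first_sentence(paras, count=2))  # See Dick run. See Spot run.
```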

The following examples illustrate the use of the Abstract search rules.

Example: FIRST_PARAGRAPH

This example returns the first <Text> element that exceeds one-half the average <Text> element paragraph size. Note that the <Title> element does not match the key value, so it is ignored for both the search and for the average length calculation.

  • Content: <Title>Poem</Title>

    <Text>Mary had</Text>

    <Text>a little Lamb</Text>

    <Text>The fleece was white as snow</Text>

    <Text>And everywhere that Mary went the lamb was sure to go</Text>

  • Rule: FIRST_PARAGRAPH

  • Key: Text

  • Count: 50

  • Returns: The fleece was white as snow.

Example: FIRST_SENTENCE

This example returns the first sentence of the first two <Text> elements. Note that the <Title> element does not match the key value, so it is excluded from the search.

  • Content: <Title>Barefoot in the Park</Title>

    <Text>See Dick run. See Jane run. See Dick and Jane.</Text>

    <Text>See Spot run. See Puff chase Spot.</Text>

    <Text>See Dick chase Spot and Puff.</Text>

  • Rule: FIRST_SENTENCE

  • Key: Text

  • Count: 2

  • Returns: See Dick run. See Spot run.

9.1.5.3 Option List Search Rule

The Option List search rule looks for keywords within the source document, applies a score for each keyword found, and returns the value that has the highest keyword score.

For example, if the keywords margin, SEC filing, or invoice were found in a document, the suggested value for the Department field would be Accounting, while the keywords tolerance, assembly, or inventory would return Manufacturing as the suggested value.

  • The Option List search rule usually applies to metadata fields that have a list defined in the Configuration Manager.

  • Option list names and values (called categories in Content Categorizer) appear in Content Categorizer as specified in the Configuration Manager. If a custom list field is created or changed while the CC Admin Applet is open, close and reopen the applet to see the changes.

  • The current version of Content Server automatically inserts a blank value as the default value in a custom list field. In this case, the first value (by default, a blank value) is not considered a user-entered value, and the Option List search rule is applied. To prevent the Option List search rule from overriding the first value in a custom list field, provide a default value for that list on the Configuration Manager Applet.

There is one type of Option List search rule, which searches for keywords (single words or phrases) that match the keywords defined in the key.

  • Keywords can be single words (for example, dog) or multiple-word phrases (for example, black dog).

  • Keywords can use the following defined set of operators to further refine a search:

    • $$AND$$

    • $$OR$$

    • $$AND_NOT$$

    • $$NEAR$$

  • Keywords are pre-assigned to each category (value) in the list, and each keyword has a weight assigned to it.

  • The number of occurrences of each keyword found in the document is multiplied by its weight, resulting in a keyword score.

  • The keyword scores for each category are added, resulting in a category score.

  • The category with the highest score is returned as the suggested value.

  • If there is a tie between categories, the category earliest in the list is returned as the suggested value.

  • Use the weights Always and Never to override the scores and count threshold.

    • An occurrence of a keyword with the Always weight forces the category to be returned as the suggested value, regardless of score.

    • An occurrence of a keyword with the Never weight disqualifies the category from being returned as the suggested value, regardless of score.

    • If two categories have keywords assigned the Always weight, and both keywords occur in the document, the keyword first found in the document takes precedence.


      Important:

      Option List searches are case sensitive and must match exactly. For example, Invoice, Invoices, invoice, and invoices must be defined to retrieve all instances of this keyword.


The key for an Option List search rule is the Option List name, as shown on the Option Lists tab of the Admin Applet.

The count for an Option List search rule sets a minimum threshold score for the rule to return results. For example, if the count is set to 50, and the highest accumulated keyword score is 45, the rule fails.
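The scoring described above can be sketched as follows. This is a simplified model: in particular, it resolves competing Always keywords by category order rather than by position in the document, and it does not implement the $$AND$$/$$OR$$ operators:

```python
import re

def option_list(text, categories, count=1):
    """Simplified Option List scoring: keyword occurrences times weight,
    summed per category; highest score wins, the earliest category wins
    ties, and the count threshold must be met."""
    best_category, best_score = None, None
    for category, keywords in categories.items():
        score, disqualified, forced = 0, False, False
        for keyword, weight in keywords:
            hits = len(re.findall(re.escape(keyword), text))  # case sensitive
            if hits == 0:
                continue
            if weight == "Never":
                disqualified = True      # category can never win
            elif weight == "Always":
                forced = True            # category wins regardless of score
            else:
                score += hits * weight
        if disqualified:
            continue
        if forced:
            return category
        if best_score is None or score > best_score:  # strict: ties keep earliest
            best_category, best_score = category, score
    if best_score is not None and best_score >= count:
        return best_category
    return None  # rule fails

text = ("See Dick run. See Jane run. See Dick and Jane. "
        "See Spot run. See Puff chase Spot. See Dick chase Spot and Puff.")
categories = {
    "Dick": [("Dick", 10), ("boy", 5), ("Richard", 2)],
    "Jane": [("Jane", 10), ("girl", 5), ("Janie", 2)],
    "Spot": [("Spot", 10), ("dog", 5)],
    "Puff": [("Puff", 10), ("cat", 5)],
}
print(option_list(text, categories, count=10))  # Dick
print(option_list(text, categories, count=50))  # None
```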

The following examples illustrate the use of the Option List search rule.

Example 1: Option List

In this example, the score for Dick and Spot is 30 (3 occurrences x 10), and the score for Jane and Puff is 20 (2 occurrences x 10). Dick is returned as the suggested value because it is earlier in the list than Spot:

  • Content: <Title>Barefoot in the Park</Title>

    <Text>See Dick run. See Jane run. See Dick and Jane.</Text>

    <Text>See Spot run. See Puff chase Spot.</Text>

    <Text>See Dick chase Spot and Puff.</Text>

  • Rule: OPTION_LIST

  • Key: MainCharacterList

  • Count: 10

  • Option List Categories, Keywords, and Weight: Dick: Dick=10, boy=5, Richard=2

    Jane: Jane=10, girl=5, Janie=2

    Spot: Spot=10, dog=5

    Puff: Puff=10, cat=5

  • Returns: Dick

Example 2: Option List

In this example, Spot is returned as the suggested value because its score of 60 (3 occurrences x 20) is higher than the other categories:

  • Content: <Title>Barefoot in the Park</Title>

    <Text>See Dick run. See Jane run. See Dick and Jane.</Text>

    <Text>See Spot run. See Puff chase Spot.</Text>

    <Text>See Dick chase Spot and Puff.</Text>

  • Rule: OPTION_LIST

  • Key: MainCharacterList

  • Count: 10

  • Option List Categories, Keywords, and Weight: Dick: Dick=10, boy=5, Richard=2

    Jane: Jane=10, girl=5, Janie=2

    Spot: Spot=20, dog=10

    Puff: Puff=10, cat=5

  • Returns: Spot

Example 3: Option List

In this example, the rule fails because none of the scores is above the Count threshold of 50:

  • Content: <Title>Barefoot in the Park</Title>

    <Text>See Dick run. See Jane run. See Dick and Jane.</Text>

    <Text>See Spot run. See Puff chase Spot.</Text>

    <Text>See Dick chase Spot and Puff.</Text>

  • Rule: OPTION_LIST

  • Key: MainCharacterList

  • Count: 50

  • Option List Categories, Keywords, and Weight: Dick: Dick=10, boy=5, Richard=2

    Jane: Jane=10, girl=5, Janie=2

    Spot: Spot=10, dog=5

    Puff: Puff=10, cat=5

  • Returns: Fail

Example 4: Option List

In this example, Puff is returned as the suggested value because the keyword "Puff" has a weight of Always:

  • Content: <Title>Barefoot in the Park</Title>

    <Text>See Dick run. See Jane run. See Dick and Jane.</Text>

    <Text>See Spot run. See Puff chase Spot.</Text>

    <Text>See Dick chase Spot and Puff.</Text>

  • Rule: OPTION_LIST

  • Key: MainCharacterList

  • Count: 10

  • Option List Categories, Keywords, and Weight: Dick: Dick=10, boy=5, Richard=2

    Jane: Jane=10, girl=5, Janie=2

    Spot: Spot=10, dog=5

    Puff: Puff=Always, cat=5

  • Returns: Puff

9.1.5.4 Categorization Engine Search Rule

The Categorization Engine search rule uses a third-party categorizer engine and a defined taxonomy to determine and return a value that represents a category within the specified taxonomy, for example, News/Technology/Computers.

There is one type of Categorization Engine search rule, which uses the categorizer engine and taxonomy specified in the Key to return a value for the field.

The key for a Categorization Engine search rule is the name of the categorizer engine followed by the name of the taxonomy, for example, EngineName/TaxonomyName. If no engine name is defined in the Key field, Content Categorizer defaults to the first engine displayed in the Categorizer Engines list. If only one engine is defined, you can enter just the taxonomy name in the Key field.

The count for a Categorization Engine search rule sets a minimum confidence level threshold for the returned results.

When a categorization engine returns a category (or set of categories) for a given query, a confidence level is also returned, which is often expressed as a percentage for each category. The Category rule always accepts the highest-confidence category, unless the confidence level is below the count value specified for the rule, in which case the rule fails. For example, if the count is set to 50, and the highest-confidence category returned is 45, the rule fails.

The default count of 1 would always accept the highest-confidence category returned by the categorizer engine. The actual range for the Count value depends on the categorizer engine that is being used.
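The confidence-threshold check described above can be sketched as follows. The result data here is hypothetical; a real categorizer engine returns its own categories and confidence levels.

```python
def pick_category(results, count):
    """results: list of (category, confidence) pairs.

    Accept the highest-confidence category unless it falls below the
    Count threshold, in which case the rule fails (returns None).
    """
    category, confidence = max(results, key=lambda r: r[1])
    return category if confidence >= count else None

results = [("News/Technology/Computers", 45), ("News/Science", 30)]
print(pick_category(results, 1))    # -> News/Technology/Computers
print(pick_category(results, 50))   # -> None (45 is below the threshold)
```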

9.1.5.5 Filetype Search Rule

The Filetype search rule looks at the file name extension of a document and returns a term, usually a file type description associated with the file name extension.

There is one type of Filetype search rule, which uses the file name extension of the primary (native) file to return a value for the field.

When the Filetype search rule is defined for a metadata field, the file name extension of the content item is matched against all values in the DocFormatsWizard table. This table is found in the file doc_config.htm, which is located in the IntradocDir/shared/config/resources/ directory.

If a match is found, the associated value in the Description column is extracted and translated. The resulting string is returned as the suggested metadata value for the field. If the primary file path has no extension, or if the extension does not match any of the "extensions" values in the DocFormatsWizard table, the rule fails and the next rule in the list for the metadata field is executed.

The key for a FILETYPE search rule is not used when determining a metadata value. Leave the Key field blank.

The count for a FILETYPE search rule is not used when determining a metadata value. Leave the Count field blank.

If a FILETYPE rule is created with non-blank Key or Count fields, a warning message is displayed indicating that these fields are not supported by the rule.

The following examples illustrate the use of the Filetype Search rule.

Example 1: Filetype Search

  • Primary File: policies.doc

  • Rule: FILETYPE

  • Key: blank

  • Count: blank

  • Returns: Microsoft Word Document

Example 2: Filetype Search

  • Primary File: procedures.wpd

  • Rule: FILETYPE

  • Key: blank

  • Count: blank

  • Returns: Corel WordPerfect Document
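The lookup behavior shown in the two examples can be sketched as follows. The mapping data here is illustrative only; the real values live in the DocFormatsWizard table in IntradocDir/shared/config/resources/doc_config.htm, and the description strings are translated before being returned.

```python
import os

DOC_FORMATS = {                      # extension -> translated description
    "doc": "Microsoft Word Document",
    "wpd": "Corel WordPerfect Document",
}

def filetype_rule(primary_file):
    """Return the description for the file's extension, or None (rule fails)."""
    ext = os.path.splitext(primary_file)[1].lstrip(".").lower()
    return DOC_FORMATS.get(ext)      # None -> the next rule in the list runs

print(filetype_rule("policies.doc"))     # -> Microsoft Word Document
print(filetype_rule("procedures.wpd"))   # -> Corel WordPerfect Document
print(filetype_rule("README"))           # -> None (no extension)
```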

9.1.5.6 Creating Search Rules

During startup, Content Categorizer takes a snapshot of the current metadata field configuration including field names and lengths. If the metadata field configuration changes, restart Content Server before running the Content Categorizer Admin Applet to add or modify any search rules.


Important:

Content Categorizer requires a non-empty rule set for any file type (.doc, .txt, .xml, and so on) it is called to examine. If no rules exist for a given file type, Content Categorizer throws an exception. The easiest way to protect against this is to add at least one rule to the Default rule set. The Default rule set is used for all file types that do not have a custom rule set assigned.


To define search rules for any metadata field:

  1. Choose Administration then Content Categorizer Administration.

  2. On the Content Categorizer Administration page, click the Rule Sets tab.

  3. In the Ruleset pane, select the ruleset from the list, or click Add to add and name a new ruleset. A ruleset contains multiple rules that apply to specific documents or a particular document type. If a specific ruleset is not defined for a given document or document type, the default ruleset is used.

  4. Select a metadata field from the Field list.

  5. Click Add.

  6. On the Add/Edit Rule for field_name page, select the rule type from the Rule list.

  7. Enter the search rule key in the Key field.

    If CATEGORY is used, enter the categorization engine name (if there are multiple items in the list of Categorizer Engines), followed by a slash (/), followed by the taxonomy name. For example: EngineName/TaxonomyName

    For an OPTION_LIST search rule, keywords for the list must be defined on the Option List tab.

  8. Enter the count in the Count field. For TAG and TEXT types, this is the number of tags or text phrases that must be matched before the rule returns results. For example, a count of 4 looks for the fourth occurrence of the key.

    If only three occurrences of the key are found in the document, the rule fails. The default count of 1 returns the first occurrence of the key.

    For FIRST_PARAGRAPH, this is the size threshold measured in percent. The first paragraph matching the key that is larger than the count percentage multiplied by the average paragraph size is returned. For example, if the count is set to 75 and the average paragraph size is 100 characters, the rule returns the first paragraph larger than 75 characters that matches the key. If the count is set to the default of 1, the rule is likely to return the first paragraph that matches the key.

    For FIRST_SENTENCE, this is the number of elements that have their first sentences returned. For example, if the count is set to 3, the rule returns the first sentence from each of the first three elements that match the key.

    For CATEGORY, this is the minimum confidence level threshold for the rule to return results. For example, if the count is set to 50, and the highest-confidence category has a confidence level of 45, the rule fails.

  9. Click OK when done.

  10. Add search rules to each metadata field as necessary.

    • To delete a rule, select the rule in the Rules List and click Delete.

    • To edit a rule, select the rule in the Rules List and click Edit.

    • To adjust the order of the rules, select the rule in the Rules List and click Move Up or Move Down. Rules are applied in the order listed. If the first rule succeeds, no other rules are applied. If the first rule fails, then the next rule is applied, and so forth.


      Important:

      If a CATEGORY rule is added, edited, or deleted, a dialog prompts you to apply the changes and build, rebuild, or check for orphaned query trees for this rule on the Query Trees tab.


  11. Click Apply to save the changes, or click OK to save the changes and close the Content Categorizer Administration page.
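The rule-ordering behavior described in step 10 can be sketched as follows. The rule functions here are hypothetical stand-ins for the TAG, TEXT, and other rule types described above.

```python
def apply_rules(rules, document):
    """Apply rules in list order; the first rule that succeeds wins."""
    for rule in rules:
        value = rule(document)       # each rule returns a value or None
        if value is not None:
            return value             # first success stops the chain
    return None                      # every rule failed

rules = [
    lambda doc: None,                # first rule fails...
    lambda doc: doc.get("title"),    # ...so the second rule is applied
]
print(apply_rules(rules, {"title": "Barefoot in the Park"}))
```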

To define the keywords and weights for a list:

  1. Choose Administration then Content Categorizer Administration from the Main menu.

  2. On the Content Categorizer Administration page, click the Option Lists tab.

  3. Select a list from the Option List. The list includes the Type (dDocType) list, plus lists of all custom metadata fields that have a list defined in the Configuration Manager.


    Caution:

    When a list metadata field is deleted from the Configuration Manager, the field is removed from the Rule Sets tab, but it still appears in the Option List list on the Option Lists tab. Be careful not to select an obsolete list.


  4. Select a value from the Category list. Only the pre-defined values for the list are included.

  5. Enter a keyword or phrase in the Keyword field. Option List searches are case sensitive and must match exactly.

    • Keywords can be single words or multiple-word phrases.

    • Keywords can include Boolean-type expressions, where the following set of binary operators are valid: $$AND$$, $$OR$$, $$AND_NOT$$, $$NEAR$$

  6. Select a weight for the keyword.

    • Always: If the keyword is found, the selected category is returned as the suggested value, regardless of the score.

    • Weight: This number multiplied by the number of occurrences of the keyword is the category's score. The category with the highest score is returned as the suggested value for the list field.

    • Never: If the keyword is found, the selected category is not returned as the suggested value, regardless of the score.

  7. Click Add.

  8. Enter keywords for each category in the selected list.

    • To delete a keyword, select the keyword in the Keywords list and click Delete.

    • To edit a keyword, select the keyword in the Keywords list, click Edit, edit the keyword, the weight or both, and click Update.

  9. Click Apply to save the changes, or click OK to save the changes and close the page.

You can modify the configuration file so that Content Categorizer ignores the default Type value and applies search rules to the Type field.

This procedure applies only to the Type (dDocType) field. You cannot apply search rules to the other standard list fields (Security Group, Author, and Account).

To apply search rules to the Type field:

  1. Open the config.cfg file located in the IntradocDir/config/ directory in a text-only editor such as WordPad.

  2. Add the following line to the file:

    ForceDocTypeChoice=true
    
  3. Save and close the file.

  4. Stop and restart Content Server.

9.1.6 Sample doc_config.htm Page

The following is a sample doc_config.htm page.

<@table DocFormatsWizard@>

dFormat                                                    Extensions                           dConversion       dDescription
application/corel-wordperfect, application/wordperfect     wpd                                  WordPerfect       apWordPerfectDesc
application/vnd.framemaker                                 fm                                   FrameMaker        apFramemakerDesc
application/vnd.framebook                                  bk, book                             FrameMaker        apFrameMakerDesc
application/vnd.mif                                        mif                                  FrameMaker        apFrameMakerInterchangeDesc
application/lotus-1-2-3                                    123, wk3, wk4                        123               apLotus123Desc
application/lotus-freelance                                prz                                  Freelance         apLotusFreelanceDesc
application/lotus-wordpro                                  lwp                                  WordPro           apLotusWordProDesc
application/msword, application/ms-word                    doc, dot                             Word              apMicrosoftWordDesc
application/vnd.ms-excel, application/ms-excel             xls                                  Excel             apMicrosoftExcelDesc
application/vnd.ms-powerpoint, application/ms-powerpoint   ppt                                  PowerPoint        apMicrosoftPowerPointDesc
application/vnd.ms-project, application/ms-project         mpp                                  MSProject         apMicrosoftProjectDesc
application/ms-publisher                                   pub                                  MSPub             apMicrosoftPublisherDesc
application/write                                          wri                                  Word              apMicrosoftWriteDesc
application/rtf                                            rtf                                  Word              apRtfDesc
application/vnd.visio                                      vsd                                  Visio             apVisioDesc
application/vnd.illustrator                                ai                                   Illustrator       apIllustratorDesc
application/vnd.photoshop                                  psd                                  PhotoShop         apPhotoshopDesc
application/vnd.pagemaker                                  p65                                  PageMaker         apPageMakerDesc
image/gif                                                  drw, igx, flo, abc, igt              iGrafx            apiGrafxDesc
text/postscript                                            ps                                   Distiller         apDistillerDesc
application/hangul                                         hwp                                  Hangul97          apHangul97Desc
application/ichitaro                                       jtd, jtt                             Ichitaro          apIchitaroDesc
image/graphic                                              gif, jpeg, jpg, png, bmp, tiff, tif  ImageThumbnail    apThumbnailsDesc
image/application                                          txt, eml, msg                        NativeThumbnail   apNativeThumbnailsDesc

<@end@>
<@table PdfConversions@>

dFormat                                                    Extensions                           dConversion       dDescription
application/pdf                                            pdf                                  PDFOptimization   apPdfOptimization
application/pdf                                            pdf                                  ImageThumbnail    apPdfThumbnailsDesc

<@end@>

9.1.7 XSLT Transformation

Content Server uses a two-step process for categorizing content: the first step translates content into an XML format, and the second step transforms that XML file into another XML file useful to Content Categorizer. The process is transparent in that the original content is not modified, and both the translated and transformed XML files are discarded after use.

This section covers the following topics:

9.1.7.1 Translation

The translation step uses the OutsideIn XML Export filters to output the XML in either SearchML or Flexiondoc XML format, depending on the type of content being translated and on whether the format is available for the platform being used. This translation process enables Categorizer to support a large number of different source document formats.

The transformation step uses Extensible Stylesheet Language Transformations (XSLT) to transform the initial XML output into an XML equivalent that Content Categorizer can easily search and analyze based on search rules defined by the user.

An overview of the transformation process can be useful to anyone interested in the categorization process, and can serve as a starting point for users who want to define their own XSLT style sheets to accommodate specific document processing needs.

Translation Using OutsideIn XML Export Filters

A run-time version of the OutsideIn XML Export product is integrated and installed with Content Server, and it filters content checked in for categorization. The Export filters convert content to XML for transformation using Categorizer's XSLT style sheets. The transformation is necessary because the Export XML schemas, Flexiondoc and SearchML, are not in a form easily searched by Content Categorizer rules.

For a list of file formats supported by OutsideIn XML Export, see Chapter 40, "Input File Formats."

9.1.7.2 Transformation Using XSLT Style Sheets

Two style sheets are included with Content Categorizer and applied based on the initial translation format provided by the OutsideIn XML Export filter. The style sheets are located in the following directory:

/IntradocDir/data/contentcategorizer/stylesheets/

For content items output in SearchML, searchml_to_scc.xsl is applied. For content items output in Flexiondoc, flexiondoc_to_scc.xsl is applied. SearchML and Flexiondoc both reproduce style designations found in the source content, but they do so differently, in ways not detectable by Content Categorizer rules. The appropriate style sheet can recognize the necessary style information in each format and use that information as the basis for transforming the final output tags into an XML document useful to Content Categorizer.

The similarity between SearchML and Flexiondoc depends on the degree to which internal styles or metadata are used in the content. When working with content that uses named styles, such as Microsoft Word documents, the resultant output is similar. When working with content in formats such as PDF or plain text, the results come out with more generic tagging.


Important:

There is a problem with the XSLT transformation used to post-process PDF content that is output in Flexiondoc format. When Flexiondoc is used, single words are assigned to individual XML elements, making the final XML unsuitable for most Categorizer search rules. It is recommended to use SearchML for categorizing PDF content.


9.1.7.3 SearchML Transformation

When the OutsideIn XML Export filter translates content into SearchML XML format, it identifies the properties of the content item, such as title, subject, and author, and tags them as a <doc_property> element. It distinguishes the properties by a type attribute. It also identifies document text and tags it as a <p> element. It distinguishes styles within text by an s attribute.

9.1.7.4 Flexiondoc Transformation

When the OutsideIn XML Export filter translates content into Flexiondoc XML format, it identifies the properties of the content item, such as title, subject, and author, and tags them as a <doc_property> element, just like SearchML. However, it distinguishes the properties by a name attribute, instead of type.

Where Flexiondoc differs from SearchML is in how it identifies styles. Paragraph styles are tagged with <tx.p> tags, and character styles are tagged with <tx.r> tags, but each have an attribute based on a unique style id, in addition to a name attribute.

All styles are defined in child elements of the <style_tables> element of the Flexiondoc XML file, and given an id attribute, which is called when referencing the style, and which the template file uses to define a style key with a name attribute.
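The attribute difference described above, SearchML distinguishing document properties by a type attribute and Flexiondoc by a name attribute, can be illustrated with a minimal sketch. The XML snippets are hypothetical minimal examples, and the lookup helper is not part of the product.

```python
import xml.etree.ElementTree as ET

searchml = '<doc><doc_property type="title">Barefoot in the Park</doc_property></doc>'
flexiondoc = '<doc><doc_property name="title">Barefoot in the Park</doc_property></doc>'

def property_value(xml_text, attr, wanted):
    """Return the text of the <doc_property> whose attribute matches."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("doc_property"):
        if prop.get(attr) == wanted:
            return prop.text
    return None

print(property_value(searchml, "type", "title"))    # SearchML uses "type"
print(property_value(flexiondoc, "name", "title"))  # Flexiondoc uses "name"
```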

9.2 Using the Link Manager Component

Link Manager is an optional component bundled with and automatically installed with Content Server. When the component is enabled, it evaluates, filters, and parses the URL links of indexed content items before extracting them for storage in a database table (ManagedLinks). After the ManagedLinks table is populated with the extracted URL links, the Link Manager component references this table to generate link search results, lists of link references for the Content Information page, and the resource information for the Link Info page.

The Link Manager component enables users to search for links, view lists of link references for content items, and view detailed link information.

The search results, link references lists, and Link Info pages are useful for determining which content items are affected by content additions, changes, or revision deletions. For example, before deleting a content item, you can verify that any URL references it contains are insignificant. You can also use them to monitor how content items are being used.

The Link Manager component extracts the URL links during the indexing cycle, so only the URL links of released content items are extracted. For content items with multiple revisions, only the most current released revision has entries in the database table. If the Link Manager component is installed after content items are checked in, perform a rebuild to ensure that all links are included in the ManagedLinks table.

Link Manager does all of its work during the indexing cycle, so it increases the amount of time required to index content items and to rebuild collections.

The amount of additional time depends on the type and size of the content items involved; files that must be converted take longer to process than text-based (HTML) files.

For information about disabling Link Manager during the rebuild cycle, see the LkDisableOnRebuild and LkReExtractOnRebuild variables in Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

This section discusses the following topics:

9.2.1 Link Extraction Process


Caution:

The Link Manager component uses HtmlExport 8 for file conversion. A link extractor template file is included with the Link Manager component. HtmlExport 8 requires this template. Do not edit this file.


The Link Manager consists of an extraction engine and a pattern engine. The extraction engine includes a conversion engine (HtmlExport). The conversion engine is used to convert files that the extraction engine cannot natively parse to a text-based file format (HTML).

Link Manager does not use HtmlExport to convert files that contain any of the following strings in the file format: hcs, htm, image, text, xml, jsp, and asp. These text-based files are handled by Link Manager without need for conversion.

During the indexing cycle, the Link Manager component searches the checked-in content items to find URL Links as follows:

  1. The extraction engine converts the file using the conversion engine (if necessary).

  2. The extraction engine then uses the pattern engine to access the link evaluation rules defined in the Link Manager Patterns table.

  3. The evaluation rules tell the extraction engine how to sort, filter, evaluate, and parse the accepted URL links in the content items.

  4. The accepted URL links are inserted or updated in the ManagedLinks table.


Important:

To execute successfully, HtmlExport requires either a virtual or physical video interface adaptor (VIA). Most Windows environments have graphics capabilities that provide HtmlExport access to a frame buffer. UNIX systems, however, may not have graphics cards and do not have a running X-Windows Server for use by HtmlExport. For systems without graphics cards, you can install and use a virtual frame buffer (VFB).


9.2.1.1 File Formats and Conversion

Various file formats (such as Word) must be converted by the conversion engine (HtmlExport) before links can be extracted. Because Link Manager can extract links in text-based files (HTML) without requiring conversion, Link Manager does not use HtmlExport to convert files that contain any of the following strings in the file format: hcs, htm, image, text, xml, jsp, and asp.

Link Manager also handles all the variations of these file formats. For example, the hcs string matches the dynamic server page strings of hcst, hcsp, and hcsf. The image string matches all comparable variants such as image/gif, image/jpeg, image/rgb, image/tiff, and so on. To prevent other types of files from being converted, use the LkDisallowConversionFormats configuration variable. For more information, see Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.
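The substring check described above can be sketched as follows. The marker list comes from the text; the helper function itself is hypothetical, as the actual component is implemented in Java.

```python
# Formats containing any of these substrings are treated as text-based
# and skip HtmlExport conversion; "hcs" also matches hcst/hcsp/hcsf,
# and "image" matches image/gif, image/jpeg, and so on.
TEXT_BASED_MARKERS = ("hcs", "htm", "image", "text", "xml", "jsp", "asp")

def needs_conversion(file_format):
    """True if Link Manager would hand the file to HtmlExport first."""
    fmt = file_format.lower()
    return not any(marker in fmt for marker in TEXT_BASED_MARKERS)

print(needs_conversion("application/msword"))  # True  - converted first
print(needs_conversion("text/html"))           # False - parsed natively
print(needs_conversion("image/jpeg"))          # False - "image" matches
```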

Link Manager recognizes links in the following file formats:

  • Text-based formats (txt, html, xml, jsp, asp, csv, hcst, hcsf, and hcsp)

  • E-mail (msg and eml)

  • Microsoft Word

  • Microsoft Excel

  • OpenOffice Writer

  • OpenOffice Calc

9.2.1.2 Link Status

All new and existing links are managed during the indexing cycle. When content items are checked in, the accepted links in the content items are added to or updated in the Managed Links table. Existing links are evaluated for changes resulting from content items being checked in or deleted. As links are added or monitored, they are marked as valid or invalid.

When one content item in the system references another content item in the system, the resulting link is marked as valid. When an existing link references a deleted content item, the link is reevaluated and the status changes from valid to invalid. Statuses are recorded as Y (valid) or N (invalid) in the dLkState column of the Managed Links table and displayed for the user in the State column of the Link Info page as Valid or Invalid.

9.2.2 Configuring Link Manager

You can specify the following Link Manager configuration variables in the IntradocDir/config/config.cfg file:

  • AllowForceDelete

  • HasSiteStudio

  • LkRefreshBatchSize

  • LkRefreshErrorsAllowed

  • LkRefreshErrorPercent

  • LkRefreshErrorThreshold

  • LkDisableOnRebuild

  • LkDisallowConversionFormats

  • LkReExtractOnRebuild

  • LkIsSecureSearch

For information about using these configuration variables, see Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

9.2.2.1 Link Patterns

The Link Manager component uses an extraction engine that references the link patterns defined in a resource table. These link patterns are rules that tell the extraction engine how to sort the different links, which links to filter out, which links to accept, and how to parse the links for more information.

To customize the DomainHome/ucm/LinkManager/resources/linkmanager_resource.htm resource table, you can add new rules or edit the existing default rules. Customize the table using standard component architecture. The table includes the following columns.


lkpName

The name of the pattern and the primary key of the table. Used mainly in error handling and to allow other components to directly target the override of a specified rule.

lkpDescription

An explanation of the purpose of the pattern.

lkpType

The initial screening of the URL:

  • Prefix: If the path begins with a specified parameter, the condition is met.

  • Contains: If the path contains a specified parameter, then the condition is met.

  • Service: If the URL contains a value for IdcService and if this value matches a parameter, the condition is met.

The extraction engine is a two-step engine. The 'prefix' and 'contains' types are used on the path part of the URL, while the 'service' type is used on the query string part of the URL.

lkpParameters

A comma-delimited list of patterns or parameters used by the type. The parameters are Idoc Script capable and are initially evaluated for Idoc Script. The engine uses the following rules for extracting the patterns from the parameters:

  • The parameter string is evaluated for Idoc Script.

  • The parameters are parsed using the comma separator. The result is a list of patterns.

  • Each pattern is XML decoded.

For example, one rule looks for a URL that begins with the resolved value of <$HttpRelativeWebRoot$> by setting lkpParameters to <$HttpRelativeWebRoot$>.

A later rule can look for a URL that literally begins with <$HttpRelativeWebRoot$> by setting the parameter to &lt;$HttpRelativeWebRoot$&gt;.

lkpAccept

Determines if the URL is accepted if the pattern is matched:

  • Pass: No determination is made. The 'action' is used to determine how this URL is processed.

  • Filter: The URL is rejected. This value is usually combined with lkpContinue=false to stop the processing.

  • Accept: The URL is accepted.

lkpContinue

Determines if the pattern processing engine continues to parse the URL. If true, the processing continues. If false, processing stops.

lkpLinkType

Specifies the URL type determined for this link.

lkpAction

A function defined in the LinkHandler class referring to a method in the LinkImplementor class used to further parse and evaluate the URL.

LinkImplementor can be class aliased or extended.

lkpOrder

The order in which the patterns are to be evaluated.

lkpEnabled

Determines if this rule is evaluated. It is calculated and evaluated during start up when the patterns are loaded.


You can add new rules or edit the existing default rules using standard component architecture.
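The three-step lkpParameters extraction described in the table can be sketched as follows. Idoc Script evaluation is stubbed out with a simple substitution here; the real engine resolves variables such as <$HttpRelativeWebRoot$> itself.

```python
from xml.sax.saxutils import unescape

def extract_patterns(parameters, idoc_vars):
    # 1. Evaluate the parameter string for Idoc Script (stubbed substitution).
    for name, value in idoc_vars.items():
        parameters = parameters.replace("<$%s$>" % name, value)
    # 2. Parse using the comma separator to get the list of patterns.
    patterns = [p.strip() for p in parameters.split(",")]
    # 3. XML-decode each pattern.
    return [unescape(p) for p in patterns]

params = "<$HttpRelativeWebRoot$>,&lt;$HttpRelativeWebRoot$&gt;"
print(extract_patterns(params, {"HttpRelativeWebRoot": "/cs/"}))
# -> ['/cs/', '<$HttpRelativeWebRoot$>']
```

Note how the XML-decoding step is what lets a rule match the literal text <$HttpRelativeWebRoot$> rather than its resolved value.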

9.2.2.2 Database Tables

Two database tables are maintained with Link Manager:

  • Managed Links Table: A link is stored in the Managed Links table if the pattern engine successfully processes it and determines that the link is acceptable. Each link in the table is assigned a unique class id (dLkClassId) and each row in the table has a unique GUID (dLkGUID). A single link can consist of multiple rows in the table if multiple resources define the link and each resource can independently break the link.

    For example, in Site Studio, you can define a single link by both a node and a content item. If the node is missing, the link breaks. If the content item is missing, the link breaks. In this case, there are two resources that do not depend on each other and each can break the link. Consequently, each resource is managed separately in the ManagedLinks table.

    To improve query execution performance, standard indexes are added to the dDocName and dLkResource columns in the Managed Links table. System administrators are responsible for adjusting these indexes to accommodate specific database tuning requirements in various system environments.

  • Link Reference Count Table: This table maps the content items to the number of times each is referenced in the ManagedLinks table. A content item in this table might not be a content item that is currently managed by Content Server. If there is an entry for a content item in this table, it only indicates that a link in the ManagedLinks table, as parsed by the pattern engine, has referenced the content item as a 'doc' resource.

    When a content item is checked in and a link references it, the link is marked as valid. When a link references a deleted content item, the link is marked as invalid. Notice that the dLkState column indicates the link's status as Y (valid) or N (invalid).

9.2.2.3 Link Manager Filters

The Link Manager component provides filters for parts of the pattern engine that allow customization of some very specific behavior. In general, the rules of the pattern engine are usually the ones to be modified. In certain circumstances Link Manager explicitly creates and uses filters to augment its standard behavior.

  • extractLinks Filter: Used during the extraction process when the extraction engine parses the accepted URL links. As links are extracted, Link Manager looks for specific HTML tags. However, other HTML tags might also contain relevant links. If so, use this filter to extract the additional links.

    The tag is passed to the filter as a cached object with the key HtmlTag. The value (or link) is passed back to the parser with the key HtmlValue. If the filter extracts extra information, be aware that the passed-in binder is flushed before being passed to the pattern engine; use the service.setCachedObject and service.getCachedObject methods to pass and retrieve the extra information, respectively.

    By default, it looks for the following HTML tags: <a>, <link>, <iframe>, <img>, <script>, and <frame>.

  • linkParseService Filter: Used during the extraction process when the pattern engine evaluates links that use the IdcService parameter. After evaluation, the link binder and service are provided for the linkParseService filter.

    The service contains the binder for the parsed URL and information map. Customize the values in the parsed URL binder by adjusting certain parameters or customize the information map (which tells the parseService method what parameters to extract from the URL binder and how to map the data to resource types).

  • sortAndDecodeLinks filter: Only available from the 'refresh' option. It is only called when users are refreshing the links. The service contains the 'LinkSetMap' which includes a sorted list of links contained in the ManagedLinks table. The refresh validates the Site Studio links and the existence of all links referring to 'doc' resources. You can create a component that augments the standard validation.
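The default tag set listed for the extractLinks filter can be illustrated with a small scan, shown here using Python's standard-library parser purely for illustration (the component itself is Java, and this is not its implementation).

```python
from html.parser import HTMLParser

# Default tags the extraction engine looks for, per the text above.
LINK_TAGS = {"a", "link", "iframe", "img", "script", "frame"}

class LinkScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag in LINK_TAGS:
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

scanner = LinkScanner()
scanner.feed('<a href="ssLINK/Doc">x</a><img src="pic.gif"><p>no link</p>')
print(scanner.links)   # -> ['ssLINK/Doc', 'pic.gif']
```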

9.2.3 Site Studio Integration


Important:

When using Site Studio, set the HasSiteStudio configuration variable value to true. This variable enables the Site Studio-specific patterns for parsing 'friendly' URLs for the pattern engine. For more information about the HasSiteStudio variable, see Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.


When configured to work with Site Studio, Link Manager obtains links from Site Studio by directly requesting a parsing of the links that Site Studio has identified. In return, Site Studio provides information about the links pertaining to its operation and components. In particular, Site Studio provides information about the node/section, if a content item is used, the state of the content item, the type of link (friendly, page, or node), and if the link is valid.

Site Studio does not load its project information when standalone applications are launched. Therefore, Site Studio links are not properly evaluated if a rebuild or index update cycle is started and completed by a standalone application.

When a user changes links using the Site Studio designer, Link Manager checks filter events. If a node is deleted, Link Manager marks all links using the deleted node as invalid, thus managing links that directly reference the node ID. Additionally, with information provided by Site Studio, Link Manager can accurately determine the state of the link.

Friendly URLs (links that do not reference the node ID or dDocName) are more difficult to manage and validate. When a node property changes, Link Manager marks all friendly links (both relative and absolute) that use the node as invalid and broken. Link Manager cannot retrace the parent chain to determine what part of the link was changed, how to fix it, or whether it is actually broken.

Site Studio uses two types of managed links:

  • Completely Managed Links: These are any links using the SS_GET_PAGE IdcService or links to nodes that include the following:

    • javascript:nodelink(Node,Site)

    • javascript:nodelink(Node)

    • ssNODELINK/Site/Node

    • ssNODELINK/Node

    Also links to pages that include the following:

    • ssLINK/Doc

    • ssLINK/Node/Doc

    • ssLINK/Site/Node/Doc

    • ssLink(Doc)

    • ssLink(Doc,Node)

    • ssLink(Doc,Node,Site)

    • javascript:link(Doc)

    • javascript:link(Doc,Node,Site)

  • Provisionally Managed Links: The following Site Studio links are managed only up to the point of a Site Studio node change. Use the 'refresh' option on the Managed Links Administration page to determine the state of these links. If the majority of links are of this form and nodes have changed significantly, refresh or recompute the links.

    • Absolute (or full URLs): http://site/node/doc.htm

    • Friendly links to nodes

      <!--$ssServerRelativeSiteRoot-->dir/dir/index.htm

      [!--$ssServerRelativeSiteRoot--]dir/dir/index.htm

      <%=ssServerRelativeSiteRoot%>dir/dir/index.htm

    • Friendly links to pages

      <!--$ssServerRelativeSiteRoot-->dir/dir/doc.htm

      [!--$ssServerRelativeSiteRoot--]dir/dir/doc.htm

      <%=ssServerRelativeSiteRoot%>dir/dir/doc.htm
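As an illustrative sketch only, the two categories above can be modeled as a toy classifier. The real pattern engine in Link Manager is far more involved; these regular expressions and the function name are assumptions for illustration, derived from the link forms listed in this section.

```python
import re

# Toy patterns for the completely managed Site Studio link forms listed above.
COMPLETELY_MANAGED = [
    re.compile(r"javascript:nodelink\("),   # javascript:nodelink(Node[,Site])
    re.compile(r"ssNODELINK/"),             # ssNODELINK/[Site/]Node
    re.compile(r"ssLINK/"),                 # ssLINK/[Site/][Node/]Doc
    re.compile(r"ssLink\("),                # ssLink(Doc[,Node[,Site]])
    re.compile(r"javascript:link\("),       # javascript:link(Doc[,Node,Site])
    re.compile(r"IdcService=SS_GET_PAGE"),  # SS_GET_PAGE service links
]

# Toy patterns for the provisionally managed forms: absolute URLs and
# 'friendly' links built from the ssServerRelativeSiteRoot variable.
PROVISIONALLY_MANAGED = [
    re.compile(r"^https?://"),
    re.compile(r"ssServerRelativeSiteRoot"),
]

def classify(link):
    """Return the management category for a link string (illustrative)."""
    if any(p.search(link) for p in COMPLETELY_MANAGED):
        return "completely managed"
    if any(p.search(link) for p in PROVISIONALLY_MANAGED):
        return "provisionally managed"
    return "unmanaged"
```

The distinction matters operationally: links in the first category survive node changes, while links in the second category should be refreshed or recomputed after significant node changes.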

9.2.4 Link Administration

This section covers the following topics:

9.2.4.1 Alternative Refresh Methods

In addition to the refresh activities available on the Managed Links Administration page, you can use alternative methods to update the Managed Links and Link Reference Count tables:

  • Using the Repository Manager, perform a collection rebuild. This process rebuilds the entire search index, and the old index collection is replaced with a new index collection when the rebuild successfully completes.

    If Repository Manager is opened as a standalone application, the alternate refresh method can only be used when the HasSiteStudio configuration variable is disabled. When information is requested from Site Studio and the Repository Manager is in standalone mode, Site Studio is not initialized completely and does not return accurate information. This issue does not occur if the Repository Manager applet is used.

  • If custom fields have been added while content is in the system, use the Configuration Manager Rebuild Search Index to rebuild the search index.

9.2.4.2 Recomputing and Refreshing Links in the ManagedLinks Table

To reevaluate the links in the ManagedLinks table:

  1. Choose Administration then Managed Links Administration from the Main menu.

  2. On the Managed Links Administration page, use an option to manage links:

    • To recompute links: Click Go next to the Recompute links option. This refresh activity resubmits each link in the ManagedLinks table to the patterns engine. The link is evaluated according to the pattern rules and updated in the table. A link can be reclassified as another type of link depending on which patterns have been enabled or disabled. Use this option if the pattern rules have changed.

    • To refresh links: Click Go next to the Refresh links option. This activity checks each link in the ManagedLinks table and attempts to determine if the link is valid. For Site Studio links, the links are sent to the Site Studio decode method to determine what nodes and content items are used by the link. It also determines if the link is valid and is indeed a Site Studio link.

      Use this option after many changes to Site Studio node/section properties. Link Manager cannot completely track changes to 'friendly' Site Studio links. By refreshing or forcing a validation of the links, Link Manager can more accurately determine which links are broken and which are valid.

    • To refresh the reference counts: Click Go next to the Refresh option. This activity flushes the LinkReferenceCount table and queries the ManagedLinks table for the content item references. Both the 'recompute' and 'refresh' table activities try to maintain the LinkReferenceCount table. However, on occasion, this table can become out of sync; this option, when used on a quiet system, rebuilds the table.

    • To cancel a refresh activity: Click Go next to the Abort activity option. Only one refresh activity can be active at any one time.

    The Status area indicates how many links have been processed and how many errors have been encountered.

    Only one refresh activity can be active at any one time. Wait until the refresh activity completes and the 'Ready' status is displayed before attempting another refresh activity.
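The single-activity rule described above can be sketched with a non-blocking lock. This is a minimal illustration of the constraint, not the actual server-side implementation; the class and method names are assumptions.

```python
import threading

class RefreshController:
    """Illustrative guard: at most one refresh activity at a time."""

    def __init__(self):
        self._lock = threading.Lock()
        self.status = "Ready"

    def start(self, activity):
        # Refuse to start if another refresh activity is already running.
        if not self._lock.acquire(blocking=False):
            return False
        self.status = f"Running: {activity}"
        return True

    def finish(self):
        # Mark the activity complete and allow the next one to start.
        self.status = "Ready"
        self._lock.release()
```

A caller would check the return value of start() and, if it is False, wait for the 'Ready' status before retrying, mirroring the behavior of the Managed Links Administration page.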


41 Office 2007/2010 Considerations

This chapter provides a number of considerations related to dynamic conversion of Office 2007/2010 files.

This chapter covers the following topics:

41.1 All Office Applications

Please note the following conversion limitations that currently apply to all Office 2007/2010 applications:

41.2 Word 2007/2010

Please note the following conversion limitations that currently apply to Word 2007/2010 documents:

41.3 Excel 2007/2010

Please note the following conversion limitations that currently apply to Excel 2007/2010 spreadsheets:

41.4 PowerPoint 2007/2010

Please note the following conversion limitations that currently apply to PowerPoint 2007/2010 presentations:

41.5 Examples of Unsupported Objects

This section provides some examples of Office 2007/2010 objects that cannot be converted at this point.

Figure 41-1 Smart Art

Smart Art, an Office 2007 object that cannot be converted

Figure 41-2 Picture Styles and Effects

Picture Styles and Effects, Office 2007 objects that cannot be converted

Figure 41-3 Word Art

Word Art, an Office 2007 object that cannot be converted.

Figure 41-4 Equations

Equations, an Office 2007 object that cannot be converted

Figure 41-5 Controls

Controls, an Office 2007 object that cannot be converted

Figure 41-6 Data Bars with Conditional Formatting, Color Scales, and Icon Sets

Data Bars with conditional formatting, Office 2007 objects

Figure 41-7 3D Effects in PowerPoint

3D effects in PowerPoint, Office 2007 object

Figure 41-8 Complex Gradients

Complex Gradients, an Office 2007 object that cannot be converted

Figure 41-9 Complex Shapes with Varying Fills (1)

Complex Shapes, Office 2007 object that cannot be converted

Figure 41-10 Complex Shapes with Varying Fills (2)

Complex Shapes, Office 2007 object that cannot be converted

Oracle Legal Notices

Copyright Notice

Copyright © 1994-2014, Oracle and/or its affiliates. All rights reserved.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

License Restrictions Warranty/Consequential Damages Disclaimer

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

Warranty Disclaimer

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

Restricted Rights Notice

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

Hazardous Applications Notice

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Third-Party Content, Products, and Services Disclaimer

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Alpha and Beta Draft Documentation Notice

If this document is in preproduction status:

This documentation is in preproduction status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.

Oracle Logo


Part VII

Desktop Management

This part provides information on managing Oracle WebCenter Content: Desktop.

Part VII contains the following chapter:

Managing Content Conversions

33 HTML Conversion Templates

This chapter provides information about HTML (graphical user interface) templates and how to use the template editor for Dynamic Converter.

This chapter covers the following topics:

33.1 About Templates

A template is a set of formatting instructions you can associate with a source document. When you check a document into Oracle WebCenter Content Server, you can either associate it with a default conversion template or create a new customized template.

The following template options are available:

  • HTML Conversion templates: These are the newest template types, which can be configured in a cross-platform editor.

  • Classic HTML Conversion templates: These were previously known as GUI templates. There is no direct migration path from the GUI templates to the HTML Conversion templates. If you select a Classic HTML Conversion template, you may also select a Classic HTML Conversion layout.

  • Script templates: These run with default settings, and can be edited with a text editor.

After you have chosen a template type to associate with your document, and named the template, you can edit the template. There are two template editing utilities for customizing the appearance of native documents converted to an HTML format. These template editors are used to control the look and feel of the web pages you create.

  • The HTML Conversion Editor is used to edit the HTML Conversion Templates.

  • The Classic HTML Conversion Editor is used to edit the Classic HTML Conversion Templates and Classic HTML Conversion Layouts.

To turn a source document into a web page, you can use the default settings to perform a conversion. Alternatively, you can create a template, associate it with the document, and then edit the template, using one of the two template editor options.

The following sections describe tasks common to both template editors:

33.1.1 Creating a New HTML Conversion Template

Use the Dynamic Converter New HTML Conversion Template Form to create a new HTML Conversion Template. To access this page, click Create New Template on the Dynamic Converter Admin page.

Figure 33-1 New HTML Conversion Template Form

New HTML Conversion Template Form


Note:

For more information about checking content into the Content Server, see Oracle Fusion Middleware Using Oracle WebCenter Content.


To create a new HTML Conversion template:

  1. Open the Dynamic Converter Admin page.

  2. Click Create New Template.

  3. In the New HTML Conversion Template form, select the template format: HTML Conversion Template or Classic HTML Conversion Template.

  4. Specify all other required metadata for the template.


    Note:

    The template type is set by default to HTML Conversion Template. The Classic HTML Conversion Template is the former GUI Template.


  5. When you have completed the form, click Check In to check the HTML Conversion template file into the Content Server.

After checking a new HTML Conversion template into the Content Server, you can edit it using the Template Editor (see Section 33.1.2).

33.1.2 Editing an Existing HTML Conversion Template

The HTML Conversion Template Editor requires Internet Explorer on a Windows XP or greater system. To edit an existing HTML Conversion Template or a Classic HTML Conversion template (that is, one that is already checked into the Content Server):

  1. Open the Dynamic Converter Admin page.

  2. Click Edit Existing Template.

  3. On the Edit Templates page, select a template from the list of HTML Conversion templates in the Content Server.

    If a known HTML Conversion template is not included in the list of available templates, then it was most likely not assigned the correct HTML Conversion Template type when it was checked into the Content Server (see Section 32.4). You then need to open the content information page of the checked-in template and update its template type.

    The Edit Template button does not become available until you specify the name of an existing template.

  4. Click the Edit Template button. The HTML Conversion Template Editor is downloaded to your machine. With some browsers, such as Firefox, you may be prompted for how to handle the file dc_hcmapedit.jnlp. The correct way to open this file is with Java (TM) Web Start Launcher (default).

    The Template Editor is started. If you have not run the editor before, it is installed first and you may need to confirm a few prompts.

You can now edit the HTML Conversion template in the Template Editor.

You can also edit an existing HTML Conversion template from the Template Selection Rules page.


Note:

The Template Editor comes with its own extensive help system, which can be called from the application's user interface.


33.2 HTML Conversion Template Editor

This section provides a description of the HTML Conversion Template Editor. More detail can be found in Oracle WebCenter Content Template Editor Guide for Dynamic Converter.

The HTML Conversion Editor allows you to set various options that affect the content and structure of the output. The HTML Conversion Editor is Java-based and can run in any browser instance where a JRE is present.

The following topics are covered in this section:

33.2.1 Formatting Different File Types

The top item in the left-hand navigation pane of the HTML Conversion Editor allows you to set up custom formatting for different file types. Each file type uses a layout: either the default layout or one created under Output Page Layouts.

These file types have slightly different options for formatting:

  • Text/Word Processing: Allows you to set options for bullets, footnotes and endnotes, handling character styles and embedded graphics, and setting pagination.

  • Spreadsheets: Allows you to set up section formatting and labeling, display grid lines, and size embedded graphics.

  • Presentations: Allows you to set up section formatting and labeling, and size slides.

  • Images: Allows you to set up section formatting and labeling, and size images.

  • Archives: Allows you to display either Filenames (the names of files and folders in the archive will be output) or Decompressed files (the file names will be output as links to the exported files). The exported files use the same template as the root conversion.

  • Database: Allows you to set up section formatting and labeling, and set the number of records per page.

33.2.2 Adding Document Properties

This item allows you to add predefined and custom properties. You can assign default values, metatag names, and output formats to these properties.

By default, no document properties are defined. To include them in the output from the conversion, each desired document property must first be defined here. They must then be added to the output by inserting them into page layouts defined in the Output Page Layouts item. The most common predefined properties are as follows:

  • Primary author

  • Title

  • Subject

  • Keywords

  • Content

Several other, less common predefined properties are also available.

For a custom property, you can create a descriptive name, and then assign default values, metatag names, and output formats.

33.2.3 Adding Text Elements

Text elements allow the user to insert strings into the output. Each text element is defined as a name-value pair with an optional output format that will be used to format the text.

If an output format is not specified, the text will be inserted into the output as-is, with no additional markup.
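As a loose illustration (the element name, value, and class are hypothetical, not product defaults), the same text element could appear in the output either wrapped by its output format or inserted as-is:

```html
<!-- Text element "Disclaimer" rendered with an output format
     that wraps the value in a styled paragraph -->
<p class="disclaimer">Draft copy - not for distribution</p>

<!-- The same text element with no output format assigned:
     the value is inserted with no additional markup -->
Draft copy - not for distribution
```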

33.2.4 Adding Navigation Elements

Navigation elements allow you to have navigation links generated in the output. There are three kinds of navigation elements:

  • Document Navigation: This allows you to link to various items in the source document based on the document's structure. A common use of this type of navigation is to create links to all the paragraphs marked with outline level 1 (such as "Heading 1" paragraphs) in the document. Before using this form of navigation, link mapping rules must first be added. Link mapping rules establish which parts of the input document will be used to create links.

  • Page Navigation: This provides a way to link to certain key pages in the output (first page, next page, etc.). It also provides a way to link to external pages.

  • Section Navigation: This provides navigation for multi-section documents, such as spreadsheets and presentations.

Once you have added one of these, the left-hand side of the editor displays expanded levels. You can specify information about the link and the link set markup and formatting, and create link mapping rules. Link mapping rules allow you to match on a paragraph outline level or paragraph style name in order to generate navigation based on these two aspects of the source document. Once you define rules, click the Link Mapping Rules page again to set the sequence of the rules. The mapping rules are ordered so that the first rule that matches is the one that is applied.
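For instance, a document navigation element whose mapping rule matches outline level 1 might generate a link set along these lines (the anchors, labels, and class name are illustrative only, not the product's actual output):

```html
<!-- Hypothetical link set built from "Heading 1" paragraphs
     in the source document -->
<ul class="docnav">
  <li><a href="#heading1">Introduction</a></li>
  <li><a href="#heading2">Installation</a></li>
  <li><a href="#heading3">Configuration</a></li>
</ul>
```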

33.2.5 Configuring HTML Settings

There are six major categories that you can configure:

  • HTML Settings: Allows you to set the HTML DOCTYPE and Language string.

  • CSS options: Allows you to specify whether Cascading Style Sheet ("CSS") formatting will be used, and if so, the method of CSS presentation. By default, the CSS is embedded in the HTML of each output file. You may also choose to output CSS styles in a separate file. The external stylesheet option allows you to specify a stylesheet with user-generated styles that will be referenced by the conversion.

  • Character set: Allows you to specify which character set should be used in the output file. Source documents that contain characters from many character sets will look best only when this option is set to Unicode or UTF-8. You may also select a character to be used when a character cannot be found in the output character set (unmappable).

  • Graphics output: Allows you to specify the format of the graphics produced by the technology: GIF, JPG, PNG, or none. Other options in this section allow you to specify quality and sizing of the graphic output.

  • Link options: Allows you to specify the frame or window in which the browser opens links from the source document. This value is used for the target attribute of the links the technology generates, and is applied to all such links encountered in the source document.

  • Output formatting: This option causes the technology to write new blank lines to the output strictly to make the generated HTML more readable and visually appealing. Setting this option makes it easier to read the generated markup in a text editor; it does not affect the browser's rendering of the document. You can also include information about source document style names and how they are mapped, so that the user can see what format has been mapped to a particular paragraph or text sequence by mousing over it.
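To see how several of these settings surface in the generated markup, consider this sketch of an output file (the DOCTYPE, stylesheet name, and link are example values, not product defaults):

```html
<!-- HTML Settings: chosen DOCTYPE and language string -->
<!DOCTYPE html>
<html lang="en">
<head>
<!-- CSS options: styles written to a separate file instead of
     being embedded in each output file -->
<link rel="stylesheet" href="user-styles.css">
<!-- Character set: output encoded as UTF-8 -->
<meta charset="UTF-8">
</head>
<body>
<!-- Link options: the configured target applied to a link
     encountered in the source document -->
<a href="chapter2.htm" target="_blank">Chapter 2</a>
</body>
</html>
```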

33.2.6 Adding Output Markup Items

Markup items are HTML fragments that may be inserted directly into the output HTML as part of a page layout (see Section 33.2.9). Each markup item is a name/value pair. The name is what will appear in the screens for editing page layouts. The value is a block of HTML that will be inserted into the output HTML wherever the markup item appears in a page layout.

Click the Add button and specify a Name to use for referencing this piece of markup. Then enter the HTML into the Markup text box.
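For example, a markup item with the hypothetical name "Copyright" could hold a fragment such as the following, which is then inserted verbatim wherever the item appears in a page layout (for instance, at Page Bottom):

```html
<!-- Value of the markup item "Copyright" (illustrative) -->
<div class="copyright">
  Copyright 2013 Example Corp. All rights reserved.
</div>
```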

33.2.7 Adding Output Text Formats

Output text formats define text and formatting attributes of output document text. This allows you to standardize the look of the output despite differing formatting styles used by the various authors of the source documents. Text formats are only applied to text from word processing files. They cannot be used to change the formatting of text that is rendered as part of any graphics generated by the conversion. They are also not applied to text inside spreadsheets.

  1. Click the Add button to display the Markup tab. Then specify a Name to use for referencing this format.

  2. Under Tag name, enter the HTML paragraph level tag to put around paragraphs using this format. Note that any tag name may be entered here, whether it is legal or not. Only the tag name should be entered, not the surrounding angle brackets ("<" and ">"). The paragraph tag ("p") is the default.

  3. Under Custom Attributes, you can enter attributes that apply to the tag whose name was specified by the Tag name option above. To set the name and value of the new attribute, just click on them in the Custom Attributes table.

  4. Custom Markup allows you to enter HTML and/or regular text that will be inserted before and/or after every paragraph using this format.

  5. Other formatting options on the Markup tab include inserting new lines into the HTML before the paragraph to make it easier to view the HTML of the output of the conversion (only written if the Format HTML source for readability option is set on the Output Pages screen); specifying that a new output page is created every time this format is applied to a paragraph; and whether or not the first instance of this format should start a new page (to help avoid empty or mostly empty pages at the beginning of the output).

  6. On the Formatting tab, you can choose how to specify the formatting for paragraphs. If you click on the Use external CSS class option, a text field becomes available in which you must enter the name of a class from an external CSS file. The URL of the external CSS file is specified with the External user stylesheet option set on the Output Pages page.

    If you do not specify an external stylesheet, you can choose to format the document by observing the original source document formatting or forcing other formatting options. Character, Paragraph and Border formatting for an array of options can be set to one of four values:

    • Always off: Forces the attribute to always be off when formatting the text.

    • Always on: Forces the attribute to always be on when formatting the text.

    • Inherit (default): Takes the state of the attribute from the source document. In other words, if the source document had the text rendered with bold, then the technology will create bold text.

    • Do not specify: Leave the formatting unspecified.
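Putting the Markup tab options together: a format whose tag name is h1, with a custom class attribute and custom markup inserted before each paragraph, might produce output like this (the tag, attribute, and markup values are illustrative choices, not defaults):

```html
<!-- Custom Markup configured to appear before each paragraph
     that uses this output text format -->
<hr class="before-heading">
<!-- Tag name "h1" with custom attribute class="report-title" -->
<h1 class="report-title">Quarterly Summary</h1>
```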

33.2.8 Adding Format Mapping Rules

Once you have defined text formats, you must define rules to map output text formats to output text.

  1. Select Format Mapping Rules and click Add Format Mapping Rule.

  2. In the Format drop-down box, select one of your defined text formats.

  3. In the Match on drop down box, you may select one of the following paragraph formatting options for the rule to check:

    • Outline level: Match the outline level specified in the source document. Application-predefined "heading" styles typically have corresponding outline levels applied as part of the style definition.

    • Style name: Match the paragraph or character style name.

    • Is footnote: Match any footnote.

    • Is endnote: Match any endnote.

    • Is header: Match any document header text.

    • Is footer: Match any document footer text.

  4. Paragraph outline level: If Match on is set to Outline level, this defines which outline level to match. This option is ignored for all other matching rules.

  5. Style name: If Match on is set to Style name, this defines which source document paragraph or character style name to match. When matching on style names, you must supply a style name here; no default value is provided. The name must exactly match the style name from the source document, and matching is case-sensitive. This option is ignored for all other matching rules.

  6. When you have finished defining rules, you can go back to the Format Mapping Rules page and click on Move Up or Move Down to arrange the sequence in which the rules are checked. The mapping rules should be ordered so that the first rule that matches is the one, and only one, that is applied.

33.2.9 Adding Output Page Layouts

The Output Page Layouts section allows you to define the content of a set of output files. Page layouts are used to organize how the various pieces of the output are arranged.

  1. Click Add to add a new output page layout. On the next page, enter a name to use to refer to this layout (required). Once this has been done, click on the left side of the editor. The name you have just entered is displayed in the tree view. Click on this to expand the levels underneath.

  2. Click the box in front of Include navigation layout if you want to generate a single file containing markup and links to the document content specified in the page layout. This allows the user to create a "table of contents" page. You will need to define a navigation element under the Navigation Layout.

  3. The first item under the name of your output page layout is <title> Source. This lets you select where to get the value to use for the HTML <title> tag. Select Section Name, Text Element, Property, or Output Text Format (the last three must be previously defined). Click back on the <title> Source page to order the sequence of these sources.

  4. The Navigation Layout triggers the creation of a separate file with nothing but links to the actual document content. In order to generate this, you must have previously defined either a Document Navigation, Page Navigation, or Section Navigation element under Generated Content (see Section 33.2.4). In the expanded level under Navigation Layout, you can further select Markup Items to be placed in the Head and/or Body of the navigation page.

  5. The top level of the Page Layout section lets you set pagination options. The six options under Page Layout let you define how output documents are arranged. These six options are as follows:

    • Head: Items placed in the HTML <head> of each output file.

    • Page Top: Items to be placed at the top of each output page. For example, links to the first page, previous page and next page in the output.

    • Before Content: Items to be output before the document content.

    • Before Section: Items inserted before each section of a multi-section document. Note that this is not applicable to word-processing documents.

    • After Content: Items to be output after the document content.

    • Page Bottom: Items placed at the bottom of each output page; for example, a copyright notice.

    After selecting items to display for these six options, click at the top level of each one to set the order in which they will appear.
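Assembled, a generated output file follows the order of these six areas; a skeleton of one such page might look like this (the title and comments are placeholders showing where each area's items land):

```html
<html>
<head>
  <!-- Head: markup items, plus the <title> taken from the
       configured <title> Source -->
  <title>Quarterly Summary</title>
</head>
<body>
  <!-- Page Top: for example, first/previous/next page links -->
  <!-- Before Content -->
  <!-- Before Section: multi-section documents only -->
  <!-- ...converted document content... -->
  <!-- After Content -->
  <!-- Page Bottom: for example, a copyright notice -->
</body>
</html>
```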

33.2.10 Previewing Your Content

The HTML Conversion Editor provides two options for previewing your content. They are both located in the Tools menu at the top of the interface.

  • View XML Structure: Click this option to display the XML structure viewer, which shows a text-based XML version of your chosen template options.

  • Set preview document: Click this option to enter the Content ID of the source document, and then click Preview Conversion. Your browser will open and display how the current template settings would affect the converted output.

33.2.11 Saving Your Template

When you exit the Template Editor, you are prompted to supply an XML file name and a location in which to store your template.

33.3 Classic HTML Conversion Template Editor

The following topics are covered in this section:

You can select the Classic HTML Conversion Editor on the Edit Templates page. The first time the Edit Template button is clicked on the Edit Templates page or the Template Selection Rules page, the Classic HTML Template Editor is downloaded onto the client machine. The Classic HTML Conversion Template Editor is an ActiveX control that must be run on Microsoft Windows with a 32-bit version of Internet Explorer; 64-bit versions of Internet Explorer are not supported.

When you specify the name of a recognized Classic HTML Conversion template on the Edit Templates page or Template Selection Rules page, the Edit Template button is activated. Click this button to open the Template Editor. The Template Editor features a template preview area and four editing buttons: Element Setup, Formatting, Navigation, and Globals. Each button opens a property sheet that contains numerous settings for your template, all of which can be edited in a graphical user interface.

Figure 33-2 Classic HTML Conversion Template Editor

The Classic HTML Conversion Template Editor

Each source document that you plan to convert to a web page using Dynamic Converter contains individual formatting attributes. You may have prepared styles in your source documents and assigned those styles to a specific typeface or font. Or, you may have manually formatted the content inside each source document (for example, headings in 14-point bold, sub-headings in 12-point italic, etc.). Classic HTML Conversion templates in Dynamic Converter can recognize both.

Classic HTML Conversion templates and the Template Editor can recognize styles as well as manually formatted documents. Once an element is assigned to these individual parts of your source document, you can then begin modifying the appearance and functionality of those elements using the Template Editor. When source documents are converted into web pages, it is the elements (stored in the template) that ultimately control how the web page will appear.

The Template Editor includes a useful element screentip: place your cursor over a piece of text in the preview document to see the element that has been assigned to that text.

Many of the settings in the Template Editor apply to a single element. The more you define each element, the more control you can exert over the converted web page, all without ever touching the source document.


Note:

The Template Editor comes with its own extensive help system, which can be called from the application's user interface.


33.3.1 Template Elements

Nearly every source document has a title, a heading, and body text. Each one will likely have a unique font size and weight. The Template Editor can be used to assign unique elements to each piece of text and save that information in the Classic HTML Conversion template.

Figure 33-3 Elements in Template Editor

Template Editor dialog with the Elements tab selected

Elements are created from ranks, styles, or patterns:

  • Rank: Used by the Template Editor to identify the structure of the content of a document based on the hierarchy of that content. Ranks can be used with patterns in Element Setup to prepare a template for editing.

  • Style: A set of formatting characteristics with an assigned name that defines how text appears in a document. Styles can be assembled together to make up a style sheet or Cascading Style Sheet (CSS).

  • Pattern: A set of text attributes in a source document that the Template Editor can identify and associate with an element. If a manually-formatted source document has headings in Arial, 18-point, bold, you can base a pattern on these attributes and associate this pattern with an element. You can then use this element to format the content associated with the pattern.

You will find that styles in your source documents are the most useful and manageable for conversion purposes. As such, you should first try to implement styles in your source documents and perhaps distribute a style sheet to your content contributors.

Dynamic Converter templates are designed to be interchangeable with other Content Server related products, such as Content Publisher. However, features that apply to reference pages in a web publication (for example, adding a table of contents for multiple source documents) do not apply and will not work in Dynamic Converter.


Note:

The Template Editor includes a separate and comprehensive online Help system (which is downloaded with the Template Editor). Each dialog box and property sheet in the Template Editor includes a Help button that describes that particular Element. To access these topics, click Help.


33.3.2 Sample Classic HTML Conversion Templates

A number of sample Classic HTML Conversion templates are available for download from Oracle Technology Network at http://www.oracle.com/technetwork/indexes/samplecode/. After downloading, check the sample into the Content Server and begin using it with the Template Editor.

33.3.2.1 Academy

Academy sample Classic HTML Conversion template

33.3.2.2 Acclaim CSS

Acclaim CSS sample Classic HTML Conversion template

33.3.2.3 Account

Account sample Classic HTML Conversion template

33.3.2.4 Adagio CSS

Adagio CSS sample Classic HTML Conversion template

33.3.2.5 Administration

Administration sample Classic HTML Conversion template

33.3.2.6 Analysis

Analysis sample Classic HTML Conversion template

33.3.2.7 Archive CSS

Archive CSS sample Classic HTML Conversion template

33.3.2.8 Blank

Blank sample Classic HTML Conversion template


Note:

This is the default template.


33.3.2.9 Business

Business sample Classic HTML Conversion template

33.3.2.10 Ceremonial

Ceremonial sample Classic HTML Conversion template

33.3.2.11 Courtesy

Courtesy sample Classic HTML Conversion template

33.3.2.12 Executive

Executive sample Classic HTML Conversion template

33.3.2.13 Introduction CSS

Introduction sample Classic HTML Conversion template

33.3.2.14 Lotus 1-2-3

Lotus 1-2-3 sample Classic HTML Conversion template

33.3.2.15 Lotus Freelance

Lotus Freelance sample Classic HTML Conversion template

33.3.2.16 MS Excel

MS Excel sample Classic HTML Conversion template

33.3.2.17 MS PowerPoint

MS PowerPoint sample Classic HTML Conversion template

33.3.2.18 Purple Frost

Purple Frost sample Classic HTML Conversion template

33.3.2.19 Retrofied! CSS

Retrofied! CSS sample Classic HTML Conversion template

33.3.3 Migrating From Script Templates to Classic HTML Conversion Templates

The script templates (see Chapter 35, "Script Templates") in earlier versions of Dynamic Converter were hand-coded text files that contain elements, macros, pragmas, indexes, and Idoc Script. A basic script template might look something like this:

<HTML>
<BODY>
<P>Here is the document you requested.
{## INSERT ELEMENT=Property.Title} by
{## INSERT ELEMENT=Property.Author}
<P>Below is the document itself
{## INSERT ELEMENT=Body}
</BODY>
</HTML>

Dynamic Converter now also supports XML-based Classic HTML Conversion templates designed for use with the GUI-driven Template Editor (see Section 33.3).

A basic Classic HTML Conversion template might look something like this in the Template Editor.

Figure 33-4 Classic HTML Conversion Template in Template Editor

Classic HTML Conversion template in the Template Editor

As a result of these differences, there is no automated upgrade process from earlier script templates to the current Classic HTML Conversion templates. We can, however, recommend a migration path so that you can begin using Classic HTML Conversion templates in Dynamic Converter.

33.3.3.1 Updating an Old Template

To update a script template from an earlier version of Dynamic Converter to the Classic HTML Conversion template format:

  1. Open the Dynamic Converter Admin page.

  2. Create a new Classic HTML Conversion template.

  3. Click Template Selection Rules on the Dynamic Converter Admin page.

  4. On the Template Selection Rules page, highlight the rule associated with your previous template (the script template) and then scroll down to the "Template and layout for selected rule" area.


    Note:

    Rules that were created in an earlier version of Dynamic Converter (prior to version 6.1) will appear as a numbered rule in this version of Dynamic Converter. You can continue using that rule or delete it and re-create the rule in Dynamic Converter 11gR2 (you cannot rename a rule).


    You may want to modify the criteria assigned to your previous rule using the additional metadata fields available in Dynamic Converter. See Section 31.3.

  5. From the Available Templates menu, select the Classic HTML Conversion template that you created in Step 2 (templates are listed by content ID).

  6. Click Edit Template to open the Classic HTML Conversion Template Editor.

  7. In the Template Editor, click Change Preview to select a source document (by content ID) to preview your template with.

  8. Re-create the settings from your previous script template using the Template Editor (see Section 33.3). This will likely be the most time-consuming part of the migration process. You may want to open another web browser and preview a dynamically converted document that used the previous template, so that you can compare the templates as you work. Click OK to close the Template Editor when you are finished making changes.

  9. Enter the content ID of your previous script template in the Layout field (so that you can turn a former script template into a layout template that is used with the Classic HTML Conversion template).

  10. Click Update to associate your new Classic HTML Conversion template and layout template with your template selection rule.

  11. Search for the layout template (former script template) in the Content Server, check it out, and open it in a text editor.

    Make the following changes:

    • Insert the following token at the top of your file, before the first <HTML> tag:

      <!--TRANSIT - CUSTOMLAYOUT(TOP)-->
      
    • Insert the following token between the HTML <HEAD> tags:

      <!--TRANSIT - CUSTOMLAYOUT(HEAD)-->
      
    • Insert the following token in the HTML <BODY> tag:

      %%TRANSIT-BODYATTRIBUTES%%
      
    • Replace your existing Insert Body Element tag with the following token (this token will replace most of your previous element settings):

      <!-- TRANSIT - CUSTOMLAYOUT(BODY) -->
      
    • Remove all references to elements, macros, pragmas, and indexes.

    • Leave Idoc Script tags in place (those that call outside files or services).

  12. Save your new layout template and check it into the Content Server.


    Note:

    Unlike script templates, layout templates in the Content Server do not require the HCST file extension.


33.3.3.2 Sample of Newly Converted Template (From a Pre-6.0 Version)

To update an earlier Dynamic Converter script template (prior to version 6.0) to the current version Classic HTML Conversion template, you will need to recreate your original template settings in the Template Editor (see Section 33.3) and then turn your previous script template into a layout template. While all Idoc Script tags can remain, you will need to remove the syntax for elements, macros, pragmas, and indexes. These values are replaced with template tokens, in particular the CUSTOMLAYOUT(BODY) token, which represents nearly all of the settings made in the Template Editor.

The following example illustrates a very simple script template created in an earlier version of Dynamic Converter (prior to version 6.0) that is turned into a layout template in the current version. All element formatting, of course, must be recreated in the new Template Editor. (Bold text indicates a tag that is replaced.)

Example 33-1 Original Script Template

<html>
<head>
<title>
{## insert element=property.title suppress=tags}
</title>
<$defaultPageTitle="Converted Content"$>
<$include std_html_head_declarations$>
</head>
<body>
<$include body_def$>
<$include std_page_begin$>
<$include std_header$>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr><td>
{## INSERT ELEMENT=Property.Title}
</td></tr>
<tr><td>
{## INSERT ELEMENT=Body}
</td></tr>

</table>
<$include std_page_end$>
</body>
</html>

Example 33-2 Migrated Layout Template

<!-- TRANSIT - CUSTOMLAYOUT(TOP) -->
<html>
<head>
<!-- TRANSIT - CUSTOMLAYOUT(HEAD) -->
<$defaultPageTitle="Converted Content"$>
<$include std_html_head_declarations$>
</head>
<body %%TRANSIT-BODYATTRIBUTES%%>
<$include body_def$>
<$include std_page_begin$>
<$include std_header$>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr><td>
<!-- TRANSIT - CUSTOMLAYOUT(BODY) -->
</td></tr>
</table>
<$include std_page_end$>
</body>
</html>

31 Template Rules

This chapter explains how to manage Dynamic Converter template rules, assign metadata criteria to a rule, and choose a template for a rule.

This chapter covers the following topics.

31.1 About Template Rules

A rule is a set of instructions that drive the conversion process in Dynamic Converter. These instructions identify source documents in the Oracle WebCenter Content Server and then determine whether or not these documents should be converted based on their metadata (content ID, type, author, and so on) and file type. The rule then requests that the document be converted using the template associated with the rule (for more on templates, see Chapter 32, "Conversion Templates"). You can have more than one rule in Dynamic Converter. If this is the case, the first rule to match the source document's metadata is used for dynamic conversion. Depending on the system configuration, other matching rules may also be applied.

The Template Selection Rules page allows you to add, remove, and reorganize rules; specify the criteria (metadata) to base a rule on; and assign a template (or templates) to the rule.

A number of features have come together to form the Template Selection Rules page. You can add multiple rules and then change the order in which those rules will apply to source documents. You can select a number of metadata fields to base a rule on (and add even more fields using the configuration page). Lastly, you can assign a template (or templates) to the rule and then edit those templates using the Edit Template button.

31.2 Managing Your Template Rules

The top section of the Template Selection Rules page enables you to manage the template rules.

31.2.1 Adding a Rule

To add a new template rule:

  1. Open the Dynamic Converter Admin page.

  2. Click Template Selection Rules.

  3. On the Template Selection Rules page, type a name for your rule in the New rule name text box (under the Template Selection Rules heading).

  4. Click Add New Rule.

    When your rule is highlighted, you will notice that the criteria and template fields for the rule are blank. You can start entering the desired metadata criteria and template for this rule right away.

  5. Click Update at the bottom of the Template Selection Rules page.

31.2.2 Deleting a Rule

To delete a template rule from the Template Selection Rules list:

  1. Open the Dynamic Converter Admin page.

  2. Click Template Selection Rules.

  3. On the Template Selection Rules page, highlight the rule to be deleted and click Delete Rule.

  4. Click Update at the bottom of the Template Selection Rules page.


    Important:

    Deleting a rule will remove all of the settings (metadata criteria and template) for that rule. You cannot undo this operation.


31.2.3 Reordering the Rules

To change the order in which your template rules are processed:

  1. Open the Dynamic Converter Admin page.

  2. Click Template Selection Rules.

  3. On the Template Selection Rules page, do either of the following:

    • To move a rule up the list, where it is prioritized over other rules, highlight the rule and click Move Up. Then click Update.

    • To move a rule down the list, where it will receive a lower priority, highlight the rule and click Move Down. Then click Update.

31.3 Assigning Metadata Criteria to a Rule

When assigning conversion templates to content items, you need to make sure that the metadata specified here matches the metadata assigned to your source documents. You can verify this by opening the content information page for your source documents in the Content Server.

To assign metadata to a template selection rule:

  1. Open the Dynamic Converter Admin page.

  2. Click Template Selection Rules.

  3. On the Template Selection Rules page, choose a metadata field from the first Field list (under the "Criteria for selected rule" heading). You may choose Type, Author, Title, Content ID, or a number of other fields.

  4. In the Value text box, enter the metadata that you would like your rule to target.

    You can select the metadata value from the menu to the right of the Value text box. You can also use wildcards to specify a metadata value.

  5. If desired, choose a second and third metadata field for your rule.

    There will always be an "AND" relationship between the metadata fields, which means that only those content items that meet all criteria are converted by this rule.

    The maximum number of criteria that you can specify for each rule is controlled by a setting on the Dynamic Converter Configuration Page.

  6. Click Update on the bottom of the Template Selection Rules page to update your rule.
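Putting these pieces together, the selection logic can be sketched as an ordered scan: rules are tried in list order, a rule matches only when all of its criteria match (the "AND" relationship), and criteria values may use wildcards. The following is an illustrative model only; the field names, template IDs, and helper function are invented, not the product's implementation:

```python
from fnmatch import fnmatch

# Hypothetical model of template selection: rules are checked in list
# order, and a rule matches only when ALL of its criteria match (the
# "AND" relationship). Criteria values may contain wildcards.
def select_template(rules, metadata):
    for rule in rules:
        criteria = rule["criteria"]  # e.g. {"dDocType": "Report*"}
        if all(fnmatch(str(metadata.get(field, "")), pattern)
               for field, pattern in criteria.items()):
            return rule["template"]
    return None  # no rule matched; a default/blank template applies

rules = [
    {"criteria": {"dDocType": "Report*", "dDocAuthor": "jsmith"},
     "template": "REPORT_TEMPLATE"},
    {"criteria": {"dDocType": "*"}, "template": "DEFAULT_TEMPLATE"},
]
print(select_template(rules, {"dDocType": "Report_Q1",
                              "dDocAuthor": "jsmith"}))
# REPORT_TEMPLATE
```

Because the scan stops at the first match, moving a rule up or down the list (as described in Section 31.2.3) changes which template a given content item receives.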

31.4 Choosing a Template for a Rule

Your template selection rule is not complete until you choose a template for the rule. The template will ultimately drive the appearance of your converted documents.

To assign a template to a rule:

  1. Open the Dynamic Converter Admin page.

  2. Click Template Selection Rules.

  3. On the Template Selection Rules page, enter the content ID for the template in the Template text box (under the "Template and layout for selected rule" heading).

    You can select a type of template (HTML Conversion, Classic HTML Conversion, or Script) from the Template Types menu, and then you can select your desired template from the Available Templates menu.

  4. If you chose a Classic HTML Conversion template in the previous step, you may want to complement it with a layout template. If so, enter the content ID for the layout template in the Layout text box (again, you may select the layout template from the Available Layouts menu).

  5. Click Update to add the template to your rule.

Once you have created a template selection rule, assigned the appropriate metadata criteria to it, and selected a template (or templates) for the rule, you should verify your configuration settings on the Dynamic Converter Configuration page. In particular, make sure that you have added the necessary file types to the Conversion Formats list.

For more information, see Chapter 32, "Conversion Templates."


37 Working with Converted Content

This chapter provides information on working with content items that have been converted and checked in to the Oracle WebCenter Content Server.

This chapter covers the following topics:

37.1 Viewing Content Information

Every content item checked into the Content Server has its own content information page, which can be used to view and verify the metadata information about the content item, such as the content ID, title, author, and other metadata. You will frequently visit the content information page of your source documents in order to specify your template selection rule criteria.

The Info icon on the search results page is used to access the content information page of a content item, where you can view the metadata for the content item. Use this page to view and verify information about a specific content item. For example, you can identify the release date of a file or the user login of the author.

Figure 37-1 Content Information Page

The Content Information page

This page shows a lot of information about the content item, including:

  • Values for all the metadata fields that were completed when the file was checked into the Content Server

  • The author's name (user login)

  • The file status indicating where the file is in its life cycle

  • The file format, that is, the format of the native application in which the file was created, expressed as the MIME content type.

  • The current web location, which is an active link that points to the web-viewable rendition (for example, PDF) of the checked-in content item, if such a rendition was generated. This URL uniquely refers to the web-viewable rendition of the content item's latest revision.

  • A native file link, which you can use to get a copy of the content item in its native format (that is, the one it was originally created in). If you click the link, you can open the file in its native application (if you have it installed on your computer) or you can save it to your local hard drive. You can also right-click the link and save the file locally. This enables you to make a copy of the file for reuse. You can then check it back into the Content Server as a new revision.

  • The complete revision history.


    Note:

    The content information can be displayed for any revision of the content item by clicking the revision link that is displayed in the Revision column of the Revision History section. The currently displayed content item is enclosed in square brackets: [ ].


Figure 37-2 Revision History of Content Item

Revision History of a selected content item

The content information page has other functions in addition to viewing a file's metadata, status, and revision history. The available options depend on your assigned privileges and the Content Server configuration, and may include any of the following:

Action / Definition

Check Out

Enables you to check out a file for edit and later check it in with the same content ID and the revision number incremented by one (if you are a contributor).

Undo Check Out

Cancels the check-out of the content item. Your name will no longer appear next to "Checked out by" on the content information page. You can undo the check-out of a content item that you checked out; to undo another user's check-out, you need the "admin" role or administrator permissions for the security group that the content item belongs to.

Check In

Checks in a new revision of a content item currently checked out.

Update

Enables you to change the metadata fields for a content item already checked into the Content Server. For example, you can use Update to correct a misspelled word in the title field or select the correct content type if you initially entered it incorrectly.

Check In Similar

Enables you to check in another content item with the same metadata of the content item you have just checked in.

Send link by e-mail

Opens your e-mail program with a new message that contains a link to the URL (web address) of the web-viewable file.

Subscribe

Enables you to tag a content item so that you are automatically notified by e-mail about any changes to it (i.e., if a new revision is checked in). If the software does not know your e-mail address, you are prompted to enter it.

Unsubscribe

Enables you to cancel your subscription to the content item (i.e., no longer be notified of new revisions).

Create Shortcut

Enables you to create a shortcut to the content item in the Content Server and store the shortcut in a folder under Browse Content.

Delete Revision

Enables you to remove a revision of a file from the system. To delete a revision, you must have delete permission for the security group the file belongs to.

Revision Number

Displays the content information for the specified revision.


To access the content information page of a content item, complete the following steps:

  1. Search for the content item.

    The search results page is displayed.

  2. Click the Info icon (Figure 37-3) that corresponds to the file for which you want to see the content information.

Figure 37-3 Info Icon

The Info icon

The content information page is displayed.


Note:

See Oracle Fusion Middleware Using Oracle WebCenter Content for more information on searching for content.


37.2 Viewing a Converted File

Dynamic Converter provides a solution to the problem of requiring a client workstation to have native applications installed (such as Microsoft Word, Excel, or other applications) in order to open source documents created with those applications. It does this by creating a web-viewable version of the source document on demand and on the fly.

The web-viewable version of the source document can be seen by clicking an HTML link on these Content Server pages:

37.2.1 Search Results Page

You can use Content Server's extensive search capabilities to find content items. You can search by metadata, perform a full-text search, or both (depending on the Content Server setup). The results of a search are shown on a search results page. If a content item in the list is of a file type that is supported and enabled for HTML conversion, then an HTML Rendition link is included in the actions popup menu. You can use this link to view an HTML rendition of the content item.

Figure 37-4 Html Rendition Link on Search Results Page

Search Results page shows the HTML Rendition link

When you click the HTML Rendition link, the file is converted and displayed using the rules and templates specified on the Template Selection Rules page.

37.2.2 Content Information Page

Every content item checked into Content Server has its own content information page, which shows the metadata information of the content item, such as the content ID, title, author, and other metadata.

If the content item is of a file type that is supported and enabled for HTML conversion by Dynamic Converter, then the content information page will display an (HTML) link beside the text "Get Conversion." You can use this link to view an HTML rendition of the content item.

Figure 37-5 Html Link on Content Information Page

Content Information page shows the HTML link

When you click the (HTML) link, the file is converted and displayed using the rules and templates specified on the Template Selection Rules page.

Subscription and Workflow Notifications

You can also open the content information page using the View Info link in the e-mail messages that you receive when you subscribe to a content item stored in the Content Server.

Figure 37-6 View Info Link in Subscription E-Mail Notification Message

Content Release Notification page shows View Info link

This same link is available in workflow notification messages, which eliminates the need for content reviewers to have the native application used to create the source file.

37.3 Previewing a Document Before Check-In

Content contributors can preview the HTML rendition of a document before checking it into the Content Server. This enables them to see if there are problems with the document or the template associated with the document, and notify the site webmaster or developer. Problems can then be resolved before more users or customers view the converted content. Both the content authors and the site developers gain from the ability to preview documents this way.

The dynamic contributor preview is displayed as an (HTML) button on Content Server's content check-in page.

Figure 37-7 Html Preview Button on Content Check-In Screen

Content Check-in page shows the HTML Preview button

Once a document has been selected and all metadata assigned to the document, click the preview button to see how the document will appear as a web page. The resulting screen displays a Complete Check In link in the left frame and the converted document in the right frame.

Figure 37-8 Dynamic Conversion Preview

Sample dynamic conversion preview

If you are satisfied with the HTML rendition of the document, you can click Complete Check In to check the document into the Content Server (at which time you are brought to the check-in confirmation screen). Click the Back button in your web browser to cancel the process and return to the content check-in screen.

If you check in a document using metadata that has no template associated with it, a blank Classic HTML Conversion template is assigned. This template contains no special formatting instructions, other than to convert your document into a web page.


Tip:

As a site administrator, you can also preview how a content item will appear with a particular template using the Change Preview button in the Template Editor.



24 Managing Inbound Refinery

This chapter discusses the administrative tasks needed to manage Inbound Refinery, such as managing agents and providers, configuring web server filters, and publishing dynamic and static layout files.

This chapter discusses the following topics:

24.1 Managing Refinery Authentication and Users

As a managed server running within an Oracle WebLogic Server domain, user and group access to Inbound Refinery is controlled by Oracle WebLogic Server and system security configuration is handled through the WebLogic Server console.

If additional services are required, such as Oracle Internet Directory or single sign on using Oracle Access Manager, these can be linked to the Oracle WebLogic Server domain managing Inbound Refinery using WebLogic Server controls.

When deployed, the refineryadmin Inbound Refinery role has permissions to administer Oracle Inbound Refinery. Any user needing administration rights to Inbound Refinery must be part of the corresponding refineryadmin group in Oracle WebLogic Server.

For additional information, see the following documentation.

Table 24-1 Additional System Security Documentation

Task / Where to Go For More Information

Administering Oracle WebLogic Server

Oracle Fusion Middleware Administrator's Guide

Administering Oracle WebCenter Content

Administering Oracle WebCenter Content


24.1.1 Integration with Single Sign-On

Oracle Access Manager (OAM), part of Oracle's enterprise class suite of products for identity management and security, provides a wide range of identity administration and security functions, including several single sign-on options for Fusion Middleware and custom Fusion Middleware applications. OAM is the recommended single sign-on solution for Oracle Fusion Middleware 11g installations.

For smaller scale Oracle Fusion Middleware 11g installations, where you do not have an enterprise-class single sign-on infrastructure like Oracle Access Manager and you only need to provide a single sign-on capability within your specific Fusion Middleware application, you can configure a SAML-based SSO solution. If you need to provide single sign-on with other enterprise applications, this solution is not recommended.

If your enterprise uses Microsoft desktop logins that authenticate with a Microsoft domain controller with user accounts in Active Directory, then configuring SSO with Microsoft Clients may also be an option to consider.

The setup required for each of these SSO solutions is described in the following document sections.

Table 24-2 Single Sign-On Documentation

For Information On... / See The Following Guide...

Configuring OAM and OSSO

Oracle Fusion Middleware Security Guide

Using Windows Native Authentication for Single Sign-on

Oracle WebLogic Server Admin Console Help: Configure Authentication and Identify Assertion Providers

Using WebLogic SAML for Single Sign-on

Oracle Fusion Middleware Securing Oracle WebLogic Server: Configuring the SAML Authentication Provider


24.2 Managing Refinery Conversion Queues

A refinery is set up as a provider to a Content Server. When a file is checked into the Content Server, a copy of the native file is stored in the /vault directory (the native file repository). The native file is the format in which the file was originally created (for example, Microsoft Word).

If the file format is set up to be converted, the Content Server creates a conversion job in its pre-converted queue. The Content Server then attempts to deliver the conversion job to one of its active refinery providers (a refinery that is configured to accept the conversion and is not busy). The Content Server sends the conversion parameters to an active refinery.

When the refinery receives conversion parameters, it returns the following data to the Content Server:

  • JobAcceptStatus: The status can be one of the following.

    Status / Description / Content Server Action

    ERROR

    There was an unexpected error in processing the request.

    The content item is left in GenWWW status and removed from the Content Server's pre-converted queue.

    NEVER_ACCEPT

    The refinery is not configured to accept the conversion, and it will never accept the job.

    The refinery provider is marked as unavailable until the conversion job is cleared from the pre-converted queue.

    ACCEPT

    The refinery will take the conversion job.

    The job is removed from the pre-converted queue, transferred to the refinery, and expected to be converted.

    BUSY

    The refinery could take the conversion job, but it has reached its total queue maximum or the maximum number of conversion jobs for a specific conversion.

    The refinery provider is not used again until the RefineryBusyTimeSeconds it provides to the Content Server has elapsed.


  • JobAcceptStatusMsg: A string that explains the refinery's status, to be logged by both the refinery and the Content Server.

  • JobCanAccept: A boolean that indicates if the job was accepted.

  • RefineryBusyTimeSeconds: The number of seconds the refinery wants to be left alone (this is just a hint; the refinery will not stop accepting requests).

If the refinery does not accept the job, the Content Server tries the next available refinery. The Content Server keeps attempting to transfer the job until a refinery accepts the job or the maximum transfer time is reached. If the maximum transfer time is reached, the job is removed from the Content Server's pre-converted queue and the content item remains in GenWWW status.
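The transfer behavior described above can be sketched as a loop. This is an illustrative model, not the actual implementation; the class shape and method names are invented, while the status values and GenWWW behavior follow the table and text above:

```python
import time

# Sketch of the delivery loop: the Content Server offers the job to
# each refinery provider in turn until one accepts or the maximum
# transfer time is reached. Status values mirror the table above.
def transfer_job(refineries, job, retry_seconds=10, max_minutes=30):
    deadline = time.monotonic() + max_minutes * 60
    busy_until = {}
    while time.monotonic() < deadline:
        for ref in refineries:
            if busy_until.get(ref.name, 0) > time.monotonic():
                continue  # honor the RefineryBusyTimeSeconds hint
            reply = ref.accept(job)
            if reply["JobAcceptStatus"] == "ACCEPT":
                return ref  # job leaves the pre-converted queue
            if reply["JobAcceptStatus"] == "BUSY":
                busy_until[ref.name] = (time.monotonic() +
                                        reply["RefineryBusyTimeSeconds"])
            # ERROR / NEVER_ACCEPT: move on to the next provider
        time.sleep(retry_seconds)
    return None  # transfer window expired; item remains in GenWWW
```

The 10-second retry interval and 30-minute transfer window used as defaults here correspond to the Refinery Conversion Options settings described below.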

When a refinery accepts the job, the Content Server then uploads a ZIP file, containing the conversion data and the file to be converted, to the refinery. The Content Server also places an entry in its RefineryJobs table, which it uses to track the conversion job. The refinery places the conversion job in its pre-converted queue.

The refinery then attempts to perform the specified conversion, calling the appropriate conversion options as necessary. When the refinery finishes processing the conversion job, it places it in its post-converted queue. The Content Server polls the refinery periodically to see if conversion jobs in its RefineryJobs table have been completed. When the refinery reports that it has finished processing a conversion job, the Content Server downloads any converted files (for example, a web-viewable thumbnail file and a PDF file) from the refinery, places the conversion job in its post-converted queue, and kicks off any post-conversion functionality as needed.

Refinery queue management settings can be configured both on the Content Server and on the refinery. The following pages are used to manage refinery queues:

  • Refinery Conversion Options page: This page contains settings that affect how the Content Server interacts with all of its refinery providers.

    • Seconds between successive transfer attempts: Used to set the number of seconds between successive transfer attempts for each conversion job. By default, the Content Server waits 10 seconds between attempts to deliver a conversion job to one of its refinery providers.

    • Minutes allowed to transfer a single job: Used to set the number of minutes allowed for the transfer of each conversion job. By default, the Content Server attempts to transfer a conversion job to one of its refinery providers for 30 minutes.

    • Native file compression threshold: Used to set the native file compression threshold size in MB (default size is 1024 MB (1 GB)). Unless the native file exceeds the threshold size, it is compressed before the Content Server transfers it to a refinery. This setting avoids the overhead of compressing very large files, such as video files. To leave native files uncompressed before transfer, set the threshold size to 0.

    • When the time for transferring a job expires, the conversion should fail: Used to specify whether a conversion fails when its transfer time expires. When the maximum allowed time for transferring a conversion job is reached, the conversion job is removed from the Content Server's pre-converted queue. If this option specifies that the conversion should fail, the content item remains in GenWWW status and a conversion error is displayed on the Content Information page with a Resubmit button, allowing the user to resubmit the content item for conversion.

    • When a conversion sent to an Inbound Refinery fails, set the conversion to 'Refinery Passthru': Used to specify how the Content Server handles failed conversions. If a file is sent to a refinery and conversion fails, the Content Server can be configured to place a copy of the native file in the weblayout directory by enabling refinery passthru.


      Note:

      When a file is sent to the refinery for conversion, an HCST file cannot be used instead of a copy of the native file. For more information on configuring how the Content Server handles files that are not sent to the refinery, see Section 23.4.3.


  • Add/Edit Outgoing Socket Provider page: Used to specify settings for an individual refinery provider.

    • Handles Inbound Refinery Conversion Jobs: Used to specify if the provider handles conversion jobs. If this option is not selected, the Content Server does not attempt to transfer any conversion jobs to or from the provider.

    • Inbound Refinery Read Only Mode: Used to prevent the Content Server from sending new conversion jobs to the refinery provider. However, the refinery provider continues to return conversion jobs as the jobs are finished.
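The native file compression threshold described above amounts to a simple decision, sketched here; the function and parameter names are ours, and the 1024 MB default comes from the text:

```python
# Compression decision for native files before transfer to a refinery
# (illustrative sketch; sizes in MB, default threshold from the text).
def should_compress(native_size_mb, threshold_mb=1024):
    if threshold_mb == 0:
        return False  # threshold 0 disables compression entirely
    # compress unless the native file exceeds the threshold
    return native_size_mb <= threshold_mb

print(should_compress(200))     # True: a 200 MB file is compressed
print(should_compress(5000))    # False: very large files skip compression
print(should_compress(200, 0))  # False: threshold 0 leaves files uncompressed
```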

The following refinery pages contain information and settings used to manage refinery queues:

  • Items in Queue page: Used to view items in the pre- and post-converted queues for a specific refinery agent (such as a Content Server).

  • Conversion Listing page: Used to view items in the pre- and post-converted queues for a specific refinery agent (such as a Content Server).

    • Maximum number of conversions allowed to be queued: Used to set the total number of conversion jobs allowed to be queued by the refinery. Default: 0 (unlimited).

    • Maximum number of conversions in post conversion queue: Used to specify the number of conversions allowed to be queued in the post conversion queue of a refinery. Default: 1000.

    • Number of seconds the refinery should be considered busy: Used to specify the number of seconds the refinery is considered busy when the maximum number of conversions is reached. Default: 30 (seconds). When the maximum number of conversion jobs for the refinery is reached, Content Servers wait this amount of time before attempting to communicate with the refinery again.

    • Maximum conversions: You can specify the maximum number of jobs the refinery can process at the same time. The default is 5.
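Taken together, these queue settings determine how a refinery answers a job offer. A hypothetical sketch follows; the decision logic and parameter names are ours, while the status fields come from the table earlier in this section:

```python
# Hypothetical refinery-side admission check built from the settings
# above: queue limit, busy timeout, and whether the conversion is
# supported at all. Field names follow the status table in this chapter.
def accept_decision(queued_jobs, max_queued=0, busy_seconds=30,
                    conversion_supported=True):
    if not conversion_supported:
        return {"JobAcceptStatus": "NEVER_ACCEPT", "JobCanAccept": False}
    if max_queued and queued_jobs >= max_queued:  # 0 means unlimited
        return {"JobAcceptStatus": "BUSY", "JobCanAccept": False,
                "RefineryBusyTimeSeconds": busy_seconds}
    return {"JobAcceptStatus": "ACCEPT", "JobCanAccept": True}

print(accept_decision(10)["JobAcceptStatus"])                 # ACCEPT
print(accept_decision(10, max_queued=10)["JobAcceptStatus"])  # BUSY
```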

24.3 Managing Agents and Providers

This section discusses the following topics:

24.3.1 Agent Management

The following tasks are performed when managing agents:

24.3.1.1 Verbose Logging

You can enable verbose logging for each refinery agent. When verbose logging is on, general agent status information, a detailed description of each conversion engine action (for example, when the conversion was started and file details, conversion step details, and conversion results), and errors are recorded in the refinery agent log. When verbose logging is off, only general agent status information and errors are recorded in the refinery agent log.

To enable verbose logging for a refinery agent:

  1. Log into the refinery.

  2. Select Refinery Administration then Agent Management.

  3. On the Agent Management page, select the Enable Verbose Logging check box for the refinery agent.

  4. To revert to the last saved settings, click Reset.

  5. Click Update to save your changes.

24.3.1.2 Deleting Agents

A refinery agent can be deleted only when there are no conversion jobs in the refinery agent's pre or post-converted queues. To delete a refinery agent:

  1. Log into the refinery.

  2. Select Refinery Administration then Agent Management.

  3. On the Agent Management page, select Delete Agent from the Actions menu for the refinery agent.

  4. On the Delete Agent page, select the Confirm deletion of agent agent_name check box to confirm that you want the agent deleted. History, logs, and any jobs in the agent queue are also deleted.

  5. Click Delete Agent.

24.3.2 Managing Refinery Providers

You should not need to configure any refinery providers. To view refinery provider information using the web-based Inbound Refinery interface:

  1. Log into the refinery.

  2. Select Refinery Administration then Providers from the navigation menu.

24.4 Viewing Refinery Information

This section discusses methods to view refinery information:

24.4.1 Refinery Configuration Information

To view the configuration information for the refinery using the web-based Inbound Refinery interface:

  1. Log into the refinery.

  2. Select Refinery Administration, Configuration Information from the navigation menu. The Configuration Information page is displayed, showing an overview of the main system settings. In addition, it lists all installed server components or custom components that are currently enabled and disabled.

The Configuration Information page is for information purposes only and cannot be edited.

24.4.2 Refinery System Audit Information

To view the system audit information for the refinery using the web-based Inbound Refinery interface:

  1. Log into the refinery.

  2. Select Refinery Administration, System Audit Information from the navigation menu. The System Audit Information page is displayed, showing information which may be useful while troubleshooting a problem or tweaking a server's performance.

    The General Information section of this page provides the following information:

    • Information regarding whether the system is receiving too many requests.

    • Information about the memory cache for the system, which is useful in troubleshooting any "out of memory" errors you may receive. This is important when running the refinery server with many users and a large quantity of data.

    • Information about which Java threads are currently running. This is useful in determining the cause of an error.

    • Listing of any audit messages.

    Tracing in a refinery can be activated on a section-by-section basis. Tracing for active sections is displayed on the Console Output page. Section tracing is useful for determining which section of the server is causing trouble, or when you want to view the details of specific sections. Sections can be added by appending extra sections to create a comma-separated list.

    A listing of the sections available for tracing, with brief descriptions, is available by clicking the information icon next to the Tracing Sections Information heading. For example, activating refinery displays extended information about conversion status, activating ref-config traces changes to the current running environment, and activating refsteplogic traces the logic that determines what conversion steps are used. The wildcard character * is supported so that ref* will trace all sections that begin with the prefix ref, including refinery, ref-config, and refsteplogic.

    Some tracing sections also support verbose output. Enable Full Verbose Tracing if you wish to see in-depth tracing for any active section that supports it.
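The wildcard behavior described above follows shell-style matching; a small sketch (the section list here is illustrative):

```python
from fnmatch import fnmatch

# Shell-style wildcard expansion of a tracing entry such as "ref*"
# (section names are the ones mentioned in the text, plus one example
# non-matching section for contrast).
sections = ["refinery", "ref-config", "refsteplogic", "systemdatabase"]
requested = "ref*"
active = [name for name in sections if fnmatch(name, requested)]
print(active)  # ['refinery', 'ref-config', 'refsteplogic']
```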


    Important:

    Any options set on this page are lost when the refinery is restarted unless you enable Save and click Update.


24.5 Configuring the Web Server Filter

To configure the web server filter for a refinery using the web-based Inbound Refinery interface:

  1. Log into the refinery.

  2. Select Refinery Administration, Filter Administration from the navigation menu. The Configure Web Server Filter page is displayed, which is used to configure and troubleshoot the web server filter communication with the refinery.

24.6 Publishing Dynamic and Static Layout Files

To publish dynamic and static layout files:

  1. Log into the refinery.

  2. To publish your dynamic layout files, choose Administration then Admin Actions and select publish dynamic layout files under the Weblayout Publishing section. The PUBLISH_WEBLAYOUT_FILES service is executed.

    All dynamic refinery layout files (.css files and .js files) are published from the refinery IntradocDir/shared/config/templates directory to the weblayout directory. This service is used when customizing the refinery. The PUBLISH_WEBLAYOUT_FILES service is also executed each time the refinery is restarted.

  3. To publish static layout files, choose Administration then Admin Actions and select publish static layout files under the Weblayout Publishing section. The PUBLISH_STATIC_FILES service is executed.

    All static layout files (graphic files) are published from the refinery IntradocDir/shared/publish directory to the weblayout directory. This service is used when customizing your refinery. The PUBLISH_STATIC_FILES service is not executed each time your refinery is restarted, as it can be very time-consuming to execute. This service must be executed manually when customizing the refinery.

For more information about other publishing options available and for customizing the content and refinery servers, see the documentation provided with Content Server.

24.7 Active Virus Scanning on Windows

When running Inbound Refinery on Windows, active virus scanning of some Inbound Refinery and Content Server directories can cause conversions to fail.

Exclude the following Content Server directories from active virus scanning:

  • the weblayout directory (WeblayoutDir)

  • the vault directory (VaultDir)

  • IntradocDir\data\

  • IntradocDir\search\


    Tip:

    The Content Server vault\~temp\ directory should not be excluded, as it is the most important directory to scan.


Exclude the following Inbound Refinery directories from active virus scanning:

  • the vault directory (VaultDir)

  • the weblayout directory (WeblayoutDir)

  • IntradocDir\data\


    Tip:

    If these directories must be scanned, it is recommended that physical disk scanning be used on the Content Server and Inbound Refinery computers during off-peak hours rather than actively scanning these directories. For best results, a local anti-virus program should be used to scan local drives.


24.8 Changing the Date Format and Time Zones

This section discusses changing the default date format and the default time zone setting:

24.8.1 Changing the Date Format

The default English-US locale uses two digits to represent the year ('yy'), where the year is interpreted to be between 1969 and 2068. In other words, 65 is considered to be 2065, not 1965. If you want years prior to 1969 to be interpreted correctly in the English-US locale, you need to change the default date format for that locale to use four digits to represent years ('yyyy').

This issue does not apply to the English-UK locale, which already uses four digits for the year.
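The same 1969-2068 pivot window happens to be used by Python's strptime, so the interpretation described above can be demonstrated directly (this illustrates the pivot behavior, not the Content Server implementation):

```python
from datetime import datetime

# Two-digit years are pivoted: 00-68 parse as 2000-2068, 69-99 as 1969-1999.
# This mirrors the English-US locale behavior described above.
print(datetime.strptime("65", "%y").year)    # 2065, not 1965
print(datetime.strptime("69", "%y").year)    # 1969
print(datetime.strptime("1965", "%Y").year)  # 1965: four digits are unambiguous
```

Switching the locale's date format to 'yyyy' avoids the ambiguity entirely, just as '%Y' does here.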

To modify the default English-US date format:

  1. Start the System Properties utility:

    • Microsoft Windows: Select Start then Programs then Oracle Content Server. Choose refinery_instance then Utilities then System Properties.

    • UNIX: Start the SystemProperties script, which is located in the /bin subdirectory of the refinery's installation directory.

  2. Open the Localization tab.

  3. Select the English-US entry in the list of locales, and click Edit.

  4. On the Configure Locale dialog, modify the date format to use four digits for the year ('yyyy') rather than two ('yy').

  5. After you are done editing, click OK to close the Configure Locale dialog.

  6. Click OK to apply the change and exit System Properties.

  7. Stop and restart the refinery.

24.8.2 Setting the Time Zone

During the installation of Inbound Refinery, you might have indicated that you wanted to use the default time zone for the selected system locale. If that is the case, the installer attempted to automatically detect the time zone of the operating system and set the refinery time zone accordingly. In certain scenarios, the time zone of the operating system might not be recognized. The time zone is then set to UTC (Coordinated Universal Time), which is the same as Greenwich Mean Time (GMT).

You then need to set the time zone manually:

  1. Start the System Properties utility:

    • Microsoft Windows: Select Start then Programs then Oracle Content Server. Choose refinery_instance then Utilities then System Properties.

    • UNIX: Start the SystemProperties script, which is located in the /bin subdirectory of the refinery's installation directory.

  2. Open the Server tab.

  3. From the System Timezone drop-down list, choose the time zone you want to use for the refinery.

  4. Click OK to apply the change and exit System Properties.

  5. Stop and restart the refinery.

24.9 Monitoring Refinery Status

Log files are created to help monitor the refinery status. Agents are entities, such as a Content Server instance, that send jobs to the refinery. Conversion status information is separated and logged by agent to make the information easier to view and details easier to find.

Two types of log files are created for the refinery:

  • Refinery logs: Refinery logs contain general information about refinery functionality that is not specific to conversions performed for agents (for example, startup information). One log file is generated for each day the refinery is running. For more information, see Section 24.9.1.

  • Refinery Agent logs: Refinery agent logs contain information specific to conversions performed for agents sending conversion jobs to the refinery. One log file is generated for each agent, each day that the agent sends at least one conversion job to the refinery. For more information, see Section 24.9.2.

24.9.1 Viewing Refinery Status

Entries are added to the appropriate log file throughout the day as events occur and are listed by date and time. The time stamp placed on a refinery log entry designates when the log entry was created, not necessarily when the action took place.

Refinery agent log entries list the conversion number at the beginning of each entry because each agent can have multiple concurrent conversions running at a given time. For example: Log entry for conversion job '3513'. The following types of log entries are generated:

  • Info: Displays status information (for example, startup information or a description of a conversion engine action).

  • Error: Displays errors that occur.


Verbose logging can be enabled. When on, it records general agent status information, a detailed description of each conversion engine action (for example, when the conversion was started and file details, conversion step details, and conversion results), and errors. When verbose logging is off, only general agent status information and errors are recorded in the refinery agent log.

A log file might include Details links. Clicking the Details links expands and collapses log details. Typically, the log details are either a stack dump or a trace back to the code that generated the error.

24.9.1.1 Viewing Conversion Statuses

An agent is created in the refinery when it sends its first conversion job to the refinery. Until then, information for the agent is not available in the refinery.

To view the current status of conversions for all refinery agents:

  1. Log into the refinery.

  2. Choose Home from the Main menu, or choose Status then Refinery Status from the Main menu.

24.9.1.2 Viewing Refinery Logs

To view the refinery log files:

  1. Log into the refinery.

  2. Choose Home in the main menu and select the Refinery Logs tab, or choose Status then Refinery Status from the Main menu and select the Refinery Logs tab.

  3. On the Refinery Logs page, click a log link to display the refinery log.

24.9.1.3 Viewing Console Output

To view the refinery console output:

  1. Log into the refinery.

  2. Choose Home from the Main menu and select the Console Output tab, or choose Status then Refinery Status from the Main menu and select the Console Output tab.

    • Click Update to refresh the console output.

    • Click Clear to clear the console output.

24.9.1.4 Viewing Conversion History

To view the last fifty conversions in the conversion history for a specific refinery agent:

  1. Log into the refinery.

  2. Choose Status then agent_name from the menu and select the Conversion History tab, or choose View Conversion History from the Actions menu for the agent on the Refinery Status page.

  3. On the Conversion History page, click a Content ID link to display the Conversion Detail page.

24.9.2 Viewing Agent Statuses

The status of a specific agent can be viewed as well as the queues for all agents.

24.9.2.1 Viewing Specific Status

To view the current status of conversions for a specific refinery agent:

  1. Log into the refinery.

  2. Navigate to the Agent Status page in one of the following ways:

    • Click the agent name.

    • Select Status then agent_name from the navigation menu.

    • Select View Detailed Status from the Actions menu for the agent on the Refinery Status page.

24.9.2.2 Viewing Agent Queues

To view the items that are in the pre- and post-converted queues for a specific refinery agent:

  1. Log into the refinery.

  2. Choose Status then agent_name from the navigation menu and select the Items in Queue tab, or choose View Items In Queue from the Actions menu for the agent on the Refinery Status page.

  3. On the Items in Queue page, click Refresh to update the information on the page.

24.9.2.3 Viewing Agent Logs

To view the log files for a specific refinery agent:

  1. Log into the refinery.

  2. Choose Status then agent_name from the navigation menu and choose the Agent Logs tab, or choose View Agent Logs from the Actions menu for the agent on the Refinery Status page.

  3. On the Agent Logs page, click a log link to display the refinery agent log.


7 Managing Workflows

Workflows are used to specify how content is routed for review, approval, and release to the system. This chapter provides overview, tasks, and reference information for using the workflow functionality available with Oracle WebCenter Content Server.

Setting up workflows for a business process can provide several advantages:

  • Workflows provide good reporting metrics. They can produce an audit trail of who signed off on content at various points of the life cycle of the content.

  • Workflows help get the right information to the right person.

  • Designing a workflow requires you to examine and understand your business processes, helping you find areas for improvement.

This chapter contains the following topics:

7.1 Understanding Workflows

Designing an effective workflow is an iterative process. After initial planning, workflows are refined as the process is implemented. Good planning in the beginning can reduce the amount of rework. For more information on planning, see Section 7.2.

There are three types of workflows:

  • A basic workflow defines the review process for specific content items, and must be initiated manually.

  • A criteria workflow is used for content that enters a workflow automatically based on metadata that matches predefined criteria.

  • A sub-workflow is initiated from a step in another workflow and is created in the same manner as criteria workflows. Sub-workflows are useful for splitting large, complex workflows into manageable pieces.

This section provides details about the steps that make up a workflow:

7.1.1 Workflow Steps

Steps define the process and the functionality of the workflow. Each workflow can include multiple review and notification steps with multiple reviewers to approve or reject the content at each step. For each step in a workflow, a set of users and a step type are defined.

The users defined for a step can perform only the tasks allowed for that step type.

  • Contribution: The initial step of a Basic workflow. Administrators define the contributors when the workflow is created.

  • Auto-Contribution: The initial step of a Criteria workflow. There are no predefined users involved in this step. The contributor who checks in a content item that enters the workflow process automatically becomes part of the workflow.

  • Review: Users can approve or reject the content; editing is not allowed. You can also specify that the user must approve and sign the content with an electronic signature.

  • Review/Edit Revision: Users can edit the content if necessary, then approve or reject it, maintaining the revision.

  • Review/New Revision: Users can edit the content if necessary, then approve or reject it, creating a new revision.


After a workflow is enabled, it goes through several specific stages:

  • When a content item is approved by the minimum number of reviewers for a particular step, it goes to the next step in the workflow. If the step is defined with 0 approvals required, the reviewers are notified, but the content goes to the next step automatically.

  • If any reviewer rejects the content, it goes back to the most recent Review/Edit Revision or Review/New Revision step. If there is no such step, the content goes back to the original author.

  • Depending on how the edit criteria are defined, the most recent Review/Edit Revision or Review/New Revision step can result in a new revision or an updated revision.

  • A revision can be released:

    • After it exits the workflow: When content is approved at the last step in the workflow, the content item is released to the system.

    • Before it exits the workflow: If a side effect is set that releases a document from edit state, the document is available for indexing, searching, and archiving. Use this primarily for business routing that does not require publishing to the Web, for example an expense report.

  • Generally, if a Basic workflow contains multiple content items, none of them are released to the system until all of the items have been released from completion of the workflow. However, if a content item is released from the edit state as a side effect, that content item can be released without waiting for all items in the Basic workflow.
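The rejection rule above (return to the most recent Review/Edit Revision or Review/New Revision step, otherwise to the original author) can be sketched as follows; this is a hypothetical helper for illustration, not a Content Server API:

```python
# Step types at which a rejected revision can be edited.
EDIT_STEP_TYPES = {"Review/Edit Revision", "Review/New Revision"}

def rejection_target(visited_steps):
    """visited_steps: (name, step_type) pairs, most recent step first."""
    for name, step_type in visited_steps:
        if step_type in EDIT_STEP_TYPES:
            return name           # most recent editing step wins
    return "original author"      # no editing step: back to the author

history = [("Final Review", "Review"),
           ("Copy Edit", "Review/Edit Revision"),
           ("Draft Review", "Review")]
print(rejection_target(history))  # Copy Edit
```

This is why the tips later in this chapter recommend including at least one Reviewer/Contributor step: without one, every rejection lands back on the original author.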

The standard workflow process can be customized to make it more flexible with jumps, tokens, and aliases. These are discussed fully in Section 7.5.

7.1.1.1 Events

Each step in a workflow has three events: entry, update, and exit.

  • An entry event script is evaluated when entering the step. If the entry event script does not result in a jump or exit, any users, aliases, and tokens are evaluated and e-mail notifications are sent.

  • An update event script is evaluated at various points (for example, during the hourly update cycle or on check in of the revision). Extra exit conditions are evaluated each time the update event script is evaluated.

  • An exit event script is evaluated when a revision has completed the step's approval requirements and the step's extra exit conditions are met.

For more information about Jumps, see Section 7.5.3.1.


Important:

Update and exit event scripts are not run when a revision is rejected. Any code to be evaluated on rejection must be in the entry event script for the step that the rejected content is sent to.


7.1.1.2 Workflow Step Files

The companion file is a text file that tracks the steps the revision has been through and maintains the current values of workflow variables. It is only active for the life of the workflow. Each revision in a workflow has a companion file, which exists while the revision remains in the workflow. When a revision is released from a workflow, its companion file is deleted.

Companion files are in HDA file format, and are named by Content ID (for example, HR_004.hda). Each companion file contains two sets of data:

  • The LocalData Properties section defines the Parent List and other workflow variables.

  • The WorkflowActionHistory ResultSet section contains a record of the steps, workflow actions, and users that have been involved in the revision's workflow history.

To retain a companion file, add the IsSaveWfCompanionFiles configuration variable to the /config/config.cfg file and set the parameter to true. For more information, see the Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

The companion file uses keys to keep track of workflow variables. For example, the following keys define the value of the EntryCount and Last Entry variables for an Editor step in a workflow called Marketing Brochures:

Editor@Marketing Brochures.entryCount=1
Editor@Marketing Brochures.lastEntryTs={ts '2001-05-02 16:57:00'}

The companion file maintains a parent list, which lists the sequence of steps that the revision has been through, starting with the current step and working backward. The parent list is used to determine which step to return to when a file is rejected, when the last step of a workflow is finished, or when an error occurs. For example, the following parent list shows the Marketing Team, Editor, and Graphic Artist steps of the Marketing Brochures workflow:

WfParentList=Marketing Team@Marketing Brochures#Editor@Marketing Brochures#Graphic Artist@Marketing Brochures#contribution@Marketing Brochures

An asterisk (*) in front of a step name in the parent list indicates a jump step.
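Based only on the format shown above (steps separated by '#', each entry of the form step@workflow, with a leading '*' marking a jump step), a WfParentList value can be broken apart like this; the helper is an illustrative sketch, not an Oracle API:

```python
def parse_parent_list(value):
    # Split the parent list into its step entries and decode each one.
    steps = []
    for entry in value.split("#"):
        step, _, workflow = entry.partition("@")
        steps.append({"step": step.lstrip("*"),     # strip jump marker
                      "workflow": workflow,
                      "jump": step.startswith("*")})  # '*' marks a jump step
    return steps

value = ("Marketing Team@Marketing Brochures#Editor@Marketing Brochures#"
         "Graphic Artist@Marketing Brochures#contribution@Marketing Brochures")
for s in parse_parent_list(value):
    print(s["step"], "jump" if s["jump"] else "")
```

The first entry is the current step; walking the list forward therefore retraces the revision's history from newest to oldest.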

7.1.2 Workflow Evaluation Process

Figure 7-1 shows a general workflow process.

Figure 7-1 Workflow Process


When a revision enters a workflow step:

  1. The entry script is evaluated.

    • The default entry script that keeps track of the entry count and last entry is updated.

    • Actions (such as additional user notification) are executed.

    • If the jump condition is met and a target step is defined, the revision jumps to another step.

    • If the jump condition is not met, reviewers for the current step are notified that the revision is ready for approval.

    • To avoid infinite loops, the entry script of a previously visited step is ignored. The only exception is when the step has been restarted using the wfCurrentStep(0) symbolic step.

  2. When the required number of reviewers have approved the revision, the exit script is evaluated.

    • If there is no exit script, the revision goes to the next step in the workflow.

    • If an exit script jump condition is met and a target step is defined, the revision jumps to another step.

  3. After the exit script is evaluated, the current action state (the wfAction workflow variable) is set. The following are the possible actions:

    • APPROVE

    • REJECT

    • CHECKIN

    • CONVERSION

    • META_UPDATE

    • TIMED_UPDATE

    • RESUBMIT

  4. As a revision moves through the steps of a workflow, each step is added to the parent list. If the revision revisits a step, the parent list returns to the state it was in when the revision first visited that step. The following table shows an example.

    Revision goes to Step1: parent list is Step1

    Revision goes to Step2: parent list is Step1#Step2

    Revision goes to Step3: parent list is Step1#Step2#Step3

    Revision returns to Step2: parent list is Step1#Step2


  5. If a revision is rejected, the parent list is checked for the most recent Reviewer/Contributor step, and the revision goes to that step in the workflow.

  6. When a revision completes the last step in a workflow, the parent list is checked for the most recent jump step with a defined return step. If a return step is found, the revision goes to that step. If there are no jump steps in the parent list, or none of the jump steps have a return step defined, the revision exits the workflow.
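The parent-list behavior described in step 4 above (visiting a new step appends it; revisiting a step truncates the list back to that earlier visit) can be sketched as a small helper, shown here purely for illustration:

```python
def visit(parent_list, step):
    # Revisiting a step rewinds the list to its state at the first visit;
    # otherwise the step is appended.
    if step in parent_list:
        return parent_list[:parent_list.index(step) + 1]
    return parent_list + [step]

history = []
for step in ("Step1", "Step2", "Step3", "Step2"):
    history = visit(history, step)
print("#".join(history))  # Step1#Step2
```

Revisiting Step2 discards Step3 from the list, reproducing the final row of the table above.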

Content items in a workflow can have the following statuses.

  • EDIT: Criteria workflow: the content item was rejected and returned to the initial contribution step. Basic workflow: the content item is in the initial contribution step, or the content item was rejected and returned.

  • REVIEW: The content item is in the review process (both workflow types).

  • PENDING: Criteria workflow: not applicable. Basic workflow: the content item completed all the workflow steps, but other content items in the workflow (if included) are not finished.

  • DONE: Criteria workflow: the content item in the workflow is finished. Basic workflow: all of the content items are finished.

  • GENWWW: The content is being converted to Web-viewable format (both workflow types).

  • RELEASED: The revision is available in Content Server (both workflow types).


7.1.3 Workflow Participation

E-mail is sent to the participating contributors in a Basic workflow who must check in designated content. E-mail is also sent to reviewers involved in workflow steps. Users can check their necessary actions on the Workflow Content Items page. To access the Workflow Content Items page, choose Content Management then Active Workflows from the Main menu.

Reviewers can review content, reject or approve content, and view information about the content and the workflow. If the content is rejected, the Reject Content Item page opens, where the reviewer can enter a message to explain the reason for rejection. The message is sent to the reviewers assigned to the last step allowing a contribution. Those reviewers can then check out the content, edit it, and check it back in. On the Content Check In form, the reviewer should select the Revision Finished Editing box. The content then goes to the next step in the workflow. If the box is not selected, the content remains in Review status and must be approved before moving on through the workflow.

It is good practice to discuss workflows with the people involved so they are aware of the responsibilities they have in the process. More information about workflow participation is available in Oracle Fusion Middleware Using Oracle WebCenter Content.

7.2 Planning a Workflow

This section describes the steps to follow to choose a workflow type and plan the workflow:

Before beginning to design a workflow, evaluate how processes currently operate. For example, if you use e-mail loops to manage information between users on a project, can you incorporate that type of scenario into a workflow design?

Is the workflow being used to validate information? Or is it used for collaboration? What specific problem is it addressing? After you understand the current processes and their shortcomings, you can design the workflow to solve the problem.

Ask the following questions when designing a workflow:

  • Who is involved in the workflow? What users receive notification when an item is ready for review? Who has edit permissions? Who has final sign-off?

    Equally important: who is left out of the workflow?

    How do you train the people who are involved in the workflow?

  • What happens when an item is in the workflow? What action is taken when an item is approved, rejected, or updated? What should occur when an item stays in a workflow too long?

  • When must an item be moved to the next stage of a workflow? What are the criteria for determining when a workflow is completed?

  • Where do users go to participate in a workflow? Is there a Web interface?

  • How are approvals and rejections handled? Will audit information be stored? Are electronic signatures required?

7.2.1 Choosing a Workflow Type

Use a Basic Workflow when you must:

  • Specify an 'ad hoc' workflow, one that does not depend on specific criteria to be enabled.

  • Route multiple content items to go through the same series of steps. The items go through the steps individually, but are not released until all items are finished in the workflow.

  • Notify or remind a user to contribute a content item to the workflow.

  • Specify a workflow that is used infrequently.

  • Set up the review process for a group of related content items.

  • Specify a user to start the workflow. Content can enter a Basic workflow only when a user with Workflow rights starts the workflow.

Use a Criteria Workflow when you must:

  • Have content enter a workflow automatically based on specific metadata values.

  • Route single content items that match specific criteria. Multiple items can be routed, but they do not progress through the workflow as a unit.

  • Set up a standardized review process for individual documents.

  • Specify a frequently-used workflow.

  • Open the workflow to many users. Users do not need Workflow rights for their content to enter a Criteria workflow.

When deciding which type of workflow to use, keep the following key points in mind:

  • If content is checked in with the wrong security group or wrong metadata value, it can enter a Criteria Workflow accidentally.

  • If users are frequently processing content through the same Basic workflow, consider setting up a Criteria workflow to automate the process.

  • Do not use the same or overlapping criteria for multiple workflows. If a content item matches the criteria for multiple workflows, it enters the first workflow in the list.

  • When a content item is in a criteria workflow, it cannot be deleted until the item is released.
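The first-match rule above (an item enters the first workflow in the list whose criteria it satisfies, which is why overlapping criteria are a problem) can be sketched as follows; the field and value names are illustrative, not a Content Server API:

```python
def matching_workflow(item, workflows):
    # Return the name of the first workflow whose security group and
    # single metadata criterion both match the item, mirroring the
    # first-match behavior described above.
    for wf in workflows:
        if (item["securityGroup"] == wf["securityGroup"]
                and item.get(wf["field"]) == wf["value"]):
            return wf["name"]
    return None  # no match: the item is released without entering a workflow

workflows = [
    {"name": "POReview", "securityGroup": "Public",
     "field": "dDocType", "value": "PurchaseOrder"},
    {"name": "AllOrders", "securityGroup": "Public",
     "field": "dDocType", "value": "PurchaseOrder"},  # overlapping criteria
]
item = {"securityGroup": "Public", "dDocType": "PurchaseOrder"}
print(matching_workflow(item, workflows))  # POReview; AllOrders never fires
```

Because AllOrders duplicates POReview's criteria, no item can ever reach it, which is exactly the overlap pitfall the bullet list warns against.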

7.2.1.1 Security Issues

Keep the following security issues in mind when administering workflows:

  • Each workflow is associated with a security group.

    • For a Criteria workflow, only content items in the same security group enter the workflow.

    • For a Basic workflow, new content items are assigned to the security group of the workflow, and existing content items must belong to the workflow's security group.

  • A workflow can only jump to another workflow in the same security group.

  • The security group of a workflow cannot be changed while a workflow is active but the security group of an item in the workflow process can be changed.

  • Workflow rights and admin permission for the security group of the content are needed to set up a workflow and start it.

  • Workflow templates can be used to initiate contribution steps.

  • Write permission to the workflow's security group is required to be a contributor to the workflow.

7.2.2 Designing a Workflow

To design a workflow:

  1. Draw a flowchart of the workflow. For examples, see Section 7.3.1 and Section 7.4.1.

  2. Verify all the metadata needed for the workflow. If needed, create the metadata before setting up the workflow.

  3. Set up the aliases for the workflow. Alias creation is discussed in Oracle Fusion Middleware Administering Oracle WebCenter Content.

  4. Set up the tokens for the workflow. For more information, see Section 7.5.2.2.

  5. Set up a Basic or Criteria workflow. If workflow templates are available, consider using a template as a starting point. For more information, see Section 7.6.

  6. Set up sub-workflows if necessary.

  7. Set up jumps. For more information, see Section 7.5.3.2.

    • If script templates are available, use them to create the step event scripts.

    • If jumps are used, consider setting up a "master" workflow with sub-workflows for each jump.

  8. Test the workflow.

    • For a Criteria workflow, check in a test document that matches the defined security group and metadata field value.

    • For a Basic workflow, define test content and start the workflow.

    • Simulate as many approval/rejection scenarios as possible.

    • For workflows that contain jumps, simulate as many event scenarios as possible.

    • Include reviewers in the workflow testing to verify that people understand their roles.

7.2.3 Modifying Workflows

Keep these points in mind during workflow design:

You can:

  • Modify the criteria for a Criteria workflow.

  • Add or delete content items from a Basic workflow.

  • Modify step definitions (including reviewers, exit conditions, and events)

You cannot:

  • Change the order of the steps. You can delete steps and re-create them in new locations.

  • Add steps in the middle of the workflow. New steps are always added to the end of the workflow.

  • Delete a content item while it is in a criteria workflow.

  • Archive content items that are in either a basic or criteria workflow.

If you disable a Criteria workflow or cancel a Basic workflow to add or delete steps, any revisions in the workflow are released (Criteria workflow) or deleted (Basic workflow).

The following tips can help when modifying existing workflows. Altering a workflow in use is a time-consuming and difficult process. Careful design before implementation can help avoid rework. These options are considered a temporary correction until the workflow can be rebuilt.

  • Consider the following options to reorder steps in an existing workflow:

    • Create a sub-workflow and add a jump to it from an existing step. This change can be made to an existing step without disabling or canceling the workflow.

    • Add step event scripts to an existing step to define the actions that would normally take place in a separate step. This change can be made to an existing step without disabling or canceling the workflow.

  • If a workflow must be disabled to modify it (for example, to add a step) all revisions are immediately released from the workflow (Criteria workflow) or deleted (Basic workflow). To disable a workflow without releasing content:

    1. Clone the workflow to a static workflow which has the same step sequence as the original workflow but which has no step logic. The content goes into a step and stays there until moved.

    2. Create an update event in the original workflow. This event is triggered by time and pushes the content into the cloned static workflow at the appropriate step.

    3. When the content is out of the original workflow, disable the workflow, make the necessary changes, then re-enable it. Then use the same timed event logic to move the content from the cloned workflow back to the original workflow.

7.3 Creating a Criteria Workflow

Criteria workflows are used to set up a review process for individual documents that enter the workflow automatically when they match predefined criteria. For example, any time a new purchase order is generated, it might be automatically routed to specific reviewers for approval.

A Criteria workflow includes the following:

  • Criteria defined by a security group and one metadata field.

  • Auto-contribution step with no predefined users.

  • One or more reviewer steps with one or more reviewers per step.

Sub-workflows are set up using the same procedure as Criteria workflows with a few minor exceptions which are noted in the procedure for setting up Criteria workflows.

This section discusses the following topics:

7.3.1 Criteria Workflow Process

The following steps briefly explain the Criteria workflow process:

  1. A user with Workflow rights sets up the Criteria workflow by defining the following:

    • Security group

    • Metadata field and value (for example, ContentType matches PurchaseOrder). You can use fields of type Text or Long Text.

    • Review steps and reviewers for each step

    • The number of approvals required for each step. For example, must all reviewers approve it before it can move to the next step?

    • Any aliases and the people in the alias group

    • Any tokens needed

  2. A user with Workflow rights enables the Criteria workflow.

  3. When content is checked in that matches the defined security group and metadata field value, the content enters the workflow.

  4. Reviewers for the first step receive e-mail that the revision is ready for review.

  5. The reviewers approve or reject the revision.

    • If the step is a reviewer step, the reviewers can optionally sign and approve the content item revision without modification (changing the document produces a different identifier and invalidates any existing electronic signatures).

    • If the step is a reviewer/contributor step, the reviewers can check out the revision, edit it, and check it back in before approving it. For example, editors can alter the content of an item in the workflow.

    • If a user rejects the revision, the workflow returns to the previous contribution step, and the users for that step are notified by e-mail.

    • When the minimum number of users have approved the revision, it goes to the next step. If the minimum number of approvals is 0, the revision moves to the next step automatically.

  6. When all steps are complete, the revision is released to the system.

    Figure 7-2 Criteria Workflow Process


7.3.1.1 Criteria Workflow Tips

Each Criteria workflow must have unique criteria. If a content item matches the criteria for two workflows, it enters the first one in the list of defined workflows.

  • All users assigned to the Criteria workflow must have Read permission to the selected security group. Contributors must have Write permission to the selected security group to check the revision in and out.

  • You cannot add or delete steps while a Criteria workflow is enabled.

  • You cannot delete or archive a content item while it is in a criteria workflow.

  • Any content item checked in while a Criteria workflow is disabled bypasses the workflow process and is released to the system.

  • Disabling a Criteria workflow releases revisions in the workflow to the system.

  • A Criteria workflow can use jumps to sub-workflows and other Criteria workflows in the same security group, and can jump to other steps in the same workflow.

  • Consider making at least one step in the workflow a Reviewer/Contributor step so rejected revisions go to that step rather than back to the original author.

  • If the security group of an item does not match the security group of the workflow, the item does not enter the workflow.

  • Ensure that the criteria differ from those of any other Criteria workflow.

  • If Content ID is used for the Field and if an Oracle database is used, enter all uppercase characters for the Value. All other fields can have mixed case.

  • If Content ID is used as the Field, click Select below the Value field to choose an existing content item.

  • Enter zero (0) in the At least this many reviewers field to notify reviewers that the revision has reached the step and to pass it on to the next step automatically. Reviewers cannot approve, reject, or edit the revision at that step.

7.3.2 Setting Up a Criteria Workflow

To create a Criteria workflow or sub-workflow:

  1. Choose Administration then Admin Applets from the Main menu. Choose Workflow Admin. Click the Criteria tab.

  2. Click Add.

  3. On the New/Edit Criteria Workflow page, enter a name in the Workflow Name field. Maximum length is 30 characters. Special characters (; @ &, and so on) are not allowed. You cannot change the workflow name after the workflow is created.

  4. Enter a detailed description for the workflow in the Description field.

  5. Select the Security Group from the list.

  6. Select an option from Original Author Edit Rule to specify whether the original author can edit the revision or create a new revision if the item is rejected.

  7. To use a workflow template, select the Use Template check box and select the template name. This box is displayed only if a template currently exists. For more information, see Section 7.6.

  8. To create a Criteria workflow, select the Has Criteria Definition check box. To create a sub-workflow, deselect the check box.

  9. For a Criteria workflow, define the criteria by choosing the appropriate Field, Operator, and Value. Field values include Content ID, Author, Type, Account and any custom metadata of type Text or Long Text.

  10. Click OK.

  11. If a template was not used to create steps, or to add another step, click Add in the right pane of the Workflow Admin page.

  12. On the Add New/Edit Step page, enter an appropriate Name for the step. You cannot change the name after the step is created. The name is usually descriptive of the step (for example, EditorialReview or TechnicalReview).

  13. To require that a reviewer provide credentials for an electronic signature, select Requires signature on approval. This option is available only if the Electronic Signatures component is enabled.

    An electronic signature uniquely identifies the contents of the file at a particular revision and associates the signature with a particular reviewer. If selected, the standard Approve action is replaced by the Sign and Approve action in the list of step options provided to the reviewer.

  14. Enter a Description for the step.

  15. Specify the authority level of the users for the step:

    • Users can review the current revision: Users can approve or reject the revision but cannot edit the revision.

    • Users can review and edit (replace) the current revision: Users can edit the revision, approve it, or reject it. An edit does not update the revision.

    • Users can review the current revision or create new revision: Users can edit the revision, approve it, or reject it. An edit updates the revision. This option preserves the original content and provides an audit trail of changes.

  16. Select the type of users to be added to the step. Multiple types can be defined:

    • To add a group of users defined by an alias, click Add Alias. On the Add Alias to Step page, choose the alias from the displayed list.

    • To add individual user logins, click Add User. On the Add User to Step page:

      • To narrow the list of users, select the Use Filter check box, click Define Filter, select the filter criteria, and click OK.

      • To select a range of users, click the first user, then press and hold Shift and click the last user in the range.

      • To select users individually, press and hold Ctrl and click each user name.

    • To add a variable user defined by a token, click Add Token. For information about creating tokens, see Section 7.5.2.2.

  17. Click OK.

  18. Click the Exit Conditions tab.

  19. Specify how many reviewers must approve the revision before it passes to the next step.

    • To require approval by all reviewers, select All reviewers.

    • To specify a minimum number of reviewers who must approve the revision, select At least this many reviewers and enter the number.

  20. Typically, exit conditions are useful when metadata could be changed by an external process during the workflow step. Use the following instructions if the step requires additional exit conditions to pass to the next step:

    1. Select the Use Additional Exit Condition check box.

    2. Click Edit.

    3. On the Edit Additional Exit Condition page, select a workflow condition or a metadata field from the Field list.

    4. Select an operator from the Operator list. Operator is a dependent choice list that shows operators associated with the Field.

    5. Select a value from the Value list. Value is a dependent list based on the option chosen as the Field.

    6. Click Add to add the conditional statement to the Condition Clause. The clause appears in the Condition Clause box. You can append multiple clauses with AND statements.

    7. Repeat for as many conditions as required. To modify an expression, select it in the Condition Clause box, change the Field, Operator, or Value, and click Update.

    8. To modify the condition expression, select the Custom Condition Expression check box and edit the script (for example, use OR instead of AND for a condition). The additional exit conditions must be Idoc Script statements that evaluate to true or false. Do not enclose the code in Idoc Script tags <$ $>.


      Caution:

      If Custom Condition Expression is deselected, the expression reverts to its original definition and all modifications are lost.


    9. Click OK.

  21. If the workflow requires conditional steps or special processing, click the Events tab and add the appropriate scripts. For more information, see Section 7.5.3.2.

  22. Click OK.

  23. Add, edit, and delete steps as necessary to complete the workflow.

    • To add another step to the workflow, repeat steps 11 through 22.

    • To edit an existing step, select the step and click Edit.

    • To delete an existing step, select the step and click Delete.

  24. Ensure that the correct workflow is selected in the left pane, and click Enable.

  25. On the confirmation page, click Yes to activate the selected workflow.
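The additional exit condition in step 20 is plain Idoc Script that must evaluate to true or false. As a sketch, a condition that holds a revision in the step until an external process updates its metadata might look like the following (the custom metadata field xReviewStatus is hypothetical, not part of the product):

xReviewStatus like "complete" and wfCurrentGet("entryCount") >= 1

Here wfCurrentGet("entryCount") reads the entry counter that the default entry script maintains in the companion file, as described in Section 7.5.3.1.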

7.3.3 Changing a Criteria Workflow or Sub-workflow

If a Criteria workflow is disabled to add or delete steps, any revisions in the workflow are released.

To change an existing Criteria workflow or sub-workflow:

  1. Choose Administration then Admin Applets from the Main menu. Choose Workflow Admin. Click the Criteria tab.

  2. Select the workflow to change in the left pane.

  3. To add or delete steps, click Disable.

  4. Click Add, Edit, and Delete in the left and right panes to change the following:

    • workflow description

    • security group

    • type of workflow (Criteria or sub-workflow)

    • criteria

    • step description

    • type of step (reviewer, contributor same revision, contributor new revision)

    • users

    • events

    • number of approvals required

    • exit conditions

    Users can be added to a step while the Criteria workflow is enabled, but if any revisions are currently at that step in the workflow, the new users are not notified immediately. They are notified after the scheduled workflow system event occurs and performs a TIMED_UPDATE on all items in the workflow.

  5. If the workflow is disabled, ensure that the correct workflow is selected in the left pane and click Enable.

  6. On the confirmation page, click Yes to activate the selected workflow.

7.3.4 Disabling a Criteria Workflow or Sub-workflow

To disable a criteria workflow or sub-workflow:

  1. Choose Administration then Admin Applets from the Main menu. Choose Workflow Admin. Select the Criteria tab.

  2. Select the workflow.

  3. Click Disable.

  4. If any content items are still in the workflow process, you are notified that all of the content revisions will be released. If you do not want to release the content, click No to cancel the operation. Click Yes to release any content and disable the workflow.

    The status of the workflow changes to Disabled.

7.4 Creating a Basic Workflow

A Basic workflow defines the review process for specific content items. It is set up and initiated manually, and does not require you to define criteria for content to enter the workflow.

A Basic workflow includes the following:

  • One or more content items.

  • Initial contribution step with one or more contributors.

  • Zero or more review steps with zero or more reviewers per step.

This section discusses the following topics:

7.4.1 Basic Workflow Process

The following steps explain the Basic workflow process:

  1. A user with Workflow rights sets up the Basic workflow by defining the following items:

    • Content: Either create new content or select existing content. After going through the workflow, new content is released to the system at revision 1, and existing content is released to the system at the next revision number for that content item.

    • Initial contributors: Specify the list of users who can contribute content.

    • Review steps: Specify the reviewers for each step and number of approvals required for each step.

  2. A user with Workflow rights starts the Basic workflow by enabling it.

  3. An e-mail is sent to the contributors.

  4. Any of the contributors can check out then check in a file for each content item in the workflow.

  5. Reviewers for the first step are notified by e-mail that the revisions are ready for review.

  6. The reviewers approve or reject the revisions.

    • If the step permits editing, the reviewers can check out the revisions, edit them, and check them back in before approving them.

    • If a user rejects a revision, the revision returns to the previous contribution step, and the users for that step are notified by e-mail.

    • When the minimum number of users have approved the revision, it goes to the next step. (If the minimum number of approvals is 0, the revision moves to the next step automatically.)

  7. Generally, if a Basic workflow contains multiple files, none of them are released to the system until the workflow completes; completed content items stay in PENDING status until the last revision is approved. However, a content item can be released from the edit state as a workflow side effect, without waiting for the other items in the Basic workflow.

  8. When all steps are complete and all revisions are approved, the revisions are released to the system.

Figure 7-3 Basic Workflow Process


7.4.2 Basic Workflow Tips

  • For new content, the Content ID defined in the workflow is the Content ID applied when the revision is released to the system. The Content ID cannot be changed.

  • A content item cannot be added to multiple Basic workflows; attempting to do so causes an error, and the workflow is not enabled.

  • New content is assigned to the security group of the Basic workflow.

  • The security group of an existing revision must match the security group of the Basic workflow.

  • All users assigned to the Basic workflow must have Read permission to the selected security group. Contributors must have Write permission to the selected security group to edit revisions.

  • Review steps cannot be added, edited, or deleted while a Basic workflow is active.

  • If an active workflow is canceled, any revisions in the workflow are deleted from the system. Any edits made to the files are lost unless they have also been saved on a local hard drive.

  • An inactive Basic workflow can be reused, but it must be started manually each time.

  • A Basic workflow can use jumps, but only to other steps in the same workflow. A Basic workflow cannot jump to a sub-workflow.

7.4.3 Setting Up a Basic Workflow

Keep these points in mind before creating a Basic workflow:

  • All users assigned to the workflow must have Read permission to the selected security group, and Contributors must have Write permission.

  • If using a template, change the reviewers if they are different from those defined in the selected template.

  • Do not add a content item to multiple Basic workflows; doing so causes an error, and the workflow is not enabled.

  • Enter zero (0) in the At least this many reviewers field to notify reviewers that the revision has reached the step. Reviewers cannot approve, reject, or edit the revision at that step. The workflow passes to the next step automatically.

To create a Basic workflow:

  1. Display the Workflow Admin: Workflows tab. The default tab view is that of a Basic workflow.

  2. Click Add.

  3. On the Add New/Edit Workflow page, enter a name in the Workflow Name field. The Workflow Name has a maximum field length of 30 characters and cannot contain special characters (; @ &, and so on). The name cannot be changed after the workflow is created.

  4. Enter a detailed description for the workflow in the Description field.

  5. Select the Security Group from the list to which the content items in this workflow belong.

  6. Select an option from Original Author Edit Rule to specify if the original author can edit the existing revision or create a new revision if the content item is rejected.

  7. To use a template, select the Use Template check box and select the template name. This box is displayed only if a template exists. For more information, see Section 7.6.

  8. Click OK.

  9. To add a new content item to the workflow, click New.

    On the Add Content to Workflow (New Content) page:

    1. Enter a Content ID for the new content item. The Content ID cannot be changed. To change a Content ID, delete the content item from the list and re-add it. If using an Oracle database, all Content IDs are converted automatically to uppercase letters.

    2. Click OK.

  10. To add an existing content item to the workflow, click Select.

    On the Add Content to Workflow (Existing Content) page:

    • To narrow the list of content items, specify criteria for the filter, release date or both.

    • To select a range of content items, click the first content item, then press and hold Shift and click the last content item in the range.

    • To select content items individually, press and hold Ctrl and click each content item.

    Existing content items must have the same security group as the workflow.

  11. Repeat steps 9 and 10 as necessary to add content items to the workflow.

  12. Define one or more contributors for the initial contribution step. You can define multiple types of users for the contribution step.

    • To add a group of users defined by an alias, click Add Alias to open the Add Alias to Workflow page.

    • To add individual user logins, click Add User to open the Add User: Basic Workflow page.

      • To narrow the list of users, select the Use Filter check box, click Define Filter, select the filter criteria, and click OK.

      • To select a range of users, click the first user, then press and hold Shift and click the last user in the range.

      • To select users individually, press and hold Ctrl and click each user name.

  13. If a template was not used to create review steps, or to add another step, click Add in the right pane near the Steps box.

  14. On the Add New/Edit Step page, enter an appropriate Name for the step. You cannot change the name after the step is created. The name is usually descriptive of the step (for example, EditorialReview or TechnicalReview).

  15. To require that a reviewer provide credentials for an electronic signature, select Requires signature on approval. This option is available only if you enabled the Electronic Signatures component.

    An electronic signature uniquely identifies the contents of the file at a particular revision and associates the signature with a particular reviewer. If you select this option, the standard Approve action is replaced by the Sign and Approve action in the list of step options provided to the reviewer.

  16. Enter a Description for the step.

  17. Specify the authority level of the users for the step.

    • Users can review the current revision: Users can approve or reject the revision.

    • Users can review and edit (replace) the current revision: Users can edit the revision, approve it, or reject it. An edit does not update the revision of the content item.

    • Users can review the current revision or create new revision: Users can edit the revision, approve it, or reject it. An edit updates the revision of the content item, which preserves the original content and provides an audit trail of changes.

  18. Select the type of users to be added to the step. Multiple types of user can be defined:

    • To add a group of users defined by an alias, click Add Alias to open the Add Alias to Step page.

    • To add individual user logins, click Add User to open the Add User: Basic Workflow page.

      • To narrow the list of users, select the Use Filter check box, click Define Filter, select the filter criteria, and click OK.

      • To select a range of users, click the first user, then press and hold Shift and click the last user in the range.

      • To select users individually, press and hold Ctrl and click each user name.

    • To add a variable user defined by a token, click Add Token. For more information, see Section 7.5.2.2.

    After adding users to the step, click the Exit Conditions tab and specify how many reviewers must approve the revision before it passes to the next step:

    • To require approval by all reviewers, select All reviewers.

    • To specify a minimum number of reviewers who must approve the revision, select At least this many reviewers and enter the number.

  19. Typically, exit conditions are useful when metadata could be changed by an external process during the workflow step. Use the following instructions if the step requires additional exit conditions to pass to the next step:

    1. Select the Use Additional Exit Condition check box.

    2. Click Edit.

    3. On the Edit Additional Exit Condition page, select additional criteria from lists.

    4. Select a workflow condition or a metadata field from the Field list.

    5. Select an operator from the Operator list. Operator is a dependent list that shows operators associated with the Field.

    6. Select a value from the Value list. Value is a dependent list based on the option chosen as the Field.

    7. Click Add to add the conditional statement to the Condition Clause. The clause appears in the Condition Clause box. You can append multiple clauses with AND statements.

    8. Repeat for as many conditions as required. To modify an expression, select it in the Condition Clause box, change the Field, Operator, or Value, and click Update.

    9. To modify the condition expression, select the Custom Condition Expression check box and edit the script (for example, use OR not AND for a condition). The additional exit conditions must be Idoc Script statements that evaluate to true or false. Do not enclose the code in Idoc Script tags <$ $>.


      Caution:

      If Custom Condition Expression is deselected, the expression reverts to its original definition; all modifications are lost.


    10. Click OK.

  20. If the workflow requires conditional steps or special processing, click the Events tab and add the appropriate scripts. For more information, see Section 7.5.3.2.

  21. Click OK.

  22. Add, edit, and delete steps as necessary to complete the workflow.

    • To add another user to the initial contribution step, repeat step 12.

    • To delete a user from the initial contribution step, click Delete.

    • To add another review step to the workflow, repeat steps 13 through 21.

    • To edit an existing review step, select the step and click Edit.

    • To delete an existing review step, select the step and click Delete.

  23. Ensure that the correct workflow is selected in the left pane, and click Start.

  24. On the Start Workflow page, enter a message to be sent to the contributors.

  25. Click OK.

7.4.4 Changing a Basic Workflow

To change an existing Basic workflow:

  1. Display the Workflow Admin: Workflows tab.

  2. Select the workflow to change in the left pane.

  3. To change the workflow security group or the number of review steps in an active workflow, first click Cancel to cancel the workflow.

  4. Click Edit in the left pane to change the workflow description or the security group.

  5. Click New, Select, or Delete in the Content pane to add or delete content from the workflow.

  6. Click Add Alias, Add User, or Delete in the Contributors pane to add or delete contributors from the initial contribution step.


    Caution:

    Content items can be changed in a Basic workflow after it has started, but the contributors are not notified automatically. Contributors can be changed after the workflow is started, but any new contributors are not notified immediately.


  7. Click Add, Edit, or Delete in the Steps pane to change the following:

    • Requires signature on approval (optional Electronic Signatures option)

    • Step description

    • Type of step (reviewer or reviewer/contributor)

    • Users

    • Events

    • Number of approvals required

    • Exit conditions

7.5 Customizing Workflows

Tokens and jumps are used to customize workflows to accommodate different business scenarios. A token defines variable users in a workflow, and a jump branches a workflow to a different step or workflow, optionally triggering side effects.

This section describes how to set up and use tokens and jumps. It discusses the following topics:

7.5.1 Idoc Script Functions and Variables

Jumps and tokens are created using Idoc script. The interfaces create the correct syntax and usage for you when you create tokens and jumps. However, you can customize your scripts using the following Idoc Script functions. For more information about usage, see Oracle Fusion Middleware Developing with Oracle WebCenter Content.

Idoc Script Functions

  • wfAdditionalExitCondition: Retrieves the exit condition defined for the current step.

  • wfAddUser: Adds a user, alias, or workflow token to the list of reviewers for a workflow. Use this function only inside a token.

  • wfCurrentGet: Retrieves a local state value from the companion file.

  • wfCurrentSet: Sets the local state value of a key in the companion file.

  • wfCurrentStep: Retrieves the name of a step relative to the current step.

  • wfDisplayCondition: Retrieves the exit condition for a workflow step.

  • wfExit: Exits a workflow step. Can be used to exit the workflow.

  • wfGet: Retrieves a state value from the companion file.

  • wfGetStepTypeLabel: Converts an internal workflow step value into a human-readable label.

  • wfIsReleasable: Indicates whether the document is released (as far as the workflow is concerned).

  • wfJumpMessage: Defines a message to be included in the notification e-mail that is issued when a jump is entered.

  • wfLoadDesign: Retrieves information about the existing steps in a workflow or about the exit conditions in a workflow.

  • wfNotify: Sends an e-mail to a specified user, alias, or workflow token.

  • wfReleaseDocument: Causes a workflow to release all outstanding document revisions for a document currently locked by the workflow.

  • wfSet: Sets a key with a particular value in the companion file.

  • wfUpdateMetaData: Defines a metadata value for the current content item revision in a workflow.


Idoc Script Variables

  • wfAction: The action currently being performed on the revision.

  • wfJumpEntryNotifyOff: Turns the jump notification on or off.

  • wfJumpName: The name of the current jump.

  • wfJumpReturnStep: The name of the step in the parent workflow that the revision returns to when exiting a workflow after the current jump.

  • wfJumpTargetStep: The name of the step where the revision jumps if the conditions are met.

  • wfMailSubject: Defines the subject line of a workflow e-mail notification.

  • wfMessage: Defines a message to be included in a workflow e-mail notification.

  • wfParentList: The list of workflow steps that the revision has visited.

  • wfStart: Sends the revision to the first step in the current workflow.


7.5.2 Workflow Tokens

Use a token for the following purposes:

  • Add a variable to a workflow that is interpreted as a specific user or class of users when the workflow is run.

  • Include users and aliases in workflow jumps.

  • Define users with conditional statements.

A token assignment is unique and local to each document in a workflow. The logic used to assign the token of one document does not affect other documents in the workflow.

Several sample workflow tokens are included; they can be used as-is or modified.


Important:

If a token does not resolve to any valid user names, the token is ignored. If no valid users are defined for a step, the revision moves to the next step in the workflow. For this reason, it is a good idea to identify at least one defined user for each step.


This section discusses the following information about tokens:

7.5.2.1 Token Syntax

The Idoc Script function for tokens, wfAddUser, takes two parameters:

  • User: The metadata field, alias name, or a variable that resolves to a user name or alias.

  • Type: The type of token, either user or alias.

All Idoc Script commands begin with <$ and end with $> delimiters. For example:

<$wfAddUser(dDocAuthor, "user")$>
<$wfAddUser("MktTeam", "alias")$>
<$wfAddUser("myUserList", "alias")$>

For more information about Idoc Script syntax and usage, see Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

7.5.2.2 Creating, Editing, or Deleting a Token

To create a token to represent one or more unspecified users, such as the document author:

  1. Choose Administration then Admin Applets from the Main menu. Choose Workflow Admin. Choose Tokens from the Options menu.

  2. On the Workflow Tokens page, click Add.

  3. On the Add/Edit Token page, enter a token name in the Token Name field. You cannot change the Token Name after you create the token. Try to use a descriptive name for the token (for example, GetOriginalAuthor, or AuthorManager).

  4. Enter a detailed description in the Description field.

  5. Click Add.

  6. On the Add Token User page, select User or Alias.

  7. Enter a metadata field that will resolve to a user or alias.

    • To specify the original author of the content item, enter dDocAuthor.

    • To specify an alias, enter the alias name. For a list of defined aliases, see the Aliases tab on the User Admin screen.

  8. Click OK. The Idoc Script containing the value you specified is shown in the User window.

  9. Repeat steps 5 through 8 to add another user or alias.

  10. To create a conditional token, edit the Users field. For an example, see Section 7.5.2.3.

  11. Click OK.

To change an existing workflow token, follow the previous steps and select the token to change. Click Edit. Make any necessary changes and click OK.

To delete a token, select the token from the list and click Delete.

7.5.2.3 Token Examples

The following examples illustrate how to use tokens in workflows.

Example 7-1 Original Author the Only Contributor

This example assumes that the original author is the only contributor for the file. Example 7-2 addresses the situation where different authors could check out and check in the file during the workflow process.

To notify the original author of each file when their content is released from a workflow into the system, first create a token that corresponds to the Author metadata field, which is called dDocAuthor in Idoc script. When setting up the workflow, create a notification step (0 approvals required) and select the token as the user for that step. For example:

<$wfAddUser(dDocAuthor, "user")$>

Example 7-2 Workflow with Reviewer/Contributor Steps

If a workflow includes Reviewer/Contributor steps, a user other than the original contributor could check in a revised file, and dDocAuthor would no longer be the original author. In this case, you could set an originalContributor variable in the companion file using custom script in the first workflow step, and specify the custom variable in the token instead of using dDocAuthor.

The event script in the first step might look like the following:

<$wfSet("originalContributor", dDocAuthor)$>
<$wfSet("type", "user")$>

And the token script for the notification step would look like:

<$wfAddUser(wfGet("originalContributor"), wfGet("type"))$>

Example 7-3 Reviewers Selected Based on Jump Criteria

One typical use for tokens is to select reviewers as needed, or based on the conditions of a jump. Suppose you have a workflow set up for standard routing of all marketing materials through the Marketing department. However, you want any changes to your company catalogs to also be reviewed by the Distribution department.

To do this, create a jump that adds a token to the list of reviewers whenever the Type is catalog. For example:

<$wfAddUser("Dist_Dept", "alias")$>

Example 7-4 Conditional Token

Rather than creating a jump script for the previous example, you could define a conditional token that adds the Dist_Dept alias whenever the Type is catalog. For example:

<$if dDocType like "catalog"$>
  <$wfAddUser("Dist_Dept", "alias")$>
<$endif$>

Example 7-5 Token Specifying Management Chain

Another common use for tokens is specifying the current user's manager as a reviewer. (The manager attribute must be specified as a user information field in Content Server, or as a user attribute in an external directory such as LDAP or ADSI.) For example:

<$wfAddUser(getUserValue("uManager"), "user")$>

7.5.3 Workflow Jumps

Jumps are used to customize workflows. A jump is usually a conditional statement, written in Idoc Script.

Typical uses of jumps include:

  • Specify multiple metadata fields as the criteria to enter a workflow.

  • Take action on content automatically after a certain amount of time has passed.

  • Define different paths for files to move through the same workflow depending on metadata and user criteria.

  • Release a workflow document before approval by using the side effect: Release document from edit state.

The following is an example of a jump that exits the workflow if the author is sysadmin:

<$if dDocAuthor like "sysadmin"$>
  <$wfSet("wfJumpName", "Entry step")$>
  <$wfSet("wfJumpTargetStep", wfExit(0, 0))$>
  <$wfSet("wfJumpEntryNotifyOff", "0")$>
<$endif$>

In most cases, a jump includes a conditional statement. However, a jump can consist of non-conditional code, such as the following:

<$wfSet("custom_wf_variable", new_value)$>
<$wfSet("wfJumpTargetStep", step_1)$>

This type of jump can be used to execute code or move a revision to another step automatically.
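As a sketch, a conditional jump that routes catalog items to a review step in another workflow might look like the following; the step and workflow names (DistributionReview, MarketingSub) and the jump name are illustrative, not product defaults:

<$if dDocType like "Catalog"$>
  <$wfSet("wfJumpName", "CatalogRouting")$>
  <$wfSet("wfJumpTargetStep", "DistributionReview@MarketingSub")$>
  <$wfJumpMessage("Catalog revision routed to Distribution review.")$>
<$endif$>

The step@workflow form addresses a step in a different workflow; a bare step name targets a step in the current workflow.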

This section discusses the following topics about jumps:

7.5.3.1 Jumps and Events

As mentioned in Section 7.1.1.1, each step in a workflow has an entry, update, and exit event.

A script can be created with one or more jumps for any or all of the events in a step. Any Idoc Script defined for an event is evaluated at a specific time that depends on the type of event:

  • An entry event script is evaluated upon entering the step. Every time a step is entered, a default script is evaluated in addition to any user-defined custom script. The default script keeps track of the number of times the step has been entered (entryCount) and the last time the step was entered (lastEntryTs).

    If the entry event script does not result in a jump or exit, any users, aliases, and tokens are evaluated, and e-mail notifications are sent.

  • An update event script is evaluated and triggered by a state change of the content, such as the workflow update cycle, an update of the revision's metadata, or approval or check-in of the revision.

    Extra exit conditions are evaluated each time the update event script is evaluated.

  • An exit event script is evaluated when a revision has completed the step's approval requirements and the step's extra exit conditions are met.

By default, the workflow update cycle occurs hourly. The cycle time can be increased in increments of one hour (using the WorkflowIntervalHours configuration setting), but cannot be reduced to less than one hour. To have update scripts evaluated more often or in response to other events, initiate a metadata update cycle without changing any metadata.
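For example, to lengthen the update cycle to two hours, a configuration entry along the following lines could be added to the IntradocDir/config/config.cfg file (a sketch; verify the value against your environment before relying on it):

```
WorkflowIntervalHours=2
```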

Figure 7-4 Jumps and Events


Within a single workflow, when a reviewer rejects a revision the content item is routed back to the most recent contribution step. However, when a jump is made to a sub-workflow or other workflow and content is rejected there, different behavior occurs. The process returns to the parent workflow, not to the previous contribution step.


Important:

Update and exit event scripts of the current step are not run when a revision is rejected. Any code that is to be evaluated upon rejection must be located in the entry event script for the step that the rejected file is sent to.


Side effects are the actions that take place when a revision in a workflow step meets the jump condition. Side effects can include:

  • Jump to another step in the same workflow

  • Jump to a step in a sub-workflow or other Criteria workflow (for Criteria workflows only)

  • Notify users

  • Exit the workflow

  • Set state information in the companion file

  • Release a workflow document before approval
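Several of these side effects are set through companion-file variables. The following sketch combines variables that appear throughout this section; the jump name and message text are hypothetical:

```
<$wfSet("wfJumpName", "SkipEditorial")$>
<$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>
<$wfSet("wfJumpReturnStep", wfCurrentStep(0))$>
<$wfSet("wfJumpMessage", "This item bypassed the editorial step.")$>
<$wfSet("wfJumpEntryNotifyOff", "1")$>
```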

A jump can include a target step, which tells the workflow where to go to if the content meets the jump condition. The following is an example of a target step that sends the content to the next step in the workflow:

<$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>

A jump can also include a return step, which tells the workflow where to go if the content is returning from another workflow. The following is an example of a return step that sends the content to the next step in the workflow:

<$wfSet("wfJumpReturnStep", wfCurrentStep(1))$>

A step name variable, step_name@workflow_name, is assigned to each step in a workflow. There are two ways to reference a step in a jump:

  • Explicit reference is made to a specific step name, such as Editor@Marketing Brochures.

  • Symbolic reference is made relative to the current step and workflow, such as wfCurrentStep(-1) (previous step) or wfStart (first step in the workflow).
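As an illustration, the same target step can be set with either reference style; the step and workflow names here are hypothetical:

```
<$wfSet("wfJumpTargetStep", "Editor@Marketing Brochures")$>
<$wfSet("wfJumpTargetStep", wfCurrentStep(-1))$>
```

The explicit form breaks if the step or workflow is renamed; the symbolic form does not.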


    Tip:

    Use symbolic references rather than explicit step names whenever possible, especially when creating a script template.


The entry count variable, entryCount, keeps track of how many times a step has been entered and is part of the default entry script that is updated each time a step is entered. The following is an example of how an entry count variable is used in a conditional statement:

<$if entryCount = 1$>
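A complete entry event script might use entryCount to keep a step from looping indefinitely; a minimal sketch (the jump name is hypothetical):

```
<$if entryCount > 3$>
  <$wfSet("wfJumpName", "TooManyEntries")$>
  <$wfSet("wfJumpTargetStep", wfExit(0, 0))$>
<$endif$>
```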

The last entry variable, lastEntryTs, keeps track of when the step was last entered and is part of the default entry script that is updated each time a step is entered. The following is an example of how the last entry variable is used to specify that action should occur if the step has not been acted on within seven days:

<$if parseDate(wfCurrentGet("lastEntryTs")) < dateCurrent(-7)$>

7.5.3.2 Creating a Jump

Before creating jumps, draw a flowchart and work out all the possible jump scenarios. A recommended method of structuring jumps is to create a main workflow with steps that jump to sub-workflows.

After determining what jumps must occur within a workflow, set up the jumps then test them. You can create jumps directly in an existing workflow, or you can create script templates for jumps to be reused in different workflows.

Figure 7-5 Jump Flowchart


To create a jump:

  1. Create the workflow that contains jumps. For more information, see Section 7.3.2 or Section 7.4.3.

  2. On the Workflow Admin page, select the workflow. Click the Criteria tab or the Workflows tab (depending on the type of workflow).

  3. In the Steps pane, click Add or select the step to include the jump and click Edit.

  4. On the Add New/Edit Step page, click the Events tab.

  5. Click Edit next to the event (entry, update, or exit) containing the jump.

    • If a script template does not exist, use the Script Properties page to add properties.

    • If a script template exists, on the Edit Script for StepName page, select an editing option. To create the script from a blank page, select Create New. To create the script from a script template, select Use Script Template then select a template from the list.

  6. Click OK.

To test the side effects of a jump:

  1. On the Jumps tab, click Add, or select an existing jump and click Edit.

  2. For a new jump, on the Edit Script for StepName page, enter a jump name. You cannot change the Jump Name after you create the jump. Try to use a meaningful, descriptive name.

  3. If the jump must specify a return point, select the Has Return Point check box and select a return point from the list. The possible options are: Current Step, Next Step, Previous Step.

  4. If the reviewers for this step should not be notified when the jump is entered, select the Do not notify users on entry check box.

  5. If the content item is released before approval, select the Release document from edit state check box.

  6. Enter any custom side effects in the Custom Effects field. For examples, see Section 7.5.3.4.

  7. If the reviewers are notified when the jump is entered, click the Message tab and enter the notification message.

  8. Click OK. The Edit Script page is re-displayed.

To set up conditional statements:

  1. Select a metadata field from the Field list. This is the workflow condition or metadata field to be evaluated.

  2. Select an operator from the Operator list. Operator is a dependent list that shows only the operators associated with the Field.

  3. Select a value from the Value list. Value is the value associated with the specified metadata field.

  4. Click Add to add the conditional statement to the script.

To complete the jump:

  1. If the jump must specify a target step:

    • To specify a specific step, click Select, then select the workflow and step name on the Select Target Step page. The target step name is displayed in the target step area.

    • To select a symbolic step, such as the current step or exit the workflow, select the step from the Target Step list.

  2. To modify the script that was just created, click the Custom tab, select the Custom Script Expression check box, and edit the code.


    Caution:

    If Custom Script Expression is deselected, the expression reverts to its original definition. All modifications are lost.


To test the script:

  1. Click the Test tab.

  2. Click Select.

  3. To narrow the content list, on the Content Item View page, select the Use Filter check box, click Define Filter, select the filter criteria, and click OK.

  4. Select a content item to test and click OK. Check in and process a test document so that it is in the workflow and at the step to which you are adding the jump, then select that content item for testing. If the selected content item is not currently in a workflow, you can still use it to test the script, but it is treated as if it were newly in the workflow.

  5. Click Load Item's Workflow State.

    If the selected content item is in a workflow, the companion file is loaded in the Input Data field.

  6. Click Test Script.

  7. The test results are displayed in the Results field.

    • The value of each parameter in the script is displayed.

    • If any Idoc Script errors occur, they are displayed with the script containing the errors.

  8. To save the script, click OK.

  9. Continue adding jumps as needed to different steps.

7.5.3.3 Changing a Jump

To change an existing jump:

  1. Select the workflow on the Workflow Admin page or the Workflow Admin: Workflows tab.

  2. In the Steps pane, select the step that includes the jump to be changed.

  3. Click Edit in the Steps pane.

  4. On the Add New/Edit Step page, click the Events tab.

  5. To delete a jump, click Clear in the corresponding event pane. To change a jump, click Edit in the corresponding event pane.

  6. On the Edit Script for StepName page, select an editing option:

    • To edit the existing script, select Edit Current.

    • To create a script, select Create New.

    • To use a script template, select Use Script Template then select a template from the list.

  7. On the Script Properties page, click Add, Edit, or Delete in the Jumps pane to change the jump side effects.

  8. Use the Field, Operator, Value fields and Add and Update buttons in the Script Clauses pane to change the conditional statements for the jumps.

  9. Use the Target Step list to change the target step for the jump.

  10. To change the automatically generated script, click the Custom tab, select the Custom Script Expression check box, and edit the code.


    Caution:

    If you clear the Custom Script Expression check box, the expression reverts to its original definition and modifications are lost.


  11. Test the script before saving it.

  12. Click OK to save the changes.

7.5.3.4 Jump Examples

The following examples describe how to set up different types of jumps.

Example 7-6 Metadata Criteria Jump

Suppose you have a Criteria workflow called Marketing Brochures that is defined with the Marketing security group and the MktBrochure content type. However, any brochures submitted by a graphic artist do not have to go through the first step, which is graphics department approval. You would use the Edit Script page to create the following entry event script for the first step.

<$if dDocAuthor like "bjones" or dDocAuthor like "sjames"$>
  <$wfSet("wfJumpName", "BypassGraphics")$>
  <$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>
  <$wfSet("wfJumpEntryNotifyOff", "0")$>
<$endif$>

To change the automatically generated conditional statement from "and" to "or", you must edit the script on the Custom tab of the Edit Script page.

Example 7-7 Time-dependent Jump

Suppose you want to limit the review period to one week. If the revision has not been approved or rejected, you want to notify the reviewers and process the revision through a sub-workflow called ApprovalPeriodExpired. You would use the Edit Script page to create the following update event script:

<$if parseDate(wfCurrentGet("lastEntryTs")) < dateCurrent(-7)$>
  <$wfSet("wfJumpName", "LateApproval")$>
  <$wfSet("wfJumpTargetStep",
    "NotifyAuthor@ApprovalPeriodExpired")$> 
  <$wfSet("wfJumpMessage", "The review period for content item
    <$eval(<$dDocTitle$>)$> has expired.")$>
  <$wfSet("wfJumpReturnStep", wfCurrentStep(0))$>
  <$wfSet("wfJumpEntryNotifyOff", "0")$>
<$endif$>

7.5.3.5 Jump Errors

Two responses are possible to a jump error:

  • An event script that causes an error in execution is treated as if it had never been evaluated. However, the default entry script that keeps track of the entry count and last entry is still evaluated.

  • A jump to an invalid step or a step in an inactive workflow results in an error, and the revision is treated as if it has completed the last step of the workflow.

7.6 Workflow and Script Templates

Workflow templates are a quick way to reuse workflows you have created. Each workflow template is an outline for a Basic, Criteria, or sub-workflow that is stored in the Workflow Admin tool. A workflow template is not tied to a security group, and it cannot include step event scripts. Use Script templates to store step event scripts.

The procedure used to create a template is similar to creating a workflow. This section provides an overview of the process. For more information, see Section 7.3.2 or Section 7.4.3.

This section contains the following topics:


Important:

When you use a workflow template, change the reviewers if they are different from those defined in the selected template.


7.6.1 Creating or Modifying a Workflow Template

To create or modify a workflow template:

  1. Display the Workflow Admin: Templates tab.

  2. To create a new template, click Add. To modify an existing template, select the template and click Edit.

  3. On the Add/Edit Template page, enter a template name in the Template Name field. You cannot change the Template Name after you create the template.

  4. Enter a detailed description in the Description field.

  5. Select the appropriate check box to specify whether the original author can edit the existing revision or create a new revision if the content item is rejected.

  6. To add a step, click Add.

  7. Enter an appropriate Name and Description for the step.

  8. Specify the authority level of the users for the step: Users can review the current revision, Users can review and edit (replace) the current revision, or Users can review the current revision or create new revision.

  9. Click OK.

  10. Select the type of users for the step. You can define multiple types of user for a step.

  11. Click OK.

  12. Click the Exit Conditions tab.

  13. Specify how many reviewers must approve the revision before it passes to the next step.

  14. Specify additional exit conditions if needed.

  15. If the workflow requires conditional steps or special processing, click the Events tab and add the appropriate scripts.

  16. Click OK.

7.6.2 Creating a Script Template

Script templates are a quick way to reuse step event scripts. A script template is used as a starting point for creating event scripts. Each script template is a piece of Idoc Script stored in the Workflow Admin tool.


Important:

Script templates should use symbolic step names rather than explicit references.


To create a script template:

  1. On the Workflow Admin page, select Script Templates from the Options menu.

  2. Click Add.

  3. On the Add/Edit Script page, enter a script name in the Script Name field. The name cannot be changed after it is created.

  4. Enter a detailed description in the Description field.

7.6.2.1 Setting Up Jump Side Effects

  1. On the Jumps tab, click Add.

  2. Enter a jump name. The name cannot be changed after creating the jump.

  3. If the jump must specify a return point, select the Has Return Point check box and select a return point from the list.

  4. If users should not be notified when the jump is entered, select the Do not notify users on entry check box.

  5. If the content item is released before approval, select the Release document from edit state check box.

  6. Enter any custom side effects in the Custom Effects field.

  7. If users are notified when the jump is entered, click the Message tab and enter the notification message.

  8. Click OK.

7.6.2.2 Setting Up Script Template Conditional Statements

  1. Select a metadata field from the Field list.

  2. Select an operator from the Operator list.

  3. Select a value from the Value list.

  4. Click Add to add the conditional statement to the script.

7.6.2.3 Testing the Script

  1. Click the Test tab.

  2. Click Select.

  3. To narrow the content list, on the Content Item View page, select the Use Filter check box, click Define Filter, select the filter criteria, and click OK.

  4. Select a content item to test and click OK. If the selected content item is not currently in a workflow, it can be used to test the script but it is treated as if it were newly in the workflow.

  5. Click Select Workflow.

  6. On the Select Workflow Step page, select a workflow in the Workflows pane, select a step in the Steps pane, and click OK. Select a workflow step that is similar to the ones for which the script template is used.

  7. Click Load Item's Workflow State.

    If the selected content item is in a workflow, the companion file is loaded in the Input Data field.

  8. Click Test Script.

  9. The test results are displayed in the Results field.

    • The value of each parameter in the script is displayed.

    • If any Idoc Script errors occur, they are displayed with the script containing the errors.

  10. To save the script template, click OK.

7.6.2.4 Changing a Script Template

To change an existing script template:

  1. On the Workflow Admin page, select Script Templates from the Options menu.

  2. On the Workflow Scripts page, select the script template to change.

  3. Click Edit.

  4. On the Add/Edit Script page, click Add, Edit, or Delete in the Jumps pane to change the jumps.

  5. Use the Field, Operator, Value fields and Add and Update buttons in the Script Clauses pane to change the conditional statements for the jumps.

  6. Use the Target Step list to change the target step for the jump.

  7. To modify the automatically generated script, click the Custom tab, select the Custom Script Expression check box, and edit the text.


    Caution:

    If you clear the Custom Script Expression check box, the expression reverts to its original definition and modifications are lost.


  8. Test the script before saving it. For more information, see Section 7.5.3.2.

  9. Click OK to save the changes.

7.6.2.5 Deleting a Script Template

To delete an existing script template:

  1. On the Workflow Admin page, select Script Templates from the Options menu.

  2. On the Workflow Scripts page, select the script template to delete.

  3. Click Delete.

  4. On the confirmation page, click Yes.

7.7 Workflow Scenarios

The following workflow scenarios describe the planning process and the types of actions required to accomplish specific workflow tasks. This section includes the following workflow examples:

7.7.1 Scenario 1: Criteria Workflow

Your Marketing department wants to have all marketing brochures approved by at least one of three graphic artists, the editor, and all of the marketing supervisors. The graphic artists and editor can edit the content, but the supervisors should not have editing privileges.

To set up the workflow for this example, you would:

  • Define a security group called Marketing, and ensure that the graphic artists and the editor have Write permission, and the marketing supervisors have Read permission to the security group.

  • Define a content type called MktBrochure.

  • Define a workflow called Marketing Brochures, with the security group set to Marketing and criteria set to Type = MktBrochure.

  • Define the first step, called Graphic Artist, as a Reviewer/Contributor step with approval required from at least 1 reviewer. Because the graphics department is very stable, you can assign the user logins of the three graphic artists to the step.

  • Define the second step, called Editor, as a Reviewer/Contributor step. Assign the editor's user login to the step.

  • Define the third step, called Marketing Team, as a Reviewer step with approval required from all reviewers. The management structure changes frequently, so set up an alias called MktTeam and assign it to this step.

  • All marketing brochures must be checked in to the Marketing security group with a Type of MktBrochure, so it is a good idea to instruct all possible contributors of marketing brochures about how to check them in.

  • For the approval process to work correctly, the MktTeam alias must be kept up-to-date.

7.7.2 Scenario 2: Tokens

After you created the Marketing Brochures workflow in Scenario 1, the Marketing department requested that all marketing brochures be returned to the original author for final review before they are released. The original author should not have editing privileges.

To set up the workflow for this example, you would:

  • Create a token called Author and define the user as dDocAuthor.

  • Define a fourth step in the workflow, called Original Author, as a Reviewer step. Assign the Author token to the step.
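Following the wfAddUser pattern shown earlier in this chapter, the Author token body might consist of a single call (a sketch):

```
<$wfAddUser(dDocAuthor, "user")$>
```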

7.7.3 Scenario 3: Jump Based on Metadata

The Marketing Brochures workflow you created in Scenarios 1 and 2 is working smoothly, but now the Marketing department would like to automatically notify the various sales reps when a new brochure is released for one of their product lines.

To set up the workflow for this example, you would:

  • Define a required custom metadata field called Product, and create a list of the products.

  • Set up an alias for each product, and assign the appropriate sales reps to each alias. You can assign each user to multiple aliases.

  • Define a fifth step in the workflow, called Notify Sales, as a Reviewer step with approval required from zero (0) reviewers.

  • Define a sub-workflow for each product that contains one Reviewer step with approval required from zero (0) reviewers. Assign the corresponding product alias to the step.

  • Define an entry script in the Notify Sales step that jumps to the sub-workflow that matches the product.

  • For the notification process to work correctly, the product list and aliases must be kept up-to-date.
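The entry script for the Notify Sales step might test the Product field and jump to the matching sub-workflow. In this sketch, the field name (xProduct), product value, and sub-workflow and step names are all hypothetical:

```
<$if xProduct like "WidgetA"$>
  <$wfSet("wfJumpName", "NotifyWidgetASales")$>
  <$wfSet("wfJumpTargetStep", "NotifyReps@WidgetA")$>
  <$wfSet("wfJumpReturnStep", wfCurrentStep(1))$>
<$endif$>
```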

7.7.4 Scenario 4: Time-Dependent Jump

The Marketing department is having trouble getting marketing brochures approved quickly. They would like to change the Marketing Brochures workflow to automatically move content to the next step if it hasn't been approved or rejected by the graphics department, supervisors, or original author within 7 days. The editor is allowed a little more time: 10 days before the content goes to the next step.

To set up the workflow for this example, you would:

  • Define a script template called AutoApprove, with a target step that goes to the next step in the workflow if the last entry was 7 days ago.

  • Add an update jump to the Graphic Artist, Marketing Team, and Original Author steps. Use the AutoApprove script template to create the jump.

  • Add an update jump to the Editor step, using the AutoApprove script template. Edit the script so that the jump occurs at 10 days rather than 7 days.
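The AutoApprove script template might follow the time-dependent jump shown in Example 7-7, using the symbolic next step as the target (a sketch):

```
<$if parseDate(wfCurrentGet("lastEntryTs")) < dateCurrent(-7)$>
  <$wfSet("wfJumpName", "AutoApprove")$>
  <$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>
<$endif$>
```

For the Editor step, dateCurrent(-7) would become dateCurrent(-10).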

7.8 Workflow Tips and Tricks

This section describes workflow tips and tricks:

In addition to this functionality, further customizations are available through Consulting Services. For more information, see Section 7.8.6.

7.8.1 Requiring Step Authentication

It is sometimes necessary to re-authenticate a user for a particular step of a workflow. For each workflow step that requires authentication before approval:

  1. Add the following line to the IntradocDir/config/config.cfg file:

    <workflow step name>:isRepromptLogin=true

    If multiple workflow steps require validation, add those steps on separate lines.

  2. Restart Content Server.

  3. Set up and enable the workflow, making sure to use the step name designated in the configuration entry for the step where validation is required.

When the workflow is initiated, the users at the workflow step designated with the isRepromptLogin configuration variable are prompted to log in before they can approve the content at the workflow step.

In the following example, validation is required at the steps named VIPApproval and CEOsignoff. The following entries are added to the config.cfg file:

VIPApproval:isRepromptLogin=true
CEOsignoff:isRepromptLogin=true

Content Server is restarted and a workflow is set up and enabled with steps named VIPApproval and CEOsignoff. Multiple users are assigned to the VIPApproval step, and only one user (the CEO of the company) is assigned to the CEOsignoff step.

Before the users at those steps can approve the workflow item, they must login again.

This functionality is available in Content Server 7.5 and later versions.

7.8.2 Setting Up Parallel Workflows

It is sometimes desirable to have two distinct groups of users able to review content items in workflow at the same time and to have a specified number of users from each group approve the content before it proceeds in the workflow.

When using Content Server, either all users or a specified number of users must approve the content before it continues. Usually the workflow does not differentiate between sources of approval. Consequently, all members of one group could approve the content while none of another group did, and the content would still advance through the workflow. The following code provides an example of how to add approval process discrimination.

This code allows step users to be set into groups. At each approval, the script checks for the group to which a user belongs. A user can belong to multiple groups; if the approving user is in a group, the counter for that group is incremented by one.

Extra exit conditions hold the content in the step until the extra conditions are met.

Add the following code in the entry portion of the step:

<$wfSet("set1", "0")$>
<$wfSet("set2", "0")$>
<$group1 = "user1, user2, user3,"$>
<$wfSet("group1", group1)$>
<$group2 = "user8, user9, user10,"$>
<$wfSet("group2", group2)$>

Add the following code in the update portion of the step:

<$if wfAction like "APPROVE"$>
<$if strIndexOf(wfGet("group1"), dUser) >=0$>
<$set1 = toInteger(wfGet("set1"))+1$>
<$wfSet("set1", set1)$>
<$endif$>
<$if strIndexOf(wfGet("group2"), dUser)>=0$>
<$set2 = toInteger(wfGet("set2"))+1$>
<$wfSet("set2", set2)$>
<$endif$>
<$endif$>

Add the following code in the extra exit conditions portion of the step (where n is the number of required approvers from group 1 and r is the number of required approvers from group 2):

toInteger(wfGet("set1")) >= n
toInteger(wfGet("set2")) >= r

By checking the approving user during each approve action, this workflow code increments the counter of the group to which the user belongs. The extra exit conditions hold the content item in the step until the minimum number of users in each group have approved it. If more than the minimum number of required approves for each group are executed, the approve actions are still logged but the content item does not proceed.

Reject actions are still absolute. A rejection from any named user still executes normal workflow reject behavior.

7.8.3 Adding Ad Hoc Step Users

You can add users to workflow steps without using metadata fields normally accessed by tokens. For example, a content item is traveling (and being edited) in workflow; each edit lists the person editing the content as the dDocAuthor. To send the item to the original author after the workflow cycle, a special token must be created:

<$wfAddUser(wfGet("originalContributor"), wfGet("type"))$>

Add the following code to the entry event of the first step in the workflow to capture the original author:

<$originalContributor=dDocAuthor$>
<$wfSet("originalContributor",originalContributor)$>
<$type="user"$> 
<$wfSet("type",type)$>

The event script uses wfSet() to put custom variables and values into the companion file at a point before the token call. The token then uses wfGet() to pull out those values and set the step user.

You can use this technique to obtain and store any standard or custom Idoc variable that holds valid user names or aliases. The Idoc variable can contain a comma-delimited list of user names or aliases. If user names are being stored, the <$type$> variable must be set to user (for example, <$type="user"$>). If alias names are being stored, the <$type$> variable must be set to alias (for example, <$type="alias"$>).
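For example, to store an alias instead of a user name, the entry event code sets both variables accordingly; the alias name here is hypothetical:

```
<$reviewTeam="MktTeam"$>
<$wfSet("reviewTeam", reviewTeam)$>
<$type="alias"$>
<$wfSet("type", type)$>
```

The token then becomes <$wfAddUser(wfGet("reviewTeam"), wfGet("type"))$>.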

When placed in the entry event of a workflow step with the token set as the step user, the entry event code processes the information. It stores the user name (or alias) which is then called by the token and is set as a step user (or users, if a list was specified). Adding multiple or conditional code blocks and tokens (as shown previously) to your step entry events and step user definitions allows true ad hoc workflow routing.

7.8.4 Customizing Criteria Workflow E-mails

E-mails are triggered by criteria workflow at three points in the process:

  • On entry to a step.

  • On receipt of a reject reasons form.

  • On execution of the wfNotify Idoc script function.

It is possible to customize the e-mail message, the e-mail subject, and the template used for e-mails sent during criteria workflows. This section describes the processes for customizing the e-mail aspects of criteria workflows.

This section includes the following topics:

7.8.4.1 Customizing E-mail Templates

The two templates most commonly used to generate e-mail messages sent to recipients involved in a workflow are reviewer_mail.htm and reject_mail.htm. These are stored in IdcHomeDir/resources/core/templates.

You can modify these templates like any other template. E-mail template modification provides the greatest flexibility and opportunity for customizing workflow e-mails. Although this kind of modification is relatively straightforward, it still requires careful component development. The subject and message lines are often the most important parts of the e-mail, and thus are the most frequently modified.

Custom workflow e-mail templates based on the standard templates can also be created. To call custom templates, add them as the optional third parameter to the wfNotify function, as in these examples:

<$wfNotify(userName, "user", templateName)$>
<$wfNotify(aliasName, "alias", templateName)$>

If an alternate template is not specified, the system default template is used.

For more information, see the discussion of wfNotify in Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

7.8.4.2 Customizing the Subject or Message Line

You can customize criteria workflow e-mail subject and message lines for your application. The e-mail subject line appears in the e-mail; the message line appears in the e-mail body with other information about the workflow e-mail (workflow name, step, and content item).

The message line defaults to one of two messages, depending on whether the step is notification-only (that is, if it has zero required reviewers).

You can customize subject lines and message lines in two ways:

  • You can modify the core string resource file according to standard component architecture.

  • For simple customizations, you can declare the wfMailSubject (for e-mail subjects) or wfMessage (for message lines) Idoc Script variable in a criteria workflow step event script or store it in the companion file.

7.8.4.2.1 Modifying Strings

The string definitions are as follows (variable 1 is the content item title and variable 2 is the name of the workflow step):

<@wwWfIsNotifyOnly=Workflow notification for content item '{1}' is in step '{2}'.@>

<@wwWfReadyForStep=Content item '{1}' is ready for workflow step '{2}'.@> 

<@wwWfRejected=Content item '{1}' has been rejected.@> 

You usually call these string definitions with the Idoc Script <$lc()$> localization function and you can alias them in a component resource file. For an example of e-mail subject line string includes, see the <@dynamichtml wf_approve_mail_subject@> include definition in the std_page.htm file.

7.8.4.2.2 Changing Idoc Variables

For simple e-mail or subject line changes, you can use Idoc Script rather than a component. You can place the wfMailSubject configuration variable or the wfMessage configuration variable in a step event script. The value of these variables can also accept Idoc Script, as in this example:

<$wfMailSubject="My custom subject text for content with <$dDocTitle$> title"$>

No eval() function is required for the Idoc Script variables to evaluate.

If wfMailSubject or wfMessage is placed in the entry event of a workflow step, the e-mail messages triggered by content entering the step receive the customized subject or message line. These variables can also be declared before a wfNotify() function, and the e-mail generated by that function then receives the customization.
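For example, declaring the variables immediately before a wfNotify() call customizes only that notification; a sketch in which the subject and message text are illustrative:

```
<$wfMailSubject="Please review <$dDocTitle$>"$>
<$wfMessage="This item requires your approval within seven days."$>
<$wfNotify(dDocAuthor, "user")$>
```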

7.8.5 Workflow Escalation

This example requires familiarity with tokens and coding.

Workflow escalation (that is, dynamically routing workflow content to people other than the listed users) is a common workflow requirement. While this is easily accomplished with jumps and parsing of some criteria (for example, date, action, or metadata), there is often some initial confusion or hesitation about where or to whom the content should go.

The issue is further complicated when the solution is to add a larger number of step users and to require only a subset of those users to approve the content before it can move on. Such a workflow still generates and sends out e-mails, and appears in the workflow queues of non-approving users, users out of the office, and others who are there "just in case" a primary reviewer is not available.

7.8.5.1 Setting Up a Workflow Escalation

This solution incorporates two custom user metadata fields, a token, and workflow step entry and update event code.

  1. Create a user metadata field with the following elements:

    • name: OutOfOffice

    • type: text

    • option list: yes

    • option list type: select list validated

    • option list values: <blank>, false, and true

  2. Create another user metadata field with the following elements:

    • name: OutOfOfficeBackup

    • type: Long Text

    The OutOfOffice field is a flag that the user sets as TRUE when they are out of the office. To set this flag, select TRUE from the list and click Update to update the user profile.

    The OutOfOfficeBackup field contains the user name(s) of those users who can fill in as proxies for the out-of-office user. This field should optimally contain a single user name, formatted as shown in the User Admin applet.

    When the following workflow step event code finds that a listed user is "out of the office" it switches that user's name for the listed proxy (as designated by the value in the OutOfOfficeBackup field). The workflow step then restarts with only the users who had not yet approved the content and any designated proxies.

    A token pulls the list of users out of the companion file and sets them as step users. Special workflow messages are sent to designated proxies from the update event while the content is sitting in the step.

  3. Create a workflow Token

    • name: DynamicStepUsers

    • Token Code:

      <$if wfGet("dsu")$> 
        <$wfAddUser(wfGet("dsu"), "user")$> 
      <$else$>
        <$wfAddUser("sysadmin", "user")$> 
      <$endif$> 
        <$if wfGet("dsa")$> 
           <$wfAddUser(wfGet("dsa"), "alias")$>
      <$endif$> 
      

    The token pulls values for the custom variables dsu and dsa out of the companion file. If no value is found for dsu, the sysadmin user is added as a precaution. The initial values for dsu and dsa are set in the entry event of the step. If you want to assign users across multiple steps, add the first contributor to the first step; the next step must then include the first user plus the second user.

  4. Paste the following code into the entry event of your workflow step. Comments are included throughout the entry code; remove the comments before inserting into your step:

    <$restartFlag=wfGet("restartStep")$>
    

    See if the cause is a restart for new backup users. If so, suppress notifications to old users and notify only the new ones.

    <$if restartFlag$>
    <$if toInteger(wfGet("restartStep"))>=1$>
      <$wfSet("wfJumpEntryNotifyOff", "1")$>
      <$oooUsers=wfGet("outOfOfficeUsers")$>
      <$if strIndexOf(oooUsers,",")>=1$>
    

    Check for multiple out of office users

      <$wfMessage=eval("The Following Users are out of the office: <$oooUsers$>
      \nYou are the designated backup for one of them.\n
      The content item <$dDocName$> is in the workflow step <$dwfStepName$> awaiting your review")$>
    <$else$>
      <$wfMessage=eval("The Following User is out of the office: <$oooUsers$>
      \nYou are the designated backup for this user.\n
      The content item <$dDocName$> is in the workflow step <$dwfStepName$> awaiting your review")$>
    <$endif$>
    <$rsMakeFromString("ooou",oooUsers)$>
    <$loop ooou$>
      <$userBackupName=row&"_Bkup"$>
      <$wfNotify(wfGet(userBackupName),"user")$>
    <$endloop$>
      <$wfSet("restartStep","0")$>
    <$endif$>
    <$else$>
      <$wfSet("restartStep","0")$>
    <$endif$>
    

    Set your step users here. Multiple users must be comma-delimited; user names must be in quotes; metadata field references are unquoted. This example proceeds using user names rather than aliases. Slight code reworking is required if you use aliases.

    <$dynamicStepUsers= <LIST USER NAMES IN QUOTES HERE>$>
    
    <$if strIndexOf(dynamicStepUsers,",")>=1$>
    

    If there are multiple users specified verify if any of the MULTIPLE users are out of the office

    <$rsMakeFromString("multiDynamicUsers",dynamicStepUsers)$>
    <$loop multiDynamicUsers$>
      <$if strEquals(getValueForSpecifiedUser(row,"uOutOfOffice"),"true")$>
    

    If user is out of office, get their backup

      <$backup=getValueForSpecifiedUser(row,"uOutOfOfficeBackup")$>
    

    Replace out-of-office user in users list with their backup

      <$dynamicStepUsers=strReplace(dynamicStepUsers,row,backup)$>
      <$endif$>
      <$endloop$>
       <$else$>
    

    Verify if the SINGLE user is out of the office

      <$if strEquals(getValueForSpecifiedUser(dynamicStepUsers,"uOutOfOffice"),"true")$>
    

    If user is out of office, get their backup

    <$dynamicStepUsers=getValueForSpecifiedUser(dynamicStepUsers,"uOutOfOfficeBackup")$>
      <$endif$>
    <$endif$>
    <$wfSet("dsu",dynamicStepUsers)$>
    

    Set the dsu variable into the companion file with listed and backup users

  5. Paste the following code into the update event of your workflow step, removing comment lines before doing so:

    <$remainingUsers=wfGet("wfUserQueue")$>
    

    Get users who have yet to approve

    <$rsMakeFromString("ru", remainingUsers)$>
    <$loop ru$>
    <$if strEquals(getValueForSpecifiedUser(row,"uOutOfOffice"),"true")$>
    

    Check the remaining users to see if they are out of the office

    <$if getBackup$>
    

    Create list of users requiring backup substitutes

      <$getBackup=row &","&getBackup$>
    <$else$>
      <$getBackup=row$>
    <$endif$>
    <$endif$>
    <$endloop$>
    <$if getBackup or strLength(getBackup)>0 $>
    

    If a user is listed as out of office then rewrite the user list and restart the step

    <$wfSet("outOfOfficeUsers",getBackup)$>
    <$rsMakeFromString("needBkup", getBackup)$>
    <$loop needBkup$>
    <$bkupUser=getValueForSpecifiedUser(row,"uOutOfOfficeBackup")$>
    <$newRemainingUsersList=strReplace(remainingUsers,row,bkupUser)$>
    <$wfSet(row&"_Bkup",bkupUser)$>
    <$endloop$>
    <$wfSet("dsu",newRemainingUsersList)$>
    <$getBackup=""$>
      
    <$wfSet("wfJumpTargetStep", wfCurrentStep(0))$>
    <$wfSet("restartStep","1")$>                             
    <$endif$>
    

7.8.6 Other Customizations

Many workflow customizations are available through Consulting Services. This section briefly describes two of those customizations. Consider engaging Oracle Consulting Services for a thorough evaluation of the impact of replicating these workflow examples.

The following customizations are discussed in this section:

7.8.6.1 Setting Approval by Non-Reviewers

It is sometimes necessary to have a person approve content in a workflow step even if that individual is not part of the actual workflow. For example, you might want an outside opinion on a document at a particular stage in a workflow, or you might need to designate a substitute reviewer because another reviewer is out of the office. The individual can approve content, but does not receive normal workflow notifications and does not see the content items in a workflow queue.

Users with designated roles or who are members of designated aliases can approve content on this basis. These designated users see a Bypass Approve link in the workflow actions box for each item in a workflow to which they have normal security access. Performing a Bypass Approve approves the content item in the step in which it currently resides. Defined step exit conditions are still evaluated and still apply. The approval is logged in the Workflow History database table and in the WorkflowActionHistory ResultSet of the companion file.

A designated approver is not a regular workflow step approver and thus does not receive automatic workflow notifications. Access to a content item in a workflow where the approver wants to perform a Bypass Approve action must be intentional. The designated approver must access the Active Workflows menu, then select the workflow name and select an action.

The designated approver component aliases core resources. If other components are running or are planned, this must be taken into account. In some cases, you can combine component resources to include all required functionality or you can rename and re-reference them to keep all components working correctly (but separately).

7.8.6.1.1 Scenarios

For the following scenarios, assume that User A has a role or alias that grants designated approver status and that a sample workflow named MyWF has two steps.

  • Scenario 1: User A is listed as an approver for step 1 in MyWF but not in step 2. Therefore, no Bypass Approve link appears for step 1; User A receives default workflow action capabilities and notifications. The Bypass Approve link for step 2 appears under the Workflow Actions menu if User A accesses the Content in workflow 'MyWF' page when the content item is in step 2.

  • Scenario 2: User A is not listed as an approver in MyWF. The Bypass Approve link appears in the action menu on the Content in workflow 'MyWF' page for all steps and for all content to which User A has at least Read permission in its Security group.

  • Scenario 3: User A is not listed as an approver for Step 1 in MyWF. Step 1 requires two approvals from reviewers before it moves to step 2. The Bypass Approve link appears for User A. Click Bypass Approve to register an approval and fulfill one of two required approvals for the step to continue in the workflow.

7.8.6.2 Automatic Replication of Workflow Items

Content items are often processed (in one form or another) before they are 'released'. A released content item is indexed and can then be included in search results, archives, and in other processes and applications.

It is sometimes useful to release a content item before its completion in the workflow. For example, a content item must be in a released state to be replicated (and perhaps later used in disaster recovery). Items not released (such as items in workflows) are not used by the replicator.

It is possible to designate workflow items as released while still in workflow. Normal workflow actions, such as updating, checking out, and checking in, are still available. However, these items are indexed and appear in search results, and can be used in replication, archiving, and any other processes or applications.


Important:

There are several elements to consider before replicating workflow items. This section describes the process but does not go into detail; contact Consulting Services before setting up replication of workflow items.


7.8.6.2.1 Potential Conflicts

There are several points to be aware of before setting up automatic replication:

  • Loss of data integrity: A premature release of content, whether intentional or otherwise, can have unforeseen effects on business process management, content accessibility, and information integrity. Potential ramifications must be thoroughly considered before the release of items still in workflow is considered.

  • Non-capture of workflow information: Workflows are a combination of process and content. While it is possible to release content and make it available for replication and other processes, it is not possible (without customization and assistance from Consulting Services) to capture and replicate workflow state information.

    If content in a workflow on a source instance is replicated and released on a target instance without passing through a workflow on that instance, the recovery process involves manual effort to re-set the content's workflow state.

    In most cases, cloning of workflow information is not necessary because recovery to a prior workflow state is required only during true disaster recovery. In all other cases, content items replicate as normal after exiting a workflow and supersede versions or revisions replicated during workflow.

  • Imported content items do not enter workflow: Content items replicated while in a workflow on a source instance do not enter a workflow on the target instance without additional customization and assistance from Consulting Services. Replicated content items are checked in to the target instance and released.

  • Restoration of replicated workflow items requires manual intervention: You can capture some workflow information as metadata for use in a manual restoration of the workflow. Only a check-in action triggers a content item's entry into workflow, so restoring an item to a workflow requires creating a new revision.

    To recover the content items on the target to a state as close as possible to their previous state on the source, a discrepancy between the number of revisions on the source and the target instances is intentionally and unavoidably introduced.

7.8.6.2.2 Scenarios

For content item 1, not released while in workflow, the following is true:

  • The content item moves through workflow in a non-released state.

  • The content item is not a candidate for replication while in workflow.

  • The content item is available only to named workflow step reviewers and administrators.

  • When the content item completes the workflow, it is released, indexed, and is now a candidate for replication.

For content item 2, released while in workflow, the following is true:

  • The content item moves through workflow and is released as specified. After an item is released, it cannot be 'unreleased'. New revisions of a content item created during workflow are not released unless specified.

  • Content items released while in workflow display in search results and can be viewed by users with the appropriate security access.

  • Users cannot edit, check out, check in, or update a released item that is in workflow unless they are designated users for the step in which the item is located and that step allows the attempted action.

  • A content item released while in workflow is a candidate for automatic replication. The Replicator treats the item as if it is not in workflow.

  • Content items completing workflow are replicated in the normal fashion and supersede any pre-existing content item versions replicated during workflow.

For information about implications of the release of content items in workflow for replication or other purposes, contact Consulting Services.

7.8.7 Triggering Criteria Workflows from Folders

It is sometimes necessary for documents in a particular folder to go through a criteria workflow. However, when you create a criteria workflow based on a folder, the criteria option of folder is not listed in the field list. The field list in criteria workflows only lists fields with a type of text or long text. Because Folder (xCollectionID) is an integer field, it is not an option.

Although you cannot select the folder field on the Edit Criteria form, you can define it as a criterion in Events. In the Entry Event of the first step, you can set up criteria to check for the appropriate folder number (xCollectionID). If it does not fulfill the criteria, the item can exit from the workflow.

The following general steps detail how to set up such a workflow:

  1. Start a new criteria workflow and choose the Security Group that the workflow uses.

  2. In the Criteria Definition section, define global criteria to monitor all of the documents that enter the system. For example:

    Field: ContentID
    Operator: Matches
    Value: *
    

    To monitor only items coming in through a specific folder, set an extra metadata field that specifies a folder number.

    If multiple workflows are in place, you can filter all content through this workflow and jump to sub-workflows through multiple criteria settings in the first step of this workflow.

  3. Add the first step to the workflow.

  4. In the Events tab, click Edit from the Entry Event.

  5. On the Jumps tab, click Add.

  6. Give the jump a meaningful name (for example, Folder Criteria) and click OK.

  7. For the jump criteria, enter the following:

    Field: Folder
    Operator: Not Equals
    Value: Folder ID on which the workflow is based
    

    When done, click Add.

  8. For the Target Step, select Exit to Parent Step and change both of the 0 parameters to 10 (for example, @wfExit(10,10)). Documents not in the folder are forced out of the workflow.

  9. Click OK on the Entry Event and click OK on the Add New Step dialog.

  10. Add the necessary jumps, steps, and events for the rest of the workflow and enable it.
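The jump defined in steps 5 through 8 is stored as entry event script. The following is a sketch of what such a script can look like, using a hypothetical folder ID of 843; the exact syntax the Workflow Admin applet generates for your release may differ:

```
<$if xCollectionID and xCollectionID != "843"$>
  <$wfSet("wfJumpName", "FolderCriteria")$>
  <$wfSet("wfJumpTargetStep", wfExit(10, 10))$>
  <$wfSet("wfJumpEntryNotifyOff", "1")$>
<$endif$>
```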

7.8.8 Searching Within a Workflow Step

When executing the GET_SEARCH_RESULTS service within a workflow step, you can experience data corruption because the workflow's data binder is being used by the service.

A solution for this is to temporarily set the security group value into a temporary variable. Then clear the current security group value, make the call for the search results, then reset the security group back again.
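A minimal sketch of that save/clear/restore pattern in step event script follows, assuming the binder field in conflict is dSecurityGroup and that the executeService() Idoc Script function is available in your environment:

```
<$tempSecurityGroup = dSecurityGroup$>
<$dSecurityGroup = ""$>
<$executeService("GET_SEARCH_RESULTS")$>
<$dSecurityGroup = tempSecurityGroup$>
```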

7.8.9 Suppressing Workflow Notifications

When a workflow step requires multiple approvers, a user who has approved the document can be re-notified during a timed workflow update cycle. To prevent additional notifications, use the wfSetIsNotifyingUsers workflow function. Used in a workflow step in the script section of the workflow, it sets an internal flag to determine if workflow notifications are sent out during the current document action (check in, approve, update, and so on). The suppression is applied to both e-mail and updates to the workflow in the queue.

When used in combination with wfIsFinishedDocConversion, this function can suppress notification until conversion is finished. It does not prevent documents from advancing out of the auto-contributor step but it does stop updates of the workflow in queue and notification e-mails.

These notifications are not lost. If the wfSetIsNotifyingUsers function is not used in a future workflow event to suppress notifications (updates to workflow in queue and workflow mail) then all users participating in the current step are notified.

You can use the following additional functions in the script section:

  • wfIsFinishedDocConversion, which returns a result indicating if the document is not in GENWWW after the current document action ends.

  • wfIsNotifyingUsers, which returns a result indicating if the workflow is currently suppressing all workflow notification for this particular workflow event.

For information about using these functions, see Oracle Fusion Middleware Developing with Oracle WebCenter Content.
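Putting these functions together, one plausible shape for a step script that suppresses notification until conversion finishes is shown below. The argument convention for wfSetIsNotifyingUsers is an assumption here; verify it against the Idoc Script reference for your release:

```
[[% Document still in GENWWW: suppress e-mail and workflow queue updates %]]
<$if not wfIsFinishedDocConversion()$>
  <$wfSetIsNotifyingUsers(false)$>
<$endif$>
```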


35 Script Templates

This chapter describes script templates for dynamic conversion and explains how to use script templates.

This chapter covers the following topics:

35.1 About Script Templates

Script templates are the text-based conversion templates that were primarily used in earlier versions of Dynamic Converter. They are plain-text files that must be hand-coded with elements, indexes, macros, pragmas, and Idoc Script. You can still use this template format in Dynamic Converter, but Classic HTML Conversion templates (see Chapter 32, "Conversion Templates") have, for the most part, replaced script templates.


Note:

See Oracle Fusion Middleware Developing with Oracle WebCenter Content for more information on Idoc Script.


The following is the code for a very simple script template:

{## unit}{## header}
<html>
<body>
{## /header}
<p>Here is the document you requested.
{## insert element=property.title} by
{## insert element=property.author}</p>

<p>Below is the document itself</p>
{## insert element=body}

{## footer}
</body>
</html>
{## /footer}{## /unit}

The {## unit}, {## /unit}, {## header}, {## /header}, {## footer} and {## /footer} macros can be ignored for the moment. Their purpose is described in Macros.

The remainder of the file is regular HTML code with the exception of three macros in the form {## insert element=xxx}. Dynamic Converter uses this template plus the source file to create its output. To accomplish this, Dynamic Converter reads through the template file, writing it byte for byte to the output file unless character mapping is performed on the template. This continues until the template contains a properly formatted macro. Dynamic Converter reads the macro and executes the macro's command. Usually this means inserting an HTML version of some element from the source file into the output file. Dynamic Converter then continues reading the template and executing macros until the end of the template file is reached.

In the example above, the first {## insert} macro uses the element syntax (described in Insert Element: {## INSERT}) to insert the title of the document. The second macro inserts the author of the document and the third macro inserts the entire body of the document. The resulting HTML might look like this (HTML that is the result of a macro is in bold):

<html>
<body>
<p>Here is the document you requested.
A Poem by
Phil Boutros</p>

<p>Below is the document itself</p>
<p>Roses are red</p>
<p>Violets are blue</p>
<p>I'm a programmer</p>
<p>and so are you</p>

</body>
</html>

35.2 Elements

This section covers the following topics:

35.2.1 Element Tree

Dynamic Converter uses the concept of an element tree to make various pieces and attributes of the source file individually addressable from within a script template.

The nodes of the element tree are used to generate a path to a specific element, and a period is used to separate the nodes in this path. For example, the path of the author property of a document is Property.Author.

For convenience, certain nodes in an element path may be skipped because they represent the obvious default behavior. These nodes include the Sections node (Sections.Current.Body.Title is equivalent to Body.Title), and the Body and Contents nodes (Body.Contents.Headings.1.Body is equivalent to Headings.1.Body).


Important:

These nodes may not be skipped if they are the last node in the path (Headings.1.Body is not equivalent to Headings.1).


There are two types of elements in the element tree: leaf elements and repeatable elements (see Section 35.2.2 and Section 35.2.3, respectively).

Figure 35-1 Example of an Element Tree

Example of an element tree

35.2.2 Leaf Elements

Leaf elements are single identifiable pieces of the source file, like the author property (Property.Author) or the preface of the document (Body.Contents.Preface). This type of element is a valid target for inserting, testing, and linking using the {## INSERT}, {## IF} and {## LINK...} macros. The last node in this type of path must be a valid leaf node in the document tree. Valid leaf nodes are shown in italics in the element tree example in Element Tree.

35.2.3 Repeatable Elements

Repeatable elements have multiple instances associated with them, like the footnotes in a document (Sections.1.Footnotes). This type of element may not be directly inserted, tested or linked to but its instances may be looped through using the {## REPEAT} macro. The last node in this type of path must be a valid repeatable node in the document tree. Valid repeatable nodes are shown in bold in the element tree example in Element Tree.

Some templates use {## REPEAT} loops to generate one output file per repeatable element. For example, a template may render a presentation file as a group of output files, with one output file for each slide. When an input file contains an exceptionally large number of sections, it is possible for an operating system to run out of file handles. See your operating system's documentation or system administrator to find out how many open file handles are allowed. To avoid this extremely rare problem, set a value for the maxreps attribute of the {## REPEAT} macro or configure the operating system to allow more file handles.
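For instance, a template that caps how many slides it renders might use the maxreps attribute as follows (the value 50 is an arbitrary illustration):

```
{## repeat element=slides maxreps=50}
{## insert element=slides.current.image}
{## /repeat}
```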

35.2.4 Element Definitions

The following table contains a list of all supported elements and a brief description of each. (See Section 35.3 for a description of valid values for x.)

Element | Type | Description

Property.Author

Leaf

Author property of the source file.

Property.Title

Leaf

Title property of the source file.

Property.Subject

Leaf

Subject property of the source file.

Property.Keywords

Leaf

Keywords property of the source file.

Property.Comments

Leaf

Comments property of the source file.

Property.Others

Repeatable

This permits access to all properties not specifically accessible through the property elements described above, and includes both the "Name" and the "Body" of the property. Which "Other" properties are supported is file format dependent. Some file formats also allow additional user-definable properties.

Only text properties are accessible. Properties such as Yes/No, numeric values, and dates are not supported.

Property.Others.x.Name

Leaf

Descriptive name for the property.

Property.Others.x.Body

Leaf

Text of the property.

Sheets

Repeatable

See 'Sections' below.

Slides

Repeatable

See 'Sections' below.

Sections

Repeatable

Sections are used to represent the highest level of abstraction within the source file. In general, word processor documents will have only one section, the document itself. Spreadsheets have one section for each sheet or chart. Presentations have one section for each slide. Graphics generally have one section but may have more, as in a multi-page TIFF.

For convenience and readability, Sheets and Slides are synonymous with Sections.

Sections.x.Body

Leaf

This element represents the main textual area of the source file.

For word processing documents, it includes the entire document excluding footnotes, endnotes, headers, footers, and annotations. (Footnote/endnote references are always included automatically in the body. If the template includes footnotes/endnotes, then these references provide a link to the note. Annotation references are not placed in the body unless the template includes annotations, in which case they provide links to the annotations.)

For spreadsheets, it includes the entire sheet.

For graphics, it includes any text that actually appears as text in the file format.

Sections.x.Body.Title

Leaf

For word processing documents, this element is the text marked with the title style. This may be different than the Property.Title. For all other types, this element will be the "name" of the section. For example, if the source file is a spreadsheet, this element will be the name of the sheet as it appears on the spreadsheet application's navigation tabs.

Sections.x.Body.Contents

Leaf

For word processing documents, this is the same as Sections.x.Body.

For all other document types, this is the same as the body minus the title, if a title exists.

Sections.x.Body.Contents. Preface

Leaf

Text between the top of the body and the first heading.

Sections.x.Body.Contents. Headings

Repeatable

Headings are labels in a word processor document inserted by the author to give a document structure. See Section 35.7 for more information on headings. Dynamic Converter reads this structure and, through the use of the Headings element, allows you to access it.

Sections.x.Body.Contents. Headings.x.Body.

Leaf with Leaves and Repeatables below

Under each heading, the structure of a complete document from Body down is repeated. See Section 35.7 for a clearer picture of how these elements map to parts of a document.

Sections.x.Body.Contents. Headings.x.Footnotes

Repeatable with Leaves below

Only footnotes contained in this heading.

Sections.x.Body.Contents. Headings.x.Endnotes

Repeatable with Leaves below

Only endnotes contained in this heading.

Sections.x.Body.Contents. Headings.x.Annotations

Repeatable with Leaves below

Only annotations contained in this heading.

Sections.x.Grids

Repeatable

Only valid for spreadsheet and database formats. This permits access to the "grids" inside a section or sheet of a spreadsheet or database file.

Sections.x.Grids.x.Body

Repeatable

Only valid for spreadsheet and database formats. This permits access to the "grids" inside a section or sheet of a spreadsheet or database file.

Sections.x.Image

Leaf

This element represents a graphic image of the content of the section. It is valid only for bitmap, drawing, chart and presentation sections.

Sections.x.BodyOrImage

Leaf

This element exists to make it easy to build templates that handle a range of section types. In word processing documents, spreadsheets and database sections, BodyOrImage is synonymous with Body. In bitmap, drawing, chart and presentation sections, BodyOrImage is synonymous with Image.

Sections.x.Title

Leaf

Same as Sections.x.Body.Title. For word processing documents, this element is the text marked with the title style. This may be different than the Property.Title. For all other types, this element will be the "name" of the section. For example, if the source file is a spreadsheet, this element will be the name of the sheet as it appears on the spreadsheet application's navigation tabs.

Sections.x.Type

Leaf

This element exists only for query purposes. It is valid only as the ELEMENT of an {## IF...} macro.

This element is normally used only for query purposes, but it may be inserted as well. See Section 35.4.4 for further details on how to use this in an {## IF} macro.

Sections.x.Footnotes

Repeatable

All footnotes.

Sections.x.Footnotes.x.Body

Leaf

The complete footnote reference and content text.

Sections.x.Footnotes.x. Reference

Leaf

The reference number for the footnote.

Sections.x.Footnotes.x. Content

Leaf

The content text for the footnote.

Sections.x.Endnotes

Repeatable

All endnotes.

Sections.x.Endnotes.x.Body

Leaf

The complete endnote reference and content text.

Sections.x.Endnotes.x. Reference

Leaf

The reference number for the endnote.

Sections.x.Endnotes.x. Content

Leaf

The content text for the endnote.

Sections.x.Annotations

Repeatable

All annotations.

Sections.x.Annotations.x. Body

Leaf

The complete annotation reference and content text.

Sections.x.Annotations.x. Reference

Leaf

The reference text for the annotation.

Sections.x.Annotations.x. Content

Leaf

The content text for the annotation.

Sections.x.Slidenotes

Repeatable

All slide notes.

Please note that converting the slide notes will slow down the conversion process for PowerPoint files.

Sections.x.Slidenotes.x.Body

Leaf

The notes for the current slide.

It is recommended that you write slide notes at the end of the output file for performance reasons (PowerPoint files keep slide notes at the end of the file, not next to each slide). Not doing so will slow conversion, as the technology will be forced to perform excessive seeking in the input file.

Sections.x.Headers

Repeatable

All headers.

Sections.x.Headers.x.Body

Leaf

Text of the header.

Sections.x.Footers

Repeatable

All footers.

Sections.x.Footers.x.Body

Leaf

Text of the footer.

Pragma.Charset

Leaf

The HTML text string associated with the character set of the characters that Dynamic Converter is generating. In order for Dynamic Converter to correctly code the character set into the HTML it generates, all templates should include a META tag that uses the {## INSERT} macro as follows.

<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset={## INSERT ELEMENT=pragma.charset}">

If the template does not include this line, the user will have to manually select the correct character set in their browser.

Pragma.SourceFileName

Leaf

The name of the source document being converted. Note that this does NOT include the path name.

Pragma.CSSFile

Leaf

This element is used to insert the name of the Cascading Style Sheet (CSS) file into HTML documents. This name is typically used in conjunction with an HTML <LINK> tag to reference styles contained in the CSS file generated by Dynamic Converter.

When used with the {## INSERT} macro, this pragma will generate the URL of the CSS file that is created. This macro must be used with {## INSERT} inside every template file that inserts contents of the source file and when the selected HTML flavor supports CSS. The CSS file will only be created if the selected HTML flavor supports CSS.

When used with the {## IF} macro, the conditional will be true if the selected HTML flavor supports Cascading Style Sheets.

If CSS is required for the output, {## IF element=pragma.embeddedcss} or {## IF element=pragma.cssfile} must be used. However, Dynamic Converter does not differentiate between the two, as the choice of using embedded CSS vs. external CSS is your decision and you may even wish to mix the two in the output.

An example of how to use this pragma that works when exporting either CSS or non-CSS flavors of HTML would be as follows:

{## IF ELEMENT=Pragma.CSSFile}
    <LINK REL=STYLESHEET
   HREF="{## INSERT
   ELEMENT=Pragma.CSSFile}">
    </LINK>
{## /IF}

Pragma.EmbeddedCSS

Leaf

This element is used to insert CSS style definitions in a single block in the <HEAD> of the document.

When used with the {## INSERT} macro, this pragma will insert the block of CSS style definitions needed for use later in the file. This macro must be used inside every output HTML file where {## INSERT} is used to insert document content.

When used with the {## IF} macro, the conditional will be true if the selected HTML flavor supports CSS.

If CSS is required for the output, {## IF element=pragma.embeddedcss} or {## IF element=pragma.cssfile} must be used. However, Dynamic Converter does not differentiate between the two, as the choice of using embedded CSS vs. external CSS is your decision and you may even wish to mix the two in the output.

If a style is used anywhere in the input document, that style will show up in the embedded CSS generated for all the output HTML files generated for the input file. Consider a template that splits its output into multiple HTML files. In this example, the input file contains the "MyStyle" style. It does not matter if during the conversion only one output HTML file actually references the "MyStyle" style. The "MyStyle" style definition will still show up in the embedded CSS for all the output files, including those files that never reference this style.
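By analogy with the Pragma.CSSFile example, a sketch of how Pragma.EmbeddedCSS is typically placed in the <HEAD> of the template:

{## IF ELEMENT=Pragma.EmbeddedCSS}
<style type="text/css">
{## INSERT ELEMENT=Pragma.EmbeddedCSS}
</style>
{## /IF}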

Pragma.JsFile

Leaf

This element is used to insert the name of the JavaScript file into HTML documents. This name is typically used in conjunction with an HTML <SCRIPT> tag to reference JavaScript contained in the .js file generated by HTML Export.

When used with the {## INSERT} macro, this pragma will generate the URL of the JavaScript file that is created. This macro must be used with {## INSERT} inside every template file that inserts contents of the source file when:

  • The selected HTML flavor supports JavaScript.

  • The javaScriptTabs option has been set to true.

The JavaScript file will only be created if the selected HTML flavor supports JavaScript.

When used with the {## IF} macro, the conditional will depend upon whether the selected HTML flavor supports JavaScript or not.
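Mirroring the Pragma.CSSFile example, a sketch that references the generated JavaScript file only when one will exist:

{## IF ELEMENT=Pragma.JsFile}
<script type="text/javascript" src="{## INSERT ELEMENT=Pragma.JsFile}"></script>
{## /IF}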


35.3 Indexes

Repeatable nodes have an associated index variable that has a current value at any given time in the export process. For elements that contain repeatable nodes as part of their paths, the instance of the repeatable element must be specified by using a number or one of the index variable keywords.

This section covers the following topics:

35.3.1 Index Variable Keywords

The possible values for this index (referred to as 'x' in element definitions; see Section 35.2.4) are as follows:

35.3.1.1 Whole Number

For numeric values, the number is simply inserted as another node in the path.


Note:

Dynamic Converter indexes begin counting with 1 (not 0).


For example, Slides.1.Image references the first slide in a presentation and Footnotes.2.Body references the second footnote in a document.

Elements should not be explicitly referenced unless they are guaranteed to exist in every document to which the template is applied. For example, referencing Sections.4.Body may result in unexpected behavior in documents that have fewer than four sections.

Requesting a non-existent element will not cause an error in Dynamic Converter. The insertion will just be ignored. However, if other HTML surrounding the insertion depends on the results of the insert, the output may be invalid HTML.
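To guard against this, an insertion of a possibly missing element can be wrapped in an {## IF} test, for example:

{## IF ELEMENT=Sections.4.Body}
{## INSERT ELEMENT=Sections.4.Body}
{## ELSE}
<p>This document has fewer than four sections.</p>
{## /IF}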

35.3.1.2 Current, Next, Previous, First, and Last

The 'current', 'next', 'previous', 'first', and 'last' keywords are fairly self-explanatory. When the script template is processed, these variables are replaced with the appropriate index value. For example, Slides.Current.Image references the current slide and Slides.Next.Image refers to the next slide.

Unlike in earlier versions of Dynamic Converter, 'next' and 'previous' do not change the value of the index. As a result, the only places where the index is changed are inside a {## REPEAT} loop and as the result of a {## LINK} statement.

{## REPEAT…}

The initial value of the index variable for any given repeatable element typically is 1. For {## REPEAT} loops, the index is incremented with each iteration. Termination of a {## REPEAT} loop resets the counter to its initial value; more precisely, the scope of the index is the repeat loop.

The following template fragment uses current in a repeat loop, which outputs all the footnotes in the source file:

{## REPEAT element=footnotes}
{## INSERT element=footnotes.current.body}
{## /REPEAT}

When a template containing a repeat statement is the target of a {## link} statement that specifies the element to be used as the repeat element, the initial value of the index will be determined by the {## LINK} processing.

{## LINK…}

The {## LINK} statement does not affect the index variable in the context of the current template. The {## LINK} statement can only affect index variables when both an element and a template are specified. In this case only the index variables in the target for the specified element are affected.

If the element specified in the {## LINK} contains a next or previous keyword, the value of current in the target file will be affected. The initial value of current in the target will be the value of (current in the source)+1 for next. Similarly, previous has the effect of decrementing the value of current.

The following example uses a single template file and the {## link} macro to create a set of HTML files, one for each slide in a presentation. The {## link} does the dual job of driving the generation of the HTML files and providing a "next" link for navigation. Notice the use of the next keyword in the {## if} macro that checks to see if there is a next slide:

{## unit}
<html>
<body>
<!-- insert the current slide -->
{## insert element=slides.current.image width=300}
<hr />
<!-- Is there a next slide? -->
{## if element=slides.next.image}
    <!-- If yes, generate a URL to an HTML file containing
        the next slide. The HTML file is generated using
        the current template (because there is no template
        attribute). While generating the new HTML file, the
        value of the index on slides will be its current
        value plus 1 once control returns to this template,
        the value of the index on slides is unchanged. -->
   <p><a href="{## link element=
   slides.next.image}">Next</a></p>
{## else}
    <!-- If no, create a link to the HTML containing the
        first slide. -->
    <p><a href="{## link element=
    slides.1.image}">First</a></p>
{## /if}
</body>
</html>
{## /unit}

35.3.1.3 Up, Down, Left, and Right

In addition to the Current, Next, Previous, First, and Last index variable keywords, repeatable grid elements have four additional keywords:

  • Up

  • Down

  • Left

  • Right

These keywords may only appear immediately after the Grids node in the document tree. For example, Grids.Up.Body is legal, but Sections.Left.Grids.1.Body is not. Use of these keywords is otherwise self-explanatory.

Note, too, that individual grids are only addressable relative to each other. In other words, while it is possible to specify the "up" grid, it is not possible to arbitrarily address a grid directly by its coordinates (for example, grid "5, 7").

35.3.2 Creating a Set of HTML Files for Each Slide in a Presentation

The following example uses a single script template file and the {## LINK...} macro to create a set of HTML files, one for each slide in a presentation. The {## LINK...} does the dual job of driving the generation of the HTML files and providing a "next" link for navigation. Notice the use of the Next keyword in the {## IF...} macro that checks to see if there is a next slide.

<html>
<body>
<!-- Insert the current slide -->
{## INSERT ELEMENT=Slides.Current.Image WIDTH=300}
<hr />
<!-- Is there a next slide? -->
{## IF ELEMENT=Slides.Next.Image}
<!-- If yes, generate a URL to an HTML file containing the next slide. The HTML file is generated using the current template (because there is no TEMPLATE attribute). While generating the new HTML file, the value of the index on Slides is its current value plus 1 once control returns to this template, the value of the index on Slides is unchanged. -->
<p><a href="{## LINK ELEMENT=Slides.Next.Image}">Next</a></p>
{## ELSE}
<!-- If no, create a link to the HTML containing the first slide. -->
<p><a href="{## LINK ELEMENT=Slides.1.Image}">First</a></p>
{## /IF}
</body>
</html>

35.4 Macros

This section covers the following topics:

35.4.1 About Macros

Macros are commands to Dynamic Converter within script templates. Despite their casual similarity to HTML tags, they are not bound by any of the rules that tags would usually follow inside an HTML file. Macros may appear anywhere in the script template file, except inside another macro.

In the documentation and examples, the pieces of a macro are always shown delimited by spaces. However, semicolons may also delimit them. This option was added to accommodate certain HTML editors in which URLs entered into dialog boxes may not contain unquoted spaces, which made it difficult or impossible to use the {## LINK} macro in those situations.

For example, {## INSERT ELEMENT=Sections.1.Body} may also be written as {##;INSERT;ELEMENT=Sections.1.Body}.

Note that template macro string parameters and options support sprintf style escaped characters. This means that characters such as \x22, \r and %% are supported. Also note that most template attribute values may be quoted. The exception is template element strings, which may not be quoted at this time.

For example:

{## ANCHOR aref="next" format="<a href=\"%url\">Next</a><br/>\r\n"}

35.4.2 Units: {## UNIT}, {## HEADER}, and {## FOOTER}

If a template file is going to make use of the {## UNIT} macro at all, this macro must be the first macro in the template file. It delimits the beginning and end of each unit. Unit boundaries are used when determining where to break the document when breaking based on content size (see Section 35.8).

A unit consists of a header, a footer (both of which are optional), and a body (which may be empty). To ensure that the header is the first item in the template and the footer is the last item, text between the {## UNIT} tag and the {## HEADER} tag will be ignored, as will text between the {## /FOOTER} tag and the {## /UNIT} tag, including whitespace. The header and footer of a unit will be output in every page containing that unit, enclosing that portion of the unit's body that is able to fit in a particular page. The entire template is a unit that may contain additional units.

Syntax

{## UNIT [BREAK]}
    [{## HEADER}
        any HTML
     {## /HEADER}]

        any HTML

    [{## FOOTER}
        any HTML
    {## /FOOTER}]
{## /UNIT}
Attributes

BREAK

This optional attribute forces a page break before inserting the unit contents unless doing so would cause the body of the first page to be empty. One situation where this attribute would be useful would be to force a page break between each section of a document, perhaps to get one presentation slide per page.

The {## UNIT} macro and its BREAK attribute are ignored when the SCCOPT_EX_PAGESIZE option is set to zero.
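For example, a sketch that forces each slide of a presentation onto its own output page (effective only when the page size option is nonzero):

{## REPEAT ELEMENT=Slides}
{## UNIT BREAK}
{## INSERT ELEMENT=Slides.Current.Image WIDTH=300}
{## /UNIT}
{## /REPEAT}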

It is sometimes important to make sure that a break does not occur in the midst of text that is intended to be on the same page. To prevent breaks like this from occurring, enclose the text that should be kept on the same page inside a nested {## UNIT}{## HEADER} pair. For example, to prevent a page break from occurring while a link is being created, the template author might write something like the following:

{## unit}{## header}
<a href="{## link element=sections.current.body}">Link</a>
{## /header}{## /unit}

35.4.3 Insert Element: {## INSERT}

This macro inserts an element of the source file into the output file at the current location.

Syntax

{## INSERT [ELEMENT=element [WIDTH=width] [HEIGHT=height] [SUPPRESS=suppress] [TRUNCATE=truncate]] | [NUMBER=number] [URLENCODE]}
Attributes

ELEMENT

This attribute describes which part of the source file should be placed in the output file at the location of the macro. See Section 35.2.4 for the possible values for this attribute. If the value of this attribute is not in the element tree, Dynamic Converter considers it to be a custom element and the EX_CALLBACK_ID_PROCESSELEMENTSTR callback is called.

Example: {## INSERT ELEMENT=Sections.1.Body}

WIDTH

This optional attribute defines the width in pixels of the element being inserted. It is currently only valid for the Image element. If the WIDTH attribute is not present but the HEIGHT attribute is, the width of the image is calculated automatically based on the shape of the element. If neither the WIDTH and HEIGHT attributes are present, the image's original dimensions are used. If the image's original dimensions are unknown, the defaults assume a HEIGHT and WIDTH of 200.

Example: {## INSERT ELEMENT=Slides.1.Image WIDTH=400}

HEIGHT

This optional attribute defines the height in pixels of the element being inserted. It is currently only valid for the Image element. If the HEIGHT attribute is not present, but the WIDTH attribute is, the height of the image is calculated automatically based on the shape of the element.

Example: {## INSERT ELEMENT=Slides.1.Image HEIGHT=400}

SUPPRESS

This optional attribute allows certain things to be suppressed from the output. This is very useful if elements need to be inserted in contexts where HTML is not appropriate, such as passing information to Java applets, ActiveX controls, or populating parts of a form. Possible values are as follows:

TAGS: All HTML tags are suppressed from the output of the element; however, the text may still contain HTML character codes like &quot; or &#123;.

For non-embedded graphics such as presentations and graphic files, the URL of the converted graphic will not be suppressed. The <img> tag that would normally surround the URL is suppressed, however.

For embedded graphics such as those found in word processing sections and spreadsheets, both the URL and the <IMG> tag are suppressed. Since there would be no way to access the resulting converted embedded graphic, conversion of the graphic is not done.

Example:

<form method="POST">
<input type="text" size="20" name="Author"
value="{## INSERT ELEMENT=Property.Author SUPPRESS=TAGS}">
</form>

BOOKMARKS: Turns off all bookmarks in the inserted section. Bookmarks automatically precede many inserted elements so that other template elements may link to them. SUPPRESS=BOOKMARKS is provided to prevent problems with nested <a> tags. Note that this represents a subset of the suppression behavior provided by SUPPRESS=TAGS.

INVALIDXMLTAGCHARS: Drops from the output all characters that are not allowed in XML tag names. This is designed to allow template authors to {## INSERT} custom document property names inside angle brackets ("<" and ">") to create XML tags. Most characters in Unicode and its subset character sets may be used as part of XML tag names. Illegal tag characters include "control" characters such as line feed and carriage return. In addition there are special rules for what characters can be the first character in a tag name.

Example:

{## REPEAT ELEMENT=Property.Others}
<{## INSERT ELEMENT=Property.Others.Current.Name SUPPRESS=InvalidXMLTagChars}>
{## INSERT ELEMENT=Property.Others.Current.Body}
</{## INSERT ELEMENT=Property.Others.Current.Name SUPPRESS=InvalidXMLTagChars}>
{## /REPEAT}

produces something similar to the following:

<MyProperty>PropertyValue</MyProperty>

TRUNCATE

When set, this attribute forces a maximum length in characters for the inserted element. This allows elements to be truncated rather than broken across pages when the page size option is in use. Truncated elements will end with the truncation identifier which is "…" (three periods). All elements that have a truncate value will be no more than the specified number of characters in length including the length of the truncation identifier. In Dynamic Converter, elements are inserted in their entirety if no truncation size is specified. The value of this attribute must be greater than or equal to five characters.

An example of a situation where element truncation is useful is to limit the size of entries when building a table of contents.
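A sketch of such a table of contents, with each entry truncated to 40 characters:

{## REPEAT ELEMENT=Sections}
<p><a href="{## LINK ELEMENT=Sections.Current.Body}">
{## INSERT ELEMENT=Sections.Current.Title TRUNCATE=40}</a></p>
{## /REPEAT}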

The TRUNCATE attribute implies suppression of tags for the insert. It also auto applies the no source formatting option for the insert.

Note that the TRUNCATE attribute cannot be used with custom elements, because the custom element definition precludes the existence of any other attributes to {## INSERT}.

The TRUNCATE attribute has three special aspects to its behavior when grids are being inserted:

When truncation is in effect, the truncation size refers to the number of characters of content in each cell, not the number of characters in the grid as a whole.

While truncation normally causes all markup tags to be suppressed, when grids are in use, the table tags are retained (assuming that the output flavor supports tables).

Users are reminded that only one grid size may be selected for each spreadsheet sheet or database inserted. The size of the grid will be based in part on the TRUNCATE value if one or both of the grid dimensions are not specified and the SCCOPT_EX_PAGESIZE option is in use. In this situation, if a grid from a single sheet is inserted in more than one place in the template, and there are differing TRUNCATE values, then the grid dimensions will be based on the largest TRUNCATE value specified.

NUMBER

This attribute allows the developer to retrieve the total instance count or the current index value of any repeatable element. This can be very useful when writing JavaScript, BasicScript, and so on. The following special keywords do not appear in the element tree but can be used as nodes in the special cases described below.

Count and CountB0: When appended to a repeating element and used with the NUMBER attribute, these nodes allow the developer to insert a text representation of the number of instances of the given repeatable element. Count gives the count assuming the first index is 1 and CountB0 gives it assuming the first index is 0.

Example: If a presentation has three slides, the template fragment,

<P>{## INSERT NUMBER=Slides.Count}
<P>{## INSERT NUMBER=Slides.CountB0}

produces the following text:

<P>3
<P>2

Value and ValueB0: When appended to a repeating element and used with the NUMBER attribute, these nodes allow the developer to insert a text representation of the current value of the index of the given repeatable element. Value gives the count assuming the first index is 1 and ValueB0 gives it assuming the first index is 0.

Example: If the current value of the index on Slides is 2, the template fragment,

<P>{## INSERT NUMBER=Slides.Current.Value}
<P>{## INSERT NUMBER=Slides.Current.ValueB0}

produces the following text:

<P>2
<P>1

URLENCODE

This optional attribute causes the inserted element to be URL encoded. As such, it is ignored unless it is specified as part of an insert that contains a file name. The following elements may be URL encoded:

  • pragma.sourcefilename

  • pragma.cssfile

  • pragma.embeddedcss

  • pragma.jsfile

In addition, the following elements will be URL encoded when the section type is "Archive" or "AR":

  • sections.x.fullname

  • sections.x.basename

  • sections.x.body

  • sections.x.title

  • sections.x.reflink

For all other {## INSERT} tags, this attribute is ignored. As such, you should note that Dynamic Converter does not modify any URLs coming out of the input documents being converted. These URLs continue to be passed through as is. This attribute is also ignored if the URL was created using the EX_CALLBACK_ID_CREATENEWFILE callback. Such URLs are assumed to already be URL-encoded.


Inserting Properties

Because of the special ways that properties are used in documents, property strings are inserted into the output HTML a little differently than the way other {## INSERT} macros work.

The property is always inserted as if the SCCOPT_NO_SOURCEFORMATTING option were set. This prevents formatting characters such as new lines from interfering with the property strings.

The property is always inserted as if the script template specified Suppress=Tags. This provides you with maximum control over how property strings are presented.

35.4.4 Conditional: {## IF...}, {## ELSEIF...}, and {## ELSE}

This macro allows an area of the script template to be used based on information about an element of the source file.

Syntax

{## IF ELEMENT=element [CONDITION=Exists|NotExists]
[VALUE=value]}
    any HTML
{## /IF}

or

{## IF ELEMENT=element [[CONDITION=Exists|NotExists] |
[VALUE=value]]}
    any HTML
{## ELSE}
    any HTML
{## /IF}

or

{## IF ELEMENT=element [[CONDITION=Exists|NotExists] |
[VALUE=value]]}
    any HTML
{## ELSEIF ELEMENT=element [[CONDITION=Exists|NotExists] |
[VALUE=value]]}
    any HTML
{## ELSE}
    any HTML
{## /IF}

Note:

Multiple {## ELSEIF} statements may be used after {## IF}. In addition, {## ELSE} is not required when using {## ELSEIF}.


Attributes

ELEMENT

This attribute describes which part of the source file should be tested. See Section 35.2.4 for the possible values for this attribute. If neither the CONDITION nor VALUE attribute exists, the element is tested for existence.

CONDITION

Defines the condition the element is tested for. Possible values are EXISTS and NOTEXISTS.

VALUE

Defines the values the element should be tested against. The VALUE attribute is currently valid only for the Sections.x.Type element for testing of the type of a section of the source file.

Possible values include:

  • ar = archive

  • bm = bitmap

  • ch = chart

  • db = database

  • dr = drawing

  • mm = multimedia

  • pr = presentation

  • ss = spreadsheet

  • wp = word processing document

Examples:

{## if element=property.comment}
  <p><b>Comment property exists</b></p>
{## else}
  <p><i>Comment property does not exist</i></p>
{## /if}
{## if element=sections.1.type value=wp}
  <p><b>The source file is a word processor file</b></p>
{## /if}
{## if element=sections.1.type value=ss}
  <p>Spreadsheet</p>
{## elseif element=sections.1.type value=ar}
  <p>Archive</p>
{## elseif element=sections.1.type value=ch}
  <p>Chart</p>
{## else}
  <p>Not ss, ar, or ch</p>
{## /if}
{## if element=sections.current.type value=pr
    condition=notexists}
    <p>We can do something here for all document types
    other than presentations.</p>
{## else}
  <p>This is used only for presentations.</p>
{## /if}

35.4.5 Loop: {## REPEAT}

This command allows an area of the script template to be repeated, once for each occurrence of an element.

Syntax

{## REPEAT ELEMENT=element [MAXREPS=maxreps] [SORT=sort]}
    any HTML
{## /REPEAT}
Attributes

ELEMENT

This attribute describes what part of the source file should be repeated on. It must be a repeatable element. See Section 35.2.4 for the possible values for this attribute.

Any HTML may be defined between the {## REPEAT...} macro and its closing {## /REPEAT} macro. This HTML is repeated once for each instance of the element specified. In addition, the word Current may be used in any other {##} tag as the element index of the element being repeated. For instance, the following HTML in the template will produce a list of the footnotes in the document.

Example:

<HTML>
<BODY>
<P>Here are the footnotes
{## REPEAT ELEMENT=Footnotes}
<P>{## INSERT ELEMENT=Footnotes.Current.Body}
{## /REPEAT}
<P>No more footnotes
</BODY>
</HTML>

Similarly, the following HTML in the template will insert the names of all the items in an archive:

{## repeat element=sections}
  {## insert element=sections.current.fullname}
{## /repeat}

MAXREPS

This attribute limits the total number of loops the repeat statement may make to the value specified. It is useful for preventing exceptionally large documents from producing an unwieldy amount of output.
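For example, to cap the footnote list at ten entries:

{## REPEAT ELEMENT=Footnotes MAXREPS=10}
<p>{## INSERT ELEMENT=Footnotes.Current.Body}</p>
{## /REPEAT}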

SORT

This optional attribute defines whether to sort the output. It is ignored if the input file is not an archive file. All sorts are done based on the character encoding of the values in the input file. The sorts are also case-insensitive at this time. Valid values of the sort attribute are:

  • fullname: sort by Sections.Current.FullName

  • basename: sort by Sections.Current.BaseName

  • none: no sorting is done. This is the default.
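For example, to list the contents of an archive sorted by full name:

{## REPEAT ELEMENT=Sections SORT=fullname}
<p>{## INSERT ELEMENT=Sections.Current.FullName}</p>
{## /REPEAT}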


35.4.6 Linking With Structured Breaking: {## LINK}

This macro generates a relative URL to a piece of the document produced by Dynamic Converter. Normally this URL would then be encapsulated by the template with HTML anchor tags to create a link. {## LINK} is particularly powerful when used within a {## REPEAT} loop.

Syntax

{## LINK ELEMENT=element [TOP]}

or

{## LINK TEMPLATE=template}

or

{## LINK ELEMENT=element TEMPLATE=template [TOP]}
AttributeDescription

ELEMENT

Defines the element that is the target for the link. The URL that the {## LINK...} macro generates will point to the first instance of this element in the output file. If this attribute is not present, the resulting URL will link to any output file that was produced with the specified script template. If such a file does not exist, the specified script template is used to generate a file.

Remember that each element has one or more index values, some of which may be variables. An example of this type of index variable is the "current" in Sections.Current.Body. Use of {## LINK} affects the value of those index variables, which may cause subtle side effects in the behavior of the linked template file.

For a description of how {## LINK} affects the index of inserted elements, see Section 35.3.

TEMPLATE

The name of a template file, which must exist in the same directory as the original template file. If this attribute is not present, the current template will be used. If an element was specified in the {## LINK}, then the template must contain an {## INSERT} statement using that element.

It is important to note that while the template language is normally case-insensitive, the case of the template file names specified here is important. The file name specified for the template is passed as is to the operating system. On operating systems such as UNIX, if the wrong case is given for the template file name, the template file will not be found and an error will be returned.

TOP

This attribute is only meaningful if an element is specified in the {## LINK} command. When this attribute exists, the generated URL will not contain a bookmark, and therefore the resulting link will always jump to the top of the HTML file containing the specified element. This is useful if the top of the script template has navigation or other information that the developer would like the user to see.
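For example, a navigation link that always lands at the top of the generated page rather than at the element's bookmark:

<p><a href="{## LINK ELEMENT=Sections.Next.Body TOP}">Next Section</a></p>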


35.4.6.1 {## LINK} Usage Scenarios

Using the first syntax shown at the beginning of this section, a URL for the element bookmark is inserted in the document. Normally this syntax is used to create intradocument links to aid navigation. An example would be creating a link to the next section of the document.

In the second syntax, a URL is created to an output file generated by the specified template. This template is run on the same source document, but may extract different parts of the document. Normally, in this syntax, the "main" template contains a link to a second HTML file. This second file is generated using the template specified by the {## LINK} command and contains other document elements. As an example, the "main" template could produce a file containing the body of the document and a link to the second HTML file, which contains the footnotes and endnotes.

The third and most powerful syntax also produces the URL of a file generated by the specified template. This template is then expected to contain an insertion of the specified element. Normally this syntax is used with repeatable elements. It allows the author to generate multiple output files with sequential pieces of the document. As such it provides a way to break large documents up into smaller, more readable pieces. An example of where this syntax would be used is a template that generates a "table of contents" in one HTML file (perhaps a separate HTML frame). The entries in the table are then links to other HTML files generated by different templates.

Note that a {## LINK} statement which specifies a template does not always result in a new file being created. New files are only created if the target of the link does not exist yet. So if for example two {## LINK} statements specify the same element and template, only one HTML file is produced and the same URL will be used by both {## LINK} statements.

35.4.6.2 {## LINK} Archive File Example

The following template generates a list of links to all the extracted and converted files from the source archive file (represented by decompressedFile in the following example):

{## repeat element=sections}
   <p><a href="{## link element=sections.current.decompressedFile}">
   {## insert Element=sections.current.fullname}</a></p>
{## /repeat}

35.4.6.3 {## LINK} Presentation File Example

The following example (template.htm) uses the first syntax to generate a set of HTML files, one for each slide in a presentation. Each slide will include links to the previous and next slides and the first slide. Note the use of {## IF} macros so the first and last slides do not have Previous and Next links respectively:

template.htm
    <html>
    <body>
    {## insert element=slides.current.image width=300}
    <hr />
    {## if element=slides.previous.image}
       <p><a href={## link element=slides.previous.image}>
    previous</a></p>
    {## /if}
    {## if element=slides.next.image}
       <p><a href={## link element=
       slides.next.image}>Next</a></p>
    {## /if}
    </body>
    </html>

Due to the side effects of {## LINK} using the element attribute, there can be some confusion over what values "current," "previous" and "next" have when each {## LINK} is processed. To better illustrate how this template works, consider running it on a presentation that contains three slides:

First Output File

Since no template is specified in the {## LINK} statements, template.htm is (re)used as the template for all {## LINK} statements. For the first slide, nothing interesting happens until slides.next is encountered. Since slides.current is 1 in this case, slides.next refers to slides.2 and the {## LINK} is performed on slides.2.image. This {## LINK} fills in the anchor tag with the URL for the output file containing the second slide. Since no file containing slides.2 exists, {## LINK} opens a new file.

Second Output File

For the second slide the template is rerun. slides.current now refers to slides.2, slides.previous refers to slides.1 and slides.next refers to slides.3. The {## INSERT} statement will insert the second slide.

The {## IF} statement referring to slides.previous succeeds. Since the file containing slides.1 already exists, no additional file is created. The anchor tag will be filled in with the URL for the first output file.

The {## IF} statement referring to slides.next also succeeds and the anchor tag will be filled in with the URL for the output file containing the third slide. Since no file containing slides.3 exists, {## LINK} opens a new file.

Third Output File

For the third slide the template is rerun. slides.current now refers to slides.3 and slides.previous refers to slides.2. slides.next refers to slides.4, which does not exist. The {## INSERT} statement will insert the third slide.

The {## IF} statement referring to slides.previous succeeds. Since the file containing slides.2 already exists, no additional file is created. The anchor tag will be filled in with the URL for the second output file.

The {## IF} statement referring to slides.next fails. At this point processing is essentially complete.

35.4.7 Linking With Content Size Breaking: {## ANCHOR}

This macro generates a relative URL to a piece of the document produced by Dynamic Converter when doing document breaking based on content size.

Syntax

{## ANCHOR AREF=type [STEP=stepval] FORMAT="anchorfmt" [ALTLINK="element"] [ALTTEXT="text"]}
Attributes

AREF

Indicates the relation of the target of the link to the current file. Allowable values for this attribute are:

  • InsertStart: links to first page of the inserted element

  • InsertEnd: links to last page of the inserted element

  • Next: links to next page in the inserted element

  • Prev: links to previous page in the inserted element

  • FirstFile: links to first page created for the entire document

  • LastFile: links to last page created for the entire document

STEP

This attribute is used to insert a link to "fast forward/rewind" through the output pages. This attribute may only be used if AREF is "next" or "prev." It is specified as a non-zero positive integer. For example, to insert a link to skip ahead five pages in a document, the following statement could be used:

{## anchor aref="next" step="5" format="<p><a href=\"%url\">Next</a></p>"}

If not specified, the default value of the STEP attribute is one (1), which corresponds to the next/previous page. This attribute has no meaning when aref equals "insertstart," "insertend," "firstfile," or "lastfile."

FORMAT

This is an sprintf-style format string specifying the text to output as the link. Dynamic Converter substitutes the target URL for the %url format specifier in the format string. For example:

{## anchor aref="next" format="<a href=\"%url\">Next</a><br/>\r\n"}

ALTLINK

An attribute used to specify the target of the anchor if it cannot be resolved based on the anchor type. For example, the final file of a breakable element has no "next" file, and thus would resolve to nothing. However, if the altlink attribute is specified, the anchor will be generated using a URL to the first file found containing the specified element.

Note that no EX_CALLBACK_ID_ALTLINK callback will be made if an ALTLINK attribute is specified in the {## ANCHOR} statement.

For example:

{## anchor aref=next format="<a href=\"%url\">Next</a>" altlink=headings.next.body}

ALTTEXT

Text to be output if the anchor cannot be resolved. If this attribute is not specified, no text will be output if the anchor target does not exist. For example:

{## anchor aref=next format="<a href=\"%url\">Next</a>" alttext="Next"}


35.4.8 Comment Put in the Output File: {## IGNORE}

This macro causes {##} statements in an area of the template file to be ignored by the template parser. Any text between the {## IGNORE} and {## /IGNORE} tags will be written to the output file as-is. This macro allows {##} statements in an area of the template to be commented out for debugging purposes, or to actually write out the text of another {##} macro. However, the browser will parse any HTML tags inside the ignored block and the text will be formatted accordingly. This macro can ignore all {##} macros except for an {## /IGNORE} macro. No escape sequence has been implemented for this purpose. As a result, {## IGNORE} statements cannot be nested. If they are nested, a run time template parser error will occur.

Syntax

{## IGNORE}
    any HTML or other {##} macros
{## /IGNORE}

Note:

To fully comment out a section of the script template, surround the {## IGNORE} statements with HTML comments, for example:

<!--{## Ignore} (everything between here and the end HTML comment is commented out) {## /Ignore}-->


35.4.9 Comment Not Put in the Output File: {## COMMENT}

The {## COMMENT} macro allows the template writer to include comments in the template without including them in the final output files. {## COMMENT} provides the functionality of {## ignore}, but the text inside the {## COMMENT} block is not rendered to the output files and is not included in page size calculations. Like {## IGNORE}, {## COMMENT} macros may not be nested.

Syntax

{## COMMENT}
   any HTML or other {##} macros
{## /COMMENT}
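For example, a template author might leave a maintenance note that never reaches the output files (the note text here is purely illustrative):

```
{## COMMENT}
Maintainer note: the copyright year in the footer below
must be updated each January.
{## /COMMENT}
```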

35.4.10 Including Other Templates: {## INCLUDE}

This command allows other templates to be inserted into the current template. It works in a manner similar to the C/C++ #include directive.

Syntax

{## INCLUDE TEMPLATE=template}
Attributes

TEMPLATE

This attribute gives the name of the template to insert.
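For example, a shared navigation fragment could be pulled into several page templates with a single statement (navbar.htm is an assumed file name for this sketch):

```
{## INCLUDE TEMPLATE=navbar.htm}
```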


35.4.11 Setting Options Within the Template: {## OPTION}

This macro sets an option to a given value. All {## OPTION} statements are executed in the order in which they are encountered. Remember when using this template macro that the {## UNIT} tag must be the first template macro in any template.

Options set in the template have template scope. This means that, for example, if a {## LINK} macro references another template, options in the referenced template are not affected by the option settings from the parent template. Similarly, when the files contained in an archive file are converted, Export recursively calls itself to perform the exports of the child documents in the archive. Each child document is converted using a copy of the parent template, and that copy does not inherit the option values from the parent template.

Options set using {## OPTION} in the template are not inherited by the dynamic conversions performed on files within archives. Each child conversion receives a fresh copy of all option values as originally set with DASetOption.

Remember that setting an option in the template overrides any option value set by an application within the scope of the template.

Syntax

{## OPTION OPTION=value}
Attributes

OPTION

See the table below for the supported options and their values.


Supported Options and Values

Options

SCCOPT_GRAPHIC_TYPE

This option sets the format of the graphics produced by Dynamic Converter when it converts document embeddings.

The supported values are:

  • FI_GIF: GIF graphics

  • FI_JPEGFIF: JPEG graphics

  • FI_PNG: PNG graphics

  • FI_NONE: no graphic conversion

The default is FI_JPEGFIF.

SCCOPT_GIF_INTERLACED

This option specifies whether GIF output should be interlaced or non-interlaced. Interlaced GIFs are useful when graphics are to be downloaded over slow Internet connections. They allow the browser to begin to render a low-resolution view of the graphic quickly and then increase the quality of the image as it is received. There is no real penalty for using interlaced graphics.

The supported values are:

  • 0 or FALSE (i.e., non-interlaced)

  • 1 or TRUE (i.e., interlaced)

SCCOPT_JPEG_QUALITY

This option sets the lossiness of JPEG compression. The value should be between 1 and 100 (percent), with 100 giving the highest quality but the least compression, and 1 the lowest quality but the most compression.

SCCOPT_GRAPHIC_SIZEMETHOD

This option determines the method used to size graphics. You can choose among three methods, each of which involves some degree of trade off between the quality of the resulting image and speed of conversion:

  • SCCGRAPHIC_QUICKSIZING

  • SCCGRAPHIC_SMOOTHSIZING

  • SCCGRAPHIC_SMOOTHGRAYSCALESIZING

Using the quick sizing option results in the fastest conversion of color graphics, though the quality of the converted graphic will be somewhat degraded.

The smooth sizing option results in a more accurate representation of the original graphic, as it uses antialiasing. Antialiased images may appear smoother and can be easier to read, but rendering when this option is set will require additional processing time.

Please note that the smooth sizing option does not work on images which have a width or height of more than 4,096 pixels.

The grayscale-only option also uses antialiasing, but only for grayscale graphics, and the quick sizing option for any color graphics.

SCCOPT_GRAPHIC_OUTPUTDPI

This option specifies the output graphics device's resolution in dots per inch (dpi), and only applies to images whose size is specified in physical units (in/cm). For example, consider a 1-inch square, 100-dpi graphic that is to be rendered on a 50-dpi device (with this option set to '50'). In this case, the size of the resulting WBMP, TIFF, BMP, JPEG, GIF, or PNG will be 50 x 50 pixels.

The valid values are any integer between 0 and 2400 (dpi).

SCCOPT_GRAPHIC_SIZELIMIT

This option sets the maximum size of the exported graphic (in pixels). It may be used to prevent inordinately large graphics from being converted to equally cumbersome output files, thus preventing bandwidth waste.

This option takes precedence over all other options and settings that affect the size of a converted graphic.

SCCOPT_GRAPHIC_WIDTHLIMIT

This option sets a hard limit on how wide (in pixels) a converted graphic may be. Any images wider than this limit are resized to match it. Regardless of whether the SCCOPT_GRAPHIC_HEIGHTLIMIT option is set, resized images preserve their original aspect ratio. Images narrower than this limit are not enlarged.

SCCOPT_GRAPHIC_HEIGHTLIMIT

This option sets a hard limit on how high (in pixels) a converted graphic may be. Any images higher than this limit are resized to match it. Regardless of whether the SCCOPT_GRAPHIC_WIDTHLIMIT option is set, resized images preserve their original aspect ratio. Images shorter than this limit are not enlarged.

SCCOPT_EX_FONTFLAGS

This option is used to turn off specified font-related markup in the output. Naturally, if the requested output flavor or other option settings prevent markup of the specified type from being written, this option cannot be used to turn it back on. However, the size, color, and font face of characters may each be suppressed by bitwise OR-ing together the appropriate combination of the following flags:

  • SUPPRESS_SIZE

  • SUPPRESS_COLOR

  • SUPPRESS_SIZECOLOR

  • SUPPRESS_FACE

  • SUPPRESS_SIZEFACE

  • SUPPRESS_COLORFACE

  • SUPPRESS_ALL

  • SUPPRESS_NONE

SCCOPT_EX_GRIDROWS

This option specifies the number of rows that each template "grid" (applicable only to spreadsheet or database files) should contain.

Setting this option to zero ("0") means that no limit is placed on the number of rows in the grid.

SCCOPT_EX_GRIDCOLS

This option specifies the number of columns that each template "grid" (applicable only to spreadsheet or database files) should contain.

Setting this option to zero ("0") means that no limit is placed on the number of columns in the grid.

SCCOPT_EX_GRIDADVANCE

This option specifies how the "previous" and "next" relationships will work between grids.

  • ACROSS: The input spreadsheet or database is traversed by rows.

  • DOWN: The input spreadsheet or database is traversed by columns.

This option has no effect on up/down or left/right navigation.

SCCOPT_EX_GRIDWRAP

This option specifies how the "previous" and "next" relationships work between grids at the edges of the spreadsheet or database.

Consider a spreadsheet that has been broken into 9 grids by HTML Export as follows:

Sample of spreadsheet broken into 9 grids

If this option is set to TRUE, then the Grids.Next.Body value after Grid 3 will be Grid 4. Likewise, the Grids.Previous.Body value before Grid 4 will be Grid 3.

If this option is set to FALSE, then the Grids.Next.Body after Grid 3 will not exist as far as template navigation is concerned. Likewise, the Grids.Previous.Body before Grid 4 will not exist as far as template navigation is concerned.

In other words, this option specifies whether the "previous" and "next" relationships "wrap" at the edges of the spreadsheet or database.
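Putting this together, a template might set several of these options near the top of the file (after any {## UNIT} tag, which must come first). The following is a minimal sketch using option names and values from the table above; the particular values chosen are illustrative:

```
{## option SCCOPT_GRAPHIC_TYPE=FI_PNG}
{## option SCCOPT_JPEG_QUALITY=80}
{## option SCCOPT_EX_GRIDROWS=20}
{## option SCCOPT_EX_GRIDCOLS=10}
```

Because options set this way have template scope, these settings apply to this template (and templates it includes) but not to templates referenced through {## LINK}.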


35.4.12 Copying Files: {## COPY}

The {## COPY} macro is used to copy extra, static files into the output directory along with the output from the converted document. For example, if you have added a company logo that was not in the original input document, {## COPY} can be used to make it a part of the converted output document. Other examples include graphics used to mimic "buttons" for navigation, outside CSS files, or a piece of Java code to be run.

Syntax

{## COPY FILE=file}
Attributes

FILE

This is the name of the file to be copied. If a relative path name is specified as part of the file, then it must be relative to the directory containing the root template file.

For example:

{## COPY FILE=uparrow.gif}


The {## COPY} macro may occur anywhere inside a template. If the {## COPY} is inside an {## IF}, it is only executed if the condition is TRUE. In {## REPEAT} loops, the {## COPY} is only performed if the loop executes one or more times. In addition, if the {## REPEAT} loops more than once, Dynamic Converter detects this and executes the {## COPY} only once.

As its name suggests, the {## COPY} macro is a straight file copy. Therefore, no conversions are performed as part of the copy. For example, graphics formats are not changed and graphics are not resized. Template authors should also remember to use {## GRAPHIC} when graphics and other files are copied so that space will be created for the external graphic in the text buffer size calculations.

Since the only action Dynamic Converter takes is to copy the requested file, it is up to the template author to make use of the copied file at another point in the template. For example, a graphic file may be copied and then the template can use an <img> tag which references the copied graphic. The following snippet of template code would do this:

{## copy FILE=Picture.JPG}
{## graphic PATH=Picture.JPG}
<img src="Picture.JPG">

Note:

If the file copy fails, Dynamic Converter will continue and no error will be reported.


35.4.13 Deprecated Template Macros

Earlier releases of Dynamic Converter used a different macro syntax, in which template macros began with "inso" rather than {##}. In addition, some words that were abbreviated must now be spelled out ("insert" instead of "ins"). The old syntax will continue to be supported for the foreseeable future; however, it is deprecated.

The old Inso macros and their new equivalents are as follows:

  • {insoins} is now {## insert}

  • {insoif} ... {/insoif} is now {## if} ... {## /if}

  • {insoelseif} ... {/insoelseif} is now {## elseif} ... {## /elseif}

  • {insoelse} ... {/insoelse} is now {## else} ... {## /else}

  • {insoignore} ... {/insoignore} is now {## ignore} ... {## /ignore}

  • {insolink} is now {## link}

  • {insorep} ... {/insorep} is now {## repeat} ... {## /repeat}

You cannot mix old-style Inso macros with the new {##} macro style in the same template.

New and future features of Dynamic Converter will not support the old syntax. For example, the old syntax has not been extended to include support for the new {## UNIT} macros.

35.5 Pragmas

Pragmas provide access to certain document elements that are not logically part of the element tree. The following pragmas are supported:

35.5.1 Pragma.Charset

This pragma represents the HTML text string associated with the character set of the characters that Dynamic Converter is generating. In order for Dynamic Converter to correctly code the character set into the HTML it generates, all templates should include a META tag that uses the {## INSERT} macro as follows:

<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset={## INSERT ELEMENT=pragma.charset}">

If the template does not include this line, the user will have to manually select the correct character set in their browser.

35.5.2 Pragma.CSSFile

This pragma is used to insert the name of the Cascading Style Sheet (CSS) file into HTML documents. This name is typically used in conjunction with an HTML <LINK> tag to reference styles contained in the CSS file generated by Dynamic Converter.

When used with the {## INSERT} macro, this pragma generates the URL of the CSS file that is created. When the selected HTML flavor supports CSS, this macro must be used with {## INSERT} inside every template file that inserts contents of the source file. The CSS file will only be created if the selected HTML flavor supports CSS.

When used with the {## IF} macro, the conditional will be true if the selected HTML flavor supports Cascading Style Sheets.

If CSS is required for the output, {## IF element=pragma.embeddedcss} or {## IF element=pragma.cssfile} must be used. However, Dynamic Converter does not differentiate between the two, as the choice of using embedded CSS vs. external CSS is your decision and you may even wish to mix the two in the output.

An example of how to use this pragma that works when exporting either CSS or non-CSS flavors of HTML would be as follows:

{## IF ELEMENT=Pragma.CSSFile}
    <LINK REL=STYLESHEET
      HREF="{## INSERT
      ELEMENT=Pragma.CSSFile}">
    </LINK>
{## /IF}

35.5.3 Pragma.EmbeddedCSS

This pragma is used to insert CSS style definitions in a single block in the <HEAD> of the document.

When used with the {## INSERT} macro, this pragma will insert the block of CSS style definitions needed for use later in the file. This macro must be used inside every output HTML file where {## INSERT} is used to insert document content.

When used with the {## IF} macro, the conditional will be true if the selected HTML flavor supports CSS.

If CSS is required for the output, {## IF element=pragma.embeddedcss} or {## IF element=pragma.cssfile} must be used. However, Dynamic Converter does not differentiate between the two, as the choice of using embedded CSS vs. external CSS is your decision and you may even wish to mix the two in the output.

If a style is used anywhere in the input document, that style will show up in the embedded CSS generated for all the output HTML files generated for the input file. Consider a template that splits its output into multiple HTML files. In this example, the input file contains the "MyStyle" style. It does not matter if during the conversion only one output HTML file actually references the "MyStyle" style. The "MyStyle" style definition will still show up in the embedded CSS for all the output files, including those files that never reference this style.
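By analogy with the Pragma.CSSFile example above, a template might emit the style block in the document <HEAD> only when the output flavor supports CSS. This is a sketch only; depending on the output flavor, the inserted block may or may not need the surrounding <STYLE> tags:

```
{## IF ELEMENT=Pragma.EmbeddedCSS}
  <STYLE TYPE="text/css">
  {## INSERT ELEMENT=Pragma.EmbeddedCSS}
  </STYLE>
{## /IF}
```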

35.5.4 Pragma.JsFile

This pragma is used to insert the name of the JavaScript file into HTML documents. This name is typically used in conjunction with an HTML <SCRIPT> tag to reference JavaScript contained in the .js file generated by HTML Export.

When used with the {## INSERT} macro, this pragma will generate the URL of the JavaScript file that is created. This macro must be used with {## INSERT} inside every template file that inserts contents of the source file when:

  1. The selected HTML flavor supports JavaScript.

  2. The javaScriptTabs option has been set to true.

The JavaScript file will only be created if the selected HTML flavor supports JavaScript.

When used with the {## IF} macro, the conditional will depend upon whether the selected HTML flavor supports JavaScript or not.
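Again by analogy with the Pragma.CSSFile example, a template might reference the generated JavaScript file as follows (a sketch, not verified against every flavor):

```
{## IF ELEMENT=Pragma.JsFile}
  <SCRIPT SRC="{## INSERT ELEMENT=Pragma.JsFile}">
  </SCRIPT>
{## /IF}
```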

35.5.5 Pragma.SourceFileName

This pragma represents the name of the source document being converted.


Note:

The Pragma.SourceFileName pragma does not include the path name.
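For example, a template could label each output page with the name of the document it was generated from:

```
<p>Converted from {## insert element=pragma.sourcefilename}</p>
```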


35.6 Setting Script Template Formatting Options

You can control formatting options for script templates by editing the Script Template Conversion Configuration Settings on the Dynamic Converter Configuration page.

The settings that you can change include:

35.6.1 Changing the Format Used for Converted Graphics

If you want to change the format to be used for converted graphics, edit the following option:

# SCCOPT_GRAPHIC_TYPE
#
# Determines what graphic format will be used for exported graphics.
# Setting this to "none" disables graphic output.
#
graphictype     gif
#graphictype    jpeg
#graphictype    png
#graphictype    none

Lines that begin with "#" are commented out, so the example above shows the default setting, with the gif format selected. To use the jpeg format instead, comment out the first line and uncomment the second line, thus:

#graphictype    gif
graphictype    jpeg
#graphictype    png
#graphictype    none

35.6.2 Generating Bullets and Numbers for Lists

If you want to generate bullets and numbers for lists instead of HTML list tags, you would edit the following option:

# SCCOPT_GENBULLETSANDNUMS

#
# Generate Bullets and Numbers.  Bullets and numbers will be generated for
# lists instead of using HTML list tags (<ol>, <ul>, <li>, etc.) when
# rendering lists in a document.
#
genbulletsandnums   no
#genbulletsandnums  yes

Again, comment one line and uncomment another, thus:

#genbulletsandnums  no
genbulletsandnums   yes

35.7 Breaking Documents by Structure

One of the most powerful features of the template architecture is the ability to break long word processor documents up into logical pieces and create powerful navigation aids to access them.

To understand how this is done, you must first understand the document tree as it relates to word processing documents. The somewhat complex graphic in Figure 35-2 shows how the elements in the tree relate to a real-world document.

The following are some examples of elements and the data they would produce if run against the document shown in the preceding image. Note the omission of the default nodes body and contents in the last two examples:

body.contents.headings.2.body.title

would produce "Present Day."

body.contents.headings.2.body.contents.headings.1.body.title

would produce "Commercial."

body.contents.preface

would produce "The History of Flight" and the text below it, up to but not including "Introduction."

headings.2.headings.1.headings.3.title

would produce "McDonnell-Douglas."

headings.2.headings.1.headings.3.contents

would produce the text below "McDonnell-Douglas" but above "Military."

Figure 35-2 Breaking Up Documents by Structure

Sample of breaking up a document by structure

Breaking documents requires that Dynamic Converter understand the logical divisions in the structure of a document. Currently, the only formats that can give Dynamic Converter this information in an unambiguous manner are Microsoft Word 95 and higher and WordPerfect 6.0 and higher. In these formats, the breaking information is available if the author placed table-of-contents information in the document. Refer to the appropriate software manual for information on the procedure for including this information. That is not to say that the document must have a table of contents, only that the information to build one must be present.

It should be noted that some word processing formats, including Microsoft Word 2002 (XP), allow users to specify TOC entries in multiple ways. Dynamic Converter only supports two of these methods:

How the TOC is specified, and whether Dynamic Converter supports it:

  • Applied heading styles: Yes

  • Custom styles with outline levels: Yes

  • Outline level applied as a paragraph attribute: No

  • TOC entries: No


Additionally, if a heading style is applied to text inside a table in the original document, Dynamic Converter will not break on that heading. This is because Dynamic Converter will not break within tables.

Indexes and Structure-Based Breaking

All repeatable nodes have an associated index variable that has a current value at any given time in the conversion process. For elements that contain repeatable nodes as part of their path, the instance of the repeatable element must be specified by using a number or one of several index variable keywords. See Section 35.3.1 for more information on the possible values for the index variables.
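As a sketch of structure-based breaking, the following loop generates a table of contents whose entries link to per-heading files produced by a second template (section.htm is an assumed template name, and the template attribute usage should be checked against the {## LINK} syntax in Section 35.4.6):

```
{## repeat element=body.contents.headings}
   <p><a href="{## link element=body.contents.headings.current.body template=section.htm}">
   {## insert element=body.contents.headings.current.title}</a></p>
{## /repeat}
```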

35.8 Breaking Documents by Content Size

In addition to breaking documents by structure (see Section 35.7), Dynamic Converter also supports breaking documents based on the amount of content to be placed in each output file or "page." Documents can even be broken based on both their structure and content size.

To break documents by content size, two things must be done. First, the SCCOPT_EX_PAGESIZE (pageSize) option must be set (see Section 35.4.11). Second, the template must be equipped with the {## UNIT} construct (see Section 35.4.2).

The basic idea behind the unit template construct is to tell Dynamic Converter what things should be repeated on every "page" and what pieces should only be shown once. In other words, the unit template construct provides a mechanism for grouping template text and document elements. Unit boundaries are used when determining where to break the document when spanning pages.

Here are some examples of the kinds of things the template author might want to appear on every page:

  • The <META> tag inserting the output document character set.

  • A company copyright message.

  • Navigational elements to link the previous/next pages together.

Typical examples of things that would not go on every page would be:

  • The actual content of the document.

  • Structural navigational elements like the links for a table of contents.

A unit consists of a header, a footer (both of which are optional), and a body. Items that are to be repeated at the beginning or end of every unit should be placed in the header or footer respectively.

A unit is delimited by the {## UNIT} template macro. Similarly, the {## HEADER} and {## FOOTER} template macros delimit the header and footer respectively. The body is everything that is left between the header and the footer. The {## UNIT} macro must be the first macro in the template. The body frequently contains nested units. The body may be empty.

To ensure that the header is the first item in the template and the footer is the last item, text between the {## UNIT} tag and the {## HEADER} tag will be ignored, as will text between the {## /FOOTER} tag and the {## /UNIT} tag, including whitespace. The header and footer of a unit will be output in every page containing that unit, enclosing that portion of the unit's body that is able to fit in a particular page. The entire template is a unit that may contain additional units.

35.8.1 A Sample Size Breaking Template

By way of example, let's take another look at the very simple template from About Script Templates. To make things more interesting, let's insert the character set into the template with a <meta> tag. Let's also insert some better navigation to improve movement between the pages. The modified version of the template is as follows:

{## unit}{## header}
<html><head>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html;
charset={## insert element=pragma.charset}" /></head>
<body>
{## anchor aref="prev" format="<p><a href=\"%url\">Prev</a></p>"}
{## /header}
<p>Here is the document you requested.
{## insert element=property.title} by
{## insert element=property.author}</p>

<p>Below is the document itself</p>
{## insert element=body}
{## footer}
{## anchor aref="next" format="<p><a href=\"%url\">Next</a></p>"}
</body>
</html>
{## /footer}{## /unit}

A very small value (about 20 characters) is used for the page size option. The resulting HTML might look like this:

file1.htm

<html><head>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ASCII"/></head>
<body>
<p>Here is the document you requested.</p>
<p>A Poem by Phil Boutros</p>
<p><a href="file2.htm">Next</a></p>
</body>
</html>

file2.htm

<html><head>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ASCII" /></head>
<body>
<p><a href="file1.htm">Prev</a></p>
<p>Below is the document itself</p>
<p>Roses are red</p>
<p>Violets are blue</p>
<p><a href="file3.htm">Next</a></p>
</body>
</html>

file3.htm

<html><head>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ASCII" /></head>
<body>
<p><a href="file2.htm">Prev</a></p>
<p>I'm a programmer</p>
<p>and so are you</p>
</body>
</html>

There are several things to note here:

  • The page size option value does not apply to the text from the template, only the text inserted from the source document. Each page contains roughly 20 characters of visible input document text.

  • The {## INSERT} of the character set is part of the {## HEADER} and therefore is inserted into all the output pages.

  • Text from the body of the unit is inserted sequentially. Thus "as is" template text such as the line "<p>Below is the document itself</p>" is only inserted once.

  • The {## ANCHOR} tags only insert links to the previous/next page if there actually is a previous/next page. Thus the first page does not have a link to the non-existent previous page.

35.8.2 Templates Without {## UNIT} Macros

The {## UNIT} macro is only required in templates that are designed to break pages based on size using the SCCOPT_EX_PAGESIZE (pageSize) option. An example of a template that would not perform any size-based breaking is one that defines an HTML <FRAME>, but does not include any document content. Another example where size-based breaking might not be desired is a table of contents page, even though a table of contents page does contain document content.

A template that does not conform to the {## UNIT} format is not a size-based breaking template. Support for this type of template will continue for the indefinite future. A template is considered not to be a size-based breaking template if the first macro tag encountered is something other than {## UNIT}. In that case, there cannot be any {## UNIT}, {## HEADER}, or {## FOOTER} macros later in the template. The value of the SCCOPT_EX_PAGESIZE (pageSize) option is ignored for this type of template.

35.8.3 Indexes and Size-Based Breaking

As mentioned earlier, all repeatable nodes have an associated index variable. See Section 35.3.1 for information about using index variable keywords such as "Next" and "Last."

35.9 Using Grids to Navigate Spreadsheet and Database Files

In order to support spreadsheets (and database files, though they are not as common), a template-based navigation concept known as a "grid" is available. Grids offer a way to consistently navigate a spreadsheet or database in an intuitive fashion.

Grids can be used to present the output of large spreadsheets in smaller pieces, so that less scrolling is necessary. They can also be used to help prevent the HTML versions of large spreadsheets from overwhelming browsers, potentially causing them to lock up, and to halt processing of large spreadsheets before they consume too much CPU time.

To use grids, you should use the new grid template element (see Section 35.2.4). Grids may only be used in templates that have been enabled with the {## UNIT} template macro. It is also important to set the grid-related options (see Section 35.4.11).

The grid support has some important limitations:

  1. The output file format and flavor are expected to support tables, although this is not strictly required.

  2. Grids are only used when converting spreadsheets and database input files. Grids are not available for word processing files at this time.

  3. Due to size constraints, grid support works best if the contents of the cells in the input file do not make use of a lot of formatting (bold, special fonts, text color, etc.).

To further explain the grid system, consider a multi-sheet spreadsheet workbook as an example. Each sheet in the spreadsheet workbook is broken into a collection of grids. Each grid has a fixed maximum size and is a rectangular portion of the spreadsheet. The size of the grid is specified as a number of spreadsheet cells. For example, consider the 7 x 10 spreadsheet in Figure 35-3.

Figure 35-3 Example 7 X 10 Spreadsheet


If you wanted to break it up into 3 x 4 grids, nine grids would be produced as shown in Figure 35-4.

Figure 35-4 Example 7 x 10 Spreadsheet Split Up in 3 X 4 Grids


Normally, all grids have the same number of cells. The exception is that grids at the right or bottom edge of the spreadsheet may be smaller than the normal size. Grids will never be larger than the requested size. For this reason, grids can easily be navigated by using "up," "down," "left," or "right." One thing that grids cannot do is address individual cells in a spreadsheet (except, of course, in the degenerate case of a grid whose size is 1 x 1).
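The grid arithmetic above can be sketched in a few lines. This is an illustration only, not Dynamic Converter code; the function names are assumptions:

```python
import math

def grid_count(sheet_cols, sheet_rows, grid_cols, grid_rows):
    """Number of grids needed to cover a sheet; edge grids may be smaller."""
    across = math.ceil(sheet_cols / grid_cols)
    down = math.ceil(sheet_rows / grid_rows)
    return across * down

def grid_extent(index_x, index_y, sheet_cols, sheet_rows, grid_cols, grid_rows):
    """Actual width and height of the grid at (index_x, index_y);
    grids at the right or bottom edge are clipped to the sheet."""
    width = min(grid_cols, sheet_cols - index_x * grid_cols)
    height = min(grid_rows, sheet_rows - index_y * grid_rows)
    return width, height

# The 7 x 10 sheet from Figure 35-3 split into 3 x 4 grids:
print(grid_count(7, 10, 3, 4))         # 9 grids, as in Figure 35-4
print(grid_extent(2, 2, 7, 10, 3, 4))  # bottom-right edge grid is only 1 x 2
```

Because the grid size is fixed, moving "up," "down," "left," or "right" is just a change of grid index; only the clipped edge grids differ in size.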

Dynamic Converter does not force deck/page breaks between each grid. Therefore, if the template writer wants to limit each deck/page to only one grid, they should force the break in the template.

Grid Support When Tables Are Not Available

Not all output flavors supported by Dynamic Converter support the creation of tables. If the output flavor does not support tables, Dynamic Converter will still support grids. However, Dynamic Converter's normal non-table output will be what is presented in grid form. For example, if "[A1]" represents the contents of cell A1, then we would export the following for a grid of size (2 x 2):

If grids.1.body is:

[A1]
[A2]
[B1]
[B2]

then grids.right.body is:

[C1]
[C2]
[D1]
[D2]

and grids.down.body is:

[A3]
[A4]
[B3]
[B4]
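The column-by-column ordering of that non-table output can be illustrated with a short sketch. This is an assumption-laden illustration (the helper names are invented), not Dynamic Converter's actual export code:

```python
def grid_cells(origin_col, origin_row, width, height):
    """Cell references for one grid, emitted column by column, matching
    the non-table output above (A1, A2, B1, B2 for a 2 x 2 grid)."""
    def col_name(n):  # 0 -> A, 1 -> B, ... 26 -> AA
        name = ""
        n += 1
        while n:
            n, rem = divmod(n - 1, 26)
            name = chr(ord("A") + rem) + name
        return name
    return [f"{col_name(origin_col + c)}{origin_row + r + 1}"
            for c in range(width) for r in range(height)]

print(grid_cells(0, 0, 2, 2))  # grids.1.body     -> ['A1', 'A2', 'B1', 'B2']
print(grid_cells(2, 0, 2, 2))  # grids.right.body -> ['C1', 'C2', 'D1', 'D2']
print(grid_cells(0, 2, 2, 2))  # grids.down.body  -> ['A3', 'A4', 'B3', 'B4']
```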

18 Processing Reservations and Chargebacks

Chargebacks are fees charged to people or businesses for the use of storage facilities or actions performed on physical items in the storage facilities. They can also be used to provide an explanation for storage actions. Reservations are used to manage physical items which can be checked out to users, reserved for later use, or requested.

This chapter discusses the processing of charges and invoicing as well as the reservation process. Not all tasks discussed here can be performed by all users. Access to functionality is dependent on assigned rights and roles.

This chapter covers the following topics:

18.1 Managing Chargebacks

Invoices can be generated for the storage, use, reservation, and destruction of the managed content. The invoices can then be sent to internal or external customers in accordance with applicable business procedures.

The administrator sets up charge types (billable events), payment types (methods of payment), and customers (users or organizations who will be billed). After these are set up, each billable action (creation, reservation, storage, destruction) can be charged to a particular customer by creating invoices containing one or more transactions on physical items for these customers. Automatic transactions are those in which the charges are calculated at transaction time.

A charge type is a defined transaction triggered by certain criteria. For example, the creation of a physical item of object type Box and media type Box may cost $5 per occurrence, while reservation of an item with priority ASAP Rush may cost $20.

Whenever someone performs an action meeting the criteria of a charge type, a billable transaction is recorded for the associated user or organization (customer). The system uses the most specific charge type. If charge type A has two criteria and charge type B has the same two criteria plus another one, charge type B is recorded for a transaction meeting all three criteria of charge type B even though it also meets the two criteria of charge type A.
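The "most specific charge type" rule can be sketched as follows. This is a minimal illustration of the selection logic described above, not Physical Content Management code; the data shapes and function name are assumptions:

```python
def matching_charge_type(event, charge_types):
    """Return the charge type all of whose criteria match the event,
    preferring the one with the most criteria (the most specific)."""
    best = None
    for ct in charge_types:
        criteria = ct["criteria"]
        if all(event.get(k) == v for k, v in criteria.items()):
            if best is None or len(criteria) > len(best["criteria"]):
                best = ct
    return best

# Charge type B shares A's two criteria and adds a third:
charge_types = [
    {"id": "A", "criteria": {"action": "Creation", "object_type": "Box"}},
    {"id": "B", "criteria": {"action": "Creation", "object_type": "Box",
                             "media_type": "Box"}},
]
event = {"action": "Creation", "object_type": "Box", "media_type": "Box"}
print(matching_charge_type(event, charge_types)["id"])  # B: three criteria beat two
```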

An amount of money is associated with each charge type, charged either per item or per period. For example, you could charge a fee every time a physical item is created (or reserved or destroyed), or charge a monthly fee to store a physical item.

A payment type specifies how internal or external customers pay for services. Pre-defined payment types include credit card or check. Custom payment types can also be created.

Customers are internal or external users or organizations who are charged for the services rendered on physical items. They will receive the invoices generated by Physical Content Management (in accordance with the applicable business procedures) and make the payments for the chargebacks.

After the charge types, payment types, and customers are defined, they can be used to create invoices to submit to the customers for each billable event. Invoices can be run on an as-needed basis or they can be scheduled automatically in accordance with defined criteria.

This section discusses the following topics:

18.1.1 Understanding the Chargeback Process

A site's specific reservation process may differ from the one described here, depending on the procedures in place.

The typical fulfillment process of a chargeback is as follows:

  1. A user performs a billable action (for example, creates, reserves, or stores a physical item in storage).

    If automatic transactions are enabled, when the user performs one of these actions on a physical item, the action is matched against all defined charge types. Each action on each item is matched against the current transactions. If there is no match, the action is not recorded for chargeback.

  2. The transaction is recorded in the system. The administrator should make sure there are transactions in place to cover as many variations as possible of actions on physical items. In this way, chargeback processing becomes largely automatic and requires less individual attention for each request made for a physical item.

  3. The administrator generates an invoice, either automatically by using scheduled invoices or manually by generating individual invoices.

  4. The invoice is sent to the customer according to business procedures. The Physical Content Management functionality does not e-mail invoices or otherwise deliver them.

  5. The bill is paid or otherwise considered paid according to company procedures.

  6. After the bill is paid, the administrator marks the invoice as paid within the Physical Content Management feature.

18.1.2 Configuring Chargeback Processing

Administrators set up charge types, payment types, and customers.


Permissions:

The PCM.Admin.Manager right is required to perform this action. This right is assigned by default to the PCM Administrator role. In addition, the chargeback feature has its own set of rights that define what users can do in this area.


The default Physical Content Management functionality comes with the following predefined charge actions:

  • Creation: A user is billed if a physical item is created.

  • Destruction: A user is billed if a physical item is destroyed.

  • Reservation: A user is billed if a reservation request is made for a physical item.

  • Storage: A user is billed if a physical item is stored.

The default Physical Content Management functionality comes with the following predefined payment types:

  • Credit Card: To charge a customer paying with a credit card.

  • Check: To charge a customer paying by check.

This section discusses the following topics:

18.1.2.1 Creating or Editing a Charge Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Create right are needed to perform this action. These rights are assigned by default to the PCM Administrator role.


The most specific charge type is always used. For example, if charge type A has two criteria and charge type B has the same two criteria plus another one, charge type B is recorded for a transaction meeting all three criteria of charge type B (even though it also meets the two criteria of charge type A).

To create a new charge type to be used for chargebacks:

  1. Choose Physical then Configure. Choose Charges then Type.

  2. On the Configure Charge Type page, click Add.

  3. On the Create or Edit Charge Type page, specify the properties of the charge type.

    • Charge Type ID: Unique identifier for the charge type. Maximum: 30 characters. This field is view-only on the Edit Charge Type page.

    • Description: Description of the charge type. Maximum: 60 characters.

    • Actions: Type of action associated with the charge type. Options include Creation, Destruction, Reservation, or Storage.

    • Charge Amount: Amount charged for the transaction in dollars and cents and frequency of the charge (per item or per period).

    • Frequency: If Action is set to Storage, this is the frequency of the storage period.

    • Object Types: Object type that triggers the charge type. Click Browse to view and select an object type from a list.

    • Media Types: Media type that triggers the charge type. Click Browse to view and select a media type from a list.

    • Transfer Method and Priorities: When Action is set to Reservation, this is the transfer method of reservation that triggers the charge type.

    The new charge type is now added to the top of the list on the Configure Charge Type page.


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Edit right are needed to perform this action. These rights are assigned by default to the PCM Administrator role.


To modify a charge type, select the item to edit in the list of items and choose Edit Charge Type from the item's Actions menu. Modify the properties as required and click OK when finished.

18.1.2.2 Viewing a Charge Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Read right are needed to perform this action. These rights are assigned by default to the PCM Administrator role.


To view the properties of a charge type:

  1. Choose Physical then Configure. Choose Charges then Type.

  2. In the list of charge types on the Configure Charge Type page, select the item and click the item's Info icon. The Charge Type Information page opens, listing all properties of the charge type.

18.1.2.3 Deleting a Charge Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Delete right are needed to perform this action. These rights are assigned by default to the PCM Administrator role.


To delete a charge type:

  1. Choose Physical then Configure. Choose Charges then Type.

  2. In the list of charge types on the Configure Charge Type page, select the item to delete, and choose Delete Charge Type in the item's Actions menu, or select an item's check box and choose Delete in the Table menu.

18.1.2.4 Creating or Editing a Payment Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Create right are needed to perform this action. These rights are assigned by default to the PCM Administrator role.


To create a new payment type to be used for chargebacks:

  1. Choose Physical then Configure. Choose Charges then Payment Methods.

  2. On the Configure Payment Methods page, click Add.

  3. On the Create or Edit Payment Method page, specify the properties of the payment type, and click OK.

    The new payment type is now added to the bottom of the list on the Configure Payment Methods page.


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Edit right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To modify a payment type, select the item to edit in the list of items and choose Edit Payment Type from the item's Actions menu. Modify the properties as required and click OK when finished.

18.1.2.5 Viewing a Payment Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Read right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To view the properties of a payment type:

  1. Choose Physical then Configure. Choose Charges then Payment Methods.

  2. In the list of payment types on the Configure Payment Methods page, select the item and click the item's Info icon.

    The Payment Type Information page opens listing all properties of the payment type.

18.1.2.6 Deleting a Payment Type


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Delete right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To delete a payment type:

  1. Choose Physical then Configure. Choose Charges then Payment Methods.

  2. In the list of payment types on the Configure Payment Methods page, select the item to delete, and choose Delete Payment Type from the Actions menu.

18.1.2.7 Creating or Editing a Customer


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Create right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To create a new customer to be used for chargebacks:

  1. Choose Physical then Configure from the Top menu. Choose Charges then Customers.

  2. On the Configure Customers page, click Add.

  3. On the Create or Edit Customer page, specify the properties of the customer:

    • Customer ID: Unique ID for the group being billed. Maximum characters: 30.

    • Name: Descriptive name for the customer. Maximum characters: 60.

    • Address and Contact Information: Address and contact information, including e-mail or phone.

    • Is Active: Indicator of the active status of the customer. Default: no.

    The new customer is now added to the bottom of the list on the Configure Customers page.


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Edit right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To modify a customer, select the customer to edit in the list of customers and choose Edit Customer from the Actions menu on the customer list. Modify the properties as required and click OK when finished.

18.1.2.8 Viewing a Customer


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Read right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To view the properties of a customer:

  1. Choose Physical then Configure. Choose Charges then Customers.

  2. In the list of customers on the Configure Customers page, select the item and click the item's Info icon. The Customer Information page opens listing all properties of the customer.

18.1.2.9 Deleting a Customer


Permissions:

The PCM.Admin.Manager right and the CBC.ChargeBacks.Delete right are needed to perform the following action. These rights are assigned by default to the PCM Administrator role.


To delete a customer:

  1. Choose Physical then Configure. Choose Charges then Customers.

  2. In the Configure Customers page, in the list of customers, select the item to delete, and choose Delete Customer in the item's Actions menu. To delete multiple customers, select the check box for the customers and choose Delete from the Table menu.

18.1.2.10 Creating Automatic Transactions

Automatic transactions can be defined by selecting the transaction type and enabling it. To enable automatic transactions:

  1. Choose Physical then Configure. Choose Charges then Automatic Transactions.

  2. On the Configure Automatic Transactions page, select the transaction that should be made automatic by selecting the transaction's check box.

  3. When finished, click Submit Update.

18.1.2.11 Creating or Editing a Manual Transaction

You can create a manual transaction in much the same way as creating automatic transactions. To add a manual transaction:

  1. Choose Physical then Configure. Choose Charges then Manual Transactions.

  2. On the Create Manual Transaction page, enter the necessary information for the transaction.

  3. When finished, click Create.

18.1.2.12 Deleting a Manual Transaction


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Delete right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To delete a transaction:

  1. Choose Physical then Configure. Choose Chargebacks.

  2. On the Charge Invoices page, select the link to Transactions with No Invoice.

  3. In the list of transactions, select the Delete check box for the one to delete.

18.1.3 Managing Chargeback Tasks

This section discusses the processing of charging, invoicing, and billing. It contains the following topics:

18.1.3.1 Creating or Scheduling an Invoice


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Create right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To manually create a new invoice:

  1. Choose Physical then Invoices. Choose Chargebacks.

  2. On the Invoices page, click Add.

  3. Select the content criteria used to screen for items to be included on the invoice (for example, records from a certain department).

  4. Enter the necessary additional criteria to filter the transactions. Click Generate Invoice to create an invoice immediately, or click Schedule to open a scheduling page where schedule criteria can be entered.

  5. Click OK when done.

18.1.3.2 Adjusting an Invoice


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Edit right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To edit an invoice:

  1. Choose Physical then Invoices.

  2. On the Invoices page, choose Edit then Adjust Invoice in the Actions menu for an item. A page opens where information about the invoice can be adjusted.

18.1.3.3 Deleting an Invoice


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Delete right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To delete an invoice:

  1. Choose Physical then Invoices.

  2. In the list of invoices on the Invoices page, select the check box next to the invoice then choose Delete from the Table menu.

18.1.3.4 Viewing Invoice Information


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Read right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To view an invoice:

  1. Choose Physical then Invoices.

  2. On the Invoices page, click the Info icon for the invoice to view.

18.1.3.5 Printing an Invoice


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.PrintInvoices right, and the CBC.ChargeBacks.Admin right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To print an invoice:

  1. Choose Physical then Invoices.

  2. On the Invoices page, choose Reports from the Actions menu for the invoice to print, and choose the type of report to produce.

18.1.3.6 Marking an Invoice As Paid


Permissions:

The PCM.Admin.Manager right, the CBC.ChargeBacks.Admin right, and the CBC.ChargeBacks.Edit right are required to perform this task. These rights are assigned by default to the PCM Administrator role.


To mark an invoice as paid:

  1. Choose Physical then Invoices.

  2. On the Invoices page, choose Edit then Mark Paid in the Actions menu for the invoice to mark as paid.

18.2 Processing Reservations

Reservations are used to manage physical content. A user can put a hold on items that are currently unavailable (for example, someone else has the item). If others also made a reservation request for an item, that reservation is put on a waiting list, which specifies the order in which people made a reservation for the item. A reservation request may comprise multiple items.

After a reservation request is made, an e-mail notification is sent to the administrator, who processes the request and starts the reservation fulfillment process in accordance with the applicable procedures in the organization.

If you are a user with the standard reservation privileges, you cannot make any changes to an existing reservation. You can only do so if your administrator has granted you special privileges beyond the defaults for a PCM requestor.

Each user can normally place only one reservation request for the same item. However, the administrator may have set up the system so a user can make multiple requests. This may be useful in environments where there are users who make reservation requests on behalf of several people.

This section discusses the following topics:

18.2.1 Reservation Request Properties

Each reservation request has several properties, including the following:

18.2.1.1 Request Status

The request status specifies the current status for a reserved physical item, which can be any of the following:

  • Waiting List: The request item is currently already checked out to someone else. It becomes available to the next requestor upon its return (unless the administrator chooses to override the waiting list order).

  • In Process (initial default): The reserved item is available and is being prepared for delivery. Only one request item for a reservation can have the In Process status.

  • Not Found: The request item could not be located in its designated location.

  • Unavailable: The request item cannot currently be processed for delivery.

  • Denied: The reservation request has been rejected by the administrator and cannot be fulfilled.

  • Canceled: The reservation request was called off before it could be fulfilled.

  • Checked Out: The reserved item is currently in the possession of someone as part of a reservation request. If a physical item is checked out, its current location (as shown on the Physical Item Information page) is automatically set to the value of the Deliver To Location field for the associated reservation request. If no value was entered in this field, the current location is set to OTHER. Also, the current location comment on the Physical Item Information page) is set to the location comment specified for the associated reservation request. If no comment was provided, it is set to the login name of the user who made the reservation.

  • Overdue: The reserved item is currently checked out to someone who has failed to return the item within the configured checkout time. As a result, the reservation request cannot currently be fulfilled.

    By default, an e-mail notification is sent out to the user who has an overdue item. This e-mail notification can be turned off.

  • Returned: The checked-out item was returned to the storage repository, so it is available for other users to reserve and check out.
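The location rule described for the Checked Out status can be sketched as follows. This is a minimal illustration under stated assumptions (the function name and parameters are invented), not actual Physical Content Management code:

```python
def checkout_location(deliver_to, location_comment, requestor_login):
    """Current location and comment recorded when an item is checked out:
    the Deliver To Location (or OTHER if none was entered) and the
    location comment (or the requestor's login name if none was given)."""
    location = deliver_to if deliver_to else "OTHER"
    comment = location_comment if location_comment else requestor_login
    return location, comment

print(checkout_location(None, None, "jsmith"))          # ('OTHER', 'jsmith')
print(checkout_location("Warehouse_3", "Bay 2", "jsmith"))  # ('Warehouse_3', 'Bay 2')
```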

18.2.1.2 Transfer Method

The transfer method specifies how the person who made the request (the requestor) will receive the reserved item. Users specify the desired transfer method when a reservation request is created. The following transfer methods are supported:

  • Copy: The physical content item will be duplicated and the copy will be provided to the intended recipient. The copy can be physical (for example, a copied DVD) or electronic (for example, an ISO image of a CD).

  • Fax: The physical content item will be faxed to its intended recipient.

  • Mail: The original physical content item will be mailed to its intended recipient.

  • Pickup: The intended recipient will pick up the physical content item in person.

  • Email: The content item will be e-mailed to its intended recipient.

18.2.1.3 Priority

The priority of a reservation request specifies the urgency with which it must be fulfilled. Users specify the desired priority when they create a reservation request. The following priorities are supported:

  • No Priority: Delivery of the requested item does not have any particular priority (there is no rush). The item can be delivered in accordance with the applicable fulfillment procedures.

  • ASAP Rush: The requested item should be delivered to its intended recipient as soon as possible after the reservation was made.

  • This Morning: The requested item should be delivered to its intended recipient the same morning the reservation was made.

  • Today: The requested item should be delivered to its intended recipient the same day the reservation was made.

  • This Week: The requested item should be delivered to its intended recipient the same week the reservation was made.

18.2.2 Managing Reservations

The following tasks are included when managing reservations:

18.2.2.1 Creating a Reservation Request


Permissions:

The PCM.Reservation.Create right is required to perform this task. This right is assigned by default to the predefined PCM Requestor and PCM Administrator roles.


Reservation requests can only be made for physical items. Error messages are displayed if an attempt is made to reserve electronic items.

By default, each user can place only one reservation request for the same item. If users make reservation requests on behalf of multiple people (for example, manager assistants), it may be useful to override this behavior. To do so, add the following variable to the physicalcontentmanager_environment.cfg configuration file:

AllowMultipleRequests=true

If a reservation request is created for a physical item containing other items, the other items are included in the reservation. The child items are not seen in the request, but when a checkout is done for the parent item, all child items are also checked out. A request can be made for each of the child items, but they cannot be checked out until the parent item is returned.

As soon as a reservation request is submitted, the status of all request items is automatically changed to In Process, unless their status is already In Process or Checked Out. In that case, it is changed to Waiting List.
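The status rule above can be sketched in a few lines. This is an illustration of the described behavior only (the function name is an assumption):

```python
def initial_request_status(current_status):
    """Status assigned to a request item when a reservation is submitted:
    Waiting List if the item is already In Process or Checked Out,
    otherwise In Process."""
    if current_status in ("In Process", "Checked Out"):
        return "Waiting List"
    return "In Process"

print(initial_request_status("Returned"))     # In Process
print(initial_request_status("Checked Out"))  # Waiting List
```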

Users with the standard reservation privileges (those with the predefined 'pcmrequestor' role) cannot make any changes to an existing reservation by default. In order to edit reservation requests, they must be given the PCM.Reservation.Edit right.

To make a reservation request:

  1. Search for the physical item(s) to reserve and add them to the content basket.

  2. Choose My Content Server then My Content Basket.

  3. On the Content Basket page, select the check box of each physical item to reserve and choose Request then Request Selected Items from the Table menu. To reserve all items in the content basket, choose Request All Items.

    A prompt is displayed asking if the selected items should be removed from the content basket after they are reserved.

  4. Click Yes or No. Click Cancel to stop the reservation request.

  5. On the Create or Edit Request page, specify the properties of the new reservation request:

    • Request Name: Name for the reservation. Note that this is not required to be unique. Each reservation request has a unique system-internal reference. The system tracks reservation requests using this internal reference, not the request name. Therefore, multiple reservations can have the same name. Maximum characters: 30.

    • Request Date: Date and time the request is made. Default is the current date and time.

    • Requestor: Person submitting the request. Default is currently logged-in user.

    • Security Group: Group to which the request is assigned. Security groups can be used to limit the requests to which users have access.

    • Transfer Method and Priority: Desired transfer method and priority to be used.

    • Required By Date: Date when the items are needed. Click the calendar icon to select a date. Providing a time is optional. If not specified, midnight (12:00) is used.

    • Deliver To Location and Location Comment: Location where the item should be delivered. If the location is in the storage hierarchy, click Browse to search for and select the location. If not in the hierarchy, use the Comment field to provide delivery details. If an item is checked out, its current location (as shown on the Physical Item Information page) is automatically set to the value of this field for the associated reservation request. If no value was entered, the current location is set to OTHER.

    • Comments: Additional comments as needed.

  6. Click Create when finished.

The status of all request items is now automatically changed to In Process, unless their status is already In Process or Checked Out. In that case, it is changed to Waiting List. The items are reserved and the administrator is notified about the reservation request. After the administrator processes the reservation request, it can be fulfilled in accordance with the procedures in the organization.

18.2.2.2 Editing a Reservation Request


Permissions:

The PCM.Reservation.Edit right is required to perform this task. This right is assigned by default to the predefined PCM Administrator role. A user can edit an owned reservation without this right depending on the settings when PCM was configured.


To modify the properties of a reservation request:

  1. Choose Physical then Reservations.

  2. On the Reservation Search Results page, locate the reservation request to edit and choose Edit then Edit Request from its Actions menu.

  3. On the Create or Edit Request page, modify the properties of the reservation request and click Submit Update when finished.

18.2.2.3 Deleting a Reservation Request


Permissions:

The PCM.Reservation.Delete right is required to perform this task. This right is assigned by default to the predefined PCM Administrator role. A user can delete an owned reservation without this right depending on the setting when PCM is configured.


To delete a reservation request (and effectively cancel it):

  1. Choose Physical then Reservations.

  2. On the Reservation Search Results page, locate the reservation request to delete and choose Delete Request from its Actions menu.

    The reservation request is deleted immediately, without any further prompts. If there were no errors, a message is displayed stating the reservation request was deleted successfully.

18.2.2.4 Viewing Reservations for a Physical Item


Permissions:

The PCM.Reservation.Read right is required to perform this task. This right is assigned by default to the predefined PCM Requestor and PCM Administrator roles.


To view all outstanding reservation requests for a physical item:

  1. Search for the physical item.

  2. On the search results page, choose Information then View Reservations in the item's Actions menu.

  3. The Reservation Search Results page opens listing all outstanding reservation requests for the current physical item.

18.2.2.5 Changing the Status of a Request Item


Permissions:

The PCM.Reservation.Edit right is required to perform this task. This right is assigned by default to the predefined PCM Administrator role. Users can change the status of an owned reservation without this right depending on the settings when PCM was configured.


To change the status of a request item in a reservation request:

  1. Search for the request item to change.

  2. On the Reservation Search Results page, locate the request item whose status you want to change and choose Information then Request Item Information from its Actions menu.

  3. On the Request Item Information page, choose Edit on the Page menu.

  4. On the Edit Request Item page, select a new status and click Submit Update when finished.


12 Configuring Records Management

The Records portion of Oracle WebCenter Content is used to manage content items on a retention schedule. The focus of records management tends to be the preservation of content for historical, legal, or archival purposes while also performing retention management functions. The focus of retention management tends to be the scheduled elimination of content based on a schedule designed by a record administrator. Both records and retention management are combined to track and preserve content as needed, or dispose of content when it is no longer required.


Important:

You must configure all defaults, including any necessary categories, dispositions, and triggers, before checking in content that will use those defaults.


Items for retention are any form of information, physical or electronic, that is important enough to an organization that it must be retained for a specific period and may be disposed of when no longer needed. Such content can be revisioned, retained, and managed on a disposition schedule. An organization may choose to manage content to eliminate outdated and misleading information and track documents related to legal proceedings.

This chapter covers the following topics:

12.1 Understanding Records Management

Many organizations are subject to regulations that require the retention of information for a specified period:

  • Sarbanes-Oxley:

    • Applies to all publicly traded corporations or companies that may become public

    • Audit-related working papers, communications, and correspondence must be retained for five years after the audit

  • Government organizations: DoD 5015.2, General Records Schedule

  • Pharmaceutical/health care industry: HIPAA, FDA regulations

  • Financial services: SEC Rule 17a

  • Telecommunications industry: 47 CFR 42, and so on

There may be litigation-related needs for effective and efficient retention management:

  • Policy-based retention of content:

    • Retain information needed for litigation (for example, a contract and any communication relating to it).

    • Centralized searching and retrieval of that information.

  • Systematic disposition of eligible content:

    • Less material to search through during discovery.

    • Less material to give to opposing counsel.

  • Suspend/freeze disposition of content relating to pending litigation:

    • Avoid appearance of cover-up and possible liability when content relating to pending litigation is destroyed.

There may be business-related needs for effective and efficient retention management:

  • To organize items that are created in a variety of forms (e-mail, CDs, DVDs) and which are stored in a variety of locations (employee computers, central file storage, and so on).

  • To provide a uniform infrastructure for retrieving and sharing the content across the organization.

  • The information may be required for the day-to-day operations of the organization and must be kept for historical, tracking, or audit purposes (for example, receipts, order histories, completed forms, personnel files, corporate announcements).

  • The information may be necessary to the success or survival of the organization (for example, software source code, contracts, financial data).

  • There may be internal policies or external regulations requiring the information to be retained (for example, transaction documents, financial statements, lease agreements).

  • To ensure that content items are retained over the period they are useful to the business.

This section discusses the following additional topics in records management:

12.1.1 Life Cycle for Retained Content

The life cycle of retained content goes through several stages.

Figure 12-1 Life Cycle of Retained Content


The filing date is the date a content item is marked as an item being tracked. This often coincides with the check-in date. However, it is possible for an active content item already checked in to be tracked.

The information may need to be retained for different periods of time, depending on the type of content, its use within the organization, and the need to comply with external laws or regulations.

The cutoff of a content item is the moment the status of the item changes and the item goes into disposition. An item may be cut off after a specific period, at a specific event, or after an event.

Items are disposed of by authorized people according to the requirements of the organization. Disposition actions can include destruction, storage, transfer, or an item can be deemed so important it will never be destroyed (for example, due to historical significance). "Disposal" in this instance indicates a status change from active use.
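The timeline above — filing, a retention period, then cutoff — can be illustrated with simple date arithmetic. This is a conceptual sketch under an assumed day-based retention period, not Oracle's implementation; the function name is hypothetical.

```python
# Illustrative sketch (hypothetical names, not Oracle's API): computing
# the cutoff milestone from the filing date and a retention period.
from datetime import date, timedelta

def cutoff_date(filing: date, retention_days: int) -> date:
    """Cutoff: the moment the item's status changes and it enters disposition."""
    return filing + timedelta(days=retention_days)

filed = date(2013, 1, 15)
# e.g. a five-year retention expressed in days (ignoring leap days)
print(cutoff_date(filed, retention_days=5 * 365))
```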

12.1.2 Types of Retained Content

Retained content can be divided into categories depending on the perspective:

12.1.2.1 Internal and External Retained Content

An internal retained content item is an electronic item stored within Oracle WebCenter Content and managed by the product.

External content can also be managed. An external retained content item is a source file not stored in Oracle WebCenter Content. It can be in a variety of formats, either physical or electronic. The software can manage the disposition schedule, search metadata associated with the external file, and manage an electronic rendition of an external file. An electronic rendition can either be checked in as a primary file of an external item, or be filed as a separate file, and then linked to the external file metadata.

12.1.2.2 Classified, Unclassified, Declassified Content

Content can be classified, unclassified, or declassified.

  • Classified content is that which requires protection against unauthorized disclosure (for example, because it contains information sensitive to the national security of the United States or because it is essential for a corporation's operation).

  • Unclassified content is not and has never been classified.

  • Declassified content was formerly classified, but that classified status has been lifted.

A classification specifies the security level of a classified content item. A classification guide provides default classification values for check-in pages.

Options can be chosen during the initial setup to ensure that the system complies with the DoD 5015.2 standard (including Chapter 4). The software has been certified by the Joint Interoperability Test Command (JITC) to comply with that standard. A copy of the standard is available on the official website of the Department of Defense, Washington Headquarters Services, Directives and Records Division at http://www.dtic.mil/whs/directives/.


Important:

Executive Order 12958: Classified National Security Information describes in detail the system for classifying, safeguarding, and declassifying national security information. This guide assumes you are familiar with proper classification protocols.


12.1.2.3 Non-Permanent, Transfer or Accession, and Reviewed Content

For disposition purposes, content is categorized into non-permanent, transfer or accession to NARA, and subject to review. Most items fall into the non-permanent category.

Non-permanent items are usually destroyed after a retention period. Permanent items are deemed important for continued preservation and are retained indefinitely (for example, because of their historical significance).

Items can be scheduled for periodic reviews by authorized people. This complies with the DoD Vital Record Review criteria.

12.1.3 Basic Retention Management Concepts

Records is used to manage content, regardless of source or format, in a single, consistent, manageable infrastructure. Managed items are assigned retention schedules and disposition rules that allow users to schedule life cycles for content to eliminate outdated or superseded information, manage storage resources, or comply with legal audit holds.

Content and its associated metadata are stored in retention schedules, which are hierarchies with categories that define disposition instructions. Access to the items is controlled by rights assigned to users by a Records Administrator. The items can be accessed, reviewed, retained, or destroyed in an easy and efficient manner by authorized people according to the requirements of an organization.

Disposition schedules of content in the repository can also be managed, enabling the scheduling of life cycles for content to eliminate outdated or superseded information, manage storage resources, or comply with legal audit holds.

The following concepts are important to understand in the context of retention management:

  • Record administrator: individuals in the organization who are responsible for setting up and maintaining the retention schedule and other aspects of the management system.

  • Record user: individuals who use the software to check content in and out of the system, to search for records, and to perform other non-administrative tasks.

  • Record officer: individuals who have limited administrative responsibility in addition to the responsibilities of a record user.

  • Administrator: individuals who may maintain the computer system, network, or software at the site where the management system is in place.

  • The retention schedule is an organized hierarchy of series, categories, and record folders, which allows users to cluster retained content into similar groups, each with its own retention and disposition characteristics.

  • A series is an organizational construct in the retention schedule that assists in organizing categories into functional groups. Series are normally static and are used at a high level in an organization hierarchy. They can be especially useful if a large amount of categories are used. A series can be nested, which means a series may contain other series.

  • A retention category is a set of security settings and disposition instructions in the retention schedule hierarchy, below a series. It is not an organizational construct but rather a way to group items with the same dispositions. A category helps organize record folders and content into groups with the same retention and disposition characteristics. A retention category may contain one or more record folders or content items, which then typically follow the security settings and disposition rules associated with that retention category. Retention categories cannot be nested, which means a retention category cannot contain other retention categories.

  • A record folder is a collection of similar content items in the retention schedule. Folders enable content to be organized into groups. A folder typically follows the security settings and disposition rules associated with its assigned retention category. Folders can be nested, which means a folder may contain other folders.

  • Disposition is the collective set of actions taken on items. Disposition actions include wait times and activities such as transfer to external storage facilities, the destruction of temporary content, deletion of previous revisions, and deletion of all revisions.

  • A disposition instruction is created within a retention category, and typically consists of one or more disposition rules, which define how content is handled and what actions should be taken (for example, when and how content should be disposed of).

  • A period is the segment of time that must pass before a review or disposition action can be performed. Several built-in periods are provided (for example, "one year"), but custom periods can be created to meet unique business needs.

  • A trigger is an event that must occur before a disposition instruction is processed. Triggers are associated with disposition rules for retention categories. Examples of triggering events include changes in status, the completed processing of a preceding disposition action, or a retention period cutoff.

  • A link is a defined relationship between items. This may be useful when items are related and need to be processed together. Links are available for items stored both in and out of the retention schedule.

  • A classification specifies the security level of a classified item. It is used in the process of identifying and safeguarding content containing sensitive information. Typical classification levels are "Top Secret," "Secret," "Confidential," and "Unclassified."

  • A classification guide is a mechanism used to define default values for several classification-related metadata fields on the content check-in pages for content. A guide enables convenient implementation of multiple classification schemes.

  • Freezing inhibits disposition processing for an item. Frozen content cannot be altered in any way nor can it be deleted or destroyed. This may be necessary to comply with legal or audit requirements (for example, because of litigation). Freezing is available for items stored both in and out of the retention schedule.

  • External items are those that are not searched and processed in the same fashion as retained content. External content usually refers to content managed by Physical Content Management or managed by an adapter (an add-on product).

  • Federation, Federated Search, and Federated Freeze are features used to manage the process of legal discovery. Using Federated Search or Federated Freeze, a legal officer can search content across all repositories to gather information needed for legal proceedings.
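The nesting rules stated above — series may contain series or categories, categories may contain folders or items but not other categories, and folders may contain folders or items — can be sketched as a small data structure. This is purely illustrative; the class names are assumptions, not Oracle types.

```python
# Illustrative data-structure sketch (assumed class names): the nesting
# rules of a retention schedule hierarchy.
class Node:
    ALLOWED = {
        "Series": ("Series", "Category"),   # series can be nested
        "Category": ("Folder", "Item"),     # categories cannot be nested
        "Folder": ("Folder", "Item"),       # folders can be nested
        "Item": (),
    }

    def __init__(self, name):
        self.name, self.children = name, []

    def add(self, child):
        if type(child).__name__ not in self.ALLOWED[type(self).__name__]:
            raise ValueError(
                f"{type(child).__name__} cannot nest under {type(self).__name__}")
        self.children.append(child)
        return child

class Series(Node): pass
class Category(Node): pass
class Folder(Node): pass
class Item(Node): pass

hr = Series("Human Resources")
cat = hr.add(Category("Personnel Files"))   # OK: category under series
cat.add(Folder("2013 Hires"))               # OK: folder under category
try:
    cat.add(Category("Nested"))             # rejected: categories cannot nest
except ValueError as e:
    print(e)
```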

12.1.4 Physical Content Management

Physical Content Management (PCM) provides the capability of managing physical content that is not stored in the repository in electronic form. All items, internal and external regardless of their source or format, are managed in a single, consistent, manageable infrastructure using one central application and a single user interface. The same retention schedules are used for both electronic (internal) and physical (external) content.

PCM tracks the storage locations and retention schedules of the physical content. The functionality provides the following main features:

  • Space management, including definition of warehouse layout, searching for empty space, reserving space, and tracking occupied and available space.

  • Circulation services, including handling reservation requests for items, checking out items, and maintaining a due date for checked-out items.

  • Chargeback services, including invoicing, for the use of storage facilities and/or actions performed on physical items.

  • Barcode file processing, including uploading barcode information directly into the system, or processing barcode files manually.

  • Label creation and printing, including labels for users, storage locations, or individual physical items.

  • Retention management, including periodic reviews, freezes and litigation holds, and e-mail notifications for pending events.

12.1.5 Basic Retention Processes

The following steps outline the basic workflow of retained content:

  1. The retention schedule and any required components, such as triggers, periods, classifications, and custom security or metadata fields are created.

  2. Items are filed into the retention schedule by users. The filed items assume the disposition schedules of their assigned category.

  3. Disposition rules are processed in accordance with the defined disposition schedules, which usually have a retention period. The processing is activated by either a system-derived trigger or custom trigger. The trigger could affect one or more items simultaneously.

  4. Whenever a disposition event is due for action (as activated by a trigger), an e-mail notification is sent to the person responsible for processing the events. The same is true for review. The pending events and reviews are displayed in the pages accessed from the Retention Assignments links within the user interface.

  5. The Records Administrator or privileged user performs the review process. This is a manual process.

  6. The Records Administrator processes the disposition actions on the pending dispositions approval page. This is a manual process.

  7. A batch process is run to process an approval.

Many disposition schedules are time-based according to a predictable schedule. For example, content is often filed then destroyed after a certain number of years. The system tracks when the affected content is due for action. A notification e-mail is sent to reviewers with links to the pages where reviewers can review and approve content and folders that are due for dispositions.

In contrast, time-event and event-based dispositions must be triggered with a non-system-derived trigger (a trigger that was defined for a particular scenario). For example, when a pending legal case starts litigation, the Records Administrator must enable the custom trigger and set its activation date because the start date information is external. Custom triggers can define event and time-event based disposition actions based on the occurrence of a particular event.
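The distinction above can be illustrated with a minimal model of a custom trigger whose activation date must be set manually because the start-date information is external. This is a conceptual sketch with assumed names, not the product's trigger implementation.

```python
# Illustrative sketch (assumed names): a custom trigger enabled manually
# by the Records Administrator, e.g. when litigation begins.
from datetime import date

class CustomTrigger:
    def __init__(self):
        self.activation_date = None  # unknown until the external event occurs

    def enable(self, when: date):
        """Administrator sets the activation date once the event is known."""
        self.activation_date = when

    def is_active(self, today: date) -> bool:
        return self.activation_date is not None and today >= self.activation_date

trigger = CustomTrigger()
print(trigger.is_active(date(2013, 3, 1)))   # False: not yet enabled
trigger.enable(date(2013, 2, 1))             # litigation start date entered
print(trigger.is_active(date(2013, 3, 1)))   # True: disposition can proceed
```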

12.2 Selecting the Software Configuration

By choosing certain options, specific components are enabled and ready for use. To view details about the components that are installed and the disposition actions enabled with each option, click the i button next to the option.

The following options are available to enable:

  • Folders Retention: Enables functionality to apply retention rules for deletions of items stored together in a retention query folder. This functionality is described in detail in the Oracle WebCenter Content Application Administrator's Guide for Content Server. See Chapter 13 for a summary of this feature.

  • Minimal: Enables the minimal amount of functionality and excludes some disposition actions and most of the product features. This is the default when the software is enabled.

  • Typical: Enables all disposition actions and all features except for DoD Configuration, Classified Topics, FOIA/PA tracking (Freedom of Information Act/Privacy Act), and E-mail.

  • DoD Baseline: Enables the features from a Typical installation with the addition of DoD Configuration and E-mail.

  • DoD Classified: Enables all features except for FOIA/PA.

  • Custom: Enables the ability to choose a variety of features. Some disposition actions are dependent on other actions. If an action is selected, dependent actions are also automatically selected.

The only way to enable FOIA/PA tracking is by using the Custom configuration option. If the FOIA/PA functionality is installed, fast index rebuilds may not be possible. Deselect the Fast Index Rebuild option when using FOIA/PA. If you install FOIA/PA after the system has been in use, you should rebuild the index.


Permissions:

The Admin.RecordManager right is required to perform this action. This right is assigned by default to the Records Administrator role.


To set the software configuration:

  1. Choose Records then Configure then Enabled Features.

  2. On the Enabled Features page, select the type of configuration. After selection, the check boxes for the feature and disposition options at the bottom of the page indicate which choices are included. If Custom is selected, choose which features and dispositions to enable.

  3. Click Submit.


Important:

You must configure all defaults, including any necessary categories, dispositions, and triggers, before checking in content that will use those defaults.


If DoD functionality is enabled by using either DoD option (or a customized option that enables DoD features), then some features are automatically enabled as well. For example, when creating custom search templates, the Security Classification status of a content item is always displayed whether or not the classification was chosen for inclusion in the template. It is a requirement of the DoD specification that the classification level always be displayed in a search result.

After making selections or changing options (for example, switching from Baseline to Classified), restart Content Server. Depending on which search options are in use, the index may also need to be rebuilt. See Administering Oracle WebCenter Content for details about restarting the system and rebuilding the index.

If a component is disabled, the data used with that component is not deleted. If the component is enabled again, the old data can still be used.

12.2.1 Usage Notes

Depending on the cache settings for your browser, you may need to restart the browser or clear the cache settings to view changes made to the configuration. For example, if you enable Offsite Storage functionality, you may need to clear the cache settings and restart your browser for the appropriate options to appear on the Physical menu. The same is true if you disable functionality in order to remove the options.

When using Records with a Safari browser, menus can appear behind the icons for the Admin Applets. Therefore, if you choose Administration then Admin Applets then choose Records or Physical, the options on the Records or Physical menu appear behind the icons for the Admin Applets. This is a known problem and Oracle is working to solve this issue.

When using the IBM WebSphere Application Server (WAS) in a browser using tabs, the login/logout process performs differently than in browsers not using tabs. Authentication is not set at the "tab level" but rather at the "browser level." For example, consider this scenario:

  • Oracle WebCenter Content and Records are installed in the same cell on WAS.

  • A user logs in to Oracle WebCenter Content then opens a new tab and enters the Records system URL in the browser.

  • That user is automatically logged in to the Records system with the same permissions as when that user logged in to Oracle WebCenter Content. There is no need to re-authenticate.

  • If the user logs out of the Oracle WebCenter Content system, the user is also automatically logged out of the browser tab session for the Records system. This is reflected on the next action or when the tab is refreshed.

12.3 Retention Management Options

After choosing the features to use, certain options must be configured in order for the system to work properly. If this is not done, a warning message appears indicating that the setup is incomplete.

To complete the configuration, click the link in the warning message. The Setup Checklist page opens showing a series of links to other pages where configuration selections can be made. When done configuring, select the check box next to an option to indicate the completed task. Depending on the action, it may be necessary to refresh the frame in order to view the completed tasks.

The Setup Checklist can also be accessed by choosing Records then Configure then Setup Checklist.

All defaults, including any necessary categories, dispositions, and triggers, must be set before checking in content that will use those defaults.


Important:

If File Store Provider is needed to check in templates, set up the File Store Provider first and then check in the templates. To install a file store provider, click Install Default Templates (Category Defaults, Reports, Dashboards, etc.) on the Setup Checklist page. See Administering Oracle WebCenter Content for details about using File Store Provider.


If the configuration of the system changes (for example, switching from DoD Baseline to Typical), reconfigure the options needed for the level of functionality that is enabled.

The required options include:

  • Set configuration variables: Several optional variables can be changed.

  • Define default metadata: Some content items are automatically checked in to the repository such as audit entries and screening reports. In order for them to check in properly, choose default metadata for the content. For example, if a DoD installation level is chosen then the default metadata must include the Category or Folders metadata field.

  • Configure the installation: Before using the system, complete the installation steps outlined in Section 12.2.

  • Configure the security settings: Determine the appropriate roles, rights, and user permissions to perform certain tasks.

The other configuration options on this page can be performed in any order. When finished setting configuration options, click Submit. To clear the options selected, click Reset.

The following list provides an overview of the steps needed to set up the retention software. The steps should be followed in the order given. For example, you must define triggers and periods before disposition rules, because when you define a category and its disposition rule, you include references to triggers and periods.


Tip:

To track actions while setting up and configuring the system, first configure the audit trail. All user actions are set to be recorded by default.


Some of these tasks may be optional depending on your organization. The information is provided so you can determine if the step may be useful.

  • Determine additional security settings.

  • Configure system settings:

    • Set the calendar for the organization. See Section 12.5.1.

    • Define the time periods associated with retention or disposition of retained content.

    • Set up any custom fields required.

  • Set up the retention schedule. This includes:

  • Determine how content will be handled:

    • Using triggers to initiate events affecting content. For details, see Section 15.1.

    • Defining the sequence of actions to be performed on items during their life cycle. For details, see Section 15.3.

    • Inhibiting disposition processing. For details, see Section 15.2.

  • Establish relationships between content. See Using Oracle WebCenter Content for details about establishing links between content items.

In addition, workflows can be created to track requests made under the Freedom of Information Act (FOIA) and Privacy Act (PA) if that software is enabled.

12.4 Setting Up Physical Content Management

Several aspects of PCM should be set up in order to use the system. These include:

  • Set up the required PCM user roles and rights.

  • Configure the PCM environment including chargebacks, customers, and object types. See Chapter 17.

  • Define the storage space environment. See Section 17.2.

  • Define disposition rules for physical content, if required. See Chapter 15.

12.5 Configuring Retention Definitions and Options

Several system-wide configuration settings are specified on the Configure Retention Settings page. Most of these options can be set by selecting the check box next to the option. General configuration choices are available by choosing Records then Configure. Choose Settings to open the Configure Retention Settings page.

General options:

  • Start of fiscal calendar: Sets the start date for the calendar used for fiscal accounting. See Section 12.5.1.

  • Archive Metadata Format: Sets the storage file format for metadata of items in a disposition bundle.

  • Log Metadata Changes: Enables tracking of item-level metadata changes.

  • Disable life cycle updates: Stops the updating of disposition dates and review date computation.

  • Enable Category Dispositions Review: Enables the workflow to review category dispositions. The workflow must be set up before this option is enabled.

  • Enable Report Exclude Search Options: Enables an option that allows a user to exclude reports from searches.

Record Definition options:

  • Always restrict revisions/Never restrict revisions: Allows revisions of content items or prevents revisions.

  • Always restrict deletions/Never restrict deletions: Allows deletions of content items or prevents deletions.

  • Always restrict edits/Never restrict edits: Allows edits of content or prevents content editing.

  • Display record icon when: Indicates when a record icon should be shown. Options include when editing, deleting, or revisioning of content is restricted or any combination of those actions. The appearance of the record icon can also be disabled. The icon can assist users to determine the status of content (that is, if it is considered a record for tracking purposes).

Security options:

  • ACL-based security: Enables security on Retention Schedule objects based on Access Control Lists.

  • Default Oracle WebCenter Content security on Retention Schedule objects: Enables default security on categories, folders, and triggers.

  • Supplemental Markings: Enables supplemental marking security on retention objects.

  • User must match all supplemental markings: Forces a user to match all markings to access an item.

  • Custom security fields: Enables the ability to create custom security fields.

  • Classified security: Enables classified security features (required for conformance to the Chapter 4 Classified Records section of the DoD 5015.2 specification).
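The effect of the "User must match all supplemental markings" option can be sketched as a set comparison: with the option on, the user must hold every marking on the item; with it off, one shared marking suffices. This is an illustrative model only, not the actual security implementation.

```python
# Illustrative sketch (not the product's security code): supplemental
# marking checks with match-all vs. match-any semantics.
def can_access(user_markings, item_markings, match_all=True):
    user, item = set(user_markings), set(item_markings)
    if match_all:
        return item <= user          # user must hold every marking on the item
    return not item or bool(item & user)  # any shared marking grants access

user = {"NOFORN"}
item = {"NOFORN", "Restricted"}
print(can_access(user, item, match_all=True))   # False: missing "Restricted"
print(can_access(user, item, match_all=False))  # True: one marking matches
```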

Notification options:

  • Do not notify authors: Prevents e-mail notifications from being sent for pending events, reviews, and the Notify Authors disposition action.

Scheduling options:

  • Only allow scheduled screening: Prevents users from starting screenings manually by hiding the Search button on the screening page.

User interface options:

  • User-friendly disposition: Enables user-friendly language for disposition rules and processing.

  • Show export date: Enables users to export items that changed since a specific date.

  • Use page Navigation: Displays more elaborate page navigation controls on screening results lists and record folder lists.

  • Paginate Navigation Tree: Displays the retention schedule in the Browse Content menu as a tree-like structure when using the Trays layout. If more than 20 items are available for viewing, an option appears to view the next 20 items in the structure.

DoD Configuration options:

  • Enable custom scripting: Allows creation of custom scripts for security or for notifications.

Classified topic options:

  • Run auto computation of declassification date: Computes the declassification date for classified objects.

  • Maximum years before declassifying: Sets the number of years after which content is declassified.
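The auto-computation option above pairs the classification date with the "Maximum years before declassifying" setting. A minimal sketch of that arithmetic, with hypothetical names (the document does not specify the actual computation beyond adding the configured number of years):

```python
# Illustrative sketch (hypothetical names): deriving a declassification
# date from the classification date plus the configured maximum years.
from datetime import date

def declassification_date(classified_on: date, max_years: int) -> date:
    try:
        return classified_on.replace(year=classified_on.year + max_years)
    except ValueError:
        # Feb 29 classified date landing in a non-leap target year
        return classified_on.replace(year=classified_on.year + max_years, day=28)

print(declassification_date(date(2013, 2, 28), max_years=25))  # 2038-02-28
```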

12.5.1 Setting the Fiscal Calendar


Permissions:

The Admin.RecordManager right is required to perform this task. This right is assigned by default to the Records Administrator role.


The fiscal calendar is the calendar used by an organization for financial and accounting purposes. A fiscal year may coincide with a calendar year (that is, run from January 1 to December 31), but that is not required.

Specify the start date of the fiscal year once, unless the organization changes the fiscal start date or the start date varies from year to year. The fiscal start date may need to be set manually each year if your organization's fiscal calendar starts on a day, such as the first Monday of each year, that does not fall on the same date every year.

To set the fiscal calendar start date:

  1. Choose Records then Configure then Settings.

  2. On the Configure Retention Settings page, specify the date the fiscal year begins for the organization in the Start of Fiscal Calendar box. To enter a date, enter the starting date and select the month from the list. For example, if your organization starts its fiscal calendar on April 1, type 1 and select April from the list of months.

  3. Click Submit Update. A message is displayed saying the configuration was successful.

  4. Click OK.

12.5.2 Managing Time Periods

Periods define a length of time to use in retention schedules and dispositions. They are associated with retention periods for dispositions and with review periods for cycling subject-to-review content.

Three types of time periods are used in retention:

  • Custom: A custom period has a defined start date and time usually not corresponding to a fiscal or calendar year period.

  • Fiscal: A fiscal period corresponds to a fiscal year.

  • Calendar: A calendar period corresponds to the calendar year.

Built-in periods cannot be edited or deleted. User-created periods can be edited, and can be deleted if they are not in use.

To work with periods, the following rights are required:

  • Admin.Triggers: This right enables a user to view information about periods.

  • Admin.RecordManager: In addition to viewing information about periods, this right also enables a user to create (add), edit, and delete periods.

The following calendar periods are predefined:

  • Calendar Quarters (wwRmaCalendarQuarter)

  • Calendar Years (wwRmaCalendarYear)

  • Months (wwRmaMonth)

  • Fiscal Quarters (wwRmaFiscalQuarter)

  • Fiscal Halves (wwRmaFiscalHalves)

  • Fiscal Years (wwRmaFiscalYear)

Weeks (wwRmaWeekEnd) are defined as a built-in custom period.

The following tasks are performed when managing time periods:

12.5.2.1 Creating or Editing a Custom Time Period


Permissions:

The Admin.RecordManager right is required to perform this action. This right is assigned by default to the Records Administrator role.


Custom periods can be created in addition to the standard calendar periods already defined. For example, you may need a period such as a decade or a century to meet the review cycle or retention period needs of your organization.

To create a custom period:

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, click Add.

  3. On the Create or Edit Period page, enter a name for the period.

  4. Select the type of time period, either Calendar, Fiscal, or Custom. The start date of the fiscal year is defined on the Configure Retention Settings page. The Custom option is useful for creating lengthy periods such as decades or centuries, or unusual periods such as School Year Session, or Software Development Cycle.

  5. Click the calendar icon and select or edit a custom start time.

  6. Enter an integer value for the length of the time period and choose a time unit from the Length list.

  7. Enter a label to describe the end of the period.

  8. Click Create. A message is displayed saying the period was created successfully, with the period information.

  9. Click OK.

To edit a time period:

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, choose Edit Period from the item's Actions menu for the period to edit.

  3. On the Create or Edit Period page, edit the appropriate information.

  4. Click Submit Update. A message is displayed saying the period was updated successfully.

  5. Click OK.

12.5.2.2 Viewing Period Information


Permissions:

Either the Admin.Triggers or Admin.RecordManager right is required to perform this action. The Admin.Triggers right is assigned by default to the Records Administrator and Records Officer roles and the Admin.RecordManager right to the Records Administrator role.


To view information about a period:

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, click the period to view from the Period Name list.

    The Built-in label indicates if a period was predefined. A period created by an administrator always displays No for the Built-in label. If a period is a built-in period, the Edit option is not displayed on the page because a user cannot edit a predefined period. The Actions menu is not available to any users other than those with the Admin.RecordManager right.

  3. When done, click OK.

12.5.2.3 Viewing Period Usage


Permissions:

Either the Admin.Triggers or Admin.RecordManager right is required to perform this action. The Admin.Triggers right is assigned by default to the Records Administrator and Records Officer roles. The Admin.RecordManager right is assigned by default to the Records Administrator role.


Period usages are usually viewed to determine why a custom period cannot be deleted.

To view period references:

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, click the period to view from the list.

  3. Choose References from the Information page Actions menu. The Period Reference page opens, showing all folders, categories, and category dispositions that reference the current period, with a link to each referencing item. Click a link to open the associated information page for that item.

  4. When done, click OK.

12.5.2.4 Deleting a Custom Period


Permissions:

The Admin.RecordManager right is required to perform this action. It is assigned by default to the Records Administrator role.


Built-in periods cannot be deleted. Before deleting a period, verify that the period is not referenced by a retention period within a disposition rule for a category, or by a review period for an item, record folder, or retention category.

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, choose Delete Period from a period's Actions menu. A message is displayed saying the period was deleted successfully.

  3. Click OK.

12.5.2.5 Example: Creating a Custom Period

This example demonstrates creating a custom period with the following characteristics:

  • The custom period name is School Year 2010-2011.

  • The custom start time is September 7th, 2010, and the start time is 9:00 am. The system automatically calculates and tracks the end of the period.

  • The length of the period is nine months.

  • The end of the period label is End of School Year 2011.

To create a custom school period:

  1. Choose Records then Configure. Choose Retention then Periods.

  2. On the Configure Periods page, click Add in the Period Name area.

  3. On the Create or Edit Period page, enter School Year 2010-2011 as the Period Name.

  4. By default, the Custom option is already selected in the Period Type list. Leave the Custom option selected.

  5. Click the calendar icon and select a custom start date of September 7, 2010. The date and default time show in the Custom Start Time box. The time defaults to 12 am (midnight) on this page, so to edit the time, you must do so directly in the Custom Start Time text box. Change 12 to 9. Specify the date according to the format used by your system locale.

  6. Enter 9 as the Length and select Months from the list.

  7. Enter End of School Year 2011 as the label for the end of the period.

  8. Click Create.

    A message is displayed saying the period was created successfully.

  9. Click OK.

12.5.3 Setting Performance Monitoring

Performance monitoring can be enabled to check the status of batch processing, service calls, and other system information. To enable this, choose Records then Audit. Choose Configure then Performance Monitoring.

Several default numbers have been set as a starting point for monitoring. Actual performance variations will depend on the hardware used at the site and other variables such as total amount of content and software in use.

For details about using performance monitoring, see Administering Oracle WebCenter Content.

12.6 PCM Options

Some general configuration options for Physical Content Management are available on the Configure Physical Settings page. This page is similar to the Configure Retention Settings page, where a series of options determines system functionality.

To access this page, choose Physical then Configure then Settings. Other configuration options are available on the Configure menu, such as setting up chargebacks, invoices, and other aspects of Physical Content Management.

The following options appear on the Configure Physical Settings page:

  • Default Transfer Method: Specifies the default transfer method (copy, fax, mail, and so on).

  • Default Request Priority: Specifies the default priority to be used for reservations (no priority, rush, this week, and so on).

  • Default Checkout Period (days): Specifies the number of days a reserved physical item can be checked out.

  • Delete completed requests: Specifies if completed reservation requests are automatically deleted after a specified number of days.

  • Request history period (days): The maximum number of days a reservation request is stored in history.

  • Check in internal content item for reservation workflow: Specifies if a new internal content item should be checked in when a reservation request is made.

  • Do not notify users when checked-out items are overdue: Specifies that users with overdue items do not receive an e-mail notification.

  • Allow reservation requestors to modify/delete their reservations: Specifies if users who create a reservation request can modify or delete their open requests.

  • Automatically update request waiting list: Specifies if waiting lists for requests are updated automatically.

  • Show batch services: Specifies if batch services are available in the External Content menu.

  • Enable offsite functionality: Specifies if the storage of content offsite is enabled. When this is enabled, new metadata fields are added to the system as well as the Offsite security group.

12.7 Creating Custom Metadata Sets

If an organization has unique needs for metadata fields for retention categories or record folders, the software can be customized to include the fields. Depending on the field characteristics, the new custom fields are displayed on the Create Category page, the Create Folder page, or the Create Physical page (if Physical Content Management is enabled). These fields are also displayed on the edit and information pages for those retention schedule objects.

The order in which the custom metadata fields appear depends on the order indicated in the custom metadata fields box. The fields can be arranged using the arrows near the custom metadata box.

Custom fields can be added to existing tables already in use in the repository. These fields supplement the fields used with retention category pages, record folder pages, and physical items pages.

Auxiliary metadata sets can also be created. These are subsets of metadata that can be attached to objects in the repository. This type of metadata is associated with specific properties of an item, such as image size, the character encoding of a document, or other property that must be tracked for specific items. When creating auxiliary metadata, the database table in which the metadata is stored is also created, with a name given to the table and fields added to it. Note that in order to search for auxiliary metadata, Oracle Text Search (full-text searching) must be used.

The process is the same for creating both types of metadata, either complete auxiliary sets or additional fields with the standard metadata sets. The main difference lies in the creation of the table to store the auxiliary metadata set.


Note:

Using auxiliary metadata sets can slow search times when using Oracle Text Search because additional tables must be accessed and evaluated.


This section discusses the following topics:

12.7.1 Creating or Editing Custom Metadata Fields


Important:

If you plan to use an option list with the custom field, the option list must be created and populated before creating the custom field.


The following information is a general navigational procedure for adding metadata fields regardless of type (standard metadata or auxiliary metadata).


Permissions:

Users must have the Records Administrator role or the PCM Administrator role in order to perform this action. The user must also have administrative permissions.


  1. Choose Records then Configure. Choose Metadata then Metadata Sets.

  2. Perform these actions on the Metadata List page:

    To create a new auxiliary metadata set, choose Create Auxiliary Metadata from the page menu. On the Create or Edit Auxiliary Metadata Set page, enter the auxiliary metadata set name, display name, name of the new table being created to house the metadata set, and column prefix for that table.

    To add fields to an existing metadata set, either auxiliary or standard set (Retention Categories, Record Folders, or Physical), choose Update Fields from the auxiliary set's individual Actions menu on the Metadata List page.

  3. On the Create or Edit Standard Metadata Field page, add the field information for the new metadata field.

    • Name: Name for the field in the database. Maximum of 30 characters is allowed. Do not use special characters (question mark, punctuation, and so on).

    • Caption: Caption for the field that will appear in the user interface. Maximum of 30 characters allowed.

    • Type: The data type for the field. Options include:

      • Text (default): Text field, 30 characters maximum.

      • Long Text: Text field, 100 characters maximum.

      • Integer: An integer value ranging from -2^31 to 2^31 (approximately -2 billion to +2 billion). Decimal values and commas are not permitted.

      • Memo: Text field, 1000 characters maximum.

      • Date: A date field according to the date format specified in system settings. Selecting this type puts the Calendar component icon next to the date field.

    • Default Value: Default value for an option list, Text, or Long Text field. Maximum characters allowed: 30.

    • Usage: Select a check box to enable usage. Options include:

      • Required: If selected, the field is required.

      • Enabled: If selected, the field is enabled on pages.

      • Searchable: If selected, the field is added to those fields that are searchable.

    • Option List Key: The field used for the option list. Click Choose to select a key from a list. Note that an option list must be created and populated before it can be used.

    • Option List Type: The kind of option list to use, selectable from a list.

  4. Click Add (a plus sign) to add the field to the Field list. Click Delete (an X) to delete a field from the list. To change the order of fields, highlight a field and move it up or down in the list by clicking the Up or Down arrow.

  5. Click Apply after adding or editing all the fields.

12.7.2 Viewing Custom Metadata Field Information

To view information about the custom fields added to metadata sets:

  1. Choose Records then Configure. Choose Metadata then Metadata Sets.

  2. On the Metadata List page, choose Fields Information from the Actions menu of the metadata set to view.

    The Fields for Metadata page opens showing the specific fields created for that metadata set.

12.7.3 Deleting a Custom Metadata Field


Permissions:

The Admin.RecordManager right or PCM.Admin.Manager right (when using PCM) is required to perform this action. This right is assigned by default to the Records Administrator and the PCM Administrator roles. The user must also have administrative permissions.


To delete a custom metadata field:

  1. Choose Records then Configure. Choose Metadata then Metadata Sets.

  2. On the Metadata List page, choose Update Fields from the set's individual Actions menu on the Metadata List page.

  3. On the Create or Edit Auxiliary Metadata Set page, select the field name in the Field list and click Delete (an X).

  4. Click Apply after deleting the fields.

12.7.4 Example: Creating a Custom Category Metadata Field

This example creates a custom retention category metadata field that is an optional text box in which you enter an integer value for a SKU (Stock Keeping Unit).


Permissions:

The Admin.RecordManager right is required to perform this action. This right is assigned by default to the Records Administrator role.


To create a custom retention category metadata field:

  1. Choose Records then Configure. Choose Metadata then Metadata Sets.

  2. On the Metadata List page, choose Update Fields in the Actions menu for Retention Categories.

  3. On the Create or Edit Standard Metadata Field page, complete the metadata fields as follows:

    1. Enter DeptSKU as the Name.

    2. In the Type list, select Integer.

    3. Enter Department SKU as the Caption.

    4. Select Enabled.

    5. Select Searchable.

  4. Click Add (the plus symbol).

  5. Click Apply. To view the new field, browse content, and choose Create Retention Category from the Actions menu. The new custom metadata field is displayed.

    The Department SKU field is added to the Create Retention Category page.

12.8 Setting Up Workflows


Important:

Workflow creation is only needed to enable category disposition approval processing, reservation processing, or offsite request processing. If you do not need that functionality, you do not need to set up any workflows.


Workflows are used to specify how content is routed for review, approval, and release to the system. A criteria workflow is used for content that enters the review process automatically, based on metadata matching predefined criteria. A basic workflow is one used to process specific content items.

Three specific criteria workflows must be set up in order for the following functionality to work:

  • Category Disposition Approval Processing: Set up to route category dispositions for review and approval.

    If you enable the disposition workflow feature on the Configure Retention Settings page but do not set up the workflow, you must set the UpdateDispositionsTableOnWorkflowApproval configuration variable to false in the config.cfg file.

  • Reservation Processing: Set up to route reservation requests for physical content for processing.

  • Offsite Processing: Set up to process requests for offsite storage of items.
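If the disposition workflow feature is enabled on the Configure Retention Settings page but the Category Disposition workflow is not set up, the configuration entry noted above can be added to the Content Server config.cfg file. A minimal sketch (the exact file location varies by installation):

        UpdateDispositionsTableOnWorkflowApproval=false

A restart of the Content Server instance is typically required for configuration file changes to take effect.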

A workflow is composed of several steps that route the content to groups of people in an alias list. It can be customized to exit when completed, branch content depending on certain conditions, and use variables to designate unknown users. Workflows are discussed in detail in Chapter 7. This section describes only the information needed to establish the three workflows described previously.

This section discusses the following topics:

12.8.1 Workflow Prerequisites and Process

The following steps briefly explain the Criteria workflow process and some of the tasks that should be performed before setting up the workflow:

  1. A user with Workflow rights sets up the Criteria workflow by defining the following:

    • Security groups: The RecordsGroup, Reservation, and Offsite security groups are required.

    • Metadata fields and values: These fields are set up at installation (for example, OffsiteRequest.)

    • Review steps and reviewers for each step: It is good practice to discuss workflows with the people involved so they are aware of the responsibilities they will have in the process.

    • If a group of people needs to be included in an alias, the alias should be created ahead of time. The following alias lists are needed:

      • Disposition Reviewers: Those people who will review disposition criteria. Suggested name: DispositionReviewGroup.

      • Reservation Reviewers: Those people who can approve reservation requests. Suggested name: ReservationGroup.

      • Offsite Request Reviewers: Those people who review requests for offsite storage. Suggested name: OffSiteRequestReviewGroup.

      See Oracle Fusion Middleware Administering Oracle WebCenter Content for details about adding aliases and adding users to alias groups.

  2. A user with Workflow rights starts the Criteria workflow by enabling it.

  3. When content is checked in with the defined security group and metadata field value, the content enters the workflow.

  4. Reviewers for the first step are notified by e-mail that the revision is ready for review.

  5. The reviewers approve or reject the revision.

    • If the step is a reviewer/contributor step, the reviewers can check out the revision, edit it, and check it back in before approving it. For example, administrators may need to alter a reservation request.

    • If a user rejects the revision, the workflow returns to the previous contribution step, and the users for that step are notified by e-mail.

    • When the minimum number of users has approved the revision, it goes to the next step. If the minimum number of approvals is 0, the revision moves to the next step automatically.

  6. When all steps are complete, the revision is released to the system.

12.8.2 Creating Necessary Workflows

This section details the specific requirements for the three workflows needed for the following functionality:

12.8.2.1 Category Dispositions Workflow

The Category Disposition Workflow is used to approve the disposition rules on a category before the rules are enacted.

  1. Choose Administration then Admin Applets.

  2. Choose Workflow Admin from the Administration Applets list.

  3. Click the Criteria tab in the Workflow Admin dialog. Click Add.

  4. Enter the following information in the New Criteria Workflow dialog:

    • Workflow name: CategoryDispositionsProcess.

    • Description: Category Disposition Processing.

    • Security Group: Select RecordsGroup from the list.

    • Original Author Edit Rule: Select Edit Revision.

    • Has Criteria Definition: Select this check box.

    • Field: Select Type from the list.

    • Operator: This should say Matches.

    • Value: Select RetentionCategory from the list.

    Click OK when done. The Workflow Admin dialog opens.

  5. In the Criteria portion of the dialog, in the Steps section, click Add.

  6. Enter the following information in the Add New Step dialog:

    • Step name: CategoryDispositionsReview.

    • Description: Review Category Dispositions.

    • Users can review and edit (replace) the current revision: Select this check box.

    • Click the Users tab then Add Alias. Select the alias list for the users who will review dispositions and click OK.

    • Click the Exit Condition tab. In the Required Approvers portion, select the check box for All Reviewers.

  7. Click OK then Enable in the Workflow Admin dialog to start the workflow.

12.8.2.2 Reservation Processing Workflow

The Reservation workflow is used to process reservation requests for physical items.

  1. Choose Administration then Admin Applets.

  2. Choose Workflow Admin from the Administration Applets list.

  3. Click the Criteria tab in the Workflow Admin dialog. Click Add.

  4. Enter the following information in the New Criteria Workflow dialog:

    • Workflow name: ReservationProcess.

    • Description: Processes reservations.

    • Security Group: select Reservation.

    • Original Author Edit Rule: Select Edit Revision.

    • Has Criteria Definition: Select this check box.

    • Field: Select Type.

    • Operator: This should say Matches.

    • Value: Select Request.

    Click OK when done.

  5. In the Criteria portion of the Workflow Admin dialog, in the Steps section, click Add.

  6. Enter the following information for the first step in the Add New Step dialog:

    • Step name: RequestReview

    • Description: Review Request

    • Users can review and edit (replace) the current revision: selected.

    • Click the Users tab then Add Alias. Select the alias list for the users who will review reservation requests and click OK.

    • Click the Exit Condition tab. In the Required Approvers portion, select At Least This Many Reviewers and enter 1 for the value.

    • Click OK. The Workflow Admin dialog opens.

  7. In the Criteria portion of the dialog, in the Steps section, click Add.

  8. Enter the following information for the second step in the Add New Step dialog:

    • Step name: RequestComplete

    • Description: Complete the request

    • Users can review the current revision: selected.

    • Click the Users tab then Add Alias. Select the alias list for the users who will complete the reservation requests and click OK.

    • Click the Exit Condition tab. In the Required Approvers portion, select At Least This Many Reviewers and enter 0 for the value.

    • Click the Events tab.

      • Click Edit in the Entry section. Click the Custom tab then select Custom Script Evaluation. Enter the following code:

        <$wfSet("wfJumpName", "complete")$>
        <$wfSet("wfJumpEntryNotifyOff", "1")$>

        Click OK.

      • Click Edit in the Update section. Click the Custom tab then select Custom Script Evaluation. Enter the following code:

        <$if parseDate(dOutDate) < parseDate(dateCurrent(1))$>
            <$wfSet("wfJumpName", "complete_update")$>
            <$wfSet("wfJumpTargetStep", wfCurrentStep(10))$>
            <$wfSet("wfJumpEntryNotifyOff", "1")$>
        <$endif$>

        Click OK.

  9. Click OK then Enable in the Workflow Admin dialog to start the workflow.

12.8.2.3 Offsite Storage Workflow

The Offsite Storage workflow is used to process requests to store physical items offsite.

  1. Choose Administration then Admin Applets.

  2. Choose Workflow Admin from the Administration Applets list.

  3. Click the Criteria tab in the Workflow Admin dialog. Click Add.

  4. Enter the following information in the New Criteria Workflow dialog:

    • Workflow name: OffsiteProcess.

    • Description: Processes Offsite Requests.

    • Security Group: select Offsite.

    • Original Author Edit Rule: select Edit Revision.

    • Has Criteria Definition: selected.

    • Field: select Type.

    • Operator: This should say Matches.

    • Value: select Offsiterequest.

    Click OK when done. The Workflow Admin dialog opens.

  5. In the Criteria portion of the dialog, in the Steps section, click Add.

  6. Enter the following information for the first step in the Add New Step dialog:

    • Step name: OffsiteRequestReview.

    • Description: Review Offsite Request.

    • Users can review and edit (replace) the current revision: selected.

    • Click the Users tab then Add Alias. Select the alias list for the users who will review offsite requests and click OK.

    • Click the Exit Condition tab. In the Required Approvers portion, select At Least This Many Reviewers and enter 1 for the value.

  7. Click OK then click Enable in the Workflow Admin dialog to start the workflow.

12.9 Configuration with Desktop Integration Suite

When using Oracle DIS with the Records system with the DoD compliance component enabled, users may not be able to check in files by copying and pasting or by dragging and dropping them into contribution folders. DoD compliance requires that the Category or Folder fields be required during check-in, which means an item cannot be checked in if those fields are empty.

Because copying and pasting or dragging and dropping into a folder often does not require any additional user interaction, the check-in will not complete successfully unless the administrator configures the Records system to enable such checkins.

Several workarounds for this issue are available:

  • Set default metadata for the folders by selecting the category and folder from the available selections

  • Set default metadata for users by creating a global rule when setting up profiles.

  • Change the configuration of the system by setting the dodSkipCatFolderRequirement variable.
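For the last workaround, the variable can be set in the Content Server config.cfg file. A minimal sketch; note that, as described in the configuration variable descriptions, this setting causes non-conformance with DoD regulations if a DoD configuration is in use:

        dodSkipCatFolderRequirement=true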

12.10 Configuration Variables

Several configuration variables can be included in a configuration file to change the behavior or interface of the software. In addition to the configuration variables described here, flags in the rma_email_environment.cfg file can be set to determine which fields can be edited during events such as check-in and update for e-mail content. The flags are a double-colon-separated list.

The following is an overview of the more commonly used configuration variables. For details about each variable, see the Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.

  • AllowRetentionPeriodWithoutCutoff: Used to specify retention periods for triggers.

  • dodSkipCatFolderRequirement: Allows items to be checked in without specifying a category or folder for the checkin. If a DoD configuration is in use, this causes non-conformance with DoD regulations.

  • HideVitalReview: Used to hide the Subject to Review fields.

  • RecordsManagementDenyAuthorFreePassOnRMSecurity: Controls whether the author of content can delete content they authored regardless of the user's security settings.

  • RecordsManagementNumberOverwriteOnDelete: Sets the number of disk scrubbing passes used for a destroy action.

  • RmaAddDocWhereClauseForScreening: Allows users with the Records Administrator role to screen for frozen items to which they do not have access (using ACLs) on the screening page or on the Freeze Information page.

  • RmaAllowKeepOrDestroyMetadataOption: Allows the option to keep or destroy metadata when using the following disposition actions: Delete All Revisions, Accession, Archive, Move, and Transfer.

  • RmaEnableWebdavPropPatchOnExport: Enables WebDAV support of a PropPatch method to assign metadata values to a file that has been uploaded to a WebDAV server.

  • RmaEnableFilePlan: Enables the File Plan folder structure.

  • RmaEnableFixedClone: Enables the fixed clone functionality that allows the creation of record clones of content revisions.

  • RmaEnablePostFilterOnScreening: Enables additional security on screening results. If a user does not have appropriate security for an item in a screening result list, that item is hidden from view.

  • RmaFilePlanVolumePrefix and RmaFilePlanVolumeSuffix: Defines the naming convention for volumes.

  • RmaFixedClonesTitleSuffix: Used to set the suffix that is automatically appended to a fixed clone content item.

  • RMAHideExternalFieldsFromSearchInfo and RMAHideExternalFieldsFromCheckInUpdate: Used to hide external fields on the noted pages. The default setting is TRUE, so External fields are hidden on those pages.

  • RmaNotifyDispReviewerAndCatAuthor: Used to control who is notified about disposition actions.

  • RmaNotifyReviewerAndAlternateReviewer: Used to control what reviewers are notified about actions.

  • ShowContentForStorageBrowse: Used to show content items in the storage browse pages.

  • SimpleProfilesEnabled: Used to enable Simple Profile functionality.

  • UieHideSearchCheckboxes: Used to show or hide the metadata field check boxes on the search page, which limit the number of metadata fields initially shown on the page.
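These variables are typically added as name=value entries in the Content Server configuration file. A sketch combining a few of the entries described above (the values shown are illustrative only, not recommended settings):

        RecordsManagementNumberOverwriteOnDelete=3
        RmaEnableFilePlan=true
        RmaEnablePostFilterOnScreening=true
        SimpleProfilesEnabled=true

For the accepted values and defaults of each variable, see the Oracle Fusion Middleware Configuration Reference for Oracle WebCenter Content.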


4 Finding Status and Error Information

This chapter provides information on sources of Oracle WebCenter Content information that can be helpful in the troubleshooting process.

4.1 Monitoring Content Server Status

Information on several Content Server internal resources that are useful in monitoring the status of a Content Server instance is available in Oracle Fusion Middleware Administering Oracle WebCenter Content. These resources include:

  • Content Server status

  • Java output

  • System configuration information

  • System audit information

  • Scheduled jobs

4.2 Monitoring Content Server Logs

Information on finding and using Content Server status information and errors in log files is available in Oracle Fusion Middleware Administering Oracle WebCenter Content. This information includes:

  • Log file characteristics

  • Accessing Content Server logs

  • Accessing Archiver logs

  • Accessing Inbound Refinery logs


29 Introduction to Dynamic Converter

This chapter introduces Dynamic Converter, which is an Oracle WebCenter Content Server component that performs on-demand document conversion using customizable templates.

This chapter covers the following topics:

29.1 About Dynamic Converter

Dynamic Converter provides an industry-proven transformation technology and on-demand publishing solution for critical business documents. With Dynamic Converter, you can easily convert any business document into a web page for everyone to see without use of the application used to create that document. The benefits are immediate; information can be exchanged freely without the bottleneck of proprietary applications.

When a web browser first requests a document, a set of rules is applied to determine how that document should appear as a web page. These rules are defined in a template, a core component of Dynamic Converter.

Dynamic Converter offers a number of important benefits to users:

  • Business documents can be easily viewed in a web browser.

  • Native applications (such as Adobe Acrobat and Microsoft Word) are not required.

  • Multiple renditions of a document are available for different web browsers.

  • Templates are interchangeable with Content Publisher.

  • Numerous business document types, including legacy formats, are supported.

The HTML renditions of source documents in the Content Server are made available to users via an HTML link on the search results page and the content information page in the Content Server.

29.2 Basic Dynamic Converter Concepts

The following concepts are important in the context of Dynamic Converter:

  • Developer: The individual who integrates Dynamic Converter into another technology or application.

  • Source file: The document, spreadsheet, presentation, or other information that the developer wishes to convert to a web page (also referred to as the source document or content item).

  • Output file: The file being created from the source file (also referred to as the web-viewable format).

  • Output files: The complete set of files that together make up the rendered output (web page) from a source file.

  • Template: A template tells the conversion engine how to convert the source document into the output document.

  • Template rules: Documents matching certain criteria are converted using the specified template, layout, and options.

29.3 Dynamic Converter Process

Figure 29-1 shows the basic Dynamic Converter process.

Figure 29-1 Basic Dynamic Converter Process

The basic Dynamic Converter process

The process consists of five steps:

  1. A user requests a content item through a web browser.

  2. The web server passes this request to Dynamic Converter, which determines the template to be used for the HTML conversion (based on metadata matching criteria).

  3. Dynamic Converter converts the native file (for example, a Word document or Excel spreadsheet).

  4. The conversion produces one or more HTML pages with supporting files (GIF, JPEG, and so on), which Dynamic Converter outputs to a special caching area in Content Server's web-viewable file repository ("Web Layout").

  5. The web server retrieves any additional files (for example, CSS files or images used for the page header and footer), and serves these, together with all files produced by Dynamic Converter, to the user.


    Note:

    Dynamic Converter uses caching to reduce the load on the server and ensure that documents are not unnecessarily re-translated.


29.4 Upfront Conversions

In earlier versions of Dynamic Converter, a content item was converted to a web-viewable format (HTML, XML, etc.) when the content item was first requested by the user; more specifically, when the user clicked the (HTML) link beside the content item on the search results or content information page. Once the content item was converted, it was cached in the Content Server so that each subsequent request for the converted file would be immediate.

Since version 6.0 (circa 2004), Dynamic Converter converts content items that match a conversion rule when the content item is checked in, rather than when a user first requests it. As a result, users can immediately view the dynamically converted rendition of the content item.

This upfront conversion applies only to content items that match a conversion rule in Dynamic Converter. Rules are specified on the Template Selection Rules page.

If no rule exists for the content item, then an upfront conversion will not take place, even if a default template and layout file are available for the content item. The default templates and layout files are specified on the Dynamic Converter Configuration page.

Note that upfront conversions must be enabled in the Conversion and Caching Optimizations section of the Dynamic Converter Configuration page.

29.5 Forced Conversions

You can designate multiple conversions of the same content item so that it can be used for different purposes on your web site. You might, for example, include it as a snippet of HTML code in one location and as a complete article in another location. This is done using a forced conversion in Dynamic Converter.

Forced conversions allow you to specify a list of rules in which every rule is evaluated: each rule that matches is applied, so Dynamic Converter may create multiple renditions of the same content item. As a result, content can be converted multiple times using different templates and layout files.
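The difference between normal and forced rule evaluation can be sketched as follows (a simplified Python illustration, not the actual Dynamic Converter implementation; all rule, template, and layout names are hypothetical):

```python
# Sketch of rule evaluation: normally the first matching rule wins; with
# forced conversion, every matching rule produces its own rendition.

def evaluate_rules(item, rules, forced=False):
    """Return the list of (template, layout) pairs to apply to an item."""
    renditions = []
    for rule in rules:
        if rule["criteria"](item):
            renditions.append((rule["template"], rule["layout"]))
            if not forced:          # normal mode: stop at the first match
                break
    return renditions

rules = [
    {"criteria": lambda i: i["type"] == "Document",
     "template": "snippet.ttp", "layout": "fragment.htm"},
    {"criteria": lambda i: i["author"] == "jsmith",
     "template": "article.ttp", "layout": "full_page.htm"},
]

item = {"type": "Document", "author": "jsmith"}
print(evaluate_rules(item, rules))               # one rendition (first match)
print(evaluate_rules(item, rules, forced=True))  # two renditions (all matches)
```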

You can enable forced conversion for a template rule on the Template Selection Rules page.

A forced conversion takes place at the same time as an upfront conversion; that is, when the content item is checked into the Content Server. The end users will not be able to tell the difference between an upfront conversion and a forced conversion. Regardless of the method, the goal is the same: to have a content item converted and stored in cache by the time the user clicks the HTML link.

Note that forced conversions, like upfront conversions, must be enabled in the Conversion and Caching Optimizations section of the Dynamic Converter Configuration page.

29.6 Fragment-Only Conversions

One type of forced conversion (see Section 29.5) is the fragment-only conversion. A fragment is a piece of content that will be included in another content item. Individual fragments can then be combined to form a content-rich web page. A fragment generally contains no <html> or <body> tags, so that it can be easily included in another web page. The fragment is not intended to be viewed by itself and as such should not be displayed to users who click the HTML dynamic conversion link. Rules designed for fragments should be excluded from Dynamic Converter's rule evaluation during a user request.
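For example, a fragment checked in for inclusion elsewhere might contain only the markup to be embedded, with no <html> or <body> wrapper (the element names and content here are illustrative):

```html
<!-- A fragment: included into a complete web page by another template -->
<div class="news-snippet">
  <h3>Quarterly Update</h3>
  <p>Read the full article on the intranet home page.</p>
</div>
```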

When forced conversions are selected, you can enable fragment-only conversion for a template rule on the Template Selection Rules page.

Like other forced conversions, fragment-only conversions take place upfront, when the content item is checked into the Content Server.

29.7 Caching and Querying

Dynamic Converter includes a conversion and caching strategy that significantly improves the overall performance of your intranet or external web site. This feature allows Content Server to serve dynamically created web pages much more quickly than was possible in earlier versions.

While the conversion and caching enhancements are built into the application, there are several configuration options that you can set to fine-tune Dynamic Converter. All of these options can be set in the Conversion and Caching Optimizations section of the Dynamic Converter Configuration page.

29.7.1 Caching of Timestamps

Every time a user clicks the HTML dynamic conversion link on the search results page or content information page, three files are queried in the Content Server database: the source document, the conversion template, and the layout file (if applicable). The database queries confirm that the dynamically converted file is the most recent, but these queries are done even when an up-to-date conversion is available.

Dynamic Converter versions 6.2 and higher use a method of verifying the revision of content items and conversion templates without querying the database each time. Instead, the time stamps of the converted content items are stored in the server's memory-based cache. Future conversion requests can then compare these cached time stamps with the time stamps of the content item to be converted without querying the database. When combined with the upfront conversion feature (see Section 29.4), Dynamic Converter becomes much more efficient in its revision and conversion queries. Using time stamps, the caching and querying mechanism detects new revisions of content items in the Content Server, because each new revision creates a new file with a new time stamp.

29.7.2 Metadata Changes

If you or your users make metadata-only changes to a content item, neither a new file nor a new time stamp is created, and the changes will go undetected. To address this problem, you must make sure that all metadata changes are identified by Dynamic Converter. You can do this by enabling the "Reconvert when metadata is updated" option on the Dynamic Converter Configuration page. This option forces the Content Server to update the time stamp of the source content items after a metadata update. With this option enabled, the time stamps of all web-viewable formats are updated to reflect the metadata change that occurred for the corresponding source content item. The updated time stamp, as a result, will be recognized by Dynamic Converter, and the content item, with metadata updates, will be reconverted.

Database Method of Checking

You can choose to use the database method of checking whether the content item's metadata has been updated. You set this option on the Dynamic Converter Configuration page. With this configuration option enabled, content item updates continue to signal timestamp changes in the converted files, but the new caching and querying method is not used to determine if the content items are up to date. Instead, the Content Server database is queried for this information. You might use this method, for example, if you are experiencing problems with the optimized query feature or you are troubleshooting a related issue.

29.7.3 Timestamp Checking Frequency

By default, Dynamic Converter checks the time stamp of the converted content items every 1,500 milliseconds, or 1.5 seconds. You can increase or decrease this value if you would like to balance the number of queries performed with the number of visitors to your site. You can change the timestamp checking frequency on the Dynamic Converter Configuration page.

If you increase this setting to, say, one minute (60,000 milliseconds) and a new content item is checked into the Content Server, then the new version will not be available to users until one minute has passed.
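The throttling behavior can be sketched as follows (a hypothetical Python illustration; only the 1,500 millisecond default comes from the text above):

```python
import time

class ThrottledTimestampCheck:
    """Reuse the last timestamp-check result until the interval elapses,
    trading freshness for fewer timestamp queries."""

    def __init__(self, check_fn, interval_ms=1500):
        self.check_fn = check_fn          # the expensive timestamp comparison
        self.interval = interval_ms / 1000.0
        self.last_check = None
        self.cached_result = None

    def is_up_to_date(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_check is None or now - self.last_check >= self.interval:
            self.cached_result = self.check_fn()   # re-check; cache the answer
            self.last_check = now
        return self.cached_result
```

With a one-minute interval, a check at t=0 would be reused for any request before t=60s, which is why a newly checked-in revision may not be visible until the interval passes.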

29.7.4 Cache Interval

The cache interval is the frequency with which the conversion cache is evaluated and cached items may be considered for deletion, depending on how long they have been in the cache and their conversion status. You can set the cache interval (in days) on the Dynamic Converter Configuration page. The default is seven days (once every week).

29.7.5 Cache Size

Dynamically converted files are kept in a cache to avoid unnecessary re-conversion. You can set the maximum cache size on the Dynamic Converter Configuration page. The default is 10,000 MB (about 10 GB). If the cache exceeds this maximum size, then during the next clean-up cycle (which, by default, is seven days) the cached items that have not been accessed for the longest period of time are deleted first. (The list for deleting is sorted by the "last accessed" date in ascending order.) If the cache size limit is not exceeded, then the cached items are examined for potential deletion in the same order, but items that are forced conversions of existing documents are not deleted.
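The clean-up ordering can be sketched like this (a simplified Python illustration; the data structures and function name are hypothetical, and the real component also considers whether a forced conversion's source document still exists):

```python
def items_to_delete(cache_items, cache_size_mb, max_size_mb):
    """Return cached items eligible for deletion, oldest-accessed first.

    cache_items: list of dicts with 'last_accessed' and 'forced' keys.
    Over the size limit, all items are candidates (least recently accessed
    deleted first); under the limit, forced-conversion renditions are kept.
    """
    candidates = sorted(cache_items, key=lambda i: i["last_accessed"])
    if cache_size_mb > max_size_mb:
        return candidates
    return [i for i in candidates if not i["forced"]]
```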

29.7.6 Cache Expiration Period

Dynamic Converter keeps converted content items in the Web Layout conversion cache to prevent items from being reconverted unnecessarily. You can control the number of days that must pass before converted items in the cache may be considered for deletion. By default, cache clean-up is evaluated every seven days. Date expiration only applies to cached items for documents that are no longer present and to cached items that were not generated by forced conversion (see Section 29.5). The default cache expiration period is seven days.

The cache expiration setting works in conjunction with the cache interval (see Section 29.7.4), which controls the frequency with which the cache is evaluated. For example, if the cache interval is set to 14 days and the cache expiration period is set to 8 days, then the cache will be evaluated every 14 days and all cached items older than 8 days will be deleted (unless they were the result of forced conversion).

29.8 Special Conversions

Dynamic Converter supports the following special conversions:

29.8.1 Conversion of HTML Forms to HTML

Dynamic Converter supports the conversion of HTML forms into HTML. This allows information supplied through HTML forms to be presented in flexible ways.

For example, the HTML form used to enter data might look something like the form shown in Figure 29-2.

Figure 29-2 Data Entry Form

The data entry form

This HTML form, together with the values entered, is automatically checked into the Content Server as an HCSF file when it is submitted by clicking the Submit button. If a user then wants to view the form data, a template could be used to present the data from the HTML form.

Figure 29-3 Form Data in Table

Sample form data

29.8.2 Conversion of XML to HTML

Dynamic Converter supports the conversion of XML to HTML by means of an XSL file. The XSL file (with the extension .xsl) is a template that defines how XML files are presented as HTML in a web browser.

In order for Dynamic Converter to properly identify and convert XML files, you must:

  • Check the XSL file into the Content Server.

  • Configure Dynamic Converter to recognize XML files. See Section 30.3.1 for an explanation of how to add a file format for dynamic conversion. (In this case, you would add "application/xml" in the Formats text box.)

  • Create a Dynamic Converter rule that matches the XML files you want to convert and specify the XSL file as the conversion template for that rule. For more information, see Chapter 31, "Template Rules."
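As a sketch, an XSL conversion template might look like the following (the XML element names here are illustrative, not taken from the Oracle sample files); it renders each item element in the source XML as an HTML list entry:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the document root and emit a complete HTML page -->
  <xsl:template match="/">
    <html>
      <body>
        <ul>
          <xsl:for-each select="items/item">
            <li><xsl:value-of select="title"/></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```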


    Note:

    A sample XML file and XSL file are available for download from Oracle Technology Network at http://www.oracle.com/technetwork/indexes/samplecode/.


29.8.3 Rendering Paragraphs as Graphics

Dynamic Converter lets you render paragraphs as graphics. You can use this feature to add custom and protected fonts to documents without allowing public access to the fonts.

This setting is in the Classic HTML Conversion Template Editor (select Formatting, then Paragraph). If you are running Dynamic Converter on Windows, the font selected in the Template Editor is the same font used in the conversion.

If Dynamic Converter is installed on a UNIX platform, the conversion process draws from a different group of fonts, so the font selected in the Template Editor must also be available on the UNIX system. Both fonts must have exactly the same name for the rendering to take effect. In addition, the GD_Font_Path variable must point to a font directory, and that directory must contain at least one TrueType font with the .ttf file extension. If these requirements are not met, rendering paragraphs as graphics will fail.

When rendering paragraphs as graphics, Dynamic Converter does not support embedded graphics. Any image in such a paragraph is replaced by the string [ ]. Avoid applying this setting to sections that contain graphics.

29.9 Dynamic Converter Interface in Content Server

This section covers the changes to the Content Server interface after the Dynamic Converter software is installed.

If the Dynamic Converter Admin link is missing, the Dynamic Converter component was not correctly installed or enabled. For details on how to install the Dynamic Converter component, see Oracle WebCenter Content Installation Guide.

Figure 29-4 Dynamic Converter Admin Link in Administration Tray

The Dynamic Converter admin link

If Dynamic Converter was added to Content Server successfully, the Administration page and menu include a link called Dynamic Converter Admin.


47 Troubleshooting Inbound Refinery

This chapter describes troubleshooting measures for Oracle WebCenter Content: Inbound Refinery.

The following topics are discussed in this chapter:

47.1 Troubleshooting PDF Conversion Problems

Inbound Refinery can convert native files to PDF by either exporting to PDF directly using Oracle Outside In PDF Export (included with Inbound Refinery) or by using third-party applications to output the native file to PostScript and then using a third-party PDF distiller engine to convert the PostScript file to PDF. PDF conversions require the following components to be installed and enabled on the Inbound Refinery server:

This section discusses the following topics:

47.1.1 Troubleshooting Process for PDF Conversion Issues

The vast majority of PDF conversion issues fall into one of the following categories:

  • When a file is checked into the Content Server, a PDF is not generated.

  • A PDF is generated, but there are problems with the output.

When troubleshooting PDF conversion issues, you should first try to identify if the issue is related to just one specific file, all files of that type, or all files. For example, if you are having problems converting a Microsoft Excel document to PDF, try checking in other Microsoft Excel documents; preferably files that are smaller and less complex. If the problem is specific to a single file, the problem is most likely related to something within the file itself, such as file corruption, file setup and formatting, and so forth.

PDF not generated:

If a PDF is not generated when a file is checked into the Content Server, follow basic troubleshooting:

  1. Look at the Inbound Refinery and agent logs and identify which step of the conversion process failed (printing to PostScript, PostScript to PDF conversion, etc.). For more information about viewing Inbound Refinery and agent logs and enabling verbose logging for agents, see Chapter 23.

  2. If the file is timing out during conversion, first try checking in another, smaller, less complex file of the same type. If multiple files are timing out, adjust your timeout values and re-submit the files for conversion. For more information about configuring timeout values, see Chapter 23.

  3. If the file is failing to print to PostScript, try printing the file to PostScript manually. Most failure to print to PostScript issues are related to the following possible causes:

    • The IDC PDF Converter PostScript printer is not installed.

    • The IDC PDF Converter PostScript printer is not named or set up properly.

  4. If the file is printing to PostScript successfully but failing to convert to PDF, again first try checking in another, smaller, less complex file of the same type. If the problem is not specific to a single file, or you cannot identify a problem within the files that is causing the conversion to fail, the problem is most likely related to the distiller engine that you are using.

47.1.2 Common Conversion Issues

Content items are often converted incorrectly, or not at all, for the following reasons:

  • Information within the document is outside of the document's print area: Depending on the native application used to create the document and how your system is set up, a document is sometimes printed to a PostScript file, and the PostScript file is then converted to PDF. Therefore, any information in the document that is outside of the document's print area will not be included in the generated PDF.

  • Inbound Refinery is trying to convert a file that is not appropriate for the conversion engine: For example, if a file from an application other than Microsoft Word has the extension doc, the document is opened in Microsoft Word, which is not correct. The conversion will then fail.

  • The third-party application that is used for conversion starts up with items that require user interaction, such as startup dialogs, tip wizards, or update notices: This prevents Inbound Refinery from processing and converting the files correctly, and the conversion will time out. Always turn off all such features before using a third-party application for conversion purposes.

  • The Inbound Refinery's Java Virtual Machine (JVM) is frozen: This is usually associated with failed attempts to convert invalid file formats. Restarting Inbound Refinery will usually fix this problem.

  • Inbound Refinery did not have enough time to process the file: You can detect this by filtering for the conversion status PassThru in Repository Manager. You can also look at the Inbound Refinery and agent log files. Prevent future occurrences of this problem by increasing the appropriate conversion factor on the Timeout Settings page in the Inbound Refinery administration interface.

  • The content item was converted correctly, but you cannot view the generated PDF file in Adobe Acrobat or Acrobat Reader: You might be using an old version of Acrobat. To ensure that you can view all generated PDF files correctly, always use the latest version of Adobe Acrobat or Adobe Acrobat Reader.

  • A link within a Microsoft Office file does not convert correctly: It is possible that the link is not formatted correctly or is not supported by Inbound Refinery.

47.1.3 Inbound Refinery Setup and Run Issues

The following are symptoms of Inbound Refinery setup and run issues when converting PDF:

47.1.3.1 Inbound Refinery Won't Process Any Files

Inbound Refinery has been installed, but no files are being converted.

Possible Causes and Solutions

Cause: File formats and conversion methods are not set up for the file type in the Content Server.

Solution: Use the File Formats Wizard or Configuration Manager in the Content Server to set up the file formats and conversion methods for PDF conversion. For more information, see Chapter 23.


47.1.3.2 Missing IDC PDF Converter Printer

The IDC PDF Converter Printer is missing from the list of local printers and documents are stuck in GENWWW. Rebooting the server did not resolve the issue.

Possible Causes and Solutions

Cause: The Print Spooler service might not be running.

Solution: This service ensures that all installed printers are available, including the IDC PDF Converter printer. Check the Windows services console (accessible by choosing Control Panel, then Administrative Tools, then Services) to verify that this service is running and set to start automatically. If the service is not running, Inbound Refinery cannot locate and use the IDC PDF Converter printer, and documents will be stuck in GENWWW. With the startup type of the Print Spooler service set to Automatic, the service starts every time the computer boots.

After starting the Print Spooler service, you can use Repository Manager to resubmit the documents stuck in GENWWW. Assuming that there are no other conversion issues, the system should now be able to convert documents to PDF successfully.


47.1.3.3 Error: 'Unable to convert. The printer is not installed'

Inbound Refinery is not converting any files to PDF, and the following error message appears in the Inbound Refinery log:

Unable to convert. The printer 'IDC PDF Converter Printer' is not installed.

Possible Causes and Solutions

Cause: The IDC PDF Converter printer is not installed.

Solution: Install the IDC PDF Converter printer.


47.1.3.3.1 Error: 'Unable to convert. Not printing to 'c:/temp/idcoutput.ps'.'

Inbound Refinery is not converting any files to PDF, and the following error appears in the Inbound Refinery log:

Step MSOfficeToPostscript forced conversion failure passthru by conversion engine with error: ''Unable to convert. The printer 'IDC PDF Converter' is not printing to 'c:/temp/idcoutput.ps'.''

Possible Causes and Solutions

Cause: The IDC PDF Converter printer is not printing to the correct port.

Solution: The IDC PDF Converter printer must be set to print to the correct port. The default port is c:\temp\idcoutput.ps. The default port can be changed by adding the PrinterPortPath variable to the intradoc.cfg file located in the refinery IntradocDir\bin\ directory and specifying the port path. In this case, the IDC PDF Converter printer should be set to print to the port specified in the intradoc.cfg file.
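As a sketch, the override in IntradocDir\bin\intradoc.cfg might look like this (the path shown is illustrative, not a recommended value):

```
PrinterPortPath=d:/refinery/temp/idcoutput.ps
```

The IDC PDF Converter printer's port must then be set to this same path.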


47.1.3.4 Conversions Keep Timing Out

Inbound Refinery conversions keep timing out.

Possible Causes and Solutions

Cause: Files are password protected.

Solution: Password-protected files bring up a dialog window during conversion, which causes the conversion to time out if the dialog is not cleared manually. Remove password protection from files before checking them in.

Cause: Your timeout settings are not sufficient.

Solution: Adjust your timeout settings. For more information about configuring timeout values, see Chapter 23.


47.1.3.5 Microsoft Word Files Won't Convert

Microsoft Word files fail to convert.

Possible Causes and Solutions

Cause: Automatic spell checking and grammar checking are causing the conversions to time out.

Solution: Turn off the Word options to perform spell checking and grammar checking automatically.

Cause: Your Word security level is too high and is causing the conversions to time out.

Solution: Set the Word security level to low so Word does not prompt to enable or disable macros when a file with macros is opened.

Cause: You are using Word 2003 and the Customer Experience Improvement Program is causing the conversions to time out.

Solution: Turn off Show content and links from Microsoft Online on the Tools, Options, General tab under the Online category, and opt out of the Customer Experience Improvement Program on the Tools, Options, General tab under the Customer Feedback category.



Note:

For more information about converting Microsoft Word files, see Section 25.1.4.


47.1.3.6 Microsoft Excel Files Won't Convert

Microsoft Excel files fail to convert.

Possible Causes and Solutions

Cause: Automatic calculations are causing the conversions to time out.

Solution: Turn off the Excel option to perform calculations automatically.

Cause: Automatic spell checking and grammar checking are causing the conversions to time out.

Solution: Turn off the Excel options to perform spell checking and grammar checking automatically.

Cause: Your Excel security level is too high and is causing the conversions to time out.

Solution: Set the Excel security level to low so Excel does not prompt to enable or disable macros when a file with macros is opened.

Cause: You are using Excel and the Customer Experience Improvement Program is causing the conversions to time out.

Solution: Turn off Show content and links from Microsoft Online on the Tools, Options, General tab under the Online category, and opt out of the Customer Experience Improvement Program on the Tools, Options, General tab under the Customer Feedback category.



Note:

For more information about converting Microsoft Excel files, see Section 25.1.4.


47.1.3.7 Microsoft PowerPoint Files Won't Convert

Microsoft PowerPoint files fail to convert.

Possible Causes and Solutions

Cause: Automatic spell checking and grammar checking are causing the conversions to time out.

Solution: Turn off the PowerPoint options to perform spell checking and grammar checking automatically.

Cause: Your PowerPoint security level is too high and is causing the conversions to time out.

Solution: Set the PowerPoint security level to low so PowerPoint does not prompt to enable or disable macros when a file with macros is opened.

Cause: You are using PowerPoint and the Customer Experience Improvement Program is causing the conversions to time out.

Solution: Turn off Show content and links from Microsoft Online on the Tools, Options, General tab under the Online category, and opt out of the Customer Experience Improvement Program on the Tools, Options, General tab under the Customer Feedback category.



Note:

For more information about converting Microsoft PowerPoint files, see Section 25.1.4.


47.1.3.8 Microsoft Visio Files Won't Convert

Microsoft Visio files fail to convert.

Possible Causes and Solutions

Cause: You are using Visio and the Customer Experience Improvement Program is causing the conversions to time out.

Solution: Turn off Show content and links from Microsoft Online on the Tools, Options, General tab under the Online category, and opt out of the Customer Experience Improvement Program on the Tools, Options, General tab under the Customer Feedback category.



Note:

For more information about converting Microsoft Visio files, see Section 25.1.4.


47.1.3.9 FrameMaker Files Won't Convert

FrameMaker files fail to convert.

Possible Causes and Solutions

Cause: The files are structured FrameMaker files.

Solution: Structured FrameMaker files are likely to fail to convert. A dialog box is displayed when a structured FrameMaker file is opened, which causes the conversion to time out unless the dialog box is cleared manually.


47.1.3.10 WordPerfect Files Won't Convert

WordPerfect files fail to convert.

Possible Causes and Solutions

Cause: The files are old WordPerfect files.

Solution: WordPerfect files created in versions prior to version 6 might not be processed effectively. Convert these files to a more recent version of WordPerfect before checking them in.


47.1.4 PDF Display Issues

The following are symptoms of display issues for PDF files generated by Inbound Refinery:

47.1.4.1 Blank PDF files in Internet Explorer

When attempting to open PDF files in Microsoft Internet Explorer, a blank PDF file is displayed.

Possible Causes and Solutions

Cause: An old version of Adobe Acrobat Reader is being used that does not support in-place activation.

Solution: See http://support.microsoft.com/default.aspx?scid=http://support.microsoft.com:80/support/kb/articles/q177/3/21.asp&NoWebContent=1

Cause: You have a slow connection, the server has a high load, or the PDF file is very large.

Solution: See http://support.microsoft.com/default.aspx?scid=http://support.microsoft.com:80/support/kb/articles/q177/3/21.asp&NoWebContent=1

Cause: The ActiveX control is corrupt. For Adobe Acrobat Reader 4 to use in-place activation with Internet Explorer, the Pdf.ocx and Pdf.tlb files must be present in the acrobat_install_dir\Program Files\Adobe\Acrobat 4.0\Acrobat\ActiveX\ directory.

Solution: See http://www.adobe.com/support/


47.1.4.2 Error: 'File does not begin with '%PDF-'

When attempting to open a PDF file in a web browser, you receive the following error message:

"...File does not begin with '%PDF-'"

Possible Causes and Solutions

Cause: The PDF file has an .mme file extension rather than a .pdf file extension.

Solution: See http://www.adobe.com/support/ and http://www.planetpdf.com/mainpage.asp?WebPageID=304


47.1.4.3 PDF Files Don't Open Within Browser Window

When viewing PDF files generated by Inbound Refinery through a web browser, the PDF files do not open within the browser window.

Possible Causes and Solutions

Cause: Settings in Adobe Acrobat Reader or Acrobat.

Solution: In Acrobat Reader or Acrobat, verify that Preferences are set for Web Browser Integration/Display PDF in Browser. The exact setting depends on the version of Acrobat Reader or Acrobat that you are using.


47.1.4.4 Problems Printing PDFs Using Adobe Acrobat 6.0

When you try to print a PDF, the document will not print and the following message is displayed: Could not start print job.

Possible Cause: You have Adobe Acrobat 6.0 installed. Adobe Acrobat 6.0 cannot print a PDF when the file name and URL have more than 256 characters. URLs in workflow and subscription email notifications can easily exceed 256 characters.

Solution: Adobe fixed this problem in Adobe Acrobat 6.0.1. Download and install Adobe Acrobat 6.0.1 or higher.


47.1.4.5 Problem Displaying Internal Thumbnails When Viewing PDF Files

When you view a PDF file, internal thumbnails (thumbnails of the pages within the PDF file) do not display properly. They might display with poor quality, display as grey rectangles, or not display at all.

Possible Cause: You are using Adobe Acrobat Reader 5 or 6, and the PDF file is being byte served from the web server.

As of Acrobat 5, internal thumbnails can be embedded in the PDF by the creating application, or the viewing application can attempt to create thumbnails dynamically from the rendered pages.

If the thumbnail is being generated dynamically in Acrobat Reader, the PDF is being byte served from the web server, and the internal thumbnails are not embedded in the PDF, certain versions of Reader might not be able to render the internal thumbnails properly. This is because the full image data for a given page is on the web server and not available on the client to render the thumbnail image.

It is also possible that certain versions of Acrobat Reader might not display internal thumbnails for any PDF that is byte served from the web server.

Possible solutions include:

  • Use Acrobat Reader 7 or higher. This issue appears to be fixed in Acrobat Reader 7.

  • Configure the application that is creating your PDFs (your PostScript to PDF distiller engine or other third-party application) to embed internal thumbnails.

  • Disable byte serving of PDF files on the web server.


47.2 Troubleshooting Tiff Converter Problems

This section discusses the following topics regarding problems encountered during Tiff conversion:

47.2.1 Installation Problems

The following table lists common problems with installing Inbound Refinery, possible causes, and solutions.

Problem: The refinery or Content Server will not start after the Tiff Converter components are installed.

Possible Cause: The wrong component was installed on the Content Server or refinery.

Solution: Uninstall the components and reinstall them in the correct location.


47.2.2 General Conversion Problems

The following table lists general Inbound Refinery conversion problems, possible causes, and solutions.

Problem: TIFF files are not being converted (they are being passed through in their native format).

Possible Cause: File formats and conversion methods for Inbound Refinery have not been set up properly in Content Server.
Solution: Set up file formats and conversion methods for Inbound Refinery. For details, see Section 25.2.

Possible Cause: The conversions are taking too long, and Inbound Refinery is timing out.
Solution: Change your Inbound Refinery timeout settings. For details, see Section 25.2.2.2.

Possible Cause: Inbound Refinery is failing to launch CVista PdfCompressor.
Solution: Ensure that the PdfCompressor path is correct. For details, see Section 25.2.3.2.

Problem: Zipped TIFF files are not being processed by Inbound Refinery when they are checked in.

Possible Cause: File formats and conversion methods for Inbound Refinery have not been set up properly in Content Server.
Solution: Change how zip files are processed. For details, see Section 25.2.2.2.

Problem: The TIFF Conversion conversion method is not available in the Content Server Configuration Manager.

Possible Cause: The TiffConverterSupport component has not been uploaded and enabled.
Solution: Upload and enable the TiffConverterSupport component using either the Component Wizard or the Component Manager. This component is included on the Inbound Refinery distribution media.

Possible Cause: The TiffConverterSupport component is enabled, but Content Server has not been restarted.
Solution: Restart Content Server.


47.2.3 CVista PdfCompressor Conversion Problems

The following table lists common conversion problems when using CVISION CVista PdfCompressor, possible causes, and solutions.

Problem: Inbound Refinery is failing to launch CVista PdfCompressor.

Possible Cause: The path to the CVista PdfCompress.exe file is incorrect.
Solution: Ensure that the PdfCompressor path is correct. For details, see Section 25.2.3.2.

Problem: I have used the CVista PdfCompressor user interface to change conversion settings, but this has had no effect on how TIFF files are being processed.

Possible Cause: Changes made in the CVista PdfCompressor user interface do not affect how CVista PdfCompressor functions when called by Inbound Refinery.
Solution: Change the CVista PdfCompressor configuration settings using the Inbound Refinery user interface. For details, see Section 25.2.3.2.

Problem: CVista PdfCompressor is only performing OCR on English text.

Possible Cause: By default, CVista PdfCompressor uses only an English OCR dictionary. Other OCR languages must be set up.
Solution: Set up multiple OCR languages for CVista PdfCompressor. For details, see Section 25.2.3.2.


47.2.4 PDF Thumbnailing and Viewing Problems

The following table lists common problems with creating thumbnails for and viewing the PDF files that are generated by Inbound Refinery, possible causes, and solutions.

Problem: No thumbnails are being created for PDF files generated by Inbound Refinery.

Possible Cause: Thumbnailing is not enabled in Inbound Refinery.
Solution: Enable thumbnailing in Inbound Refinery.

Problem: When viewing PDF files generated by Inbound Refinery in Adobe Acrobat Reader, there are lines or other artifacts on the screen.

Possible Cause: Acrobat Reader 4 is being used to view the files.
Solution: When viewing PDF files generated by Inbound Refinery, use Adobe Acrobat Reader 6.0.1 or higher for the best results.


47.3 Troubleshooting XML Converter Problems

Two areas have been identified as possible problems when converting XML.

After installing Inbound Refinery, a Content Server or refinery instance will not start or is not functioning properly.

Possible Cause: The XMLConverter component has been installed on a Content Server, or the XMLConverterSupport component has been installed on a refinery.

Solution: The XMLConverter component must be installed on refineries, and the XMLConverterSupport component must be installed on Content Servers. If you installed the wrong component, complete the following:

  1. Uninstall the component from the Content Server or refinery using the Component Manager or the Component Wizard.

  2. Install and enable the correct component.


XML Converter has been installed, but no files are being converted.

Possible Cause: File formats and conversion methods are not set up for the file type in the Content Server.
Solution: Use the File Formats Wizard or Configuration Manager in the Content Server to set up the file formats and conversion methods for XML conversion. For details, see Section 23.4.

Possible Cause: The refinery has not been configured to accept the conversion.
Solution: Configure the refinery to accept the conversion. For details, see Section 23.6.2.

Possible Cause: The refinery has not been configured to create XML files as the primary web-viewable rendition or an additional rendition.
Solution: Configure the refinery accordingly. For details, see Section 25.3.2 and Section 25.3.3.



25 Working with Conversions

When using Inbound Refinery, several conversion operations can be configured and managed: PDF conversion, XML conversion, TIFF conversion, and conversion of Microsoft Office files to HTML. This chapter discusses the tasks involved in managing those conversion types.


Note:

Native conversions fail when Inbound Refinery is run as a service on win64 platforms, because services on win64 platforms do not have access to printer services. If performing native conversions, do not run Inbound Refinery as a service.


This section describes how to work with conversions and includes the following topics:

25.1 Managing PDF Conversions

Inbound Refinery can convert native files to PDF by either exporting to PDF directly using Oracle Outside In PDF Export (included with Inbound Refinery) or by using third-party applications to output the native file to PostScript and then using a third-party PDF distiller engine to convert the PostScript file to PDF.

PDF conversions require the following components to be installed and enabled on the Inbound Refinery server.

Component: PDFExportConverter (enabled on the Inbound Refinery server)

Enables Inbound Refinery to use Oracle Outside In to convert native formats directly to PDF without the use of any third-party tools. PDF Export is fast, multi-platform, and allows concurrent conversions.

Component: WinNativeConverter (enabled on the Inbound Refinery server)

Enables Inbound Refinery to convert native files to a PostScript file with either the native application or OutsideInX and convert the PostScript file to PDF using a third-party distiller engine. This component is for the Windows platform only. It replaces the functionality previously made available in the deprecated PDFConverter component.

WinNativeConverter offers the best rendition quality of all PDF conversion options when used with the native application on a Windows platform. It does not allow concurrent conversions.

WinNativeConverter also enables Inbound Refinery to convert native Microsoft Office files created with Word, Excel, PowerPoint, and Visio to HTML using the native Office application.

Component: OpenOfficeConversion (enabled on the Inbound Refinery server)

Provides cross-platform support allowing Inbound Refinery to convert supported files to PDF using OpenOffice. Like WinNativeConverter, OpenOfficeConversion does not allow concurrent conversions, but unlike WinNativeConverter, it supports UNIX platforms.





This section describes how to work with PDF conversions and includes the following topics:

25.1.1 PDF Conversion Considerations

There are several factors to consider when choosing a PDF conversion method. System performance (the time it takes to convert a file to PDF format), the fidelity of the PDF output (how closely it matches the look and formatting of the native file), what native applications are needed (such as Microsoft Word or Powerpoint, used to generate the PostScript file converted by Inbound Refinery), and the platform a conversion application requires should all be taken into consideration.

If the speed of conversion is a primary concern, using PDF Export to convert original files directly to PDF is fastest. In addition to not having to use third-party tools, PDF Export allows concurrent PDF conversions and supports Windows, Linux and UNIX platforms.

If the fidelity of the PDF output is a primary concern, then using the native application to open the original file, output to PostScript, and convert the PostScript to PDF is the best option. However, this method is limited to the Windows platform and it cannot run concurrent PDF conversions.

If conversion must be done on a UNIX platform, then using OpenOffice to open a native file and export directly to a PDF file may be the best option. Depending on how it is set up, it may provide greater fidelity than PDF Export. However, unlike PDF Export, it does not support concurrent PDF conversions. Table 25-1 compares conversion methods and lists the platforms they support.


Note:

Regardless of the conversion option used, a PDF is a web-ready version of the native format. A converted PDF should not be expected to be an exact replica of the native format. Many factors such as font substitutions, complexity and format of embedded graphics, table structure, or issues with third-party distiller engines may cause the PDF output to differ from the native format.


Table 25-1 PDF Conversion Methods

Conversion Method                Performance   Fidelity   Supported Platforms   Concurrent PDF Conversions
PDF Export                       Best          Good       Windows/UNIX          Yes
3rd-Party Native Applications    Good          Best       Windows               No
OpenOffice                       Good          Good       Windows/UNIX          No


25.1.2 Configuring PDF Conversion Settings

This section discusses the following topics regarding PDF conversion settings:

25.1.2.1 Configuring Content Servers to Send Jobs to Inbound Refinery

File extensions, file formats, and conversions are used in Content Server to define how content items should be processed by Inbound Refinery and its conversion add-ons. Each Content Server must be configured to send files to refineries for conversion. When a file extension is mapped to a file format and a conversion, files of that type are sent for conversion when they are checked into the Content Server. Use either the File Formats Wizard or the Configuration Manager to set the file extension, file format, and conversion mappings.

All conversions required for Inbound Refinery are available by default in Content Server. For more information about configuring file extensions, file formats, and conversions in your Content Servers, see Section 23.4.1.2 and Section 23.4.2.

Conversions available in the Content Server should match those available in the refinery. When a file format is mapped to a conversion in the Content Server, one or more refineries must be set up to accept that conversion. Set the conversions that the refinery accepts, and the queue maximums, on the Conversion Listing page.

For more information about setting accepted conversions, see Section 23.6.2.
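The extension-to-format-to-conversion chain described above can be sketched as follows (a purely illustrative model with made-up extension, format, and conversion names; actual mappings are defined through the File Formats Wizard or Configuration Manager, not in code):

```python
# Illustrative model of the check-in decision: extension -> format -> conversion.
# The names here are hypothetical; real mappings live in Content Server configuration.
EXTENSION_TO_FORMAT = {
    "doc": "application/msword",
    "tif": "graphic/tiff",
}
FORMAT_TO_CONVERSION = {
    "application/msword": "PdfConversion",
    "graphic/tiff": "TIFFConversion",
}

def conversion_for(filename):
    """Return the conversion a checked-in file would be queued for, or None."""
    ext = filename.rsplit(".", 1)[-1].lower()
    fmt = EXTENSION_TO_FORMAT.get(ext)
    return FORMAT_TO_CONVERSION.get(fmt)

print(conversion_for("report.doc"))   # PdfConversion
print(conversion_for("notes.xyz"))    # None (no mapping; file is passed through)
```

A file whose extension has no format mapping, or whose format has no conversion mapping, is simply passed through in its native format, which mirrors the troubleshooting entries earlier in this chapter.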

25.1.2.2 Setting PDF Files as the Primary Web-Viewable Rendition

To set PDF files as the primary web-viewable rendition:

  1. Log into the refinery.

  2. Select Conversion Settings, then select Primary Web Rendition.

  3. On the Primary Web-Viewable Rendition page, select one or more of the following conversion methods. For a conversion method to be available, the associated components must be installed and enabled:

    • Convert to PDF using PDF Export: when running on either Windows or UNIX, Inbound Refinery uses Outside In PDF Export to convert files directly to PDF without the use of third-party applications. PDFExportConverter must be enabled on the refinery server.

    • Convert to PDF using third-party applications: when running on Windows, Inbound Refinery can use several third-party applications to create PDF files of content items. In most cases, a third-party application that can open and print the file is used to print the file to PostScript, and then the PostScript file is converted to PDF using the configured PostScript distiller engine. In some cases, Inbound Refinery can use a third-party application to convert a file directly to PDF. For this option to be available, WinNativeConverter must be enabled on the refinery server. In addition, when using this option, Inbound Refinery requires the following:

      • A PostScript distiller engine.

      • A PostScript printer.

      • The third-party applications used during the conversion.

    • Convert to PDF using OpenOffice: when running on either Windows or UNIX, Inbound Refinery can use OpenOffice to convert some file types directly to PDF. For this option to be available, OpenOfficeConversion must be installed on the refinery server. When using this option, Inbound Refinery requires only OpenOffice.

    • Convert to PDF using Outside In: Inbound Refinery includes Outside In, which can be used with WinNativeConverter on Windows to create PDF files of some content items. Outside In is used to print the files to PostScript, and then the PostScript files are converted to PDF using the configured PostScript distiller engine. When using this option, Inbound Refinery requires only a PostScript distiller engine.

    Inbound Refinery attempts to convert each incoming file based on the conversion method assigned to the format by the Content Server. If the format is not supported for conversion by the first selected method, Inbound Refinery checks to see if the next selected method supports the format, and so on. Inbound Refinery will attempt to convert the file using the first selected method that supports the conversion of the format.

    For example, consider that you select both the Convert to PDF using third-party applications option and the Convert to PDF using Outside In option. You then send a Microsoft Word file to the refinery for conversion. Because the Microsoft Word file format is supported for conversion to PDF using a third-party application (Microsoft Word), Inbound Refinery attempts to use the Convert to PDF using third-party applications method to convert the file to PDF as the primary web-viewable rendition.

    If this method fails, Inbound Refinery does not attempt the Convert to PDF using Outside In method. However, if you send a JustWrite file to the refinery for conversion, this file format is not supported for conversion to PDF using the Convert to PDF using third-party applications method, so Inbound Refinery will check to see if this format is supported by the Convert to PDF using Outside In method. Because this format is supported by Outside In, Inbound Refinery will attempt to convert the file to PDF using Outside In.

  4. Click Update to save your changes.

  5. When using the Convert to PDF using Third-Party Applications method or the Convert to PDF using Outside In method, click the corresponding PDF Web-Viewable Options button.

  6. On the PDF Options page, set your PDF options, and click Update to save your changes.
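The fallback behavior described in step 3 above can be sketched as follows (an illustration only; the method and format names are invented, and a real refinery consults its own format tables):

```python
# Illustrative sketch of method selection: the refinery walks the selected
# conversion methods in order and uses the FIRST one that supports the format.
# Note that if that method later fails, it does not fall through to the next.
def pick_method(file_format, selected_methods):
    for name, supported_formats in selected_methods:
        if file_format in supported_formats:
            return name
    return None  # no selected method supports the format

# Hypothetical configuration matching the Word/JustWrite example above.
methods = [
    ("third_party_applications", {"Microsoft Word", "Microsoft Excel"}),
    ("outside_in", {"Microsoft Word", "JustWrite"}),
]
print(pick_method("Microsoft Word", methods))  # third_party_applications
print(pick_method("JustWrite", methods))       # outside_in
```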

25.1.2.3 Installing a Distiller Engine and PDF Printer

When converting documents to PDF using WinNativeConverter, a distiller engine and PDF printer must be obtained, installed and configured. This is not necessary when converting to PDF using either Outside In PDF Export or OpenOffice to open and save documents to PDF.

WinNativeConverter can use several third-party applications to create PDF files of content items. In most cases, a third-party application that can open and print the file is used to print the file to PostScript, and then the PostScript file is converted to PDF using the configured PostScript distiller engine. In some cases, WinNativeConverter can use a third-party application to convert a file directly to PDF.


Note:

A distiller engine is not provided with Inbound Refinery. You must obtain a distiller engine of your choice. The chosen distiller engine must be able to execute conversions via a command-line. The procedures in this section use AFPL Ghostscript as an example. This is a free, robust distiller engine that performs both PostScript to PDF conversion and optimization of PDF files during or after conversion.


To install the PDF printer:

  1. Obtain and install a distiller engine on the computer where Inbound Refinery has been deployed.

  2. Start the System Properties utility:

    • Microsoft Windows: Choose Start then Programs then Oracle Content Server. Choose refinery_instance then Utilities then System Properties.

  3. Open the Printer tab.

  4. Click Browse next to the Printer Information File field and navigate to the printer information file installed with your distiller engine.

  5. Enter a name for the printer in the Printer Name field.

  6. Enter the name of the printer driver in the Printer Driver Name field. This name should match the name used in the printer driver information file.

  7. Enter the port path in the Printer File Port Path field (for example, c:\temp\idcout.ps).

  8. Click Install Printer and follow the printer install instructions when prompted.


    Note:

    After a printer is installed, the fields on the System Properties Printer tab are disabled. If the installed printer is deleted, the Printer tab is enabled again and the printer must be reinstalled.


  9. Click OK to apply the change and exit System Properties.

25.1.2.4 Configuring Third-Party Application Settings

To change third-party application settings:

  1. Log into the refinery.

  2. Select Conversion Settings then Third-Party Application Settings.

  3. On the Third-Party Application Settings page, click Options for the third-party application.

  4. Change the third-party application options.

  5. Click Update to save your changes.

25.1.2.5 Configuring Timeout Settings for PDF Conversions

To configure timeout settings for PDF file generation:

  1. Log into the refinery.

  2. Select Conversion Settings then Timeout Settings.

  3. On the Timeout Settings page, enter the Minimum (in minutes), Maximum (in minutes), and Factor for the following conversion operations:

    • Native to PostScript: the stage in which the original (native) file is converted to a PostScript (PS) file.

    • PostScript to PDF: the stage in which the PS file is converted to a Portable Document Format (PDF) file.

    • FrameMaker to PostScript: these values apply to the conversion of Adobe FrameMaker files to PS files.

    • PDF to Post Production: the stage in which any processing is performed after the file has been converted to PDF format.

  4. Click Update to save your changes.
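The page does not state how the three values combine. One plausible model (an assumption for illustration only, not the documented formula; verify the semantics against your release) is a per-file timeout that scales with file size by Factor and is clamped between Minimum and Maximum:

```python
def conversion_timeout_minutes(file_size_mb, minimum, maximum, factor):
    """ASSUMED model: scale the allowed time with file size, then clamp
    to the [minimum, maximum] range. Illustrative only."""
    return max(minimum, min(maximum, file_size_mb * factor))

# Under this model, a 40 MB file with Minimum=2, Maximum=30, Factor=0.5
# would be allowed 20 minutes before the stage times out.
print(conversion_timeout_minutes(40, 2, 30, 0.5))  # 20.0
```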

25.1.2.6 Setting Margins When Using Outside In

Inbound Refinery includes Outside In version 8.3.2. When using Outside In to convert graphics to PDF, you can set the margins for the generated PDF from 0–4.23 inches or 0–10.76 cm. By default, Inbound Refinery uses 1-inch margins on the top, bottom, right, and left.

To adjust these margins:

  1. Use a text editor to open the intradoc.cfg file located in the refinery DomainDir/ucm/ibr/bin directory.

  2. Change the following settings:

    OIXTopMargin=
    OIXBottomMargin=
    OIXLeftMargin=
    OIXRightMargin=
    
  3. To change the margin units from inches to centimeters, set the following:

     OIXMarginUnitInch=false
    
  4. Save your changes to the intradoc.cfg file.

  5. Restart the refinery.
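For example, to use half-inch margins on all sides (sample values only), the entries in intradoc.cfg might look like:

```
OIXTopMargin=0.5
OIXBottomMargin=0.5
OIXLeftMargin=0.5
OIXRightMargin=0.5
```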

25.1.3 Configuring OpenOffice


This section discusses the following topics regarding OpenOffice conversions:

25.1.3.1 OpenOffice Configuration Considerations

Typically, the OpenOffice Listener must always be running on the Inbound Refinery computer, or PDF conversion will fail. When running OpenOffice on Windows, configure an OpenOffice port in the Setup.xcu file and run the OpenOffice Quickstarter. The Quickstarter adds shortcuts to OpenOffice applications to the system tray and runs the OpenOffice Listener as a background process.

By default, the Quickstarter loads at system startup and the OpenOffice icon should be in the system tray. To start the Quickstarter, launch any OpenOffice application. The application can then be closed, and the Quickstarter remains running. To set the Quickstarter to load at system startup, right-click the OpenOffice icon in the system tray, and choose Load OpenOffice.org During System Start-Up.


Note:

OpenOffice can be launched by Inbound Refinery running as a service on Windows XP, 2000, 2003. However, because you must be logged in to Windows to run the OpenOffice Listener, you must always be logged in to Windows when using OpenOffice for PDF conversion even when running Inbound Refinery as a service.


25.1.3.2 Configuring the OpenOffice Port and Setting up the Listener

When running OpenOffice on UNIX, it is recommended that you configure an OpenOffice port and run soffice, which acts as the Listener. If desired, soffice can be used on Windows instead of the Quickstarter.

To start soffice, launch the soffice.exe file located in the following directory:

  • Windows: OpenOffice_install_dir\openoffice.org3\program\

  • UNIX: OpenOffice_install_dir/openoffice.org3/program


Note:

For versions of OpenOffice prior to 3.x, soffice.exe is located in the following directories:

Windows: OpenOffice_install_dir\program\

UNIX: OpenOffice_install_dir/program


Editing Setup.xcu or main.xcd

Prior to version 3.3 of OpenOffice, the file Setup.xcu was used to configure a listening port. Starting with version 3.3, Setup.xcu was incorporated into the file main.xcd. In the steps below, edit the Setup.xcu file if configuring a version of OpenOffice prior to 3.3, or the main.xcd file if configuring version 3.3 or later.

To configure an OpenOffice port:

  1. In a standard text editor, open the OpenOffice Setup.xcu file (for versions prior to 3.3) or main.xcd file (for version 3.3 or higher). The Setup.xcu file is located in the following directory:

    • Windows: OpenOffice_install_dir\share\registry\data\org\openoffice\

    • UNIX: OpenOffice_install_dir/share/registry/data/org/openoffice

    The main.xcd file is located in the following directory:

    • Windows: OpenOffice_install_dir\openoffice.org\basisversion_number\share\registry

    • UNIX: OpenOffice_install_dir/openoffice.org/basisversion_number/share/registry

  2. Search for the element <node oor:name="Office">. This element contains several <prop/> elements.

  3. Insert the following <prop/> element on the same level as the existing elements, as the first element:

    <prop oor:name="ooSetupConnectionURL" oor:type="xs:string">
    <value>socket,host=localhost,port=8100;urp;</value>
    </prop>
    

    This configures OpenOffice to provide a socket on port 8100, where it serves connections via the UNO remote protocol (URP). Use your firewall to block connections to port 8100 from outside your network. Using port 8100 is recommended; however, it might be necessary to adjust the port number if port 8100 is already in use. In this case, replace 8100 in the element with the port number to use.

  4. After making changes to the Setup.xcu or main.xcd file, stop and restart the Quickstarter (Windows) or soffice (UNIX or Windows).

25.1.3.3 Setting Port for Session Using soffice Command Line Parameters

As an alternative to configuring an OpenOffice port in the Setup.xcu file and then running the OpenOffice Quickstarter (Windows) or soffice (UNIX or Windows), soffice can be launched from the command line with parameters. However, these settings only apply to the current session. To launch soffice from the command line:

  1. Open a command window and navigate to the following directory:

    • Windows: OpenOffice_install_dir\openoffice.org3\program\

    • UNIX: OpenOffice_install_dir/openoffice.org3/program

  2. Enter the following command:

    soffice "-accept=socket,port=8100;urp;"
    
  3. Verify that OpenOffice is listening on the specified port by opening a command window and entering one of the following commands:

    netstat -a
    netstat -na
    

    An output similar to the following shows that OpenOffice is listening:

    TCP <Hostname>:8100 <Fully qualified hostname>: 0 Listening
    
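As an alternative to scanning netstat output, the check can be scripted. This sketch (an illustration, assuming the Listener runs on localhost port 8100) simply attempts a TCP connection:

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("OpenOffice listener up:", is_listening("localhost", 8100))
```

A successful connection only shows that something is accepting connections on the port; it does not confirm that the process is actually OpenOffice.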

25.1.3.4 Configuring Inbound Refinery to Use OpenOffice

To configure Inbound Refinery to use OpenOffice:

  1. If port 8100 was not used when modifying the OpenOffice Setup.xcu file, do the following:

    1. In the Inbound Refinery administration interface, select Conversion Settings then Third-Party Application Settings.

    2. On the Third-Party Application Settings page, click the Options button for OpenOffice.

    3. On the OpenOffice Options page, in the Port to Connect to the OpenOffice Listener field, enter the port that you used when modifying the OpenOffice Setup.xcu file.

    4. Click Update.

  2. Restart Inbound Refinery.

25.1.3.5 Setting Classpath to OpenOffice Class Files

If converting documents using OpenOffice, Oracle Inbound Refinery requires class files distributed with OpenOffice. You must set the path to the OpenOffice class files in the refinery intradoc.cfg file, located in the DomainHome/ucm/ibr/bin directory. To set the path in the intradoc.cfg file:

  1. Navigate to the DomainHome/ucm/ibr/bin directory and open the intradoc.cfg file in a standard text editor.

  2. At the end of the file, enter the following:

    JAVA_CLASSPATH_openoffice_jars=OfficePath/Basis/program/classes/unoil.jar:OfficePath/URE/java/ridl.jar:OfficePath/URE/java/jurt.jar:OfficePath/URE/java/juh.jar
    

    Note:

    The true value for OfficePath is likely to include spaces and care must be taken when setting this in a Microsoft Windows environment. Ensure that the paths are not enclosed in quotes, that slashes (/) are used for path separators and not backslashes (\), and that any space in the path is escaped using a backslash (\). For example, a properly formed classpath in a Windows environment could look like this:

    JAVA_CLASSPATH_openoffice_jars=C:/Program\ Files/OpenOffice.org\
    3/Basis/program/classes/unoil.jar:C:/Program\ Files/OpenOffice.org\
    3/URE/java/ridl.jar:C:/Program\ Files/OpenOffice.org\
    3/URE/java/jurt.jar:C:/Program\ Files/OpenOffice.org\ 3/URE/java/juh.jar
    

  3. Save and close the intradoc.cfg file.

  4. Restart Inbound Refinery.
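The escaping rules in the note above can be mechanized. This sketch (illustrative only, with a hypothetical helper name) converts Windows-style jar paths into the escaped, colon-joined value:

```python
def openoffice_classpath(jar_paths):
    """Build the JAVA_CLASSPATH_openoffice_jars value: forward slashes as
    path separators, and each space escaped with a backslash."""
    escaped = [p.replace("\\", "/").replace(" ", "\\ ") for p in jar_paths]
    return "JAVA_CLASSPATH_openoffice_jars=" + ":".join(escaped)

jars = [
    r"C:\Program Files\OpenOffice.org 3\Basis\program\classes\unoil.jar",
    r"C:\Program Files\OpenOffice.org 3\URE\java\ridl.jar",
]
print(openoffice_classpath(jars))
```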

25.1.3.6 Using OpenOffice Without Logging In to Host

Inbound Refinery can use OpenOffice to convert some file types directly to PDF. This is done by configuring the OpenOffice listener, which must be running in order for conversions to be successful. Typically, you must be logged in to the computer on which OpenOffice is installed in order for OpenOffice to be able to open and process any documents. However, the OpenOffice listener can be run in headless mode with no graphical user interface.


Note:

Before setting up the OpenOffice listener to run in headless mode, confirm that documents can be converted to PDF using OpenOffice running in a non-headless mode. Also, turn off any extra screens that start up before OpenOffice can be used, such as startup dialogs, tip wizards, or update notices. These cause the refinery process to time out, because conversions will not proceed until these screens are cleared and they are not displayed in headless mode.


This section discusses the following topics:

25.1.3.6.1 Setting Up Headless Mode on a Windows Host

To convert documents to PDF using OpenOffice without being logged in to a Windows host, you must create a custom service to run the OpenOffice listener in headless mode. The Windows Resource Kits provide the INSTSRV.EXE and SRVANY.EXE utilities to create custom services.

To set up a custom OpenOffice service:

  1. In the MS-DOS command prompt, type the following command:

    path\INSTSRV.EXE  service_name path\SRVANY.EXE
    

    where path is the path to the Windows Resource Kit, and service_name is the name of your custom service. This name can be anything, but should be descriptive to identify the service. When done, a new service key is created in your Windows registry.

  2. Open the Registry Editor by selecting Start, then select Run, entering regedit, and clicking OK.


    Caution:

    Back up your registry before editing it.


  3. Back up your registry by choosing File, then Export, entering a name for the backup file, and clicking Save. Note the location of the backup file in case you need to restore the registry.

  4. Navigate to the new registry key created in the first step and select the new service key. The new key is located at:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\service_name

  5. With the new key selected, choose Edit then New, then select Key, and name it Parameters.

  6. Right-click on the Parameters key, select New, then select String Value, and name the value Application.

  7. Right-click on the Application string and select Modify.

  8. Type in the full path to soffice.exe, appended with -headless. For example:

    C:\Program Files\OpenOffice2.0\program\soffice.exe -headless
    
  9. Close the Registry Editor and restart the computer.

  10. After the computer has successfully restarted, choose Start then Settings then Control Panel. Choose Administrative Tools then Services to open Windows Services.

  11. On the Windows Services page, right-click the service you just created, choose Properties, and ensure that the service is set up to start automatically.

  12. Select the Log On tab and enable This account. This enables the service to run using a specific user account.

  13. Enter the same user credentials that the Inbound Refinery is using to run.


    Note:

    The Inbound Refinery user will need to have the right to log on as a service on the Inbound Refinery computer.


  14. Start the service, accept the changes and close Windows Services.
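The service-creation portion of the steps above can also be performed from an elevated command prompt. This sketch assumes the Resource Kit utilities are in C:\Reskit and uses OOoHeadless as a hypothetical service name; adjust both to your environment:

```shell
rem Create the custom service (steps 1-4 above).
C:\Reskit\INSTSRV.EXE OOoHeadless C:\Reskit\SRVANY.EXE

rem Create the Parameters\Application value (steps 5-8 above).
reg add HKLM\SYSTEM\CurrentControlSet\Services\OOoHeadless\Parameters ^
    /v Application /t REG_SZ ^
    /d "C:\Program Files\OpenOffice2.0\program\soffice.exe -headless"
```

The Log On account and automatic startup type must still be set through the Services control panel (or `sc config`) as described in steps 10 through 13.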

25.1.3.6.2 Setting Up Headless Mode on a UNIX Host

To convert documents to PDF using OpenOffice without being logged in to a UNIX host, the OpenOffice listener must run in headless mode with no graphical user interface, using a virtual buffer display (X server).


Important:

Each UNIX environment is unique. This information is a general guideline for setting up the OpenOffice listener in headless mode on UNIX platforms. An example of the procedure for Red Hat EL4 is also included.


In general, to configure the OpenOffice listener to run in headless mode on UNIX platforms:


Note:

Before setting up OpenOffice to run in headless mode, ensure that Inbound Refinery is installed and configured correctly to successfully convert documents to PDF using OpenOffice in non-headless mode.


  1. Create a startup script to run Inbound Refinery when the system boots up.

  2. Configure a virtual X server and create a startup script to run it when the system boots up, to enable OpenOffice to run.

  3. Create a startup script to run OpenOffice in headless mode when the system boots up.

  4. Configure the system to run the startup scripts in the following order:

    1. Start Inbound Refinery

    2. Start the virtual X server

    3. Start OpenOffice


      Note:

      The virtual X server must be started prior to starting OpenOffice, or OpenOffice will not run. Additionally, remember to ensure that the web server is also configured to run when the system boots up.
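As an illustration of the startup order described above, a minimal script might start the virtual X server and then OpenOffice in headless mode. The display number, soffice path, and listener port here are assumptions that vary by installation and OpenOffice version:

```shell
#!/bin/sh
# Hypothetical example; adjust paths and options for your platform.

# Start a virtual framebuffer X server on display :1.
Xvfb :1 -screen 0 1024x768x24 &

# Point OpenOffice at the virtual display.
DISPLAY=:1
export DISPLAY

# Start OpenOffice headless, listening for connections on port 8100.
/opt/openoffice.org3/program/soffice -headless \
    "-accept=socket,host=localhost,port=8100;urp;" &
```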


25.1.4 Converting Microsoft Office Files to PDF

When running on Windows, Inbound Refinery can use Microsoft Office to convert Microsoft Office files to PDF files. The following Microsoft Office versions are supported:

  • Microsoft Office 2003

  • Microsoft Office 2007

  • Microsoft Office 2010


    Note:

    Support for Microsoft Office 2007 excludes support for Microsoft Project 2007.


Note the following general considerations:

  • Microsoft Office is used to convert Microsoft Office files to PDF when the Convert to PDF Using third-party applications option is selected on the Primary Web-Viewable Rendition page.

  • Inbound Refinery can convert a number of special features in Microsoft Office files into links in the generated PDF files. You set the conversion options for Microsoft Office files using the Third-Party Application Settings page.

  • To keep a conversion of a Microsoft Office file from timing out, all functions requiring user input should be disabled. These include password protection, security notifications, such as disabling of macros, and online access requests to show online content or participate in user feedback programs. For details on how to disable these and other similar features, see the Microsoft documentation for each product.

  • If a Microsoft Office file was converted to a PDF file successfully, but one or more links in the file could not be converted to links in the PDF file, the conversion status of that file is set to Incomplete. To prevent this from happening, you can set AllowSkippedHyperlinkToCauseIncomplete=false in the intradoc.cfg configuration file located in the refinery DomainDir\ucm\ibr\bin\ directory.
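For example, the setting described in the last item above is a single line in the refinery's intradoc.cfg file:

```
AllowSkippedHyperlinkToCauseIncomplete=false
```

As with other intradoc.cfg changes, restart Inbound Refinery for the setting to take effect.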

This section discusses the following topics regarding Microsoft Office conversions:

25.1.4.1 Converting Microsoft Word Files to PDF

Consider the following when running Inbound Refinery on Windows and using Microsoft Word to convert Word files to PDF:

  • Any information in a Word file that is outside of the document's print area will not be converted to PDF.

  • Password-protected files will time out unless the need for a password is removed.

  • On Word 2003, choose Tools then Options then General. Turn off Show content and links from Microsoft Online under the Online category, and opt out of the Customer Experience Improvement Program under the Customer Feedback category. If you do not, these files might time out.

  • The following types of links in Word files can be converted to PDF:

    • Absolute URL links (for example, http://www.example.com). You can also use links that specify targets on the page (for example http://idvm001/ibr/portal.htm#target). In order to be processed as an absolute URL link, Word must return the http:// prefix as a part of the link. All supported versions of Microsoft Word automatically enforce this rule.

    • Relative URL links (for example, ../../../../portal.htm). These links do not contain any server name or protocol prefix.

    • Mailto links (links to e-mail addresses; for example mailto:support@example.com). In order to be processed as an e-mail link, Word must return the mailto: prefix as a part of the link. All supported versions of Microsoft Word automatically enforce this rule.

    • Table of Contents links (converted to bookmarks in the generated PDF file).

    • Bookmarks (internal links to auto-generated or author-generated bookmarks).

    • Standard heading styles (Heading 1, Heading 2, and so on, which are converted to bookmarks in the generated PDF file).

    • Links to footnotes and endnotes.

    • UNC path links (for example, \\server1\c\TestDocs\MSOfficeXP\word\target.doc). This option is not currently available on the Word Options panel. To enable this functionality, you must set the ProcessWordUncLinks=true variable in the refinery connection's intradoc.cfg file (DomainHome\ucm\ibr\bin\intradoc.cfg). In general, UNC paths have no relevance in a web browser; a UNC path is not a URL. Therefore, the PDF must be opened outside of the web browser for UNC path links to be resolved correctly. If you are using UNC path links, you might want to configure the Reader on client computers to open PDF files outside the browser.

  • Links in text boxes are not converted.

  • Linked AutoShapes and objects (for example, pictures or WordArt objects) located in tables are not converted.

  • You might notice in some generated PDF files that the "hotspot" for a link is sometimes slightly off from the actual text (within a character or two). To date there are no known problems related to this occurrence, and there is currently no solution.
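For example, to enable the UNC path link processing described above, add this line to the refinery connection's intradoc.cfg file and restart Inbound Refinery:

```
ProcessWordUncLinks=true
```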

25.1.4.2 Converting Microsoft Excel Files to PDF