Data Services Developer's Guide


Obtaining Enterprise Metadata

A first step in creating data services for the BEA AquaLogic Data Services Platform is to obtain metadata from the physical data sources needed by your application.

This chapter describes this process.

 


Creating Data Source Metadata

Metadata is simply information about the structure of a data source. For example, a list of the tables and columns in a relational database is metadata. A list of operations in a Web service is metadata.

In AquaLogic Data Services Platform, a physical data service is based almost entirely on the introspection of physical data sources.

Figure 3-1 Data Services Available to the RTL Sample Application


Table 3-2 lists the types of sources from which AquaLogic Data Services Platform can create metadata.

Table 3-2 Data Sources Available for Creating Data Service Metadata (data source type: access)
  • Relational (including tables, views, stored procedures, and SQL): JDBC
  • Web services (WSDL files): URI, UDDI, WSDL
  • Delimited (CSV files): file-based data, such as spreadsheets
  • Java functions (.java): programmatic
  • XML (XML files): file- or data stream-based XML

You can import metadata on the data sources needed by your application using the AquaLogic Data Services Platform Metadata Import wizard. The wizard introspects available data sources and identifies data objects that can be rendered as data services and functions. Once created, physical data services become the building blocks for queries and logical data services.

Data source metadata can be imported as AquaLogic Data Services Platform functions or procedures. For example, the following source resulted from importing a Web service operation:

(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" kind="read" nativeName="getCustomerOrderByOrderID" nativeLevel1Container="ElecDBTest" nativeLevel2Container="ElecDBTestSoap" style="document"/>::)
declare function f1:getCustomerOrderByOrderID($x1 as element(t1:getCustomerOrderByOrderID)) as schema-element(t1:getCustomerOrderByOrderIDResponse) external;

Notice that the imported Web service operation is described as a "read" function in the pragma. The keyword external indicates that the function's implementation is provided outside the XQuery source, by the physical data source; the associated schema is maintained in a separate file. You can find a detailed description of source code annotations in "Understanding AquaLogic Data Services Platform Annotations" in the XQuery Reference Guide.

For some data sources, such as Web services, imported metadata represents operations that typically return void (in other words, these operations perform work rather than returning data). Such routines are classified as side-effecting functions or, more formally, as AquaLogic Data Services Platform procedures. You also have the option of marking routines imported from certain data sources as procedures. (See Identifying AquaLogic Data Services Platform Procedures.)

The following source resulted from importing Web service metadata that includes an operation that has been identified as a side-effecting procedure:

(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" kind="hasSideEffects" nativeName="setCustomerOrder" style="document"/>::)
declare function f1:setCustomerOrder($x1 as element(t3:setCustomerOrder)) as schema-element(t3:setCustomerOrderResponse) external;

In the above pragma the function is identified as "hasSideEffects".

Note: AquaLogic Data Services Platform procedures are only associated with physical data services and can only be created through the metadata import process. So, for example, attempting to add procedures to a logical data service through Source View will result in an error condition.

Identifying AquaLogic Data Services Platform Procedures

When you import source metadata for Web services, relational stored procedures, or Java functions, you have an opportunity to identify the metadata that represents side-effecting routines. A typical example is a Web service that creates a new customer record. From the point of view of the data service such routines are procedures.

Procedures are not standalone; they are always part of a data service from the same data source.

When importing data from such sources, the Metadata Import wizard automatically categorizes routines that return void as procedures. The reason for this is simple: if a routine does not return data it cannot interoperate with other data service functions.

There are, however, routines that both return data and have side effects; it is these routines that you need to identify as procedures during the metadata import process. Identifying them correctly determines where such routines can be used throughout AquaLogic Data Services Platform.

Table 3-4 lists common AquaLogic Data Services Platform components, identifying which operations are available and which are unavailable for data service procedures.

Table 3-4 AquaLogic Data Services Platform Scope of Procedures
Artifact
Procedures Available
Procedures Unavailable
AquaLogic Data Services Platform IDE
  • Metadata import operations
  • Function execution from Test View
  • AquaLogic Data Services Platform Control query function palette
  • AquaLogic Data Services Platform Palette
  • XQuery Editor function list
  • Query Plan Viewer function list
  • For use in queries
  • For use in logical data services
AquaLogic Data Services Platform Console
  • Function security settings
  • Left tree access
  • Cache operations
AquaLogic Data Services Platform APIs
  • invokeProcedure()
  • Strongly typed API
  • AquaLogic Data Services Platform control
  • invoke() API (only for use with functions)
  • prepareExpression() for running queries

Procedures greatly simplify the process of updating non-relational back-end data sources by providing an invokeProcedure( ) API. This API encapsulates the operational logic necessary to invoke relational stored procedures, Web services, or Java functions. In such cases update logic can be built into a back-end data source routine which, in turn, updates the data.
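
As a rough illustration, the following sketch shows how a client might call a data service procedure through the untyped Data Service Mediator API. Only invokeProcedure() itself is named in this guide; the package, class, application, data service, and procedure names below are assumptions used for the example, so check the client API documentation for the exact signatures.

    import javax.naming.InitialContext;
    import com.bea.ld.dsmediator.client.DataService;
    import com.bea.ld.dsmediator.client.DataServiceFactory;

    public class InvokeProcedureSketch {
        public static void main(String[] args) throws Exception {
            // JNDI context pointing at the WebLogic server hosting the data service
            InitialContext ctx = new InitialContext();

            // Application and data service names are hypothetical
            DataService ds = DataServiceFactory.newDataService(
                ctx, "RTLApp", "ld:DataServices/ElecDBTest");

            // Invoke a side-effecting procedure; arguments are passed as an Object array
            // (an empty array here, for a procedure that takes no parameters)
            ds.invokeProcedure("refreshOrderStatus", new Object[] {});
        }
    }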

For information on updating non-relational sources and other special cases see Handling Updates Through Data Services.

For an example showing how you can identify side-effecting procedures during the metadata import process see Importing Web Services Metadata.

 


Obtaining Metadata from Relational Sources

You can obtain metadata on any relational data source available to the BEA WebLogic Platform. For details see the BEA Platform document entitled How Do I Connect a Database Control to a Database Such as SQL Server or Oracle.

Four types of metadata can be obtained from a relational data source: tables, views, stored procedures, and SQL statements.

Note: When using an XA transaction driver, you need to mark your data source's connection pool to allow LocalTransaction in order for single-database reads and updates to succeed.
Note: For additional information on XA transaction adapter settings see "Developing Adapters" in the BEA WebLogic Integration documentation: http://download.oracle.com/docs/cd/E13214_01/wli/docs81/devadapt/dbmssamp.html

Importing Relational Table and View Metadata

To create metadata on relational tables and views follow these steps:

  1. Select the project in which you want to create your metadata. For example, if you have a project called myLDProject right-click on the project name and select Import Source Metadata... from the pop-up menu. Click Next.
  2. From the available data sources in the Import Wizard select Relational (see Figure 3-5).
  3. Figure 3-5 Selecting a Relational Source from the Import Metadata Wizard



  4. Either select a data source from available sources or make a new data source available to WebLogic Server (WLS).
  5. Figure 3-6 Import Data Source Metadata Selection Dialog Box



Data Object Selection Options

For information on creating a new data source see Creating a New Data Source.

If you choose to select from an existing data source, several options are available (Figure 3-6).

Select All Database Objects

If you choose to select all, a table will appear containing all the tables, views, and stored procedures in your data source organized by catalog and schema.

Filter Data Source Objects

Sometimes you know exactly which objects in your data source you want to turn into data services. Alternatively, your data source may be so large that a filter is needed, or you may be looking for objects with specific naming characteristics (such as %audit2003%, which retrieves all objects whose names contain the string audit2003).

In such cases you can identify the exact parts of your relational source that you want to become data service candidates using standard JDBC wildcards. An underscore (_) creates a wildcard for an individual character. A percentage sign (%) indicates a wildcard for a string. Entries are case-sensitive.

For example, you could search for all tables starting with CUST with the entry: CUST%. Or, if you had a relational schema called ELECTRONICS, you could enter that term in the Schema field and retrieve all the tables, views, and stored procedures that are a part of that schema.

Another example:

 CUST%, PAY% 

entered in the Tables/Views field retrieves all tables and views starting with either CUST or PAY.

Note: If no items are entered for a particular field, all matching items are retrieved. For example, if no filtering entry is made for the Procedure field, all stored procedures in the data source will be retrieved.

For relational tables and views you should choose either the Select all option or Selected data source objects.

You can also use wildcards to support importing metadata on internal stored procedures. For example, entering the following string as a stored procedure filter:

%TRIM%

retrieves metadata on the system stored procedure:

STANDARD.TRIM

In such a situation you would also want to make a nonsense entry in the Table/View field to avoid retrieving all tables and views in the database.

For details on stored procedures see Importing Stored Procedure-Based Metadata.

SQL statement

Allows you to enter an SQL statement that is used as the basis for creating a data service. See Using SQL to Import Metadata for details.

Creating a New Data Source

Most often you will work with existing data sources. However, if you choose New... the WLS DataSource Viewer appears (Figure 3-7). Using the DataSource Viewer you can create new connection pools and data sources.

Figure 3-7 BEA WebLogic Data Source Viewer


For details on using the DataSource Viewer see Configuring a Data Source in WebLogic Workshop documentation.

Selecting an Existing Data Source

Only data sources that have been set up through the BEA WebLogic Administration Console are available to an AquaLogic Data Services Platform application or project. In order for the BEA WebLogic Server used by AquaLogic Data Services Platform to access a particular relational data source, you need to set up a JDBC connection pool and a JDBC data source.

Once you have selected a data source, you need to choose how you want to develop your metadata: by selecting all objects in the database, by filtering database objects, or by entering a SQL statement (see Figure 3-6).

Creating Table- and View-Based Metadata

Once you have selected a data source and any optional filters, a list of available database objects appears.

Figure 3-9 Identifying Database Objects to be Used as Data Services


Using standard dialog commands you can add one or several tables to the list of selected data objects. To deselect a table, select that table in the right-hand column and click Remove.

A Search field is also available. This is useful for data sources which have many objects. Enter a search string, then click Search repeatedly to move through your list.

  1. Once you have selected one or several database objects, click Next to verify the location of the to-be-created data services and the names of your new data services.
  2. The imported data summary screen:

    • Lists selected objects by name. You can mouse over the XML type to see the complete path (Figure 3-10).
    • Lists the location of the generated data service in the current application.
    • Identifies any name conflicts. Name conflicts occur when there is a data service of the same name present in the target directory. Any name conflicts are highlighted in red.
You can edit the file name to clarify the name or to avoid conflicts. Simply click on the name of the file and make any editing changes.

Alternatively, choose Remove All to return to the initial, nothing-is-selected state.

  1. There are several situations where you will need to change the name of your data service:
    • There already is a data service of the same name in your application.
    • You are trying to create multiple data services with the same name.
    • In such cases the name(s) of the data service(s) having name conflicts appear in red. Simply change to a unique name using the built-in line editor.

      Figure 3-10 Relational Source Import Data Summary Screen



  2. Click Finish. A data service will be created for each object selected. The file extension of the created data services will always be .ds.
Database-specific Considerations

Database vendors variously support database catalogs and schemas. Table 3-11 describes this support for several major vendors.

Table 3-11 Vendor Support for Catalog and Schema Objects

Oracle
  Catalog: Does not support catalogs. When specifying database objects, the catalog field should be left blank.
  Schema: Typically the name of an Oracle user ID.

DB2
  Catalog: If specifying database objects, the catalog field should be left blank.
  Schema: Schema name corresponds to the catalog owner of the database, such as db2admin.

Sybase
  Catalog: Catalog name is the database name.
  Schema: Schema name corresponds to the database owner.

Microsoft SQL Server
  Catalog: Catalog name is the database name.
  Schema: Schema name corresponds to the catalog owner, such as dbo. The schema name must match the catalog or database owner for the database to which you are connected.

Informix
  Catalog: Does not support catalogs. If specifying database objects, the catalog field should be left blank.
  Schema: Not needed.

PointBase
  Catalog: PointBase database systems do not support catalogs. If specifying database objects, the catalog field should be left blank.
  Schema: Schema name corresponds to a database name.

XML Name Conversion Considerations

When a source name is encountered that does not fit within XML naming conventions, default generated names are converted according to rules described by the SQLX standard. Generally speaking, an invalid XML name character is replaced by its hexadecimal escape sequence (having the form _xUUUU_).

For additional details see section 9.1 of the W3C draft version of this standard:


http://www.sqlx.org/SQL-XML-documents/5WD-14-XML-2003-12.pdf 
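
For example (using a hypothetical column name), a relational column named ORDER DETAIL contains a space, which is not a legal XML name character; in the generated schema the space is replaced by its hexadecimal escape sequence:

    ORDER DETAIL   becomes   ORDER_x0020_DETAIL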

Once you have created your data services you are ready to start constructing logical views on your physical data. See Designing Data Services and Modeling Data Services.

Importing Stored Procedure-Based Metadata

Many DBMS systems utilize stored procedures to improve query performance, manage and schedule data operations, enhance security, and so forth. For specifically supported vendors you can import metadata based on stored procedures. Each stored procedure becomes a data service.

Note: See Supported Configurations in AquaLogic Data Services Platform Release Notes. For details on creating and managing stored procedures see the documentation for the particular DBMS.

Stored procedures are essentially database objects that logically group a set of SQL and native database programming language statements together to perform a specific task.

Table 3-12 defines some commonly used terms as they apply to this discussion of stored procedures.

Table 3-12 Terms Commonly Used When Discussing Stored Procedures
Term
Usage
Function
A function is identical to a procedure except that a function always returns one or more values to the caller, while a procedure never returns a value. The value can be a simple type, a row type, or a complex user-defined type.
Package
A package is a group of related procedures and functions, together with the cursors and variables they use, stored together in a database for continued use as a unit. Similar to standalone procedures and functions, packaged procedures and functions can be called explicitly by applications or users.
Stored Procedure
A sequence of programming commands written in an extended SQL (such as PL/SQL or T-SQL), Java or XQuery, stored in the database where it is to be used to maximize performance and enhance security. The application can call a procedure to fetch or manipulate database records, rather than using code outside the database to get the same results. Stored procedures do not return values.
AquaLogic Data Services Platform Procedure
Typically a routine which performs work but does not return data. An example would be a routine callable from a data service which writes information to a log file.
Rowset
The set of rows returned by a procedure or query.
Result set
JDBC term for rowset.
Parameter mode
Parameters can have three modes: IN, OUT, and INOUT. These roughly correspond to "write", "read", and "read/write".

Importing Stored Procedures Using the Metadata Import Wizard

Imported stored procedure metadata is quite similar to imported metadata for relational tables and views. The initial three steps for importing stored procedures are the same as importing any relational metadata (described under Importing Relational Table and View Metadata).

Note: If a stored procedure has only one return value and the value is either a simple type or a RowSet that maps to an existing schema, no schema file is created.


You can select any combination of database tables, views, and stored procedures. If you select one or several stored procedures, the Metadata Import wizard will guide you through the additional steps required to turn a stored procedure into a data service. These steps are:

  1. Select one or several stored procedures. A data service can represent only one stored procedure. In other words, if you have five stored procedures, you will create five data services.
  2. Figure 3-13 Selecting Stored Procedure Database Objects to Import



  3. After you have added the database objects that you want to become data services, click Next.
  4. Configure each selected stored procedure (Figure 3-14). If your stored procedure has an OUT parameter requiring a complex element, you may need to provide a schema.
  5. Figure 3-14 Configuring a Stored Procedure in Pre-editing Mode



    Data objects in the stored procedure that cannot be identified by the Metadata Import wizard will appear in red, without a datatype. In such cases you need to enter Edit mode (click the Edit button) to identify the data type.

    Your goal in correcting an "<unknown>" condition associated with a stored procedure (Figure 3-14) is to bring the metadata obtained by the import wizard into conformance with the actual metadata of the stored procedure. In some cases this means correcting the location of the return type. In others you will need to adjust the type associated with an element of the procedure or add elements that were not found during the initial introspection of the stored procedure.

    Figure 3-15 Stored Procedure in Editing Mode (with Callouts)



  6. Edit your procedure as appropriate using the following steps:
    1. Select a stored procedure from the complete list of stored procedures that you want to turn into data services.
    2. Edit the stored procedure parameters including setting mode (in, out, inout), type, and for out parameters, schema location.
    3. Verify and, if necessary, add, remove, or change the order of parameters.
    4. Verify and, if necessary, add, remove, or change any editable rowset.
    5. Supply a return type (either simple, or complex through identifying a schema location) in cases where the Metadata Import wizard was unable to determine the type.
    6. Accept or cancel your changes.
    7. You need to complete information for each selected stored procedure before you can move to the next step. In particular, any stored procedures shown in red must be addressed.

      Details for each section of the stored procedure import dialog box appear below.

Procedure Profile

Each element in a stored procedure is associated with a type. If the item is a simple type, you can simply choose from the pop-up list of types.

Figure 3-16 Changing the Type of an Element in a Stored Procedure


If the type is complex, you may need to supply an appropriate schema. Click on the schema location button and either enter a schema path name or browse to a schema. The schema must reside in your application.

After selecting a schema, both the path to the schema file and the URI appear. For example:

{http://temp.openuri.org/schemas/Customer.xsd}CUSTOMER
Procedure Parameters

The Metadata Import wizard, working through JDBC, also identifies any stored procedure parameters. This includes the name, mode (input [in], output [out], or bidirectional [inout]) and data type. The out mode supports the inclusion of a schema.

Complex type is only supported under three conditions:

Rowsets

Not all databases support rowsets. In addition, JDBC does not report information related to defined rowsets. In order to create data services from stored procedures that use rowset information, supply the correct ordinal (matching number) and a schema. If the schema has multiple global elements, you can select the one you want from the Type column. Otherwise the type will be the first global element in your schema file.

The order of rowset information is significant; it must match the order in your data source. Use the Move Up / Move Down commands to adjust the ordinal number assigned to the rowset.

Complete the importation of your procedures by reviewing and accepting items in the Summary screen (see step 4 in Importing Relational Table and View Metadata for details).

Note: XML types in data services generated from stored procedures do not display native types. However, you can view the native type in the Source View pragma (see Working with XQuery Source).

Handling Stored Procedure Rowsets

A rowset type is a complex type. The name of the rowset type can be:

Identifying Stored Procedures as Data Service Procedures

It is often convenient to leverage independent routines as part of managing enterprise information through a data service. An obvious example would be to leverage standalone update or security functions through data services. Such functions have no XML type; in fact they typically return nothing (or void). Instead the data service knows that they have side effects and associates them as procedures with a data service from the same data source.

Stored procedures are very often side-effecting from the perspective of the data service, since they perform internal operations on data. In such cases all you need to do is identify the stored procedures as data service procedures during the metadata import process.

After you have identified the stored procedures that you want to add to your data service or XQuery function library (XFL), you also have an opportunity to mark which of these should be treated as data service procedures.

Figure 3-17 Identifying Stored Procedures Having Side Effects


Note: Data service procedures based around atomic (simple) types are collected in an identified XQuery function library (XFL) file. Other procedures need to be associated with a data service that is local to your AquaLogic Data Services Platform-enabled project.
Internal Stored Procedure Support

You can import metadata for internal stored procedures. See Filter Data Source Objects for details.

Stored Procedure Version Support

Only the most recent version of a stored procedure can be imported into AquaLogic Data Services Platform. For this reason you cannot identify a version number when importing a stored procedure through the Metadata Import wizard. Similarly, adding a version number to AquaLogic Data Services Platform source will result in a query exception.

Supporting Stored Procedures With Nullable Input Parameter(s)

If you know that an input parameter of a stored procedure is nullable (can accept null values), you can change the signature of the function in Source View to make such parameters optional by adding a question mark at the end of the parameter type. For example, the signature:

	function myProc($arg1 as xs:string) ...

would become:

function myProc($arg1 as xs:string?) ...

For additional information on Source View see Working with XQuery Source.

Stored Procedure Support for Commonly Used Databases

Each database vendor approaches stored procedures differently. XQuery support limitations are, in general, due to JDBC driver limitations.

General Restriction

AquaLogic Data Services Platform does not support rowsets as input parameters.

Oracle Stored Procedure Support

Table 3-18 summarizes AquaLogic Data Services Platform support for Oracle database procedures.

Table 3-18 Support for Oracle Stored Procedures
Term
Usage
Procedure types
  • Procedures
  • Functions
  • Packages
Parameter modes
  • Input only
  • Output only
  • Input/Output
  • None
Parameter data types
Any Oracle PL/SQL data type except those listed below:
  • ROWID
  • UROWID

Note: When defining function signatures, note that the Oracle %TYPE and %ROWTYPE types must be translated to XQuery types that match the true types underlying the stored procedure's %TYPE and %ROWTYPE declarations. %TYPE declarations map to simple types; %ROWTYPE declarations map to rowset types.

For a list of database types supported by AquaLogic Data Services Platform see Relational Data Types to XQuery Data Types.
Data returned from a function
Oracle supports returning PL/SQL data types such as NUMBER, VARCHAR, %TYPE, and %ROWTYPE as parameters.
Comments
The following identifies limitations associated with importing Oracle database procedure metadata.
  • The Metadata Import wizard can only detect the data structure for cursors that have a binding PL/SQL record. For a dynamic cursor you need to manually specify the cursor schema.
  • Data from a PL/SQL record structure cannot be retrieved due to Oracle JDBC driver limitations.
  • The Oracle JDBC driver supports rowset output parameters only if they are defined as reference cursors in a package.
  • The Oracle JDBC driver does not support NATURALN and POSITIVEN as output only parameters.

Sybase Stored Procedure Support

Table 3-19 summarizes AquaLogic Data Services Platform support for Sybase SQL Server database procedures.

Table 3-19 Support for Sybase Stored Procedures
Term
Usage
Procedure types
  • Procedures
  • Grouped procedures
  • Functions
  • Functions are categorized as scalar, inline table-valued, or multi-statement table-valued. Inline table-valued and multi-statement table-valued functions return rowsets.

Parameter modes
  • Input only
  • Output only
Parameter data types
For the complete list of database types supported by AquaLogic Data Services Platform see Relational Data Types to XQuery Data Types.
Data returned from a function
Sybase functions support returning a single value or a table.
Procedures return data in the following ways:
  • As output parameters, which can return either data (such as an integer or character value) or a cursor variable (cursors are rowsets that can be retrieved one row at a time).
  • As return codes, which are always an integer value.
  • As a rowset for each SELECT statement contained in the stored procedure or any other stored procedures called by that stored procedure.
  • As a global cursor that can be referenced outside the stored procedure, returning a single value or multiple values.
Comments
The following identifies limitations associated with importing Sybase database procedure metadata:
  • The Sybase JDBC driver does not support input/output or output only parameters that are rowsets (including cursor variables).
  • The jConnect driver and some versions of the BEA Sybase driver cannot detect the parameter mode of the procedure. In this case, the mode is reported as UNKNOWN, which prevents importing the metadata; you need to set the correct mode in order to proceed.
  • Only data types generally supported by AquaLogic Data Services Platform metadata import can be imported as part of stored procedures.

IBM DB2 Stored Procedure Support

Table 3-20 summarizes AquaLogic Data Services Platform support for IBM DB2 database procedures.

Table 3-20 Support for IBM DB2 Stored Procedures
Term
Usage
Procedure types
  • Procedures
  • Functions
  • Packages
Each function is also categorized as a scalar, column, row, or table function.
Here are additional details on function categorization:
  • A scalar function is one that returns a single-valued answer each time it is called.
  • A column function is one which conceptually is passed a set of like values (a column) and returns a single-valued answer (for example, AVG()).
  • A row function is a function that returns one row of values.
  • A table function is a function that returns a table to the SQL statement that referenced it.
Parameter modes
  • Input only
  • Output only
  • Input/output
Parameter data types
For the complete list of database types supported by AquaLogic Data Services Platform see Relational Data Types to XQuery Data Types.
Data returned from a function
DB2 supports returning a single value, a row of values, or a table.
Comments
The following identifies limitations associated with importing DB2 database procedure metadata:
  • Column type functions are not supported.
  • Rowsets as output parameters are not supported.
  • The DB2 JDBC driver supports float, double, and decimal input only and output only parameters.
  • Float, double, and decimal data types are not supported as input/output parameters.

  • Only data types generally supported by AquaLogic Data Services Platform metadata import can be imported as part of stored procedures.

Microsoft SQL Server Stored Procedure Support

Table 3-21 summarizes AquaLogic Data Services Platform support for Microsoft SQL Server database procedures.

Table 3-21 AquaLogic Data Services Platform Support for Microsoft SQL Server Stored Procedures
Term
Usage
Procedure types
SQL Server supports procedures, grouped procedures, and functions. Each function is categorized as scalar, inline table-valued, or multi-statement table-valued.
Inline table-valued and multi-statement table-valued functions return rowsets.
Parameter modes
SQL Server supports input only and output only parameters.
Parameter data types
SQL Server procedures/functions support any SQL Server data type as a parameter.
Data returned from a function
SQL Server functions support returning a single value or a table.
Data can be returned in the following ways:
  • As output parameters, which can return either data (such as an integer or character value) or a cursor variable (cursors are rowsets that can be retrieved one row at a time).
  • As return codes, which are always an integer value.
  • As a rowset for each SELECT statement contained in the stored procedure or any other stored procedures called by that stored procedure.
Comments
The following identifies limitations associated with importing SQL Server procedure metadata.
  • Result sets returned from SQL Server (as well as those returned from Sybase) are not detected automatically. Instead, you need to add the resulting rowset information manually.
  • The Microsoft SQL Server JDBC driver does not support rowset input/output or output only parameters (including cursor variables).
  • Only data types generally supported by AquaLogic Data Services Platform metadata import can be imported as part of stored procedures.

Using SQL to Import Metadata

One of the relational import metadata options (see Figure 3-6) is to use an SQL statement to customize introspection of a data source. If you select this option the SQL Statement dialog appears.

Figure 3-22 SQL Statement Dialog Box


You can type or paste your SELECT statement into the statement box (Figure 3-22), indicating parameters with a "?" question-mark symbol. Using one of the AquaLogic Data Services Platform data samples, the following SELECT statement can be used:

SELECT * FROM RTLCUSTOMER.CUSTOMER WHERE CUSTOMER_ID = ?

RTLCUSTOMER is a schema in the data source, CUSTOMER is, in this case, a table.

For the parameter field, you would need to select a data type. In this case, CHAR or VARCHAR.

The next step is to assign a data service name.

When you run your query under Test View, you will need to supply the parameter in order for the query to run successfully.

Once you have entered your SQL statement and any required parameters click Next to change or verify the name and location of your new data service.


Figure 3-23 Relational SQL Statement Imported Data Summary Screen


The imported data summary screen identifies a proposed name for your new data service.

The final steps are no different from those used to create a data service from a table or view.

Relational Data Types to XQuery Data Types

Relational data types are necessarily mapped to XQuery data types when metadata is obtained. Specific mappings related to core and base support for relational data are described in the XQuery-SQL Mapping Reference in the XQuery Reference Guide.
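
As a rough illustration, a few typical mappings are shown below; the authoritative list is in the XQuery-SQL Mapping Reference.

    VARCHAR, CHAR       xs:string
    INTEGER             xs:int
    DECIMAL, NUMERIC    xs:decimal
    DATE                xs:date
    TIMESTAMP           xs:dateTime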

Providing Role-based Access to AquaLogic Data Services Platform Relational Sources

When you import metadata from relational sources, you can provide logic in your application that maps users to different data sources depending on the user's role. This is accomplished by creating an interceptor and adding an attribute to the RelationalDB annotation for each data service in your application.

The interceptor is a Java class that implements the SourceBindingProvider interface. This class provides the logic for mapping users, depending on their current credentials, to a logical data source name or names. This makes it possible to control the level of access to a physical relational source based on the logical data source names.

For example, you could have the data source names cgDataSource1, cgDataSource2, and cgDataSource3 defined on your WebLogic Server and define the logic in your class so that a user who is an administrator can access all three data sources, but a normal user only has access to the data source cgDataSource1.

Note: All relational, update overrides, stored procedure data services, or stored procedure XFL files that refer to the same relational data source should also use the same source binding provider; that is, if you specify a source binding provider for at least one of the data service (.ds) files, you should set it for the rest of them.

To implement the interceptor logic, do the following:

  1. Write a Java class (for example, SQLInterceptor) that implements the interface com.bea.ld.bindings.SourceBindingProvider and define a public getBinding() method within the class. The signature of this method is:
  2. public String getBinding(String genericLocator, boolean isUpdate)

    The genericLocator parameter specifies the current logical data source name. The isUpdate parameter indicates whether a read or an update is occurring. A value of true indicates an update. A value of false indicates a read. The string returned is the logical data source name to which the user is to be mapped. Listing 3-1 shows an example interceptor class.

  3. Compile your class into a JAR file.
  4. In your application, save the JAR file in the APP-INF/lib directory of your WebLogic Workshop application.
  5. Define the configuration interceptor for the data source in your DS or XFL files (or both if necessary) by adding a sourceBindingProviderClassName attribute to the relationalDB annotation. The attribute must be assigned the fully qualified name of your interceptor class. For example:
  6. <relationalDB dbVersion="4" dbType="pointbase" name="cgDataSource" sourceBindingProviderClassName="sql.SQLInterceptor"/>
  7. Compile and run your application. The interceptor will be invoked on execution.
  8. Listing 3-1 Interceptor Class Example
    public class SqlProvider implements com.bea.ld.bindings.SourceBindingProvider {
        public String getBinding(String dataSourceName, boolean isUpdate) {

            // Obtain the identity of the current caller
            javax.security.auth.Subject subject =
                weblogic.security.Security.getCurrentSubject();
            String userName = weblogic.security.SubjectUtils.getUsername(subject);

            System.out.println("The user name is " + userName);

            // Map the weblogic user to a specific logical data source
            if (userName.equals("weblogic"))
                dataSourceName = "cgDataSource1";
            System.out.println("The data source is " + dataSourceName);
            System.out.println("SDO " + (isUpdate ? " YES " : " NO "));

            return dataSourceName;
        }
    }

 


Importing Web Services Metadata 

A Web service is a self-contained, platform-independent unit of business logic that is accessible through application adaptors, as well as standards-based Internet protocols such as HTTP or SOAP.

Web services greatly facilitate application-to-application communication. As such they are increasingly central to enterprise data resources. A familiar example of an externalized Web service is a frequent-update weather portlet or stock quotes portlet that can easily be integrated into a Web application. Similarly, a Web service can be effectively used to track a drop shipment order from a seller to a manufacturer.

Note: Multi-dimensional arrays in RPC mode are not supported.

Creating a data service based on a Web service description (WSDL) is similar to importing relational data source metadata (see Importing Relational Table and View Metadata).

Here are the Web service-specific steps involved:

  1. Select the AquaLogic Data Services Platform-based project in which you want to create your Web service metadata. For example, if you have a project called DataServices right-click on the project name and select Import Metadata... from the pop-up menu.
  2. From the available data sources in the Metadata Import wizard select Web service and click Next.
  3. There are three ways to access a Web service:
    • From a Web service description language (WSDL) file that is in your current AquaLogic Data Services Platform project.
    • From a URI, that is, a WSDL accessible via a URL (HTTP).
    • From a Universal Description, Discovery, and Integration service (UDDI).
      Figure 3-24 Locating a Web Service

Note: For the purpose of showing how to import Web service metadata a WSDL file from the RTLApp sample is used for the remaining steps. If you are following these instructions enter the following into the URI field to access the WSDL included with RTLApp:
Note: http://localhost:7001/ElecWS/controls/ElecDBTestContract.wsdl
  1. From the selected Web service choose the operations that you want to turn into data services or XFL functions.
  2. Identify which, if any, Web service-based data services should be marked as having side-effects.
Note: Imported operations returning void are automatically imported as AquaLogic Data Services Platform procedures. You can identify other operations as procedures using the Select Side Effect Procedures dialog ( Figure 3-25).

It is often convenient to leverage side-effecting operations as part of managing enterprise information through a data service. An obvious example would be to manage standalone update or security functions through data services. The data service registers that such operations have side-effects.

Procedures are not standalone; they always are part of a data service from the same data source.

Web service operations can be side-effecting from the perspective of the data service even when they return data. In such cases, you need to associate the Web service operation with a data service during the metadata import process.

Figure 3-25 Marking Imported Operations AquaLogic Data Services Platform Procedures


Procedures must be associated with a data service that is local to a AquaLogic Data Services Platform-enabled project.

Figure 3-26 Identifying Web Service Operations to be Used as Data Services


Using standard dialog editing commands you can select one or several operations to be added to the list of selected Web service operations. To deselect an operation, click on it, then click Remove. Or choose Remove All to return to the initial state.

  1. Click Next to verify the location of the to-be-created data services and their names.
  2. Figure 3-27 Web Services Imported Data Summary Screen



    The summary screen shown in Figure 3-27:

    • Lists the Web service operations you have selected.
    • Lists the target name for the generated data services.
    • Identifies in red any data service name conflicts.
    • Even if there are no name conflicts you may want to change a data service name for clarity. Simply click on the name of the data service and enter the new name.

    • Provides an option for adding the function to an existing data service based on the same WSDL. This option is only enabled if such a data service exists in your project. If there are several data services based on the same WSDL, a dropdown menu allows you to choose the data service for your function.
Note: Web Service functions identified as side-effecting procedures must be associated with a data service based on the same WSDL.
Note: When importing a Web service operation that itself has one or more dependent (or referenced) schemas, the Metadata Import wizard creates second-level schemas according to internal naming conventions. If several operations reference the same secondary schemas, the generated name for the secondary schema may change if you re-import or synchronize with the Web service.
  1. Click Finish. A data service will be created for each selected operation.


Testing Metadata Import With an Internet Web Service URI

If you are interested in trying the Metadata Import wizard with an internet Web service URI, the following page (available as of this writing) provides sample URIs:

http://www.strikeiron.com/BrowseMarketplace.aspx?c=14&m=1

Simply select a topic and navigate to a page showing the sample WSDL address such as:

http://ws.strikeiron.com/SwanandMokashi/StockQuotes?WSDL

Copy the string into the Web service URI field and click Next to select the operations you want to turn into sample data services or procedures.

Another external Web service that can be used to test metadata import can be located at:

http://www.whitemesa.net/wsdl/std/echoheadersvc.wsdl

Setting Up Handlers for Web Services Accessed by AquaLogic Data Services Platform

When you import metadata from Web services for AquaLogic Data Services Platform, you can create SOAP handlers for intercepting SOAP requests and responses. A handler is invoked when a Web service method is called. You can chain handlers that are invoked one after another in a specific sequence by defining the sequence in a configuration file.

To create and chain handlers, follow these two steps:

  1. Create a Java class that extends javax.xml.rpc.handler.GenericHandler. This will be your handler. (Note that you can create more than one handler. For example, you could have one named WShandler and one named AuditHandler.) Listing 3-2 shows an example implementation of a GenericHandler class. Place your handlers in a folder named WShandler in WebLogic Workshop. (For detailed information on how to write handlers, refer to Creating SOAP Message Handlers to Intercept the SOAP Message in Programming WebLogic Web Services.)
  2. Listing 3-2 Example Intercept Handler
    package WShandler;

    import java.util.Iterator;
    import javax.xml.rpc.handler.MessageContext;
    import javax.xml.rpc.handler.soap.SOAPMessageContext;
    import javax.xml.soap.SOAPElement;
    import javax.xml.rpc.handler.HandlerInfo;
    import javax.xml.rpc.handler.GenericHandler;
    import javax.xml.namespace.QName;

    /**
     * Purpose: Log all messages to the Server console
     */
    public class WShandler extends GenericHandler
    {
        HandlerInfo hinfo = null;

        public void init(HandlerInfo hinfo) {
            this.hinfo = hinfo;
            System.out.println("*****************************");
            System.out.println("ConsoleLoggingHandler : init");
            System.out.println(
                "ConsoleLoggingHandler : init HandlerInfo" + hinfo.toString());
            System.out.println("*****************************");
        }

        /**
         * Handles incoming web service requests and outgoing callback requests
         */
        public boolean handleRequest(MessageContext mc) {
            logSoapMessage(mc, "handleRequest");
            return true;
        }

        /**
         * Handles outgoing web service responses and
         * incoming callback responses
         */
        public boolean handleResponse(MessageContext mc) {
            this.logSoapMessage(mc, "handleResponse");
            return true;
        }

        /**
         * Handles SOAP Faults that may occur during message processing
         */
        public boolean handleFault(MessageContext mc) {
            this.logSoapMessage(mc, "handleFault");
            return true;
        }

        public QName[] getHeaders() {
            QName[] qname = null;
            return qname;
        }

        /**
         * Log the message to the server console using System.out
         */
        protected void logSoapMessage(MessageContext mc, String eventType) {
            try {
                System.out.println("*****************************");
                System.out.println("Event: " + eventType);
                System.out.println("*****************************");
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        }

        /**
         * Get the method Name from a SOAP Payload.
         */
        protected String getMethodName(MessageContext mc) {

            String operationName = null;

            try {
                SOAPMessageContext messageContext = (SOAPMessageContext) mc;
                // assume the operation name is the first element
                // after SOAP:Body element
                Iterator i = messageContext.
                    getMessage().getSOAPPart().getEnvelope().getBody().getChildElements();
                while (i.hasNext())
                {
                    Object obj = i.next();
                    if (obj instanceof SOAPElement)
                    {
                        SOAPElement e = (SOAPElement) obj;
                        operationName = e.getElementName().getLocalName();
                        break;
                    }
                }
            }
            catch (Exception e) {
                e.printStackTrace();
            }
            return operationName;
        }
    }
  3. Define a configuration file in your application. This file specifies the handler chain and the order in which the handlers will be invoked. The XML in this configuration file must conform to the schema shown in Listing 3-3.
  4. Listing 3-3 Handler Chain Schema
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema targetNamespace="http://www.bea.com/2003/03/wlw/handler/config/" xmlns="http://www.bea.com/2003/03/wlw/handler/config/" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
    <xs:element name="wlw-handler-config">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="handler-chain" minOccurs="0" maxOccurs="unbounded">
    <xs:complexType>
    <xs:sequence minOccurs="0" maxOccurs="unbounded">
    <xs:element name="handler">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="init-param"
    minOccurs="0" maxOccurs="unbounded">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="description"
    type="xs:string" minOccurs="0"/>
    <xs:element name="param-name" type="xs:string"/>
    <xs:element name="param-value" type="xs:string"/>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    <xs:element name="soap-header"
    type="xs:QName" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element name="soap-role"
    type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="handler-name"
    type="xs:string" use="optional"/>
    <xs:attribute name="handler-class"
    type="xs:string" use="required"/>
    </xs:complexType>
    </xs:element>
    </xs:sequence>
    <xs:attribute name="name" type="xs:string" use="required"/>
    </xs:complexType>
    </xs:element>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    </xs:schema>

    The following is an example of the handler chain configuration. In this configuration, there are two chains. One is named LoggingHandler. The other is named SampleHandler. The first chain invokes only one handler named AuditHandler. The handler-class attribute specifies the fully qualified name of the handler.

    <?xml version="1.0"?> 
    <hc:wlw-handler-config name="sampleHandler" xmlns:hc="http://www.bea.com/2003/03/wlw/handler/config/">
    <hc:handler-chain name="LoggingHandler">
    <hc:handler
    handler-name="handler1"handler-class="WShandler.AuditHandler"/>
    </hc:handler-chain>
    <hc:handler-chain name="SampleHandler">
    <hc:handler
    handler-name="TestHandler1" handler-class="WShandler.WShandler"/>
    <hc:handler handler-name="TestHandler2"
    handler-class="WShandler.WShandler"/>
    </hc:handler-chain>
    </hc:wlw-handler-config>
  5. In your AquaLogic Data Services Platform application, define the interceptor configuration for the method in the data service to which you want to attach the handler. To do this, add an interceptorConfiguration element similar to the one shown in the following example:
  6. xquery version "1.0" encoding "WINDOWS-1252";

    (::pragma xds <x:xds xmlns:x="urn:annotations.ld.bea.com"
    targetType="t:echoStringArray_return"
    xmlns:t="ld:SampleWS/echoStringArray_return">
    <creationDate>2005-05-24T12:56:38</creationDate>
    <webService targetNamespace=
    "http://soapinterop.org/WSDLInteropTestRpcEnc"
    wsdl="http://webservice.bea.com:7001/rpc/WSDLInteropTestRpcEncService?WSDL"/></x:xds>::)

    declare namespace f1 = "ld:SampleWS/echoStringArray_return";

    import schema namespace t1 = "ld:SampleWS/echoStringArray_return" at "ld:SampleWS/schemas/echoStringArray_param0.xsd";

    (::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" kind="read" nativeName="echoStringArray" nativeLevel1Container="WSDLInteropTestRpcEncService" nativeLevel2Container="WSDLInteropTestRpcEncPort" style="rpc">
    <params>
    <param nativeType="null"/>
    </params>
    <interceptorConfiguration aliasName="LoggingHandler" fileName="ld:SampleWS/handlerConfiguration.xml" />
    </f:function>::)

    declare function f1:echoStringArray($x1 as element(t1:echoStringArray_param0)) as schema-element(t1:echoStringArray_return) external;

    Here the aliasName attribute specifies the name of the handler chain to be invoked and the fileName attribute specifies the location of the configuration file.

  7. Include the JAR file in the library module that defines the handler class referred to in the configuration file.
  8. Compile and run your application. Your handlers will be invoked in the order specified in the configuration file.

 


Importing Java Function Metadata

You can create metadata based on custom Java functions. When you use the Metadata Import wizard to introspect a .class file, metadata is created around both complex and simple types. Complex types become data services while simple Java routines are converted into XQueries and placed in an XQuery function library (XFL). In Source View (see Working with XQuery Source) a pragma is created that defines the function signature and relevant schema type for complex types such as Java classes and elements.

In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate Java function metadata import.


Supported Java Function Types

Your Java file can contain two types of functions. These are described in Table 3-28:

Table 3-28 Types of Java Functions Supported for Metadata Import
  • Functions processing primitive types or arrays of primitive types: grouped into an XQuery function library (XFL) file, callable by any data service in the same application.
  • Functions processing complex types or arrays of complex types: grouped into a data service, using XMLBean Java-to-XML technology.

Before you can create metadata on a custom Java function you must create a Java class containing both schema and function information. A detailed example is described in Creating XMLBean Support for Java Functions.
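
As a sketch of what such a class can look like (the class, method, and XMLBean type names below are illustrative, not the shipped sample), functions returning primitive types become XFL functions while functions returning XMLBean document types become data service functions:

    public class PriceFuncs {

        // Returns a primitive type: imported into an XQuery function library (XFL)
        public static float convertPrice(float price, float rate) {
            return price * rate;
        }

        // Returns an XMLBean complex type: imported as a data service function.
        // CUSTOMERDocument is assumed to be generated by XMLBeans from a global
        // CUSTOMER element in the schema project described later in this chapter.
        public static CUSTOMERDocument getDefaultCustomer() {
            CUSTOMERDocument doc = CUSTOMERDocument.Factory.newInstance();
            doc.addNewCUSTOMER();   // populate the element as needed
            return doc;
        }
    }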

Adding Java Function Metadata Using Import Wizard

Importing Java function metadata is similar to importing relational data source metadata (see Importing Relational Table and View Metadata). Here are the Java function-specific steps involved:

  1. Select the AquaLogic Data Services Platform-based project in which you want to create your Java function metadata. (In the DataServices project of the RTLApp there is a special Demo folder containing XML, CSV, and Java data and schema samples.)
  2. Build your project to validate its contents. A build will create a .class file from your .java function and place it in your application's library.
  3. Right-click on the Java folder and select Import Source Metadata from the pop-up menu.
  4. From the available data sources in the Metadata Import wizard select Java Function (see Figure 3-29). Click Next.
  5. Figure 3-29 Selecting a Java Function as the Data Source



  6. Your Java .class file must be in your BEA WebLogic application. You can browse to your file or enter a fully-qualified path name starting from the root directory of your AquaLogic Data Services Platform-based project.
  7. Figure 3-30 Specifying a Java Class File for Metadata Import



  8. Select Java functions for import.
  9. Figure 3-31 Selecting Java Functions to Become Either Data Services or XFL Functions



  10. Java functions with the following input and output types are supported for import:
  1. Identify which, if any, Java function-based data services should be identified as having side-effects.
  2. It is often convenient to leverage independent routines as part of managing enterprise information through a data service. An obvious example would be to leverage standalone update or security functions through data services. Such functions have no XML type; in fact they typically return nothing (or void). Instead the data service knows that the routine has side effects, but those effects are not transparent to the service. AquaLogic Data Services Platform procedures can also be thought of as side-effecting functions.

    Java functions are "side-effecting" from the perspective of the data service when they perform internal operations on data.

    After you have identified the Java functions that you want to add to your project, you can also identify which, if any, of these should be treated as AquaLogic Data Services Platform procedures (Figure 3-32). In the case of main(), the Metadata Import wizard detects that it returns void so it is already marked as a procedure.

    Figure 3-32 Marking Java Functions as AquaLogic Data Services Platform Procedures



    Functions based around atomic (simple) types are collected in an identified XQuery function library (XFL) file.

Note: Side-effecting procedures must be associated with a data service that is from the same data source; in this case, the source is your Java file. In other words, in order to specify a Java function as a procedure, a function in the same file that returns a complex element must either be created at the same time or already exist in your project.
  9. Click Next to verify the name and location of your new data service(s).
    Figure 3-33 Java Function Imported Data Summary Screen

    You can edit the proposed data service name either for clarity or to avoid conflicts with other existing or planned data services. All functions returning complex data types will be in the same data service. Click on the proposed data service name to change it.

    Procedures must be associated with a data service that draws data from the same data source (Java file). In the sample shown in Figure 3-33, the only available data service is PRODUCTS (or whatever name you choose).

    If there are existing XFL files in your project, you have the option of adding the atomic functions to one of those libraries or creating a new library for them. All atomic functions from the same Java file are placed in the same library.

  10. Click Finish.
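The following sketch illustrates the pairing described in the note above; the class and method names are hypothetical and not part of the RTLApp. One method returns a complex XMLBean element and so anchors the data service; the other returns void and can therefore be marked as a side-effecting procedure in that same data service:

public class ProductAdminFunctions {
    // Returns a complex element; imported as a read function, it provides the
    // data service with which the procedure below can be associated.
    public static noNamespace.PRODUCTSDocument.PRODUCTS getPlaceholderProduct() {
        noNamespace.DbDocument dbDoc = noNamespace.DbDocument.Factory.newInstance();
        noNamespace.DbDocument.Db db = dbDoc.addNewDb();
        noNamespace.PRODUCTSDocument.PRODUCTS product = db.addNewPRODUCTS();
        product.setPRODUCTNAME("Placeholder");
        product.setMANUFACTURER("Sample Manufacturer");
        product.setLISTPRICE(new java.math.BigDecimal("10.00"));
        product.setPRODUCTDESCRIPTION("Placeholder product");
        product.setAVERAGESERVICECOST(new java.math.BigDecimal("1.00"));
        return dbDoc.getDb().getPRODUCTSArray(0);
    }

    // Returns void; the Metadata Import wizard detects this and marks the
    // routine as an AquaLogic Data Services Platform procedure.
    public static void resetProductStatistics() {
        // Side-effecting work, such as an update or a cache reset, would go here.
    }
}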

Creating XMLBean Support for Java Functions

Before you can import Java function metadata, you need to create a .class file that contains XMLBean classes based on global elements and compiled versions of your Java functions. To do this, you first create XMLBean classes based on a schema of your data. There are several ways to accomplish this. In the example in this section you create a WebLogic Workshop project of type Schema.

Generally speaking, the process involves generating XMLBean classes from a schema that describes your complex data, then compiling your Java functions against those generated classes.

Note: The Java function import wizard requires that all the complex parameter or return types used by the functions correspond to XMLBean global element types whose content model is an anonymous type. Thus only functions referring to a top-level element are imported.
Creating a Metadata-enriched Java Class: An Example

In the following example there are a number of custom functions in a .java file called FuncData.java. In the RTLApp this file can be found at:

ld:DataServices/Demo/Java/FuncData.java

Some functions in this file return primitive data types, while others return a complex element (Table 3-34). The complex element representing the data to be introspected is in a schema file called FuncData.xsd.

Table 3-34 Metadata-enriched Java Class Artifacts
File
Purpose
FuncData.java
Contains Java functions to be converted into data service query functions. Also contains a small data sample.
FuncData.xsd
Contains a schema for the complex element identified in FuncData.java

The schema file can be found at:

ld:DataServices/Demo/Java/schema/FuncData.xsd

To simplify the example, a small data set is included in the .java file as a string.

The following steps will create a data service from the Java functions in FuncData.java:

  1. Create a new AquaLogic Data Services Platform-based application called CustomFunctions.
  2. Create a new project of type Schema in your application; name it Schemas.
  3. Right-click on the newly created Schemas project and select the Import... option.
  4. Browse to the RTLApp and select FuncData.xsd for import.
  5. Importing a schema file into a schema project automatically starts the project build process.

    When successful, XMLBean classes are created for the global elements in your schema and placed in a JAR file called JavaFunctSchema.jar.

    The JAR file is located in the Libraries section of your application.

  6. Build your project.
  7. In your AquaLogic Data Services Platform-based project (customFunctionsDataServices) create a folder called JavaFuncMetadata.
  8. Right-click on the newly created JavaFuncMetadata folder and select the Import... option.
  9. Browse to the ld:DataServices/Demo/Java folder in the RTLApp and select FuncData.java for import. Click Import.
  10. Build your project.
  11. The JAR file named for your AquaLogic Data Services Platform-based project is updated to include a .class file named FuncData.class; it is this file that can be introspected by the Metadata Import wizard. The file is located in a folder named JavaFuncMetadata in the Libraries section of your application.

    Figure 3-35 Class File Generated Java Function XML Beans



  12. Now you are ready to create metadata from your Java function. These steps are described in Adding Java Function Metadata Using Import Wizard.

Inspecting the Java Source

The .java file used in this example contains both functions and data. More typically, your routine will access data through a data import function.

The first function in Listing 3-4 simply retrieves the first element in an array of PRODUCTS. The second returns the entire array.

Listing 3-4 JavaFunc.java getFirstProduct( ) and getAllProducts( ) Functions
public class JavaFunc {

 ...

    public static noNamespace.PRODUCTSDocument.PRODUCTS getFirstProduct() {
        noNamespace.PRODUCTSDocument.PRODUCTS products = null;
        try {
            // Parse the embedded sample data (the testCustomer string) into a document.
            noNamespace.DbDocument dbDoc = noNamespace.DbDocument.Factory.parse(testCustomer);
            // Index 0 is the first PRODUCTS element in the array.
            products = dbDoc.getDb().getPRODUCTSArray(0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return products;
    }

    public static noNamespace.PRODUCTSDocument.PRODUCTS[] getAllProducts() {
        noNamespace.PRODUCTSDocument.PRODUCTS[] products = null;
        try {
            noNamespace.DbDocument dbDoc = noNamespace.DbDocument.Factory.parse(testCustomer);
            // Return the entire PRODUCTS array.
            products = dbDoc.getDb().getPRODUCTSArray();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return products;
    }
}
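Both functions parse sample data held in a static String field (testCustomer) of the same class. The exact sample shipped with the RTLApp is not reproduced in this guide; a minimal value consistent with the schema in Listing 3-5 might look like the following:

static String testCustomer =
    "<db>" +
      "<PRODUCTS>" +
        "<PRODUCT_NAME>Sample Camera</PRODUCT_NAME>" +
        "<MANUFACTURER>Sample Manufacturer</MANUFACTURER>" +
        "<LIST_PRICE>49.95</LIST_PRICE>" +
        "<PRODUCT_DESCRIPTION>A sample product entry</PRODUCT_DESCRIPTION>" +
        "<AVERAGE_SERVICE_COST>5.00</AVERAGE_SERVICE_COST>" +
      "</PRODUCTS>" +
      "<PRODUCTS>" +
        "<PRODUCT_NAME>Sample Router</PRODUCT_NAME>" +
        "<MANUFACTURER>Sample Manufacturer</MANUFACTURER>" +
        "<LIST_PRICE>89.00</LIST_PRICE>" +
        "<PRODUCT_DESCRIPTION>A second sample product entry</PRODUCT_DESCRIPTION>" +
        "<AVERAGE_SERVICE_COST>8.00</AVERAGE_SERVICE_COST>" +
      "</PRODUCTS>" +
    "</db>";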

The schema used to create XMLBeans is shown in Listing 3-5. It simply models the structure of the complex element; it could have been obtained by first introspecting the data directly.

Listing 3-5 B-PTest.xsd Schema Modeling the Complex Element Parsed by the Java Functions
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="db">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="PRODUCTS" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="AVERAGE_SERVICE_COST" type="xs:decimal"/>
  <xs:element name="LIST_PRICE" type="xs:decimal"/>
  <xs:element name="MANUFACTURER" type="xs:string"/>
  <xs:element name="PRODUCTS">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="PRODUCT_NAME"/>
        <xs:element ref="MANUFACTURER"/>
        <xs:element ref="LIST_PRICE"/>
        <xs:element ref="PRODUCT_DESCRIPTION"/>
        <xs:element ref="AVERAGE_SERVICE_COST"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="PRODUCT_DESCRIPTION" type="xs:string"/>
  <xs:element name="PRODUCT_NAME" type="xs:string"/>
</xs:schema>


Java functions require that an element returned (as specified in the return signature) come from a valid XML document. A valid XML document has a single root element with zero or more children, and its content matches the schema it references.

Listing 3-6 Approach When Data is Retrieved Through a Document
public static noNamespace.PRODUCTSDocument.PRODUCTS getNextProduct() {

    // Create the DbDocument (the root document).
    noNamespace.DbDocument dbDoc = noNamespace.DbDocument.Factory.newInstance();
    // Add the db element to it.
    noNamespace.DbDocument.Db db = dbDoc.addNewDb();
    // Add a PRODUCTS element.
    PRODUCTS product = db.addNewPRODUCTS();
    // Create the children.
    product.setPRODUCTNAME("productName");
    product.setMANUFACTURER("Manufacturer");
    product.setLISTPRICE(new BigDecimal("12.22"));
    product.setPRODUCTDESCRIPTION("Product Description");
    product.setAVERAGESERVICECOST(new BigDecimal("122.22"));

    // Update the children of db.
    db.setPRODUCTSArray(0, product);

    // Update the document with db.
    dbDoc.setDb(db);

    // Now dbDoc is a valid document with db and its children. We are interested
    // in PRODUCTS, which is a child of db, so always create a valid document
    // before processing the children. Just creating the child element and
    // returning it is not enough, since that does not mean the document is
    // valid. The child needs to come from a valid document, which is created
    // for the global element only.

    return dbDoc.getDb().getPRODUCTSArray(0);
}

How Metadata for Java Functions Is Created

In AquaLogic Data Services Platform, user-defined functions are typically static methods of Java classes. The following are supported: functions that process Java primitive types or arrays of primitive types, and functions that process complex types (XMLBean classes generated from global schema elements) or arrays of complex types.

In order to support this functionality, the Metadata Import wizard supports marshalling and unmarshalling so that token iterators in Java are converted to XML and vice-versa.

Functions you create should be defined as static Java methods. When used in an XQuery, the Java method name becomes the XQuery function name, qualified with a namespace.
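For example (the method below and the f1 prefix are purely illustrative; the wizard assigns the actual namespace when the metadata is created):

public class MathLib {
    // After import, this static method is callable from an XQuery as a library
    // function, for example f1:addTen($x), where f1 is bound to the namespace
    // assigned during metadata import.
    public static int addTen(int value) {
        return value + 10;
    }
}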

Table 3-36 shows how simple Java types are cast to XML schema types (and, through them, to their XQuery counterparts).

Table 3-36 Simple Java Types and XQuery Counterparts
Java Simple or Defined Type
Schema Type
boolean
xs:boolean
byte
xs:byte
char
xs:char
double
xs:double
float
xs:float
int
xs:int
long
xs:long
short
xs:short
String
xs:string
java.util.Date
xs:dateTime
java.lang.Boolean
xs:boolean
java.math.BigInteger
xs:integer
java.math.BigDecimal
xs:decimal
java.lang.Byte
xs:byte
java.lang.Character
xs:char
java.lang.Double
xs:double
java.lang.Float
xs:float
java.lang.Integer
xs:integer
java.lang.Long
xs:long
java.lang.Short
xs:short
java.sql.Date
xs:date
java.sql.Time
xs:time
java.sql.Timestamp
xs:dateTime
java.util.Calendar
xs:dateTime

Java functions can also consume variables of XMLBean type that are generated by processing a schema with XMLBeans. The classes generated by XMLBeans can be referenced in a Java function as parameters or return types.

The elements or types referred to in the schema should be global elements because these are the only types in XMLBeans that have static parse methods defined.
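As an illustration, a global element such as the CUSTOMER element used later in this section yields a generated document class with static factory methods, which is what the wizard relies on. The helper class below is hypothetical; the CUSTOMER classes are the ones XMLBeans generates from customer.xsd:

import org.apache.xmlbeans.XmlException;
import xml.cust.beaBB10000.CUSTOMERDocument;

public class CustomerParser {
    // CUSTOMERDocument.Factory exists only because CUSTOMER is declared as a
    // global element; a locally declared element would not have these static
    // parse and newInstance methods.
    public static CUSTOMERDocument.CUSTOMER parseCustomer(String xml) throws XmlException {
        CUSTOMERDocument doc = CUSTOMERDocument.Factory.parse(xml);
        return doc.getCUSTOMER();
    }
}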

The next section provides additional code samples that illustrate how Java functions are used by the Metadata Import wizard to create data services.

Technical Details, with Additional Example Code

In order to create data services or members of an XQuery function library, you would first start with a Java function.

Processing a Function Returning an Array of Java Primitives

As an example, the Java function getListGivenMixed( ) can be defined as:

public static float[] getListGivenMixed(float[] fpList, int size) {
    int listLen = ((fpList.length > size) ? size : fpList.length);
    float[] fpListop = new float[listLen];
    for (int i = 0; i < listLen; i++)
        fpListop[i] = fpList[i];
    return fpListop;
}

After the function is processed through the wizard the following metadata information is created:

xquery version "1.0" encoding "WINDOWS-1252";

(::pragma xfl <x:xfl xmlns:x="urn:annotations.ld.bea.com">
<creationDate>2005-06-01T14:25:50</creationDate>
<javaFunction class="DocTest"/>
</x:xfl>::)

declare namespace f1 = "lib:testdoc/library";

(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" nativeName="getListGivenMixed">
<params>
<param nativeType="[F"/>
<param nativeType="int"/>
</params>
</f:function>::)

declare function f1:getListGivenMixed($x1 as xsd:float*, $x2 as xsd:int) as xsd:float* external;

Here is the corresponding XQuery for executing the above function:

declare namespace f1 = "ld:javaFunc/float"; 
let $y := (2.0, 4.0, 6.0, 8.0, 10.0)
let $x := f1:getListGivenMixed($y, 2)
return $x
Processing Complex Types Represented via XMLBeans

Consider that you have a schema called Customer (customer.xsd), as shown below:

<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema targetNamespace="ld:xml/cust:/BEA_BB10000" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="CUSTOMER">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="FIRST_NAME" type="xs:string" minOccurs="1"/>
        <xs:element name="LAST_NAME" type="xs:string" minOccurs="1"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

If you want to generate a list conforming to the CUSTOMER element you could process the schema via XMLBeans and obtain xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER. Now you can use the CUSTOMER element as shown:

public static xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[]
        getCustomerListGivenCustomerList(
            xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[] ipListOfCust)
        throws XmlException {
    xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[] mylocalver = ipListOfCust;
    return mylocalver;
}

Then the metadata information produced by the wizard will be:

(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" kind="datasource" access="public"> 
<params>
<param nativeType="[Lxml.cust.beaBB10000.CUSTOMERDocument$CUSTOMER;"/>
</params>
</f:function>::)

declare function f1:getCustomerListGivenCustomerList($x1 as element(t1:CUSTOMER)*) as element(t1:CUSTOMER)* external;

The corresponding XQuery for executing the above function is:

declare namespace f1 = "ld:javaFunc/CUSTOMER"; 
let $z := ( 
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME> 
</n:CUSTOMER>), 
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME> 
</n:CUSTOMER>), 
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME> 
</n:CUSTOMER>), 
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME> 
</n:CUSTOMER>)) 
return
  f1:getCustomerListGivenCustomerList($z)
Restrictions on Java Functions

The following restrictions apply to Java functions:

 


Importing Delimited File Metadata

Spreadsheets offer a highly adaptable means of storing and manipulating information, especially information that needs to change quickly. You can easily turn such spreadsheet data into data services.

Spreadsheet documents are often referred to as CSV files, standing for comma-separated values. Although CSV is not a typical native format for spreadsheets, the capability to save spreadsheets as CSV files is nearly universal.

Although the separator field is often a comma, the Metadata Import wizard supports any ASCII character as a separator, as well as fixed-length fields.
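For example, a small comma-delimited file with a header row might look like the following (the data shown is illustrative only):

PRODUCT_NAME,MANUFACTURER,LIST_PRICE
Sample Camera,Sample Manufacturer,49.95
Sample Router,Sample Manufacturer,89.00

With the Header option checked during import (see below), the first row is treated as header information rather than as data.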

Note: Delimited files in a single server must share the same encoding format. This encoding can be specified through the system property ld.csv.encoding and set through the JVM command-line directly or via a script such as startWebLogic.cmd (Windows) or startWebLogic.sh (UNIX).
Note: Here is the format for this command:
-Dld.csv.encoding=<encoding format>

If no format is specified through ld.csv.encoding, then the format specified in the file.encoding system property is used.
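For example, to run the server with UTF-8 encoding for delimited files (UTF-8 is simply an illustrative choice), the server start script would include:

-Dld.csv.encoding=UTF-8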

In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate delimited file metadata import.


Providing a Document Name, a Schema Name, or Both

There are several approaches to developing metadata around delimited information, depending on your needs and the nature of the source.

Using the Metadata Import Wizard on Delimited Files

Importing delimited file information is similar to importing relational data source metadata (see Importing Relational Table and View Metadata). Here are the steps involved:

  1. Select the project in which you want to create your delimited file metadata. For example, if you have a project called myProject right-click on the project name and select Import Source Metadata from the pop-up menu.
  2. From the available data sources in the Metadata Import wizard select Delimited Source as the data type (see Figure 3-37).
    Figure 3-37 Selecting a Delimited Source from the Import Metadata Wizard

  3. You can supply either a schema name, a source file name, or both. Through the wizard you can browse to a file located in your project. You can also import data from any CSV file on your system using an absolute path prepended with the following:

    file:///

    For example, on Windows systems you can access a CSV file such as Orders.csv from the root C: directory using the following URI:

    file:///c:/Orders.csv

    On a UNIX system, you would access such a file with the URI:

    file:///home/Orders.csv
  4. Select additional import options:
    • Header. Indicates whether the delimited file contains header data. Header data is located in the first row of the spreadsheet. If you check this option, the first row will not be treated as data.
    • Delimited or Fixed Width. Data in your file is either separated by a specific character (such as a comma) or is of a fixed width (such as 10 spaces). If the data is delimited, you also need to provide the delimiter character. By default the character is a comma (,).
    Figure 3-38 Specifying Import Delimited Metadata Characteristics

  5. Once you have selected a document and, optionally, a schema, click Next to verify the location and unique name of your new data service.
    Figure 3-39 Delimited Document Imported Data Summary Screen

    You can edit the data service name either to clarify the name or to avoid conflicts with other existing or planned data services. Any name conflicts are displayed in red. To change the name, double click on the name of the data service to activate the line editor.

  6. Click Finish. A data service (.ds file) will be created with your schema as its XML type.
Note: When importing CSV-type data there are several things to keep in mind:

 


Importing XML File Metadata

XML files are a convenient means of handling hierarchical data. XML files and associated schemas are easily turned into data services.

Importing XML file information is similar to importing relational data source metadata (see Importing Relational Table and View Metadata).

The Metadata Import wizard allows you to browse for an XML file anywhere in your application. You can also import data from any XML file on your system using an absolute path prepended with the following:

file:///

For example, on Windows systems you can access an XML file such as Orders.xml from the root C: directory using the following URI:

file:///c:/Orders.xml

On a UNIX system, you would access such a file with the URI:

file:///home/Orders.xml

XML File Import Sample

In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate XML file metadata import.

Here are the steps involved:

  1. Select your AquaLogic Data Services Platform-based project in which you want to create your XML file metadata. For example, if you have a project called myProject, right-click on the project name and select Import Metadata... from the pop-up menu.
  2. From the available data sources in the Metadata Import wizard select XML Source.
    Figure 3-40 Selecting an XML File from the Import Metadata Wizard

  3. In order to access XML data you must first identify a schema; the schema must be located in your application.
    Figure 3-41 Specify an XML File Schema for XML Metadata Import

  4. Optionally specify an XML file. If the XML file exists in your AquaLogic Data Services Platform-based project you can simply browse to it. More likely your document is available as a URI, in which case you want to leave the XML file field empty and supply a URI at runtime.
  5. Once you have selected a schema and optional document name, click Next to verify that the name of your new data service is unique to your application.
    Figure 3-42 XML File Imported Data Summary Screen

    You can edit the data service name either to clarify the name or to avoid conflicts with other existing or planned data services. Conflicts are shown in red. Simply click on the name of the data service to change its name. Then click Next.

  6. Next select a global element in your schema (Figure 3-43). Click OK.
    Figure 3-43 Selecting a Global Element When Importing XML Metadata

  7. Complete the import by reviewing and accepting items in the Summary screen (see step 4 in Importing Relational Table and View Metadata for details).

Testing the Metadata Import Wizard with an XML Data Source

When you create metadata for an XML data source but do not supply a data source name, you will need to identify the URI of your data source as a parameter when you execute the data service's read function (various methods of accessing data service functions are described in detail in the Client Application Developer's Guide).

The identification takes the form of:

<uri>/path/filename.xml

where uri represents a path or path alias, path represents the directory, and filename.xml represents the file name. The .xml extension is required.

You can access files using an absolute path prepended with the following:

file:///

For example, on Windows systems you can access an XML file such as Orders.xml from the root C: directory using the following URI:

file:///c:/Orders.xml

On a UNIX system, you would access such a file with the URI:

file:///home/Orders.xml

Figure 3-44 shows how the XML source file is referenced.

Figure 3-44 Specifying an XML Source URI in Test View


 


Updating Data Source Metadata

When you first create a physical data service its underlying metadata is, by definition, consistent with its data source. Over time, however, your metadata may become "out of sync" for several reasons:

You can use the Update Source Metadata right-click menu option to identify differences between your source metadata files and the structure of the source data including:

In the case of Source Unavailable, the issue likely relates to connectivity or permissions. In the case of the other types of reports, you can determine when and if to update data source metadata to conform with the underlying data sources.

If there are no differences between your metadata and the underlying source, the Update Source Metadata wizard will report up-to-date for each data service tested.

Considerations When Updating Source Metadata

Source metadata should be updated with care since the operation can have both direct and indirect consequences. For example, if you have added a relationship between two physical data services, updating your source metadata can potentially remove the relationship from both data services. If the relationship appears in a model diagram, the relationship line will appear in red, indicating that the relationship is no longer described by the respective data services.

In many cases the Update Source Metadata wizard can automatically merge user changes with the updated metadata. See Using the Update Source Metadata Wizard for details.

Direct and Indirect Effects

Direct effects apply to physical data services. Indirect effects occur to logical data services, since such services are themselves ultimately based, at least in part, on physical data services. For example, if you have created a new relationship between a physical and a logical data service, updating the physical data service can invalidate the relationship. In the case of the physical data service, there will be no relationship reference. The logical data service will retain the code describing the relationship, but that code will be invalid if the opposite relationship notation is no longer present.

Thus updating source metadata should be done carefully. Several safeguards are in place to protect your development effort while preserving your ability to keep your metadata up-to-date. See Archival of Source Metadata for information on how your current metadata is preserved as part of the source update.

Using the Update Source Metadata Wizard

The Update Source Metadata wizard allows you to update your source metadata.

Note: Before attempting to update source metadata you should make sure that your project builds without errors.
Figure 3-45 Updating Source Metadata for Several Data Services


You can verify that your data structure is up-to-date by performing a metadata update on one or multiple physical data services in your AquaLogic Data Services Platform-based project. For example, in Figure 3-45 all the physical data services in the project will be updated.

After you select your target(s), the wizard identifies the metadata that will be verified and any differences between your metadata and the underlying source.

You can select/deselect any data service or XFL file listed in the dialog using the checkbox to the left of the name (Figure 3-46).

Figure 3-46 Data Services Metadata to be Updated


Metadata Update Analysis

Next, the wizard analyzes your metadata. The following types of synchronization mismatches are identified:

An update preview report (Figure 3-47) is prepared describing these differences both generally and at the field level.

Figure 3-47 Metadata Update Plan for RTLApp's DataServices Project


The Metadata Update Preview screen identifies:

Icons indicate whether elements are to be added, removed, or changed. Table 3-48 describes the update source metadata message types and color legend.

Table 3-48 Source Metadata Update Targets and Color Legend
Category
Color
Description
Data source field added
Green
A data source field has been added since the last metadata update.
Data service schema (XML type) modified
Black
A change has been made in a schema that was derived from a data source.
Data source field deleted
Red
A field used by your metadata no longer appears in the source.
Field modified
Blue
A field in your metadata does not exactly match the data source field.
Function modified
Blue
A function in your metadata does not exactly match the data source function.

Synchronization Mismatches

Under some circumstances the Update Source Metadata wizard flags data service artifacts as changed locally when, in fact, no change was made.

For example, in the case of importing a Web service operation, a schema that is dependent (or referenced) by another schema will be assigned an internally-generated filename. If a second imported Web service operation in your project references the same dependent schema, upon synchronization the wizard may note that the name of the imported secondary schema file has changed. Simply proceed with synchronization; the old second-level schema will automatically be removed.

Archival of Source Metadata

When you update source metadata, two files are created and placed in a special directory (UpdateMetadataHistory) in your application: a report describing the update operation and an archived copy of your source metadata as it existed before the update.

A metadata source update operation assigns the same timestamp to both generated files.

Figure 3-49 UpdateMetadataHistory Directory Sample Content


Working with a particular update operation's report and archived source, you can often quickly restore relationships and other changes that were made to your metadata while being assured that your metadata is up-to-date.

