Data Services Developer's Guide
A first step in enabling the data services provided by the BEA AquaLogic Data Services Platform (DSP) is to obtain metadata from physical data available to your application.
Topics in this chapter include:
Metadata is simply information about the structure of data. For example, a list of the tables and columns in a database is metadata.
In DSP, data services are initially derived from metadata extracted from physical data sources. These base data services are often called physical data services.
Figure 3-1 Data Services Available to the RTL Sample Application
Table 3-2 lists the types of sources from which DSP can create metadata.
Table 3-2 Data Sources Available for Creating Data Service Metadata
When information about physical data is developed using the Metadata Import wizard, two things happen:

- A data service file (.ds) is created in your DSP-based project.
- A schema file (.xsd) is created that describes the XML type of the data service. This schema is placed in a sub-directory of your newly created data service.

DSP provides a Metadata Import wizard that introspects available data sources and identifies data objects that can be rendered as data services or functions. Once created, physical data services become the building blocks for queries and logical data services.
The next sections of this chapter describe how you can use the Metadata Import wizard to create data services from various types of data.
You can create metadata on any relational data source available to the BEA WebLogic Platform. For details see the BEA Platform document entitled How Do I Connect a Database Control to a Database Such as SQL Server or Oracle.
Four types of metadata can be created from a relational data source:
Note: When using an XA transaction driver you need to mark your data source's connection pool to allow LocalTransaction in order for single database reads and updates to succeed.
For additional information on XA transaction adapter settings see "Developing Adaptors" in BEA WebLogic Integration documentation: http://download.oracle.com/docs/cd/E13214_01/wli/docs81/devadapt/dbmssamp.html
To create metadata on relational tables and views follow these steps:
Figure 3-3 Selecting a Relational Source from the Import Metadata Wizard
Figure 3-4 Import Data Source Metadata Selection Dialog Box
For information on creating a new data source see Creating a New Data Source.
If you choose to select from an existing data source, several options are available (Figure 3-4).
If you choose to select all, a table will appear containing all the tables, views, and stored procedures in your data source organized by catalog and schema.
Sometimes you know exactly which objects in your data source you want to turn into data services. Or your data source may be so large that a filter is needed. Or you may be looking for objects with specific naming characteristics (such as %audit2003%, a string that would retrieve all objects containing audit2003).
In such cases you can identify the exact parts of your relational source that you want to become data service candidates using standard JDBC wildcards. An underscore (_) creates a wildcard for an individual character. A percentage sign (%) indicates a wildcard for a string. Entries are case-sensitive.
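As an illustration of these matching rules, the following sketch (a hypothetical helper, not part of DSP or JDBC) translates a JDBC metadata pattern into a regular expression and applies it case-sensitively:

```java
import java.util.regex.Pattern;

// Hypothetical illustration of JDBC metadata wildcard semantics:
// '_' matches exactly one character, '%' matches any run of characters,
// and matching is case-sensitive.
public class JdbcPatternMatcher {
    public static boolean matches(String name, String jdbcPattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : jdbcPattern.toCharArray()) {
            if (c == '_') {
                regex.append('.');                       // any single character
            } else if (c == '%') {
                regex.append(".*");                      // any character sequence
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.matches(regex.toString(), name);
    }

    public static void main(String[] args) {
        System.out.println(matches("CUSTOMER", "CUST%")); // true
        System.out.println(matches("CUSTOMER", "cust%")); // false: case-sensitive
    }
}
```

The same patterns are what the wizard passes through to the driver's metadata calls, so behavior for unusual characters may still vary by database.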
For example, you could search for all tables starting with CUST with the entry CUST%. Or, if you had a relational schema called ELECTRONICS, you could enter that term in the Schema field and retrieve all the tables, views, and stored procedures that are part of that schema.
CUST%, PAY% entered in the Tables/Views field retrieves all tables and views starting with either CUST or PAY.
Note: If no items are entered for a particular field, all matching items are retrieved. For example, if no filtering entry is made for the Procedure field, all stored procedures in the data source will be retrieved.
For relational tables and views you should choose either the Select all option or Selected data source objects. For details on stored procedures see Importing Stored Procedure-Based Metadata.
Allows you to enter an SQL statement that is used as the basis for creating a data service. See Using SQL to Import Metadata for details.
Most often you will work with existing data sources. However, if you choose New... the WLS DataSource Viewer appears (Figure 3-5). Using the DataSource Viewer you can create new connection pools and data sources.
Figure 3-5 BEA WebLogic Data Source Viewer
For details on using the DataSource Viewer see Configuring a Data Source in WebLogic Workshop documentation.
Only data sources that have been set up through the BEA WebLogic Administration Console are available to a Data Services Platform application or project. In order for the BEA WebLogic Server used by DSP to access a particular relational data source, you need to set up a JDBC connection pool and a JDBC data source:
http://download.oracle.com/docs/cd/E13222_01/wls/docs81/ConsoleHelp/domain_jdbcconnectionpool_config_general.html
http://download.oracle.com/docs/cd/E13222_01/wls/docs81/ConsoleHelp/domain_jdbcdatasource_config.html
Figure 3-6 Selecting a Data Source
Once you have selected a data source, you need to choose how you want to develop your metadata: by selecting all objects in the database, by filtering database objects, or by entering an SQL statement (see Figure 3-4).
Once you have selected a data source and any optional filters, a list of available database objects appears.
Figure 3-7 Identifying Database Objects to be Used as Data Services
Using standard dialog commands you can add one or several tables to the list of selected data objects. To deselect a table, select that table in the right-hand column and click Remove.
A Search field is also available. This is useful for data sources which have many objects. Enter a search string, then click Search repeatedly to move through your list.
You can edit the file name to clarify the name or to avoid conflicts. Simply click on the name of the file and make any editing changes.
Database vendors variously support database catalogs and schemas. Table 3-9 describes this support for several major vendors.
Table 3-9 Vendor Support for Catalog and Schema Objects
When a source name is encountered that does not fit within XML naming conventions, default generated names are converted according to rules described by the SQLX standard. Generally speaking, an invalid XML name character is replaced by its hexadecimal escape sequence (having the form _xUUUU_).
For additional details see section 9.1 of the W3C draft version of this standard:
http://www.sqlx.org/SQL-XML-documents/5WD-14-XML-2003-12.pdf
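The general rule can be sketched as follows. This is a simplified illustration of the escape scheme only; the full SQLX standard covers additional cases, such as escaping literal _x sequences and invalid name-start characters:

```java
// Simplified sketch of SQLX name mangling: any character not valid in an
// XML name is replaced by _xUUUU_, the four-digit hexadecimal code point
// of the character (e.g. a space becomes _x0020_).
public class SqlxNameEscape {
    public static String escape(String sqlName) {
        StringBuilder xmlName = new StringBuilder();
        for (int i = 0; i < sqlName.length(); i++) {
            char c = sqlName.charAt(i);
            // Approximation of XML name characters; the real rule is the
            // XML 1.0 NameChar production.
            boolean valid = Character.isLetterOrDigit(c)
                    || c == '_' || c == '-' || c == '.';
            if (valid) {
                xmlName.append(c);
            } else {
                xmlName.append(String.format("_x%04X_", (int) c));
            }
        }
        return xmlName.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("ORDER DATE")); // ORDER_x0020_DATE
    }
}
```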
Once you have created your data services you are ready to start constructing logical views on your physical data. See Using Data Services Design View and Modeling Data Services.
Enterprise databases utilize stored procedures to improve query performance, manage and schedule data operations, enhance security, and so forth. You can import metadata based on stored procedures. Each stored procedure becomes a data service.
Note: Refer to your database documentation for details on managing stored procedures.
Stored procedures are essentially database objects that logically group a set of SQL and native database programming language statements together to perform a specific task.
Table 3-10 defines some commonly used terms as they apply to this discussion of stored procedures.
Table 3-10 Terms Commonly Used When Discussing Stored Procedures
The initial three steps for importing stored procedures are the same as importing any relational metadata (described under Importing Relational Table and View Metadata).
Note: Examples in this section use an Oracle database containing a large number of stored procedures.
You can select any combination of database tables, views, and stored procedures. If you select one or several stored procedures, the Metadata Import Wizard will guide you through the additional steps required to turn a stored procedure into a data service. These steps are:
Figure 3-11 Selecting Stored Procedure Database Objects to Import
Figure 3-12 Configuring a Stored Procedure in Pre-editing Mode
Data objects in the stored procedure that cannot be identified by the Metadata Import wizard will appear in red, without a datatype (Figure 3-12). In such cases you need to enter Edit mode (click the Edit button) to identify the data type.
Your goal in correcting an error condition associated with a stored procedure is to bring the metadata obtained by the import wizard into conformance with the actual metadata of the stored procedure. In some cases this will mean correcting the location of the return type. In others you will need to adjust the type associated with an element of the procedure, or add elements that were not found during the initial introspection of the stored procedure.
Figure 3-13 Stored Procedure in Editing Mode (with Callouts)
The Edit Procedure dialog allows you to:
You need to complete information for each selected stored procedure before you can create your data services. In particular, any procedures shown in red must be addressed.
Details for each section of the procedure import dialog box appear below.
Each element in a stored procedure is associated with a type. If the item is a simple type, you can simply choose from the pop-up list of types.
Figure 3-14 Changing the Type of an Element in a Stored Procedure
If the type is complex, you may need to supply an appropriate schema. Click on the schema location button and either enter a schema path name or browse to a schema. The schema must reside in your application.
After selecting a schema, both the path to the schema file and the URI appear. For example:
{http://temp.openuri.org/schemas/Customer.xsd}CUSTOMER
The Metadata Import wizard, working through JDBC, identifies any stored procedure parameters. This includes the name, mode (input [in], output [out], or bidirectional [inout]) and data type. The out mode supports the inclusion of a schema.
Complex type is only supported under three conditions:
All parameters are editable, including the name.
Note: If you make an incorrect choice you can use the Previous, then Next button to return the dialog to its initial state.
Not all databases support rowsets. In addition, JDBC does not report information related to defined rowsets. In order to create data services from stored procedures that use rowset information, supply the correct ordinal (matching number) and a schema. If the schema has multiple global elements, you can select the one you want from the Type column. Otherwise the type will be the first global element in your schema file.
The order of rowset information is significant; it must match the order in your data source. Use the Move Up / Move Down commands to adjust the ordinal number assigned to the rowset.
Complete the importation of your procedures by reviewing and accepting items in the Summary screen (see step 4. in Importing Relational Table and View Metadata for details).
Note: XML types in data services generated from stored procedures do not display native types. However, you can view the native type in the Source View pragma (see Using Source View).
Imported stored procedure metadata is quite similar to imported metadata for relational tables and views.
Note: If a stored procedure has only one return value, and the value is either a simple type or a rowset that maps to an existing schema, no schema file is created.
A rowset type is a complex type. The name of the rowset type can be:
The rowset type contains a sequence of a repeatable element (for example, CUSTOMER) whose children are the fields of the rowset.
Note: All rowset-type definitions must conform to this structure.
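Such a structure might be sketched as the following schema; this is a hypothetical example, and the element and field names are illustrative only:

```xml
<xs:schema elementFormDefault="qualified"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Illustrative rowset type: a repeatable CUSTOMER element holding
       one child element per field of the rowset. -->
  <xs:element name="CUSTOMER_ROWSET">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="CUSTOMER" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="CUSTOMER_ID" type="xs:int"/>
              <xs:element name="FIRST_NAME" type="xs:string"/>
              <xs:element name="LAST_NAME" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```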
In some cases the Metadata Import wizard can automatically detect the structure of a rowset and create an element structure. However, if the structure is unknown, you will need to provide it through the wizard.
Each database vendor approaches stored procedures differently. XQuery support limitations are, in general, due to JDBC driver limitations.
DSP does not support rowset as an input parameter.
Oracle Stored Procedure Support
Table 3-15 summarizes DSP support for Oracle database procedures.
Table 3-15 Support for Oracle Stored Procedures
Any Oracle PL/SQL data type except those listed below.

Note: When defining function signatures, note that the Oracle %TYPE and %ROWTYPE types must be translated to XQuery types that match the true types underlying the stored procedure's %TYPE and %ROWTYPE declarations. %TYPE declarations map to simple types; %ROWTYPE declarations map to rowset types. For a list of database types supported by DSP see Relational Data Types-to-Metadata Conversion.

Oracle supports returning PL/SQL data types such as NUMBER, VARCHAR, %TYPE, and %ROWTYPE as parameters.

The following identifies limitations associated with importing Oracle database procedure metadata.
Sybase Stored Procedure Support
Table 3-16 summarizes DSP support for Sybase SQL Server database procedures.
Table 3-16 Support for Sybase Stored Procedures
For the complete list of database types supported by DSP see Relational Data Types-to-Metadata Conversion.

Sybase functions support returning a single value or a table. Procedures return data in the following ways:

The following identifies limitations associated with importing Sybase database procedure metadata:
IBM DB2 Stored Procedure Support
Table 3-17 summarizes DSP support for IBM DB2 database procedures.
Table 3-17 Support for IBM DB2 Stored Procedures
Each function is also categorized as a scalar, column, row, or table function.

For the complete list of database types supported by DSP see Relational Data Types-to-Metadata Conversion.

DB2 supports returning a single value, a row of values, or a table.

The following identifies limitations associated with importing DB2 database procedure metadata:
Informix Stored Procedure Support
Table 3-18 summarizes DSP support for Informix database stored procedures.
Table 3-18 Support for Informix Stored Procedures
For the complete list of database types supported by DSP see Relational Data Types-to-Metadata Conversion.

Informix supports returning a single value, multiple values, and rowsets.

Informix treats return values from functions or procedures as a rowset. For this reason a rowset needs to be defined for the return values. The following limitations have been identified:

Informix Native Driver Limitations

BEA WebLogic Driver Limitations

Due to the limitations described above, the following approach is suggested for importing Informix stored procedure metadata:

2. Define a schema that matches the return value structure (using the same approach as external schemas for other databases).
Microsoft SQL Server Stored Procedure Support
Table 3-19 summarizes DSP support for Microsoft SQL Server database procedures.
Table 3-19 DSP Support for Microsoft SQL Server Stored Procedures
One of the relational import metadata options (see Figure 3-4) is to use an SQL statement to customize introspection of a data source. If you select this option the SQL Statement dialog appears.
Figure 3-20 SQL Statement Dialog Box
You can type or paste your SELECT statement into the statement box (Figure 3-20), indicating parameters with a question mark ("?"). Using one of the DSP data samples, the following SELECT statement can be used:
SELECT * FROM RTLCUSTOMER.CUSTOMER WHERE CUSTOMER_ID = ?
RTLCUSTOMER is a schema in the data source, CUSTOMER is, in this case, a table.
For the parameter field, you would need to select a data type. In this case, CHAR or VARCHAR.
The next step is to assign a data service name.
When you run your query under Test View, you will need to supply the parameter in order for the query to run successfully.
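The placeholder convention can be illustrated with a small sketch (a hypothetical helper, not DSP code) that counts the "?" parameters a statement declares, skipping any question marks inside single-quoted SQL literals:

```java
// Hypothetical sketch: count the '?' parameter placeholders in an SQL
// statement. Each placeholder must later be bound to a typed value
// (e.g. CHAR or VARCHAR) before the query can run.
public class SqlPlaceholders {
    public static int count(String sql) {
        int count = 0;
        boolean inLiteral = false;            // inside a '...' string literal
        for (char c : sql.toCharArray()) {
            if (c == '\'') {
                inLiteral = !inLiteral;       // naive: ignores escaped quotes
            } else if (c == '?' && !inLiteral) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(count(
            "SELECT * FROM RTLCUSTOMER.CUSTOMER WHERE CUSTOMER_ID = ?")); // 1
    }
}
```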
Once you have entered your SQL statement and any required parameters click Next to change or verify the name and location of your new data service.
Figure 3-21 Relational SQL Statement Imported Data Summary Screen
The imported data summary screen identifies a proposed name for your new data service.
The final steps are no different from those used to create a data service from a table or view.
The following table shows how data types provided by various relational databases are converted into XQuery data types. Types are listed in alphabetical order.
Table 3-22 Relational Data Types and Their XQuery Counterparts
A web service is a self-contained, platform-independent unit of business logic that is accessible through application adaptors, as well as standards-based Internet protocols such as HTTP or SOAP.
Web services facilitate application-to-application communication and, as such, are increasingly important enterprise data resources. A familiar example of an externalized web service is a frequent-update weather portlet or stock quotes portlet that can easily be integrated into a Web application.
Turning a web service into a data service is similar to importing relational data source metadata (see Importing Relational Table and View Metadata). Here are the web service-specific steps involved:
Note: For the purpose of showing how to import web service metadata, a WSDL file from the RTLApp sample is used for the remaining steps. If you are following these instructions, locate ElecDBTestContract.wsdl. It is found in the ElecWS folder of the RTLApp application:

<beahome>\weblogic81\samples\liquiddata\RTLApp\ElecWS\controls\ElecDBTestContract.wsdl
Figure 3-24 Selecting a Web Service WSDL File
Figure 3-25 Identifying Web Service Operations to be Used as Data Services
Using standard dialog editing commands you can select one or several operations to be added to the list of selected web service operations. To deselect an operation, click on it, then click Remove. Or choose Remove All to return to the initial state.
Figure 3-26 Web Services Imported Data Summary Screen
The screen in Figure 3-26:
If you are interested in testing the Metadata Import wizard with a web service URI, the following URI is available as of this writing:
http://ws.strikeiron.com/DataEnhancement?WSDL
Simply copy the above string into the Web Service URI field and click Next to begin selecting elements you want to turn into data services.
You can create metadata based on custom Java functions. When you use the Metadata Import wizard to introspect a .class file, metadata is created around both complex and simple types. Complex types become data services, while simple Java routines are converted into XQueries and placed in an XQuery function library (XFL). In Source View (see Using Source View) a pragma is created that defines the function signature and relevant schema type for complex types such as Java classes and elements.
In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate Java function metadata import.
Your Java file can contain two types of functions:
Before you can create metadata on a custom Java function you must create a Java class containing both schema and function information. A detailed example is described in Creating XMLBean Support for Java Functions.
Importing Java function metadata is similar to importing relational data source metadata (see Importing Relational Table and View Metadata). Here are the Java function-specific steps involved:
Create a .class file from your .java file and place it in your application's library.

Figure 3-27 Selecting a Java Function as the Data Source
The .class file must be in your BEA WebLogic application. You can browse to your file or enter a fully-qualified path name starting from the root directory of your DSP-based project.

Figure 3-28 Specifying a Java Class File for Metadata Import
Figure 3-29 Java Function Imported Data Summary Screen
Note: If your Java function contained any simple routines, these will be placed in an XQuery function library (XFL) file.
Before you can import Java function metadata, you need to create a .class file that contains XMLBean classes based on global elements and compiled versions of your Java functions. To do this, you first create XMLBean classes based on a schema of your data. There are several ways to accomplish this. In the example in this section you create a WebLogic Workshop project of type Schema.
Generally speaking, the process involves:

- Creating a schema (.xsd file) representing the shape of the global elements invoked by your function.
- Building the project; you can use the resulting .class file, if under a DSP-based project, or you can add the JAR file from a Java project to the Library folder of your application.
- Importing metadata from the .class file.
file.In the following example there are a number of custom functions in a .java
file called FuncData.java
. In the RTLApp this file can be found at:
ld:DataServices/Demo/Java/FuncData.java
Some functions in this file return primitive data types, while others return a complex element. The complex element representing the data to be introspected is in a schema file called FuncData.xsd
.
FuncData.java: Contains Java functions to be converted into data service query functions. It also contains a small data sample.

FuncData.xsd: Contains a schema for the complex element identified in FuncData.java.
The schema file can be found at:
ld:DataServices/Demo/Java/schema/FuncData.xsd
To simplify the example, a small data set is included in the .java file as a string.
The following steps will create a data service from the Java functions in FuncData.java:
Importing a schema file into a schema project automatically starts the project build process. When successful, XMLBean classes are created for each function in your Java file and placed in a JAR file called JavaFunctSchema.jar. The .jar file is located in the Libraries section of your application.
Navigate to the ld:DataServices/Demo/Java folder in the RTLApp and select FuncData.java for import. Click Import.

The JAR file named for your DSP-based project is updated to include a .class file named FuncData.class; it is this file that can be introspected by the Metadata Import wizard. The file is located in a folder named JavaFuncMetadata in the Library section of your application.
Figure 3-30 Class File Generated Java Function XML Beans
Select FuncData.class for introspection (the dialog box is shown in Figure 3-28). Of the Java functions available to be imported in Figure 3-31, getAllProducts and getFirstProduct return complex elements. The remaining simple Java functions represent either Java primitives or Java object primitives and will be placed in an .xfl file.
Figure 3-32 Java Function Import Summary Screen
You can edit the projected data service name for clarity or to avoid conflicts with other existing or planned data services. Conflicts are shown in red. Simply click on the name of the data service to change its name. You can also change the name of your library, as has been done in the Summary page shown in Figure 3-32.
When ready, click Finish to create your data service and library file. (See also XQuery Function Library (XFL) Files.)
The .java file used in this example contains both functions and data. More typically, your routine will access data through a data import function. The first function in Listing 3-1 simply retrieves the first element in an array of PRODUCTS. The second returns the entire array.
Listing 3-1 JavaFunc.java getFirstPRODUCT() and getAllPRODUCTS() Functions
public class JavaFunc {
    ...
    public static noNamespace.PRODUCTSDocument.PRODUCTS getFirstProduct() {
        noNamespace.PRODUCTSDocument.PRODUCTS products = null;
        try {
            noNamespace.DbDocument dbDoc =
                noNamespace.DbDocument.Factory.parse(testCustomer);
            products = dbDoc.getDb().getPRODUCTSArray(0);  // first element
        } catch (Exception e) {
            e.printStackTrace();
        }
        return products;
    }

    public static noNamespace.PRODUCTSDocument.PRODUCTS[] getAllProducts() {
        noNamespace.PRODUCTSDocument.PRODUCTS[] products = null;
        try {
            noNamespace.DbDocument dbDoc =
                noNamespace.DbDocument.Factory.parse(testCustomer);
            products = dbDoc.getDb().getPRODUCTSArray();   // entire array
        } catch (Exception e) {
            e.printStackTrace();
        }
        return products;
    }
}
The schema used to create XMLBeans is shown in Listing 3-2. It simply models the structure of the complex element; it could have been obtained by first introspecting the data directly.
Listing 3-2 B-PTest.xsd Model Complex Element Parsed by Java Function
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="db">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="PRODUCTS" maxOccurs="unbounded"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:element name="AVERAGE_SERVICE_COST" type="xs:decimal"/>
    <xs:element name="LIST_PRICE" type="xs:decimal"/>
    <xs:element name="MANUFACTURER" type="xs:string"/>
    <xs:element name="PRODUCTS">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="PRODUCT_NAME"/>
                <xs:element ref="MANUFACTURER"/>
                <xs:element ref="LIST_PRICE"/>
                <xs:element ref="PRODUCT_DESCRIPTION"/>
                <xs:element ref="AVERAGE_SERVICE_COST"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:element name="PRODUCT_DESCRIPTION" type="xs:string"/>
    <xs:element name="PRODUCT_NAME" type="xs:string"/>
</xs:schema>
Java functions require that an element returned (as specified in the return signature) come from a valid XML document. A valid XML document has a single root element with zero or more children, and its content matches the referenced schema.
Listing 3-3 Approach When Data is Retrieved Through a Document
public static noNamespace.PRODUCTSDocument.PRODUCTS getNextProduct() {
    // Create the DbDocument (the root).
    noNamespace.DbDocument dbDoc = noNamespace.DbDocument.Factory.newInstance();
    // Add the db element to it.
    noNamespace.DbDocument.Db db = dbDoc.addNewDb();
    // Add the PRODUCTS element.
    PRODUCTS product = db.addNewPRODUCTS();
    // Create the children.
    product.setPRODUCTNAME("productName");
    product.setMANUFACTURER("Manufacturer");
    product.setLISTPRICE(new BigDecimal("12.22"));
    product.setPRODUCTDESCRIPTION("Product Description");
    product.setAVERAGESERVICECOST(new BigDecimal("122.22"));
    // Update the children of db.
    db.setPRODUCTSArray(0, product);
    // Update the document with db.
    dbDoc.setDb(db);
    // Now dbDoc is a valid document with db and its children. We are
    // interested in PRODUCTS, which is a child of db. Hence always create
    // a valid document before processing the children. Just creating the
    // child element and returning it is not enough, since it does not
    // mean the document is valid. The child needs to come from a valid
    // document, which is created for the global element only.
    return dbDoc.getDb().getPRODUCTSArray(0);
}
In DSP, user-defined functions are typically Java classes. The following are supported:
In order to support this functionality, the Metadata Import wizard supports marshalling and unmarshalling so that token iterators in Java are converted to XML and vice-versa.
Functions you create should be defined as static Java functions. The Java method name when used in an XQuery will be the XQuery function name qualified with a namespace.
Table 3-33 shows the casting algorithms for simple Java types, schema types and XQuery types.
Table 3-33 Simple Java Types and XQuery Counterparts
Java functions can also consume variables of XMLBean type that are generated by processing a schema via XMLBeans. The classes generated by XMLBeans can be referenced in a Java function as parameters or return types.

The elements or types referred to in the schema should be global elements, because these are the only types in XMLBeans that have static parse methods defined.
The next section provides additional code samples that illustrate how Java functions are used by the Metadata Import wizard to create data services.
In order to create data services or members of an XQuery function library, you would first start with a Java function.
As an example, the Java function getListGivenMixed( ) can be defined as:
public static float[] getListGivenMixed(float[] fpList, int size) {
    int listLen = (fpList.length > size) ? size : fpList.length;
    float[] fpListop = new float[listLen];
    for (int i = 0; i < listLen; i++)
        fpListop[i] = fpList[i];
    return fpListop;
}
After the function is processed through the wizard the following metadata information is created:
xquery version "1.0" encoding "WINDOWS-1252";
(::pragma xfl <x:xfl xmlns:x="urn:annotations.ld.bea.com">
<creationDate>2005-06-01T14:25:50</creationDate>
<javaFunction class="DocTest"/>
</x:xfl>::)
declare namespace f1 = "lib:testdoc/library";
(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" nativeName="getListGivenMixed">
<params>
<param nativeType="[F"/>
<param nativeType="int"/>
</params>
</f:function>::)
declare function f1:getListGivenMixed($x1 as xsd:float*, $x2 as xsd:int) as xsd:float* external;
Here is the corresponding XQuery for executing the above function:
declare namespace f1 = "ld:javaFunc/float";
let $y := (2.0, 4.0, 6.0, 8.0, 10.0)
let $x := f1:getListGivenMixed($y, 2)
return $x
Consider that you have a schema called Customer (customer.xsd), as shown below:
<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema targetNamespace="ld:xml/cust:/BEA_BB10000" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="CUSTOMER">
<xs:complexType>
<xs:sequence>
<xs:element name="FIRST_NAME" type="xs:string" minOccurs="1"/>
<xs:element name="LAST_NAME" type="xs:string" minOccurs="1"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
If you want to generate a list conforming to the CUSTOMER element, you could process the schema via XMLBeans and obtain xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER. Now you can use the CUSTOMER element as shown:
public static xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[]
        getCustomerListGivenCustomerList(
            xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[] ipListOfCust)
        throws XmlException {
    xml.cust.beaBB10000.CUSTOMERDocument.CUSTOMER[] mylocalver = ipListOfCust;
    return mylocalver;
}
Then the metadata information produced by the wizard will be:
(::pragma function <f:function xmlns:f="urn:annotations.ld.bea.com" kind="datasource" access="public">
<params>
<param nativeType="[Lxml.cust.beaBB10000.CUSTOMERDocument$CUSTOMER;"/>
</params>
</f:function>::)
declare function f1:getCustomerListGivenCustomerList($x1 as element(t1:CUSTOMER)*) as element(t1:CUSTOMER)* external;
The corresponding XQuery for executing the above function is:
declare namespace f1 = "ld:javaFunc/CUSTOMER";
let $z := (
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME>
</n:CUSTOMER>),
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME>
</n:CUSTOMER>),
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME>
</n:CUSTOMER>),
validate(<n:CUSTOMER xmlns:n="ld:xml/cust:/BEA_BB10000"><FIRST_NAME>John2</FIRST_NAME><LAST_NAME>Smith2</LAST_NAME>
</n:CUSTOMER>))
for $zz in $z
return
f1:getCustomerListGivenCustomerList($z)
The following restrictions apply to Java functions:
Spreadsheets offer a highly adaptable means of storing and manipulating information, especially information which needs to be changed quickly. You can easily turn such spreadsheet data into data services.
Spreadsheet documents are often referred to as .csv files, standing for comma-separated values. Although .csv is not a typical native format for spreadsheets, the capability to save spreadsheets as .csv files is nearly universal.
Although the separator field is often a comma, the metadata import wizard supports any ASCII character as a separator, as well as fixed-length fields.
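The two parsing modes described above, separator-based and fixed-length, can be sketched as follows (hypothetical helpers, not DSP APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two delimited-file parsing modes the
// Metadata Import wizard supports: an arbitrary single-character
// separator, or fixed-length fields.
public class DelimitedParser {
    // Split one record on any single ASCII separator character.
    public static List<String> splitBySeparator(String line, char separator) {
        List<String> fields = new ArrayList<>();
        StringBuilder field = new StringBuilder();
        for (char c : line.toCharArray()) {
            if (c == separator) {
                fields.add(field.toString());
                field.setLength(0);
            } else {
                field.append(c);
            }
        }
        fields.add(field.toString());   // trailing field
        return fields;
    }

    // Split one record into fixed-length fields of the given widths.
    public static List<String> splitFixedLength(String line, int[] widths) {
        List<String> fields = new ArrayList<>();
        int pos = 0;
        for (int w : widths) {
            fields.add(line.substring(pos, Math.min(pos + w, line.length())));
            pos += w;
        }
        return fields;
    }
}
```

Note the sketch does not handle quoted fields containing the separator, which real .csv exports may produce.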
In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate delimited file metadata import.
There are several approaches to developing metadata around delimited information, depending on your needs and the nature of the source.
Note: The generated schema takes the name of the source file.
Importing delimited file information is similar to importing relational data source metadata (see Importing Relational Table and View Metadata). Here are the steps that are involved:
Figure 3-34 Selecting a Delimited Source from the Import Metadata Wizard
Specify the separator character (such as a comma , ).
Figure 3-35 Specifying Import Delimited Metadata Characteristics
Figure 3-36 Delimited Document Imported Data Summary Screen
Note: When importing .csv data there are several things to keep in mind:
XML files are a convenient means of handling hierarchical data. XML files and associated schemas are easily turned into data services.
Importing XML file information is similar to importing relational data source metadata (see Importing Relational Table and View Metadata).
In the case of XML files, you need to supply both a schema and an XML file. The Metadata Import Wizard allows you to browse for an XML file anywhere in your application. You can also import data from any XML file on your system using an absolute path prepended with the following:
file:///
For example, on Windows systems you can access an XML file such as Orders.xml from the root C: directory using the following URI:
file:///c:/Orders.xml
On a UNIX system, you would access such a file with the URI:
file:///home/Orders.xml
In the RTLApp DataServices/Demo directory there is a sample that can be used to illustrate XML file metadata import.
Figure 3-37 Selecting an XML File from the Import Metadata Wizard
Figure 3-38 Specify an XML File Schema for XML Metadata Import
Figure 3-39 XML File Imported Data Summary Screen
You can edit the data service name either to clarify the name or to avoid conflicts with other existing or planned data services. Conflicts are shown in red. Simply click on the name of the data service to change its name. Then click Next.
Figure 3-40 Selecting a Global Element When Importing XML Metadata
When you create metadata for an XML data source but do not supply a data source name, you need to identify the URI of your data source as a parameter when you execute the data service's read function. (Various methods of accessing data service functions are described in detail in the Client Application Developer's Guide.)
The identification takes the form of:
<uri>/path/filename.xml
where uri represents a path or path alias, path represents the directory, and filename.xml represents the file name. The .xml extension is required.
You can access files using an absolute path prepended with the following:
file:///
For example, on Windows systems you can access an XML file such as Orders.xml from the root C: directory using the following URI:
file:///c:/Orders.xml
On a UNIX system, you would access such a file with the URI:
file:///home/Orders.xml
Figure 3-41 shows how the XML source file is referenced.
Figure 3-41 Specifying an XML Source URI in Test View
When you first create a physical data service, its underlying metadata is, by definition, consistent with its data source. Over time, however, your metadata can become out of sync for several reasons:
You can use the Update Source Metadata right-click menu option to report discrepancies between your source metadata files and the structure of the source data including:
In the case of Source Unavailable, the issue likely relates to connectivity or permissions. For the other report types, you can determine if and when to update your data source metadata to conform with the underlying data sources.
If there are no discrepancies between your metadata and the underlying source, the Update Source Metadata wizard will report up-to-date for each data service tested.
Source metadata should be updated with care since the operation can have both direct and indirect consequences. For example, if you have added a relationship between two physical data services, updating your source metadata will remove the relationship from both data services. If the relationship appears in a model diagram, the relationship line will appear in red, indicating that the relationship is no longer described by the respective data services.
Direct effects apply to physical data services. Indirect effects apply to logical data services, since such services are themselves ultimately based, at least in part, on physical data services. For example, if you have created a new relationship between a physical and a logical data service, updating the physical data service will invalidate the relationship. The physical data service will contain no relationship reference, while the logical data service will retain the code describing the relationship; that code becomes invalid because the opposite relationship notation is no longer present.
Several safeguards are in place to protect your development effort while preserving your ability to keep your metadata up-to-date. These are described in the next section.
Thus, updating source metadata should be done carefully. See Archival of Source Metadata for information on how your current metadata is preserved as part of the source update.
Note: Before attempting to update source metadata you should make sure that your build project has no errors.
The Update Source Metadata wizard allows you to update your source metadata.
Figure 3-42 Updating Source Metadata for Several Data Services
You verify that your data structure is up-to-date by performing a metadata update on one or several data services in your DSP-based project. In Figure 3-42 the update will be on all the data services in the project.
After you select your target(s), the wizard identifies the metadata that will be verified and any differences between your metadata and the underlying source.
Figure 3-43 Data Services Metadata to be Updated
Next, your metadata is updated and an on-screen report prepared. Both general and field-level differences are displayed.
Figure 3-44 Sample Update Preview Report
Table 3-45 describes the update source metadata message types and color legends.
Table 3-45 Source Metadata Update Targets and Color Legend
When you click Finish your metadata will be updated to conform with the underlying data sources.
When you update source metadata two files are created and placed in a special directory in your application:
ld:/updateMetadataHistory/metadatadiff<timestamp>.xml
ld:/updateMetadataHistory/sourceBackUp<timestamp>.zip
An update source metadata operation assigns the same timestamp to both generated files.
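Because the diff report and the source backup from one operation share a timestamp, they can be paired by comparing the timestamps embedded in their names. The sketch below illustrates the naming convention with a hypothetical timestamp format (DSP's actual timestamp format may differ):

```python
import re

# Hypothetical file names following the metadatadiff<timestamp>.xml /
# sourceBackUp<timestamp>.zip convention; the timestamp value is invented.
files = [
    "metadatadiff2005-06-01T10-30-00.xml",
    "sourceBackUp2005-06-01T10-30-00.zip",
]

def timestamp(name: str) -> str:
    """Extract the embedded timestamp from an update-history file name."""
    match = re.match(r"(?:metadatadiff|sourceBackUp)(.+)\.(?:xml|zip)$", name)
    return match.group(1)

# One update operation stamps both files identically, so a diff report
# and its corresponding backup can be matched by timestamp.
assert timestamp(files[0]) == timestamp(files[1])
print(timestamp(files[0]))   # 2005-06-01T10-30-00
```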
Figure 3-46 UpdateMetadataHistory Directory Sample Content
Working with a particular update operation's report and source backup, you can often quickly restore relationships and other changes made to your metadata while being assured that your metadata is up-to-date.