Oracle® Application Development Framework Developer's Guide For Forms/4GL Developers 10g Release 3 (10.1.3.0) Part Number B25947-02
This chapter describes advanced techniques for use in the entity objects in your business domain layer.
This chapter includes the following sections:
Section 26.1, "Creating Custom, Validated Data Types Using Domains"
Section 26.2, "Updating a Deleted Flag Instead of Deleting Rows"
Section 26.4, "Basing an Entity Object on a PL/SQL Package API"
Section 26.5, "Basing an Entity Object on a Join View or Remote DBLink"
Section 26.6, "Using Inheritance in Your Business Domain Layer"
Section 26.7, "Controlling Entity Posting Order to Avoid Constraint Violations"
Section 26.8, "Implementing Automatic Attribute Recalculation"
Note:
To experiment with a working version of the examples in this chapter, download the AdvancedEntityExamples
workspace from the Example Downloads page at http://otn.oracle.com/documentation/jdev/b25947_01/.
When you find yourself repeating the same sanity-checking validations on the values of similar attributes across multiple entity objects, you can save yourself time and effort by creating your own data types that encapsulate this validation. For example, imagine that across your business domain layer there are numerous entity object attributes that store strings that represent email addresses. One technique you could use to ensure that end-users always enter a valid email address everywhere one appears in your business domain layer is to:
Use a basic String data type for each of these attributes
Add an attribute-level method validator with Java code that ensures that the String value has the format of a valid email address for each attribute
However, this approach could quickly become tedious in a large application. Luckily, ADF Business Components offers an alternative that allows you to create your own EmailAddress
data type that represents email addresses. After centralizing all of the sanity-checking regarding email address values into this new custom data type, you can use the EmailAddress
as the type of every attribute in your application that represents an email address. By doing this, you make the intention of the attribute values more clear to other developers and simplify application maintenance by putting the validation in a single place. ADF Business Components calls these developer-created data types domains.
Note:
The examples in this section refer to the SimpleDomains
project in the AdvancedEntityExamples
workspace. See the note at the beginning of this chapter for download instructions. Run the CreateObjectType.sql
script in the Resources folder against the SRDemo
connection to set up the additional database objects required for the project.

Domains are Java classes that extend the basic data types like String
, Number
, and Date
to add constructor-time validation to ensure that the candidate value passes relevant sanity checks. They offer you a way to define custom data types with cross-cutting behavior, such as basic data type validation, formatting, and custom metadata properties, that is inherited by any entity object or view object that uses the domain as the Java type of one of its attributes.
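The core idea can be seen without any ADF infrastructure. The following is a minimal, framework-free sketch of a constructor-validated immutable type; the class name, validation regex, and exception type are illustrative stand-ins, not ADF-generated code, which would instead extend an oracle.jbo.domain base type and throw a DataCreationException:

```java
// Framework-free sketch of the domain idea: an immutable type whose
// constructor runs sanity checks, so every instance is guaranteed valid.
public class EmailAddressSketch {
    private final String value;

    public EmailAddressSketch(String candidate) {
        // Constructor-time validation: reject anything that does not look
        // like "local@domain.tld" before the instance can ever exist.
        if (candidate == null || !candidate.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) {
            throw new IllegalArgumentException("Not a valid email address: " + candidate);
        }
        this.value = candidate;
    }

    public String getValue() {
        return value;
    }
}
```

Because the check runs in the constructor, any code that holds an EmailAddressSketch reference can rely on the value already being well-formed, which is exactly the guarantee a domain gives every attribute that uses it.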
To create a domain, use the Create Domain wizard. This is available in the New Gallery in the ADF Business Components category.
In step 1, on the Name panel, specify a name for the domain and a package in which it will reside. To create a domain based on a simple Java type, leave the Domain for an Oracle Object Type checkbox unchecked.
In step 2, on the Settings panel, indicate the base type for the domain and the database column type to which it will map. For example, if you were creating a domain called ShortEmailAddress
to hold eight-character short email addresses, you would set the base type to String
and the Database Column Type to VARCHAR2(8)
. You can set other common attribute settings on this panel as well.
Then, click Finish to create your domain.
When you create a domain, JDeveloper creates its XML component definition in the subdirectory of your project's source path that corresponds to the package name you chose. For example, if you created the ShortEmailAddress
domain in the devguide.advanced.domains
package, JDeveloper would create the ShortEmailAddress.xml
file in the ./devguide/advanced/domains
subdirectory. A domain always has a corresponding Java class, which JDeveloper creates in the common
subpackage of the package where the domain resides. This means it would create the ShortEmailAddress.java
class in the devguide.advanced.domains.common
package. The domain's Java class is automatically generated with the appropriate code to behave in a way that is identical to one of the built-in data types.
Once you've created a domain in a project, it automatically appears among the list of available data types in the Attribute Type dropdown list in the entity object and view object wizards and editors as shown in Figure 26-1. To use the domain as the type of a given attribute, just pick it from the list.
Note:
The entity-mapped attributes in an entity-based view object inherit their data type from their corresponding underlying entity object attribute, so if the entity attribute uses a domain type, so will the matching view object attribute. For transient or SQL-derived view object attributes, you can directly set the type to use a domain, since it is not inherited from any underlying entity.

Typically, the only coding task you need to do for a domain is to write custom code inside the generated validate()
method. Your implementation of the validate()
method should perform your sanity checks on the candidate value being constructed, and throw a DataCreationException
in the oracle.jbo
package if the validation fails.
In order to throw an exception message that is translatable, you can create a message bundle class similar to the one shown in Example 26-1. Create it in the same common
package as your domain classes themselves. The message bundle returns an array of { MessageKeyString, TranslatableMessageString } pairs.
Example 26-1 Custom Message Bundle Class For Domain Exception Messages
package devguide.advanced.domains.common;

import java.util.ListResourceBundle;

public class ErrorMessages extends ListResourceBundle {
  public static final String INVALID_SHORTEMAIL = "30002";
  public static final String INVALID_EVENNUMBER = "30003";
  private static final Object[][] sMessageStrings =
    new String[][] {
      {INVALID_SHORTEMAIL, "A valid short email address has no @-sign or dot."},
      {INVALID_EVENNUMBER, "Number must be even."}
    };
  /**
   * Return String Identifiers and corresponding Messages
   * in a two-dimensional array.
   */
  protected Object[][] getContents() {
    return sMessageStrings;
  }
}
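For illustration, the bundle pattern can be exercised directly with plain JDK classes. The demo class below is a stand-alone copy of the pattern, not part of the generated domain code; the lookup call shows how a key resolves to its translatable text:

```java
import java.util.ListResourceBundle;

// Minimal stand-alone copy of the message bundle pattern above.
// getContents() supplies {key, message} pairs; ResourceBundle.getString()
// is how a key is resolved to its translatable text at runtime.
public class ErrorMessagesDemo extends ListResourceBundle {
    public static final String INVALID_SHORTEMAIL = "30002";

    private static final Object[][] sMessageStrings = {
        { INVALID_SHORTEMAIL, "A valid short email address has no @-sign or dot." }
    };

    protected Object[][] getContents() {
        return sMessageStrings;
    }
}
```

Using constants for the keys, as Example 26-1 does, keeps the key strings in one place so the domain code and the bundle cannot drift apart.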
Since String
is a base JDK type, a domain based on a String
aggregates a private mData String
member field to hold the value that the domain represents. Then, the class implements the DomainInterface
expected by the ADF runtime, as well as the Serializable
interface, so the domain can be used in method arguments or return types of ADF components' custom client interfaces.
Example 26-2 shows the validate()
method for a simple ShortEmailAddress
domain class. It tests to make sure that the mData
value does not contain an at-sign or a dot; if it does, the method throws a DataCreationException
referencing an appropriate message bundle and message key for the translatable error message.
Example 26-2 Simple ShortEmailAddress String-Based Domain Type with Custom Validation
public class ShortEmailAddress implements DomainInterface, Serializable {
  private String mData;
  // etc.
  /** Implements domain validation logic and throws a JboException on error. */
  protected void validate() {
    int atpos = mData.indexOf('@');
    int dotpos = mData.lastIndexOf('.');
    if (atpos > -1 || dotpos > -1) {
      throw new DataCreationException(ErrorMessages.class,
        ErrorMessages.INVALID_SHORTEMAIL, null, null);
    }
  }
  // etc.
}
Other simple domains based on a built-in type in the oracle.jbo.domain
package extend the base type as shown in Example 26-3. It illustrates the validate()
method for a simple Number-based domain called EvenNumber
that represents even numbers.
Example 26-3 Simple EvenNumber Number-Based Domain Type with Custom Validation
public class EvenNumber extends Number {
  // etc.
  /**
   * Validates that value is an even number, or else
   * throws a DataCreationException with a custom
   * error message.
   */
  protected void validate() {
    if (getValue() % 2 == 1) {
      throw new DataCreationException(ErrorMessages.class,
        ErrorMessages.INVALID_EVENNUMBER, null, null);
    }
  }
  // etc.
}
When you create a simple domain based on one of the basic data types, it is an immutable class. This just means that once you've constructed a new instance of it like this:
ShortEmailAddress email = new ShortEmailAddress("smuench");
you cannot change its value. If you want to reference a different short email address, you just construct another one:
ShortEmailAddress email = new ShortEmailAddress("bribet");
This is not a new concept since it's the same way that String
, Number
, and Date
classes behave, among others.
The Oracle database supports the ability to create user-defined types in the database. For example, you could create a type called POINT_TYPE
using the following DDL statement:
create type point_type as object ( x_coord number, y_coord number );
If you use user-defined types like POINT_TYPE
, you can create domains based on them, or you can reverse-engineer tables containing columns of object type to have JDeveloper create the domain for you.
To create a domain yourself, do the following in the Create Domain wizard:
In step 1 of the Create Domain wizard on the Name panel, check the Domain for an Oracle Object Type checkbox, then select the object type for which you want to create a domain from the Available Types list.
In step 2 on the Settings panel, use the Attribute dropdown list to switch between the multiple domain properties to adjust the settings as appropriate.
Click Finish.
In addition to manually creating object type domains, when you use the Business Components from Tables wizard and select a table containing columns of an Oracle object type, JDeveloper automatically creates domains for those object types as part of the reverse-engineering process. For example, imagine you created a table like this with a column of type POINT_TYPE
:
create table interesting_points(
  id          number primary key,
  coordinates point_type,
  description varchar2(20)
);
If you create an entity object for the INTERESTING_POINTS
table in the Business Components from Tables wizard, then you will get both an InterestingPoints
entity object and a PointType domain. The latter will have been automatically created, based on the POINT_TYPE
object type, since it was required as the data type of the Coordinates
attribute of the InterestingPoints
entity object.
Unlike simple domains, object type domains are mutable. JDeveloper generates getter and setter methods in the domain class for each of the elements in the object type's structure. When you set a modified domain as the value of a view object or entity object attribute, it is treated as a single unit: ADF does not track which domain properties have changed, only that a domain-valued attribute value has changed.
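As a framework-free illustration of this mutability, the class below mirrors the shape of a generated object-type domain for POINT_TYPE; the class and accessor names are assumed stand-ins, not the actual JDeveloper-generated PointType code:

```java
import java.math.BigDecimal;

// Sketch of a mutable object-type domain: one getter/setter pair per
// element of the object type. The whole object is what gets assigned to a
// row attribute -- element-level changes are not tracked individually.
public class PointTypeSketch {
    private BigDecimal xCoord;
    private BigDecimal yCoord;

    public PointTypeSketch(BigDecimal x, BigDecimal y) {
        this.xCoord = x;
        this.yCoord = y;
    }

    public BigDecimal getXCoord() { return xCoord; }
    public void setXCoord(BigDecimal x) { this.xCoord = x; }

    public BigDecimal getYCoord() { return yCoord; }
    public void setYCoord(BigDecimal y) { this.yCoord = y; }
}
```

The practical consequence is that after mutating such an object you still set it back on the row attribute as one value, so the framework sees a single attribute change.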
Note:
Domains based on Oracle object types are useful for working programmatically with data whose underlying type is an Oracle object type. They can also simplify passing and receiving structured information to and from stored procedures. However, support for working with object type domains in the ADF binding layer is not complete, so it's not straightforward to use object domain-valued attributes in declaratively-databound user interfaces.

After selecting a domain in the Application Navigator, you can quickly navigate to its implementation class by:
Choosing Go to Domain Class on the right-mouse context menu, or
Double-clicking on the domain class in the Structure Window
When you create a business components archive, as described in Section 25.7, "Working with Libraries of Reusable Business Components", the domain classes and message bundle files in the *.common subdirectories of your project's source path get packaged into the *CSCommon.jar
. They are classes that are common to both the middle-tier application server and to an eventual remote-client you might need to support.
You can define custom metadata properties on a domain. Any entity object or view object attribute based on that domain inherits those custom properties as if they had been defined on the attribute itself. If the entity object or view object attribute defines the same custom property, its setting takes precedence over the value inherited from the domain.
JDeveloper enforces that declarative settings you impose at the domain definition level cannot be made less restrictive in the Entity Object editor or View Object editor for an attribute based on the domain type. For example, if you define a domain to have its Updatable property set to While New, then when you use your domain as the Java type of an entity object attribute, you can set Updatable to Never (more restrictive) but you cannot set it to Always. Similarly, if you define a domain to be Persistent, you cannot make it transient later. When sensible for your application, set declarative properties for a domain to be as lenient as possible so you can later make them more restrictive as needed.
For auditing purposes, your requirements may sometimes demand that rows are never physically deleted from a table once they have been added. Instead, when the end-user deletes the row in the user interface, the value of a DELETED
column should be updated from "N
" to "Y
" to mark it as deleted. This section explains the two method overrides required to alter an entity object's default behavior to achieve this effect. The following sections assume you want to change the Product
entity from the SRDemo application to behave in this way. They presume that you've altered the PRODUCTS table to have an additional DELETED
column, and synchronized the Product
entity with the database to add the corresponding Deleted
attribute.
To update a deleted flag when a row is removed, enable a custom Java class for your entity object and override the remove()
method to set the deleted flag before calling the super.remove()
method. Example 26-4 shows what this would look like in the ProductImpl
class of the SRDemo application's Product
entity object. It is important to set the attribute before calling super.remove()
since an attempt to set the attribute of a deleted row will encounter the DeadEntityAccessException
.
Example 26-4 Updating a Deleted Flag When a Product Entity Row is Removed
// In ProductImpl.java
public void remove() {
  setDeleted("Y");
  super.remove();
}
The row will still be removed from the row set, but it will have the value of its Deleted flag modified to "Y" in the entity cache. The second part of implementing this behavior involves forcing the entity to perform an UPDATE
instead of an INSERT
when it is asked to perform its DML operation. You need to implement both parts for a complete solution.
To force an entity object to be updated instead of deleted, override the doDML()
method and write code that conditionally changes the operation
flag. When the operation flag equals DML_DELETE
, your code will change it to DML_UPDATE
instead. Example 26-5 shows what this would look like in the ProductImpl
class of the SRDemo application's Product entity object.
Example 26-5 Forcing an Update DML Operation Instead of a Delete
// In ProductImpl.java
protected void doDML(int operation, TransactionEvent e) {
  if (operation == DML_DELETE) {
    operation = DML_UPDATE;
  }
  super.doDML(operation, e);
}
With this overridden doDML()
method in place to complement the overridden remove()
method described in the previous section, any attempt to remove a Product
entity through any view object with a Product
entity usage will update the DELETED
column instead of physically deleting the row. Of course, in order to prevent "deleted" products from appearing in your view object query results, you will need to appropriately modify their WHERE clauses to include only products WHERE DELETED = 'N'
.
This section describes several advanced techniques for working with associations between entity objects.
When you need to represent a more complex relationship between entities than one based only on the equality of matching attributes, you can modify the association's SQL clause to include more complex criteria. For example, sometimes the relationship between two entities depends on effectivity dates. A ServiceRequest
may be related to a Product
; however, if the name of the product changes over time, each row in the PRODUCTS table might include additional EFFECTIVE_FROM
and EFFECTIVE_UNTIL
columns that track the range of dates in which that product row is (or was) in use. The relationship between a ServiceRequest
and the Product
with which it is associated might then be described by a combination of the matching ProdId
attributes and a condition that the service request's RequestDate
lie between the product's EffectiveFrom
and EffectiveUntil
dates.
You can set up this more complex relationship in the Association Editor. First, add any additional necessary attribute pairs on the Entity Objects page, which in this example would include one (EffectiveFrom
, RequestDate
) pair and one (EffectiveUntil
, RequestDate
) pair. Then on the Association SQL page you can edit the Where field to change the WHERE clause to be:
(:Bind_ProdId = ServiceRequest.PROD_ID) AND (ServiceRequest.REQUEST_DATE BETWEEN :Bind_EffectiveFrom AND :Bind_EffectiveUntil)
When you create a view link between two entity-based view objects, on the View Link Properties page you have the option to expose view link accessor attributes at the view object level, at the entity object level, or both. By default, a view link accessor is only exposed at the view object level of the destination view object. By checking the appropriate In Entity Object: SourceEntityName or In Entity Object: DestinationEntityName checkbox, you can opt to have JDeveloper include a view link attribute in either or both of the source or destination entity objects. This can provide a handy way for an entity object to access a set of related view rows, especially when the query to produce the rows only depends on attributes of the current row.
Each time you retrieve an entity association accessor row set, by default the entity object creates a new RowSet
object to allow you to work with the rows. This does not imply re-executing the query to produce the results each time, only creating a new instance of a RowSet
object with its default iterator reset to the "slot" before the first row. To force the row set to refresh its rows from the database, you can call its executeQuery()
method.
Since there is a small amount of overhead associated with creating the row set, if your code makes numerous calls to the same association accessor attributes, you can consider enabling association accessor row set retention for the source entity object in the association. To use the association accessor retention feature, first enable a custom Java entity collection class for your entity object. As with other custom entity Java classes you've seen, you do this on the Java panel of the Entity Object editor by selecting the Entity Collection Class checkbox. Then, in the YourEntityCollImpl class that JDeveloper creates for you, override the init()
method, and add a line after super.init()
that calls the setAssociationAccessorRetained()
method passing true
as the parameter. This setting affects all association accessor attributes for that entity object.
When this feature is enabled for an entity object, since the association accessor row set is not recreated each time, the current row of its default row set iterator is also retained as a side effect. This means that your code will need to explicitly call the reset()
method on the row set you retrieve from the association accessor to reset the current row in its default row set iterator back to the "slot" before the first row.
Note, however, that with accessor retention enabled, failing to call reset() each time before you iterate through the rows in the accessor row set can result in a subtle, hard-to-detect error in your application. For example, if you iterate over the rows in an association accessor row set like this to calculate some aggregate total:
// In ProductImpl.java
RowSet rs = (RowSet)getServiceRequests();
while (rs.hasNext()) {
  ServiceRequestImpl r = (ServiceRequestImpl)rs.next();
  // Do something important with attributes in each row
}
The first time you work with the accessor row set, the code will work. However, since the row set (and its default row set iterator) is retained, on the second and subsequent accesses the current row will already be at the end of the row set, and the while loop will be skipped since rs.hasNext()
will be false
. Instead, with this feature enabled, write your accessor iteration code like this:
// In ProductImpl.java
RowSet rs = (RowSet)getServiceRequests();
rs.reset(); // Reset default row set iterator to slot before first row!
while (rs.hasNext()) {
ServiceRequestImpl r = (ServiceRequestImpl)rs.next();
// Do something important with attributes in each row
}
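The pitfall is the same one you would hit reusing any exhausted Java iterator. A framework-free demonstration, with plain java.util types standing in for the retained RowSet and its default iterator:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Demonstrates the retained-iterator pitfall: reusing the same iterator
// without resetting it means the second pass sees no rows at all.
public class RetainedIteratorDemo {
    // Counts the remaining elements, consuming the iterator as it goes.
    public static int countOnce(Iterator<String> it) {
        int n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3");
        Iterator<String> retained = rows.iterator();
        System.out.println(countOnce(retained)); // first pass sees all rows
        System.out.println(countOnce(retained)); // second pass sees none
    }
}
```

A retained accessor row set behaves the same way, except that calling reset() on it moves its default iterator back to the slot before the first row, which plain JDK iterators cannot do.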
If you have a PL/SQL package that encapsulates insert, update, and delete access to an underlying table, you can override the default DML processing for the entity object that represents that table to invoke the procedures in your PL/SQL API instead. Often, such PL/SQL packages are used in combination with a companion database view. Client programs read data from the underlying table using the database view, and "write" data back to the table using the procedures in the PL/SQL package. This section considers the code necessary to create a Product
entity object based on such a combination of a view and a package.
Given the PRODUCTS
table in the SRDemo schema, consider a database view named PRODUCTS_V
, created using the following DDL statement:
create or replace view products_v as select prod_id,name,image,description from products;
In addition, consider the simple PRODUCTS_API
package shown in Example 26-6 that encapsulates insert, update, and delete access to the underlying PRODUCTS
table.
Example 26-6 Simple PL/SQL Package API for the PRODUCTS Table
create or replace package products_api is
  procedure insert_product(p_prod_id number, p_name varchar2,
                           p_image varchar2, p_description varchar2);
  procedure update_product(p_prod_id number, p_name varchar2,
                           p_image varchar2, p_description varchar2);
  procedure delete_product(p_prod_id number);
end products_api;
The following sections explain how to create an entity object based on the above combination of view and package.
Note:
The examples in this section refer to the EntityWrappingPLSQLPackage
project in the AdvancedEntityExamples
workspace. See the note at the beginning of this chapter for download instructions. Run the CreateAll.sql
script in the Resources folder against the SRDemo
connection to set up the additional database objects required for the project.

To create an entity object based on a view, use the Create Entity Object wizard and perform the following steps:
In step 1 on the Name panel, give the entity a name like Product
and check the Views checkbox at the bottom of the Database Objects section.
This enables the display of the available database views in the current schema in the Schema Object list.
Select the desired database view in the Schema Object list.
In step 3 on the Attribute Settings panel, use the Select Attribute dropdown list to choose the attribute that will act as the primary key, then enable the Primary Key setting for that property.
Note:
When defining the entity based on a view, JDeveloper cannot automatically determine the primary key attribute, since database views do not have related constraints in the database data dictionary.

Then click Finish.
By default, an entity object based on a view performs all of the following directly against the underlying database view:
SELECT statement (for findByPrimaryKey())
SELECT FOR UPDATE statement (for lock()), and
INSERT, UPDATE, and DELETE statements (for doDML())
The following sections first illustrate how to override the doDML()
operations, then explain how to extend that when necessary to override the lock()
and findByPrimaryKey()
handling in a second step.
If you plan to have more than one entity object based on a PL/SQL API, it's a smart idea to abstract the generic details into a base framework extension class. In doing this, you'll be using several of the concepts you learned in Chapter 25, "Advanced Business Components Techniques". Start by creating a PLSQLEntityImpl
class that extends the base EntityImpl
class; each of your PL/SQL-based entities can then use this as its base class. As shown in Example 26-7, you'll override the doDML()
method of the base class to invoke a different helper method based on the operation.
Example 26-7 Overriding doDML() to Call Different Procedures Based on the Operation
// In PLSQLEntityImpl.java
protected void doDML(int operation, TransactionEvent e) {
  // super.doDML(operation, e);
  if (operation == DML_INSERT)
    callInsertProcedure(e);
  else if (operation == DML_UPDATE)
    callUpdateProcedure(e);
  else if (operation == DML_DELETE)
    callDeleteProcedure(e);
}
In the PLSQLEntityImpl.java
base class, you can write the helper methods so that they perform the default processing like this:
// In PLSQLEntityImpl.java
/* Override in a subclass to perform non-default processing */
protected void callInsertProcedure(TransactionEvent e) {
  super.doDML(DML_INSERT, e);
}
/* Override in a subclass to perform non-default processing */
protected void callUpdateProcedure(TransactionEvent e) {
  super.doDML(DML_UPDATE, e);
}
/* Override in a subclass to perform non-default processing */
protected void callDeleteProcedure(TransactionEvent e) {
  super.doDML(DML_DELETE, e);
}
After putting this infrastructure in place, when you base an entity object on the PLSQLEntityImpl
class, you can use the Source | Override Methods menu item to override the callInsertProcedure()
, callUpdateProcedure()
, and callDeleteProcedure()
helper methods and perform the appropriate stored procedure calls for that particular entity. To simplify the task of implementing these calls, you could add the callStoredProcedure()
helper method you learned about in Chapter 25, "Invoking Stored Procedures and Functions" to the PLSQLEntityImpl
class as well. This way, any PL/SQL-based entity objects that extend this class can leverage the helper method.
To implement the stored procedure calls for DML operations, do the following:
Use the Class Extends button on the Java panel of the Entity Object Editor to set your Product
entity object to have the PLSQLEntityImpl
class as its base class.
Enable a custom Java class for the Product
entity object.
Use the Source | Override Methods menu item and select the callInsertProcedure()
, callUpdateProcedure()
, and callDeleteProcedure()
methods.
Example 26-8 shows the code you would write in these overridden helper methods.
Example 26-8 Leveraging a Helper Method to Invoke Insert, Update, and Delete Procedures
// In ProductImpl.java
protected void callInsertProcedure(TransactionEvent e) {
  callStoredProcedure("products_api.insert_product(?,?,?,?)",
    new Object[] { getProdId(), getName(), getImage(), getDescription() });
}
protected void callUpdateProcedure(TransactionEvent e) {
  callStoredProcedure("products_api.update_product(?,?,?,?)",
    new Object[] { getProdId(), getName(), getImage(), getDescription() });
}
protected void callDeleteProcedure(TransactionEvent e) {
  callStoredProcedure("products_api.delete_product(?)",
    new Object[] { getProdId() });
}
At this point, if you create a default entity-based view object called Products
for the Product
entity object and add an instance of it to a ProductModule
application module, you can quickly test inserting, updating, and deleting rows from the Products
view object instance in the Business Components Browser.
Often, overriding just the insert, update, and delete operations will be enough. The default behavior that performs the SELECT
statement for findByPrimaryKey()
and the SELECT FOR UPDATE
statement for the lock()
against the database view works for most basic kinds of views.
However, if the view is complex and does not support SELECT FOR UPDATE
or if you need to perform the findByPrimaryKey()
and lock()
functionality using additional stored procedure APIs, then you can follow the steps in the next section.
You can also handle the lock() and findByPrimaryKey() functionality of an entity object by invoking stored procedures if necessary. Imagine that the PRODUCTS_API
package were updated to contain the two additional procedures shown in Example 26-9. Both the lock_product
and select_product
procedures accept a primary key attribute as an IN
parameter and return values for the remaining attributes using OUT
parameters.
Example 26-9 Additional Locking and Select Procedures for the PRODUCTS Table
/* Added to PRODUCTS_API package */
procedure lock_product(p_prod_id number,
                       p_name OUT varchar2,
                       p_image OUT varchar2,
                       p_description OUT varchar2);
procedure select_product(p_prod_id number,
                         p_name OUT varchar2,
                         p_image OUT varchar2,
                         p_description OUT varchar2);
You can extend the PLSQLEntityImpl
base class to handle the lock()
and findByPrimaryKey()
overrides using helper methods similar to the ones you added for insert, update, and delete. At runtime, both the lock()
and findByPrimaryKey()
operations end up invoking the lower-level entity object method called doSelect(boolean lock)
. The lock()
operation calls doSelect()
with a true
value for the parameter, while the findByPrimaryKey()
operation calls it passing false
instead.
Example 26-10 shows the overridden doSelect()
method in PLSQLEntityImpl
to delegate as appropriate to two helper methods that subclasses can override as necessary.
Example 26-10 Overriding doSelect() to Call Different Procedures Based on the Lock Parameter
// In PLSQLEntityImpl.java
protected void doSelect(boolean lock) {
  if (lock) {
    callLockProcedureAndCheckForRowInconsistency();
  } else {
    callSelectProcedure();
  }
}
The two helper methods are written to just perform the default functionality in the base PLSQLEntityImpl
class:
// In PLSQLEntityImpl.java
/* Override in a subclass to perform non-default processing */
protected void callLockProcedureAndCheckForRowInconsistency() {
  super.doSelect(true);
}
/* Override in a subclass to perform non-default processing */
protected void callSelectProcedure() {
  super.doSelect(false);
}
Notice that the helper method that performs locking has the name callLockProcedureAndCheckForRowInconsistency()
. This name reminds developers that it is their responsibility, at the time of locking the row, to check whether the newly-selected row values match the values that the entity object in the entity cache believes are the current database values.
To assist subclasses in performing this old-value versus new-value attribute comparison, you can add one final helper method to the PLSQLEntityImpl
class like this:
// In PLSQLEntityImpl
protected void compareOldAttrTo(int attrIndex, Object newVal) {
  if ((getPostedAttribute(attrIndex) == null && newVal != null) ||
      (getPostedAttribute(attrIndex) != null && newVal == null) ||
      (getPostedAttribute(attrIndex) != null && newVal != null &&
       !getPostedAttribute(attrIndex).equals(newVal))) {
    throw new RowInconsistentException(getKey());
  }
}
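The null-handling in that comparison reduces to a simple rule: two values are inconsistent when exactly one is null, or when both are non-null and unequal. A framework-free restatement of the same logic, using a hypothetical helper that is not part of the ADF API:

```java
// Restates the compareOldAttrTo() null-handling as a boolean predicate:
// inconsistent when exactly one value is null, or both are non-null
// and unequal. Equal values (including both null) are consistent.
public final class RowConsistency {
    private RowConsistency() {}

    public static boolean inconsistent(Object oldVal, Object newVal) {
        if (oldVal == null && newVal == null) {
            return false; // both missing: consistent
        }
        if (oldVal == null || newVal == null) {
            return true;  // exactly one missing: inconsistent
        }
        return !oldVal.equals(newVal); // both present: compare contents
    }
}
```

Writing the predicate this way makes it easier to see that the three clauses of the original if-condition cover exactly these cases.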
With the additional infrastructure in place in the base PLSQLEntityImpl
class, you can override the callSelectProcedure()
and callLockProcedureAndCheckForRowInconsistency()
helper methods in the Product
entity object's ProductImpl
class. Since the select_product
and lock_product
procedures have OUT
arguments, as you learned in Section 25.5.4, "Calling Other Types of Stored Procedures", you need to use a JDBC CallableStatement
object to perform these invocations.
Example 26-11 shows the code required to invoke the select_product
procedure. It performs the following basic steps:
Creating a CallableStatement
for the PLSQL block to invoke.
Registering the OUT
parameters and types, by one-based bind variable position.
Setting the IN
parameter value.
Executing the statement.
Retrieving the possibly updated column values.
Populating the possibly updated attribute values in the row.
Closing the statement.
Example 26-11 Invoking the Stored Procedure to Select a Row by Primary Key
// In ProductImpl.java
protected void callSelectProcedure() {
  String stmt = "begin products_api.select_product(?,?,?,?);end;";
  // 1. Create a CallableStatement for the PLSQL block to invoke
  CallableStatement st = getDBTransaction().createCallableStatement(stmt, 0);
  try {
    // 2. Register the OUT parameters and types
    st.registerOutParameter(2, VARCHAR2);
    st.registerOutParameter(3, VARCHAR2);
    st.registerOutParameter(4, VARCHAR2);
    // 3. Set the IN parameter value
    st.setObject(1, getProdId());
    // 4. Execute the statement
    st.executeUpdate();
    // 5. Retrieve the possibly updated column values
    String possiblyUpdatedName  = st.getString(2);
    String possiblyUpdatedImage = st.getString(3);
    String possiblyUpdatedDesc  = st.getString(4);
    // 6. Populate the possibly updated attribute values in the row
    populateAttribute(NAME, possiblyUpdatedName, true, false, false);
    populateAttribute(IMAGE, possiblyUpdatedImage, true, false, false);
    populateAttribute(DESCRIPTION, possiblyUpdatedDesc, true, false, false);
  } catch (SQLException e) {
    throw new JboException(e);
  } finally {
    if (st != null) {
      try {
        // 7. Close the statement
        st.close();
      } catch (SQLException e) {
      }
    }
  }
}
Example 26-12 shows the code to invoke the lock_product
procedure. It performs basically the same steps as above, with two interesting differences:
After retrieving the possibly updated column values from the OUT
parameters, it uses the compareOldAttrTo()
helper method inherited from the PLSQLEntityImpl
to detect whether or not a RowInconsistentException
should be thrown as a result of the row lock attempt.
In the catch (SQLException e)
block, it is testing to see whether the database has thrown the error:
ORA-00054: resource busy and acquire with NOWAIT specified
and if so, it again throws the ADF Business Components AlreadyLockedException
just as the default entity object implementation of the lock()
functionality would do in this situation.
Example 26-12 Invoking the Stored Procedure to Lock a Row by Primary Key
// In ProductImpl.java
protected void callLockProcedureAndCheckForRowInconsistency() {
  String stmt = "begin products_api.lock_product(?,?,?,?);end;";
  CallableStatement st = getDBTransaction().createCallableStatement(stmt, 0);
  try {
    st.registerOutParameter(2, VARCHAR2);
    st.registerOutParameter(3, VARCHAR2);
    st.registerOutParameter(4, VARCHAR2);
    st.setObject(1, getProdId());
    st.executeUpdate();
    String possiblyUpdatedName  = st.getString(2);
    String possiblyUpdatedImage = st.getString(3);
    String possiblyUpdatedDesc  = st.getString(4);
    compareOldAttrTo(NAME, possiblyUpdatedName);
    compareOldAttrTo(IMAGE, possiblyUpdatedImage);
    compareOldAttrTo(DESCRIPTION, possiblyUpdatedDesc);
  } catch (SQLException e) {
    if (Math.abs(e.getErrorCode()) == 54) {
      throw new AlreadyLockedException(e);
    } else {
      throw new JboException(e);
    }
  } finally {
    if (st != null) {
      try {
        st.close();
      } catch (SQLException e) {
      }
    }
  }
}
With these methods in place, you have a Product
entity object that wraps the PRODUCTS_API
package for all of its database operations. Due to the clean separation of the data querying functionality of view objects and the data validation and saving functionality of entity objects, you can now leverage this Product
entity object in any way you would use a normal entity object. You can build as many different view objects that use Product
as their entity usage as necessary.
If you need to create an entity object based on either of the following:
Synonym that resolves to a remote table over a DBLINK
View with INSTEAD OF
triggers
Then you will encounter the following error if any of its attributes are marked as Refresh on Insert or Refresh on Update:
JBO-26041: Failed to post data to database during "Update"
## Detail 0 ##
ORA-22816: unsupported feature with RETURNING clause
The error says it all. These types of schema objects do not support the RETURNING
clause, which by default the entity object uses to more efficiently return the refreshed values in the same database roundtrip in which the INSERT
or UPDATE
operation was executed.
To disable the use of the RETURNING
clause for an entity object of this type, do the following:
Enable a custom entity definition class for the entity object.
In the custom entity definition class, override the createDef()
method to call:
setUseReturningClause(false)
If the Refresh on Insert attribute is the primary key of the entity object, you must identify some other attribute in the entity as an alternate unique key by setting the Unique Key property on it.
At runtime, when you have disabled the use of the RETURNING clause in this way, the entity object implements the Refresh on Insert and Refresh on Update behavior using a separate SELECT statement to retrieve the values to refresh after insert or update as appropriate.
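For reference, the createDef() override described in step 2 might look like the following sketch. The class name ProductDefImpl is illustrative; only the setUseReturningClause(false) call itself comes from the text, and the ADF Business Components libraries are assumed to be on the classpath:

```java
// Illustrative custom entity definition class for an entity based on a
// synonym over a DBLINK or a view with INSTEAD OF triggers.
public class ProductDefImpl extends oracle.jbo.server.EntityDefImpl {
    protected void createDef() {
        // Disable the RETURNING clause so that Refresh on Insert and
        // Refresh on Update are implemented with a separate SELECT.
        setUseReturningClause(false);
    }
}
```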
Inheritance is a powerful feature of object-oriented development that can simplify development and maintenance when used appropriately. As you've seen in Section 25.9, "Creating Extended Components Using Inheritance", ADF Business Components supports using inheritance to create new components that extend existing ones in order to add additional properties or behavior or modify the behavior of the parent component. This section helps you understand when inheritance can be useful in modeling the different kinds of entities in your reusable business domain layer.
Note:
The examples in this section refer to the InheritanceAndPolymorphicQueries
project in the AdvancedEntityExamples
workspace. See the note at the beginning of this chapter for download instructions. Run the AlterUsersTable.sql
script in the Resources folder against the SRDemo
connection to set up the additional database objects required for the project.
Your application's database schema might contain tables where different logical kinds of business information are stored in rows of the same table. These tables will typically have one column whose value determines the kind of information stored in each row. For example, the SRDemo application's USERS
table stores information about end-users, technicians, and managers in the same table. It contains a USER_ROLE
column whose value — user
, technician
, or manager
— determines what kind of user the row represents.
While the simple SRDemo application implementation doesn't yet contain this differentiation in this release, it's reasonable to assume that a future release of the application might require:
Managing additional database-backed attributes that are specific to managers or specific to technicians
Implementing common behavior for all users that is different for managers or technicians
Implementing new functionality that is specific to only managers or only technicians
Figure 26-2 shows what the business domain layer would look like if you created distinct User
, Manager
, and Technician
entity objects to allow distinguishing the different kinds of business information in a more formal way inside your application. Since technicians and managers are special kinds of users, their corresponding entity objects would extend the base User
entity object. This base User
entity object contains all of the attributes and methods that are common to all types of users. The performUserFeature()
method in the figure represents one of these common methods.
Then, for the Manager
and Technician
entity objects you can add specific additional attributes and methods that are unique to that kind of user. For example, in the figure, Manager
has an additional NextReview
attribute of type Date
to track when the manager must next review his employees. There is also a performManagerFeature()
method that is specific to managers. Similarly, the Technician
entity object has an additional Certified
attribute to track whether the technician has completed training certification. The performTechnicianFeature()
is a method that is specific to technicians. Finally, also note that since expertise areas only are relevant for technicians, the association between "users" and expertise levels is defined between Technician
and ExpertiseArea
.
By modeling these different kinds of users as distinct entity objects in an inheritance hierarchy in your domain business layer, you can simplify having them share common data and behavior and implement the aspects of the application that make them distinct.
To create entity objects in an inheritance hierarchy, you use the Create Entity Object wizard to create each entity following the steps outlined in the sections below. The example described here assumes that you've altered the SRDemo application's USERS
table by executing the following DDL statement to add two new columns to it:
alter table users add ( certified varchar2(1), next_review date );
Before creating entity objects in an inheritance hierarchy based on table containing different kinds of information, you should first identify which column in the table is used to distinguish the kind of row it is. In the SRDemo application's USERS
table, this is the USER_ROLE
column. Since it helps partition or "discriminate" the rows in the table into separate groups, this column is known as the discriminator column.
Next, determine the valid values that the discriminator column takes on in your table. You might know this off the top of your head, or you could execute a simple SQL statement in the JDeveloper SQL Worksheet to determine the answer. To access the worksheet:
Choose View | Connection Navigator.
Expand the Database folder and select the SRDemo
connection.
Choose SQL Worksheet from the right-mouse context menu.
Figure 26-3 shows the results of performing a SELECT DISTINCT
query in the SQL Worksheet on the USER_ROLE
column in the USERS
table. It confirms that the rows are partitioned into three groups based on the USER_ROLE
discriminator values: user
, technician
, and manager
.
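The distinct-values query behind Figure 26-3 would look something like this (table and column names from the text):

```sql
SELECT DISTINCT user_role
FROM   users;
```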
Once you know how many different kinds of business entities are stored in the table, you will also know how many entity objects to create to model these distinct items. You'll typically create one entity object per kind of item. Next, in order to help determine which entity should act as the base of the hierarchy, you need to determine which subset of attributes is relevant to each kind of item.
Using the example above, assume you determine that all of the attributes except Certified
and NextReview
are relevant to all users, that Certified
is specific to technicians, and that NextReview
is specific to managers. This information leads you to determine that the User entity object should be the base of the hierarchy, with the Manager and Technician entity objects each extending User to add their specific attributes.
To create the base entity object in an inheritance hierarchy, use the Create Entity Object wizard and follow these steps:
In step 1 on the Name panel, provide a name and package for the entity, and select the schema object on which the entity will be based.
For example, name the entity object User
and base it on the USERS
table.
In step 2 on the Attributes panel, select the attributes in the Entity Attributes list that are not relevant to the base entity object (if any) and click Remove to remove them.
For example, remove the Certified
and NextReview
attributes from the list.
In step 3 on the Attribute Settings panel, use the Select Attribute dropdown list to choose the attribute that will act as the discriminator for the family of inherited entity objects and check the Discriminator checkbox to identify it as such. Importantly, you must also supply a Default Value for this discriminator attribute to identify rows of this base entity type.
For example, select the UserRole
attribute, mark it as a discriminator attribute, and set its Default Value to the value "user
".
Note:
Leaving the Default Value blank for a discriminator attribute is legal. A blank default value means that a row whose discriminator column value IS NULL will be treated as this base entity type.
Then click Finish to create the entity object.
To create a subtype entity object in an inheritance hierarchy, first do the following:
Determine the entity object that will be the parent entity object from which your new entity object will extend.
For example, the parent entity for a new Manager
entity object will be the User
entity created above.
Ensure that the parent entity has a discriminator attribute already identified.
The base type must already have the discriminator attribute identified as described in the section above. If it does not, use the Entity Object editor to set the Discriminator property on the appropriate attribute of the parent entity before creating the inherited child.
Then, use the Create Entity Object wizard and follow these steps to create the new subtype entity object in the hierarchy:
In step 1 on the Name panel, provide a name and package for the entity, and click the Browse button next to the Extends field to select the parent entity from which the entity being created will extend.
For example, name the new entity Manager
and select the User
entity object in the Extends field.
In step 2 on the Attributes panel, use the New from Table button to add the attributes corresponding to the underlying table columns that are specific to this new entity subtype.
For example, select the NEXT_REVIEW
column to add a corresponding NextReview
attribute to the Manager
entity object.
Still on step 2, use the Override button to override the discriminator attribute so that you can customize the attribute metadata to supply a distinct Default Value for the Manager
subtype.
For example, override the UserRole attribute.
In step 3 on the Attribute Settings panel, use the Select Attribute dropdown list to choose the discriminator attribute. Importantly, you must change the Default Value field to supply a distinct default value for the discriminator attribute that defines the entity subtype being created.
For example, select the UserRole
attribute and change its Default Value to the value "manager
".
Then click Finish to create the subtype entity object.
Note:
You can repeat the same steps to define the Technician
entity that extends User
to add the additional Certified
attribute and overrides the Default Value of the UserRole
discriminator attribute to have the value "technician
".To add methods to entity objects in an inheritance hierarchy, enable the custom Java class for the entity in question and visit the code editor to add the method.
To add a method that is common to all entity objects in the hierarchy, enable a custom Java class for the base entity object in the hierarchy and add the method in the code editor. For example, if you add the following method to the UserImpl
class for the base User
entity object, it will be inherited by all entity objects in the hierarchy:
// In UserImpl.java
public void performUserFeature() {
  System.out.println("## performUserFeature as User");
}
To override a method in a subtype entity that is common to all entity objects in the hierarchy, enable a custom Java class for the subtype entity and choose Source | Override Methods to launch the Override Methods dialog. Select the method you want to override, and click OK. Then, customize the overridden method's implementation in the code editor. For example, imagine overriding the performUserFeature()
method in the ManagerImpl
class for the Manager
subtype entity object and changing the implementation to look like this:
// In ManagerImpl.java
public void performUserFeature() {
  System.out.println("## performUserFeature as Manager");
}
When working with instances of entity objects in a subtype hierarchy, sometimes you will process instances of multiple different subtypes. Since Manager
and Technician
entities are special kinds of User
, you can write code that works with all of them using the more generic UserImpl
type that they all have in common. When doing this generic kind of processing of classes that might be one of a family of subtypes in a hierarchy, Java will always invoke the most specific override of a method available.
This means that invoking the performUserFeature()
method on an instance of UserImpl
that happens to really be the more specific ManagerImpl
subtype will result in printing the following:
## performUserFeature as Manager
instead of the default result that regular UserImpl
instances would get:
## performUserFeature as User
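This dynamic dispatch behavior is plain Java and can be illustrated outside the framework. The sketch below is illustrative only — these are not the generated ADF classes, and the method returns its message (rather than printing it) simply so the behavior is easy to verify:

```java
public class DispatchDemo {
    static class UserImpl {
        public String performUserFeature() {
            return "## performUserFeature as User";
        }
    }

    static class ManagerImpl extends UserImpl {
        @Override
        public String performUserFeature() {
            return "## performUserFeature as Manager";
        }
    }

    public static void main(String[] args) {
        // Static type UserImpl, dynamic type ManagerImpl: Java invokes
        // the most specific override available.
        UserImpl user = new ManagerImpl();
        System.out.println(user.performUserFeature());
    }
}
```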
To add a method that is specific to a subtype entity object in the hierarchy, enable a custom Java class for that entity and add the method in the code editor. For example, you could add the following method to the ManagerImpl
class for the Manager
subtype entity object:
// In ManagerImpl.java
public void performManagerFeature() {
  System.out.println("## performManagerFeature called");
}
In the example above, the User
entity object corresponded to a concrete kind of row in the USERS
table and it also played the role of the base entity in the hierarchy. In other words, all of its attributes were common to all entity objects in the hierarchy. You might wonder what would happen, however, if the User
entity required a property that was specific to users, but not common to managers or technicians. Imagine that end-users can participate in customer satisfaction surveys, but that managers and technicians do not. The User
entity would require a LastSurveyDate
attribute to handle this requirement, but it wouldn't make sense to have Manager
and Technician
entity objects inherit it.
In this case, you can introduce a new entity object called BaseUser
to act as the base entity in the hierarchy. It would have all of the attributes common to all User
, Manager
, and Technician
entity objects. Then each of the three entities that correspond to concrete rows that appear in the table could have some attributes that are inherited from BaseUser
and some that are specific to its subtype. In the BaseUser
type, so long as you mark the UserRole
attribute as a discriminator attribute, you can just leave the Default Value blank (or some other value that does not occur in the USER_ROLE
column in the table). Because at runtime you'll never be using instances of the BaseUser
entity, it doesn't really matter what its discriminator default value is.
When you use the findByPrimaryKey()
method on an entity definition, it only searches the entity cache for the entity object type on which you call it. In the example above, this means that if you call UserImpl.getDefinitionObject()
to access the entity definition for the User
entity object when you call findByPrimaryKey()
on it, you will only find entities in the cache that happen to be users. Sometimes this is exactly the behavior you want. However, if you want to find an entity object by primary key allowing the possibility that it might be a subtype in an inheritance hierarchy, then you can use the EntityDefImpl
class' findByPKExtended()
method instead. In the User
example described here, this alternative finder method would find an entity object by primary key whether it is a User
, Manager
, or Technician
. You can then use the Java instanceof
operator to test which type you found, and then cast the UserImpl
object to the more specific entity object type in order to work with features specific to that subtype.
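A sketch of that pattern appears below. It is illustrative only: it assumes ADF imports, that the code runs where getDBTransaction() is available, and that a Key value identifying the row is in scope; the exact findByPKExtended() signature is assumed to mirror findByPrimaryKey():

```java
// Illustrative only: find a row by primary key across the whole
// User/Manager/Technician hierarchy, then branch on the concrete subtype.
EntityDefImpl userDef = UserImpl.getDefinitionObject();
EntityImpl row = userDef.findByPKExtended(getDBTransaction(), key);
if (row instanceof ManagerImpl) {
    // Safe to cast and use manager-specific behavior
    ((ManagerImpl) row).performManagerFeature();
} else if (row instanceof TechnicianImpl) {
    ((TechnicianImpl) row).performTechnicianFeature();
} else if (row != null) {
    ((UserImpl) row).performUserFeature();
}
```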
When you create an entity-based view object with an entity usage corresponding to a base entity object in an inheritance hierarchy, you can configure the view object to query rows corresponding to multiple different subtypes in the base entity's subtype hierarchy. Each row in the view object will use the appropriate subtype entity object as the entity row part, based on matching the value of the discriminator attribute. See Section 27.6.2, "How To Create a View Object with a Polymorphic Entity Usage" for specific instructions on setting up and using these view objects.
Due to database constraints, when you perform DML operations to save changes to a number of related entity objects in the same transaction, the order in which the operations are performed can be significant. If you try to insert a new row containing foreign key references before inserting the row being referenced, the database can complain with a constraint violation. This section helps you understand the default order for processing of entity objects during commit time and how to programmatically influence that order when necessary.
Note:
The examples in this section refer to the ControllingPostingOrder
project in the AdvancedEntityExamples
workspace. See the note at the beginning of this chapter for download instructions.
By default, when you commit the transaction, the entity objects in the pending changes list are processed in chronological order, that is, the order in which the entities were added to the list. This means that, for example, if you create a new ServiceRequest
and then a new Product
related to that service request, the new ServiceRequest
will be inserted first and the new Product
second.
When two entity objects are related by a composition, the strict chronological ordering is modified automatically to ensure that composed parent and child entity rows are saved in an order that avoids violating any constraints. This means, for example, that a new parent entity row is inserted before any new composed children entity rows.
If your related entities are associated but not composed, then you need to write a bit of code to ensure that the related entities get saved in the appropriate order.
Consider the newServiceRequestForNewProduct()
custom method from an ExampleModule
application module in Example 26-13. It accepts a set of parameters and:
Creates a new ServiceRequest
.
Creates a new Product
.
Sets the product id to which the service request pertains.
Commits the transaction.
Constructs a Result
Java bean to hold new product ID and service request ID.
Returns the result.
Note:
The code assumes that both ServiceRequest.SvrId
and Product.ProdId
have been set to the DBSequence data type to populate their primary keys from a sequence.
Example 26-13 Creating a New ServiceRequest Then a New Product and Returning the New Ids
// In ExampleModuleImpl.java
public Result newServiceRequestForNewProduct(String prodName, String prodDesc,
                                             String problemDesc, Number customerId) {
  // 1. Create a new ServiceRequest
  ServiceRequestImpl newSR = createNewServiceRequest();
  // 2. Create a new Product
  ProductImpl newProd = createNewProduct();
  newProd.setName(prodName);
  newProd.setDescription(prodDesc);
  // 3. Set the product id to which service request pertains
  newSR.setProdId(newProd.getProdId().getSequenceNumber());
  newSR.setProblemDescription(problemDesc);
  newSR.setCreatedBy(customerId);
  // 4. Commit the transaction
  getDBTransaction().commit();
  // 5. Construct a bean to hold new product id and SR id
  Result result = new Result();
  result.setSvrId(newSR.getSvrId().getSequenceNumber());
  result.setProdId(newProd.getProdId().getSequenceNumber());
  // 6. Return the result
  return result;
}
private ServiceRequestImpl createNewServiceRequest() {
  EntityDefImpl srDef = ServiceRequestImpl.getDefinitionObject();
  return (ServiceRequestImpl)srDef.createInstance2(getDBTransaction(),null);
}
private ProductImpl createNewProduct() {
  EntityDefImpl srDef = ProductImpl.getDefinitionObject();
  return (ProductImpl)srDef.createInstance2(getDBTransaction(),null);
}
If you add this method to the application module's client interface and test it from a test client program, you get an error:
oracle.jbo.DMLConstraintException: JBO-26048: Constraint "SVR_PRD_FK" violated
during post operation: "Insert" using SQL Statement
"BEGIN INSERT INTO SERVICE_REQUESTS(
   SVR_ID,STATUS,REQUEST_DATE,
   PROBLEM_DESCRIPTION,PROD_ID,CREATED_BY)
 VALUES (?,?,?,?,?,?)
 RETURNING SVR_ID INTO ?; END;".
## Detail 0 ##
java.sql.SQLException: ORA-02291: integrity constraint (SRDEMO.SVR_PRD_FK)
violated - parent key not found
The database complains when the SERVICE_REQUESTS
row is inserted that the value of its PROD_ID
foreign key doesn't correspond to any row in the PRODUCTS
table. This occurred because:
The code created the ServiceRequest
before the Product
The ServiceRequest and Product entity objects are associated but not composed
The DML operations to save the new entity rows are done in chronological order, so the new ServiceRequest
gets inserted before the new Product
.
To remedy the problem, you could reorder the lines of code in the example to create the Product
first, then the ServiceRequest
. While this would address the immediate problem, it still leaves the chance that another application developer could create the entities in an incorrect order.
The better solution is to make the entity objects themselves handle the posting order so it will work correctly regardless of the order of creation. To do this you need to override the postChanges()
method in the entity that contains the foreign key attribute referencing the associated entity object and write code as shown in Example 26-14. In this example, since it is the ServiceRequest
that contains the foreign key to the Product
entity, you need to update the ServiceRequest
to conditionally force a related, new Product
to post before the service request posts itself.
The code tests whether the entity being posted is in the STATUS_NEW
or STATUS_MODIFIED
state. If it is, it retrieves the related product using the getProduct()
association accessor. If the related Product
also has a post-state of STATUS_NEW
, then first it calls postChanges()
on the related parent row before calling super.postChanges()
to perform its own DML.
Example 26-14 Overriding postChanges() in ServiceRequestImpl to Post Product First
// In ServiceRequestImpl.java
public void postChanges(TransactionEvent e) {
  /* If current entity is new or modified */
  if (getPostState() == STATUS_NEW || getPostState() == STATUS_MODIFIED) {
    /* Get the associated product for the service request */
    ProductImpl product = getProduct();
    /* If there is an associated product */
    if (product != null) {
      /* And if its post-status is NEW */
      if (product.getPostState() == STATUS_NEW) {
        /*
         * Post the product first, before posting this
         * entity by calling super below
         */
        product.postChanges(e);
      }
    }
  }
  super.postChanges(e);
}
If you were to re-run the example now, you would see that without changing the creation order in the newServiceRequestForNewProduct()
method's code, entities now post in the correct order — first new Product
, then new ServiceRequest
. Yet, there is still a problem. The constraint violation still appears, but now for a different reason!
If the primary key for the Product
entity object were user-assigned, then the code in Example 26-14 would be all that is required to address the constraint violation by correcting the post ordering.
Note:
An alternative to the programmatic technique discussed above, which solves the problem at the J2EE application layer, is the use of deferrable constraints at the database layer. If you have control over your database schema, consider defining (or altering) your foreign key constraints to be DEFERRABLE INITIALLY DEFERRED
. This causes the database to defer checking the constraint until transaction commit time. This allows the application to perform DML operations in any order provided that by COMMIT
time all appropriate related rows have been saved, and would alleviate the parent/child ordering problem described above. However, you would still need to write the code described in the following sections to cascade-update the foreign key values if the parent's primary key is assigned from a sequence.
In this example, however, the Product.ProdId
is assigned from a database sequence rather than user-assigned. So when a new Product
entity row gets posted its ProdId
attribute is refreshed to reflect the database-assigned sequence value. The foreign key value in the ServiceRequest.ProdId
attribute referencing the new product is "orphaned" by this refreshing of the product's ID value. When the service request row is saved, its PROD_ID
value still doesn't match a row in the PRODUCTS
table, and the constraint violation occurs again. The next two sections discuss the solution to address this "orphaning" problem.
Recall from Section 6.6.3.8, "Trigger-Assigned Primary Key Values from a Database Sequence" that when an entity object's primary key attribute is of DBSequence
type, during the transaction in which it is created, its numerical value is a unique, temporary negative number. If you create a number of associated entities in the same transaction, the relationships between them are based on this temporary negative key value. When the entity objects with DBSequence
-value primary keys are posted, their primary key is refreshed to reflect the correct database-assigned sequence number, leaving the associated entities that are still holding onto the temporary negative foreign key value "orphaned".
For entity objects based on a composition, when the parent entity object's DBSequence
-valued primary key is refreshed, the composed children entity rows automatically have their temporary negative foreign key value updated to reflect the owning parent's refreshed, database-assigned primary key. This means that for composed entities, the "orphaning" problem does not occur.
However, when entity objects are related by an association that is not a composition, you need to write a little code to ensure that related entity rows referencing the temporary negative number get updated to have the refreshed, database-assigned primary key value. The next section outlines the code required.
When an entity like Product
in this example has a DBSequence
-valued primary key, and it is referenced as a foreign key by other entities that are associated with (but not composed by) it, you need to override the postChanges()
method as shown in Example 26-15 to save a reference to the row set of entity rows that might be referencing this new Product
row. If the status of the current Product
row is New, then the code assigns the RowSet
-valued return of the getServiceRequest()
association accessor to the newServiceRequestsBeforePost member field before calling super.postChanges()
.
Example 26-15 Saving Reference to Entity Rows Referencing this New Product
// In ProductImpl.java
RowSet newServiceRequestsBeforePost = null;

public void postChanges(TransactionEvent TransactionEvent) {
  /* Only bother to update references if Product is a NEW one */
  if (getPostState() == STATUS_NEW) {
    /*
     * Get a rowset of service requests related
     * to this new product before calling super
     */
    newServiceRequestsBeforePost = (RowSet)getServiceRequest();
  }
  super.postChanges(TransactionEvent);
}
This saved RowSet
is then used by the overridden refreshFKInNewContainees()
method shown in Example 26-16. It gets called to allow a new entity row to cascade update its refreshed primary key value to any other entity rows that were referencing it before the call to postChanges()
. It iterates over the ServiceRequestImpl
rows in the newServiceRequestsBeforePost row set (if non-null) and sets the ProdId value of each one to the sequence-assigned value of the newly-posted Product
entity.
Example 26-16 Cascade Updating Referencing Entity Rows with New ProdId Value
// In ProductImpl.java
protected void refreshFKInNewContainees() {
  if (newServiceRequestsBeforePost != null) {
    Number newProdId = getProdId().getSequenceNumber();
    /*
     * Process the rowset of service requests that referenced
     * the new product prior to posting, and update their
     * ProdId attribute to reflect the refreshed ProdId value
     * that was assigned by a database sequence during posting.
     */
    while (newServiceRequestsBeforePost.hasNext()) {
      ServiceRequestImpl svrReq =
        (ServiceRequestImpl)newServiceRequestsBeforePost.next();
      svrReq.setProdId(newProdId);
    }
    closeNewServiceRequestRowSet();
  }
}
After implementing this change, the code in Example 26-13 runs without encountering any database constraint violations.
Section 6.10, "Adding Transient and Calculated Attributes to an Entity Object" explained how to add calculated attributes to an entity object. Often the formula for the calculated value will depend on other attribute values in the entity. For example, consider a LineItem
entity object representing the line item of an order. The LineItem
might have attributes like Price
and Quantity
. You might introduce a calculated attribute named ExtendedTotal
which you calculate by multiplying the price times the quantity. When either the Price
or Quantity
attributes is modified, you might expect the calculated attribute ExtendedTotal
to be updated to reflect the new extended total, but this does not happen automatically. Unlike a spreadsheet, the entity object does not have any built-in expression evaluation engine that understands what attributes your formula depends on.
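To make the staleness problem concrete, here is a plain-Java sketch (not actual ADF EntityImpl code; the LineItem, Price, and Quantity names follow the example above) contrasting a stored snapshot of the extended total with a value derived fresh in the getter:

```java
import java.math.BigDecimal;

// Illustrative sketch only: shows why a *stored* calculated value goes
// stale while a value derived on every read stays current.
class LineItem {
    private BigDecimal price = BigDecimal.ZERO;
    private int quantity = 0;
    // A snapshot taken when last computed; it does NOT track later
    // changes to Price or Quantity -- this is the staleness problem.
    private BigDecimal staleExtendedTotal = BigDecimal.ZERO;

    public void setPrice(BigDecimal p) { price = p; }
    public void setQuantity(int q)     { quantity = q; }

    // Store the product once; nothing updates it afterwards
    public void snapshotExtendedTotal() {
        staleExtendedTotal = price.multiply(BigDecimal.valueOf(quantity));
    }
    public BigDecimal getStaleExtendedTotal() { return staleExtendedTotal; }

    // Deriving the value on every read keeps it consistent automatically
    public BigDecimal getExtendedTotal() {
        return price.multiply(BigDecimal.valueOf(quantity));
    }
}
```

If the quantity changes after the snapshot is taken, the stored value still reports the old total while the derived getter reflects the change, which is the gap the recalculation facility is designed to close.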
To address this limitation, you can write code in a framework extension class for entity objects that adds a recalculation facility. The SREntityImpl
framework extension class in the SRDemo application contains the code shown in Example 26-17 that does this. It does not try to implement a sophisticated expression evaluator. Instead, it leverages the custom properties mechanism to allow a developer to supply a declarative "hint" about which attributes (e.g. X
, Y
, and Z
) should be recalculated when another attribute like A
gets changed.
To leverage the generic facility, the developer of an entity object:
Bases the entity on the framework extension class containing this additional code,
Defines one or more entity-level custom properties that follow a particular naming pattern. These indicate to the generic code which attributes should get recalculated when a particular other attribute changes.
To indicate that "when attribute A changes, recalculate attributes X, Y, and Z", the developer would add a custom property named Recalc_A with the comma-separated value "X,Y,Z".
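The naming convention itself can be sketched in plain Java, independent of any ADF classes (the RecalcConvention class and expandChangedAttrs method here are illustrative helpers, not part of the framework):

```java
import java.util.*;

// Illustrative sketch of the "Recalc_<Attr>" naming convention: given an
// entity's custom properties and the set of changed attribute names,
// compute the full set of attributes to notify, including dependents
// listed in any matching Recalc_ property.
class RecalcConvention {
    private static final String RECALC_PREFIX = "Recalc_";

    public static Set<String> expandChangedAttrs(Map<String, String> customProps,
                                                 Set<String> changed) {
        Set<String> result = new LinkedHashSet<>(changed);
        for (Map.Entry<String, String> e : customProps.entrySet()) {
            String name = e.getKey();
            if (name.startsWith(RECALC_PREFIX)) {
                // Attribute whose change triggers recalculation of others
                String trigger = name.substring(RECALC_PREFIX.length());
                if (changed.contains(trigger)) {
                    // Comma-separated list of dependent attribute names
                    for (String dep : e.getValue().split(",")) {
                        result.add(dep.trim());
                    }
                }
            }
        }
        return result;
    }
}
```

With a property map of {"Recalc_A": "X,Y,Z"}, a change to A expands to notifications for A, X, Y, and Z, while a change to an unrelated attribute passes through untouched.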
To implement the functionality the SREntityImpl
class overrides the notifyAttributesChanged()
method. This method gets invoked whenever the values of entity object attributes change. As arguments, the method receives two arrays:
An int[] of attribute index numbers whose values have changed
An Object[] containing the new values for those attributes
The code does the following basic steps:
Iterates over the set of custom entity properties.
If a property name starts with the "Recalc_" prefix, gets the substring that follows it; this names the attribute whose change should trigger recalculation of others.
Determines the index of the recalc-triggering attribute.
If the array of changed attribute indexes includes the index of the recalc-triggering attribute, tokenizes the comma-separated value of the property to find the names of the attributes to recalculate.
If there are any attributes to recalculate, adds their attribute indexes to a new int[] of attributes whose values have changed. The new array is created by copying the existing elements of the attrIndices array into a new array, then appending the additional attribute index numbers.
Calls the superclass implementation with the possibly expanded array of changed attributes.
Example 26-17 Entity Framework Extension Code to Automatically Recalculate Derived Attributes
// In SREntityImpl.java
protected void notifyAttributesChanged(int[] attrIndices, Object[] values) {
  int attrIndexCount = attrIndices.length;
  EntityDefImpl def = getEntityDef();
  HashMap eoProps = def.getPropertiesMap();
  if (eoProps != null && eoProps.size() > 0) {
    Iterator iter = eoProps.keySet().iterator();
    ArrayList otherAttrIndices = null;
    // 1. Iterate over the set of custom entity properties
    while (iter.hasNext()) {
      String curPropName = (String)iter.next();
      if (curPropName.startsWith(RECALC_PREFIX)) {
        // 2. If property name starts with "Recalc_", get the attr name that follows
        String changingAttrNameToCheck = curPropName.substring(PREFIX_LENGTH);
        // 3. Get the index of the recalc-triggering attribute
        int changingAttrIndexToCheck =
          def.findAttributeDef(changingAttrNameToCheck).getIndex();
        if (isAttrIndexInList(changingAttrIndexToCheck,attrIndices)) {
          // 4. If list of changed attrs includes recalc-triggering attr,
          //    then tokenize the comma-separated value of the property
          //    to find the names of the attributes to recalculate
          String curPropValue = (String)eoProps.get(curPropName);
          StringTokenizer st = new StringTokenizer(curPropValue,",");
          if (otherAttrIndices == null) {
            otherAttrIndices = new ArrayList();
          }
          while (st.hasMoreTokens()) {
            String attrName = st.nextToken();
            int attrIndex = def.findAttributeDef(attrName).getIndex();
            if (!isAttrIndexInList(attrIndex,attrIndices)) {
              Integer intAttr = new Integer(attrIndex);
              if (!otherAttrIndices.contains(intAttr)) {
                otherAttrIndices.add(intAttr);
              }
            }
          }
        }
      }
    }
    if (otherAttrIndices != null && otherAttrIndices.size() > 0) {
      // 5. If there were any attributes to recalculate, add their attribute
      //    indexes to the int[] of attributes whose values have changed
      int extraAttrsToAdd = otherAttrIndices.size();
      int[] newAttrIndices = new int[attrIndexCount + extraAttrsToAdd];
      Object[] newValues = new Object[attrIndexCount + extraAttrsToAdd];
      System.arraycopy(attrIndices,0,newAttrIndices,0,attrIndexCount);
      System.arraycopy(values,0,newValues,0,attrIndexCount);
      for (int z = 0; z < extraAttrsToAdd; z++) {
        int extraAttrIndex = ((Integer)otherAttrIndices.get(z)).intValue();
        newAttrIndices[attrIndexCount+z] = extraAttrIndex;
        newValues[attrIndexCount+z] = getAttribute(extraAttrIndex);
      }
      attrIndices = newAttrIndices;
      values = newValues;
    }
  }
  // 6. Call the super with the possibly updated array of changed attributes
  super.notifyAttributesChanged(attrIndices, values);
}
The ServiceHistory
entity object in the SRDemo application uses this feature by setting a custom entity property named Recalc_SvhType
with the value Hidden
. This way, anytime the value of the SvhType attribute is changed, the value of the calculated Hidden
attribute is recalculated.
ADF Business Components comes with a base set of built-in declarative validation rules that you can use. However, the most powerful feature of the validator architecture for entity objects is that you can create your own custom validation rules. When you notice that you or your team are writing the same kind of validation code over and over, you can build a custom validation rule class that captures this common validation "pattern" in a parameterized way. Once you've defined a custom validation rule class, you can register it in JDeveloper so that it is as simple to use as any of the built-in rules. In fact, as you see in the following sections, you can even bundle your custom validation rule with a custom UI panel that JDeveloper will leverage automatically, making it easy for developers to use and configure the parameters your validation rule might require.
To write a custom validation rule for entity objects, create a Java class that implements the JboValidatorInterface
in the oracle.jbo.rules
package. As shown in Example 26-18, this interface contains one main validate()
method, and a getter and setter method for a Description
property.
Example 26-18 All Validation Rules Must Implement the JboValidatorInterface
package oracle.jbo.rules;

public interface JboValidatorInterface {
  void validate(JboValidatorContext valCtx);
  java.lang.String getDescription();
  void setDescription(String description);
}
If the behavior of your validation rule will be parameterized to make it more flexible, then add additional bean properties to your validator class for each parameter. For example, the SRDemo application contains a custom validation rule called DateMustComeAfterRule
which validates that one date attribute must come after another date attribute. To allow developers using the rule to configure the names of the date attributes to use as the initial and later dates for validation, this class defines two properties initialDateAttrName
and laterDateAttrName
.
Example 26-19 shows the code that implements the custom validation rule. It extends the AbstractValidator
to inherit support for working automatically with the entity object's custom message bundle, where JDeveloper will automatically save the validation error message when a developer uses the rule on one of their entity objects.
The validate()
method of the validation rule gets invoked at runtime whenever the rule class should perform its functionality. The code performs the following basic steps:
Ensures the validator is correctly attached at the entity level.
Gets the entity row being validated.
Gets the values of the initial and later date attributes.
Validates that the initial date is before the later date.
Throws a validation exception if the check fails.
Example 26-19 Custom DateMustComeAfterRule in the SRDemo Application
package oracle.srdemo.model.frameworkExt.rules;
// NOTE: Imports omitted
public class DateMustComeAfterRule extends AbstractValidator
                                   implements JboValidatorInterface {
  /**
   * This method is invoked by the framework when the
   * validator should do its job.
   */
  public void validate(JboValidatorContext valCtx) {
    // 1. If validator is correctly attached at the entity level...
    if (validatorAttachedAtEntityLevel(valCtx)) {
      // 2. Get the entity row being validated
      EntityImpl eo = (EntityImpl)valCtx.getSource();
      // 3. Get the values of the initial and later date attributes
      Date initialDate = (Date)eo.getAttribute(getInitialDateAttrName());
      Date laterDate   = (Date)eo.getAttribute(getLaterDateAttrName());
      // 4. Validate that initial date is before later date
      if (!validateValue(initialDate, laterDate)) {
        // 5. Throw the validation exception
        RulesBeanUtils.raiseException(getErrorMessageClass(),
                                      getErrorMsgId(),
                                      valCtx.getSource(),
                                      valCtx.getSourceType(),
                                      valCtx.getSourceFullName(),
                                      valCtx.getAttributeDef(),
                                      valCtx.getNewValue(),
                                      null, null);
      }
    }
    else {
      throw new RuntimeException("Rule must be at entity level");
    }
  }
  /**
   * Validate that the initialDate comes before the laterDate.
   */
  private boolean validateValue(Date initialDate, Date laterDate) {
    return (initialDate == null) || (laterDate == null)
        || (initialDate.compareTo(laterDate) < 0);
  }
  /**
   * Return true if validator is attached to entity object
   * level at runtime.
   */
  private boolean validatorAttachedAtEntityLevel(JboValidatorContext ctx) {
    return ctx.getOldValue() instanceof EntityImpl;
  }
  // NOTE: Getter/Setter Methods omitted
  private String description;
  private String initialDateAttrName;
  private String laterDateAttrName;
}
For easier reuse of your custom validation rules, you would typically package them into a JAR file for reference by applications that make use of the rules. In the SRDemo application, the FrameworkExtensions
project contains a DateMustComeAfterRule.deploy deployment profile that packages the rule class into a JAR file named DateMustComeAfterRule.jar
for use at runtime and design time.
Since a validation rule class is a bean, you can implement a standard JavaBean customizer class to improve the design time experience of setting the bean properties. In the example of the DateMustComeAfter
rule in the previous section, the two properties developers will need to configure are the initialDateAttrName
and laterDateAttrName
properties.
Figure 26-4 illustrates using JDeveloper's visual designer for Swing to create a DateMustComeAfterRuleCustomizer
using a JPanel
with a titled border containing two JLabel prompts and two JComboBox
controls for the dropdown lists. The code in the class populates the dropdown lists with the names of the Date-valued attributes of the current entity object being edited in the IDE. This will allow a developer who adds a DateMustComeAfterRule
validation to their entity object to easily pick which date attributes should be used for the starting and ending dates for validation.
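A much-simplified sketch of such a panel follows. The DateRangePanel class and its constructor argument are illustrative only: the real DateMustComeAfterRuleCustomizer also implements the java.beans.Customizer interface and discovers the Date-valued attribute names from the entity definition being edited in the IDE, rather than taking them as a parameter.

```java
import javax.swing.*;
import java.awt.GridLayout;

// Simplified stand-in for a rule customizer panel: a titled border, two
// JLabel prompts, and two JComboBox dropdowns for picking attribute names.
class DateRangePanel extends JPanel {
    private final JComboBox<String> initialDateCombo;
    private final JComboBox<String> laterDateCombo;

    public DateRangePanel(String[] dateAttrNames) {
        setLayout(new GridLayout(2, 2, 5, 5));
        setBorder(BorderFactory.createTitledBorder("Date Must Come After"));
        initialDateCombo = new JComboBox<>(dateAttrNames);
        laterDateCombo = new JComboBox<>(dateAttrNames);
        add(new JLabel("Initial date attribute:"));
        add(initialDateCombo);
        add(new JLabel("Later date attribute:"));
        add(laterDateCombo);
    }

    // The customizer would copy these selections into the rule bean's
    // initialDateAttrName and laterDateAttrName properties.
    public String getInitialDateAttrName() {
        return (String) initialDateCombo.getSelectedItem();
    }
    public String getLaterDateAttrName() {
        return (String) laterDateCombo.getSelectedItem();
    }
}
```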
To associate a customizer with your DateMustComeAfterRule
Java Bean, you follow the standard practice of creating a BeanInfo
class. As shown in Example 26-20, the DateMustComeAfterRuleBeanInfo
returns a BeanDescriptor that associates the customizer class with the DateMustComeAfter
bean class.
You would typically package your customizer class and this bean info in a separate JAR file for design-time-only use. The FrameworkExtensions
project in the SRDemo application contains a deployment profile that packages these classes in a DateMustComeAfterRuleDT.jar
.
Example 26-20 BeanInfo to Associate a Customizer with a Custom Validation Rule
package oracle.srdemo.model.frameworkExt.rules;
import java.beans.BeanDescriptor;
import java.beans.SimpleBeanInfo;

public class DateMustComeAfterRuleBeanInfo extends SimpleBeanInfo {
  public BeanDescriptor getBeanDescriptor() {
    return new BeanDescriptor(DateMustComeAfterRule.class,
                              DateMustComeAfterRuleCustomizer.class);
  }
}
To use a custom validation rule in a project containing entity objects, follow these steps:
Define a project-level library for the rule JAR files.
Add that library to your project's library list.
Use the Business Components > Registered Rules panel of the Project Properties dialog to add one or more validation rules.
When adding a validation rule, provide the fully-qualified name of the validation rule class, and supply a validation rule name that will appear in JDeveloper's list of available validators.
Figure 26-5 shows the Validation panel of the Entity Object editor for the SRDemo application's ServiceRequest
entity object. When you edit the DateMustComeAfter rule, you can see that the custom editing panel is automatically discovered from the rule class's BeanInfo and used at design time to show the developer the starting and ending attribute names. JDeveloper provides support for capturing the translatable error message that will be shown to the end-user if the validation rule fails at runtime.