There are four possibilities here:
Always query against the local repository. For example, you may decide that updates to profiles need to be sent to a remote system, but queries of that data will always be done locally.
Always query against the remote system. For example, you may want the data to remain in the remote system with no persistent storage in Dynamo.
Check the local repository first, then check the remote system.
Check the remote system first, then Dynamo.
If there is a Command associated with the query operation, then the remote system is queried. If no Command is configured, then the local repository is queried.
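The routing rules above can be sketched in plain Java. This is purely illustrative: the enum and method names below are hypothetical stand-ins, not part of the ATG API.

```java
public class QueryRoutingSketch {
    enum Source { LOCAL, REMOTE }

    // The four possibilities listed above.
    enum Strategy { LOCAL_ONLY, REMOTE_ONLY, LOCAL_FIRST, REMOTE_FIRST }

    /**
     * Returns the ordered list of sources to try for a given strategy.
     * Per the text, when no query Command is configured, only the local
     * repository can be queried.
     */
    static Source[] sourcesToTry(Strategy strategy, boolean commandConfigured) {
        if (!commandConfigured) {
            return new Source[] { Source.LOCAL };
        }
        switch (strategy) {
            case LOCAL_ONLY:  return new Source[] { Source.LOCAL };
            case REMOTE_ONLY: return new Source[] { Source.REMOTE };
            case LOCAL_FIRST: return new Source[] { Source.LOCAL, Source.REMOTE };
            default:          return new Source[] { Source.REMOTE, Source.LOCAL };
        }
    }
}
```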
When you want to execute a query against the Integration Repository, your code will look something like this:

Repository rep = getRepository(getMyRepository());
RepositoryView view = rep.getView(getMyView());
QueryBuilder builder = view.getQueryBuilder();
Query query = builder.createSomeQuery(MyQueryExpression);
RepositoryItem[] results = view.executeQuery(query);
There is no Integration Repository-specific code in any of this, because you build queries with the Integration Repository in exactly the same way that you would build queries with the SQL repository. This also means that you can use RQL. Because you use standard query builder calls, the Query object that gets generated is a standard Query object from the atg.repository.query package.
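As a hedged sketch, an RQL lookup against the same view might look like the following. The property name email and the parameter value are hypothetical; the view variable is assumed to be the RepositoryView obtained as shown above.

```java
import atg.repository.RepositoryItem;
import atg.repository.rql.RqlStatement;

// Parse the RQL once, then execute it against the Integration Repository view.
// "?0" is standard RQL parameter syntax.
RqlStatement statement = RqlStatement.parseRqlStatement("email = ?0");
Object[] params = new Object[] { "customer@example.com" };
RepositoryItem[] items = statement.executeQuery(view, params);
```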
The real difference is in the RepositoryView. The Integration Framework uses a subclass named IntegrationRepositoryView, which provides an implementation of executeUncachedQuery that is expected to call the query Command. There must be a subclass of IntegrationRepositoryView for each remote system you want to query. This subclass is responsible for translating between the Oracle ATG Web Commerce Query and the query format expected by the remote system.
A query Command will receive whatever input is created by the createQueryCommandInput method of your IntegrationRepositoryView.
The IntegrationRepositoryView.processResults method is responsible for translating between the remote data format and Oracle ATG Web Commerce repository items.
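The two translation hooks form an in/out pair: one turns the repository query into remote-system input, the other turns remote results back into repository items. The following self-contained sketch shows only that shape; the class name and simplified signatures (strings and maps in place of atg.repository.Query, the Command input, and RepositoryItem) are stand-ins, not the actual ATG API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RemoteViewTranslationSketch {
    /**
     * Stand-in for createQueryCommandInput: translate a (here, drastically
     * simplified) query expression into the input a remote command expects.
     * The field names "filterField"/"filterValue" are hypothetical.
     */
    static Map<String, String> createQueryCommandInput(String property, String value) {
        return Map.of("filterField", property, "filterValue", value);
    }

    /**
     * Stand-in for processResults: translate remote records (modeled as maps)
     * back into repository-item-like results (modeled here as their ids).
     */
    static List<String> processResults(List<Map<String, String>> remoteRecords) {
        return remoteRecords.stream()
                .map(record -> record.get("id"))
                .collect(Collectors.toList());
    }
}
```

A real subclass would do this same translation work against the remote system's actual wire format, with the Command invocation happening between the two steps.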