12 Using RDF Graph Server and Query UI

This chapter contains the following sections:

12.1 Introduction to RDF Graph Server and Query UI

The RDF Graph Server and Query UI allows you to run SPARQL queries and perform advanced RDF graph data management operations using a REST API and an Oracle JET based query UI.

The RDF Graph Server and Query UI consists of RDF RESTful services and a Java EE client application called RDF Graph Query UI. This client serves as an administrative console for Oracle RDF and can be deployed to a Java EE container.

The RDF Graph Server and RDF RESTful services can be used to create a SPARQL endpoint for RDF graphs in Oracle Database.

The following figure shows the RDF Graph Server and Query UI architecture:

Figure 12-1 RDF Graph Server and Query UI

RDF Graph Server and Client Architecture

The salient features of the RDF Graph Query UI are as follows:

  • Uses RDF RESTful services to communicate with RDF data stores, which can be an Oracle RDF data source or an external RDF data source.

  • Allows you to perform CRUD operations on various RDF objects such as private networks, models, rule bases, entailments, network indexes and data types for Oracle data sources.

  • Allows you to execute SPARQL queries and update RDF data.

  • Provides a graph view of SPARQL query results.

  • Uses Oracle JET for user application web pages.

12.2 RDF Graph Server and Query UI Concepts

Learn the key concepts for using the RDF Graph Server and Query UI.

12.2.1 Data Sources

Data sources are repositories of RDF objects.

A data source can refer to an Oracle database, or to an external RDF service that can be accessed through an endpoint URL, such as DBpedia or Apache Jena Fuseki. A data source is defined by generic as well as specific parameters. The generic parameters include name, type, and description. Specific parameters are JDBC properties (for database data sources) and the endpoint base URL (for external data sources).

Oracle Data Sources

Oracle data sources are defined using JDBC connections. Two types of Oracle JDBC data sources can be defined:

  • A container JDBC data source that can be defined inside the application server (WebLogic, Tomcat, or others)

  • An Oracle wallet data source that contains the files needed to make the database connection

The parameters that define an Oracle database data source include:

  • name: a generic name of the data source.

  • type: the data source type. For databases, this must be ‘DATABASE’.

  • description (optional): a generic description of the data source.

  • properties: specific mapping parameters with values for data source properties:

    • For a container data source: JNDI name: Java naming and directory interface (JNDI) name.

    • For a wallet data source: wallet service: a string describing the wallet.

      For a cloud wallet it is usually an alias name stored in the tnsnames.ora file, but for a simple wallet it contains the host, port, and service name information.

The following example shows the JSON representation of a container data source:

   {
       "name": "rdfuser_ds_ct",
       "type": "DATABASE",
       "description": "Database Container connection",
       "properties": {
           "jndiName": "jdbc/RDFUSER193c"
       }
   }

The following example shows the JSON representation of a wallet data source:

   {
       "name": "rdfuser_ds_wallet",
       "type": "DATABASE",
       "description": "Database wallet connection",
       "properties": {
           "walletService": "db202002041627_medium"
       }
   }

Endpoint URL Data Sources

External RDF data sources are defined using an endpoint URL. In general, each RDF store has a generic URL that accepts SPARQL queries and SPARQL updates. Depending on the RDF store service, it may also provide a capabilities request for retrieving the available datasets.

Table 12-1 External Data Source Parameters

name: A generic name of the data source.

type: The type of the data source. For external data sources, the type must be ‘ENDPOINT’.

description: A generic description of the data source.

properties: Specific mapping parameters with values for data source properties:

  • base URL: the base URL to issue SPARQL queries to the RDF store. This is the default URL.
  • query URL (optional): the URL to execute SPARQL queries. If defined, it will overwrite the base URL value.
  • update URL (optional): the URL to execute SPARQL updates. If defined, it will overwrite the base URL value.
  • capabilities (optional): some RDF stores (such as Apache Jena Fuseki) may provide a capabilities URL that returns the datasets available in the service. A JSON response is expected in this case. The capabilities properties are:
    • get URL: the get capabilities URL.
    • datasets parameter: defines the JSON parameter that contains the RDF datasets information.
    • dataset parameter name: defines the JSON parameter that contains the RDF dataset name.

The following example shows the JSON representation of a DBpedia external data source:

  {
      "name": "dbpedia",
      "type": "ENDPOINT",
      "description": "Dbpedia RDF data - Dbpedia.org",
      "properties": {
          "baseUrl": "http://dbpedia.org/sparql",
          "provider": "Dbpedia"
      }
  }

The following example shows the JSON representation of an Apache Jena Fuseki external data source. The ${DATASET} expression is a parameter that is replaced at run time with the Fuseki dataset name:

    {
      "name": "Fuseki",
      "type": "ENDPOINT",
      "description": "Jena Fuseki server",
      "properties": {
        "queryUrl": "http://localhost:8080/fuseki/${DATASET}/query",
        "baseUrl": "http://localhost:8080/fuseki",
        "capabilities": {
          "getUrl": "http://localhost:8080/fuseki/$/server",
          "datasetsParam": "datasets",
          "datasetNameParam": "ds.name"
        },
        "provider": "Apache",
        "updateUrl": "http://localhost:8080/fuseki/${DATASET}/update"
      }
    }
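As an illustrative sketch (not part of the product API), the run-time substitution of the ${DATASET} placeholder in the query and update URLs can be modeled with Python's string templates; the URLs and dataset name below are the ones from the Fuseki example above.

```python
from string import Template

# URL templates from the Fuseki data source example above; Template's
# ${...} syntax matches the ${DATASET} placeholder convention.
query_url_template = Template("http://localhost:8080/fuseki/${DATASET}/query")
update_url_template = Template("http://localhost:8080/fuseki/${DATASET}/update")

def resolve_urls(dataset_name: str) -> tuple[str, str]:
    """Replace ${DATASET} with the Fuseki dataset name at run time."""
    return (
        query_url_template.substitute(DATASET=dataset_name),
        update_url_template.substitute(DATASET=dataset_name),
    )

query_url, update_url = resolve_urls("dset")
```

For the dataset name dset, this yields the query URL http://localhost:8080/fuseki/dset/query and the corresponding update URL.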

12.2.2 RDF Datasets

Each RDF data source contains metadata that describes the available RDF objects.

The following describes the metadata information defined by each provider.

  • Oracle RDF data sources: The RDF metadata includes information about the following RDF objects: private networks, models (real, virtual, view), rule bases, entailments, network indexes, and datatypes.

  • External RDF providers: For Apache Jena Fuseki, the metadata includes dataset names. Other external providers may not have a metadata concept, in which case the base URL points to generic (default) metadata.

RDF datasets point to one or more RDF objects available in the RDF data source. A dataset definition is used in SPARQL query requests. Each provider has its own set of properties to describe the RDF dataset.

The following are a few examples of a JSON representation of a dataset.

Oracle RDF dataset definition:

     {
         "networkOwner": "RDFUSER",
         "networkName": "MYNET",
         "models": ["M1"]
     }

Apache RDF Jena Fuseki dataset definition:

     {
         "name": "dataset1"
     }

For RDF stores that do not have a specific dataset, an empty JSON object {} or a 'Default' name (as shown for Apache Jena Fuseki in the preceding example) can be used.

12.2.3 REST Services

An RDF REST API allows communication between clients and backend RDF data stores.

The REST services can be divided into the following groups:

  • Server generic services: allow access to the available data sources, and to configuration settings for general, proxy, and logging parameters.

  • Oracle RDF services: allow CRUD operations on Oracle RDF objects.

  • SPARQL services: allow execution of SPARQL queries and updates on the data sources.

Assuming a deployment of the RDF web application with the context-root set to orardf, on the localhost machine and port number 7101, the base URL for REST requests is http://localhost:7101/orardf/api/v1.

Most of the REST services are protected with Form-based authentication. Administrator users can define a public RDF data source using the RDF Graph Server and Query UI web application. The public REST endpoints will then be available to perform SPARQL queries on published datasets.


The examples in this section and throughout this chapter reference the host machine as localhost and the port number as 7101. These values can vary depending on your application deployment.

The following are some RDF REST examples:

  • Get the server information:

    The following is a public endpoint URL. It can be used to test if the server is up and running.


  • Get a list of data sources:


  • Get general configuration parameters:


  • Get a list of RDF semantic networks for Oracle RDF:


  • Get a list of all Oracle RDF models for MDSYS network:


  • Get a list of all Oracle RDF real models for a private semantic network (applies from 19c databases):


  • Post request for SPARQL query:

    http://localhost:7101/orardf/api/v1/datasets/query?datasource=rdfuser_ds_193c&datasetDef={"metadata":[ {"networkOwner":"RDFUSER", "networkName":"LOCALNET","models":["UNIV_BENCH"]} ] }

    Query Payload: select ?s ?p ?o where { ?s ?p ?o} limit 10

  • Get request for SPARQL query:

    http://localhost:7101/orardf/api/v1/datasets/query?datasource=rdfuser_ds_193c&query=select ?s ?p ?o where { ?s ?p ?o} limit 10&datasetDef={"metadata":[ {"networkOwner":"RDFUSER", "networkName":"LOCALNET","models":["UNIV_BENCH"]} ] }

  • Put request to publish an RDF model:

http://localhost:7101/orardf/api/v1/datasets/publish/DSETNAME?datasetDef={"metadata":[ {"networkOwner":"RDFUSER", "networkName":"LOCALNET", "models":["UNIV_BENCH"]} ]}

    Default SPARQL Query Payload: select ?s ?p ?o where { ?s ?p ?o} limit 10

    This default SPARQL can be overwritten when requesting the contents of a published dataset. The datasource parameter in the preceding request is optional. However, if you define this parameter on the URL, it must match the current publishing data source name because this API version supports just one publishing data source. Otherwise, the published data source name is automatically used.

  • Get request for a published dataset:

    The following is a public endpoint URL. It uses the default parameters (SPARQL query, output format, and others) that are stored in the dataset definition. However, these default parameters can be overwritten in the REST request by passing new parameter values.


A detailed list of available REST services can be found in the Swagger JSON file, orardf_swagger.json, which is packaged in the application documentation directory.
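The GET SPARQL query request shown earlier passes the SPARQL string and the dataset definition as URL parameters, which must be percent-encoded. The following is only an illustrative sketch, not part of the product; the host, port, data source name, and parameter names are taken from the examples above.

```python
import json
from urllib.parse import urlencode

# Base URL from the example deployment above (host and port may differ).
base = "http://localhost:7101/orardf/api/v1"

# Dataset definition for an Oracle RDF model, as in the GET example above.
dataset_def = {
    "metadata": [
        {"networkOwner": "RDFUSER", "networkName": "LOCALNET",
         "models": ["UNIV_BENCH"]}
    ]
}

params = {
    "datasource": "rdfuser_ds_193c",
    "query": "select ?s ?p ?o where { ?s ?p ?o } limit 10",
    "datasetDef": json.dumps(dataset_def),
}

# urlencode percent-encodes the SPARQL string and the JSON dataset definition.
get_url = base + "/datasets/query?" + urlencode(params)
```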

12.3 Oracle RDF Graph Query UI

The Oracle RDF Graph Query UI is an Oracle JET based client that can be used to manage RDF objects from different data sources, and to perform SPARQL queries and updates.

This Java EE application helps to build application webpages that query and display RDF graphs. It supports queries across multiple data sources.

12.3.1 Installing RDF Graph Query UI

In order to get started with the Oracle RDF Graph Query UI, you must download and install the application.

You can download RDF Graph Query UI using one of the following options:

The downloaded oracle-graph-webapps-<version>.zip deployment contains the files as shown in the following figure:

Figure 12-2 Oracle Graph Webapps deployment

Deployment files

The deployment of the RDF .war file provides the Oracle RDF Graph Query UI console.

The rdf-doc folder contains the User Guide documentation.

This deployment also includes the REST API running on the application server to handle communication between users and backend RDF data stores.

12.3.2 Managing User Roles for RDF Graph Query UI

Users will have access to the application resources based on their role level. In order to access the Query UI application, you need to enable a role for the user.

The following describes the different user roles and their privileges:

  • Administrator: An administrator has full access to the Query UI application and can update configuration files, manage RDF objects and can execute SPARQL queries and SPARQL updates.

  • RDF: An RDF user can read or write Oracle RDF objects and can execute SPARQL queries and SPARQL updates, but cannot modify configuration files.

  • Guest: A guest user can only read Oracle RDF objects and can only execute SPARQL queries.

Figure 12-3 User Roles for RDF Graph Query

User Roles for RDF Graph Query

Application servers, such as WebLogic Server, Tomcat, and others, allow you to define and assign users to user groups. Administrators are set up at the time of the RDF Graph server installation, but the RDF and guest users must be created to access the application console.

Managing Groups and Users in WebLogic Server

The security realms in WebLogic Server ensure that the user information entered as part of the installation is added by default to the Administrators group. Any user assigned to this group has full access to the RDF Graph Query UI application.

To open the WebLogic Server Administration Console, enter http://localhost:7101/console in your browser and log on using your administrative credentials. Click Security Realms as shown in the following figure:

Figure 12-4 WebLogic Server Administration Console

WebLogic Server Administration Console

Creating User Groups in WebLogic Server

To create new user groups in WebLogic Server:

  1. Select the security realm from the listed Realms in Figure 12-4.

  2. Click Users and Groups and then Groups.

  3. Click New to create new RDF user groups in WebLogic as shown below:

    Figure 12-5 Creating new user groups in WebLogic Server

    Creating new user groups in WebLogic Server

The following example creates two user groups:

  • RDFreadUser: for guest users with just read access to the application.

  • RDFreadwriteUser: for users with read and write access to RDF objects.

Figure 12-6 Created User Groups in WebLogic Server

Created User Groups in WebLogic Server

Creating RDF and Guest Users in WebLogic Server

In order to have RDF and guest users in the user groups you must first create the RDF and guest users and then assign them to their respective groups.

To create new RDF and guest users in WebLogic server:

Prerequisites: The RDF and guest user groups must be available, or they must be created first. See Creating User Groups in WebLogic Server for creating user groups.

  1. Select the security realm from the listed Realms as seen in Figure 12-4.

  2. Click the Users and Groups tab and then Users.

  3. Click New to create the RDF and guest users.

    Figure 12-7 Create new users in WebLogic Server

    Create new users in WebLogic Server

    The following example creates two new users:

    • rdfuser: user to be assigned to group with read and write privileges.

    • nonrdfuser: guest user to be assigned to group with just read privileges.

    Figure 12-8 New RDF and Guest Users

  4. Select a user name and click Groups to assign the user to a specific group.

  5. Assign rdfuser to RDFreadwriteUser group.

    Figure 12-9 RDF User

    RDF User
  6. Assign nonrdfuser to RDFreadUser group.

    Figure 12-10 RDF Guest User

    RDF Guest User

Managing Users and Roles in Tomcat Server

For Apache Tomcat, edit the Tomcat users file conf/tomcat-users.xml to include the RDF user roles. For example:

<tomcat-users xmlns="http://tomcat.apache.org/xml" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd">

    <role rolename="rdf-admin-user"/>

    <role rolename="rdf-read-user"/>

    <role rolename="rdf-readwrite-user"/>

    <user username="admin" password="adminpassword" roles="manager-script,admin,rdf-admin-user"/>

    <user username="rdfuser" password="rdfuserpassword" roles="rdf-readwrite-user"/>

    <user username="notrdfuser" password="notrdfuserpassword" roles="rdf-read-user"/>

</tomcat-users>

12.3.3 Getting Started with RDF Graph Query UI

The Oracle Graph Query UI contains a main page with RDF graph feature details and links to get started.

Figure 12-11 Query UI Main Page

Query UI Home Page

The main page includes the following:

  • Home: Get an overview of the Oracle RDF Graph features.

  • Data sources: Manage your data sources.

  • Data: Manage, query or update RDF objects.

  • Settings: Set your configuration parameters.

Data Sources Page

The Data Sources page allows you to create different types of data sources. Only administrator users can manage data sources. The RDF store can be linked to an Oracle Database or to an external RDF data provider. For Oracle data sources, there are two types of connections:

  • JDBC data source defined on an application server
  • Oracle wallet connection defined in a zip file

These database connections must be available in order to link the RDF web application to the data source.

To create a data source, click Data Sources, then Create.

Figure 12-12 Data Sources Page

Data Sources Page

Oracle Container

In order to create a container data source for the UI, the JDBC data source must exist in the application server.

Creating a JDBC Data Source in WebLogic Server

To create a JDBC data source in WebLogic Server:

  1. Log in to the WebLogic administration console as an administrator: http://localhost:7101/console.

  2. Click Services, then JDBC Data sources.

  3. Click New and select the Generic data source menu option to create a JDBC data source.

    Figure 12-13 Generic Data Source

    Generic Data Source
  4. Enter the JDBC data source information (name and JNDI name), then click Next.

    Figure 12-14 JDBC Data Source and JNDI

    JDBC Data Source and JNDI
  5. Accept the defaults on the next two pages.

  6. Enter the database connection information: service name, host, port, and user credentials.

    Figure 12-15 Create JDBC Data Source

    Create JDBC Data Source
  7. Click Next to continue.

  8. Click the Test Configuration button to validate the connection and click Next to continue.

    Figure 12-16 Validate connection

    Validate connection
  9. Select the server target and click Finish.

    Figure 12-17 Create JDBC Data Source

    Create JDBC Data Source

The JDBC data source gets added to the data source table, and the JNDI name is added to the combo box list in the create container dialog.

Creating a JDBC Data Source in Tomcat

There are different ways to create a JDBC data source in Tomcat. See the Tomcat documentation for more details.

The following examples show how to create a JDBC data source in Tomcat by modifying the configuration files conf/server.xml and conf/context.xml.
  • Add global JNDI resources on conf/server.xml.

        <Resource name="jdbc/RDFUSER19c" auth="Container" global="jdbc/RDFUSER19c"
                  type="javax.sql.DataSource" driverClassName="oracle.jdbc.driver.OracleDriver"
                  username="rdfuser" password="rdfuserpwd" maxTotal="20" maxIdle="10"/>
  • Add the resource link to the global JNDI resources on conf/context.xml:

        <Context>
            <ResourceLink name="jdbc/RDFUSER19c" global="jdbc/RDFUSER19c"
                          type="javax.sql.DataSource" />
        </Context>

Creating an Oracle Container Data Source

To create an Oracle Container data source in the application server containing JDBC data sources:

  1. Click the Container button.
  2. Enter the required Data Source Name, and select the JDBC data source JNDI name that exists on the Application Server.

Figure 12-18 Create Container Data Source

Create Container Data Source

Oracle Wallet

Oracle Wallet provides a simple and easy method to manage database credentials across multiple domains. It lets you update database credentials by updating the wallet instead of having to change individual data source definitions. This is accomplished by using a database connection string in the data source definition that is resolved by an entry in the wallet.

The wallet can be a simple wallet for storing the SSO and PKI files, or a cloud wallet that also contains TNS information and other files.

To create a wallet data source in the Oracle Graph Query UI application, you must create a wallet zip file that stores user credentials for each service. Ensure that the file is stored in a safe location for security reasons. The wallet files can be created using Oracle utilities such as mkstore or orapki, or using the Oracle Wallet Manager application.

Creating a Simple Wallet

The following are the steps to create a Simple Wallet:

  1. Create the wallet directory:

    mkdir /tmp/wallet
  2. Create the wallet files using mkstore utility. You will be prompted for a password. Save the password to the wallet:

    ${ORACLE_HOME}/bin/mkstore -wrl /tmp/wallet -create
  3. Add a database connection with user credentials. The wallet service in this case will be a string with host, port, and service name information:

    ${ORACLE_HOME}/bin/mkstore -wrl /tmp/wallet
          -createCredential host:port/serviceName username password
  4. Zip the wallet directory to make it available for use in the web application.

Figure 12-19 Simple Wallet

Simple Wallet

The created Simple wallet directory will contain the cwallet.sso and ewallet.p12 files.

Creating an Oracle Cloud Wallet

The following are the steps to create an Oracle Cloud Wallet:

  1. Navigate to the Autonomous Database details page.

  2. Click DB Connection.

  3. On the Database Connection page, select the Wallet Type.

  4. Click Download.

  5. Enter the password information and download the file (the default file name is Wallet_databasename.zip).

The cloud zip file contains the files displayed in Figure 12-20. The tnsnames.ora file contains the wallet service alias names and TCPS information. However, it does not contain the user credentials for each service. To use the wallet with the Oracle RDF Graph Query web application for creating a data source, you must store the credentials in the wallet file. Execute the following steps to add credentials to the wallet zip file:

  1. Unzip the cloud wallet zip file in a temporary directory.

  2. Use the service name alias in the tnsnames.ora file to store the credentials.

    For example, if the service name alias is db202002041627_medium:

    ${ORACLE_HOME}/bin/mkstore -wrl /tmp/cloudwallet
          -createCredential db202002041627_medium username password
  3. Zip the cloud wallet files into a new zip file.

Figure 12-20 Cloud Wallet

Cloud Wallet

Creating a Wallet Data Source

Using the wallet zip file, you can create a Wallet data source in the Oracle RDF Graph Query web application. Click the Wallet button to display the wallet dialog, and perform the following steps:

  1. Click on the upload button, and select the wallet zip file.

    The zip file gets uploaded to the server.

  2. Enter the required data source name.

  3. Enter the optional data source description.

  4. Define the wallet service:

    • For a simple wallet, enter the wallet service string stored in the wallet file.

    • For a cloud wallet, select the service name from the combo box seen in Figure 12-22.


Figure 12-21 Wallet Data Source from simple zip

Wallet Data Source from simple zip

Figure 12-22 Wallet Data Source from cloud zip

Wallet Data Source from cloud zip

Endpoint URL

External data sources are connected to the RDF data store using the endpoint URL.

You can execute SPARQL queries and updates to the RDF data store using a base URL. In some cases, such as Apache Jena Fuseki, there are specific URLs based on the dataset name. For example:

  • DBpedia Base URL: http://dbpedia.org/sparql

  • Apache Jena Fuseki (assuming a dataset name dset):

    • Query URL: http://localhost:8080/fuseki/dset/query

    • Update URL: http://localhost:8080/fuseki/dset/update

The RDF web application issues SPARQL queries to RDF datasets. These datasets can be retrieved from the provider if a get capabilities request is available. For DBpedia, there is a single base URL, and therefore a default single dataset is handled in the application. For Apache Jena Fuseki, there is a request that returns the available RDF datasets on the server: http://localhost:8080/fuseki/$/server. Using this request, the list of available datasets can be retrieved for specific use in an application.
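The capabilities parameters from the data source definition (datasetsParam and datasetNameParam) determine how dataset names are pulled out of the capabilities JSON response. The following sketch illustrates this; the shape of the sample response is an assumption based on the Fuseki $/server example, and the function is not part of the product.

```python
import json

def dataset_names(response_text: str,
                  datasets_param: str = "datasets",
                  name_param: str = "ds.name") -> list[str]:
    """Extract dataset names from a capabilities JSON response.

    datasets_param and name_param correspond to the datasetsParam and
    datasetNameParam properties of the data source definition.
    """
    doc = json.loads(response_text)
    return [entry.get(name_param, "") for entry in doc.get(datasets_param, [])]

# Abbreviated example of a Fuseki $/server response (assumed shape):
sample = json.dumps({"datasets": [{"ds.name": "/dataset1"}, {"ds.name": "/dset"}]})
```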

To create an external RDF data source:

  1. Click the Endpoint button.
  2. Define the following parameters and then click OK:
    • Name: the data source name.
    • Description: optional description about data source.
    • Provider: optional provider name.
    • Base URL: base URL to access RDF service.
    • Query URL: optional URL to execute SPARQL queries (if not defined, the base URL is used).
    • Update URL: optional URL to execute SPARQL updates (if not defined, the base URL is used).
    • Capabilities parameters: properties to retrieve dataset information from RDF server.
    • Get URL: URL address that should return a JSON response with information about the dataset.
    • Datasets parameter: the property in JSON response that contains the dataset information.
    • Dataset name parameter: the property in datasets parameter that contains the dataset name.


    For Jena Fuseki, the expression ${DATASET} will be replaced by the dataset name at runtime when SPARQL queries or SPARQL updates are being executed.

The following figures show examples of creating DBpedia and Apache Jena Fuseki data sources.

Figure 12-23 DBpedia Data Source

DBpedia Data Source

Figure 12-24 Apache Jena Fuseki Data Source

Apache Jena Fuseki Data Source

RDF Data Page

You can manage and query RDF objects in the RDF Data page.

Figure 12-25 RDF Data Page

RDF Data Page

The left panel contains information on the available RDF data in the data source. The right panel is used for opening the properties of an RDF object. Depending on the property type, SPARQL queries and SPARQL updates can be executed.

Data Source Selection

The data source can be selected from the list of available data sources shown in Figure 12-25.

Figure 12-26 RDF Network

RDF Network
Select the desired Oracle RDF semantic network for the selected data source. Each network is identified by a network owner and network name.


Before Release 19, all semantic networks were stored in the MDSYS schema. From Release 19 onwards, private networks are supported.

Semantic Network Actions

You can execute the following semantic network actions:

Figure 12-27 RDF Semantic Network Actions

RDF Semantic Network Actions
  • Create a semantic network.
  • Delete a semantic network.
  • Gather statistics for a network.
  • Refresh network indexes.
  • Purge values not in use.

Importing Data

For Oracle semantic networks, importing data into an RDF model is generally done by bulk loading the RDF triples that are available in a staging table.

Figure 12-28 RDF Import Data Actions

RDF Import Data Actions

The available actions include:

  • Upload one or more RDF files into an Oracle RDF staging table. This staging table can be reused in other bulk load operations. Files with the extensions .nt (N-Triples), .nq (N-Quads), .ttl (Turtle), and .trig (TriG) are supported for import. There is a limit on the size of files to be imported, which can be tuned by an administrator.

    Also, zip files can be used to import multiple files at once. However, the zip file is validated first, and will be rejected if any of the following conditions occur:

    • The zip file contains directories
    • A zip entry name extension is not a known RDF format (.nt, .nq, .ttl, .trig)
    • A zip entry size or compressed size is undefined
    • A zip entry size exceeds the maximum unzipped entry size
    • The inflate ratio between the compressed size and the file size is lower than the minimum inflate ratio
    • The total size of the zip entries exceeds the maximum unzipped total size
  • Bulk load the staging table records into an Oracle RDF model.

  • View the status of bulk load events.

SPARQL Query Cache Manager

SPARQL queries are cached per data source; caching applies to Oracle data sources. The translations of the SPARQL queries into SQL expressions are cached for Oracle RDF network models. Each model can store up to 64 different SPARQL query translations. The Query Cache Manager dialog allows users to browse the data source network cache for queries executed on models.

Figure 12-29 SPARQL Query Cache Manager

SPARQL Query Cache Manager

You can clear cache at different levels. The following describes the cache cleared against each level:

  • Data source: All network caches are cleared.
  • Network: All model caches are cleared.
  • Model: All cached queries for model are cleared.
  • Model Cache Identifier: Selected cache identifier is cleared.

Figure 12-30 Manage SPARQL Query Cache

Manage SPARQL Query Cache

RDF Objects Navigator

The navigator tree shows the available RDF objects for the selected data source.

  • For Oracle data sources, it will contain the different concept types like models, virtual models, view models, RDF view models, rule bases, entailments, network indexes, and datatype indexes.

    Figure 12-31 RDF Objects for Oracle Data Source

    RDF Objects for Oracle Data Source

  • For endpoint RDF data sources, the RDF navigator will have a list of names representing the available RDF datasets in the RDF store.

    Figure 12-32 RDF Objects from capabilities

    RDF Objects from capabilities

  • If an external RDF data source does not have a capabilities URL, then just a default dataset is shown.

    Figure 12-33 Default RDF Object

    Default RDF Object

To execute SPARQL queries and SPARQL updates, open the selected RDF object in the RDF objects navigator. For Oracle RDF objects, SPARQL queries are available for models (regular models, virtual models, and view models).

Different actions can be performed on the navigator tree nodes. Right-clicking a node under RDF objects brings up the context menu options (such as Open, Rename, Analyze, Auxiliary tables, Delete, Visualize, and Publish) for that specific node.

It is important to note the following:
  • The Publish menu item is enabled only if the selected RDF data source is public.
  • Guest users cannot perform actions that require a write privilege.

Figure 12-34 RDF Navigator - Context Menu

RDF Navigator - Context Menu

Data Source Published Datasets Navigator

If the selected RDF data source is public, a navigator node with the public datasets is displayed on the menu tree as shown in the following figure:

Figure 12-35 Data Source Published Datasets Navigator

Data Source Published Datasets Navigator

Performing SPARQL Query and SPARQL Update Operations

To execute SPARQL queries and updates, open the selected RDF object in the RDF objects navigator. For Oracle RDF objects, SPARQL queries are available for regular models, virtual models, and view models.

You can define the following parameters for SPARQL queries:

  • SPARQL: the query string
  • RDF options: Oracle RDF options to be used when processing a query (See Additional Query Options for more information.)
  • Runtime parameters: fetch size, query timeout and others (this is applied to Oracle RDF data sources)
  • Binding parameters: the expression ?ora__bind is used as a binding parameter in a SPARQL string. Each binding parameter is defined by a type (uri or literal) and a value. For example:
    SELECT ?s ?p ?o WHERE { ?s ?p ?ora__bind } LIMIT 500

    An example of JSON representation of a binding parameter that can be passed to a REST query service is: { "type" : "literal", "value" : "abcdef" }
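As a sketch of how a binding parameter might travel in a REST request, the JSON value above can be serialized into a request parameter string. This is purely illustrative: the bindings parameter name is taken from the published-dataset REST parameters listed later in this chapter, and reusing it here for the query service is an assumption.

```python
import json
from urllib.parse import urlencode

# SPARQL string with the ?ora__bind binding variable (from the example above).
sparql = "SELECT ?s ?p ?o WHERE { ?s ?p ?ora__bind } LIMIT 500"

# Binding parameter: a type (uri or literal) and a value.
binding = {"type": "literal", "value": "abcdef"}

# Hypothetical request parameter set; "bindings" carries the JSON shown above.
request_params = urlencode({
    "query": sparql,
    "bindings": json.dumps(binding),
})
```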

The following figure shows the SPARQL query page, containing the graph view.

Figure 12-36 SPARQL Query Page

SPARQL Query Page

The number of results of a SPARQL query is determined by the limit parameter in the SPARQL string, or by the maximum number of rows that can be fetched from the server. As an administrator, you can set the maximum number of rows to be fetched in the Settings page.

A graph view can be displayed for the query results. On the graph view, you must map the columns for the triple values (subject, predicate, and object). In a table view, the columns that represent URI values have hyperlinks.

Besides the Execute button to run the SPARQL query, there is also an Explain Plan button to retrieve the SQL query plan for the SPARQL query. This displays a dialog with the EXPLAIN PLAN results and the SPARQL translation.

Figure 12-37 SQL EXPLAIN PLAN for SPARQL Translation

Publishing Oracle RDF Models
Oracle RDF models can be published as datasets. These are then available through a public REST endpoint for SPARQL queries. Administrator users can define a public RDF data source for publishing data by configuring the application general parameters (see General JSON configuration file).


Note that enabling RDF data publishing and defining a public RDF data source exposes public URL endpoints for your RDF datasets. These endpoint URLs can be used directly in applications without entering credentials.
However, public endpoints have security constraints on the execution of SPARQL queries: SPARQL updates, the SPARQL SERVICE clause, and SPARQL user-defined functions are not allowed.

To publish an Oracle RDF model as a dataset:

  1. Right-click on the RDF model and select Publish from the menu as shown:

    Figure 12-38 Publish Menu

  2. Enter the Dataset name (mandatory), Description, and Default SPARQL. The default SPARQL can be overridden in the REST request.

    Figure 12-39 Publish RDF Model

  3. Click OK.

    The public endpoint GET URL for the dataset is displayed. Note that a POST request can also be used to access the endpoint.

    Figure 12-40 GET URL Endpoint


    This URL uses the default values defined for the dataset and follows the pattern shown:


    You can override the default parameters stored in the dataset by modifying the URL to include one or more of the following parameters:
    • query: SPARQL query
    • format: output format (json, xml, csv, tsv, n-triples, turtle)
    • options: string with Oracle RDF options
    • params: JSON string with runtime parameters (timeout, fetchSize, and others)
    • bindings: JSON string with binding parameters (URI or literal values)
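As a sketch, the override parameters listed above can be appended to the dataset's GET URL as standard query-string parameters. The base URL used below is a placeholder for illustration only, not an actual endpoint pattern:

```python
from urllib.parse import urlencode

def dataset_query_url(base_url, **overrides):
    """Append non-empty override parameters (query, format, options,
    params, bindings) to a published-dataset GET URL. The defaults
    stored with the dataset apply to anything left out."""
    allowed = ("query", "format", "options", "params", "bindings")
    qs = {k: v for k, v in overrides.items() if k in allowed and v is not None}
    return f"{base_url}?{urlencode(qs)}" if qs else base_url

# Hypothetical endpoint URL, for illustration only.
url = dataset_query_url(
    "https://myhost:8080/orardf/dataset/example",
    query="SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10",
    format="json",
)
print(url)
```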

    The following shows the general pattern of the REST request to query published datasets (assuming the context root as orardf):


    To modify the default parameters, open the RDF dataset definition by selecting Open from the menu options shown in the following figure, or by double-clicking the published dataset:

    Figure 12-41 Open an RDF Dataset Definition

    The RDF dataset definition for the selected published dataset opens as shown:

    Figure 12-42 RDF Dataset Definition


    You can update the default parameters and preview the results.


    • An RDF user with administrator privileges can update and unpublish any dataset.
    • An RDF user with read and write privileges can manage only the datasets that the user created.
    • An RDF user with read privileges can only query the datasets.

Published Dataset Playground

You can explore the published RDF datasets from a public web page.

You can access the page using the following URL format:


For example:


The public web page is displayed as shown:

Figure 12-43 Public Web Page


The main components of this public page are:

  • Published Datasets: contains the names of the published RDF datasets for the public RDF data source. To open an RDF dataset, double-click it, or right-click the dataset in the tree and select the Open menu item as shown:

    Figure 12-44 Opening a Published Dataset on the Public Page

  • The tab panel on the right allows you to execute SPARQL queries against the published RDF dataset. SPARQL query results are displayed in tabular as well as graph view formats. However, if the Accessibility switch on the top right corner of the page is switched ON, then the results are only displayed in tabular format.

    The following options are supported in the tab panel:

    • Templates: SPARQL template queries to use.
    • Add prefix: click to add the prefix selected in the combo box to the SPARQL query.
    • SPARQL: enter the SPARQL to be executed in the text area.
    • select/ask: select the output format for SPARQL SELECT and SPARQL ASK queries.
    • construct/describe: select the output format for SPARQL CONSTRUCT and SPARQL DESCRIBE queries.
    • Execute: click this button to execute the SPARQL query against the RDF public endpoint.
    • Table: shows the result in a tabular format.
    • Raw: shows the raw SPARQL result, in the specified format, as returned from the server.
    • Download: click the download button to download the raw response.

Support for Auxiliary Tables

Subject-Property-Matrix (SPM) auxiliary tables can be used to speed up SPARQL query execution. It is recommended that you first refer to Speeding up Query Execution with SPM Auxiliary Tables for a detailed description of SPM tables.

Single-Valued Property (SVP) tables hold values for single-valued RDF properties. A property p is single-valued in an RDF model if each resource in the model has at most one value for p.

Multi-Valued Property (MVP) tables hold values for multi-valued RDF properties. A property p is multi-valued in an RDF model if there exist two triples in the model (s p o1) and (s p o2) with o1 not equal to o2.

Property Chain (PCN) tables hold paths in the RDF graph. A set of triples t1, t2, …, tn form a path if for each ti where i > 1, the object value of ti-1 is equal to the subject value of ti.

SVP and PCN tables can be used to reduce joins during SPARQL query execution, while MVP tables enable better query optimizer statistics and query plans, which can help speed up query execution. These auxiliary tables are associated with RDF models. Once created, they are automatically used during SPARQL query execution, unless options are passed to disable their use.
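The single-valued versus multi-valued distinction defined above can be sketched in a few lines of Python. This is only an illustration of the definitions, not how the server builds SPM tables:

```python
from collections import defaultdict

def classify_properties(triples):
    """Partition predicates into single-valued (SVP candidates) and
    multi-valued (MVP candidates): a property is single-valued if every
    subject has at most one value for it in the model."""
    values = defaultdict(set)            # (subject, predicate) -> set of objects
    for s, p, o in triples:
        values[(s, p)].add(o)
    multi = {p for (_, p), objs in values.items() if len(objs) > 1}
    single = {p for (_, p) in values} - multi
    return single, multi

triples = [
    ("s1", "name", "Alice"),             # each subject has one name ...
    ("s2", "name", "Bob"),
    ("s1", "knows", "s2"),               # ... but s1 has two knows values
    ("s1", "knows", "s3"),
]
single, multi = classify_properties(triples)
print(sorted(single), sorted(multi))     # ['name'] ['knows']
```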

The RDF Graph Server and Query UI web application supports SPM tables. You can manage these auxiliary tables by right-clicking the RDF model and selecting the Auxiliary tables menu item as shown:

Figure 12-45 Auxiliary tables Menu

Creating Auxiliary Tables
You can create the SPM tables in the Predicates section of the UI by performing the following steps:
  1. Create a Predicates table, if one does not exist, with the statistics of each distinct predicate.
    This table contains the single-valued and multi-valued predicates and their occurrences. For example:
  2. Select the required predicates and click Add to Predicate List.
    This creates a selected list of predicates from which the different SPM tables can be created.
  3. Optionally, define the predicate lexical values to be stored in an SPM table by setting the lexical value column on the selected predicates in the Predicate List table.
    For example, in the following figure, the predicate order in the table is used to define the sequence order for PCN tables:
  4. Create one of the following types of auxiliary tables depending on your requirement:

    Figure 12-48 Creating an Auxiliary Table

    • Click Create SVP table after entering the SVP table name to create an SVP table.

    • Click Create MVP table to create an MVP table. Note that to create an MVP table, you must select only one predicate in the Predicate List.
    • Click Create PCN table after entering the PCN table name to create a PCN table. Note that to create a PCN table, you must select at least two predicates in the Predicate List.

Managing Auxiliary Tables
You can view the list of existing auxiliary tables for an RDF model in the Auxiliary tables section.

Advanced Graph View

The RDF Graph Query UI supports an advanced graph view feature that allows users to interact directly with the graph visualization. This is unlike the graph displayed in the RDF model editor or on the public page, where the graph view is simply a rendering of the SPARQL results shown in the paging table.

This section describes the advanced graph view component, starting from the execution of a SPARQL CONSTRUCT or SPARQL DESCRIBE query to advanced interaction with the graph visualization.

The main user interface (UI) elements of the advanced graph view component are as shown:

Figure 12-54 Advanced Graph View Components


The following describes the UI components seen in the preceding figure:

  • SPARQL Query selector contains:
    • A text area with the SPARQL query (must be SPARQL CONSTRUCT or SPARQL DESCRIBE)
    • A tree with the root class summaries (counts of incoming and outgoing predicates) resulting from the SPARQL query
  • A graph view area that displays the graph with the RDF nodes and edges

To access the advanced graph view feature, right-click the RDF model and select Visualize as shown:

Query Selector Panel

To start using the advanced graph view feature, you must first execute a SPARQL CONSTRUCT or SPARQL DESCRIBE query. The resulting query output is organized as summaries (counts for incoming and outgoing predicates) for the root classes (in general URI or blank node values).

The following figure shows a SPARQL CONSTRUCT query that produces two root classes, owl:Class and lehigh:Person:

Each root class has its own summary of incoming and outgoing predicates. You can double click on a root class to view the graph representation in the graph view panel.

It is highly recommended to define PREFIX expressions in the query to shorten the result labels on the graph nodes; shorter labels also consume less space in the graphic representation of the nodes. Some well-known RDF prefixes (such as rdf, rdfs, owl, and others) are automatically recognized and can be omitted from the query expression.
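The label shortening described above can be illustrated with a small sketch; the prefix map and function below are illustrative, not the application's implementation:

```python
# Common namespaces like those the UI recognizes automatically (rdf, rdfs, owl).
PREFIXES = {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf",
    "http://www.w3.org/2000/01/rdf-schema#": "rdfs",
    "http://www.w3.org/2002/07/owl#": "owl",
}

def shorten(uri, prefixes=PREFIXES):
    """Replace a known namespace with its prefix to produce the compact
    label shown on graph nodes; unknown URIs are returned unchanged."""
    for ns, prefix in prefixes.items():
        if uri.startswith(ns):
            return prefix + ":" + uri[len(ns):]
    return uri

print(shorten("http://www.w3.org/2002/07/owl#Class"))  # owl:Class
```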

As seen in the preceding figure, you can double click the tree node to open the element as a graph in the graph view. You can then interact directly with the graph in the graph view without using the root tree nodes in the Query selector panel. This panel can be collapsed to provide more space on the page for the graph view.

The following figure shows the owl:Class and lehigh:Person elements displayed in the graph view.

Figure 12-57 Advanced Graph View


Note that in some cases the SPARQL query execution may generate several root classes. However, it is not necessary to add all the root classes to the graph; adding only the classes you need also helps to maintain a clean and readable graph area.

Graph View

The graph view panel, where the graph is displayed, consists of the following components:

  • A toolbar with the following options:
    • Zoom Options: Includes zoom in, zoom out, fit all, and clear all actions. Additionally, zoom in and out actions can be achieved with the mouse wheel. Drag to pan graph is also available.
    • Layout: A few built-in layouts (such as random, grid, circle, concentric, breadth first, and cose).
    • Spacing factor: A slider to adjust the spacing between nodes (useful for lengthy edges).
    • Expand limit: The maximum number of node entries that can be expanded for an edge.
  • A drawing area with the RDF nodes and edges.

You can interact with the edges and nodes of the graph displayed in the graph view area. Initially, the graph displayed is based on the root class summaries (counts), but you can always expand the elements.

To expand a node in the graph, click the node and then select Expand. New node elements, with new edges linked to the selected node, get added to the graph. For example, in the following figure, the node lehigh:Person is shown expanded:

Star nodes (magenta color) contain the values associated with the edge predicate. To see these values, click on the node and select View Values:

Figure 12-59 Viewing Node Values


To expand the edge predicate summary, click the edge and select Expand. The star node associated with it is then divided into new nodes and edges in the graph. However, if the expand limit value is lower than the summary count, not all the nodes are expanded. For example:

Figure 12-60 Expanding an Edge Predicate


The following figure displays the output for a circular layout:

Figure 12-61 Circular Layout Graph


The following basic conventions apply to the graph displayed in the graph view:

  • URI nodes are displayed in orange with labels inside an ellipse shape.
  • Blank nodes are displayed in green with a circle shape. Mousing over a blank node shows its label value.
  • Collapsed edges show the predicate with the count (if more than 1).
  • Star nodes, in magenta with a circle shape, contain the values associated with the collapsed edge.
  • Literal nodes are displayed in different colors depending on their type. A string literal is shown in cyan with the label value. For long string values, the label length is reduced, and mousing over the literal node shows the full label value. Literals with a datatype are displayed in different colors, and mousing over them shows the datatype name.

Configuration Files for RDF Server and Client

The Graph Query UI application settings are determined by the JSON files that are included in the RDF Server and Client installation.

  • datasources.json: file with data source information (general and access properties).

  • general.json: general configuration parameters.

  • proxy.json: proxy server parameters.

  • logging.json: logging settings.

On the server side, the directory WEB-INF/workspace is the default directory to store configuration information, logs, and temporary files. The configuration files are stored by default in WEB-INF/workspace/config.


If the RDF Graph Query application is deployed from an unexploded .war file, and if no JVM parameter is defined for the workspace folder location, then the default workspace location for the application is WEB-INF/workspace. However, any updates to the configuration, log, and temp files done by the application may be lost if the application is redeployed. Also, wallet data source files and published dataset files can be lost.

To overcome this, you must start the application server, such as WebLogic or Tomcat, with the JVM parameter oracle.rdf.workspace.dir set. For example: -Doracle.rdf.workspace.dir=/rdf/server/workspace. The workspace folder must exist on the file system. Otherwise, the workspace folder defaults to WEB-INF/workspace.

It is recommended to keep a backup of the workspace folder in case the application is redeployed to a different location. Copying the workspace folder contents to the location specified by the JVM parameter restores all configurations in the new deployment.

Data Sources JSON Configuration File

The JSON file for data sources stores the general attributes of each data source, including the specific properties associated with the data source.

The following example shows a data source JSON file with two data sources: one an Oracle container data source defined on the application server, and the other an external data source.

  {
    "datasources" : [
      {
        "name" : "rdfuser193c",
        "type" : "DATABASE",
        "description" : "19.3 Oracle database",
        "properties" : {
          "jndiName" : "jdbc/RDFUSER193c"
        }
      },
      {
        "name" : "dbpedia",
        "type" : "ENDPOINT",
        "description" : "Dbpedia RDF data - Dbpedia.org",
        "properties" : {
          "baseUrl" : "http://dbpedia.org/sparql",
          "provider" : "Dbpedia"
        }
      }
    ]
  }

General JSON configuration file

The general JSON configuration file stores information related to SPARQL queries, JDBC parameters, and upload parameters.

The JSON file includes the following parameters:

  • Maximum SPARQL rows: defines the limit of rows to be fetched for a SPARQL query. If a query returns more than this limit, the fetching process is stopped.
  • SPARQL Query Timeout: defines the time in seconds to wait for a query to complete.
  • Allow publishing: flag to enable public data source selection for use with SPARQL query endpoints.
  • Publishing data source: the RDF data source to publish datasets.
  • JDBC Fetch size: the fetch size parameter for JDBC queries.
  • JDBC Batch size: the batch parameter for JDBC updates.
  • Maximum file size to upload: the maximum size of a file that can be uploaded to the server.
  • Maximum unzipped item size: the maximum size for an item in a zip file.
  • Maximum unzipped total size: the size limit for all entries in a zip file.
  • Maximum zip inflate multiplier: maximum allowed multiplier when inflating files.
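To illustrate what the unzip limits above guard against (oversized uploads and zip bombs), here is a sketch of the kind of checks they imply; the function name and parameters are illustrative, not the server's implementation:

```python
import zipfile

def check_zip_limits(path, max_item, max_total, max_ratio):
    """Reject an archive whose entries exceed the per-item size, total
    unzipped size, or inflate-multiplier limits (illustrative only)."""
    total = 0
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            if info.file_size > max_item:
                raise ValueError(f"{info.filename}: unzipped item too large")
            if info.compress_size and info.file_size / info.compress_size > max_ratio:
                raise ValueError(f"{info.filename}: inflate multiplier too high")
            total += info.file_size
            if total > max_total:
                raise ValueError("total unzipped size too large")
    return total
```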

These parameters can be updated as shown in the following figures:

Figure 12-62 General SPARQL Parameters


Figure 12-63 General JDBC Parameters


Figure 12-64 General File Upload Parameters

Proxy JSON Configuration File

The Proxy JSON configuration file contains proxy information for your enterprise network.

Figure 12-65 Proxy JSON Configuration File


The file includes the following parameters:

  • Use proxy: flag that defines whether the proxy parameters should be used.
  • Host: proxy host value.
  • Port: proxy port value.

Logging JSON Configuration File

The Logging JSON configuration file contains the logging settings. You can specify the logging level.

For Administrators and RDF users, it is also possible to load the logs for further analysis.

Figure 12-66 Logging JSON Configuration File


12.3.4 Accessibility

You can turn accessibility on or off during the user session.

Figure 12-67 Disabled Accessibility


Figure 12-68 Enabled Accessibility


When accessibility is turned on, the graph view of SPARQL queries is disabled.

Figure 12-69 Disabled Graph View
