Getting Started

Bulk data capture to hydrate a data mart environment is a common use case for DaaS consumers. The following steps help new users get started.

Create code to obtain and regenerate OAuth token

DaaS uses OAuth2 client-credentials for authentication. To perform bulk data capture for hydrating a data mart, users need to create a script to obtain and regenerate OAuth2 tokens, which are issued from a customer’s cloud account configured with an identity domain and confidential application. The process for configuring a new confidential application is described in DaaS setup documentation. Since DaaS uses the OAuth client credentials flow and does not provide a refresh token, clients must implement logic to request a new token when the current one expires.

import requests
from requests.auth import HTTPBasicAuth

# Token endpoint from the identity domain's confidential application setup.
issuer_url = '<your-token-endpoint-url>'

def get_token(client_id, client_secret):
    """Request an OAuth2 access token using the client-credentials flow."""
    basic = HTTPBasicAuth(client_id, client_secret)
    # grant_type and scope belong in the form-encoded request body; requests
    # sets the Content-Type to application/x-www-form-urlencoded automatically
    # when `data` is a dict.
    data = {'grant_type': 'client_credentials', 'scope': 'texturadaas:read'}
    resp = requests.post(issuer_url, auth=basic, data=data)
    resp.raise_for_status()
    return resp.json()

The function response provides an access_token for DaaS API requests and an expires_in value giving the token’s remaining lifetime in seconds. The token’s time-to-live (TTL) can be adjusted in the confidential application setup.

>>> from get_token import get_token
>>> token = get_token(CLIENT_ID, CLIENT_SECRET)
>>> print(token)
{'access_token': '<REDACTED>', 'token_type': 'Bearer', 'expires_in': 3600}
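Since the client-credentials flow provides no refresh token, the expiry logic described above has to live in the client. The sketch below is one possible approach (the TokenCache class and its parameters are illustrative, not part of the DaaS API): cache the token and re-fetch it shortly before expires_in elapses. The fetch callable is injected so the cache works with any token function, such as get_token above.

```python
import time

class TokenCache:
    """Caches an OAuth2 token and refreshes it shortly before expiry.

    `fetch` is any zero-argument callable returning a dict with
    'access_token' and 'expires_in' keys (e.g. a wrapper around the
    get_token function above).
    """
    def __init__(self, fetch, skew=60):
        self._fetch = fetch
        self._skew = skew          # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def access_token(self):
        now = time.monotonic()
        if self._token is None or now >= self._expires_at:
            # Token missing or (nearly) expired: request a fresh one.
            self._token = self._fetch()
            self._expires_at = now + self._token['expires_in'] - self._skew
        return self._token['access_token']
```

A bulk-load script can then call cache.access_token() before every request instead of holding one token for the whole run.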

Create queries to pull data from each top-level graph

Each top-level graph returns a list of objects composed of scalar types, lists, and nested objects related by unique IDs. Create a query for each top-level graph that extracts data in a way that preserves these relationships for later reconstruction in an external data warehouse, and keep queries as simple as possible to minimize complexity. A Postman collection with sample queries, including <graph>_withSubgraph_sample examples, is provided to assist customers. For example, a new customer may execute the Postman queries in order and use the included subgraph foreign keys to re-assemble the data in their environment.

It is highly recommended to experiment with queries in a graphical client such as Postman before attempting to script bulk requests.

Script bulk data load

Once comfortable with the response payloads from the available queries, customers can automate bulk data loads by scripting a client that iterates through response pages. Multi-threading requests within rate limits can further improve throughput.

def run_query(token, org_id):
    daas_url = 'https://textura-data.prod.construction.ocs.oraclecloud.com/api/graphql'
    headers = {
        'Content-Type': 'application/json',
        # Reuse the access_token returned by get_token().
        'Authorization': f"Bearer {token['access_token']}",
    }
    query = f'query Contract {{ contract(offset: 0, next: 10, organizationID: {org_id}) {{ id }} }}'
    resp = requests.post(daas_url, headers=headers, json={'query': query})
    resp.raise_for_status()
    return resp.json()
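Iterating through response pages can be factored out of the query itself. The sketch below assumes the offset/next paging convention shown in run_query; the iterate_pages helper and its fetch_page callable are illustrative names, where fetch_page(offset, limit) would wrap a call like run_query and return one page of records (e.g. the contract list from the response JSON).

```python
def iterate_pages(fetch_page, page_size=10):
    """Yield every record from a paged query.

    `fetch_page(offset, limit)` is assumed to return the list of records
    for one page; iteration stops at the first page shorter than
    `page_size`, which signals the final page.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page
        if len(page) < page_size:
            break
        offset += page_size
```

For multi-threaded loading within rate limits, independent top-level graphs (or independent organization IDs) can each be driven by their own iterate_pages loop on a worker thread.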

Create delta load script

After the initial data load, use the dateModifiedBegin filter in subsequent requests to retrieve only records that have changed (delta records).
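A delta query differs from the initial bulk query only by the dateModifiedBegin argument. The sketch below shows one way to build such a query string, following the contract query shape used earlier; the ISO-8601 timestamp format is an assumption, so check the DaaS schema for the expected type.

```python
def build_delta_query(org_id, date_modified_begin):
    """Build a GraphQL query that retrieves only records changed since the
    last load, using the dateModifiedBegin filter.

    `date_modified_begin` is assumed to be an ISO-8601 timestamp string,
    typically the start time of the previous successful load.
    """
    return (
        f'query Contract {{ contract(offset: 0, next: 10, '
        f'organizationID: {org_id}, '
        f'dateModifiedBegin: "{date_modified_begin}") {{ id }} }}'
    )
```

Recording the start time of each load run and passing it as date_modified_begin on the next run keeps the data mart current without re-pulling unchanged records.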



Last Published Friday, January 16, 2026