Create

Create your network and clusters, and configure the environment.

Before You Begin

Plan the networking in the VCN, segmented by purpose:
  • Public subnets support the service load balancer and the OKE API/service endpoints.
  • Private subnets host the worker nodes and the pods.
  • A dedicated private subnet hosts an Oracle Autonomous AI Database.

Consider the following to plan your subnet CIDR ranges:

  • Choose non-overlapping CIDR ranges (for example, VCN 10.0.0.0/16; subnets /24s like 10.0.10.0/24 nodes, 10.0.20.0/24 pods, 10.0.30.0/24 database).
  • Ensure no overlap with node/pod CIDRs, Kubernetes service CIDR, or cluster CIDR.
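The non-overlap requirement can be checked before any subnets are created. A minimal sketch from the shell using Python's ipaddress module, with the example ranges from this guide (the subnet names are illustrative):

```shell
# Sketch: verify the planned CIDR ranges fit the VCN and don't overlap.
python3 - <<'EOF'
import ipaddress

vcn = ipaddress.ip_network("10.0.0.0/16")
plan = {
    "node-private-subnet": "10.0.10.0/24",
    "pod-private-subnet": "10.0.20.0/24",
    "database-private-subnet": "10.0.30.0/24",
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}

# Every subnet must fall inside the VCN range.
for name, net in nets.items():
    assert net.subnet_of(vcn), f"{name} is outside the VCN CIDR"

# No two subnets may overlap.
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not nets[a].overlaps(nets[b]), f"{a} overlaps {b}"

print("CIDR plan OK")
EOF
```

The same check should also include the Kubernetes service CIDR and cluster CIDR you plan to use.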
Follow these high-level steps to create and customize your VCN:
  1. Create a new VCN using the VCN wizard. The VCN includes the following gateways and subnets:
    • An internet gateway, NAT gateway, and service gateway for the VCN.
    • A regional public subnet with routing to the internet gateway. Instances in a public subnet can optionally have public IP addresses.
    • A regional private subnet with routing to the NAT gateway and service gateway (and therefore the Oracle Services Network). Instances in a private subnet can't have public IP addresses.
    Follow the steps from Create a VCN with Internet Connectivity in the OCI documentation.
  2. Customize and rename subnets for clarity and function, for example:
    • service-lb-public (public)
    • database-private-subnet (private)
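If you prefer the CLI over the console wizard, the VCN and its gateways can be sketched with the OCI CLI. The display names and the environment variables holding OCIDs are assumptions, and the wizard-created route tables and subnets would still need to be created separately:

```shell
# Sketch: create the VCN and its gateways with the OCI CLI (placeholder OCIDs).
oci network vcn create \
  --compartment-id "$COMPARTMENT_OCID" \
  --display-name dify-vcn \
  --cidr-blocks '["10.0.0.0/16"]'

oci network internet-gateway create \
  --compartment-id "$COMPARTMENT_OCID" \
  --vcn-id "$VCN_OCID" \
  --is-enabled true \
  --display-name internet-gateway

oci network nat-gateway create \
  --compartment-id "$COMPARTMENT_OCID" \
  --vcn-id "$VCN_OCID" \
  --display-name nat-gateway
```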

Create Network Resources

The VCN wizard created a private and public subnet for you.
You must now create three additional subnets:
  • Create an additional public subnet for the OKE API endpoint and service. For example, oke-api-service-public.
  • Create two additional private subnets in the same VCN, one for the worker nodes and one for the Dify pods, each with egress to the internet through the VCN's NAT gateway. For example, node-private-subnet and pod-private-subnet.
Follow these steps to create each subnet:
  1. Create an independent private route table for the Node subnet.
    1. Create a new Route Table (for example, rt-node-private) in the same VCN.
    2. Add a default route with destination 0.0.0.0/0 that targets the NAT Gateway created above.
    For more information, see Create a Route Table in the OCI documentation.
  2. Create a subnet network security group (NSG) for the Node subnet (for example, nsg-node-private) and create appropriate rules as needed. Use stateful rules unless your standards require stateless.
    For more information, see Create an NSG in the OCI documentation.
  3. Create a subnet security list (for example, sl-node-private) and open necessary ports for the Node subnet if you don't want to use NSGs.

    Tip:

    Oracle recommends that you prefer NSGs over security lists.
    For more information, see Create a Security List in the OCI documentation.
  4. Create the private, regional Node subnet and attach the route table and security list.
    1. On the Virtual Cloud Networks list page, select the VCN you created.
    2. On the details page, select Create Subnet.
    3. Create a regional subnet (for example, node-private-subnet) with the planned CIDR in the correct compartment.
    4. Disallow public IPs on VNICs (mark the subnet as private).
    5. Associate the subnet with the independent Route Table (for example, rt-node-private) that points 0.0.0.0/0 to the NAT Gateway.
    6. Attach the Security List or NSG.
    7. Select DHCP options; use VCN DNS if relying on OCI DNS resolution.
  5. Repeat these steps to create the remaining subnets: an additional private subnet for the pods and a public subnet for the OKE API/service, attaching the correct route table and security list or NSG to each.
    For example:
    • node-private-subnet: private; associate rt-node-private and attach nsg-nodes.
    • pod-private-subnet: private; associate rt-pod-private and attach nsg-pods.
    • oke-api-service-public: public; route to the Internet Gateway and restrict inbound traffic to your admin IP ranges using a security list or NSG.
    For more information, see Create a Subnet in the OCI documentation.
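The route table and Node subnet steps above can also be sketched with the OCI CLI. The OCIDs are placeholders in environment variables, and the names match the examples in this guide:

```shell
# Sketch: independent route table for the Node subnet, defaulting egress to the NAT gateway.
oci network route-table create \
  --compartment-id "$COMPARTMENT_OCID" \
  --vcn-id "$VCN_OCID" \
  --display-name rt-node-private \
  --route-rules '[{"destination":"0.0.0.0/0","destinationType":"CIDR_BLOCK","networkEntityId":"'"$NAT_GW_OCID"'"}]'

# Sketch: the private, regional Node subnet with public IPs prohibited on VNICs.
oci network subnet create \
  --compartment-id "$COMPARTMENT_OCID" \
  --vcn-id "$VCN_OCID" \
  --display-name node-private-subnet \
  --cidr-block 10.0.10.0/24 \
  --prohibit-public-ip-on-vnic true \
  --route-table-id "$RT_NODE_OCID"
```

Repeat the subnet command with the appropriate CIDR, route table, and public-IP setting for the Pod and OKE API/service subnets.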

Create OKE Cluster

Create a cluster, select the configured VCN, and assign the designated subnets to the Kubernetes API endpoint, worker nodes, and pods.
Follow these steps to create and configure a cluster:
  1. In the OCI Console, ensure that you're in the correct tenancy and region.
  2. Click Developer Services, and then click Kubernetes Clusters (OKE).
  3. In the Create cluster page, select Custom Create, and click Proceed.
  4. On the Network setup step, select the target Compartment, select the configured VCN, and click Next.
    1. Assign the Kubernetes API endpoint subnet and select the option to assign a public IP address.
    2. Click Next.
  5. In the Load balancer subnets section, select Specify load balancer subnets option, then select your Service LB subnet, and click Next.
  6. In the Node pools page, specify configuration details for the first node pool in the cluster. Configure node settings for availability, storage, and access.
    1. In the Node placement configuration section, select the Worker node subnet that you created.
      For example, node-private-subnet.
    2. Select the Node type as Managed for managing the worker nodes in the node pool.
    3. Set the Node count to 3 for high availability.
      If available, distribute nodes across availability domains or fault domains.
    4. Set the Boot volume size to at least 50 GB.
      Increase from the default to avoid storage shortages for images, logs, and workloads.
    5. Upload the SSH public key for subsequent node management.
      Paste the contents of your .pub file to enable secure SSH access for troubleshooting and administration.
    6. In the Pod communication section, select the Pod subnet.
      For example, pod-private-subnet.
      Select the private, regional subnet for pods (for VCN-native pod networking); ensure its route table points to a NAT gateway and necessary egress is allowed.
  7. Review your selections and create the cluster. Verify the VCN and subnet selections for Kubernetes API endpoint, Worker nodes, Service LB, and Pods.
    Confirm Node count is 3, Boot volume size is ≥ 50 GB, and the SSH public key is correct. Click Create cluster.
  8. Validate the configuration after creation.
    1. Open the Cluster details page and confirm the Kubernetes API endpoint, Worker nodes, and Pods are associated with the intended subnets.
    2. Check worker node Boot volume sizes in Block Storage or Compute details (≥ 50 GB).
    3. (Optional) SSH into a worker node using the uploaded SSH public key to verify access (subject to your access model).
For more information on creating clusters, see Custom Create Workflow to Create a Cluster in the OCI documentation.
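Once the cluster is Active, a quick pass from the shell can complement the console validation in step 8. The cluster OCID and region are placeholders:

```shell
# Sketch: fetch the kubeconfig for the new cluster, then inspect nodes and system pods.
oci ce cluster create-kubeconfig \
  --cluster-id "$CLUSTER_OCID" \
  --file "$HOME/.kube/config" \
  --region "$OCI_REGION" \
  --token-version 2.0.0

kubectl get nodes -o wide   # the 3 worker nodes should be Ready, with IPs in the Node subnet CIDR
kubectl get pods -A         # system pods should be Running
```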

Configure Dify Environment and Access

Configure the Dify environment and access to prepare the platform for deployment and initialization:

Environment Preparation

  1. Download the Dify Enterprise Edition installation package through Helm.
  2. Modify the values.yaml configuration.
  3. If you use self-managed PostgreSQL, Redis, and MinIO instances, enable the external component configurations, such as externalPostgres and externalRedis.
  4. Create a local PVC and allocate 50 GB of storage for data persistence.
  5. Run the kubectl commands to create these resources.
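The PVC step above can be sketched as a manifest applied with kubectl. The namespace and claim name are assumptions; oci-bv is the default block-volume storage class on OKE:

```shell
# Sketch: a 50 Gi PVC for Dify data persistence (names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dify-data
  namespace: dify
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi
EOF
```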

External Access Configuration

  1. Deploy a load-balancing Ingress Controller by running helm install ingress-nginx.
  2. After the Ingress obtains an external IP address, configure DNS.
  3. You can then access the Dify console and the enterprise management platform through the web to complete license registration, initial user creation, and plugin installation (for example, an LLM model plugin and a database query plugin).
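Steps 1 and 2 can be sketched as follows. The Helm repository URL is the ingress-nginx project's official one; the namespace is conventional:

```shell
# Sketch: deploy ingress-nginx; on OKE, its LoadBalancer Service provisions an OCI load balancer.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Watch for the EXTERNAL-IP to be assigned, then point your Dify DNS records at it.
kubectl get svc -n ingress-nginx ingress-nginx-controller -w
```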