Working with Cloud Resources in Compute Cloud@Customer Isolated

Compute Cloud@Customer Isolated can be considered a rack-scale deployment unit of OCI, built on the same APIs. The Compute Enclave is the user environment where cloud resources are created and managed. It's logically isolated from the infrastructure administration (Service Enclave) interfaces.

The Compute Enclave interfaces provide access in the same way as OCI. The CLI is identical to the OCI CLI, and the browser UI offers practically the same user experience. API support is also identical, but limited to the subset of cloud services that Compute Cloud@Customer Isolated offers.

The consistency of the supported APIs is a crucial factor in the compatibility between the public and private cloud platforms. It ensures that the core cloud services support resources and configurations in the same way. More specifically, Compute Cloud@Customer Isolated supports the same logical constructs for networking and storage, manages user identity and access in the same way, and offers the same compute shapes and images for instance deployment as OCI. As a result, workloads set up in a public cloud environment are easily portable to Compute Cloud@Customer Isolated and vice versa. However, due to the disconnected operating mode of the private cloud environment, workloads must be migrated offline.

User Interfaces for the Compute Enclave

Use the Compute Cloud@Customer Isolated Compute Web UI, OCI CLI, and API to create and manage resources.

Use the following links to learn how to access, configure, and use the various user interfaces.

Task

Links

To use a browser-based UI – sign in to the Compute Web UI.

Signing in to the Compute Web UI

To use a command line UI – install and configure the OCI CLI.

Installing and Using the OCI CLI

To use API operations – determine the correct endpoint to use with API operations.

Working with API Signing Keys

Virtual Cloud Networks

On Compute Cloud@Customer Isolated, networking enables you to set up virtual versions of traditional network components. The infrastructure that provides the necessary services to deploy cloud workloads is configured to operate within the network environment of your data center. During initialization, the appliance's core network components are integrated with your existing data center network design.

Before you create resources such as compute instances, plan and configure your networking resources carefully. Some networking resources are difficult to change. As you create instances, assign them to the appropriate Virtual Cloud Networks (VCNs) and subnets.

You can optionally configure VCN rules, gateways, additional VNICs, SR-IOV, DNS zones, steering policies, and enable IPv6 addresses.
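Because some networking resources are difficult to change later, it helps to plan VCN and subnet CIDR blocks before creating anything. The sketch below uses Python's standard `ipaddress` module to carve non-overlapping subnets out of a hypothetical VCN CIDR; the addresses are examples, not values required by the appliance.

```python
import ipaddress

def plan_subnets(vcn_cidr: str, new_prefix: int, count: int):
    """Carve `count` non-overlapping subnets of size /new_prefix
    from a VCN CIDR block."""
    vcn = ipaddress.ip_network(vcn_cidr)
    subnets = list(vcn.subnets(new_prefix=new_prefix))
    if count > len(subnets):
        raise ValueError("VCN CIDR is too small for the requested subnets")
    return [str(s) for s in subnets[:count]]

# Hypothetical VCN: carve three /24 subnets out of 10.0.0.0/16.
print(plan_subnets("10.0.0.0/16", 24, 3))
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```

Laying out subnets this way up front avoids overlapping CIDR errors when subnets are added to the VCN later.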

Virtual Cloud Networking Basics

No.

Task

Links

1

Review conceptual VCN and subnet information.

Managing VCNs and Subnets

2

Create a VCN.

Create a VCN

3

Create subnets in the VCN.

Create Subnets

4

Configure VCN rules and options.

Configure VCN Rules and Options

5

Identify the type of gateway that's appropriate for your network configuration, and create the gateway.

Configuring VCN Gateways

More Virtual Cloud Networking Features

Task

Links

Optionally, configure more VNICs. Each instance has a primary VNIC that's automatically created and attached.

Configuring VNICs

Each instance has at least one private IP address. You can optionally assign additional private IP addresses after the instance is created.

Managing Private IP Addresses

Manage public IP addresses. A public IP address can optionally be assigned to an instance, along with other networking features to enable the instance to communicate outside of the VCN, including to the data center network.

Managing Public IP Addresses

Optionally, enable IPv6 addresses. Instances can be configured for connectivity with the on-premises network using IPv6 addresses.

Enabling IPv6 Virtual Networking

Optionally, configure SR-IOV. Single root I/O virtualization (SR-IOV) technology enables instances to achieve low latency and high throughput simultaneously on one or more physical links. VCNs, DRGs, and instances must be configured and enabled for SR-IOV.

Configuring SR-IOV for Virtual Networking

Optionally, manage public DNS zones. The Domain Name System (DNS) lets computers use hostnames instead of IP addresses to communicate with each other.

Managing Public DNS Zones

Optionally, manage steering policies. Steering policies are a way to distribute access to a single fully qualified domain name across multiple servers.

Managing Traffic with Steering Policies
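The steering-policy idea described above, distributing access to one name across several servers, can be sketched as a weighted round-robin over configured answers. The IP addresses and weights below are hypothetical; this is a conceptual illustration, not the service's implementation.

```python
import itertools

def make_steering_resolver(answers):
    """Return a resolver that cycles through the configured answers
    in proportion to their weights (a simple weighted round-robin)."""
    # Expand each answer by its weight, then cycle deterministically.
    expanded = [ip for ip, weight in answers for _ in range(weight)]
    cycle = itertools.cycle(expanded)
    return lambda: next(cycle)

# Hypothetical policy: send twice as much traffic to 10.0.1.10.
resolve = make_steering_resolver([("10.0.1.10", 2), ("10.0.1.11", 1)])
print([resolve() for _ in range(6)])
# → ['10.0.1.10', '10.0.1.10', '10.0.1.11', '10.0.1.10', '10.0.1.10', '10.0.1.11']
```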

Compute Instances and Images

A compute instance is a virtual machine (VM), which is an independent computing environment that runs on top of physical hardware. The virtualization makes it possible to run multiple compute instances that are isolated from each other.

When you create a compute instance, you can select the most appropriate type of compute instance for your applications based on characteristics such as the number of CPUs, amount of memory, and network resources.

After you create a compute instance, you can access it securely from your computer, restart it, attach and detach volumes, and delete it when you're done with it.

Learn About the Compute Service

Task

Links

Learn about compute instance concepts, required components, boot volumes, and storage options.

Compute Instances

Learn about compute images. An image is a template of a virtual hard drive. The image provides the OS and other software for an instance. You specify an image to use when you create an instance.

Images for Instances

Learn about shapes. A shape is a template that determines the number of OCPUs, amount of memory, and number of VNICs that are allocated to an instance. You specify a shape when you create an instance.

Compute Shapes

Learn how to create an instance with required components by performing guided steps in a tutorial.

Tutorial: Launching Your First Instance

Optionally, configure instances for calling services. Instances can be configured to enable applications running on the instance to call services and manage resources.

Configuring Instances for Calling Services

Ways to Create and Manage Instances

Task

Links

One at a time: Create and manage instances individually using the Compute Web UI, OCI CLI, or API.

Working with Instances

Instance configurations: Instance configurations enable you to create consistent instances with the same configuration without reentering configuration values.

Working with Instance Configurations

Instance pools: An instance pool defines a set of instances which are managed as a group. Managing instances as a group enables you to efficiently provision instances and manage the state of instances.

Instance Pools

Container instances: A Container Instance is a serverless compute service that enables you to quickly and easily run containers without managing any servers. Container Instances run your containers on minimal compute instances that are optimized for container workloads.

Container Instances

Kubernetes Engine (OKE): The OKE service uses Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes groups the containers that make up an application into logical units called pods for easy management.

Kubernetes Engine (OKE)
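The value of instance configurations, one stored set of values stamped out into many consistent instances, can be sketched like this. The shape and image names are placeholders, and the dictionaries stand in for real resource definitions.

```python
def launch_from_config(config: dict, display_name: str) -> dict:
    """Create an instance record from a stored instance configuration,
    overriding only the per-instance display name."""
    return {**config, "display_name": display_name}

# Hypothetical stored configuration (shape and image names are made up).
config = {"shape": "VM.Standard3.Flex", "image": "Oracle-Linux-9", "ocpus": 2}
a = launch_from_config(config, "web-1")
b = launch_from_config(config, "web-2")
print(a["shape"] == b["shape"], a["display_name"], b["display_name"])
# → True web-1 web-2
```

Every instance created from the same configuration shares the same shape, image, and sizing, with no values reentered by hand.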

Working with Instances After Creation

Task

Links

Connect to an instance: You can connect to a running instance by using a Secure Shell (SSH) or Remote Desktop connection.

Connecting to a Compute Instance

Back up instances: You can back up instances and save them to another server for safekeeping. When needed, you can import the backup into an Object Storage bucket, and use it to create instances.

Backing Up and Restoring an Instance

Block Volumes

Block Volumes provide high-performance network storage capacity that supports a broad range of I/O intensive workloads.

A block volume is a detachable block storage device that enables you to dynamically expand the storage capacity of your compute instances, provide durable and persistent data storage that can be migrated across compute instances, and host large databases.

The Block Volume service enables you to group multiple volumes in a volume group. Volume groups simplify the process to create backups and clones.

You can create, attach, connect, and move volumes, and change volume performance to meet your storage, performance, and application requirements.

After you attach and connect a volume to a compute instance, you can use the volume like a regular hard drive. You can also disconnect a volume and attach it to another compute instance without the loss of data.

Task

Links

Learn about the Block Volume service, required components, access types, and performance options.

Block Volume Storage

Learn about volume backups and clones.

Volume Backups and Clones

Create and attach a block volume to an instance to expand the available storage on the instance.

Creating and Attaching Block Volumes

Manage block volumes. You can list, edit, move, clone, detach, and delete block volumes.

Managing Block Volumes

Manage boot volumes. When you create an instance, a new boot volume for the instance is created in the same compartment and attached to the instance. You can list, detach, reattach, back up, clone, and delete boot volumes.

Managing Boot Volumes

Optionally expand the size of a block volume. You can't reduce the size.

Resizing Volumes

Optionally, organize multiple volumes into a volume group. A volume group can include both block and boot volumes.

Managing Volume Groups

Optionally, make a point-in-time snapshot of the data on a block or boot volume. These backups can then be restored to new volumes any time.

Backing Up Block Volumes

Perform volume backups and volume group backups automatically using a schedule, and retain them based on the retention setting in the backup policy.

Managing Backup Policies
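The retention behavior of scheduled backup policies can be illustrated with a small sketch that decides which backups fall outside a retention window. The backup names, dates, and retention period are made-up examples, not values from the service.

```python
from datetime import datetime, timedelta

def expired_backups(backups, retention_days, now):
    """Return the backups that fall outside the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created in backups if created < cutoff]

now = datetime(2024, 6, 30)
backups = [
    ("daily-0601", datetime(2024, 6, 1)),
    ("daily-0615", datetime(2024, 6, 15)),
    ("daily-0629", datetime(2024, 6, 29)),
]
print(expired_backups(backups, retention_days=20, now=now))
# → ['daily-0601']
```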

File Systems

The File Storage service provides a durable, scalable, secure network file system. You can connect to a File Storage service file system from any instance in your Virtual Cloud Network (VCN).

Task

Links

Learn about the File System service, supported protocols, required objects, paths, security rules, and network ports.

File Storage

Create a file system, mount target, and export.

Creating a File System, Mount Target, and Export

Before you can mount a file system, you must configure security rules to allow traffic to the mount target's VNIC using specific protocols and ports.

Controlling Access to File Storage

Instance users of UNIX based operating systems, such as Linux and Oracle Solaris, can use OS commands to mount and access file systems.

Mounting File Systems on UNIX-based Instances

You can make file systems available to Microsoft Windows instances by mapping a network drive to the mount target IP address and export path provided by the File Storage service. You can accomplish this task using NFS or SMB protocols.

Mounting File Systems on Microsoft Windows Instances

Manage mount targets and exports. For an instance to mount a file system, the instance's VCN must have a mount target. You can reuse the same mount target to make as many file systems available on the network as you want. To reuse the same mount target for multiple file systems, create an export in the mount target for each file system.

Managing Mount Targets and Exports

Manage file systems. A file system in the File Storage service represents a network file system that's mounted by one or more clients. Data is added to a file system from the client.

Managing File Systems

Optionally create and manage file system snapshots. Snapshots are a consistent, point-in-time view of your file systems.

Managing File System Snapshots

Optionally clone a file system. A clone is a new file system that is created based on a snapshot of an existing file system.

Managing File System Clones

Object Storage

The Object Storage service provides reliable and cost-efficient data durability.

The Object Storage service stores unstructured data of any content type, including analytic data and rich content, such as images and videos. The data is stored as an object in a bucket. Buckets are associated with a compartment within a tenancy. You can safely and securely store and retrieve data directly from the internet or from within Compute Cloud@Customer Isolated.

Task

Links

Learn about Object Storage resources such as objects, buckets, and namespaces.

Object Storage

Create and manage buckets. A bucket is a container for storing objects in a compartment within an Object Storage namespace.

Managing Object Storage Buckets

Create and manage storage objects. This includes uploading, downloading, and deleting objects in a bucket.

Managing Storage Objects

Manage object versioning. Object versioning provides data protection against accidental or malicious object update, overwrite, or deletion.

Managing Object Versioning

Use preauthenticated requests to let users access a bucket or an object without having their own credentials, as long as the request creator has permissions to access those objects.

Using Preauthenticated Requests

Define retention rules to provide immutable storage options for data written to Object Storage for data governance, regulatory compliance, and legal hold requirements. Retention rules can also protect data from accidental or malicious writes or deletion.

Defining Retention Rules
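The object-versioning behavior described above, where an overwrite retains prior versions instead of destroying them, can be sketched with a minimal in-memory model. This illustrates the semantics only, not the service's API.

```python
class VersionedBucket:
    """In-memory sketch of bucket versioning semantics: overwriting an
    object keeps prior versions instead of destroying them."""
    def __init__(self):
        self._versions = {}  # object name -> list of version bodies

    def put(self, name, body):
        self._versions.setdefault(name, []).append(body)

    def get(self, name):
        return self._versions[name][-1]  # latest version

    def list_versions(self, name):
        return list(self._versions[name])

b = VersionedBucket()
b.put("report.csv", "v1 data")
b.put("report.csv", "v2 data")  # overwrite: v1 is retained, not lost
print(b.get("report.csv"), b.list_versions("report.csv"))
# → v2 data ['v1 data', 'v2 data']
```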

Load Balancers

Load balancing is the method of sharing a workload equally among servers. It prevents any single server from being overwhelmed by client requests.

The Load Balancer service provides automated traffic distribution from one entry point to multiple servers reachable from your virtual cloud network (VCN). The service offers a load balancer with your choice of a public or private IP address.

Two major types of load balancers are available on Compute Cloud@Customer Isolated:

  • Load Balancer as a Service (LBaaS) – This type of load balancer operates at all protocol layers, up to and including the application layer. When the term "load balancer" (LB) appears without qualification, it refers to LBaaS.
  • Network Load Balancers (NLB) – This type of load balancer operates on protocol layers below the application itself, at the Network Layer. The term "network load balancer" (NLB) always refers to a network load balancer, not to LBaaS.

For more information about the differences between the two types of load balancers, see Load Balancers. See also the following tables.
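The core idea behind both types, spreading client traffic across a backend set while honoring health checks, can be sketched as a round-robin selector that skips unhealthy backends. The backend addresses and the health predicate below are hypothetical; this is a conceptual model, not either service's implementation.

```python
import itertools

def make_balancer(backends, healthy):
    """Round-robin over the backend set, skipping backends whose
    health check is currently failing."""
    cycle = itertools.cycle(backends)
    def pick():
        for _ in range(len(backends)):
            candidate = next(cycle)
            if healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends")
    return pick

# Hypothetical backend set; 10.0.2.11 is failing its health check.
down = {"10.0.2.11"}
pick = make_balancer(["10.0.2.10", "10.0.2.11", "10.0.2.12"],
                     lambda b: b not in down)
print([pick() for _ in range(4)])
# → ['10.0.2.10', '10.0.2.12', '10.0.2.10', '10.0.2.12']
```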

Load Balancer as a Service (LBaaS)

Task

Links

Learn how LBaaS automatically distributes network traffic.

Load Balancer as a Service

Learn about LBaaS frontend configurations, private and public load balancers, listeners, cipher suites, and session persistence.

Frontend Configuration

Manage load balancers. You can create, view, edit, and delete load balancers.

Managing Load Balancers

Manage cipher suites. You can use cipher suites with a load balancer to determine the security, compatibility, and speed of HTTPS traffic.

Managing Cipher Suites

Import and manage SSL certificates.

Load Balancer SSL Certificates

Manage backend sets. The term backend refers to the components that receive, process, and respond to forwarded client requests.

Managing Backend Sets

Manage backend servers. When creating a load balancer, you must specify the backend servers to include in each backend set.

Managing Backend Servers

Manage virtual hostnames. You can use virtual hostnames with a load balancer for one or more listeners.

Managing Virtual Hostnames

Manage path route sets. You can apply a set of path routes to a load balancer (LB) to determine the appropriate destination backend set for incoming URIs.

Managing Path Route Sets

Manage listeners. You can use listeners to check for incoming traffic on the load balancer IP address.

Managing Listeners

Check the health of the load balancer with health check tests.

Health Checks
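Path route sets as described above map URI prefixes to backend sets. A common strategy, sketched below under the assumption of longest-prefix matching, picks the most specific matching route; the route paths and backend set names are hypothetical.

```python
def route_backend_set(path_routes, uri_path):
    """Pick the backend set whose route path is the longest prefix
    of the incoming URI path (a common path-routing strategy)."""
    matches = [(route, bs) for route, bs in path_routes
               if uri_path.startswith(route)]
    if not matches:
        raise LookupError("no matching path route")
    return max(matches, key=lambda m: len(m[0]))[1]

# Hypothetical path route set mapping URI prefixes to backend sets.
routes = [("/", "web-servers"),
          ("/api", "api-servers"),
          ("/api/v2", "api-v2-servers")]
print(route_backend_set(routes, "/api/v2/users"))
# → api-v2-servers
print(route_backend_set(routes, "/images/logo.png"))
# → web-servers
```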

Network Load Balancers (NLB)

Task

Links

Learn about network load balancers. You can configure the Network Load Balancing (NLB) feature to automatically distribute network traffic.

Network Load Balancers

Create and manage network load balancers.

Managing Network Load Balancers

Manage NLB backend sets. You can use backend sets to create logical entities consisting of an NLB policy, health check policy, and a list of backend servers for an NLB resource.

Managing NLB Backend Sets

Create and manage NLB backend servers. Backend servers receive incoming traffic based on the policies you specified for the backend set.

Managing NLB Backend Servers

Create and manage NLB listeners. Listeners check for incoming traffic on the network load balancer IP address.

Managing NLB Listeners

Check NLB health.

Checking NLB Health

View NLB work requests to see if any errors occurred.

Viewing NLB Work Request Errors
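A network load balancer works at the network and transport layers, so it typically keeps each connection on one backend by hashing the flow's 5-tuple. The sketch below illustrates that idea with hypothetical addresses; it is not the NLB service's actual algorithm.

```python
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, proto):
    """Hash the connection 5-tuple to choose a backend, so all packets
    of one flow consistently reach the same server."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

backends = ["10.0.3.10", "10.0.3.11", "10.0.3.12"]
first = pick_backend(backends, "192.0.2.7", 51515, "10.0.0.5", 443, "tcp")
# The same flow always maps to the same backend:
assert first == pick_backend(backends, "192.0.2.7", 51515, "10.0.0.5", 443, "tcp")
```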

Kubernetes Engine (OKE)

Kubernetes Engine (OKE) is a scalable, highly available service that you can use to deploy containerized applications to Compute Cloud@Customer Isolated.

The OKE service uses Cluster API Provider (CAPI) and Cluster API Provider for Oracle Cloud Infrastructure (CAPOCI) to orchestrate the cluster on the appliance.

The OKE service uses Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes groups the containers that make up an application into logical units called pods for easy management.

For more information about Kubernetes in Oracle, see What Is Kubernetes? For more general information about Kubernetes, see the Kubernetes site.

Task

Links

Learn which versions of Kubernetes and Terraform providers are supported, and what the OKE service limits are on Compute Cloud@Customer Isolated.

Kubernetes Engine (OKE)

Understand the distinct workflows for different types of administrators. Learn what is required of the administrators to enable and configure OKE.

OKE Workflow

Follow OKE best practices for best results using OKE clusters.

OKE Best Practices

Create OKE network resources.

Creating OKE Network Resources

Create, update, or delete OKE clusters. You can create either a public cluster or a private cluster.

Creating and Managing OKE Clusters

Optionally install and manage OKE cluster add-ons. Cluster add-ons extend core Kubernetes functionality and improve cluster manageability and performance.

Managing OKE Cluster Add-ons

Create, update, and delete node pools for an OKE cluster. Learn how to recognize node pool nodes in a list of all instances in a tenancy, and how to delete a single node from a node pool.

Creating and Managing OKE Worker Node Pools

Expose containerized applications. You expose applications so that worker node applications can be reached from outside the infrastructure.

Exposing Containerized Applications

Add persistent storage for use by applications on an OKE cluster node. Storage created in a container's root file system is deleted when you delete the container. For more durable storage for containerized applications, configure persistent volumes to store data outside of containers.

Adding Storage for Containerized Applications
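The persistent-storage pattern in the last task above is commonly expressed as a PersistentVolumeClaim manifest. The sketch below is a minimal example: the claim name, size, and storage class are hypothetical and depend on what is configured on the OKE cluster.

```yaml
# Minimal PersistentVolumeClaim sketch; name, size, and storage class
# are hypothetical and depend on the cluster's configured CSI drivers.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: oci-bv
```

A pod that mounts this claim keeps its data when the container is deleted, unlike storage created in the container's root file system.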