Private Cloud Appliance Administrator Tasks
These prerequisite tasks must be performed by a Service Enclave administrator or by a Compute Enclave user that has authorization to create resources such as groups, policies, and tag namespaces.
Task | Description | Resources |
---|---|---|
Administration network | If you enable the appliance administration network, verify that the administration network and the data center network are configured to allow traffic to and from the cluster control plane. | Editing Administration Network Information in the Oracle Private Cloud Appliance Administrator Guide; Administration Network Configuration Notes in the Oracle Private Cloud Appliance Installation Guide; Access Configuration With Administration Network in the Oracle Private Cloud Appliance Security Guide |
Platform images | Platform images include the images required by OKE, which have Kubernetes installed on them. Platform images should be imported to all tenancies in the Compute Enclave during appliance installation, upgrade, or patching. If this was not done, a Service Enclave administrator must import the images. | Providing Platform Images in the Oracle Private Cloud Appliance Administrator Guide |
OKE users group | An OKE users group has a policy that authorizes its members to use OKE. | Creating an OKE Users Group |
OraclePCA-OKE defined tag | This tag is required to create or update an OKE cluster or node pool. The tag is used to identify instances that must be in a dynamic group. | Creating the OraclePCA-OKE.cluster_id Tag |
OKE dynamic group | The dynamic group authorizes its member instances to manage OKE resources. | Creating a Cluster Dynamic Group |
OraclePCA tags | These tags are used when creating a cluster. | |
Certificate Authority bundle | After upgrade, patching, or any other outage, or if the automated Certificate Authority bundle update fails, you might need to update the CA bundle manually on the management node. | Updating the Certificate Authority Bundle |
At least three free public IP addresses are required to use OKE on Private Cloud Appliance. Verify that free public IP addresses are available for the NAT gateway, the control plane load balancer, and the worker load balancer. For more information, see Creating OKE Network Resources.
In the Service Web UI, select PCAConfig > Network Environment > Public IPs > "Free Public IPs". In the Service CLI, enter the following command:
PCA-ADMIN> show networkConfig "Free Public IPs"
Creating an OKE Users Group
OKE users groups have a policy that authorizes their members to use OKE. You need to create separate OKE users groups to authorize different users to use OKE in different compartments.
See Creating and Managing User Groups in the Oracle Private Cloud Appliance User Guide to create a group or update an existing group.
Include the manage cluster-family authorization in the user group policy.
The following is an example policy for an OKE users group. Depending on your organization, for example if you have a separate team that manages network resources, some of the following "manage" authorizations could be "read" or "use" authorizations, or you might need to add authorizations.
allow group group-name to read all-resources in tenancy
allow group group-name to manage cluster-family in compartment compartment-name
allow group group-name to manage instance-family in compartment compartment-name
allow group group-name to manage network-load-balancers in compartment compartment-name
allow group group-name to manage virtual-network-family in compartment compartment-name
See Managing Policies in the Oracle Private Cloud Appliance User Guide.
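If you manage groups and policies from the command line rather than the Compute Web UI, the same result can be achieved with the OCI CLI. The following is a minimal sketch; the group name okeusers, the policy name oke-users-policy, the compartment name oke, and the tenancy OCID are placeholder values, and the statements should match the policy you settled on above.

# Create the OKE users group (name and description are example values).
oci iam group create --name okeusers --description "Users authorized to use OKE"

# Attach a policy that grants the group the manage cluster-family authorization.
oci iam policy create \
  --compartment-id ocid1.tenancy.UNIQUE_ID \
  --name oke-users-policy \
  --description "OKE users group policy" \
  --statements '["allow group okeusers to manage cluster-family in compartment oke",
    "allow group okeusers to manage instance-family in compartment oke"]'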
Creating a Cluster Dynamic Group
A dynamic group authorizes its member instances to manage OKE resources.
See Creating and Managing Dynamic Groups in the Oracle Private Cloud Appliance User Guide.
Enter the following matching rule to define the group:
tag.OraclePCA-OKE.cluster_id.value
All cluster nodes that have this tag are members of the dynamic group.
The following is an example policy for the dynamic group. In this example, oke_dyn_grp is the name of the dynamic group and oke is the name of the compartment where resources are created. Note that all policy statements are for the same compartment. If clusters in this group require access to resources in other compartments, change the policy accordingly. See Managing Policies in the Oracle Private Cloud Appliance User Guide.
allow dynamic-group oke_dyn_grp to manage file-family in compartment oke
allow dynamic-group oke_dyn_grp to manage volume-family in compartment oke
allow dynamic-group oke_dyn_grp to manage load-balancers in compartment oke
allow dynamic-group oke_dyn_grp to manage instance-family in compartment oke
allow dynamic-group oke_dyn_grp to manage virtual-network-family in compartment oke
allow dynamic-group oke_dyn_grp to use tag-namespaces in compartment oke
For information about the purpose of the use tag-namespaces policy, see Exposing Containerized Applications.
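As an alternative to the Compute Web UI or the Terraform example in the next section, the dynamic group can be created with the OCI CLI. This is a minimal sketch; the name and description are example values, and the policy statements listed above can then be attached with oci iam policy create as shown earlier.

# Create the dynamic group; membership is driven by the OraclePCA-OKE.cluster_id tag.
oci iam dynamic-group create \
  --name oke_dyn_grp \
  --description "PCA OKE worker dynamic group for instance principal" \
  --matching-rule "tag.OraclePCA-OKE.cluster_id.value"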
Using Terraform to Create a Dynamic Group
The following example shows how to use Terraform to create a dynamic group.
variables.tf
variable "oci_config_file_profile" { type = string default = "DEFAULT" } variable "tenancy_ocid" { description = "tenancy OCID" type = string nullable = false } variable "compartment_name" { description = "compartment name" type = string nullable = false } variable "oke_dyn_grp" { description = "Dynamic group that needs to be created for instance principal" default = "oke-dyn-ip-grp" } variable "oke_policy_name" { description = "Policy set name for dynamic group" default = "oke-instance-principal-policy" }
terraform.tfvars
# Name of the profile to use from $HOME/.oci/config
oci_config_file_profile = "DEFAULT"

# Tenancy OCID from the oci_config_file_profile profile.
tenancy_ocid = "ocid1.tenancy.UNIQUE_ID"

# Compartment name
compartment_name = "oke"

# Dynamic Group Name
oke_dyn_grp = "oke-dyn-ip-group"

# OKE Dynamic Group Policy Name
oke_policy_name = "oke-dyn-grp-policy"
provider.tf
provider "oci" { config_file_profile = var.oci_config_file_profile tenancy_ocid = var.tenancy_ocid }
main.tf
terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 4.50.0, <= 6.36.0"
      # If necessary, you can pin a specific version here
      # version = "6.36.0"
    }
  }
  required_version = ">= 1.1"
}
oke-dyn-grp.tf
resource "oci_identity_dynamic_group" "oke-dynamic-grp" { compartment_id = "${var.tenancy_ocid}" description = "PCA OKE worker dynamic group for instance principal" matching_rule = "tag.${oci_identity_tag_namespace.oracle-pca.name}.${oci_identity_tag.cluster-id.name}.value" name = "${var.oke_dyn_grp}" depends_on = [oci_identity_tag.cluster-id] }
oke-policy.tf
resource "oci_identity_policy" "oke-dyn-grp-policy" { compartment_id = "${var.tenancy_ocid}" description = "Dynamic group policies for OKE Resources" name = "${var.oke_policy_name}" statements = [ "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to manage load-balancers in compartment ${var.compartment_name}", "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to manage volume-family in compartment ${var.compartment_name}", "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to manage file-family in compartment ${var.compartment_name}", "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to manage instance-family in compartment ${var.compartment_name}", "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to manage virtual-network-family in compartment ${var.compartment_name}", "allow dynamic-group ${oci_identity_dynamic_group.oke-dynamic-grp.name} to use tag-namespaces in compartment ${var.compartment_name}" ] depends_on = [oci_identity_dynamic_group.oke-dynamic-grp] }
oke-tag-ns.tf
Create the OraclePCA-OKE.cluster_id tag, which is also described in Creating the OraclePCA-OKE.cluster_id Tag.
resource "oci_identity_tag" "cluster-id" { description = "Default tag key definition" name = "cluster_id" tag_namespace_id = "${oci_identity_tag_namespace.oracle-pca.id}" depends_on = [oci_identity_tag_namespace.oracle-pca] } resource "oci_identity_tag_namespace" "oracle-pca" { compartment_id = "${var.tenancy_ocid}" description = "Default Tag namespace for Oracle PCA OKE" name = "OraclePCA-OKE" }
Updating the Certificate Authority Bundle
The Certificate Authority (CA) bundle for this Private Cloud Appliance is downloaded and made available to a cluster when the cluster is created. The CA bundle includes the certificate, private and public keys, and other authorization information.
The CA bundle is automatically updated on the appliance when regular certificate rotation occurs or when the appliance is upgraded, for example.
When the CA bundle is updated on the appliance, it must also be updated on the local system, for example to enable use of cluster-api. This is similar to replacing the CA bundle in your ~/.oci configuration so that you can run OCI CLI commands.
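For example, if you copy the refreshed bundle to a local file such as ~/.oci/ca.pem (a hypothetical path), the OCI CLI can be pointed at it with the --cert-bundle option or the OCI_CLI_CERT_BUNDLE environment variable; check your OCI CLI version for the exact option names.

# Use the refreshed CA bundle for all OCI CLI commands in this shell (hypothetical path).
export OCI_CLI_CERT_BUNDLE=~/.oci/ca.pem

# Or pass the bundle for a single command.
oci iam compartment list --cert-bundle ~/.oci/ca.pem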
A process runs every hour to check the validity of the CA bundle and updates the CA bundle if necessary.
If you need to update the CA bundle between these hourly checks, the process can be run manually:
- Log on to the management node of the Private Cloud Appliance as a system administrator with root privilege.

- Get the name of an OKE pod. The following command lists the three OKE pods in the oke namespace:

  # kubectl get pod -n oke -l app=oke

- Run the command to update the CA bundle, using one of the oke-uniqueID pod names from the preceding step:

  # kubectl exec -it oke-6c4d85d6f-72fxs -n oke -c oke -- /usr/bin/pca-oke-cluster-tool
You can check Loki logs in Grafana for any errors that might have occurred when this process ran either automatically or manually. See "Accessing System Logs" in the Status and Health Monitoring chapter of the Oracle Private Cloud Appliance Administrator Guide.