Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Introducing Kubectl with Oracle Cloud Native Environment
Introduction
Although graphical tools can manage Kubernetes, many administrators prefer to use command-line tools. The command line tool provided within the Kubernetes ecosystem is called kubectl. Kubectl is a versatile tool used to deploy and inspect the configurations and logs of the cluster resources and applications. Kubectl achieves this by using the Kubernetes API to authenticate with the Control Node of the Kubernetes cluster to complete any management actions requested by the administrator.
Most kubectl operations give administrators the ability to deploy and manage applications on the Kubernetes cluster and to inspect and manage the cluster's resources. Because this tutorial is an introduction to kubectl, it focuses only on using kubectl to discover what Kubernetes resources are available in a new install of Oracle Cloud Native Environment.
Note: Many kubectl commands accept the --all-namespaces option. A shorthand for this option is the -A flag, and this tutorial uses -A instead of --all-namespaces in most instances.
Objectives
This tutorial introduces kubectl, a widely used command-line tool for interacting with a Kubernetes cluster. It begins with why kubectl is worth learning and then introduces some basic kubectl commands that help you understand the services deployed on a Kubernetes cluster, for example:
- location of the configuration file
- kubectl config
- kubectl get
- kubectl describe
- kubectl version
Most of these commands provide both getter (to review settings) and setter (to define/update settings) functions. This tutorial only uses kubectl to view the current configuration information.
Prerequisites
-
Minimum of a 3-node Oracle Cloud Native Environment cluster:
- Operator node
- Kubernetes control plane node
- Kubernetes worker node
-
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts
- Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
-
Open a terminal on the Luna Desktop.
-
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
-
Change into the working directory.
cd linux-virt-labs/ocne
-
Install the required collections.
ansible-galaxy collection install -r requirements.yml
-
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Locate the Configuration File
Before looking at how to use kubectl to investigate a Kubernetes environment, it helps to know where to find an installer for your platform. However, this is optional for the tutorial because kubectl is already present.
-
Open a terminal and connect via ssh to the ocne-control-01 node.
ssh oracle@<ip_address_of_node>
-
The configuration file for kubectl is in the $HOME/.kube directory on a control plane node. Whilst it is not usually updated, it helps to know where to locate the configuration file(s), especially if more than one Kubernetes environment needs managing.
cat $HOME/.kube/config
Example Output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EUXhOekUwTXpJeU9Gb1hEVE16TURReE5ERTBNekl5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSzR6CmJ3M0pCMTZrSUkzZHl6dndJYjdpNGxmWW9QUVc5cnF2Q1BTZ09lOUdDbThxWnJwWENSdGNrd3FsZlp0MnM2UncKTm1HdkRQTFJIcUZyNHBtTjV6Nk5vMzJzSnJ0MUlVOC9jb2NWRlRqQnA3Y3N1bmtYM0JBa3gzRm1sWmg2cjZOWQpOdUtyRWZJZ3ZZYnBVNS9KVHVyd0kwWGNKeTJUdUg4SkRxNjhWbkFwTXJCbEt6eVJpdTlpZUhKdEtyTGxpdVlWCkxOaFZxVi9lWU1JZDBvZlUvb3VKMEVnSnZ3cUduc1RJRmxiRlVaOGRWay8vT3E2WnJwT2E2dDJCanUzWm00VisKZEVwYUtjK2RNQnVRQVBoWHAwMFhtOUZLMUE3RVNaTWl5SkZUNHRtT0hSZmt3cnVuN2lSQ3FlcWdQQTU4amFCKwpjeVFoSlFWY3hIdWlIWXVTZVNrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOV2JSTWRaV0JJeWVvaGpibExtNTR4OUxtc3BNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRkZuTGRsYjF2RFE1QkMvRHhjZwpKSHU3TU5CaFFqbWQ4anFscTdlNnZqaVd3TVRnNE9kUE11a05GTVV0aEI0eUxOY1Y5c3JNZG8vSmVjMkRqa2Z5CkVHOW1ZMWJlTkZrQlRCN0ZyK3FTYmFReFpvazhxeDNUdTJhdVZrZ1ZjWXpTZk5Nc1BjYlpGakM2SzMzVDZiNGQKZ3BDWmZFWUFhTXhkcWcvMUk0R3ZZU0duaVAvTWw2b2FmdnBad2VwVjRUTWJEaGZMdGpNZmlnOHZqdkhEKzdwOAo1VnM4MW9VM1hMS3c4NFRqazhzTVBuQ3FuMzQ0aWMrT3lwWUdvdG9HeW1ZMHBLQUhkRXduOTE5c3ZwRFVITGY1Ck81a0tDNWtPRjJkYkYxVndkck0rVXlwdyt4Q2JFMGs2VEFmUlFxbGtDUlo0NGNwam5JckpaNXVZTUVDYWlMN1UKMTIwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.0.0.151:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJQXpkRCtyNlJTbHN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBME1UY3hORE15TWpoYUZ3MHlOREEwTVRZeE5ETXlNamxhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBoRU9DNFpoRDczSkZzQVYKZ0E2S3VBZkRNeFV6REdPL0xVeXhFU043dGJFV3haSHVFNCt1NWpsaFFOSk4zd1hvTEVlWEhad1ZITjhKWDR2UwpnZ0R3SDdHYklTTWNFcmYvUXNDd3hjZU1EVldTWWo2Tyt1eFh2Sm8rYUNJcE5jd05aVXFnSnpaS1YvTjg1OG4rCllVMUw5aHdFSkpMUTdqYjF4YXBUMGRIc0hkSzlxa3JXOUFMbUpicVV3RDVGM1JwemZqdjdWTDE4TnRwekIwK3kKN2FYSFRhLzQ2bmIrNjlVTWtJVUd3c29kYlI4WXJrQVBZUlFjZWY4Sno2TFBCdlhubUorVXNLajJqRVhURzVhLwozYnJKdG5EcWVCaHBIRGNEQjZya0xzNFpwblBoeHFRZ2RKdE5KRjd6M1hjS1RnSFVSeDA3eGtkVDdIY0c1K05GCmJoYnU2UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUVm0wVEhXVmdTTW5xSVkyNVM1dWVNZlM1cgpLVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVVJvVzBkN1VpZmF0MC9BK0o5MFBIOFJNV3RrS1RYTUhZM3BlCnJaZFVGdWlrWThiSDlDbWR0Q3pBSC9qblIxbjZwY0dQSWtBUlZqME14anRCYTl6SGpXUDBMbm91YmZ2THpwY2YKQmxsVXU4Ly8vcXA3c05zOHg4QUUxamFaOFlqQjY0aUFkcWFQR3pFbjliNHpWZzFSSTlxNThJT1d2THRVbVhXTwordzhsRzFNYUNUWGVOTTlUTGhrMDVkelhOYkNTZ3JET3BIdmplWGhrZndraUxFK1N6cm9FMktYR3JhMTdtMlZDCkVSaHhFc3BhWiswZHFIZDM1RTN6MW05YU1aWFAxdDYyUXpRRW9lQkszckdFY1dka0xORG5SQ2FzTWZTaDMwUDQKWGsxakJaSWd5Z2o0MHRiSEx5UlNnV0ZlSUN6eS90L3B1ZjgwUHQ0TG5zUFBvT05oc1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMGhFT0M0WmhENzNKRnNBVmdBNkt1QWZETXhVekRHTy9MVXl4RVNON3RiRVd4Wkh1CkU0K3U1amxoUU5KTjN3WG9MRWVYSFp3VkhOOEpYNHZTZ2dEd0g3R2JJU01jRXJmL1FzQ3d4Y2VNRFZXU1lqNk8KK3V4WHZKbythQ0lwTmN3TlpVcWdKelpLVi9OODU4bitZVTFMOWh3RUpKTFE3amIxeGFwVDBkSHNIZEs5cWtyVwo5QUxtSmJxVXdENUYzUnB6Zmp2N1ZMMThOdHB6QjAreTdhWEhUYS80Nm5iKzY5VU1rSVVHd3NvZGJSOFlya0FQCllSUWNlZjhKejZMUEJ2WG5tSitVc0tqMmpFWFRHNWEvM2JySnRuRHFlQmhwSERjREI2cmtMczRacG5QaHhxUWcKZEp0TkpGN3ozWGNLVGdIVVJ4MDd4a2RUN0hjRzUrTkZiaGJ1NlFJREFRQUJBb0lCQUNMNTgveTNRekg3eDUraQpHL0pXZVlKcXlIV1k4Z2IrRkxiV0xpVk1ZeXk5YjYwMXZ3NUN2anhYRVhwWmlkMjRmZy9oVzZmeWRSRjVrWFgzCk1mV3pja2ZVcXArNTJOTEZFQnR0T2dHMFYvMWdZaDg1aTFUOFJSK0NEeUlIamhVSEJMUDQyUEd1ZUhKc2VEK2YKd2xzeEk1UzIxWG5CZUVneG5ucHJBY25OeWlLc3h1bngydWk3Z2FxY1BZbVVUWDlrTzN5Vnp4QlZMT1VUUzErRgoxU3R3VHoxQVBQT1didTV0aWJTNG03NUhINjIrVnlWVFV0OUdmVEdaRUlTK3lidTArZVg5MFBqdjAzZ3lyWExHCjFmRmdkOTRmbkhZZlJCb0V1WmhuSXRMVVVEMXBwa1F0czY4bnpZeUdOZFV2Q25rMk5zVUM5cXlLMVJ6Z0EyNGUKWnAveFFBRUNnWUVBN05XbjF4R0lqK1gzQ1RieFVnT25Mc21SVTVvaThwOVFhNUt1clh1WnlGNVBkVkxLUUVXbgpjWTZwK3ZxY3k5QWJkY1p4azZTY0RBRUF2NmkrT3VOajg0dU4wTUdTaCtycmhLaE1sMVN1NmFZK0xTZXdwelV0CjVUNkU3N21MQzM3N3IzNEloTnhCWGNqMitodkhsRkV6RFk1Y1pObnZaL1NieG54NTdOckNrS2tDZ1lFQTR4RGQKY1Y5VERna0NhbDRYRysxTjJoVWEvZXROUnVsZFdaaENzYlY3K3R6L0RnTlRXUytsS0JlU2x1R2lleW95MFYxdApFako5WnY0YUJHS1hxZ29UVXNMdzhmSVNZeTVZckNuK0lUWHdvY2tVRmlRMldEVG5uN3JkL3h3aDgzWndtaytsCmp1Z0k3dUZTNWxoMzZpaFZpSVhadHdkVm9YYjhSdGc1a0NzckZFRUNnWUJJWG5rdEZPUi81Q3Q2bTFsZVVGTnoKenBBajFjTzhFOGFGT0lzNzQ3cjRLU2xxbG1QTEEvM0lpMm1Sa2FiNytKbUxnWm9QSFl1NWQwejlROWp0TWJMSApKdXVzMEptd0FxNzVHRnhmR2JkaEdqV0JvdEV1SnVmaFZ4dFVEWVJaZlBIM2pER2FONXVaeHVFQlNCL1NTSVdyCkxNYzY0Z1Z2NUtUOUgrZzU0aGIyRVFLQmdHMnFJNGt4NU1jT2l1QWNlVVMvbzY0RUsza2ZQNzlUemdZTGg0cVUKZ0VCMG82cDg2TEJXVm9tNmVNM3VRNjhBZm5LbmtKb05VSXVCaGNkQVpzZDAva2dtWm9BenpiV2hHS3B0elpMMApuamRGQ2pKM1l0ZlBGVjhMdlZRTW5ra2JsdDZ3UU9GNEozaFgwdFgxUEZVWERkaFY3UVI4d2xxdFFNSm1nOGFoClVya0JBb0dCQUtqQ2xMNzlab2dVN3lQMGRWQ0pmVytvdFZCM2JZY01vdnp2Y3V0WWFvMEVPTDVEcVRpdnYvUWQKc3J2RzMrY2RIY0FKUFZPWWl3d3c3N0V5ZGJxdXNuY0VVL1VuemdON3B5L2lUS3hDNlZydmVVOVhvUW04T0RsYgpaRGlzTVp5REtVZkdQY0lhZjg0VVY0THhENlZMY2xoSzhaZzV3UWdaTHh4akxPMXBGTVdHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
-
Note that the .kube/config file contains three main sections:
- clusters: the HTTPS endpoint for the Kubernetes cluster.
- users: the user(s) allowed to connect/authenticate to the Kubernetes cluster.
- contexts: combines the user and cluster information to enable kubectl to connect to the Kubernetes cluster.
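These three sections combine to tell kubectl where to connect and as whom. As a rough illustration (not kubectl's actual code), the following Python sketch resolves a context to a cluster endpoint and user, using the example values from the output above:

```python
# Illustrative sketch only: how a kubeconfig context ties a cluster to a user.
# The structure and values mirror the example .kube/config shown above.
kubeconfig = {
    "clusters": [
        {"name": "kubernetes",
         "cluster": {"server": "https://10.0.0.151:6443"}},
    ],
    "users": [
        {"name": "kubernetes-admin",
         "user": {"client-certificate-data": "REDACTED"}},
    ],
    "contexts": [
        {"name": "kubernetes-admin@kubernetes",
         "context": {"cluster": "kubernetes", "user": "kubernetes-admin"}},
    ],
    "current-context": "kubernetes-admin@kubernetes",
}

def resolve_context(cfg, name=None):
    """Return the (server, user) pair a context points at."""
    name = name or cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == name)
    server = next(c["cluster"]["server"]
                  for c in cfg["clusters"] if c["name"] == ctx["cluster"])
    return server, ctx["user"]

print(resolve_context(kubeconfig))
```

Keeping this mapping in mind makes the kubectl config commands in the next section easier to read.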
View Context and Configuration Information
An easier way to view the information in the .kube/config
file is to use the kubectl config command.
-
View the configuration using kubectl.
kubectl config view
Example Output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.151:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
This presents the same information shown in the previous step. However, these fields are not presented in full:
certificate-authority-data
client-certificate-data
client-key-data
Instead, the certificate information itself is obfuscated and replaced with these placeholders:
- DATA+OMITTED
- REDACTED
Why the difference between DATA+OMITTED and REDACTED? The client-certificate-data and client-key-data information is sensitive and therefore needs to be kept secret (REDACTED). However, because the certificate-authority-data is a public certificate, it is not a secret. So, to differentiate between them, this data is listed as DATA+OMITTED instead (see this GitHub Issue for more information).
NOTE: The non-redacted information can be displayed using kubectl and appending either of these flags:
- --raw - Displays all raw byte and sensitive data
- --flatten - Flattens the resulting kubeconfig file into self-contained output that can be used for creating portable kubeconfig files (which are not covered here)
-
Review the details of all defined contexts in the active configuration file.
kubectl config get-contexts
Example Output:
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
Note: If the configuration contains more than one context in the configuration file, the current default context is indicated with an asterisk (*). The tutorial environment has only one context defined.
-
Display the default context currently in use.
kubectl config current-context
Example Output:
kubernetes-admin@kubernetes
Install Auto-Completion
Kubectl provides support for auto-completion, which in turn helps to reduce the amount of typing required at the command line. However, this functionality requires bash-completion
to be present.
-
Install the Bash completion package.
sudo dnf install -y bash-completion
Depending on your environment, this package may already be installed.
-
Enable auto-complete for kubectl in the Bash shell.
source <(kubectl completion bash)
-
Add auto-complete permanently to your Bash shell.
echo "source <(kubectl completion bash)" >> ~/.bashrc
-
By default, auto-completion will become available after reloading your shell. However, it is possible to enable auto-completion in the current shell session by sourcing the ~/.bashrc file.
source ~/.bashrc
-
Confirm bash-completion for kubectl is enabled.
Enter the command at the terminal prompt, but DO NOT press the Enter key. Instead, press the Tab key twice. The available sub-commands should be listed.
kubectl
Example Output:
alpha          cluster-info   diff        label          run
annotate       completion     drain       logs           scale
api-resources  config         edit        options        set
api-versions   cordon         exec        patch          taint
apply          cp             explain     plugin         top
attach         create         expose      port-forward   uncordon
auth           debug          get         proxy          version
autoscale      delete         help        replace        wait
certificate    describe       kustomize   rollout
Lookup the Kubernetes Version
Before working with an Oracle Cloud Native Environment installation, the administrator can use these steps to gather some basic information about the topology and Kubernetes version used.
-
Display the short version of the Kubernetes version information.
kubectl version
Example Output:
Client Version: v1.28.3+3.el8
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3+3.el8
-
Return more detailed information about the deployed Kubernetes version in YAML format.
kubectl version --output=yaml
Example Output:
clientVersion:
  buildDate: "2023-12-21T20:45:11Z"
  compiler: gc
  gitCommit: 77af7ebc37700e2397f5549ebb0f950b3e4ceaf9
  gitTreeState: clean
  gitVersion: v1.28.3+3.el8
  goVersion: go1.20.10
  major: "1"
  minor: "28"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
serverVersion:
  buildDate: "2023-12-21T20:54:37Z"
  compiler: gc
  gitCommit: 77af7ebc37700e2397f5549ebb0f950b3e4ceaf9
  gitTreeState: clean
  gitVersion: v1.28.3+3.el8
  goVersion: go1.20.10
  major: "1"
  minor: "28"
  platform: linux/amd64
-
Return the detailed information in JSON format.
kubectl version --output=json
Example Output:
{
  "clientVersion": {
    "major": "1",
    "minor": "28",
    "gitVersion": "v1.28.3+3.el8",
    "gitCommit": "77af7ebc37700e2397f5549ebb0f950b3e4ceaf9",
    "gitTreeState": "clean",
    "buildDate": "2023-12-21T20:45:11Z",
    "goVersion": "go1.20.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v5.0.4-0.20230601165947-6ce0bf390ce3",
  "serverVersion": {
    "major": "1",
    "minor": "28",
    "gitVersion": "v1.28.3+3.el8",
    "gitCommit": "77af7ebc37700e2397f5549ebb0f950b3e4ceaf9",
    "gitTreeState": "clean",
    "buildDate": "2023-12-21T20:54:37Z",
    "goVersion": "go1.20.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
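One practical use of the machine-readable formats is checking the client/server version skew, since Kubernetes supports a kubectl client within one minor version of the API server. As a hedged sketch, the following Python snippet parses an abridged copy of the example JSON output above:

```python
import json

# Abridged copy of the example `kubectl version --output=json` output above.
version_json = """
{"clientVersion": {"major": "1", "minor": "28", "gitVersion": "v1.28.3+3.el8"},
 "serverVersion": {"major": "1", "minor": "28", "gitVersion": "v1.28.3+3.el8"}}
"""

info = json.loads(version_json)
# The "minor" field can carry a trailing "+" on some builds, so strip it.
client = int(info["clientVersion"]["minor"].rstrip("+"))
server = int(info["serverVersion"]["minor"].rstrip("+"))
skew = abs(client - server)
print(f"client v1.{client}, server v1.{server}, skew {skew}")
```

In this lab the client and server match exactly, so the skew is zero.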
Confirm the Node Information
Nodes are the workhorse of a Kubernetes cluster, so the number of control and worker nodes deployed on a cluster is vital for the administrator to be aware of.
-
Concisely confirm the node information.
This command displays an abridged listing showing the node names (NAME), node roles (ROLES), and the Kubernetes version (VERSION).
kubectl get nodes
Example Output:
NAME              STATUS   ROLES           AGE   VERSION
ocne-control-01   Ready    control-plane   14m   v1.28.3+3.el8
ocne-worker-01    Ready    <none>          13m   v1.28.3+3.el8
-
Return more detailed information about the nodes.
kubectl get nodes -o wide
Example Output:
NAME              STATUS   ROLES           AGE   VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                   CONTAINER-RUNTIME
ocne-control-01   Ready    control-plane   15m   v1.28.3+3.el8   10.0.0.150    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
ocne-worker-01    Ready    <none>          14m   v1.28.3+3.el8   10.0.0.160    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
This command shows more detailed information than the previous command by including these additional columns:
- Internal and external IP information (INTERNAL-IP and EXTERNAL-IP)
- Operating system information on the node (OS-IMAGE)
- Kernel version on the node (KERNEL-VERSION)
- Container runtime information and version (CONTAINER-RUNTIME)
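Because the plain kubectl get output is whitespace-separated, it is straightforward to post-process in a script. The following hypothetical Python sketch parses the abridged node listing from earlier (simple splitting works here only because none of these five columns contain spaces):

```python
# Parse the abridged `kubectl get nodes` example output into records.
# Plain split() is safe for these five columns; the wide output's OS-IMAGE
# column contains spaces and would need different handling.
output = """\
NAME              STATUS   ROLES           AGE   VERSION
ocne-control-01   Ready    control-plane   14m   v1.28.3+3.el8
ocne-worker-01    Ready    <none>          13m   v1.28.3+3.el8
"""

lines = output.strip().splitlines()
header = lines[0].split()
nodes = [dict(zip(header, line.split())) for line in lines[1:]]

# Worker nodes show ROLES as <none> in this cluster.
workers = [n["NAME"] for n in nodes if n["ROLES"] == "<none>"]
print(workers)
```

For anything beyond a quick sketch, kubectl's -o json or -o jsonpath output is the more robust option.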
Confirm the API Versions in the Cluster
The API server is a critical component of the Kubernetes cluster control plane. It exposes an HTTP API that allows different cluster components and external resources (such as kubectl) to communicate and orchestrate applications deployed onto the Kubernetes cluster.
-
Because the API versions change with each Kubernetes version, confirming which API versions are available may also be helpful, for example, to understand why a deployment is failing.
Note: The output on your Kubernetes cluster may be different.
kubectl api-versions
Example Output:
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
platform.verrazzano.io/v1alpha1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1
-
It is possible to look up more detailed information about the API resources available on the Kubernetes cluster, where the columns describe the following:
- NAME - the resource name
- SHORTNAMES - where listed, this represents an alternative short name to use with the kubectl command
- APIVERSION - the API group and version the resource belongs to
- NAMESPACED - whilst most Kubernetes resources are deployed into one of the namespaces (shown as true), not all are (shown as false)
- KIND - confirms the 'kind' of resource type the API is related to, for example, a Deployment kind (found in a deployment.yaml file)
kubectl api-resources
Example Output:
Note: output has been abbreviated for clarity.
NAME                     SHORTNAMES   APIVERSION                     NAMESPACED   KIND
bindings                              v1                             true         Binding
componentstatuses        cs           v1                             false        ComponentStatus
configmaps               cm           v1                             true         ConfigMap
endpoints                ep           v1                             true         Endpoints
events                   ev           v1                             true         Event
limitranges              limits       v1                             true         LimitRange
namespaces               ns           v1                             false        Namespace
nodes                    no           v1                             false        Node
persistentvolumeclaims   pvc          v1                             true         PersistentVolumeClaim
persistentvolumes        pv           v1                             false        PersistentVolume
pods                     po           v1                             true         Pod
...
clusterroles                          rbac.authorization.k8s.io/v1   false        ClusterRole
rolebindings                          rbac.authorization.k8s.io/v1   true         RoleBinding
roles                                 rbac.authorization.k8s.io/v1   true         Role
priorityclasses          pc           scheduling.k8s.io/v1           false        PriorityClass
csidrivers                            storage.k8s.io/v1              false        CSIDriver
csinodes                              storage.k8s.io/v1              false        CSINode
csistoragecapacities                  storage.k8s.io/v1              true         CSIStorageCapacity
storageclasses           sc           storage.k8s.io/v1              false        StorageClass
volumeattachments                     storage.k8s.io/v1              false        VolumeAttachment
NOTE: It is possible to display a shortened version of the same information by appending either the --namespaced=true or --namespaced=false flag to the kubectl api-resources command.
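The NAMESPACED column is what the --namespaced=true/false flag filters on. As a small illustration (using a few rows copied from the example table above, modelled as records), the filtering works like this:

```python
# A few rows from the example `kubectl api-resources` table above,
# modelled as records so the --namespaced filter can be illustrated.
resources = [
    {"name": "namespaces", "namespaced": False, "kind": "Namespace"},
    {"name": "nodes",      "namespaced": False, "kind": "Node"},
    {"name": "pods",       "namespaced": True,  "kind": "Pod"},
    {"name": "configmaps", "namespaced": True,  "kind": "ConfigMap"},
]

# --namespaced=true keeps resources that live inside a namespace...
namespaced = [r["name"] for r in resources if r["namespaced"]]
# ...and --namespaced=false keeps cluster-scoped resources.
cluster_scoped = [r["name"] for r in resources if not r["namespaced"]]
print(namespaced, cluster_scoped)
```

Knowing which resources are cluster-scoped explains why some kubectl get commands ignore the namespace flags.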
Lookup Namespaces and Security Information
Namespaces provide the mechanism for applications deployed onto a cluster to isolate their resources from each other.
-
List all of the namespaces present.
kubectl get namespaces -A
Example Output:
NAME                           STATUS   AGE
default                        Active   21m
externalip-validation-system   Active   21m
kube-node-lease                Active   21m
kube-public                    Active   21m
kube-system                    Active   21m
kubernetes-dashboard           Active   21m
ocne-modules                   Active   21m
-
List the certificates deployed onto the Kubernetes cluster.
Creating a 'CertificateSigningRequest' (CSR) is part of the signing process for an X.509 certificate. Whilst not part of an administrator's daily work routine, it is helpful to monitor the existing CSRs on a cluster so any redundant CSRs can be removed or updated.
kubectl get csr
Example Output:
NAME        AGE   SIGNERNAME                                    REQUESTOR                  REQUESTEDDURATION   CONDITION
csr-5zpkh   32m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:7qbee3    <none>              Approved,Issued
csr-mhmrn   33m   kubernetes.io/kube-apiserver-client-kubelet   system:node:ocne-control   <none>              Approved,Issued
Note: A garbage collection process executes periodically on the cluster to remove old CertificateSigningRequest resources. Therefore, if kubectl get csr has previously returned output but later returns a No Resources Found message, this indicates the garbage collector has executed. (Please see the Certificates and Certificate Signing Requests documentation for details.)
-
Lookup Secrets deployed on the Kubernetes cluster.
Kubernetes uses Secrets to hold confidential data, such as passwords, keys, or tokens. It helps by keeping them separate from the container image, thereby ensuring no confidential data remains with the application code contained within the container.
kubectl get secrets -A
Example Output:
NAMESPACE              NAME                              TYPE                            DATA   AGE
kube-system            bootstrap-token-7qbee3            bootstrap.kubernetes.io/token   7      44m
kube-system            bootstrap-token-iqfcxc            bootstrap.kubernetes.io/token   6      45m
kube-system            bootstrap-token-iqujbx            bootstrap.kubernetes.io/token   4      45m
kube-system            kubeadm-certs                     Opaque                          8      45m
kubernetes-dashboard   kubernetes-dashboard-certs        Opaque                          0      44m
kubernetes-dashboard   kubernetes-dashboard-csrf         Opaque                          1      44m
kubernetes-dashboard   kubernetes-dashboard-key-holder   Opaque                          2      44m
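It is worth remembering that Secret values returned by the API (for example, with kubectl get secret -o yaml) are base64-encoded rather than encrypted. The following Python sketch, using a made-up token value purely for illustration, shows the round trip:

```python
import base64

# Secret data values are base64-encoded, NOT encrypted. This round trip
# uses a made-up token value purely for illustration.
plaintext = b"s3cr3t-token"
encoded = base64.b64encode(plaintext).decode()   # what the API stores/returns
decoded = base64.b64decode(encoded)              # what the application sees
print(encoded, "->", decoded.decode())
```

Because the encoding is reversible by anyone, access to Secrets should still be restricted with RBAC.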
Lookup Services and Endpoints
Services within a Kubernetes cluster represent the endpoint(s) used to access applications deployed onto the cluster instead of connecting directly to a container. This allows an individual pod to terminate, or be replaced, without interrupting the end user's interaction with the application.
Endpoints track and map the IP addresses of the objects created for a Service, ensuring all traffic is routed to the correct application. Kubernetes manages Endpoints automatically; however, if debugging an issue, this is how to look up their details.
-
List all the Services deployed.
kubectl get services -A
Example Output:
NAMESPACE                      NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                        kubernetes                              ClusterIP   10.96.0.1        <none>        443/TCP                  24m
externalip-validation-system   externalip-validation-webhook-service   ClusterIP   10.110.142.187   <none>        443/TCP                  23m
kube-system                    kube-dns                                ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   24m
kubernetes-dashboard           kubernetes-dashboard                    ClusterIP   10.102.48.239    <none>        443/TCP                  23m
ocne-modules                   verrazzano-module-operator              ClusterIP   10.98.175.195    <none>        9100/TCP                 23m
-
List any Endpoints on the Kubernetes cluster.
kubectl get endpoints -A
Example Output:
NAMESPACE                      NAME                                    ENDPOINTS                                                 AGE
default                        kubernetes                              10.0.0.150:6443                                           26m
externalip-validation-system   externalip-validation-webhook-service   10.244.1.5:9443                                           25m
kube-system                    kube-dns                                10.244.1.2:53,10.244.1.3:53,10.244.1.2:53 + 3 more...     25m
kubernetes-dashboard           kubernetes-dashboard                    10.244.1.4:8443                                           25m
ocne-modules                   verrazzano-module-operator              10.0.0.160:9100                                           25m
-
List all the Pods deployed on the Kubernetes cluster.
kubectl get pods -A
Example Output:
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
externalip-validation-system   externalip-validation-webhook-7f859947f5-49bjj   1/1     Running   0          26m
kube-system                    coredns-5d7b65fffd-fg4c2                         1/1     Running   0          26m
kube-system                    coredns-5d7b65fffd-h8mkb                         1/1     Running   0          26m
kube-system                    etcd-ocne-control-01                             1/1     Running   0          27m
kube-system                    kube-apiserver-ocne-control-01                   1/1     Running   0          27m
kube-system                    kube-controller-manager-ocne-control-01          1/1     Running   0          27m
kube-system                    kube-flannel-ds-gxg8k                            1/1     Running   0          26m
kube-system                    kube-flannel-ds-skl7g                            1/1     Running   0          26m
kube-system                    kube-proxy-8bnrb                                 1/1     Running   0          26m
kube-system                    kube-proxy-mx4p4                                 1/1     Running   0          26m
kube-system                    kube-scheduler-ocne-control-01                   1/1     Running   0          27m
kubernetes-dashboard           kubernetes-dashboard-547d4b479c-lq442            1/1     Running   0          26m
ocne-modules                   verrazzano-module-operator-9bb46ff99-xc2gx       1/1     Running   0          26m
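The relationship between the Service and Endpoints listings above can be summarized simply: for each Service, an Endpoints object of the same name records the backend addresses traffic is forwarded to. A simplified Python sketch (values copied from the kube-dns example above) models that lookup:

```python
# Simplified model of the Service -> Endpoints relationship shown above:
# the Endpoints object shares the Service's name and lists backend addresses.
services = {"kube-dns": {"cluster_ip": "10.96.0.10", "namespace": "kube-system"}}
endpoints = {"kube-dns": ["10.244.1.2:53", "10.244.1.3:53"]}

def backends(service_name):
    """Return the backend addresses a Service forwards to (empty if none)."""
    return endpoints.get(service_name, [])

print(backends("kube-dns"))
```

A Service with an empty backends list is a common sign that its label selector matches no running pods.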
Lookup Pod Information
-
Retrieve information about any pod deployed on the cluster.
The previous kubectl get pods -A command returns a list of all the pods currently running on the cluster. Because this is a fresh install of Oracle Cloud Native Environment, only the 'system' pods are running. However, kubectl can retrieve more information about any of these deployed pods. All that needs noting are the NAMESPACE and NAME values. Because many pods are dynamically assigned names, these values may change from those shown when you execute the same command. However, the etcd pod's name should remain the same, so this is the example used for the lab.
kubectl describe pods etcd-ocne-control-01 -n kube-system
Note: the -n kube-system flag directs kubectl to look inside the kube-system namespace for the etcd-ocne-control-01 pod.
Example Output:
Name:                 etcd-ocne-control-01
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 ocne-control-01/10.0.0.150
Start Time:           Thu, 08 Feb 2024 08:36:39 +0000
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.0.0.150:2379
                      kubernetes.io/config.hash: 1d50be79b2fd4448e382bcc1e9ce30c6
                      kubernetes.io/config.mirror: 1d50be79b2fd4448e382bcc1e9ce30c6
                      kubernetes.io/config.seen: 2024-02-08T08:36:39.593748228Z
                      kubernetes.io/config.source: file
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   10.0.0.150
IPs:
  IP:           10.0.0.150
Controlled By:  Node/ocne-control-01
Containers:
  etcd:
    Container ID:  cri-o://6839321f2ce71639b716b07847a7ac7bb373e9c43b875eae46936d91d4b66ded
    Image:         container-registry.oracle.com/olcne/etcd:3.5.9
    Image ID:      46569e5fbb9f36220ac138811ff51d644cc5bef74b1bdcd2706cb397381d5d80
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://10.0.0.150:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --experimental-initial-corrupt-check=true
      --experimental-watch-progress-notify-interval=5s
      --initial-advertise-peer-urls=https://10.0.0.150:2380
      --initial-cluster=ocne-control-01=https://10.0.0.150:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://10.0.0.150:2379
      --listen-metrics-urls=http://0.0.0.0:2381
      --listen-peer-urls=https://10.0.0.150:2380
      --name=ocne-control-01
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 08 Feb 2024 08:36:33 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     100Mi
    Liveness:     http-get http://0.0.0.0:2381/health%3Fexclude=NOSPACE&serializable=true delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get http://0.0.0.0:2381/health%3Fserializable=false delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:        Burstable
Node-Selectors:   <none>
Tolerations:      :NoExecute op=Exists
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulled   28m   kubelet  Container image "container-registry.oracle.com/olcne/etcd:3.5.9" already present on machine
  Normal  Created  28m   kubelet  Created container etcd
  Normal  Started  28m   kubelet  Started container etcd
For now, this serves only as an illustration to show how easy it is to use kubectl at the command line to return a large amount of detailed information about deployments. This ability will eventually become one of several tools the administrator will use to manage and troubleshoot the Kubernetes cluster they’re responsible for.
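When scripting a quick health check rather than reading kubectl describe output by eye, the READY column ("ready containers/total containers") from kubectl get pods is often enough to spot trouble. A hypothetical Python sketch using rows copied from the earlier example:

```python
# Flag pods that are not fully ready or not Running, using rows copied
# from the earlier `kubectl get pods -A` example (all healthy here).
rows = [
    ("kube-system", "etcd-ocne-control-01", "1/1", "Running"),
    ("kube-system", "coredns-5d7b65fffd-fg4c2", "1/1", "Running"),
    ("ocne-modules", "verrazzano-module-operator-9bb46ff99-xc2gx", "1/1", "Running"),
]

def not_ready(pod_rows):
    """Return "namespace/name" for pods that are not fully ready and Running."""
    bad = []
    for namespace, name, ready, status in pod_rows:
        ready_count, total = map(int, ready.split("/"))
        if ready_count < total or status != "Running":
            bad.append(f"{namespace}/{name}")
    return bad

print(not_ready(rows))  # an empty list means everything is healthy
```

Any pod this flags is a good candidate for a follow-up kubectl describe.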
Summary
These steps conclude the brief introduction to the kubectl command-line tool, demonstrating its use to query a Kubernetes cluster and discover a wealth of information about the services currently deployed. However, kubectl is capable of more than simply retrieving information; it also manages application deployments and operations on the Kubernetes cluster.
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
F81027-06
June 2024