2 Installing Calico
This chapter discusses how to install the Calico module in Oracle Cloud Native Environment. This chapter also shows you how to install the Kubernetes Calico CNI when you create the Kubernetes module.
Prerequisites
This section contains the prerequisite information you need to set up the Tigera Calico operator.
Disabling firewalld Service
Disable the firewalld service on each Kubernetes node:
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
Important:
As disabling the firewalld service removes the network protection provided by this service, you must implement Calico network policies to secure the Kubernetes cluster. For information on how to secure the cluster using Calico, see the upstream Calico documentation.
Updating Proxy Configuration
If you're using a proxy server in the environment, edit the CRI-O proxy configuration file and add the Kubernetes service IP (the default is 10.96.0.1) to the NO_PROXY variable. For example, on each Kubernetes node, edit the /etc/systemd/system/crio.service.d/proxy.conf file:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=mydomain.example.com,10.96.0.1"
Reload the configuration file and restart the crio service:
sudo systemctl daemon-reload
sudo systemctl restart crio.service
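Before restarting the service, you might want to confirm the NO_PROXY entry includes the Kubernetes service IP. The following is a minimal sketch, not part of the product tooling; the sample line stands in for the entry in proxy.conf, which you could read with grep NO_PROXY /etc/systemd/system/crio.service.d/proxy.conf:

```shell
# Sketch: check that a NO_PROXY entry includes the Kubernetes service IP.
# The sample line below stands in for the real entry in proxy.conf.
line='Environment="NO_PROXY=mydomain.example.com,10.96.0.1"'
case "$line" in
  *10.96.0.1*) echo "NO_PROXY includes the Kubernetes service IP" ;;
  *)           echo "NO_PROXY is missing the Kubernetes service IP" ;;
esac
```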
Note:
You don't need to perform this step if you're using the olcnectl provision command to perform a quick installation. The proxy configuration is set up automatically when you use that installation method and provide proxy information.
Kubernetes Module
To install the Calico module, the Kubernetes module must be created and installed with no CNI set. When you create the Kubernetes module, set the --pod-network none option as part of the olcnectl module create command. For example:
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--pod-network none \
...
Important:
This isn't required if you deploy the Tigera Calico operator as the native Kubernetes CNI. In that case, set this option to --pod-network calico instead.
Creating a Calico Configuration File
You can optionally install the Calico module with a configuration file. A Calico configuration file specifies any modifications to the default configuration of the Tigera Calico operator. This YAML file contains the spec part of an operator.tigera.io/v1/Installation resource. The file must be available on the operator node.
Note:
You can't use a Calico configuration file if you deploy the Tigera Calico operator as a native Kubernetes CNI when you create the Kubernetes module with the --pod-network calico option.
For information on the Calico configuration file, see the upstream Calico documentation.
An example Calico configuration file is:
installation:
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - cidr: 198.51.100.0/24
      encapsulation: VXLAN
  registry: container-registry.oracle.com
  imagePath: olcne
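As a further illustration, a configuration file can also switch the pod network to BGP routing. The following sketch (the CIDR is illustrative, and the encapsulation choice is an assumption about your network) enables BGP and uses IPIP encapsulation only when traffic crosses subnet boundaries:

```yaml
# Sketch: illustrative Calico configuration enabling BGP routing.
# IPIPCrossSubnet encapsulates traffic only between nodes on different subnets.
installation:
  cni:
    type: Calico
  calicoNetwork:
    bgp: Enabled
    ipPools:
    - cidr: 198.51.100.0/24
      encapsulation: IPIPCrossSubnet
  registry: container-registry.oracle.com
  imagePath: olcne
```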
Deploying the Calico CNI
The easiest way to install Calico is to set the Kubernetes CNI to Calico when you create the Kubernetes module. This installs the Tigera Calico operator into the Kubernetes cluster with the default configuration. You don't need to install the Calico module with this method.
Before you set Calico as the Kubernetes CNI, perform the prerequisites to disable the firewalld service and update any proxy configuration for CRI-O, as discussed in Prerequisites.
To set Calico as the Kubernetes CNI, create a Kubernetes module using the --pod-network calico option of the olcnectl module create --module kubernetes command. This option sets Calico as the Kubernetes CNI for pods in the Kubernetes cluster. The name of the Kubernetes module in this example is mycluster.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--pod-network calico \
...
For more information on creating a Kubernetes module, see Kubernetes Module.
Deploying the Calico Module
The Calico module lets you use a configuration file to configure the Tigera Calico operator. If you don't use a configuration file, the installation is the same as when you use the native Calico CNI installation option when you create the Kubernetes module. If you don't want to change the operator configuration, consider installing Calico using the Calico CNI method instead, as fewer steps are required.
To use this method, you must create the Kubernetes module using the --pod-network none option. This option sets no Kubernetes CNI for pods in the cluster. You then install the Calico module to set the CNI.
For the syntax to use to create a Calico module, see the calico option of the olcnectl module create command in Platform Command-Line Interface.
To deploy the Calico module:
1. Create and install a Kubernetes module using the --pod-network none option of the olcnectl module create --module kubernetes command. This option sets no Kubernetes CNI for pods in the cluster. The name of the Kubernetes module in this example is mycluster. For example:

   olcnectl module create \
   --environment-name myenvironment \
   --module kubernetes \
   --name mycluster \
   --pod-network none \
   ...

   Important:
   When you install the Kubernetes module with the --pod-network none option, all kube-system pods are marked as pending until you install the Calico module. When the Calico module is installed, these kube-system pods are marked as running.

2. Create a Calico module and associate it with the Kubernetes module named mycluster using the --calico-kubernetes-module option. In this example, the Calico module is named mycalico.

   olcnectl module create \
   --environment-name myenvironment \
   --module calico \
   --name mycalico \
   --calico-kubernetes-module mycluster

   The --module option sets the module type to create, which is calico. You define the name of the Calico module using the --name option, which in this case is mycalico.

   The --calico-kubernetes-module option sets the name of the Kubernetes module.

   An optional --calico-installation-config option sets the location of a Calico configuration file. This file must be available on the operator node under the provided path. For information on creating this configuration file, see Prerequisites.

   If you don't include all the required options when adding the module, you're prompted to provide them.

3. Use the olcnectl module install command to install the Calico module. For example:

   olcnectl module install \
   --environment-name myenvironment \
   --name mycalico

   You can optionally use the --log-level option to set the level of logging displayed in the command output. By default, error messages are displayed. For example, you can set the logging level to show all messages when you include:

   --log-level debug

   The log messages are also saved as an operation log. You can view operation logs as commands are running, or when they've completed. For more information on using operation logs, see Platform Command-Line Interface.
The Calico module is deployed into the Kubernetes cluster.
Verifying the Calico Deployment
This section contains information on how to verify the installation of Calico, with either the Kubernetes CNI installation option or the Calico module.
Verifying the Calico Module
If you installed Calico using the Calico module, you can verify the module is deployed using the olcnectl module instances command on the operator node. For example:
olcnectl module instances \
--environment-name myenvironment
The output looks similar to:
INSTANCE MODULE STATE
mycalico calico installed
mycluster kubernetes installed
control1.example.com node installed
...
Note the entry for calico in the MODULE column is in the installed state.
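If you're scripting this check, the state can be extracted from the command output. The following is a small sketch, assuming the tabular format shown above; the sample text stands in for a live olcnectl module instances call:

```shell
# Sketch: extract the calico module state from `olcnectl module instances`
# output. The sample text below stands in for:
#   olcnectl module instances --environment-name myenvironment
instances='INSTANCE                 MODULE      STATE
mycalico                 calico      installed
mycluster                kubernetes  installed'
state=$(printf '%s\n' "$instances" | awk '$2 == "calico" {print $3}')
echo "calico module state: $state"
```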
In addition, use the olcnectl module report command to review information about the module. For example, use the following command to review the Calico module named mycalico in myenvironment:
olcnectl module report \
--environment-name myenvironment \
--name mycalico \
--children
For more information on the syntax for the olcnectl module report command, see Platform Command-Line Interface.
Verifying the Tigera Calico Operator
The Tigera Calico operator is deployed when you use the Kubernetes Calico CNI installation option and with the Calico module installation method. This section shows you some areas you can check to verify the Calico installation and learn about the configuration.
Tigera Calico Operator Status
You can get information about the Tigera Calico operator status using the kubectl get tigerastatus command:
kubectl get tigerastatus
The output shows the status of the operator components, and looks similar to:
NAME AVAILABLE PROGRESSING DEGRADED SINCE
apiserver True False False 8m24s
calico True False False 8m39s
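To script a health check, you can flag any component whose DEGRADED column is True. The following is a small sketch, assuming the tabular format shown above; the sample text stands in for a live kubectl get tigerastatus call:

```shell
# Sketch: report any Tigera component where the DEGRADED column is True.
# The sample text below stands in for `kubectl get tigerastatus`.
status='NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      8m24s
calico      True        False         False      8m39s'
degraded=$(printf '%s\n' "$status" | awk 'NR > 1 && $4 == "True" {print $1}')
if [ -z "$degraded" ]; then
  echo "no degraded components"
else
  echo "degraded: $degraded"
fi
```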
IP Pools
You can get information about the default IP pools that are set up using the kubectl get ippools command:
kubectl get ippools
The output looks similar to:
NAME CREATED AT
default-ipv4-ippool ...
To get more information about the IP Pool, use:
kubectl describe ippools default-ipv4-ippool
The output looks similar to:
Name: default-ipv4-ippool
Namespace:
Labels: <none>
Annotations: <none>
API Version: projectcalico.org/v3
Kind: IPPool
Metadata:
Creation Timestamp: ...
Resource Version: 1112
UID: fd04d1d2-b5c9-4feb-9385-2b423c4dd67f
Spec:
Allowed Uses:
Workload
Tunnel
Block Size: 26
Cidr: 10.244.0.0/16
Ipip Mode: Never
Nat Outgoing: true
Node Selector: all()
Vxlan Mode: Always
Events: <none>
Network Policies
To get information on the network policies that are set up, use:
kubectl get networkpolicies --all-namespaces
The output looks similar to:
NAMESPACE NAME POD-SELECTOR AGE
calico-apiserver allow-apiserver apiserver=true 66m
To get more information about the network policies, use:
kubectl describe networkpolicies --all-namespaces
The output looks similar to:
Name: allow-apiserver
Namespace: calico-apiserver
Created on: <date> 05:27:30 +0000 GMT
Labels: <none>
Annotations: <none>
Spec:
PodSelector: apiserver=true
Allowing ingress traffic:
To Port: 5443/TCP
From: <any> (traffic not restricted by source)
Not affecting egress traffic
Policy Types: Ingress
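Because disabling the firewalld service removes host-level filtering, you're expected to add your own Calico policies, as noted in Prerequisites. The following is a sketch of a default-deny style global policy modeled on the upstream Calico documentation; the policy name and the namespaces excluded by the selector are illustrative assumptions, so adapt the policy to the environment before applying it:

```yaml
# Sketch: global default-deny policy that exempts system namespaces and
# still allows DNS lookups. Names and selectors are illustrative.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "calico-apiserver"}
  types:
  - Ingress
  - Egress
  egress:
  # Allow DNS so pods can still resolve service names.
  - action: Allow
    protocol: UDP
    destination:
      selector: k8s-app == "kube-dns"
      ports:
      - 53
```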
Installation Configuration
You can see the installation configuration for the Tigera Calico operator using:
kubectl get installation -o yaml
The output looks similar to:
apiVersion: v1
items:
- apiVersion: operator.tigera.io/v1
kind: Installation
...
spec:
calicoNetwork:
bgp: Disabled
hostPorts: Enabled
ipPools:
- blockSize: 26
cidr: 10.244.0.0/16
disableBGPExport: false
encapsulation: VXLAN
natOutgoing: Enabled
nodeSelector: all()
linuxDataplane: Iptables
multiInterfaceMode: None
nodeAddressAutodetectionV4:
firstFound: true
cni:
ipam:
type: Calico
type: Calico
controlPlaneReplicas: 2
flexVolumePath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
imagePath: olcne
kubeletVolumePluginPath: /var/lib/kubelet
nodeUpdateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
nonPrivileged: Disabled
registry: container-registry.oracle.com/
variant: Calico
...