Create OCNE Clusters Running on VMware vSphere

Configure Oracle Cloud Native Environment self-managed clusters running on VMware vSphere

The Cluster API project provides a standard set of Kubernetes-style APIs for cluster management. Verrazzano currently provides official support for Cluster API only when provisioning OCNE and OKE clusters on OCI.

However, you can experimentally use the capabilities of the Cluster API project to deploy OCNE clusters directly on VMware vSphere.

For more information, see the Cluster API and Cluster API Provider vSphere documentation.

Before You Begin

If you already have a vSphere environment, you can skip Set Up a VMware Software-Defined Data Center and start with Prepare the VM Environment. Verify that your environment meets the requirements specified in Cluster API Provider vSphere: Install Requirements.

Otherwise, create a vSphere environment. We recommend using Oracle Cloud VMware Solution, as described in Set Up a VMware Software-Defined Data Center. It deploys a VMware software-defined data center (SDDC) in Oracle Cloud Infrastructure (OCI) and integrates it with the other Oracle services running in Oracle Cloud. The solution was developed in partnership with VMware and builds an environment that follows VMware's recommended best practices.

For more information about Oracle Cloud VMware Solution, see Deploy a highly available, VMware-based SDDC to the cloud in the Oracle Architecture Center.

Set Up a VMware Software-Defined Data Center

Use Oracle Cloud VMware Solution to create a VMware SDDC with minimal effort.

  1. Set up a virtual cloud network (VCN). You can either use an existing VCN or have Oracle Cloud VMware Solution create its own VCN as part of the SDDC provisioning process. If you use an existing VCN, verify that it meets the requirements defined in Prepare for deployment in the Oracle Architecture Center.

  2. Deploy the SDDC. To request a new VMware SDDC in OCI, follow the steps in Deploy the SDDC in the Oracle Architecture Center.

  3. Verify that the various components were created successfully by following the steps in Monitor the SDDC creation process in the Oracle Architecture Center.

Prepare the VM Environment

  1. Download an Oracle Linux 8 ISO image from Oracle Linux Installation Media.

  2. Upload the Oracle Linux 8 ISO image to vSphere using the steps in Upload ISO Image Installation Media for a Guest Operating System in the vSphere documentation. (A CLI-based alternative for steps 2, 6, and 7 is sketched after this list.)

  3. Deploy a VM by following the steps in Create a Virtual Machine with the New Virtual Machine Wizard in the vSphere documentation.

  4. Install cloud-init on the VM.

    $ sudo yum install -y cloud-init
    

  5. Initialize cloud-init.

    $ cloud-init init --local
    
    When cloud-init is configured successfully, it returns a message similar to the following:
    cloud-init v. 20.1.0011 running 'init-local' at Fri, 01 Apr 2022 01:26:11 +0000. Up 38.70 seconds.
    

  6. Stop the VM.

  7. Convert the VM to a template named OL8-Base-Template by following the steps in Clone a Virtual Machine to a Template in the vSphere documentation.
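
If you prefer a command-line workflow for steps 2, 6, and 7, the govc CLI (part of the govmomi project, which is not covered by this guide) can perform the same operations. This is a minimal sketch under the assumption that govc is installed and that GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at your vCenter; the ISO, datastore, and VM names are placeholders:

$ # Upload the ISO to a datastore (step 2)
$ govc datastore.upload -ds "<vSAN-Datastore>" ./<oracle-linux-8-image>.iso isos/<oracle-linux-8-image>.iso
$ # Power off the prepared VM (step 6)
$ govc vm.power -off "<vm-name>"
$ # Convert the VM to a template (step 7)
$ govc vm.markastemplate "<vm-name>"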

Set Up the Management Cluster

Cluster API requires an initial cluster as the starting point from which it deploys its resources.

  1. Install kind by following the steps in Installation in the kind documentation.

  2. Create a Kubernetes cluster using kind. This cluster must be reachable from the VMware SDDC. Follow the steps in Quick Start: Install and/or configure a Kubernetes cluster in The Cluster API Book.

  3. Install the clusterctl CLI tool, which manages the lifecycle operations of a Cluster API management cluster. Follow the steps in Quick Start: Install clusterctl in The Cluster API Book.

  4. Install the Verrazzano CLI tool by following the steps in CLI Setup.

  5. Install Verrazzano on the cluster using either the dev or prod profile. Follow the steps in Install with the CLI.

  6. On the cluster, set the following vSphere environment variables, updating the values to reflect your environment. (One way to obtain the vCenter thumbprint for VSPHERE_TLS_THUMBPRINT is sketched after this list.)

    $ export VSPHERE_PASSWORD="<vmware-password>"
    $ export VSPHERE_USERNAME="administrator@vsphere.local"
    $ export VSPHERE_SERVER="<IP address or FQDN>"
    $ export VSPHERE_DATACENTER="<SDDC-Datacenter>"
    $ export VSPHERE_DATASTORE="<vSAN-Datastore>"
    $ export VSPHERE_NETWORK="workload"
    $ export VSPHERE_RESOURCE_POOL="*/Resources/Workload"
    $ export VSPHERE_FOLDER="<folder-name>"
    $ export VSPHERE_TEMPLATE="OL8-Base-Template"
    $ export VSPHERE_SSH_AUTHORIZED_KEY="<Public-SSH-Authorized-Key>"
    $ export VSPHERE_TLS_THUMBPRINT="<SHA1 thumbprint of vCenter certificate>"
    $ export VSPHERE_STORAGE_POLICY=""
    $ export CONTROL_PLANE_ENDPOINT_IP="<IP address or FQDN>"
    
    For details about the environment variable values, see Configuring and installing Cluster API Provider vSphere in a management cluster in the Cluster API Provider vSphere documentation.

  7. Install Cluster API Provider vSphere and initialize the management cluster.

    $ clusterctl init -n verrazzano-capi -i vsphere
    

    clusterctl reports when the management cluster has been initialized successfully.
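
Two of the values above are easy to get wrong. The following sketch shows one way to read the SHA1 thumbprint of the vCenter certificate used for VSPHERE_TLS_THUMBPRINT in step 6, and a quick check that the provider pods started after step 7; the openssl approach is an assumption and not part of the linked documentation:

$ # SHA1 thumbprint of the vCenter certificate (step 6); copy the colon-separated value
$ openssl s_client -connect "${VSPHERE_SERVER}:443" </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1
$ # Confirm the provider pods are running after initialization (step 7)
$ kubectl get pods -n verrazzano-capi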

Create the Managed Cluster

Cluster API uses a cluster template to deploy a predefined set of Cluster API objects and create the managed cluster.

  1. Copy the following cluster template and save it locally as vsphere-capi.yaml.

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
      name: ${CLUSTER_NAME}
      namespace: ${NAMESPACE}
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
            - ${POD_CIDR=192.168.0.0/16}
        serviceDomain: cluster.local
        services:
          cidrBlocks:
            - ${CLUSTER_CIDR=10.128.0.0/12}
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
        kind: OCNEControlPlane
        name: ${CLUSTER_NAME}-control-plane
        namespace: ${NAMESPACE}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereCluster
        name: ${CLUSTER_NAME}
        namespace: ${NAMESPACE}
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    metadata:
      name: ${CLUSTER_NAME}
      namespace: ${NAMESPACE}
    spec:
      controlPlaneEndpoint:
        host: ${CONTROL_PLANE_ENDPOINT_IP}
        port: 6443
      identityRef:
        kind: Secret
        name: ${CLUSTER_NAME}
      server: ${VSPHERE_SERVER}
      thumbprint: '${VSPHERE_TLS_THUMBPRINT}'
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    metadata:
      name: ${CLUSTER_NAME}-control-plane
      namespace: ${NAMESPACE}
    spec:
      template:
        spec:
          cloneMode: linkedClone
          datacenter: ${VSPHERE_DATACENTER=oci-w01dc}
          datastore: ${VSPHERE_DATASTORE=vsanDatastore}
          diskGiB: ${VSPHERE_DISK=200}
          folder: ${VSPHERE_FOLDER=CAPI}
          memoryMiB: ${VSPHERE_MEMORY=32384}
          network:
            devices:
              - dhcp4: true
                networkName: "${VSPHERE_NETWORK=workload}"
          numCPUs: ${VSPHERE_CPU=4}
          os: Linux
          resourcePool: '${VSPHERE_RESOURCE_POOL=*/Resources/Workload}'
          server: '${VSPHERE_SERVER=11.0.11.130}'
          storagePolicyName: ${VSPHERE_STORAGE_POLICY=""}
          template: ${VSPHERE_TEMPLATE=OL8-Base-Template}
          thumbprint: '${VSPHERE_TLS_THUMBPRINT}'
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    metadata:
      name: ${CLUSTER_NAME}-md-0
      namespace: ${NAMESPACE}
    spec:
      template:
        spec:
          cloneMode: linkedClone
          datacenter: ${VSPHERE_DATACENTER=oci-w01dc}
          datastore: ${VSPHERE_DATASTORE=vsanDatastore}
          diskGiB: ${VSPHERE_DISK=200}
          folder: ${VSPHERE_FOLDER=CAPI}
          memoryMiB: ${VSPHERE_MEMORY=32384}
          network:
            devices:
              - dhcp4: true
                networkName: "${VSPHERE_NETWORK=workload}"
          numCPUs: ${VSPHERE_CPU=4}
          os: Linux
          resourcePool: '${VSPHERE_RESOURCE_POOL=*/Resources/Workload}'
          server: '${VSPHERE_SERVER=11.0.11.130}'
          storagePolicyName: ${VSPHERE_STORAGE_POLICY=""}
          template: ${VSPHERE_TEMPLATE=OL8-Base-Template}
          thumbprint: '${VSPHERE_TLS_THUMBPRINT}'
    ---
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: OCNEControlPlane
    metadata:
      name: ${CLUSTER_NAME}-control-plane
      namespace: ${NAMESPACE}
    spec:
      moduleOperator:
        enabled: true
      verrazzanoPlatformOperator:
        enabled: true
      controlPlaneConfig:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
            certSANs:
              - localhost
              - 127.0.0.1
          dns:
            imageRepository: ${OCNE_IMAGE_REPOSITORY=container-registry.oracle.com}/${OCNE_IMAGE_PATH=olcne}
            imageTag: ${DNS_TAG=v1.9.3}
          etcd:
            local:
              imageRepository: ${OCNE_IMAGE_REPOSITORY=container-registry.oracle.com}/${OCNE_IMAGE_PATH=olcne}
              imageTag: ${ETCD_TAG=3.5.6}
          controllerManager:
            extraArgs:
              cloud-provider: external
          networking: {}
          scheduler: {}
          imageRepository: ${OCNE_IMAGE_REPOSITORY=container-registry.oracle.com}/${OCNE_IMAGE_PATH=olcne}
        files:
          - content: |
              apiVersion: v1
              kind: Pod
              metadata: 
                creationTimestamp: null
                name: kube-vip
                namespace: kube-system
              spec: 
                containers: 
                - args: 
                  - manager
                  env: 
                  - name: cp_enable
                    value: "true"
                  - name: vip_interface
                    value: ""
                  - name: address
                    value: ${CONTROL_PLANE_ENDPOINT_IP}
                  - name: port
                    value: "6443"
                  - name: vip_arp
                    value: "true"
                  - name: vip_leaderelection
                    value: "true"
                  - name: vip_leaseduration
                    value: "15"
                  - name: vip_renewdeadline
                    value: "10"
                  - name: vip_retryperiod
                    value: "2"
                  image: ghcr.io/kube-vip/kube-vip:v0.5.11
                  imagePullPolicy: IfNotPresent
                  name: kube-vip
                  resources: {}
                  securityContext: 
                    capabilities: 
                      add: 
                      - NET_ADMIN
                      - NET_RAW
                  volumeMounts: 
                  - mountPath: /etc/kubernetes/admin.conf
                    name: kubeconfig
                hostAliases: 
                - hostnames: 
                  - kubernetes
                  ip: 127.0.0.1
                hostNetwork: true
                volumes: 
                - hostPath: 
                    path: /etc/kubernetes/admin.conf
                    type: FileOrCreate
                  name: kubeconfig
              status: {}
            owner: root:root
            path: /etc/kubernetes/manifests/kube-vip.yaml
        initConfiguration:
          nodeRegistration:
            criSocket: /var/run/crio/crio.sock
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            criSocket: /var/run/crio/crio.sock
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
        verbosity: 9
        preOCNECommands:
          - hostnamectl set-hostname "{{ ds.meta_data.hostname }}"
          - echo "::1         ipv6-localhost ipv6-loopback localhost6 localhost6.localdomain6"
            >/etc/hosts
          - echo "127.0.0.1   {{ ds.meta_data.hostname }} {{ local_hostname }} localhost
            localhost.localdomain localhost4 localhost4.localdomain4" >>/etc/hosts
        users:
          - name: opc
            sshAuthorizedKeys:
              - ${VSPHERE_SSH_AUTHORIZED_KEY}
            sudo: ALL=(ALL) NOPASSWD:ALL
      machineTemplate:
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: VSphereMachineTemplate
          name: ${CLUSTER_NAME}-control-plane
          namespace: ${NAMESPACE}
      replicas: ${CONTROL_PLANE_MACHINE_COUNT=1}
      version: ${KUBERNETES_VERSION=v1.26.6}
    ---
    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1
    kind: OCNEConfigTemplate
    metadata:
      name: ${CLUSTER_NAME}-md-0
      namespace: ${NAMESPACE}
    spec:
      template:
        spec:
          clusterConfiguration:
            imageRepository: ${OCNE_IMAGE_REPOSITORY=container-registry.oracle.com}/${OCNE_IMAGE_PATH=olcne}
          joinConfiguration:
            nodeRegistration:
              kubeletExtraArgs:
                cloud-provider: external
              name: '{{ local_hostname }}'
          verbosity: 9
          preOCNECommands:
            - hostnamectl set-hostname "{{ ds.meta_data.hostname }}"
            - echo "::1         ipv6-localhost ipv6-loopback localhost6 localhost6.localdomain6"
              >/etc/hosts
            - echo "127.0.0.1   {{ ds.meta_data.hostname }} {{ local_hostname }} localhost
              localhost.localdomain localhost4 localhost4.localdomain4" >>/etc/hosts
          users:
            - name: opc
              sshAuthorizedKeys:
                - ${VSPHERE_SSH_AUTHORIZED_KEY}
              sudo: ALL=(ALL) NOPASSWD:ALL
    ---
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
      name: ${CLUSTER_NAME}-md-0
      namespace: ${NAMESPACE}
    spec:
      clusterName: ${CLUSTER_NAME}
      replicas: ${NODE_MACHINE_COUNT=3}
      selector:
        matchLabels: {}
      template:
        metadata:
          labels:
            cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
        spec:
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1
              kind: OCNEConfigTemplate
              name: ${CLUSTER_NAME}-md-0
          clusterName: ${CLUSTER_NAME}
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: VSphereMachineTemplate
            name: ${CLUSTER_NAME}-md-0
          version: ${KUBERNETES_VERSION=v1.26.6}
    ---
    apiVersion: addons.cluster.x-k8s.io/v1beta1
    kind: ClusterResourceSet
    metadata:
      name: ${CLUSTER_NAME}-crs-0
      namespace: ${NAMESPACE}
    spec:
      clusterSelector:
        matchLabels:
          cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
      resources:
        - kind: Secret
          name: ${CLUSTER_NAME}-vsphere-csi-controller
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-controller-role
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-controller-binding
        - kind: Secret
          name: ${CLUSTER_NAME}-csi-vsphere-config
        - kind: ConfigMap
          name: csi.vsphere.vmware.com
        - kind: ConfigMap
          name: vsphere-csi-controller-sa
        - kind: ConfigMap
          name: vsphere-csi-node-sa
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node-cluster-role
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node-cluster-role-binding
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node-role
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node-binding
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-internal-feature-states.csi.vsphere.vmware.com
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-controller-service
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-controller
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-vsphere-csi-node-windows
        - kind: Secret
          name: ${CLUSTER_NAME}-cloud-controller-manager
        - kind: Secret
          name: ${CLUSTER_NAME}-cloud-provider-vsphere-credentials
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-cpi-manifests
      strategy: Reconcile
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ${CLUSTER_NAME}
      namespace: ${NAMESPACE}
    stringData:
      password: ${VSPHERE_PASSWORD}
      username: ${VSPHERE_USERNAME}
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-controller
      namespace: ${NAMESPACE}
    data:
      data: |
        apiVersion: v1
        kind: ServiceAccount
        metadata: 
          name: vsphere-csi-controller
          namespace: kube-system
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata: 
          name: vsphere-csi-controller-role
        rules: 
        - apiGroups: [""]
          resources: ["nodes", "pods", "configmaps"]
          verbs: ["get", "list", "watch"]
        - apiGroups: [""]
          resources: ["persistentvolumeclaims"]
          verbs: ["get", "list", "watch", "update"]
        - apiGroups: [""]
          resources: ["persistentvolumeclaims/status"]
          verbs: ["patch"]
        - apiGroups: [""]
          resources: ["persistentvolumes"]
          verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
        - apiGroups: [""]
          resources: ["events"]
          verbs: ["get", "list", "watch", "create", "update", "patch"]
        - apiGroups: ["coordination.k8s.io"]
          resources: ["leases"]
          verbs: ["get", "watch", "list", "delete", "update", "create"]
        - apiGroups: ["storage.k8s.io"]
          resources: ["storageclasses", "csinodes"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["storage.k8s.io"]
          resources: ["volumeattachments"]
          verbs: ["get", "list", "watch", "patch"]
        - apiGroups: ["cns.vmware.com"]
          resources: ["triggercsifullsyncs"]
          verbs: ["create", "get", "update", "watch", "list"]
        - apiGroups: ["cns.vmware.com"]
          resources: ["cnsvspherevolumemigrations"]
          verbs: ["create", "get", "list", "watch", "update", "delete"]
        - apiGroups: ["apiextensions.k8s.io"]
          resources: ["customresourcedefinitions"]
          verbs: ["get", "create", "update"]
        - apiGroups: ["storage.k8s.io"]
          resources: ["volumeattachments/status"]
          verbs: ["patch"]
        - apiGroups: ["cns.vmware.com"]
          resources: ["cnsvolumeoperationrequests"]
          verbs: ["create", "get", "list", "update", "delete"]
        - apiGroups: [ "snapshot.storage.k8s.io" ]
          resources: [ "volumesnapshots" ]
          verbs: [ "get", "list" ]
        - apiGroups: [ "snapshot.storage.k8s.io" ]
          resources: [ "volumesnapshotclasses" ]
          verbs: [ "watch", "get", "list" ]
        - apiGroups: [ "snapshot.storage.k8s.io" ]
          resources: [ "volumesnapshotcontents" ]
          verbs: [ "create", "get", "list", "watch", "update", "delete", "patch"]
        - apiGroups: [ "snapshot.storage.k8s.io" ]
          resources: [ "volumesnapshotcontents/status" ]
          verbs: [ "update", "patch" ]
        - apiGroups: [ "cns.vmware.com" ]
          resources: [ "csinodetopologies" ]
          verbs: ["get", "update", "watch", "list"]
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-controller-role
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata: 
          name: vsphere-csi-controller-binding
        roleRef: 
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: vsphere-csi-controller-role
        subjects: 
        - kind: ServiceAccount
          name: vsphere-csi-controller
          namespace: kube-system
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-controller-binding
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ${CLUSTER_NAME}-csi-vsphere-config
      namespace: ${NAMESPACE}
    stringData:
      data: |
        apiVersion: v1
        kind: Secret
        metadata: 
          name: csi-vsphere-config
          namespace: kube-system
        stringData: 
          csi-vsphere.conf: |+
            [Global]
            thumbprint = "${VSPHERE_TLS_THUMBPRINT}"
            cluster-id = "${NAMESPACE}/${CLUSTER_NAME}"
    
            [VirtualCenter "${VSPHERE_SERVER}"]
            insecure-flag = "true"
            user = "${VSPHERE_USERNAME}"
            password = "${VSPHERE_PASSWORD}"
            datacenters = "${VSPHERE_DATACENTER}"
            targetvSANFileShareDatastoreURLs = "${VSPHERE_DATASTORE_URL_SAN}"
    
            [Network]
            public-network = "${VSPHERE_NETWORK=workload}"
    
        type: Opaque
    type: addons.cluster.x-k8s.io/resource-set
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: storage.k8s.io/v1
        kind: CSIDriver
        metadata:
          name: csi.vsphere.vmware.com
        spec:
          attachRequired: true
          podInfoOnMount: false
    kind: ConfigMap
    metadata:
      name: csi.vsphere.vmware.com
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: ServiceAccount
        apiVersion: v1
        metadata:
          name: vsphere-csi-controller
          namespace: kube-system
    kind: ConfigMap
    metadata:
      name: vsphere-csi-controller-sa
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: ServiceAccount
        apiVersion: v1
        metadata:
          name: vsphere-csi-node
          namespace: kube-system
    kind: ConfigMap
    metadata:
      name: vsphere-csi-node-sa
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata: 
          name: vsphere-csi-node-cluster-role
        rules: 
          - apiGroups: ["cns.vmware.com"]
            resources: ["csinodetopologies"]
            verbs: ["create", "watch", "get", "patch"]
          - apiGroups: [""]
            resources: ["nodes"]
            verbs: ["get"]
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node-cluster-role
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: ClusterRoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: vsphere-csi-node-cluster-role-binding
        subjects:
          - kind: ServiceAccount
            name: vsphere-csi-node
            namespace: kube-system
        roleRef:
          kind: ClusterRole
          name: vsphere-csi-node-cluster-role
          apiGroup: rbac.authorization.k8s.io
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node-cluster-role-binding
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: Role
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: vsphere-csi-node-role
          namespace: kube-system
        rules:
          - apiGroups: [""]
            resources: ["configmaps"]
            verbs: ["get", "list", "watch"]
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node-role
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: RoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: vsphere-csi-node-binding
          namespace: kube-system
        subjects:
          - kind: ServiceAccount
            name: vsphere-csi-node
            namespace: kube-system
        roleRef:
          kind: Role
          name: vsphere-csi-node-role
          apiGroup: rbac.authorization.k8s.io
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node-binding
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: v1
        data:
          "csi-migration": "true"
          "csi-auth-check": "true"
          "online-volume-extend": "true"
          "trigger-csi-fullsync": "false"
          "async-query-volume": "true"
          "improved-csi-idempotency": "true"
          "improved-volume-topology": "true"
          "block-volume-snapshot": "true"
          "csi-windows-support": "false"
          "use-csinode-id": "true"
          "list-volumes": "false"
          "pv-to-backingdiskobjectid-mapping": "false"
          "cnsmgr-suspend-create-volume": "true"
          "topology-preferential-datastores": "true"
          "max-pvscsi-targets-per-vm": "true"
        kind: ConfigMap
        metadata:
          name: internal-feature-states.csi.vsphere.vmware.com
          namespace: kube-system
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-internal-feature-states.csi.vsphere.vmware.com
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        apiVersion: v1
        kind: Service
        metadata:
          name: vsphere-csi-controller
          namespace: kube-system
          labels:
            app: vsphere-csi-controller
        spec:
          ports:
            - name: ctlr
              port: 2112
              targetPort: 2112
              protocol: TCP
            - name: syncer
              port: 2113
              targetPort: 2113
              protocol: TCP
          selector:
            app: vsphere-csi-controller
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-controller-service
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: Deployment
        apiVersion: apps/v1
        metadata:
          name: vsphere-csi-controller
          namespace: kube-system
        spec:
          replicas: 1
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 1
              maxSurge: 0
          selector:
            matchLabels:
              app: vsphere-csi-controller
          template:
            metadata:
              labels:
                app: vsphere-csi-controller
                role: vsphere-csi
            spec:
              affinity:
                podAntiAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                    - labelSelector:
                        matchExpressions:
                          - key: "app"
                            operator: In
                            values:
                              - vsphere-csi-controller
                      topologyKey: "kubernetes.io/hostname"
              serviceAccountName: vsphere-csi-controller
              nodeSelector:
                node-role.kubernetes.io/control-plane: ""
              tolerations:
                - key: node-role.kubernetes.io/master
                  operator: Exists
                  effect: NoSchedule
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
                  effect: NoSchedule
                # uncomment below toleration if you need an aggressive pod eviction in case when
                # node becomes not-ready or unreachable. Default is 300 seconds if not specified.
                #- key: node.kubernetes.io/not-ready
                #  operator: Exists
                #  effect: NoExecute
                #  tolerationSeconds: 30
                #- key: node.kubernetes.io/unreachable
                #  operator: Exists
                #  effect: NoExecute
                #  tolerationSeconds: 30
              dnsPolicy: "Default"
              containers:
                - name: csi-attacher
                  image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
                  args:
                    - "--v=4"
                    - "--timeout=300s"
                    - "--csi-address=$(ADDRESS)"
                    - "--leader-election"
                    - "--kube-api-qps=100"
                    - "--kube-api-burst=100"
                  env:
                    - name: ADDRESS
                      value: /csi/csi.sock
                  volumeMounts:
                    - mountPath: /csi
                      name: socket-dir
                - name: csi-resizer
                  image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
                  args:
                    - "--v=4"
                    - "--timeout=300s"
                    - "--handle-volume-inuse-error=false"
                    - "--csi-address=$(ADDRESS)"
                    - "--kube-api-qps=100"
                    - "--kube-api-burst=100"
                    - "--leader-election"
                  env:
                    - name: ADDRESS
                      value: /csi/csi.sock
                  volumeMounts:
                    - mountPath: /csi
                      name: socket-dir
                - name: vsphere-csi-controller
                  image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.7.0
                  args:
                    - "--fss-name=internal-feature-states.csi.vsphere.vmware.com"
                    - "--fss-namespace=$(CSI_NAMESPACE)"
                  imagePullPolicy: "Always"
                  env:
                    - name: CSI_ENDPOINT
                      value: unix:///csi/csi.sock
                    - name: X_CSI_MODE
                      value: "controller"
                    - name: X_CSI_SPEC_DISABLE_LEN_CHECK
                      value: "true"
                    - name: X_CSI_SERIAL_VOL_ACCESS_TIMEOUT
                      value: 3m
                    - name: VSPHERE_CSI_CONFIG
                      value: "/etc/cloud/csi-vsphere.conf"
                    - name: LOGGER_LEVEL
                      value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
                    - name: INCLUSTER_CLIENT_QPS
                      value: "100"
                    - name: INCLUSTER_CLIENT_BURST
                      value: "100"
                    - name: CSI_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.namespace
                  volumeMounts:
                    - mountPath: /etc/cloud
                      name: vsphere-config-volume
                      readOnly: true
                    - mountPath: /csi
                      name: socket-dir
                  ports:
                    - name: healthz
                      containerPort: 9808
                      protocol: TCP
                    - name: prometheus
                      containerPort: 2112
                      protocol: TCP
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: healthz
                    initialDelaySeconds: 10
                    timeoutSeconds: 3
                    periodSeconds: 5
                    failureThreshold: 3
                - name: liveness-probe
                  image: k8s.gcr.io/sig-storage/livenessprobe:v2.7.0
                  args:
                    - "--v=4"
                    - "--csi-address=/csi/csi.sock"
                  volumeMounts:
                    - name: socket-dir
                      mountPath: /csi
                - name: vsphere-syncer
                  image: gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.7.0
                  args:
                    - "--leader-election"
                    - "--fss-name=internal-feature-states.csi.vsphere.vmware.com"
                    - "--fss-namespace=$(CSI_NAMESPACE)"
                  imagePullPolicy: "Always"
                  ports:
                    - containerPort: 2113
                      name: prometheus
                      protocol: TCP
                  env:
                    - name: FULL_SYNC_INTERVAL_MINUTES
                      value: "30"
                    - name: VSPHERE_CSI_CONFIG
                      value: "/etc/cloud/csi-vsphere.conf"
                    - name: LOGGER_LEVEL
                      value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
                    - name: INCLUSTER_CLIENT_QPS
                      value: "100"
                    - name: INCLUSTER_CLIENT_BURST
                      value: "100"
                    - name: GODEBUG
                      value: x509sha1=1
                    - name: CSI_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.namespace
                  volumeMounts:
                    - mountPath: /etc/cloud
                      name: vsphere-config-volume
                      readOnly: true
                - name: csi-provisioner
                  image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.1
                  args:
                    - "--v=4"
                    - "--timeout=300s"
                    - "--csi-address=$(ADDRESS)"
                    - "--kube-api-qps=100"
                    - "--kube-api-burst=100"
                    - "--leader-election"
                    - "--default-fstype=ext4"
                    # needed only for topology aware setup
                    #- "--feature-gates=Topology=true"
                    #- "--strict-topology"
                  env:
                    - name: ADDRESS
                      value: /csi/csi.sock
                  volumeMounts:
                    - mountPath: /csi
                      name: socket-dir
                - name: csi-snapshotter
                  image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
                  args:
                    - "--v=4"
                    - "--kube-api-qps=100"
                    - "--kube-api-burst=100"
                    - "--timeout=300s"
                    - "--csi-address=$(ADDRESS)"
                    - "--leader-election"
                  env:
                    - name: ADDRESS
                      value: /csi/csi.sock
                  volumeMounts:
                    - mountPath: /csi
                      name: socket-dir
              volumes:
                - name: vsphere-config-volume
                  secret:
                    secretName: csi-vsphere-config
                - name: socket-dir
                  emptyDir: {}
    
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-controller
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: DaemonSet
        apiVersion: apps/v1
        metadata:
          name: vsphere-csi-node
          namespace: kube-system
        spec:
          selector:
            matchLabels:
              app: vsphere-csi-node
          updateStrategy:
            type: "RollingUpdate"
            rollingUpdate:
              maxUnavailable: 1
          template:
            metadata:
              labels:
                app: vsphere-csi-node
                role: vsphere-csi
            spec:
              nodeSelector:
                kubernetes.io/os: linux
              serviceAccountName: vsphere-csi-node
              hostNetwork: true
              dnsPolicy: "ClusterFirstWithHostNet"
              containers:
                - name: node-driver-registrar
                  image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
                  args:
                    - "--v=5"
                    - "--csi-address=$(ADDRESS)"
                    - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
                  env:
                    - name: ADDRESS
                      value: /csi/csi.sock
                    - name: DRIVER_REG_SOCK_PATH
                      value: /var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: /csi
                    - name: registration-dir
                      mountPath: /registration
                  livenessProbe:
                    exec:
                      command:
                        - /csi-node-driver-registrar
                        - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock
                        - --mode=kubelet-registration-probe
                    initialDelaySeconds: 3
                - name: vsphere-csi-node
                  image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.7.0
                  args:
                    - "--fss-name=internal-feature-states.csi.vsphere.vmware.com"
                    - "--fss-namespace=$(CSI_NAMESPACE)"
                  imagePullPolicy: "Always"
                  env:
                    - name: NODE_NAME
                      valueFrom:
                        fieldRef:
                          fieldPath: spec.nodeName
                    - name: CSI_ENDPOINT
                      value: unix:///csi/csi.sock
                    - name: MAX_VOLUMES_PER_NODE
                      value: "59" # Maximum number of volumes that controller can publish to the node. If value is not set or zero Kubernetes decide how many volumes can be published by the controller to the node.
                    - name: X_CSI_MODE
                      value: "node"
                    - name: X_CSI_SPEC_REQ_VALIDATION
                      value: "false"
                    - name: X_CSI_SPEC_DISABLE_LEN_CHECK
                      value: "true"
                    - name: LOGGER_LEVEL
                      value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
                    - name: GODEBUG
                      value: x509sha1=1
                    - name: CSI_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.namespace
                    - name: NODEGETINFO_WATCH_TIMEOUT_MINUTES
                      value: "1"
                  securityContext:
                    privileged: true
                    capabilities:
                      add: ["SYS_ADMIN"]
                    allowPrivilegeEscalation: true
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: /csi
                    - name: pods-mount-dir
                      mountPath: /var/lib/kubelet
                      # needed so that any mounts setup inside this container are
                      # propagated back to the host machine.
                      mountPropagation: "Bidirectional"
                    - name: device-dir
                      mountPath: /dev
                    - name: blocks-dir
                      mountPath: /sys/block
                    - name: sys-devices-dir
                      mountPath: /sys/devices
                  ports:
                    - name: healthz
                      containerPort: 9808
                      protocol: TCP
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: healthz
                    initialDelaySeconds: 10
                    timeoutSeconds: 5
                    periodSeconds: 5
                    failureThreshold: 3
                - name: liveness-probe
                  image: k8s.gcr.io/sig-storage/livenessprobe:v2.7.0
                  args:
                    - "--v=4"
                    - "--csi-address=/csi/csi.sock"
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: /csi
              volumes:
                - name: registration-dir
                  hostPath:
                    path: /var/lib/kubelet/plugins_registry
                    type: Directory
                - name: plugin-dir
                  hostPath:
                    path: /var/lib/kubelet/plugins/csi.vsphere.vmware.com
                    type: DirectoryOrCreate
                - name: pods-mount-dir
                  hostPath:
                    path: /var/lib/kubelet
                    type: Directory
                - name: device-dir
                  hostPath:
                    path: /dev
                - name: blocks-dir
                  hostPath:
                    path: /sys/block
                    type: Directory
                - name: sys-devices-dir
                  hostPath:
                    path: /sys/devices
                    type: Directory
              tolerations:
                - effect: NoExecute
                  operator: Exists
                - effect: NoSchedule
                  operator: Exists
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    data:
      data: |
        kind: DaemonSet
        apiVersion: apps/v1
        metadata:
          name: vsphere-csi-node-windows
          namespace: kube-system
        spec:
          selector:
            matchLabels:
              app: vsphere-csi-node-windows
          updateStrategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 1
          template:
            metadata:
              labels:
                app: vsphere-csi-node-windows
                role: vsphere-csi-windows
            spec:
              nodeSelector:
                kubernetes.io/os: windows
              serviceAccountName: vsphere-csi-node
              containers:
                - name: node-driver-registrar
                  image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
                  args:
                    - "--v=5"
                    - "--csi-address=$(ADDRESS)"
                    - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
                  env:
                    - name: ADDRESS
                      value: 'unix://C:\\csi\\csi.sock'
                    - name: DRIVER_REG_SOCK_PATH
                      value: 'C:\\var\\lib\\kubelet\\plugins\\csi.vsphere.vmware.com\\csi.sock'
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: /csi
                    - name: registration-dir
                      mountPath: /registration
                  livenessProbe:
                    exec:
                      command:
                        - /csi-node-driver-registrar.exe
                        - --kubelet-registration-path=C:\\var\\lib\\kubelet\\plugins\\csi.vsphere.vmware.com\\csi.sock
                        - --mode=kubelet-registration-probe
                    initialDelaySeconds: 3
                - name: vsphere-csi-node
                  image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.7.0
                  args:
                    - "--fss-name=internal-feature-states.csi.vsphere.vmware.com"
                    - "--fss-namespace=$(CSI_NAMESPACE)"
                  imagePullPolicy: "Always"
                  env:
                    - name: NODE_NAME
                      valueFrom:
                        fieldRef:
                          apiVersion: v1
                          fieldPath: spec.nodeName
                    - name: CSI_ENDPOINT
                      value: 'unix://C:\\csi\\csi.sock'
                    - name: MAX_VOLUMES_PER_NODE
                      value: "59" # Maximum number of volumes that controller can publish to the node. If value is not set or zero Kubernetes decide how many volumes can be published by the controller to the node.
                    - name: X_CSI_MODE
                      value: node
                    - name: X_CSI_SPEC_REQ_VALIDATION
                      value: 'false'
                    - name: X_CSI_SPEC_DISABLE_LEN_CHECK
                      value: "true"
                    - name: LOGGER_LEVEL
                      value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
                    - name: X_CSI_LOG_LEVEL
                      value: DEBUG
                    - name: CSI_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.namespace
                    - name: NODEGETINFO_WATCH_TIMEOUT_MINUTES
                      value: "1"
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: 'C:\csi'
                    - name: pods-mount-dir
                      mountPath: 'C:\var\lib\kubelet'
                    - name: csi-proxy-volume-v1
                      mountPath: \\.\pipe\csi-proxy-volume-v1
                    - name: csi-proxy-filesystem-v1
                      mountPath: \\.\pipe\csi-proxy-filesystem-v1
                    - name: csi-proxy-disk-v1
                      mountPath: \\.\pipe\csi-proxy-disk-v1
                    - name: csi-proxy-system-v1alpha1
                      mountPath: \\.\pipe\csi-proxy-system-v1alpha1
                  ports:
                    - name: healthz
                      containerPort: 9808
                      protocol: TCP
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: healthz
                    initialDelaySeconds: 10
                    timeoutSeconds: 5
                    periodSeconds: 5
                    failureThreshold: 3
                - name: liveness-probe
                  image: k8s.gcr.io/sig-storage/livenessprobe:v2.7.0
                  args:
                    - "--v=4"
                    - "--csi-address=/csi/csi.sock"
                  volumeMounts:
                    - name: plugin-dir
                      mountPath: /csi
              volumes:
                - name: registration-dir
                  hostPath:
                    path: 'C:\var\lib\kubelet\plugins_registry\'
                    type: Directory
                - name: plugin-dir
                  hostPath:
                    path: 'C:\var\lib\kubelet\plugins\csi.vsphere.vmware.com\'
                    type: DirectoryOrCreate
                - name: pods-mount-dir
                  hostPath:
                    path: \var\lib\kubelet
                    type: Directory
                - name: csi-proxy-disk-v1
                  hostPath:
                    path: \\.\pipe\csi-proxy-disk-v1
                    type: ''
                - name: csi-proxy-volume-v1
                  hostPath:
                    path: \\.\pipe\csi-proxy-volume-v1
                    type: ''
                - name: csi-proxy-filesystem-v1
                  hostPath:
                    path: \\.\pipe\csi-proxy-filesystem-v1
                    type: ''
                - name: csi-proxy-system-v1alpha1
                  hostPath:
                    path: \\.\pipe\csi-proxy-system-v1alpha1
                    type: ''
              tolerations:
                - effect: NoExecute
                  operator: Exists
                - effect: NoSchedule
                  operator: Exists
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-vsphere-csi-node-windows
      namespace: ${NAMESPACE}
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ${CLUSTER_NAME}-cloud-controller-manager
      namespace: ${NAMESPACE}
    stringData:
      data: |
        apiVersion: v1
        kind: ServiceAccount
        metadata: 
          labels: 
            component: cloud-controller-manager
            vsphere-cpi-infra: service-account
          name: cloud-controller-manager
          namespace: kube-system
    type: addons.cluster.x-k8s.io/resource-set
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ${CLUSTER_NAME}-cloud-provider-vsphere-credentials
      namespace: ${NAMESPACE}
    stringData:
      data: |
        apiVersion: v1
        kind: Secret
        metadata: 
          labels: 
            component: cloud-controller-manager
            vsphere-cpi-infra: secret
          name: cloud-provider-vsphere-credentials
          namespace: kube-system
        stringData: 
          ${VSPHERE_SERVER}.password: ${VSPHERE_PASSWORD}
          ${VSPHERE_SERVER}.username: ${VSPHERE_USERNAME}
        type: Opaque
    type: addons.cluster.x-k8s.io/resource-set
    ---
    apiVersion: v1
    data:
      data: |
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata: 
          labels: 
            component: cloud-controller-manager
            vsphere-cpi-infra: role
          name: system:cloud-controller-manager
        rules: 
        - apiGroups: 
          - ""
          resources: 
          - events
          verbs: 
          - create
          - patch
          - update
        - apiGroups: 
          - ""
          resources: 
          - nodes
          verbs: 
          - '*'
        - apiGroups: 
          - ""
          resources: 
          - nodes/status
          verbs: 
          - patch
        - apiGroups: 
          - ""
          resources: 
          - services
          verbs: 
          - list
          - patch
          - update
          - watch
        - apiGroups: 
          - ""
          resources: 
          - services/status
          verbs: 
          - patch
        - apiGroups: 
          - ""
          resources: 
          - serviceaccounts
          verbs: 
          - create
          - get
          - list
          - watch
          - update
        - apiGroups: 
          - ""
          resources: 
          - persistentvolumes
          verbs: 
          - get
          - list
          - watch
          - update
        - apiGroups: 
          - ""
          resources: 
          - endpoints
          verbs: 
          - create
          - get
          - list
          - watch
          - update
        - apiGroups: 
          - ""
          resources: 
          - secrets
          verbs: 
          - get
          - list
          - watch
        - apiGroups: 
          - coordination.k8s.io
          resources: 
          - leases
          verbs: 
          - get
          - watch
          - list
          - update
          - create
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata: 
          labels: 
            component: cloud-controller-manager
            vsphere-cpi-infra: cluster-role-binding
          name: system:cloud-controller-manager
        roleRef: 
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:cloud-controller-manager
        subjects: 
        - kind: ServiceAccount
          name: cloud-controller-manager
          namespace: kube-system
        - kind: User
          name: cloud-controller-manager
        ---
        apiVersion: v1
        data: 
          vsphere.conf: |
            global: 
              port: 443
              secretName: cloud-provider-vsphere-credentials
              secretNamespace: kube-system
              thumbprint: '${VSPHERE_TLS_THUMBPRINT}'
            vcenter: 
              ${VSPHERE_SERVER}:
                datacenters: 
                - '${VSPHERE_DATACENTER}'
                server: '${VSPHERE_SERVER}'
        kind: ConfigMap
        metadata: 
          name: vsphere-cloud-config
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata: 
          labels: 
            component: cloud-controller-manager
            vsphere-cpi-infra: role-binding
          name: servicecatalog.k8s.io:apiserver-authentication-reader
          namespace: kube-system
        roleRef: 
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: extension-apiserver-authentication-reader
        subjects: 
        - kind: ServiceAccount
          name: cloud-controller-manager
          namespace: kube-system
        - kind: User
          name: cloud-controller-manager
        ---
        apiVersion: apps/v1
        kind: DaemonSet
        metadata: 
          labels: 
            component: cloud-controller-manager
            tier: control-plane
          name: vsphere-cloud-controller-manager
          namespace: kube-system
        spec: 
          selector: 
            matchLabels: 
              name: vsphere-cloud-controller-manager
          template: 
            metadata: 
              labels: 
                component: cloud-controller-manager
                name: vsphere-cloud-controller-manager
                tier: control-plane
            spec: 
              affinity: 
                nodeAffinity: 
                  requiredDuringSchedulingIgnoredDuringExecution: 
                    nodeSelectorTerms: 
                    - matchExpressions: 
                      - key: node-role.kubernetes.io/control-plane
                        operator: Exists
                    - matchExpressions: 
                      - key: node-role.kubernetes.io/master
                        operator: Exists
              containers: 
              - args: 
                - --v=2
                - --cloud-provider=vsphere
                - --cloud-config=/etc/cloud/vsphere.conf
                image: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.25.3
                name: vsphere-cloud-controller-manager
                resources: 
                  requests: 
                    cpu: 200m
                volumeMounts: 
                - mountPath: /etc/cloud
                  name: vsphere-config-volume
                  readOnly: true
              hostNetwork: true
              priorityClassName: system-node-critical
              securityContext: 
                runAsUser: 1001
              serviceAccountName: cloud-controller-manager
              tolerations: 
              - effect: NoSchedule
                key: node.cloudprovider.kubernetes.io/uninitialized
                value: "true"
              - effect: NoSchedule
                key: node-role.kubernetes.io/master
                operator: Exists
              - effect: NoSchedule
                key: node-role.kubernetes.io/control-plane
                operator: Exists
              - effect: NoSchedule
                key: node.kubernetes.io/not-ready
                operator: Exists
              volumes: 
              - configMap: 
                  name: vsphere-cloud-config
                name: vsphere-config-volume
          updateStrategy: 
            type: RollingUpdate
    kind: ConfigMap
    metadata:
      name: ${CLUSTER_NAME}-cpi-manifests
      namespace: ${NAMESPACE}
    
    
    
    
    ---
    apiVersion: addons.cluster.x-k8s.io/v1beta1
    kind: ClusterResourceSet
    metadata:
      name: ${CLUSTER_NAME}-calico-module-resource
      namespace: ${NAMESPACE}
    spec:
      clusterSelector:
        matchLabels:
          cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
      resources:
        - kind: ConfigMap
          name: ${CLUSTER_NAME}-calico-module-cr
      strategy: Reconcile
    ---
    apiVersion: v1
    data:
      calico.yaml: |
        apiVersion: platform.verrazzano.io/v1alpha1
        kind: Module
        metadata:
          name: calico
          namespace: default
        spec:
          moduleName: calico
          targetNamespace: default
          values:
            tigeraOperator:
              version: ${TIGERA_TAG=v1.29.0}
            installation:
              cni:
                type: Calico
              calicoNetwork:
                bgp: Disabled
                ipPools:
                  - cidr: ${POD_CIDR=192.168.0.0/16}
                    encapsulation: VXLAN
              registry: ${OCNE_IMAGE_REPOSITORY=container-registry.oracle.com}
              imagePath: ${OCNE_IMAGE_PATH=olcne}
    kind: ConfigMap
    metadata:
      annotations:
        note: generated
      labels:
        type: generated
      name: ${CLUSTER_NAME}-calico-module-cr
      namespace: ${NAMESPACE}
    
  2. Run the following command to generate the template and apply it. The variables it references, such as CLUSTER_NAME and NAMESPACE, must be set first; see the note after the command.

    $ clusterctl generate yaml --from vsphere-capi.yaml | kubectl apply -f -
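    
    The template substitutes variables such as ${CLUSTER_NAME}, ${NAMESPACE}, and ${KUBERNETES_VERSION} when clusterctl generate yaml runs; variables without a default in the template, such as CLUSTER_NAME and NAMESPACE, must be exported before running the command above. A minimal sketch, using the kluster1 name assumed by the kubeconfig command that follows (the values are placeholders for your environment, and the namespace must exist before the manifests are applied):
    
    $ export CLUSTER_NAME="kluster1"
    $ export NAMESPACE="kluster1"
    $ export KUBERNETES_VERSION="v1.26.6"
    $ kubectl create namespace "${NAMESPACE}"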
    

To retrieve the kubeconfig file, run:

$ clusterctl get kubeconfig kluster1 -n kluster1 > kluster1
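
Provisioning takes several minutes. Progress can be watched from the management cluster, and the new cluster's nodes can be listed once the kubeconfig has been retrieved; a minimal sketch, assuming the kluster1 cluster name and namespace used above:

$ kubectl get clusters,machines -n kluster1
$ clusterctl describe cluster kluster1 -n kluster1
$ kubectl --kubeconfig kluster1 get nodes -o wide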

Complete the Cluster Configuration

After the cluster resources have been created, you must perform a few additional steps to complete the cluster configuration.

  1. If you do not have a load balancer available in vSphere, you can deploy MetalLB.

    $ export KUBECONFIG=kluster1
     
    $ ADDRESS_RANGE="<subnet-from-vSphere-network>"
     
    $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml --wait=true;
    $ kubectl rollout status deployment -n metallb-system controller -w;
    $ kubectl apply -f -  <<EOF1
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: vzlocalpool
        namespace: metallb-system
      spec:
        addresses:
        - ${ADDRESS_RANGE}
    EOF1
     
    $ kubectl apply -f -  <<-EOF2
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: vzmetallb
        namespace: metallb-system
      spec:
        ipAddressPools:
        - vzlocalpool
    EOF2
     
    $ sleep 10;
    $ kubectl wait --namespace metallb-system --for=condition=ready pod --all --timeout=300s
    

  2. Create a default storage class in the cluster.

    $ export KUBECONFIG=kluster1
    $ kubectl apply -f -  <<-EOF
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: vmware-sc
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: csi.vsphere.vmware.com
      volumeBindingMode: WaitForFirstConsumer
    EOF
    

  3. Install Verrazzano on the managed cluster.

    $ export KUBECONFIG=kluster1 
    
    $ vz install -f - <<EOF
      apiVersion: install.verrazzano.io/v1beta1
      kind: Verrazzano
      metadata:
        name: example-verrazzano
      spec:
        profile: dev
        defaultVolumeSource:
          persistentVolumeClaim:
            claimName: verrazzano-storage
        volumeClaimSpecTemplates:
          - metadata:
              name: verrazzano-storage
            spec:
              resources:
                requests:
                  storage: 2Gi
    EOF
    

The management cluster and the first managed cluster are now up and running, ready for you to deploy applications. You can also add more managed clusters.
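
To confirm that the installation in step 3 completed, you can wait on the InstallComplete condition of the Verrazzano resource and then check that the LoadBalancer services picked up external IP addresses from the MetalLB pool; a minimal sketch, using the example-verrazzano name from step 3:

$ export KUBECONFIG=kluster1
$ kubectl wait --timeout=30m --for=condition=InstallComplete verrazzano/example-verrazzano
$ kubectl get svc -A | grep LoadBalancer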

For more information, see the Cluster API and Cluster API Provider vSphere documentation.

Troubleshoot the Deployment

If the deployment of vSphere resources fails, you can examine the log files to diagnose the problem.

Logs of the vSphere cluster controller provider:

$ kubectl logs -n verrazzano-capi -l cluster.x-k8s.io/provider=infrastructure-vsphere

Logs of the OCNE control plane provider:

$ kubectl logs -n verrazzano-capi -l cluster.x-k8s.io/provider=control-plane-ocne
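
Beyond the provider logs, the status and events of the Cluster API objects themselves usually point to the failing step. A minimal sketch, run against the management cluster with placeholder names:

$ kubectl get clusters,machines,vspheremachines -A
$ kubectl describe cluster <cluster-name> -n <namespace>
$ kubectl get events -n <namespace> --sort-by=.lastTimestamp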

Note: If the CSI pods are deployed before Calico, they can end up in a CrashLoop state. Restart the pods to fix the problem.

$ kubectl --kubeconfig kluster1 scale deploy  -n kube-system vsphere-csi-controller --replicas=0
$ kubectl --kubeconfig kluster1 scale deploy  -n kube-system vsphere-csi-controller --replicas=1
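
After scaling back up, the controller pod should return to the Running state; it can be checked by its app label, which is defined in the manifests embedded in the cluster template above:

$ kubectl --kubeconfig kluster1 get pods -n kube-system -l app=vsphere-csi-controller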

Delete the Clusters

  1. Delete the managed cluster.
    $ kubectl delete cluster $CLUSTER_NAME -n $NAMESPACE
    
  2. Delete the management cluster.
    $ kind delete cluster
    

Do not delete the entire cluster template at once with kubectl delete -f vsphere-capi.yaml, because it can leave behind pending resources that require manual cleanup.
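
Deleting the Cluster object cascades to its machines and the underlying vSphere VMs, which can take several minutes. One way to block until the managed cluster is fully removed before deleting the kind cluster in step 2 (assuming CLUSTER_NAME and NAMESPACE are still exported):

$ kubectl wait --for=delete cluster/$CLUSTER_NAME -n $NAMESPACE --timeout=30m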