Introduction
OCCNE Installation automation uses the information in an OCCNE Inventory file to provision servers and virtual machines, install cloud native components, and configure all of the components within the cluster so that they constitute a cluster conformant to the OCCNE platform specifications. To assist with creating the OCCNE Inventory, a boilerplate OCCNE Inventory is provided. The boilerplate inventory file requires the input of site-specific information.

This document outlines the procedure for taking the OCCNE Inventory boilerplate and creating a site-specific OCCNE Inventory file usable by the OCCNE Install Procedures.
Inventory File Overview

The inventory file is an Initialization (INI) formatted file. The basic elements of an inventory file are hosts, properties, and groups.
A host is defined as a Fully Qualified Domain Name (FQDN). Properties are defined as key=value pairs.
A property applies to a specific host when it appears on the same line as the host.
Square brackets define group names. For example, [host_hp_gen_10] defines the group of physical HP Gen10 machines. There is no explicit "end of group" delimiter; rather, a group definition ends at the next group declaration or at the end of the file. Groups cannot be nested.
A property applies to an entire group when it is defined under a group heading not on the same line as a host.
Groups of groups are formed using the children keyword. For example, the [occne:children] heading creates an occne group comprised of several other groups.
Inline comments are not allowed.
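As a minimal sketch of these elements (the host names, group name, and values below are illustrative placeholders, not part of the boilerplate):

```ini
# [example_group] declares a group; the two FQDNs below it are hosts.
# ansible_host is a host-level property because it appears on the same
# line as a host.
[example_group]
host-1.example.us.oracle.com ansible_host=10.0.0.1
host-2.example.us.oracle.com ansible_host=10.0.0.2

# ntp_server is a group-level property: it is defined under a group
# heading but not on the same line as a host.
[example_group:vars]
ntp_server=10.0.0.254

# The children keyword forms a group of groups.
[example_parent:children]
example_group
```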
Data Tier Groups
The MySQL service consists of several nodes running on virtual machines on RMS hosts. This collection of hosts is referred to as the MySQL Cluster. Each host in the MySQL Cluster requires a NodeId parameter whose value is unique across the MySQL Cluster. Additional parameter range limitations are outlined below.
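The range rules can be sketched as follows (host names here are placeholders; the complete assignments appear in the example hosts.ini file later in this procedure):

```ini
# Management nodes: NodeId must be in the range 49-255 and unique
# across the MySQL Cluster.
[mysqlndb_mgm_nodes]
db-3.example.us.oracle.com NodeId=49

# Data nodes: NodeId must be in the range 1-48; NodeId also determines
# which node group each data node belongs to.
[mysqlndb_data_nodes]
db-5.example.us.oracle.com NodeId=1

# SQL nodes: NodeId must be in the range 49-255 and unique across the
# MySQL Cluster.
[mysqlndb_sql_nodes]
db-9.example.us.oracle.com NodeId=56
```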
Procedure Steps

Prior to initiating the procedure steps, copy the Inventory Boilerplate to a system where it can be edited and saved for future use. Eventually, the hosts.ini file must be transferred to the OCCNE servers.
Table B-1 Procedure for OCCNE Inventory File Preparation
Step 1: OCCNE Cluster Name

To provide each OCCNE host with a unique FQDN, the first step in composing the OCCNE Inventory is to create an OCCNE Cluster domain suffix. The OCCNE Cluster domain suffix is built starting from a Top-Level Domain (TLD); the structure of a TLD is maintained by various government and commercial authorities. Additional domain name levels identify the cluster and are added to convey additional meaning. OCCNE suggests adding at least one "ad hoc" identifier and at least one "geographic" and "organizational" identifier. Geographic and organizational identifiers may be multiple levels deep.

Example OCCNE Cluster name: atlantic.lab1.research.nc.us.oracle.com
Step 2: Create host_hp_gen_10 and host_kernel_virtual group lists

Using the OCCNE Cluster domain suffix created above, fill out the inventory boilerplate with the list of hosts in the host_hp_gen_10 and host_kernel_virtual groups. The recommended host name prefix for nodes in the host_hp_gen_10 group is "k8s-x", where x is a number from 1 to N; Kubernetes master and worker nodes should not be differentiated using the host name. The recommended host name prefix for nodes in the host_kernel_virtual group is "db-x", where x is a number from 1 to N; MySQL Cluster nodes likewise should not be differentiated using host names.
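For example, using the cluster domain suffix from Step 1, the two group lists might begin as follows. The ansible_host, ilo, and mac properties required by the boilerplate are elided here, and which db-x hosts are bare metal versus virtual is site-specific:

```ini
[host_hp_gen_10]
k8s-1.atlantic.lab1.research.nc.us.oracle.com
k8s-2.atlantic.lab1.research.nc.us.oracle.com

[host_kernel_virtual]
db-3.atlantic.lab1.research.nc.us.oracle.com
db-4.atlantic.lab1.research.nc.us.oracle.com
```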
Step 3: Edit occne:vars

Edit the values in the occne:vars group to reflect site-specific data. The occne:vars group is shown in the example hosts.ini file that follows.
################################################################################
#                                                                              #
#  Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.       #
#                                                                              #
################################################################################
################################################################################
# EXAMPLE OCCNE Cluster hosts.ini file. Defines OCCNE deployment variables
# and targets.
################################################################################
# Definition of the host node local connection for Ansible control,
# do not change
[local]
127.0.0.1 ansible_connection=local
################################################################################
# This is a list of all of the nodes in the targeted deployment system with the
# IP address to use for Ansible control during deployment.
# For bare metal hosts, the IP of the ILO is used for driving reboots.
# Host MAC addresses are used to identify nodes during the PXE-boot phase of
# the os_install process.
# MAC addresses must be lowercase and delimited with a dash "-"
[host_hp_gen_10]
k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
[host_kernel_virtual]
db-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-8.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-9.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-10.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
###############################################################################
# Node grouping of which nodes are in the occne system
[occne:children]
host_hp_gen_10
host_kernel_virtual
k8s-cluster
data_store
###############################################################################
# Variables that define the OCCNE environment and specify target configuration.
[occne:vars]
occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.xx
nfs_path=/var/occne
subnet_ipv4=10.75.216.0
subnet_cidr=/25
netmask=255.255.255.128
broadcast_address=10.75.216.127
default_route=10.75.216.1
next_server=10.75.216.114
name_server='10.75.124.245,10.75.124.246'
ntp_server='10.75.124.245,10.75.124.246'
http_proxy=http://www-proxy.us.oracle.com:80
https_proxy=http://www-proxy.us.oracle.com:80
occne_private_registry=bastion-1
occne_private_registry_address='10.75.216.xx'
occne_private_registry_port=5000
metallb_peer_address=10.75.216.xx
metallb_default_pool_protocol=bgp
metallb_default_pool_addresses='10.75.xxx.xx/xx'
pxe_install_lights_out_usr=root
pxe_install_lights_out_passwd=TklcRoot
occne_k8s_binary_repo='http://bastion-1:8082/binaries'
helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_images_repo='bastion-1:5000/'
docker_rh_repo_base_url=http://<bastion-1 IP addr>/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://<bastion-1 IP addr>/yum/centos/RPM-GPG-CENTOS
###############################################################################
# Node grouping of which nodes are in the occne data_store
[data_store]
db-1.foo.lab.us.oracle.com
db-2.foo.lab.us.oracle.com
###############################################################################
# Node grouping of which nodes are to be Kubernetes master nodes
# (must be at least 2)
[kube-master]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com
################################################################################
# Node grouping specifying which nodes are Kubernetes etcd data.
# An odd number of etcd nodes is required.
[etcd]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com
################################################################################
# Node grouping specifying which nodes are Kubernetes worker nodes.
# A minimum of two worker nodes is required.
[kube-node]
k8s-4.foo.lab.us.oracle.com
k8s-5.foo.lab.us.oracle.com
k8s-6.foo.lab.us.oracle.com
k8s-7.foo.lab.us.oracle.com
# Node grouping of which nodes are to be in the OC-CNE Kubernetes cluster
[k8s-cluster:children]
kube-node
kube-master
################################################################################
# The following node groupings are for MySQL NDB cluster
# installation under control of MySQL Cluster Manager
################################################################################
# Each node must be assigned a NodeId that is unique across the MySQL
# cluster; for data nodes, the NodeId also controls which node group the
# node belongs to. For management nodes, NodeId must be between 49 and 255
# and unique within the MySQL cluster.
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50
###############################################################################
# For data nodes, NodeId must be between 1 and 48; NodeId is used to
# group the data nodes among different node groups.
[mysqlndb_data_nodes]
db-5.foo.lab.us.oracle.com NodeId=1
db-6.foo.lab.us.oracle.com NodeId=2
db-7.foo.lab.us.oracle.com NodeId=3
db-8.foo.lab.us.oracle.com NodeId=4
################################################################################
# For SQL nodes, NodeId must be between 49 and 255 and unique within the
# MySQL cluster.
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57
################################################################################
# Node grouping of all of the nodes involved in the MySQL cluster
[mysqlndb_all_nodes:children]
mysqlndb_mgm_nodes
mysqlndb_data_nodes
mysqlndb_sql_nodes
################################################################################
# MCM and NDB cluster variables can be defined here to override the values.
[mysqlndb_all_nodes:vars]
occne_mysqlndb_NoOfReplicas=2
occne_mysqlndb_DataMemory=12G