Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01

Inventory File Preparation

Introduction

Inventory File Overview

Procedure Steps

OCCNE Installation automation uses the information in an OCCNE Inventory file to provision servers and virtual machines, install cloud native components, and configure all components within the cluster so that the result conforms to the OCCNE platform specifications. To assist with the creation of the OCCNE Inventory, a boilerplate OCCNE Inventory is provided. The boilerplate inventory file requires the input of site-specific information.

This document outlines the procedure for taking the OCCNE Inventory boilerplate and creating a site-specific OCCNE Inventory file usable by the OCCNE Install Procedures.

The inventory file is an INI-formatted file. The basic elements of an inventory file are hosts, properties, and groups.

A host is defined as a Fully Qualified Domain Name (FQDN). Properties are defined as key=value pairs.

A property applies to a specific host when it appears on the same line as the host.

Square brackets define group names. For example, [host_hp_gen_10] defines the group of physical HP Gen10 machines. There is no explicit "end of group" delimiter; rather, a group definition ends at the next group declaration or at the end of the file. Groups cannot be nested.

A property applies to an entire group when it is defined under a group heading not on the same line as a host.

Groups of groups are formed using the children keyword. For example, the [occne:children] creates an occne group comprised of several other groups.

Inline comments are not allowed.
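The formatting rules above can be illustrated with a minimal, hypothetical fragment (the group names, host names, and values below are examples only, not part of the OCCNE inventory):

```ini
# A host entry, with per-host properties on the same line as the host
[example_group]
host-1.example.com ansible_host=10.0.0.10 mac=aa-bb-cc-dd-ee-ff

# Properties on their own lines under a group heading apply to the whole group
[example_group:vars]
example_key=example_value

# A group of groups, formed with the children keyword
[example_parent:children]
example_group
```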

The OCCNE Inventory file is composed of several groups:
  • host_hp_gen_10: list of all physical hosts in the OCCNE cluster. Each host in this group must also have several properties defined (outlined below).
    • ansible_host: The IP address for the host's teamed primary interface. The occne/os_install container uses this IP to configure a static IP for a pair of teamed interfaces when the hosts are provisioned.
    • ilo: The IP address of the host's iLO interface. This IP is manually configured as part of the OCCNE Configure Addresses for RMS iLOs, OA, EBIPA process.
    • mac: The MAC address of the host's network-bootable interface. This is typically eno5 for Gen10 RMS hardware and eno1 for Gen10 bladed hardware. MAC addresses must use all-lowercase hexadecimal values with a dash as the separator.
  • host_kernel_virtual: list of all virtual hosts in the OCCNE cluster. Each host in this group must have the same properties defined as above, with the exception of the ilo property.
  • occne:children: Do not modify the children of the occne group
  • occne:vars: This is a list of variables representing configurable site-specific data. While some variables are optional, the ones listed in the boilerplate should be defined with valid values. If a given site does not have applicable data to fill in for a variable, the OCCNE installation or engineering team should be consulted. Individual variable values are explained in subsequent sections.
  • data_store: list of Storage Hosts
  • kube-master: list of Master Node hosts where kubernetes master components run.
  • etcd: list of hosts that compose the etcd cluster; set to the same list of nodes as the kube-master group. The number of etcd hosts should always be odd.
  • kube-node: list of Worker Nodes. Worker Nodes are where kubernetes pods run and should consist of the bladed hosts.
  • k8s-cluster:children: do not modify the children of k8s-cluster
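As a sketch of the per-host property rules above, the following Python fragment checks that a host_hp_gen_10-style line carries the required ansible_host, ilo, and mac properties and that the MAC is lowercase, dash-separated hex. The host names and addresses here are hypothetical, and this validator is an illustrative aid, not part of the OCCNE tooling:

```python
import re

# MAC rule from the text: lowercase hex pairs separated by dashes.
MAC_RE = re.compile(r"^([0-9a-f]{2}-){5}[0-9a-f]{2}$")

def parse_host_line(line):
    """Split an inventory host line into (fqdn, {key: value})."""
    fqdn, *pairs = line.split()
    props = dict(p.split("=", 1) for p in pairs)
    return fqdn, props

def check_bare_metal_host(line):
    """Verify the properties required for host_hp_gen_10 entries."""
    fqdn, props = parse_host_line(line)
    missing = {"ansible_host", "ilo", "mac"} - props.keys()
    if missing:
        return False, f"{fqdn}: missing {sorted(missing)}"
    if not MAC_RE.match(props["mac"]):
        return False, f"{fqdn}: MAC must be lowercase hex with dash separators"
    return True, f"{fqdn}: ok"

ok, msg = check_bare_metal_host(
    "k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.10 "
    "ilo=10.75.216.20 mac=aa-bb-cc-00-11-22")
print(msg)  # k8s-1.foo.lab.us.oracle.com: ok
```

For host_kernel_virtual entries, the same check applies with ilo dropped from the required-property set.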

Data Tier Groups

The MySQL service is comprised of several nodes running on virtual machines on RMS hosts. This collection of hosts is referred to as the MySQL Cluster. Each host in the MySQL Cluster requires a NodeId parameter whose value is unique across the MySQL Cluster. Additional per-group range limitations are outlined below.

  • mysqlndb_mgm_nodes: list of MySQL Management nodes. In OCCNE 1.0 this group consists of three virtual machines distributed equally among the kube-master nodes. These nodes must have a NodeId parameter defined.
    • NodeId: Parameter must be unique across the MySQL Cluster and have a value between 49 and 255.
  • mysqlndb_data_nodes: list of MySQL Data nodes. In OCCNE 1.0 this group consists of four virtual machines distributed equally among the Storage Hosts. Requires a NodeId parameter.
    • NodeId: Parameter must be unique across the MySQL Cluster and have a value between 1 and 48.
  • mysqlndb_sql_nodes: list of MySQL SQL nodes. In OCCNE 1.0 this group consists of two virtual machines distributed equally among the Storage Hosts. Requires a NodeId parameter.
    • NodeId: Parameter must be unique across the MySQL Cluster and have a value between 49 and 255.
  • mysqlndb_all_nodes:children: Do not modify the children of the mysqlndb_all_nodes group.
  • mysqlndb_all_nodes:vars: Do not modify the variables in this group.
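The NodeId constraints above can be sanity-checked with a short sketch. The host-to-NodeId assignments below are hypothetical examples in the spirit of the boilerplate, not prescribed values:

```python
# Hypothetical NodeId assignments per MySQL Cluster group.
mgm_nodes  = {"db-3": 49, "db-4": 50}            # Management: 49..255
data_nodes = {"db-5": 1, "db-6": 2,
              "db-7": 3, "db-8": 4}              # Data: 1..48
sql_nodes  = {"db-9": 56, "db-10": 57}           # SQL: 49..255

def check_range(nodes, lo, hi):
    """True if every NodeId in the group falls within [lo, hi]."""
    return all(lo <= nid <= hi for nid in nodes.values())

all_ids = (list(mgm_nodes.values()) + list(data_nodes.values())
           + list(sql_nodes.values()))

assert check_range(mgm_nodes, 49, 255)
assert check_range(data_nodes, 1, 48)
assert check_range(sql_nodes, 49, 255)
# NodeId values must be unique across the whole MySQL Cluster.
assert len(all_ids) == len(set(all_ids))
```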
Prerequisites

Prior to initiating the procedure steps, the Inventory Boilerplate should be copied to a system where it can be edited and saved for future use. Eventually, the completed hosts.ini file must be transferred to the OCCNE servers.

Table B-1 Procedure for OCCNE Inventory File Preparation

Step 1. OCCNE Cluster Name

To provide each OCCNE host with a unique FQDN, the first step in composing the OCCNE Inventory is to create an OCCNE Cluster domain suffix. The OCCNE Cluster domain suffix starts with a Top-level Domain (TLD). The structure of a TLD is maintained by various government and commercial authorities. Additional domain name levels help identify the cluster and are added to convey additional meaning. OCCNE suggests adding at least one "ad hoc" identifier, at least one "geographic" identifier, and at least one "organizational" identifier.

Geographic and organizational identifiers may be multiple levels deep.

An example OCCNE Cluster Name using the following identifiers is below:

  • Ad hoc Identifier: atlantic
  • Organizational Identifier: lab1
  • Organizational Identifier: research
  • Geographical Identifier (State of North Carolina): nc
  • Geographical Identifier (Country of United States): us
  • TLD: oracle.com

Example OCCNE Cluster name: atlantic.lab1.research.nc.us.oracle.com
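The example name can be reproduced mechanically: the identifier levels join most-specific first, ending with the TLD. This small sketch is illustrative only:

```python
# Identifier levels from the example above, most-specific first.
identifiers = ["atlantic", "lab1", "research", "nc", "us"]
tld = "oracle.com"

occne_cluster_name = ".".join(identifiers + [tld])
print(occne_cluster_name)  # atlantic.lab1.research.nc.us.oracle.com

# Each host FQDN prepends a host name to the cluster domain suffix.
host_fqdn = "k8s-1." + occne_cluster_name
```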

Step 2. Create host_hp_gen_10 and host_kernel_virtual group lists

Using the OCCNE Cluster domain suffix created above, fill out the inventory boilerplate with the list of hosts in the host_hp_gen_10 and host_kernel_virtual groups.

The recommended host name prefix for nodes in the host_hp_gen_10 group is "k8s-x", where x is a number from 1 to N. Kubernetes "master" and "worker" nodes should not be differentiated using the host name. The recommended host name prefix for nodes in the host_kernel_virtual group is "db-x", where x is a number from 1 to N. MySQL Cluster nodes should not be differentiated using host names.

Step 3. Edit occne:vars

Edit the values in the occne:vars group to reflect site specific data. Values in the occne:vars group are defined below:

  • occne_cluster_name: set to the OCCNE Cluster Name generated in step 1 above.
  • nfs_host: set to the IP of the bastion host.
  • nfs_path: set to the location of the NFS root created on the bastion host.
  • subnet_ipv4: set to the subnet of the network used to assign IPs for OCCNE hosts.
  • subnet_cidr: this variable appears to be unused and does not need to be included. If it is included, set it to the CIDR notation for the subnet, for example /24.
  • netmask: set appropriately for the network used to assign IPs for OCCNE hosts.
  • broadcast_address: set appropriately for the network used to assign IPs for OCCNE hosts.
  • default_route: set to the IP of the TOR switch.
  • next_server: set to the IP of the bastion host.
  • name_server: set to the IP of the bastion host.
  • ntp_server: set to the IP of the TOR switch.
  • http_proxy/https_proxy: set to the HTTP/HTTPS proxy URL.
  • occne_private_registry: set to the non-FQDN name of the Docker registry used by worker nodes to pull Docker images.

    Note: It is acceptable if this name is not in DNS, or if DNS is not available. The IP and port settings are used to configure this registry on each host, placing the name and IP in each host's /etc/hosts file, ensuring the name resolves to an IP.

  • occne_private_registry_address: set to the IP of the Docker registry above.
  • occne_private_registry_port: set to the port of the Docker registry above.
  • helm_stable_repo_url: set to the URL of the local Helm repo.
  • occne_helm_stable_repo_url: set to the URL of the local Helm repo.
  • occne_helm_images_repo: set to the URL where images referenced in Helm charts reside.
  • pxe_install_lights_out_usr: set to the user name configured for iLO admins on each host in the OCCNE Frame.
  • pxe_install_lights_out_passwd: set to the password configured for iLO admins on each host in the OCCNE Frame.
  • occne_k8s_binary_repo: set to the URL of the Kubernetes binary repo, composed of the bastion host IP and the configured port.
  • docker_rh_repo_base_url: set to the URL of the repo containing the Docker RPMs.
  • docker_rh_repo_gpgkey: set to the URL of the GPG key in the Docker yum repo.
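A quick way to catch omissions before installation is to diff the keys defined in [occne:vars] against the variables listed above. The required-variable set below is a representative subset, and the one-key=value-per-line parsing is a simplifying assumption for illustration:

```python
# Representative subset of the [occne:vars] keys described above.
REQUIRED = {
    "occne_cluster_name", "nfs_host", "nfs_path", "subnet_ipv4",
    "netmask", "broadcast_address", "default_route", "next_server",
    "name_server", "ntp_server", "occne_private_registry",
    "occne_private_registry_address", "occne_private_registry_port",
}

def missing_vars(vars_text):
    """Return the REQUIRED keys not defined in an [occne:vars] body."""
    defined = set()
    for line in vars_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            defined.add(line.split("=", 1)[0])
    return sorted(REQUIRED - defined)

sample = """
occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.10
"""
print(missing_vars(sample))  # every REQUIRED key except the two defined above
```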

OCCNE Inventory Boilerplate
################################################################################
#                                                                              #
# Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.        #
#                                                                              #
################################################################################

################################################################################
# EXAMPLE OCCNE Cluster hosts.ini file. Defines OCCNE deployment variables 
# and targets.  

################################################################################
# Definition of the host node local connection for Ansible control, 
# do not change
[local]
127.0.0.1 ansible_connection=local


################################################################################
# This is a list of all of the nodes in the targeted deployment system with the 
# IP address to use for Ansible control during deployment.
# For bare metal hosts, the IP of the ILO is used for driving reboots. 
# Host MAC addresses are used to identify nodes during the PXE-boot phase of 
# the os_install process.
# MAC addresses must be lowercase and delimited with a dash "-"

[host_hp_gen_10]
k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

[host_kernel_virtual]
db-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-8.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-9.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-10.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

###############################################################################
# Node grouping of which nodes are in the occne system
[occne:children]
host_hp_gen_10
host_kernel_virtual
k8s-cluster
data_store

###############################################################################
# Variables that define the OCCNE environment and specify target configuration.
[occne:vars]
occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.xx
nfs_path=/var/occne
subnet_ipv4=10.75.216.0
subnet_cidr=/25
netmask=255.255.255.128
broadcast_address=10.75.216.127
default_route=10.75.216.1
next_server=10.75.216.114
name_server='10.75.124.245,10.75.124.246'
ntp_server='10.75.124.245,10.75.124.246'
http_proxy=http://www-proxy.us.oracle.com:80
https_proxy=http://www-proxy.us.oracle.com:80
occne_private_registry=bastion-1
occne_private_registry_address='10.75.216.xx'
occne_private_registry_port=5000
metallb_peer_address=10.75.216.xx
metallb_default_pool_protocol=bgp
metallb_default_pool_addresses='10.75.xxx.xx/xx'
pxe_install_lights_out_usr=root
pxe_install_lights_out_passwd=TklcRoot
occne_k8s_binary_repo='http://bastion-1:8082/binaries'
helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_images_repo='bastion-1:5000'
docker_rh_repo_base_url=http://<bastion-1 IP addr>/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://<bastion-1 IP addr>/yum/centos/RPM-GPG-CENTOS

###############################################################################
# Node grouping of which nodes are in the occne data_store
[data_store]
db-1.foo.lab.us.oracle.com
db-2.foo.lab.us.oracle.com

###############################################################################
# Node grouping of which nodes are to be Kubernetes master nodes (must be at least 2)
[kube-master]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com 
k8s-3.foo.lab.us.oracle.com 

################################################################################
# Node grouping specifying which nodes host the Kubernetes etcd datastore.
# An odd number of etcd nodes is required.
[etcd]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com 
k8s-3.foo.lab.us.oracle.com 

################################################################################
# Node grouping specifying which nodes are Kubernetes worker nodes.
# A minimum of two worker nodes is required.
[kube-node]
k8s-4.foo.lab.us.oracle.com 
k8s-5.foo.lab.us.oracle.com 
k8s-6.foo.lab.us.oracle.com 
k8s-7.foo.lab.us.oracle.com 

# Node grouping of which nodes are to be in the OC-CNE Kubernetes cluster
[k8s-cluster:children]
kube-node
kube-master

################################################################################
# The following node groupings are for MySQL NDB cluster 
# installation under control of MySQL Cluster Manager

################################################################################
# NodeId must be unique across the MySQL cluster; each node is assigned its 
# own NodeId, and this id controls which data nodes belong to which node 
# groups. For Management nodes, NodeId must be between 49 and 255 and must 
# be unique within the MySQL cluster.
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50

###############################################################################
# For data nodes, NodeId must be between 1 and 48; NodeId is used to 
# group the data nodes among different node groups.
[mysqlndb_data_nodes]
db-5.foo.lab.us.oracle.com NodeId=1
db-6.foo.lab.us.oracle.com NodeId=2
db-7.foo.lab.us.oracle.com NodeId=3
db-8.foo.lab.us.oracle.com NodeId=4

################################################################################
# For SQL nodes, NodeId must be between 49 and 255 and must be unique 
# within the MySQL cluster.
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57

################################################################################
# Node grouping of all of the nodes involved in the MySQL cluster
[mysqlndb_all_nodes:children]
mysqlndb_mgm_nodes
mysqlndb_data_nodes
mysqlndb_sql_nodes

################################################################################
# MCM and NDB cluster variables can be defined here to override the values.
[mysqlndb_all_nodes:vars]
occne_mysqlndb_NoOfReplicas=2
occne_mysqlndb_DataMemory=12G