Inventory File Preparation
Introduction
OCCNE Installation automation uses information within an OCCNE Inventory file to provision servers and virtual machines, install cloud native components, as well as configure all of the components within the cluster such that they constitute a cluster conformant to the OCCNE platform specifications. To assist with the creation of the OCCNE Inventory, a boilerplate OCCNE Inventory is provided. The boilerplate inventory file requires the input of site-specific information.
This section outlines the procedure for taking the OCCNE Inventory boilerplate and creating a site specific OCCNE Inventory file usable by the OCCNE Install Procedures.
Inventory File Overview
The inventory file is an INI (Initialization) formatted file. The basic elements of an inventory file are hosts, properties, and groups.
- A host is identified by its Fully Qualified Domain Name (FQDN).
- Properties are defined as key=value pairs.
- A property applies to a specific host when it appears on the same line as the host.
- Square brackets define group names. For example, [host_hp_gen_10] defines the group of physical HP Gen10 machines. There is no explicit "end of group" delimiter; a group definition ends at the next group declaration or at the end of the file. Groups cannot be nested.
- A property applies to an entire group when it is defined under a group heading on its own line, not on the same line as a host.
- Groups of groups are formed using the children keyword. For example, [occne:children] creates an occne group comprised of several other groups.
- Inline comments are not allowed.
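For illustration only, the following minimal sketch (hypothetical host name, addresses, and values) shows each of these elements: a host line with per-host properties, a group-level property on its own line, and a group of groups:

```
# Group of physical hosts; the properties on the host line apply only to that host.
[host_hp_gen_10]
k8s-1.atlantic.lab1.research.nc.us.oracle.com ansible_host=10.75.216.10 ilo=10.75.217.10 mac=aa-bb-cc-dd-ee-01

# A property on its own line under a group heading applies to every host in the group.
[occne:vars]
ntp_server=10.75.216.1

# The children keyword forms a group of groups.
[occne:children]
host_hp_gen_10
```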
The OCCNE Inventory file is composed of several groups:
Table B-1 Base Groups
Num | Group Name | Description/Comments |
---|---|---|
1. | host_hp_gen_10 | The list of all physical hosts in the OCCNE cluster. Each host in this group must also have several properties defined (outlined below). |
2. | host_kernel_virtual | The list of all virtual hosts in the OCCNE cluster. Each host in this group must have the same properties defined as above, with the exception of the ilo property. |
3. | occne:children | Do not modify the children of the occne group. |
4. | occne:vars | A list of variables representing configurable site-specific data. While some variables are optional, the ones listed in the boilerplate should be defined with valid values. If a given site does not have applicable data for a variable, consult the OCCNE installation or engineering team. Individual variable values are explained in subsequent sections. |
5. | data_store | The list of Storage Hosts. |
6. | kube-master | The list of Master Node hosts where the Kubernetes master components run. |
7. | etcd | The list of hosts that compose the etcd server. This should always be an odd number of hosts and is set to the same list of nodes as the kube-master group. |
8. | kube-node | The list of Worker Nodes. Worker Nodes are where Kubernetes pods run and should be comprised of the bladed hosts. |
9. | k8s-cluster:children | Do not modify the children of the k8s-cluster group. |
10. | bastion_hosts | The list of Bastion Hosts. |
11. | skip_kernel_virtual | The list of Virtual Machines to skip while creating the other Virtual Machines specified in the host_kernel_virtual group. For example, if bastion-2.foo.lab.us.oracle.com has already been created, list it here so that only the bastion-1 VM is created (see the example after this table). |

For example, to skip an existing bastion-2 VM:

```
[host_kernel_virtual]
bastion-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx host_hp_gen_blade=db-1.foo.lab.us.oracle.com ilo_ansible_host=10.75.xxx.xx oam_ansible_host=10.75.xxx.xx
bastion-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx host_hp_gen_blade=db-2.foo.lab.us.oracle.com ilo_ansible_host=10.75.xxx.xx oam_ansible_host=10.75.xxx.xx

[skip_kernel_virtual]
bastion-2.foo.lab.us.oracle.com
```
Data Tier Groups
The MySQL service is comprised of several nodes running on virtual machines on the RMS hosts. This collection of hosts is referred to as the MySQL Cluster. Each host in the MySQL Cluster requires a NodeId parameter whose value is unique across the MySQL Cluster. Additional parameter range limitations are outlined below.
Table B-2 Data Tier Groups
Num | Group Name | Description/Comments |
---|---|---|
1. | mysqlndb_mgm_nodes | The list of MySQL Management nodes. In OCCNE 1.2, this group consists of three virtual machines distributed equally among the kube-master nodes. These nodes must have a NodeId parameter defined. |
2. | mysqlndb_data_nodes_ng0 | The list of MySQL Data nodes in node group 0. In OCCNE 1.2, this group consists of two virtual machines distributed equally among the Storage Hosts; each VM in this group should belong to a different Storage Host. Requires a NodeId parameter; for example, NodeId values of 1 and 2 (see the example after this table). |
3. | mysqlndb_data_nodes_ng1 | The list of MySQL Data nodes in node group 1. In OCCNE 1.2, this group consists of two virtual machines distributed equally among the Storage Hosts; each VM in this group should belong to a different Storage Host. Requires a NodeId parameter; for example, NodeId values of 3 and 4 (see the example after this table). |
4. | mysqlndb_data_nodes | The list of MySQL Data node groups. In OCCNE 1.2, this group consists of two groups, each consisting of two virtual machines distributed equally among the Storage Hosts. |
5. | mysqlndb_sql_nodes | The list of MySQL SQL nodes. In OCCNE 1.2, this group consists of two virtual machines distributed equally among the Storage Hosts. Requires a NodeId parameter. |
6. | mysqlndb_all_nodes:children | Do not modify the children of the mysqlndb_all_nodes group. |
7. | mysqlndb_all_nodes:vars | A list of variables representing configurable site-specific data. While some variables are optional, the ones listed in the boilerplate should be defined with valid values. If a given site does not have applicable data for a variable, consult the OCCNE installation or engineering team. Individual variable values are explained in subsequent sections. |

For example, NodeId assignments for the two data node groups:

```
[mysqlndb_data_nodes_ng0]
db-6.foo.lab.us.oracle.com NodeId=1
db-7.foo.lab.us.oracle.com NodeId=2

[mysqlndb_data_nodes_ng1]
db-8.foo.lab.us.oracle.com NodeId=3
db-9.foo.lab.us.oracle.com NodeId=4
```
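For reference, a minimal sketch of NodeId assignments for the management and SQL node groups, using the ranges noted in the inventory boilerplate comments (NodeId between 49 and 255 for management and SQL nodes, unique within the MySQL Cluster); host names follow the example boilerplate:

```
# Management nodes: NodeId in the range 49-255, unique within the MySQL Cluster
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50

# SQL nodes: NodeId also in the range 49-255, unique within the MySQL Cluster
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57
```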
Procedure Steps
Prior to initiating the procedure steps, the Inventory Boilerplate should be copied to a system where it can be edited and saved for future use. Eventually, the completed hosts.ini file must be transferred to the OCCNE servers.
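A minimal sketch of the transfer step, assuming the edited file is saved locally as hosts.ini and that the Bastion Host is reachable over SSH (the user name and destination path below are illustrative, not prescribed by this procedure):

```
# Copy the completed inventory file to the Bastion Host (illustrative user and path)
scp hosts.ini <user>@bastion-1.foo.lab.us.oracle.com:/var/occne/hosts.ini
```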
Table B-3 Procedure for OCCNE Inventory File Preparation
Step # | Procedure | Description |
---|---|---|
1. | OCCNE Cluster Name | In order to provide each OCCNE host with a unique FQDN, the first step in composing the OCCNE Inventory is to create an OCCNE Cluster domain suffix. The OCCNE Cluster domain suffix starts with a Top-Level Domain (TLD). The structure of a TLD is maintained by various government and commercial authorities. Additional domain name levels help identify the cluster and are added to convey additional meaning. OCCNE suggests adding at least one "ad hoc" identifier and at least one "geographic" and "organizational" identifier. Geographic and organizational identifiers may be multiple levels deep. An example OCCNE Cluster Name using these identifiers: atlantic.lab1.research.nc.us.oracle.com |
2. | Create host_hp_gen_10 and host_kernel_virtual group lists | Using the OCCNE Cluster domain suffix created above, fill out the inventory boilerplate with the list of hosts in the host_hp_gen_10 and host_kernel_virtual groups. The recommended host name prefix for nodes in the host_hp_gen_10 group is "k8s-x", where x is a number from 1 to N. Kubernetes master and worker nodes should not be differentiated using the host name. The recommended host name prefix for nodes in the host_kernel_virtual group is "db-x", where x is a number from 1 to N. MySQL Cluster nodes should not be differentiated using host names. A sketch of the recommended naming follows this table. |
3. | Edit occne:vars | Edit the values in the occne:vars group to reflect site-specific data. Values in the occne:vars group are defined in Table B-4. |
4. | Edit mysqlndb_all_nodes:vars | Edit the values in the mysqlndb_all_nodes:vars group (for example, occne_mysqlndb_NoOfReplicas and occne_mysqlndb_DataMemory) to reflect site-specific data. |
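As referenced in step 2, the following sketch shows the recommended host naming; the addresses and MAC values are placeholders, and the property keys match those used in the inventory boilerplate:

```
[host_hp_gen_10]
k8s-1.atlantic.lab1.research.nc.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.atlantic.lab1.research.nc.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

[host_kernel_virtual]
db-3.atlantic.lab1.research.nc.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
```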
Table B-4 occne:vars
Num | Var Name | Description |
---|---|---|
1 | occne_cluster_name | Set to the OCCNE Cluster Name generated in step 1 of Table B-3 above. |
2 | nfs_host | Set to the IP of the bastion host. |
3 | nfs_path | Set to the location of the nfs root created on the bastion host. |
4 | subnet_ipv4 | Set to the subnet of the network used to assign IPs for OCCNE hosts. |
5 | subnet_cidr | This variable appears to be unused and does not need to be included. If it is included, set it to the CIDR notation for the subnet, for example /24. |
6 | netmask | Set appropriately for the network used to assign IPs for OCCNE hosts. |
7 | broadcast_address | Set appropriately for the network used to assign IPs for OCCNE hosts. |
8 | default_route | Set to the IP of the TOR switch. |
9 | next_server | Set to the IP of the bastion host. |
10 | name_server | Set to the IP of the bastion host. |
11 | ntp_server | Set to the IP of the TOR switch. |
12 | http_proxy | Set the http proxy. |
13 | https_proxy | Set the https proxy. |
14 | occne_private_registry | Set to the non-FQDN name of the Docker registry that the worker nodes pull Docker images from. NOTE: It is acceptable if this name is not in DNS, or if DNS is not available. The IP and port settings are used to configure this registry on each host, placing the name and IP in each host's /etc/hosts file so that the name resolves to an IP. |
15 | occne_private_registry_address | Set to the IP of the Docker registry above. |
16 | occne_private_registry_port | Set to the port of the Docker registry above. |
17 | metallb_peer_address | Not used |
18 | metallb_default_pool_protocol | Not used |
19 | metallb_default_pool_addresses | Not used |
20 | pxe_install_lights_out_usr | Set to the user name configured for iLO admins on each host in the OCCNE Frame. |
21 | pxe_install_lights_out_passwd | Set to the password configured for iLO admins on each host in the OCCNE Frame. |
22 | occne_k8s_binary_repo | Set to the internal IP of bastion-1 and the configured port. |
23 | helm_stable_repo_url | Set to the url of the local helm repo. |
24 | occne_helm_stable_repo_url | Set to the url of the local helm repo. |
25 | occne_helm_images_repo | Set to the url where images referenced in helm charts reside. |
26 | docker_rh_repo_base_url | Set to the URL of the repo containing the docker RPMs. |
27 | docker_rh_repo_gpgkey | Set to the URL of the gpgkey in the docker yum repo. |
28 | ilo_vlan_id | Set to the VLAN ID of the iLO network, for example 2. |
29 | ilo_subnet_ipv4 | Set to the subnet of the iLO network used to assign IPs for Storage Hosts. |
30 | ilo_subnet_cidr | Set to the CIDR notation for the iLO subnet, for example 24. |
31 | ilo_netmask | Set appropriately for the network used to assign iLO IPs for Storage Hosts. |
32 | ilo_broadcast_address | Set appropriately for the network used to assign iLO IPs for OCCNE hosts. |
33 | ilo_default_route | Set to the iLO VIP of the TOR switch. |
34 | mgmt_vlan_id | Set to the VLAN ID of the Management network, for example 4. |
35 | mgmt_subnet_ipv4 | Set to the subnet of the Management network used to assign IPs for Storage Hosts. |
36 | mgmt_subnet_cidr | Set to the CIDR notation for the Management subnet, for example 29. |
37 | mgmt_netmask | Set appropriately for the network used to assign Management IPs for Storage Hosts. |
38 | mgmt_broadcast_address | Set appropriately for the network used to assign Management IPs for Storage Hosts. |
39 | mgmt_default_route | Set to the Management VIP of the TOR switch. |
40 | signal_vlan_id | Set to the VLAN ID of the Signaling network, for example 5. |
41 | signal_subnet_ipv4 | Set to the subnet of the Signaling network used to assign IPs for Storage Hosts. |
42 | signal_subnet_cidr | Set to the CIDR notation for the Signaling subnet, for example 29. |
43 | signal_netmask | Set appropriately for the network used to assign Signaling IPs for Storage Hosts and MySQL SQL Node VMs. |
44 | signal_broadcast_address | Set appropriately for the network used to assign Signaling IPs for Storage Hosts and MySQL SQL Node VMs. |
45 | signal_default_route | Set to the Signaling VIP of the TOR switch. |
46 | mysql_bastion_node_ram | Size of the RAM, in MB, assigned to the Bastion Hosts, for example 8192. |
47 | mysql_bastion_node_vcpus | Number of vCPUs assigned to the Bastion Hosts, for example 4. |
48 | mysql_bastion_node_disk_size | Size of the disk, in GB, assigned to the Bastion Hosts, for example 300. |
49 | mysql_mgm_node_ram | Size of the RAM, in MB, assigned to the MySQL Management (MGM) Nodes, for example 8192. |
50 | mysql_mgm_node_vcpus | Number of vCPUs assigned to the MySQL Management (MGM) Nodes, for example 8. |
51 | mysql_mgm_node_disk_size | Size of the disk, in GB, assigned to the MySQL Management (MGM) Nodes, for example 200. |
52 | mysql_data_node_ram | Size of the RAM, in MB, assigned to the MySQL Data Nodes, for example 32768. |
53 | mysql_data_node_vcpus | Number of vCPUs assigned to the MySQL Data Nodes, for example 12. |
54 | mysql_data_node_disk_size | Size of the disk, in GB, assigned to the MySQL Data Nodes, for example 600. |
55 | mysql_sql_node_ram | Size of the RAM, in MB, assigned to the MySQL SQL Nodes, for example 16384. |
56 | mysql_sql_node_vcpus | Number of vCPUs assigned to the MySQL SQL Nodes, for example 8. |
57 | mysql_sql_node_disk_size | Size of the disk, in GB, assigned to the MySQL SQL Nodes, for example 600. |
58 | occne_snmp_notifier_destination | Set to the address of the SNMP trap receiver, for example "127.0.0.1:162". |
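The VLAN, network, and VM sizing variables (rows 28 through 57) do not appear in the example occne:vars block of the boilerplate below, so the following sketch illustrates their form only; every value shown is a placeholder drawn from the examples in the table, not a recommended setting:

```
# Hypothetical iLO / Management / Signaling network settings (placeholder values)
ilo_vlan_id=2
ilo_subnet_ipv4=192.168.20.0
ilo_subnet_cidr=24
ilo_netmask=255.255.255.0
ilo_broadcast_address=192.168.20.255
ilo_default_route=192.168.20.1
mgmt_vlan_id=4
mgmt_subnet_cidr=29
signal_vlan_id=5
signal_subnet_cidr=29

# Hypothetical VM sizing values (RAM in MB, disk in GB)
mysql_bastion_node_ram=8192
mysql_bastion_node_vcpus=4
mysql_bastion_node_disk_size=300
mysql_data_node_ram=32768
mysql_data_node_vcpus=12
mysql_data_node_disk_size=600

# SNMP trap receiver address
occne_snmp_notifier_destination='127.0.0.1:162'
```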
OCCNE Inventory Boilerplate (example hosts.ini file):

```
################################################################################
#                                                                              #
#   Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.     #
#                                                                              #
################################################################################

################################################################################
# EXAMPLE OCCNE Cluster hosts.ini file. Defines OCCNE deployment variables
# and targets.
################################################################################

# Definition of the host node local connection for Ansible control,
# do not change
[local]
127.0.0.1 ansible_connection=local

################################################################################
# This is a list of all of the nodes in the targeted deployment system with the
# IP address to use for Ansible control during deployment.
# For bare metal hosts, the IP of the ILO is used for driving reboots.
# Host MAC addresses are used to identify nodes during the PXE-boot phase of
# the os_install process.
# MAC addresses must be lowercase and delimited with a dash "-"
[host_hp_gen_10]
k8s-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
k8s-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-1.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-2.foo.lab.us.oracle.com ansible_host=10.75.216.xx ilo=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

[host_kernel_virtual]
db-3.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-4.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-5.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-6.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-7.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-8.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-9.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx
db-10.foo.lab.us.oracle.com ansible_host=10.75.216.xx mac=xx-xx-xx-xx-xx-xx

################################################################################
# Node grouping of which nodes are in the occne system
[occne:children]
host_hp_gen_10
host_kernel_virtual
k8s-cluster
data_store

################################################################################
# Variables that define the OCCNE environment and specify target configuration.
[occne:vars]
occne_cluster_name=foo.lab.us.oracle.com
nfs_host=10.75.216.xx
nfs_path=/var/occne
subnet_ipv4=10.75.216.0
subnet_cidr=/25
netmask=255.255.255.128
broadcast_address=10.75.216.127
default_route=10.75.216.1
next_server=10.75.216.114
name_server='10.75.124.245,10.75.124.246'
ntp_server='10.75.124.245,10.75.124.246'
http_proxy=http://www-proxy.us.oracle.com:80
https_proxy=http://www-proxy.us.oracle.com:80
occne_private_registry=bastion-1
occne_private_registry_address='10.75.216.xx'
occne_private_registry_port=5000
metallb_peer_address=10.75.216.xx
metallb_default_pool_protocol=bgp
metallb_default_pool_addresses='10.75.xxx.xx/xx'
pxe_install_lights_out_usr=root
pxe_install_lights_out_passwd=TklcRoot
occne_k8s_binary_repo='http://bastion-1:8082/binaries'
helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_stable_repo_url='http://<bastion-1 IP addr>:<port>/charts/stable/'
occne_helm_images_repo='bastion-1:5000/'
docker_rh_repo_base_url=http://<bastion-1 IP addr>/yum/centos/7/updates/x86_64/
docker_rh_repo_gpgkey=http://<bastion-1 IP addr>/yum/centos/RPM-GPG-CENTOS

################################################################################
# Node grouping of which nodes are in the occne data_store
[data_store]
db-1.foo.lab.us.oracle.com
db-2.foo.lab.us.oracle.com

################################################################################
# Node grouping of which nodes are to be Kubernetes master nodes
# (must be at least 2)
[kube-master]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com

################################################################################
# Node grouping specifying which nodes are Kubernetes etcd data.
# An odd number of etcd nodes is required.
[etcd]
k8s-1.foo.lab.us.oracle.com
k8s-2.foo.lab.us.oracle.com
k8s-3.foo.lab.us.oracle.com

################################################################################
# Node grouping specifying which nodes are Kubernetes worker nodes.
# A minimum of two worker nodes is required.
[kube-node]
k8s-4.foo.lab.us.oracle.com
k8s-5.foo.lab.us.oracle.com
k8s-6.foo.lab.us.oracle.com
k8s-7.foo.lab.us.oracle.com

# Node grouping of which nodes are to be in the OC-CNE Kubernetes cluster
[k8s-cluster:children]
kube-node
kube-master

################################################################################
# The following node groupings are for MySQL NDB cluster
# installation under control of MySQL Cluster Manager
################################################################################
# NodeId should be unique across the cluster; each node should be assigned a
# unique NodeId. This id controls which data nodes belong to which node groups.
# For Management nodes, NodeId should be between 49 and 255 and must be unique
# within the MySQL cluster.
[mysqlndb_mgm_nodes]
db-3.foo.lab.us.oracle.com NodeId=49
db-4.foo.lab.us.oracle.com NodeId=50

################################################################################
# For data nodes, NodeId should be between 1 and 48; NodeId is used to group
# the data nodes among different node groups.
[mysqlndb_data_nodes]
db-5.foo.lab.us.oracle.com NodeId=1
db-6.foo.lab.us.oracle.com NodeId=2
db-7.foo.lab.us.oracle.com NodeId=3
db-8.foo.lab.us.oracle.com NodeId=4

################################################################################
# For SQL nodes, NodeId should be between 49 and 255 and must be unique within
# the MySQL cluster.
[mysqlndb_sql_nodes]
db-9.foo.lab.us.oracle.com NodeId=56
db-10.foo.lab.us.oracle.com NodeId=57

################################################################################
# Node grouping of all of the nodes involved in the MySQL cluster
[mysqlndb_all_nodes:children]
mysqlndb_mgm_nodes
mysqlndb_data_nodes
mysqlndb_sql_nodes

################################################################################
# MCM and NDB cluster variables can be defined here to override the values.
[mysqlndb_all_nodes:vars]
occne_mysqlndb_NoOfReplicas=2
occne_mysqlndb_DataMemory=12G
```
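As a final sanity check (a sketch, assuming Ansible is installed on the system where the file was edited), the completed inventory can be parsed and its group structure inspected before it is used by the OCCNE Install Procedures:

```
# Verify that hosts.ini parses and that the group hierarchy is as expected
ansible-inventory -i hosts.ini --graph

# Dump the full inventory, including group variables, as JSON
ansible-inventory -i hosts.ini --list
```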